the thomas fermi ( tf ) equation has proved useful for the treatment of many physical phenomena that include atoms , molecules , atoms in strong magnetic fields , crystals and dense plasmas among others .for that reason there has been great interest in the accurate solution of that equation , and , in particular , in the accurate calculation of the slope at origin . besides, the mathematical aspects of the tf equation have been studied in detail .some time ago liao proposed the application of a technique called homotopy analysis method ( ham ) to the solution of the tf equation and stated that `` it is the first time such an elegant and explicit analytic solution of the thomas fermi equation is given '' .this claim is surprising because at first sight earlier analytical approaches are apparently simpler and seem to have produced much more accurate results .recently , khan and xu improved liao s ham by the addition of adjustable parameters that improve the convergence of the perturbation series .the purpose of this paper is to compare the improved ham with a straightforward analytical procedure based on pad approximants supplemented with a method developed some time ago . in section [ sec : ham ]we outline the main ideas of the ham , in section [ sec : hpm ] apply the hankel pad method ( hpm ) to the tf equation , and in section [ sec : conclusions ] we compare the ham with the hpm and with other approaches .in order to facilitate later discussion we outline the main ideas behind the application of the ham to the tf equation .the tf equation is an example of two point nonlinear boundary value problem . when solving this ordinary differential equation one faces problem of the accurate calculation of the slope at origin that is consistent with the physical boundary conditions indicated in equation ( [ eq : tf ] ) . in what followswe choose the notation of khan and xu whose approach is more general than the one proposed earlier by liao .they define the new solution , where and rewrite the tf equation as where is the inverse of the slope at origin ( ) and is an adjustable parameter .khan and xu state that the solution to eq .( [ eq : tf2 ] ) can be written in the form that reduces to liao s expansion when . in principlethere is no reason to assume that the series ( [ eq : g_series ] ) converges and no proof is given in that sense . besides , the partial sums of the series ( [ eq : g_series ] ) will not give the correct asymptotic behaviour at infinity as other expansions do .liao and kahn and xu do not use the ansatz ( [ eq : g_series ] ) directly to solve the problem but resort to perturbation theory .for example ,kahn and xu base their approach on the modified equation = q\hbar \mathcal{n}% \left [ \phi ( \xi ; q),\gamma ( q)\right ] \label{eq : ham}\ ] ] where and are linear and nonlinear operators , respectively , is a perturbation parameter and is another adjustable parameter . 
besides , is a conveniently chosen initial function and becomes the solution to equation ( [ eq : tf2 ] ) when .both and are expanded in a taylor series about as in standard perturbation theory , and is another adjustable parameter .the authors state that ham is a very flexible approach that enables one to choose the linear operator and the initial solution freely and also to introduce several adjustable parameters .however , one is surprised that with so many adjustable parameters the results are far from impressive , even at remarkable great perturbation orders .for example the ] pad approximant of the expansion provides slightly better results . a more convenient expansion of the solution of the tf equation leads to many more accurate digits with less terms .in what follows we outline a simple , straightforward analytical method for the accurate calculation of . in order to facilitate the application of the hpm we define the variables and ,so that the tf equation becomes -f(t)f^{\prime } ( t)-2t^{2}f(t)^{3}=0 \label{eq : tf3}\ ] ] we expand the solution to this differential equation in a taylor series about : where the coefficients depend on . on substitution of the series ( [ eq : f_series ] ) into equation ( [ eq : tf3 ] ) we easily calculate as many coefficients as desired ; for example , the first of them are the hpm is based on the transformation of the power series ( [ eq : f_series ] ) into a rational function or pad approximant (t)=\frac{\sum_{j=0}^{m}a_{j}t^{j}}{\sum_{j=0}^{n}b_{j}t^{j } } \label{eq:[m / n]}\ ] ] one would expect that in order to have the correct limit at infinity ; however , in order to obtain an accurate value of it is more convenient to choose , as in previous applications of the approach to the schrdinger equation ( in this case it was called riccati pad method ( rpm)) .the rational function ( [ eq:[m / n ] ] ) has coefficients that we may choose so that ,t)=\mathcal{o}(t^{2n+d+1}) ] we have another equation from which we obtain . however , it is convenient to proceed in a different ( and entirely equivalent ) way and require that (t)-\sum_{j=0}^{2n+d+1}f_{j}t^{j}=\mathcal{o}(t^{2n+d+2 } ) \label{eq:[m / n]2}\ ] ] in order to satisfy this condition it is necessary that the hankel determinant vanishes where is the dimension of the hankel matrix .each hankel determinant is a polynomial function of and we expect that there is a sequence of roots } ] that converges towards the actual value of .[ fig : logconv ] shows }-2f_{2}^{[d-1,d]}\right| ] . from the sequences for we estimate which we believe is accurate to the last digit .we are not aware of a result of such accuracy in the literature with which we can compare our estimate .it is certainly far more accurate than the result obtained by kobayashi et al by numerical integration that is commonly chosen as a benchmark .present rational approximation to the tf function is completely different from previous application of the pad approximants , where the slope at origin was determined by the asymptotic behaviour of at infinity .our approach applies to and the slope at origin is determined by a local condition at that point ( [ eq:[m / n]2 ] ) which results in the hankel determinant ( [ eq : hankel ] ) . in this senseour approach is similar to ( although more systematic and consistent than ) tu s one as mentioned above .once we have the slope at origin we easily obtain an analytical expression for in terms of the rational approximation ( [ eq:[m / n ] ] ) to . 
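The critical slope quoted above can be checked independently of the Hankel-Padé construction. The sketch below (an illustration under stated assumptions, not the paper's method) brackets the initial slope of the Thomas-Fermi problem u''(x) = u(x)^{3/2}/sqrt(x), u(0) = 1, u(inf) = 0 by shooting: a trial slope that is too negative drives u through zero at finite x, one that is not negative enough makes u grow, and bisection homes in on a value close to -1.588071. The starting offset, integration horizon, tolerances and blow-up threshold are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def shoot(slope, x0=1e-6, x_max=60.0):
    # start slightly off the singular origin using u ~ 1 + s*x + (4/3)*x**1.5
    u0 = 1.0 + slope * x0 + (4.0 / 3.0) * x0 ** 1.5
    du0 = slope + 2.0 * x0 ** 0.5

    def rhs(x, y):
        u, du = y
        return [du, max(u, 0.0) ** 1.5 / np.sqrt(x)]

    hit_zero = lambda x, y: y[0]          # slope too negative: u crosses zero
    hit_zero.terminal, hit_zero.direction = True, -1
    blow_up = lambda x, y: y[0] - 10.0    # slope not negative enough: u grows
    blow_up.terminal, blow_up.direction = True, 1
    return solve_ivp(rhs, (x0, x_max), [u0, du0], events=[hit_zero, blow_up],
                     rtol=1e-10, atol=1e-12)

lo, hi = -1.7, -1.5                       # bracket for the critical slope
for _ in range(50):
    mid = 0.5 * (lo + hi)
    sol = shoot(mid)
    if sol.t_events[0].size:              # u hit zero: mid is too negative
        lo = mid
    else:                                 # u survived or blew up: not negative enough
        hi = mid
print("slope at the origin ~", 0.5 * (lo + hi))   # roughly -1.588071
```

Near convergence the discrimination is limited by the finite integration horizon, so the last printed digits should not be trusted; the HPM sequences described above are what give the many-digit estimate.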
in order to have the correct behaviour at infinity we choose .table [ tab : u(x ) ] shows values of and its first derivative for ( the approximation is obviously much better for ) given by the approximant ] pad approximant on the straightforward series expansion ( [ eq : f_series ] ) with ] approximants on an elaborated perturbation series .any accurate analytical expression of the solution to the tf equation requires an accurate value of the unknown slope at origin , and the hpm provides it in a simple and straightforward way . in this sensethe hpm appears to be preferable to other accurate approaches and is far superior to the ham .notice for example that our estimate }=-1.588 ] , is better than the result provided by a ] from the series ( [ eq : f_series ] ) and obtained the tf function and its derivative with an accuracy that outperforms the ] pad approximants on the ham perturbation series .it is clear that the hpm is by far simpler , more straightforward , and much more accurate than the ham .in addition to the physical utility of the hpm we think that its mathematical features are most interesting .although we can not provide a rigorous proof of the existence of a convergent sequence of roots for each nonlinear problem , or that the sequences will converge towards the correct physical value of the unknown , a great number of successful applications to the schrdinger equation suggest that the hpm is worth further investigation .notice that we obtain a global property of the tf equation from a local approach : the series expansion about the origin ( [ eq : f_series ] ) .the fact that our original rational approximation ( [ eq:[m / n ] ] ) does not have the correct behaviour at infinity is not at all a problem because we may resort to a more conventient expansion once we have an accurate value of the unknown slope at origin .d .. 2d .. 12d .. 12 & & + 1 & 0.424008 & 0.273989 + 5 & 0.078808 & 0.023560 + 10 & 0.024315 & 0.0046028 + 20 & 0.005786 & 0.00064727 + 30 & 0.002257 & 0.00018069 + 40 & 0.001114 & 0.00006969 + 50 & 0.000633 & 0.00003251 + 60 & 0.000394 & 0.0000172 + 70 & 0.0002626&0.000009964 + 80 & 0.0001838 & 0.000006172 + 90 & 0.0001338 & 0.000004029 + 100&0.0001005 & 0.000002743 + 1000&0.000000137&0.00000000040 +
We discuss a recently proposed analytic solution to the Thomas-Fermi (TF) equation and show that earlier approaches provide more accurate results. In particular, we show that a simple and straightforward rational approximation to the TF equation yields the slope at the origin with unprecedented accuracy, as well as remarkably accurate values of the TF function and its first derivative at other coordinate values.
visual localization / tracking plays a central role for many applications like intelligent video surveillance , smart transportation monitoring systems . localization and tracking algorithms aim to find the most similar region to the target in an image .recently , kernel - based tracking algorithms have attracted much attention as an alternative to particle filtering trackers .one of the most crucial difficulties in robust tracking is the construction of representation models ( likelihood models in bayesian filtering trackers ) that can accommodate illumination variations , deformable appearance changes , partial occlusions , .most current tracking algorithms use a single static template image to construct a target representation based on density models . for both kernel - based trackers and particle filtering trackers ,a popular method is to exploit color distributions in simple regions ( region - wise density models ) .generally semi - parametric kernel density estimation techniques are adopted .however , it is difficult to update this target model , and the target representation s fragility usually breaks these trackers over a long image sequence . considerable effort has been expended to ease these difficulties .we believe that the key to finding a solution is to find the right representation . in order to accommodate appearance changes ,the representation model should be learned from as many training examples as possible .fundamentally two methods , namely on - line and off - line learning , can be used for the training procedure .on - line learning means constantly updating the representation model during the course of tracking . proposes an incremental eigenvector update strategy to adapt the target representation model .a linear probabilistic principal component analysis model is used .the main disadvantage of the eigen - model is that it is not generic and is usually only suitable for characterizing texture - rich objects . in wavelet model is updated using the expectation maximization ( em ) algorithm .a classification function is progressively learned using adaboost for visual detection and tracking in and respectively . adopts pixel - wise gaussian mixture models ( gmms ) to represent the target model and sequentially update them . to date , however , less work has been reported on how to elegantly update _ region - wise density _ models in tracking .in contrast , classification is a powerful bottom - up procedure : it is trained off - line and works on - line . due to the trainingbeing typically built on very large amounts of training data , its performance is fairly promising even without on - line updating of the classifier / detector .inspired by image classification tasks with color density features and real - time detection , we learn off - line a density representation model from multiple training data . by considering tracking as a binary classification problem , a discriminative classification rule is learned to distinguish between the tracked object and background patterns . 
in this waya robust object representation model is obtained .this proposal provides a basis for considering the design of enhanced kernel - based trackers using robust kernel object representations .a by - product of the training is the classification function , with which the tracking problem is cast into a binary classification problem .an object detector directly using the classification function is then available .combining a detector into the tracker makes the tracker more robust and provides the capabilities of automatic initialization and recovery from momentary tracking failures . in theory ,many classifiers can be used to achieve our goal . in this paperwe show that the popular kernel based non - linear support vector machine ( svm ) well fits the kernel - based tracking framework . within this frameworkthe traditional kernel object trackers proposed in and can be expressed as special cases . because we use probabilistic density features , the learning process is closely related to probabilistic kernels based svms .it is imperative to minimize computational costs for real - time applications such as tracking .a desirable property of the proposed algorithm is that the computational complexity is independent of the number of support vectors .furthermore we empirically demonstrate that our algorithm requires fewer iterations to achieve convergence .our approach differs from although both use the svm classification score as the cost function . in , avidan builds a tracker along the line of standard optical flow tracking . only the homogeneous quadratic polynomial kernel ( or kernels with a similar quadratic structure ) can be used in order to derive a closed - form solution .this restriction prevents one using a more appropriate kernel obtained by model selection .an advantage of is that it can be used consistently with the optical flow tracking , albeit only gray pixel information can be used .moreover , the optimization procedure of our approach is inspired by the kernel - based object tracking paradigm .hence extended work such as is also applicable here , which enables us to find the global optimum .if joint spatial - feature density is used to train an svm , a fixed - point optimization method may also be derived that is similar to .the classification function of the svm trained for vehicle recognition is not smooth spatial mis - registration ( see fig . 1 in ) .we employ a spatial kernel to smooth the cost function when computing the histogram feature . in this way ,gradient based optimization methods can be used . using statistical learning theory, we devise an object tracker that is consistent with ms tracking .the ms tracker is initially derived from kernel density estimation ( kde ) .our work sheds some light on the connection between svm and kde .another important part of our tracker is its on - line re - training in parallel with tracking .continuous updating of the representation model can capture changes of the target appearance / backgrounds .previous work such as has demonstrated the importance of this on - line update during the course of tracking .the incremental svm technique meets this end , which efficiently updates a trained svm function whenever a sample is added to or removed from the training set . 
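As a concrete, heavily hedged illustration of this kind of on-line update, the sketch below uses scikit-learn's SGDClassifier (hinge loss, stochastic gradient steps) as a stand-in for the on-line SVM referred to here; it is not the specific algorithm the paper adopts. Features are square-rooted histograms, anticipating the explicit Bhattacharyya-kernel feature map discussed later, and all names are placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# stand-in for the on-line SVM: a linear hinge-loss classifier updated
# incrementally as new positive (tracked region) and negative (background)
# histograms arrive during tracking
clf = SGDClassifier(loss="hinge", alpha=1e-4)

def update_model(pos_hists, neg_hists, first_call=False):
    """Add newly tracked (positive) and background (negative) histograms."""
    X = np.sqrt(np.vstack([pos_hists, neg_hists]))     # explicit sqrt feature map
    y = np.r_[np.ones(len(pos_hists)), -np.ones(len(neg_hists))]
    if first_call:
        clf.partial_fit(X, y, classes=np.array([-1.0, 1.0]))
    else:
        clf.partial_fit(X, y)

def svm_score(hist):
    """Classification score of a candidate-region histogram."""
    return float(clf.decision_function(np.sqrt(hist).reshape(1, -1))[0])
```

During tracking, the tracked region's histogram would be passed as a positive example and a few background histograms as negatives, as the experimental section describes.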
for our proposed tracking framework , the target model can be learned in either batch svm training or on - line svm learning .we adopt a sophisticated on - line svm learning proposed in for its efficiency and simplicity .we address the crucial problem of adaptation , , the on - line learning of discriminant appearance model while avoiding drift .the main contributions of our work are to solve ms trackers two drawbacks : the template model can only be built from a single image ; and it is difficult to update the model . the solution is to extend the use of statistical learning algorithms for object localization and tracking .svm has been used for tracking by means of spatial perturbation of the svm .we exploit svm for tracking in a novel way ( along the line of ms tracking ) .the key ingredients of our approach are : * probabilistic kernel based svms are trained and incorporated into the framework of ms tracking . by carefully selecting the kernel ,we show that no extra computation is required compared with the conventional single - view ms tracking . * an on - linesvm can be used to adaptively update the target model .we demonstrate the benefit of on - line target model update . *we show that the annealed ms algorithm proposed in can be viewed as a special case of the continuation method under an appropriate interpretation . with the new interpretation , annealed mscan be extended to more general cases .extension and new discovers are discussed .an efficient localizer is built with global mode seeking techniques .* again , by exploiting the svm binary classifier , it is able to determine the scale of the target .an improved annealed ms - like algorithm with a cascade architecture is developed .it enables a more systematic and easier design of the annealing schedule , in contrast with _ ad hoc _ methods in previous work .the remainder of the paper is organized as follows . in [ sec : pre ] , the general theory of ms tracking and svm is reviewed for completeness .our proposed tracker is presented in [ sec : gkt ] .finally experimental results are reported in [ sec : exp ] .we conclude this work in [ sec : conclusion ] .for self - completeness , we review mean shift tracking , support vector machine and its on - line learning version in this section . mean shift ( ms ) tracking was firstly presented in . in ms tracking ,the object is represented by a square region which is cropped and normalized into a unit circle . by denoting as the color histogram of the target model , and as the target candidate color histogram with the center at ,the similarity function between and is ( when bhattacharyya divergence is used ) , here is the dissimilarity measurement . let be a region s pixel positions in image with the center at . in order to make the cost function smooth otherwise gradient based ms optimization can not be applied a kernel with profile is employed to assign smaller weights to those pixels farther from the center , considering the fact that the peripheral pixels are less reliable .an -bin color histogram is built for an image patch located at , , where here is the homogeneous spatial weighting kernel profile and is its bandwidth . is the delta function and normalizes .the function maps a feature of into a histogram bin . 
is the kernel center ; and for the target model usually .the representation of candidate takes the same form .given an initial position , the problem of localization / tracking is to estimate a best displacement such that the measurement at the new location best matches the target , , by taylor expanding at the start position and keeping only the linear item ( first - order taylor approximation ) , the above optimization problem can be resolved by an iterative procedure : } = \frac { \sum_{\ell=1}^n { \bi_\ell { \widetilde w}_\ell g ( \vert \frac{\bc ^ { [ \tau ] } - \bi_\ell}{h } \vert^2 ) } } { \sum_{\ell=1}^n { { \widetilde w}_\ell g ( \vert \frac{\bc ^ { [ \tau ] } - \bi_\ell}{h } \vert^2 ) } } , \ ] ] where and the superscript , indexes the iteration step .the weights are calculated as : see for details .we limit our explanation of the support vector machine classifiers algorithm to an overview .large margin classifiers have demonstrated their advantages in many vision tasks .svm is one of the popular large margin classifiers which has a very promising generalization capacity .the linear svm is the best understood and simplest to apply .however , linear separability is a rather strict condition .kernels are combined into margins for relaxing this restriction .svm is extended to deal with linearly non - separable problems by mapping the training data from the input space into a high - dimensional , possibly infinite - dimensional , feature space , , . using the kernel trick ,the map is not necessarily known explicitly . like other kernel methods, svm constructs a symmetric and positive definite kernel matrix ( gram matrix ) which represents the similarities between all training datum points .given training data , the kernel matrix is written as : .when is large , the labels of and , and , are expected to be the same . here , .the decision rule is given by with where , , are support vectors , is the number of support vectors , is the weight associated with , and is the bias .the training process of svm then determines the parameters by solving the optimization problem where is the slack variable set and the regularization parameter determines the trade - off between svm s generalization capability and training error . corresponds to -norm and -norm svm respectively .the solution takes the form . here, and most of them are , yielding sparseness .the optimization can be efficiently solved by linear programming ( -norm svm ) or quadratic programming ( -norm svm ) in its dual .refer to for details .a simple on - line kernel - based algorithm , termed , has been proposed for a variety of standard machine learning tasks in .the algorithm is computationally cheap at each update step .we have implemented here for on - line svm learning .see fig . in for the backbone of the algorithm .we omit the details due to space constraint .as mentioned , visual tracking is naturally a time - varying problem .an on - line learning method allows updating the model during the course of tracking .the standard kernel - based ms tracker is generalized by maximizing a sophisticated cost function defined by svm . measuringthe similarity between images and image patches is of central importance in computer vision . in svms, the kernel plays this role .most commonly used kernels such as gaussian and polynomial kernels are not defined on the space of probability distributions . 
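The review above can be condensed into a short sketch of the kernel-weighted histogram and the mean-shift update. This is a minimal single-channel version under assumed conventions (an Epanechnikov profile, so the derivative profile g is constant inside the window; a square window of radius h; a simple uniform quantizer as the bin-assignment function); the paper itself uses RGB colour histograms.

```python
import numpy as np

def weighted_histogram(image, center, h, n_bins=16):
    """Kernel-weighted histogram p_u(center) over a (2h+1)x(2h+1) window."""
    cy, cx = int(center[0]), int(center[1])
    ys, xs = np.mgrid[cy - h:cy + h + 1, cx - h:cx + h + 1]
    patch = image[ys.clip(0, image.shape[0] - 1), xs.clip(0, image.shape[1] - 1)]
    r2 = ((ys - cy) ** 2 + (xs - cx) ** 2) / float(h * h)
    k = np.maximum(1.0 - r2, 0.0)                               # Epanechnikov profile
    bins = (patch.astype(float) / 256.0 * n_bins).astype(int)   # assumed quantizer
    hist = np.bincount(bins.ravel(), weights=k.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-12), ys, xs, bins, k

def mean_shift_track(image, q, c0, h, n_bins=16, n_iter=20):
    """MS update with weights w_l = sum_u sqrt(q_u / p_u(c)) * delta(b(I_l) - u);
    q is the target-model histogram, c0 the starting centre (row, col)."""
    c = np.asarray(c0, dtype=float)
    for _ in range(n_iter):
        p, ys, xs, bins, k = weighted_histogram(image, c, h, n_bins)
        w = np.sqrt(q[bins] / (p[bins] + 1e-12))       # per-pixel weights
        g = (k > 0).astype(float)                      # -k'(r^2) for Epanechnikov
        denom = (w * g).sum() + 1e-12
        c_new = np.array([(ys * w * g).sum(), (xs * w * g).sum()]) / denom
        if np.linalg.norm(c_new - c) < 0.5:            # converged within half a pixel
            break
        c = c_new
    return c_new
```

With a target histogram q built once from the first frame, iterating mean_shift_track from frame to frame gives the standard tracker; the generalized tracker described next changes only how the per-pixel weights are formed.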
recently various probabilistic kernels have been introduced , including the fisher kernel , top , kullback - leibler kernel and probability product kernels ( ppk ) , to combine generative models into discriminative classifiers .a probabilistic kernel is defined by first fitting a probabilistic model to each training vector .the kernel is then a measure of similarity between probability distributions .ppk is an example , with kernel given by where is a constant .when , ppk reduces to a special case , termed the bhattacharyya kernel : in the case of discrete histograms , , ^\t ] , becomes when , computes the expectation of one distribution over the other , and hence is termed the expected likelihood kernel . in corresponding statistical affinity is used as similarity measurement for tracking .the bhattacharyya kernel is adopted in this work due to : * the standard ms tracker uses the bhattacharyya distance .it is clearer to show the connection between the proposed tracker and the standard ms tracker by using bhattacharyya kernel . *it has been empirically shown , at least for image classification , that the generalization capability of expected likelihood kernel is weaker than the bhattacharyya kernel . meanwhile , non - linear probabilistic kernels including bhattacharyya kernel , kullback - leibler kernel , rnyi kernel perform similarly .moreover , bhattacharyya kernel is simple and has no kernel parameter to tune .the ppk has an interesting characteristic that the mapping function is explicitly known : .this is equivalent to directly setting and the kernel .consequently for discrete ppk based svms , in the test phase the computational complexity is independent of the number of support vectors .this is easily verified .the decision function is ^\t \bp ( \bx ) ^\rho + b } \\ & = \left [ \sum_{i=1}^{n_s } { \hbeta_i \bq ( \bx_i ) ^\rho } \right]^\t \bp ( \bx ) ^\rho + b. \end{aligned}\ ] ] the first term in the bracket can be calculated beforehand .for example , for histogram based image classification like , given a test image , the histogram vector is immediately available .in fact we can interpret discrete ppk based svms as _ linear _ svms in which the input vectors are features _ non - linearly _ , it is linear .the non - linear probabilistic kernels induce a transformed feature space ( as the bhattacharyya kernel does ) to smooth density such that they significantly improve classification over the linear kernel . 
]extracted from image densities .again , one might argue that , since the bhattacharyya kernel is very similar to the linear svm , it might not have the same power in modelling complex classification boundaries as the traditional non - linear kernels like the gaussian or polynomial kernel .the experiments in indicate that the classification performance of a probabilistic kernel which consists an exponential calculation is not clearly better : exponential kernels like the kullback - leibler kernel and rnyi kernel performs similarly as bhattacharyya kernel on various datasets for image classification .moreover our main purpose is to learn a representation model for visual tracking .unlike other image classification tasks in which high generalization accuracy is demanded for visual tracking achieving very high accuracy might not be necessary and may not translate to a significant increase in tracking performance .note that ppks are less compelling when the input data are vectors with no further structure .however , even the gaussian kernel is a special case of ppk ( in equation and is a single gaussian fit to by maximum likelihood ) .by contrast , the reduced set method is applied in to reduce the number of support vectors for speeding up the classification phase .applications which favour fast computation in the testing phase , such as large scale image retrieval , might also benefit from this discrete ppk s property .it is well known that the magnitude of the svm score measures the _ confidence _ in the prediction .the proposed tracking is based on the assumption that the local maximum of the svm score corresponds to the target location we seek , starting from an initial guess close to the target . if the local maximum is positive , the tracker accepts the candidate .otherwise an exhaustive search or localization process will start . the tracked position at time is the initial guess of the next frame and so forth .we now show how the local maximum of the decision score is determined . as in ,a histogram representation of the image region can be computed as equation . with equations , and , we have to represent the image region .we also use the image center to represent the image region . for claritywe define notation . ] we assume the search for the new target location starts from a near position , then a taylor expansion of the kernel around is applied , similar to . after some manipulations and putting those terms independent of together , denoted by , becomes where and } { \sqrt{p_u ( \bc_0 ) } } \delta ( { \vartheta ( \bi_\ell -u ) } ) } .\label{eq : svtweight2}\ ] ] here is obtained by swapping the order of summation .the first term of is the weighted kernel density estimate with kernel profile at .it is clear now that our cost function has an identical format as the standard ms tracker .can we simply set which leads to a fixed - point iteration procedure to _ maximize _ as the standard ms does ?if it works , the optimization would be similar to .unfortunately , can not guarantee a local maximum convergence .that means , the fixed point iteration can converge to a local minimum .we know that only when all the weights are positive , converges to a local maximum as the standard ms does .see appendix for the theoretical analysis .however , in our case , a negative support vector s weight is negative , which means some of the weights computed by could be negative .the traditional ms algorithm requires that the sample weights must be non - negative . 
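To make the weight expression just given concrete, and to show why the per-frame cost does not grow with the number of support vectors, here is a small self-contained sketch with made-up data: the support-vector histograms Q, the signed coefficients beta and the candidate window's bin map are placeholders. The per-bin factor W_u = sum_i beta_i sqrt(q_{i,u}) is precomputed once; the negative entries of beta are what make some pixel weights negative, which is the issue taken up next.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sv, n_bins = 40, 16
Q = rng.dirichlet(np.ones(n_bins), size=n_sv)    # placeholder SV histograms q_i
beta = rng.normal(size=n_sv)                     # signed coefficients alpha_i * y_i
W = (beta[:, None] * np.sqrt(Q)).sum(axis=0)     # precomputed, independent of n_sv

def pixel_weights(bins, p):
    """Per-pixel weights w_l = W_{b(I_l)} / sqrt(p_{b(I_l)}(c0)).

    bins: integer bin index b(I_l) for every pixel in the candidate window,
    p: kernel-weighted histogram of the current candidate region.
    """
    return W[bins] / np.sqrt(p[bins] + 1e-12)

bins = rng.integers(0, n_bins, size=(31, 31))    # placeholder window bin map
p = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
p /= p.sum()
w = pixel_weights(bins, p)
print("fraction of negative weights:", float((w < 0).mean()))  # typically nonzero
```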
has discussed the issue on ms with negative weights and a _ heuristic _modification is given to make ms able to deal with samples with negative weights . according to ,the modified ms is } = \frac { \sum_{\ell=1}^n { \bi_\ell \hw_\ell g ( \vert \frac{\bc ^ { [ \tau ] } - \bi_\ell}{h } \vert^2 ) } } { \sum_{\ell=1}^n { \vert \hw_\ell g ( \vert \frac{\bc ^ { [ \tau ] } - \bi_\ell}{h } \vert^2 ) \vert } } .\ ] ] here is the absolute value operation . alas this heuristic solution is problematic .note that no theoretical analysis is given in .we show that the methods in can not guarantee converging to a local maximum mode .see appendix for details .the above problem may be avoided by using -class svms in which is strictly positive .however the discriminative power of svm is also eliminated due to its unsupervised nature . in this work , we use a quasi - newton gradient descent algorithm for maximizing in .in particular , the l - bfgs algorithm is adopted for implementing the quasi - newton algorithm .we provide callbacks for calculating the value of the svm classification function and its gradient .typically , only few iterations of the optimization procedure are performed at each frame .it has been shown that quasi - newton can be a better alternative to ms optimization for visual tracking in terms of accuracy .quasi - newton was also used in for kernel - based template alignment . besides, in the authors have shown that quasi - newton converges around twice faster than the standard ms does for data clustering .the essence behind the proposed svm score maximization strategy is intuitive .the cost function favors both the dissimilarity to negative training data ( , background ) and the similarity to positive training data . compared to the standard ms tracking, our strategy provides the capability to utilize a large amount of training data .the terms with positive in the cost function play the role to attract the target candidate while the negative terms repel the candidate . in zhaohave extended ms tracking by introducing a background term to the cost function , , . is the background color histogram in the corresponding region .it also linearly combines both positive and negative terms into tracking and better performance has been observed .it is simple and no training procedure is needed .nevertheless it lacks an elegant means to exploit available training data and the weighting parameters and need to be tuned manually .the original ms tracker s analysis relies on kernel properties .we argue that the main purpose of the kernel weighting scheme is to smooth the cost function such that iterative methods are applicable .kernel properties then derive an efficient ms optimization .as observed by many other authors , the kernels used as weighting kernel density estimation .we can simply treat the feature distribution as a weighted histogram to smooth the cost function and , at the same time , to account for the non - rigidity of tracked targets .note that ( 1 ) the optimization reduces to the standard ms tracking if ; ( 2 ) other probability kernels like are also applicable here .the only difference is that in will be in other forms . 
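A hedged sketch of the quasi-Newton alternative discussed above: SciPy's L-BFGS-B is run on the negated SVM score of the window centred at c. Here svm_score(image, c, h) is a placeholder for the classification score of the kernel-weighted histogram extracted at c (for instance assembled from the pieces sketched earlier), and the one-pixel finite-difference step replaces the analytic gradient used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def track_by_score_maximization(image, c0, svm_score, h=16, max_iter=10):
    """Seek a local maximum of the SVM score near the initial guess c0."""
    objective = lambda c: -svm_score(image, c, h)      # minimize the negated score
    res = minimize(objective, np.asarray(c0, dtype=float), method="L-BFGS-B",
                   options={"maxiter": max_iter, "eps": 1.0})
    accepted = -res.fun > 0.0            # positive score: candidate accepted
    return res.x, accepted               # otherwise a full search would be triggered
```

Only a handful of iterations per frame are run, which is consistent with the running times reported later.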
in previous contentswe have shown that in the testing phase discrete ppk s support vectors do not introduce extra computation .again , for our tracking strategy , no computation overhead is introduced compared with the traditional ms tracking in .this can be seen from equation .the summation in ( the bracketed term ) can be computed off - line .the only extra computation resides in the training phase : the proposed tracking algorithm has the _ same _ computation complexity as the standard ms tracker .it is also straightforward to extend this tracking framework to spatial - feature space which has proved more robust .+ a technique is proposed in , dubbed annealed mean shift ( ) , to reliably find the global _ density _ mode .is motivated by the observation that the number of modes of a kernel density estimator with a gaussian kernel is monotonically non - increasing the bandwidth of the kernel .here we re - interpret this global optimization and show that it is essentially a special case of the _ continuation _ approach . with the new interpretation ,it is clear now that this technique is applicable to a broader types of cost functions , not necessary to a density function .the continuation method is one of the unconstrained global optimization techniques which shares similarities with deterministic annealing . a series of gradually deformed but smoothed cost functions are successively optimized , where the solution obtained in the previous step serves as an initial point in the current step .this way the convergence information is conveyed . with sufficient smoothing , the first cost function will be concave / convex such that the global optimum can be found .the algorithm iterates until it traces the solution back to the original cost function .we now recall some basic concepts of the continuation method . given a non - linear function , the transformation for is defined such that , where is a smoothing function ; usually the gaussian is used . is a positive scalar which controls the degree of smoothing . is a normalization constant such that note the similarity between the smoothing function and the definition of the kernel in kde . from, the defined transformation is actually the convolution of the cost function with . in the frequency domain ,the frequency response of equals the product of the frequency responses of and . being a smoothing filter ,the effect of is to remove high frequency components of the original function .therefore one of the requirements for is its frequency response must be a low - pass frequency filter .we know that popular kernels like gaussian or epanechnikov kernel are low - pass frequency filters .this is one of the principle justifications for using gaussian or epanechnikov to smooth a function .when is increased , becomes smoother and for , the function is the original function .the annealed version of mean shift introduced in for global mode seeking is a special case of the general continuation method defined in equation .let the original function take the form of a dirac delta comb ( impulse train in signal processing ) , , , where is known . 
with the fundamental property that for any function , we have this is exactly same as a kde .this discovers that is a special case of the continuation method .when with ( can be negative ) , the above analysis still holds and this case corresponds to the svm score maximization in [ sec : dsm ] .it is not a trivial problem to determine the optimal scale of the spatial kernel bandwidth , , the size of the target , for kernel - based tracking .a line search method is introduced in . for, an important open issue is how to design the annealing schedule .armed with an svm classifier , it is possible to determine the object s scale . if only the color feature is used , due to its lack of spatial information and insensitive to scale change , it is difficult to estimate a fine scale of the target . by combining other features , betterestimates are expected . as we will see in the experiments ,reasonable results can be obtained with only color .it is natural to combine into a cascade structure , like the cascade detector of .we start ms search from a large bandwidth .after convergence , an extra verification is applied to decide whether to terminate the search .if , it means is too large . then we need to reduce the bandwidth to and start ms with the initial location .this procedure is repeated until , . and are the final scale and position .little extra computation is needed because only a decision verification is introduced at each stage .in this section we implement a localizer and tracker and discuss related issues .experimental results on various data sets are shown .for the first experiment , we have trained a face representation model . faces cropped from caltech- are used as positive raw images , and negative images are randomly cropped from images which do not contain faces .the image size is reduced to pixels .kernel - weighted rgb colour histograms , consisting of bins , are extracted for classification . by defaultwe use a soft svm trained with libsvm ( slightly modified to use customized kernels ) .test accuracy on the training data is ; and on a test data set which contains totally negative data .note that our main purpose is not to train a powerful face detector ; rather , we want to obtain an appearance model that is more robust than the single - view appearance model .we now test how well the algorithm maximizes the svm score .first , we feed the algorithm a rough initial guess and run ms .see fig .[ fig : trackingexp1 ] for details .the first example in fig .[ fig : trackingexp1 ] comes from the training data set .the initial svm score is negative . in this case, a single step is required to switch to a positive score it moves closely to the target after one iteration .we plot the corresponding cost function in fig .[ fig:1 ] . by comparison ,the cost function of the standard ms is also plotted ( the target template is cropped from the same image ) .we can clearly see the difference .the other two test images are from outside of the training data set . despite the significant face color difference and variation in illumination ,our svm localizer works well in both tests . to compare the robustness, we use the first face as a template to track the second face in fig .[ fig : trackingexp1 ] , the standard ms tracker fails to converge to the true position .we now apply the global maximum seeking algorithm to object localization . in , it has been shown that it is possible to locate a target no matter from which initial position the ms tracker starts . 
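The cascade just described can be summarized in a few lines. A sketch under assumed names: localize(image, c, h) is any of the searches above, returning a converged centre and its SVM score at bandwidth h; the geometric shrink factor and the minimum bandwidth are illustrative stand-ins for the bandwidth schedule, whose exact values are not given here.

```python
def cascade_scale_search(image, c0, h0, localize, shrink=0.8, h_min=8):
    """Run the mode search at a coarse bandwidth and shrink the kernel until the
    converged SVM score becomes positive; return the final position and scale."""
    c, h = c0, h0
    while h >= h_min:
        c, score = localize(image, c, h)   # MS or quasi-Newton search at scale h
        if score > 0.0:                    # the window now looks like the target
            return c, h
        h = int(shrink * h)                # score still negative: bandwidth too large
    return c, h_min                        # caller may treat this as "not found"
```

Whether each stage restarts from the original guess or from the previous convergence point is a detail the text leaves open; the sketch uses the previous point, in the spirit of the continuation interpretation.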
herewe use the learned classification rule to determine when to stop searching .we start the annealed continuation procedure with the initial bandwidth . then the bandwidth pyramid works with the rule , . is the maximum number of iterations .we stop the search when for some the svm score is positive upon convergence .the image center is set to be the initial position of the search for these tests .we present the results in fig .[ fig : localisationexp1 ] . in the first test ,our proposed algorithm works well : it successfully finds the face location , and also the final bandwidth well fits the target .[ fig : localisationexp1 ] ( right ) shows how the svm score evolves .it can be seen that every bandwidth change significantly increases the score .if the target size is large and there is a significant overlap between the target and a search region at a coarse bandwidth , , the overlap can make the cascade search stop _ prematurely _ ( see the second test in fig .[ fig : localisationexp1 ] ) .again this problem is mainly caused by the color feature s weak discriminative power .a remedy is to include more features .however , for certain applications where the scale - size is not critically important , our localization results have been usable .furthermore , better results could be achieved when we train a model for a specific object ( , train an appearance model for a specific person ) with a single color feature .effectiveness of the proposed generalized kernel - based tracker is tested on a number of video sequences .we have compared with two popular color histogram based methods : the standard ms tracker and particle filters .unlike the first experiment , we do not train an _ off - line _ svm model for tracking .it is not easy to have a large amount of training data for a general object , therefore in the tracking experiment , an on - line svm described in [ sec : onlinesvm ] is used for training .the user crops several negative data and positive data for initial training . during the course of tracking the on - line svm updates its model by regarding the tracked region as a positive example and randomly selecting a few sub - regions ( background area ) around the target as negative examples .a -binned color histogram is used for both the generalized kernel tracker and standard ms tracker . for the particle filter , with or particles, the tracker fails at the first a few frames .so we have used particles .and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) . , title="fig:",scaledwidth=11.7% ] + and the frame rate is frames per second ( fps ) . , title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] + and the frame rate is frames per second ( fps ) . , title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) ., title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) . , title="fig:",scaledwidth=11.7% ] and the frame rate is frames per second ( fps ) . 
, title="fig:",scaledwidth=11.7% ] and the frame rate is fps ., title="fig:",scaledwidth=11.7% ] and the frame rate is fps . , title="fig:",scaledwidth=11.7% ] and the frame rate is fps . , title="fig:",scaledwidth=11.7% ] and the frame rate is fps . ,title="fig:",scaledwidth=11.7% ] + and the frame rate is fps ., title="fig:",scaledwidth=11.7% ] and the frame rate is fps ., title="fig:",scaledwidth=11.7% ] and the frame rate is fps . ,title="fig:",scaledwidth=11.7% ] and the frame rate is fps . , title="fig:",scaledwidth=11.7% ] + and the frame rate is fps . , title="fig:",scaledwidth=11.7% ] and the frame rate is fps ., title="fig:",scaledwidth=11.7% ] and the frame rate is fps ., title="fig:",scaledwidth=11.7% ] and the frame rate is fps ., title="fig:",scaledwidth=11.7% ] in the first experiment , the tracked person moves quickly .hence the displacement between neighboring frames is large .the illumination also changes .the background scene is cluttered and contains materials with similar color as the target .the proposed algorithm tracks the whole sequence successfully .[ fig : tracking1 ] summarizes the tracking results .the standard ms tracker fails at frame # 57 ; recovers at frame # 74 and then fails again .the particle filter also loses the target due to motion blur and fast movement .our on - line adaptive tracker achieves the most accurate results .[ fig : tracking2 ] shows that the results on a more challenging video .the target turns around and at some frames it even moves out of the view . at frame # 194, the target disappears .generalized kernel tracker and particle filter recovers at the following frames while the ms tracker fails .again we can see the proposed tracker performs best due to its learned template model and on - line adaptivity .when the head turns around , all trackers can lock the target because compared with the background , the hair color is more similar to the face color .these two experiments show the proposed tracker s robustness to motion blur , large pose change and target s fast movement over the standard ms tracker and particle filter based tracker . in the experiments , to initialize the proposed tracker , we randomly pick up a few negative samples from the background .we have found this simple treatment works well .we present more samples from three more sequences in figs .[ fig : tracking3 ] , [ fig : tracking4 ] and [ fig : tracking5 ] .we mark only our tracker in these frames . from figs .[ fig : tracking3 ] and [ fig : tracking4 ] we see that despite the target moving into shadow at some frames , our tracker successfully tracks the target through the whole sequences .we have shown promising tracking results of the proposed tracker on several video clips .we now present some quantitative comparisons of our algorithm with other trackers .first , we run the proposed tracker , ms , and particle filter trackers on the cubicle sequence 1 . in fig .[ fig : cubicle1 ] , we show some tracking frames of our method and particle filtering . 
compared with particle filtering , ours are much better in terms of accuracy and much faster in terms of the tracking speed .our results are also _ slightly _ better than the standard ms tracker .but visually there is no significant difference , so we have not included ms results in fig .[ fig : cubicle1 ] .again , the particle filter tracker uses particles .we have run the particle filter times and the best result is reported .[ fig : qant1 ] shows the absolute deviation of the tracked object s center at each frame .clearly the generalized kernel tracker demonstrates the best result .we have reported the average tracking error ( the euclidean distance of the object s center against the ground truth ) in table [ tab : quat1 ] , which shows the proposed tracker outperforms ms and particle filter . in table[ tab : quat1 ] , the error variance estimates are calculated from the tracking results of all frames regardless the target is lost or not .we have also proved the importance of on - line svm update .as mentioned , when we switch off the on - line update , our proposed tracker would behave similarly to the standard ms tracker .we see from table [ tab : quat1 ] that even without updating , the generalized kernel tracker is slightly better than the standard ms tracker .this might be because the initialization schemes are different : the generalized kernel tracker can take multiple positive as well as negative training examples to _ learn _ an appearance model , while ms can only take a single image for initialization .although we only use very few training examples ( less than ) , it is already better than the standard ms tracker . in this sequence ,when the target object is occluded , the particle filter tracker only tracks the visible region such that the deviation becomes large .our approach updates the learned appearance model using on - line svm .the region that partially contains the occlusion is added to the object class database gradually based on the on - line update procedure .this way our tracker tracks the object position close to the ground truth .we also report the tracking failure rate ( fr ) for this video , which is the percentage of the number of failure frames in the total number of frames .if the distance between the tracked center and the ground truth s center is larger than a threshold , we mark it a failure . we have defined the threshold as or of the diagonal length of the ground truth s bounding box , which results in two criteria : and respectively .the former is more strict than the latter .as shown in table [ tab : quat1 ] , our tracker with on - line update produces lowest tracking failures under either criterion . and fps . , title="fig:",scaledwidth=11.7% ] and fps . ,title="fig:",scaledwidth=11.7% ] and fps ., title="fig:",scaledwidth=11.7% ] and fps . ,title="fig:",scaledwidth=11.7% ] + and fps ., title="fig:",scaledwidth=11.7% ] and fps ., title="fig:",scaledwidth=11.7% ] and fps . ,title="fig:",scaledwidth=11.7% ] and fps . ,title="fig:",scaledwidth=11.7% ] [ tab : quat1 ] we also compare the running time of trackers , which is an important issue for real - time tracking applications .table [ tab : time1 ] reports the results on two sequences .. a desktop with intel core duo 2.4-ghz cpu and -g ram is used for running all the experiments . 
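The evaluation protocol used in these comparisons (average Euclidean centre error and failure rate, with the failure threshold taken as a fraction of the ground-truth bounding-box diagonal) can be restated in a few lines; the fraction passed in below is a placeholder for the two threshold values whose numbers are missing from the text above.

```python
import numpy as np

def tracking_metrics(pred_centers, gt_centers, gt_boxes, frac=0.5):
    """Average centre error, its standard deviation, and the failure rate.

    pred_centers, gt_centers: (n_frames, 2) arrays of centres,
    gt_boxes: (n_frames, 4) array of (x, y, width, height) ground-truth boxes.
    A frame counts as a failure when the centre error exceeds frac * diagonal.
    """
    pred = np.asarray(pred_centers, dtype=float)
    gt = np.asarray(gt_centers, dtype=float)
    boxes = np.asarray(gt_boxes, dtype=float)
    err = np.linalg.norm(pred - gt, axis=1)
    diag = np.hypot(boxes[:, 2], boxes[:, 3])
    return float(err.mean()), float(err.std()), float(np.mean(err > frac * diag))
```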
]the generalized kernel tracker ( around fps ) is comparable to the standard ms tracker , and much faster than the particle filter .this coincides with the theoretical analysis : our generalized kernel tracker s computational complexity is independent of the number of support vectors , so in the test phrase , the complexity is almost same as the standard ms .one may argue that the on - line update procedure introduces some overhead .but the generalized kernel tracker employs the l - bfgs optimization algorithm which is about twice faster than ms , as shown in .therefore , overall , the generalized kernel tracker runs as fast as the ms tracker .because the particle filter is stochastic , we have run it times and the average and standard deviation are reported . for our tracker and ms , they are deterministic and the standard deviation is negligible .note that the computational complexity if the particle filter tracker is linearly proportional to the number of particles .-norm absolute error ( pixels ) of the object s center against the ground truth on the cubicle sequence 1 .the two figures correspond to - , and -axis , respectively .the proposed tracker with on - line updating gives the best result .as expected , the proposed tracker without updating shows a similar performance with the standard ms tracker ., title="fig:",scaledwidth=40.0% ] -norm absolute error ( pixels ) of the object s center against the ground truth on the cubicle sequence 1 .the two figures correspond to - , and -axis , respectively .the proposed tracker with on - line updating gives the best result .as expected , the proposed tracker without updating shows a similar performance with the standard ms tracker . , title="fig:",scaledwidth=40.0% ] .running time per frame ( seconds ) .the stochastic particle filter tracker has run times and the standard deviation is also reported . [cols="^,^,^,^",options="header " , ] [ tab : time1 ] we have run another test on cubicle sequence 2 .we show some results of our method and particle filtering in fig .[ fig : cubicle2 ] .although all the methods can track this sequence successfully , the proposed method achieves most accurate results .we see that when the tracked object turns around , our algorithm is still able to track it accurately .table [ tab : quat2 ] summarizes the quantitative performance .our method is also slightly better ms .again we see that on - line update does indeed improve the accuracy .we have also reported the tracking failure rates on this video .our tracker with on - line update has the lowest tracking failures and the one without on - line update is the second best .these results are consistent with the previous experiments .[ tab : quat2 ] and frame rate fps ., title="fig:",scaledwidth=11.7% ] and frame rate fps . ,title="fig:",scaledwidth=11.7% ] and frame rate fps ., title="fig:",scaledwidth=11.7% ] and frame rate fps ., title="fig:",scaledwidth=11.7% ] + and frame rate fps . ,title="fig:",scaledwidth=11.7% ] and frame rate fps ., title="fig:",scaledwidth=11.7% ] and frame rate fps . , title="fig:",scaledwidth=11.7% ] and frame rate fps . 
,title="fig:",scaledwidth=11.7% ] -norm absolute error ( pixels ) of the object s center against the ground truth on the walker sequence 3 .the two figures correspond to - , and -axis , respectively .it clearly shows that on - line update of the generalized kernel tracker is beneficial : without on - line update , the error is larger ., title="fig:",scaledwidth=40.0% ] -norm absolute error ( pixels ) of the object s center against the ground truth on the walker sequence 3 .the two figures correspond to - , and -axis , respectively .it clearly shows that on - line update of the generalized kernel tracker is beneficial : without on - line update , the error is larger ., title="fig:",scaledwidth=40.0% ] to demonstrate the effectiveness of the on - line svm learning , we switch off the on - line update and run the tracker on the walker sequence 3 .we plot the -norm absolute deviation of the tracked object s center in pixels at each frame in fig .[ fig : qant3 ] . apparently , at most frames , on - line update produces more accurate tracking results .the average euclidean tracking error is pixels with on - line update and pixels without on - line update .conclusions that we can draw from these experiments are : ( 1 ) the proposed generalized kernel - based tracker performs better than the standard ms tracker on all the sequences that we have used ; ( 2 ) on - line learning often improves tracking accuracy .to summarize , we have proposed a novel approach to kernel based visual tracking , which performs better than conventional single - view kernel trackers . instead of minimizing the density distance between the candidate region and the template , the generalized ms tracker works by maximizing the svm classification score .experiments on localization and tracking show its efficiency and robustness . in this way, we show the connection between standard ms tracking and svm based tracking .the proposed method provides a generalized framework to the previous methods .future work will focus on the following possible avenues : * other machine learning approaches such as relevance vector machines ( rvm ) , might be employed to learn the representation model . since in the test phrase , rvm and svmtake the same form , rvm can be directly used here .rvm achieves comparable recognition accuracy to the svm , but requires substantially fewer kernel functions .it would be interesting to compare different approaches performances ; * the strategy in this paper can be easily plugged into a particle filter as an observation model . improved tracking results are anticipated than for the simple color histogram particle filter tracker developed in .generally collins modified mean shift ( equation ) can not guarantee to converge to a local maximum .it is obvious that a fixed point obtained by iteration using equation will not satisfy is the original cost function .therefore , generally , will not even be an extreme point of the original cost function . in the following example, obtained by collins modified mean shift converges to a point which is close to a local _ minimum _ , but not the exact minimum . in fig .[ fig : negmsexp ] we give an example on a mixture of gaussian kernel which contains some negative weights . in this case both the standard ms and collins modified ms fail to converge to a maximum .p. prez , c. hue , j. vermaak , and m. gangnet , `` color - based probabilistic tracking , '' in _ proc .conf . comp ._ , copenhagen , denmark , 2002 , vol .2350 of _ lecture notes in computer science _ , pp . 
661675 . c. shen , a. van den hengel , and a. dick , `` probabilistic multiple cue integration for particle filter based tracking , '' in _ proc .conf . digital image computing techniques & applications _ , sydney , australia , 2003 , pp .309408 . o. javed , s. ali , and m. shah , `` online detection and classification of moving objects using progressively improving detectors , '' in _ proc .ieee conf . comp_ , san diego , ca , 2005 , vol . 1 ,696701 .p. j. moreno , p. ho , and n. vasconcelos , `` a kullback - leibler divergence based kernel for svm classification in multimedia applications , '' in _ proc .neural inf . process ._ , vancouver , canada , 2003 .r. jenssen , d. erdogmus , j. c. principe , and t. eltoft , `` towards a unification of information theoretic learning and kernel methods , '' in _ proc .ieee workshop on machine learning for signal proce ._ , sao luis , brazil , 2004 , pp . 93102 .r. jenssen , d. erdogmus , j. c. principe , and t. eltoft , `` the laplacian pdf distance : a cost function for clustering in a kernel feature space , '' in _ proc . adv .neural inf . process ._ , 2004 , vol .17 , pp . 625632 .c. yang , r. duraiswami , and l. davis , `` efficient spatial - feature tracking via the mean - shift and a new similarity measure , '' in _ proc .ieee conf . comp ._ , san diego , ca , 2005 , vol . 1 ,176183 .[ ] chunhua shen received the b.sc . and m.sc .degrees from nanjing university , china , and the ph.d .degree from university of adelaide , australia .he has been working as a research scientist in nicta , canberra research laboratory , australia since october 2005 .he is also an adjunct research follow at australian national university and an adjunct lecturer at university of adelaide .his research interests include statistical machine learning , convex optimization and their application in computer vision .junae kim is a phd student at the research school of information sciences and engineering , australian national university .she is also attached to nicta , canberra research laboratory .she received the b.sc .degree from ewha womans university , korea in 2000 , m.sc . from pohang university of science and technology , korea in 2002 , and m.sc . from australian national university in 2007 .she was a researcher in electronics and telecommunications research institute ( etri ) , korea for 5 years before she moved to australia .her research interests include computer vision and machine learning .[ ] hanzi wang received his b.sc .degree in physics and m.sc .degree in optics from sichuan university , china , in 1996 and 1999 , respectively .he received his ph.d .degree in computer vision from monash university , australia , in 2004 .he is a senior research fellow at the school of computer science , university of adelaide , australia .his current research interest are mainly concentrated on computer vision and pattern recognition including robust statistics , model fitting , optical flow calculation , visual tracking , image segmentation , fundamental matrix estimation and related fields .he has published more than 30 papers in major international journals and conferences .he is a member of the ieee society .
kernel - based mean shift ( ms ) trackers have proven to be a promising alternative to stochastic particle filtering trackers . despite their popularity , ms trackers have two fundamental drawbacks : ( 1 ) the template model can only be built from a single image ; ( 2 ) it is difficult to adaptively update the template model . in this work we generalize the plain ms trackers and attempt to overcome these two limitations . it is well known that modeling and maintaining a representation of a target object is an important component of a successful visual tracker . however , little work has been done on building a robust template model for kernel - based ms tracking . in contrast to building a template from a single frame , we train a robust object representation model from a large amount of data . tracking is viewed as a binary classification problem , and a discriminative classification rule is learned to distinguish between the object and the background . we adopt a support vector machine ( svm ) for training . the tracker is then implemented by maximizing the classification score . an iterative optimization scheme very similar to ms is derived for this purpose . compared with the plain ms tracker , it is now much easier to incorporate on - line template adaptation to cope with inherent changes during the course of tracking . to this end , a sophisticated on - line support vector machine is used . we demonstrate successful localization and tracking on various data sets . index terms : kernel - based tracking , mean shift , particle filter , support vector machine , global mode seeking .
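to make the score - maximization idea above concrete , the following minimal python sketch locates a target by hill - climbing a linear - svm decision value computed on a kernel - weighted histogram of a candidate window . the feature quantization , the kernel profile , the fixed svm weights and the greedy neighbourhood search are illustrative placeholders only , not the actual representation model or the mean - shift - style update derived in the paper :

import numpy as np

def kernel_histogram(img, cx, cy, r, n_bins):
    # epanechnikov-weighted histogram of quantized pixel values in a circular window
    h = np.zeros(n_bins)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    mask = (xs ** 2 + ys ** 2) <= r ** 2
    w = 1.0 - (xs ** 2 + ys ** 2) / float(r ** 2)      # kernel profile
    for dy, dx, wk in zip(ys[mask], xs[mask], w[mask]):
        y, x = cy + dy, cx + dx
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            h[img[y, x]] += wk
    s = h.sum()
    return h / s if s > 0 else h

def svm_score(hist, w, b):
    # linear svm decision value; a kernel svm would sum over support vectors instead
    return float(np.dot(w, hist) + b)

def track_step(img, cx, cy, r, w, b, n_bins, max_iter=40):
    # greedy hill climbing of the classification score over window centres
    for _ in range(max_iter):
        best = (svm_score(kernel_histogram(img, cx, cy, r, n_bins), w, b), cx, cy)
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            s = svm_score(kernel_histogram(img, cx + dx, cy + dy, r, n_bins), w, b)
            if s > best[0]:
                best = (s, cx + dx, cy + dy)
        if (best[1], best[2]) == (cx, cy):
            break
        cx, cy = best[1], best[2]
    return cx, cy

# toy usage: 8-bin quantized frame with a hand-made "object" patch
rng = np.random.default_rng(0)
frame = rng.integers(0, 8, size=(120, 160))
frame[40:60, 70:90] = 3                                  # patch acting as the target
w = np.zeros(8); w[3] = 1.0; b = -0.5                    # placeholder svm weights
print(track_step(frame, 72, 60, 12, w, b, n_bins=8))

replacing the greedy search by the closed - form mode - seeking step , and the fixed weights by an on - line updated svm , recovers the structure of the proposed tracker .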
transmembrane voltage is often recorded during physiological study of biological neurons . however , voltage - gated ion channel activity and neurotransmitter levels are quite difficult to measure directly and are usually unobserved in such studies . in addition , there is a great diversity of neuron morphology , protein expression , and plasticity which may affect voltage dynamics and synaptic transmission . early development and senescence may also be major determinants of voltage response profiles . synaptic tuning in particular is thought to be an essential mediator of learning , stimulus response integration , and memory . there is evidence that memory and learning may depend critically on several distinct types of dynamic behavior in the voltage of neurons . the ml model reproduces the voltage of a single neuron and , depending on parameterization and initial conditions , can exhibit many of the experimentally observed behaviors of biological neurons . in this paper , we explore a simple neural network consisting of two biologically identical , reciprocally coupled ml neurons . earlier work has shown that this modest model can exhibit a wide range of oscillating or non - oscillating voltage depending on the values of just a few parameters ; in this study these are the synaptic coupling strength and the applied current . in the absence of noise , the model can predict synchronous or asynchronous firing , as well as either equal or unequal action potential amplitudes . additionally , in the presence of even small noise in the applied current and weak synaptic coupling , the system can exhibit mixed - mode oscillations ( mmo ) characterized by periods of small amplitude oscillation interrupted by large amplitude excursions . further work with the two - ml - neuron model explored two synaptically decoupled neurons driven by both common and independent intrinsic noise terms . it found that shared common noise promotes synchronous firing of the two neurons , while separate intrinsic noise terms promote asynchronous firing . the relative scaling of the two noise sources was observed to be key in predicting the degree of synchrony . in addition , while that work did not specifically look at mmo , it hypothesized that such synchrony in a synaptically coupled network would increase the probability of mmo , by facilitating longer residence times within the unstable periodic orbits adjacent to the system s stable periodic orbits . indeed , in this paper we will detail the relative positions of these parameter regions as they are of key importance to our conditioned likelihood approach . specifically , we will provide a quick look - up table for the region in parameter space where stable periodic orbits are possible . a related study develops an expectation - maximization ( em ) stochastic particle filter method to estimate the parameters in a single ml neuron based on observation of voltage only . a key aspect of their approach is that they assume both the voltage and the channel gating variables are in an oscillatory regime , but stochastically perturbed . these perturbations are considered nuisance parameters which their method marginalizes away . specifically , they treat the unobserved channel gating variable from the model as a completely latent variable .
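for reference in what follows , a minimal python sketch of the deterministic single - neuron morris - lecar dynamics in their standard textbook form , stepped with the same kind of forward euler scheme mentioned below ; the parameter values are a common illustrative type - ii set and not the values considered in this study , and the two - neuron network adds a synaptic current term to each voltage equation :

import numpy as np

# standard morris-lecar right-hand side (illustrative type-ii parameter set)
P = dict(C=20.0, gL=2.0, gCa=4.4, gK=8.0, VL=-60.0, VCa=120.0, VK=-84.0,
         V1=-1.2, V2=18.0, V3=2.0, V4=30.0, phi=0.04, I=100.0)

def ml_rhs(V, w, p):
    m_inf = 0.5 * (1.0 + np.tanh((V - p['V1']) / p['V2']))
    w_inf = 0.5 * (1.0 + np.tanh((V - p['V3']) / p['V4']))
    tau_w = 1.0 / np.cosh((V - p['V3']) / (2.0 * p['V4']))
    dV = (p['I'] - p['gL'] * (V - p['VL'])
          - p['gCa'] * m_inf * (V - p['VCa'])
          - p['gK'] * w * (V - p['VK'])) / p['C']
    dw = p['phi'] * (w_inf - w) / tau_w
    return dV, dw

def simulate(T=1000.0, dt=0.05, V0=-60.0, w0=0.0, p=P):
    n = int(T / dt)
    V, w = np.empty(n), np.empty(n)
    V[0], w[0] = V0, w0
    for k in range(n - 1):                 # forward euler step
        dV, dw = ml_rhs(V[k], w[k], p)
        V[k + 1] = V[k] + dt * dV
        w[k + 1] = w[k] + dt * dw
    return V, w

V, w = simulate()
print('voltage range:', V.min(), V.max())  # spikes periodically for this illustrative drive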
starting from estimates of the initial conditions for the voltage and channel gating variables , they iteratively predict the gating variable and voltage and then update the predicted voltage to the next time step using a modification of the well - known euler differential equation solver . they note that an assumption of stationarity in their method limits applicability to only short time windows over which current input can be considered constant ( e.g. 600ms ) . they also note that certain parameters , conductances and reversal potentials in particular , are sensitive to the choice of tuning parameters required by the method . these studies demonstrate the active progress as well as the challenges of model parameter estimation for biological neuronal models and , more generally , for relaxation oscillator models . each of these studies derives asymptotic approximations or general forms for the model likelihood , but uses fundamentally different techniques and assumptions in doing so . in each study the approach is specifically crafted to the model . in this paper we attempt to develop a convenient bayesian estimation scheme with only a few tuning parameters and relatively few mild assumptions . we focus our attention on deterministic synaptically coupled ml neurons . application of our method to stochastically coupled ml neurons is on - going work in our group . in the case of ml , estimation of these parameters is non - trivial due to the diversity of possible dynamic behavior and the abrupt transitions among these seen with just small changes in these parameter values . however , we can better understand the critical values of these parameters by studying the system s bifurcation structure . we are able to locate parameter regimes where dramatic changes in the system appear . the neurons analyzed in this study are classified as type ii neurons , characterized by discontinuous drastic shifting between behavioral states . because there is a distinct switch in behavior , bifurcation analyses determine a closed region of parameter space over which the relevant dynamics may occur . sampling over such a feasibility region amounts to conditioning the inference on an _ a priori _ assumed class of dynamics ( e.g. stable node , limit cycle , steady state etc . ) . besides facilitating conditioning of the likelihood on feature statistics of the voltage , this may translate into increased confidence and reduced bias in the parameter estimates . our goal is parameter inference based on the temporal voltage response of two synaptically coupled neurons which are deterministically coupled to voltage - gated ionic conductance dynamics . a single ml model has a two - dimensional phase space and is known to reproduce many of the behaviors experimentally observed in biological neurons . therefore , systems of coupled ml neurons may offer a reasonable starting point for developing statistical inference methods for models of neuronal networks . the ml network we study consists of two reciprocally coupled ml neurons , each described by a voltage equation and a gating equation ; in the stochastic version of this model the driving noise terms are standard independent wiener processes . in this paper , however , we will be concerned with the deterministic version of this model , in which these noise terms vanish . let the signal have the fourier series representation $\sum_{k}\left[a_k\cos(2\pi\phi_k t)+b_k\sin(2\pi\phi_k t)\right]$ . then it is to be shown that the cumulative power can be written in the form $c\cdot t+\mathcal{O}\!\left(1\right)$ . the proof is by induction and is adapted from .
in the base case ( ) , t & \overbrace{+2\pi^3 a_1 ^ 2 \phi_1 ^ 3 \sin(4\pi\phi_1 t)}^{g(t)~\in{~{\ensuremath{\mathcal{o}\!\left(1\right)}\xspace}}}\\ & - 2\pi^3 b_1 ^ 2 \phi_1 ^ 3 \sin(4\pi\phi_1t)\\ & + 16\pi^4a_1b_1\phi_1 ^ 4 \cos^2(2\pi\phi_1t)\\ & -16\pi a_1b_1\phi_1 ^ 4\end{aligned}\ ] ]then it is supposed that for we have , +g_{n-1}(t)\end{aligned}\ ] ] +g_n(t)\end{aligned}\ ] ] next , collecting the terms from the summation yields , expanding the square in the previous result gives , the first term is recognized as the induction hypothesis and so has the form .the integrals of the remaining terms are evaluated with the extensive use of trigonometric identities .+g_{n-1}(t)\\ & + \sum_{k=1}^{n-1}8\pi^3a_ka_n\phi_k^2\phi_n^2\left[\frac{\sin(2\pi(\phi_k-\phi_n)t)}{\phi_k-\phi_n}+\frac{\sin(2\pi(\phi_k+\phi_n)t)}{\phi_k+\phi_n}\right]\\ & -\sum_{k=1}^{n-1}8\pi^3b_ka_n\phi_k^2\phi_n^2\left[\frac{\cos(2\pi(\phi_k-\phi_n)t)}{\phi_k-\phi_n}+\frac{\cos(2\pi(\phi_k+\phi_n)t)}{\phi_k+\phi_n}\right]\\ & + \sum_{k=1}^{n-1}8\pi^3b_ka_n\phi_k^2\phi_n^2\left[\frac{1}{\phi_k-\phi_n}+\frac{1}{\phi_k+\phi_n}\right]\\ & -\sum_{k=1}^{n-1}8\pi^3a_kb_n\phi_k^2\phi_n^2\left[\frac{\cos(2\pi(\phi_k-\phi_n)t)}{\phi_k-\phi_n}+\frac{\cos(2\pi(\phi_k+\phi_n)t)}{\phi_k+\phi_n}\right]\\ & + \sum_{k=1}^{n-1}8\pi^3a_kb_n\phi_k^2\phi_n^2\left[\frac{1}{\phi_k-\phi_n}+\frac{1}{\phi_k+\phi_n}\right]\\ & + \sum_{k=1}^{n-1}8\pi^3b_kb_n\phi_k^2\phi_n^2\left[\frac{\sin(2\pi(\phi_k-\phi_n)t)}{\phi_k-\phi_n}+\frac{\sin(2\pi(\phi_k+\phi_n)t)}{\phi_k+\phi_n}\right]\\ & \mathbf{+8\pi^4\phi_n^4a_n^2t}+2\pi^3a_n^2\phi_n^3\sin(4\pi\phi_nt)\\ & + 16\pi^4a_nb_n\phi_n^4\sin^2(2\pi\phi_nt)\\ & \mathbf{+8\pi^4\phi_n^4b_n^2t}-2\pi^3b_n^2\phi_n^3\sin(4\pi\phi_nt)\end{aligned}\]]the bold terms may be combined with the leading term of the induction hypothesis raising the upper bound of the summation from to .the remaining terms , only containing as arguments of sines and cosines , can be merged with from the induction hypothesis . callingthis merger completes the induction .cumulative power has been written in the desired form +g_n(t)\\ & = & c\cdot{t } + { \ensuremath{\mathcal{o}\!\left(1\right)}\xspace}\end{aligned}\ ] ] since clearly it follows that their sum proving the desired result .this report summarizes work that was done as part of the summer undergraduate research institute of experimental mathematics ( suriem ) held at the lyman briggs college of michigan state university .we are very grateful to the national security agency and the national science foundation for funding this research .we would also like to thank our advisor , professor daniel p. dougherty , for his guidance throughout the summer , and our graduate assistant , joseph e. roth , for his assistance .
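the linear - in - time growth established above is easy to check numerically ; in the python sketch below the cumulative power is taken , for concreteness , as the running integral of the squared signal ( if the intended quantity involves a derivative of the signal , as the constants appearing in the proof suggest , the same linear - plus - bounded behaviour holds with a different slope ) :

import numpy as np

rng = np.random.default_rng(1)
n_terms = 5
a = rng.normal(size=n_terms)
b = rng.normal(size=n_terms)
phi = rng.uniform(0.5, 3.0, size=n_terms)            # distinct frequencies (almost surely)

def v(t):
    t = np.asarray(t)[..., None]
    return np.sum(a * np.cos(2 * np.pi * phi * t) + b * np.sin(2 * np.pi * phi * t), axis=-1)

# cumulative power: running trapezoidal integral of v(s)^2 from 0 to t
t = np.linspace(0.0, 200.0, 200001)
vals = v(t) ** 2
power = np.concatenate([[0.0], np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))])

fit = np.polyfit(t, power, 1)
resid = power - np.polyval(fit, t)
print('fitted slope     :', fit[0])
print('predicted slope c:', 0.5 * np.sum(a ** 2 + b ** 2))   # linear coefficient for this choice of power
print('max |residual|   :', np.abs(resid).max())             # stays bounded, i.e. the O(1) part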
the morris - lecar ( ml ) model has applications to neuroscience and cognition . a simple network consisting of a pair of synaptically coupled ml neurons can exhibit a wide variety of deterministic behaviors including asymmetric amplitude state ( aas ) , equal amplitude state ( eas ) , and steady state ( ss ) . in addition , in the presence of noise this network can exhibit mixed - mode oscillations ( mmo ) , which represent the system being stochastically driven between these behaviors . in this paper , we develop a method to specifically estimate the parameters representing the coupling strength and the applied current of two reciprocally coupled and biologically similar neurons . this method employs conditioning the likelihood on cumulative power and mean voltage . conditioning has the potential to improve the identifiability of the estimation problem . conditioned likelihoods are typically much simpler to model than the explicit joint distribution , which several studies have shown to be difficult or impossible to determine analytically . we adopt a rejection sampling procedure over a closed , well - defined region determined by bifurcation continuation analyses . this rejection sampling procedure is easily embedded within the proposal distribution of a bayesian markov chain monte carlo ( mcmc ) scheme and we evaluate its performance . this is the first report of bayesian parameter estimation for two reciprocally coupled morris - lecar neurons , and we find that a proposal utilizing rejection sampling reduces parameter estimate bias relative to naive sampling . application to stochastically coupled ml neurons is a future goal .
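the proposal - with - rejection idea summarized above can be sketched generically : candidate parameter draws are redrawn until they fall inside the feasibility region identified by the bifurcation analysis , and only then passed to the usual metropolis accept / reject step . in the python sketch below the region test , the conditioned likelihood and the proposal scales are placeholders , not the quantities actually used in this work :

import numpy as np

rng = np.random.default_rng(2)

def in_region(theta):
    # placeholder feasibility region (e.g. from bifurcation continuation): a box here
    return (0.1 < theta[0] < 1.0) and (30.0 < theta[1] < 120.0)

def log_lik(theta, data):
    # placeholder conditioned likelihood on summary statistics of the voltage trace
    mu = np.array([0.5 * theta[0] + 0.002 * theta[1], 0.01 * theta[1]])
    return -0.5 * np.sum((data - mu) ** 2) / 0.05 ** 2

def propose(theta, scale=np.array([0.05, 5.0]), max_tries=100):
    # rejection sampling inside the feasibility region
    for _ in range(max_tries):
        cand = theta + scale * rng.normal(size=theta.size)
        if in_region(cand):
            return cand
    return theta          # give up and stay put if the region is hard to hit

def mcmc(data, theta0, n_iter=5000):
    theta = np.array(theta0, dtype=float)
    ll = log_lik(theta, data)
    chain = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        cand = propose(theta)
        ll_cand = log_lik(cand, data)
        if np.log(rng.uniform()) < ll_cand - ll:      # metropolis accept/reject
            theta, ll = cand, ll_cand
        chain[k] = theta
    return chain

data = np.array([0.45, 0.9])                          # synthetic summary statistics
chain = mcmc(data, theta0=[0.5, 90.0])
print('posterior mean:', chain[2500:].mean(axis=0))

note that truncating the proposal to the feasibility region makes it asymmetric near the boundary , so a careful implementation would include the corresponding hastings correction in the acceptance ratio .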
quantum cryptography has brought us new ways of exchanging a secret key between two users ( known as alice and bob ) . the security of such quantum key distribution ( qkd ) methods is based on a very basic rule of nature and quantum mechanics : the `` no - cloning '' principle . the first qkd protocol was suggested in a seminal paper by bennett and brassard in 1984 , and is now known as bb84 . during recent years many security analyses were published which proved the information - theoretical security of the bb84 scheme against the most general attack by an unlimited adversary ( known as eve ) , who has full control over the quantum channel . those security proofs are limited as they always consider a theoretical qkd that uses perfect qubits . although these security proofs do take errors into account , and the protocols use error correction and privacy amplification ( to compensate for these errors and for reducing any partial knowledge that eve might have ) , in general , they avoid security issues that arise from the implementation of qubits in the _ real world _ . a pivotal paper by brassard , lütkenhaus , mor , and sanders presented the `` photon number splitting ( pns ) attack '' and exposed a security flaw in experimental and practical qkd : one must take into account the fact that alice does not generate perfect qubits ( 2 basis - states of a single photon ) , but , instead , generates states that reside in an enlarged hilbert space ( we call it `` quantum space '' here ) , of six dimensions . the reason for that discrepancy in the size of the used quantum space is that each electromagnetic pulse that alice generates contains ( in addition to the two dimensions spanned by the single - photon states ) also a vacuum state and three 2-photon states , and these are extremely useful to the eavesdropper . that paper proved that , in contrast to what was assumed in previous papers , eve can make use of the enlarged space , and get a lot of information on the secret key , sometimes even full information , without inducing any noise . many attacks on the practical protocols then followed ( e.g. , ) , based on extensions of the quantum spaces , exploring various additional security flaws ; other papers suggested possible ways to overcome such attacks . on the one hand , several security proofs , considering specific imperfections , were given for the bb84 protocol . yet on the other hand , it is generally impossible now to prove the security of a practical protocol , since _ a general framework _ that considers such realistic qkd protocols , _ and _ the possible attacks on such protocols , is still missing . we show that the pns attack , and actually all attacks directed at the channel , are various special cases of a general attack that we define here , the _ quantum - space attack _ ( qsa ) . the qsa generalizes existing attacks and also offers novel attacks . the qsa is based on the fact that the `` qubits '' manipulated in the qkd protocol actually reside in a larger hilbert space , and this enlarged space _ can be assessed _ .
although this enlarged space is not fully accessible to the legitimate users , they can still analyze it , and learn what a fully powerful eavesdropper can do .we believe that this assessment of the enlarged `` quantum space of the protocol '' is a vital step on the way to proving or disproving the unconditional security of practical qkd schemes .we focus on schemes in which the quantum communication is uni - directional , namely , from alice s laboratory ( lab ) to bob s lab .we consider an adversary that can attack all the quantum states that come out of alice s lab , and all the quantum states that go into bob s lab .the paper is organized as follows : definitions of the quantum spaces involved in the realization of a protocol , and of the `` quantum space of the protocol '' , are presented and discussed in section [ sec : qsop ] .the `` quantum - space attack '' is defined and discussed in section [ sec : qsa ] . using the general framework when the information carriers are photons is discussed in section [ sec : qsaphotonicworld ] .next , in section [ sec : knownqsa ] we show that the best known attacks on practical qkd are special cases of the qsa .section [ sec : interferobb84 ] demonstrates and analyzes a novel qsa on an interferometric implementation of the bb84 and the six - state qkd protocols .last , we discuss a few subtleties and open problems for future research in section [ sec : conclusion ] .we would like to emphasize that our ( crypt)analysis presents the difficulty of proving unconditional security for practical qkd setups , yet also provides an important ( probably even vital ) step in that direction .the quantum space attack ( qsa ) is the most general attack on the quantum channel that connects alice to bob .it can be applied to any realistic qkd protocol , yet here we focus on uni - directional schemes and on implementations of the bb84 protocol and the six - state protocol .we need to have a proper model of the protocol in order to understand the hilbert space that an unlimited eve can attack .this space has never been analyzed before except for specific cases .our main finding is a proper description of this space , which allows , for the first time , defining the most general eavesdropping attack on the channel .we start with a model of a practical `` qubit '' , continue with understanding the spaces used by alice and bob , and end by defining the relevant space , the _ quantum space of the protocol _ ( qsop ) , used by eve to attack the protocol .the attacks on the qsop are what we call _ quantum - space attacks_. in most qkd protocols , alice sends bob qubits , namely , states of 2 dimensional quantum spaces ( ) .a realistic view should take into account any deviation from theory , caused by alice s equipment .for example , alice might encode the qubit via a polarized photon : via a photon polarized horizontally , and polarized vertically .this can be written using fock notation are called fock states , see section [ sec : qsaphotonicworld ] . ] as where ( ) represents the number of horizontal ( vertical ) photons ; then and .when alice s photon is lost within her equipment ( or during the transmission ) , bob gets the state , so that alice s realistic space becomes .alice might send multiple photons and then is of higher dimension , see section [ sec : alicerealphotonic ] .[ def : ha ] * alice s realistic space , , * is the minimal space containing the actual quantum states sent by alice to bob during the qkd protocol . 
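the enlargement of the space by the vacuum component can be made concrete with a small numerical sketch : a polarization qubit embedded in the three - dimensional space spanned by the vacuum and the two single - photon states , subjected to a polarization - independent loss channel with survival probability eta ; the value of eta and the input state are illustrative only :

import numpy as np

eta = 0.1                                   # probability that the photon survives the channel
vac, H, V = np.eye(3)                       # basis order: |vacuum>, |1,0>^f, |0,1>^f

# kraus operators of a polarization-independent loss (amplitude-damping) channel
K0 = np.outer(vac, vac) + np.sqrt(eta) * (np.outer(H, H) + np.outer(V, V))
K1 = np.sqrt(1 - eta) * np.outer(vac, H)
K2 = np.sqrt(1 - eta) * np.outer(vac, V)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1 + K2.conj().T @ K2, np.eye(3))

psi = (H + V) / np.sqrt(2)                  # ideal qubit sent by alice (a diagonal polarization)
rho_in = np.outer(psi, psi.conj())
rho_out = sum(K @ rho_in @ K.conj().T for K in (K0, K1, K2))

print('weight on vacuum     :', rho_out[0, 0].real)               # 1 - eta
print('weight on qubit space:', np.trace(rho_out[1:, 1:]).real)   # eta

the 1 - eta weight on the vacuum is exactly the component that forces alice s realistic space beyond the ideal qubit .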
in the bb84 protocol , alice sends qubits in two , etc . ]fixed conjugate bases .theoretically , alice randomly chooses a basis and a bit value and sends the chosen bit encoded in the appropriate chosen basis as a state in ( e.g. , , , and ) .to a better approximation , the states sent by alice are four different states ( ) in her realistic space , spanned by these four states .this space is of dimension , commonly between 2 and 4 , depending on the specific implementation .as practical instruments often diverse from theory , alice might send quite different states . as an extreme example , see the _ tagging attack _( section [ sec : tagasqsa ] ) , which is based on the fact that alice s space could contain more than just these four theoretical states , so that is possible .bob commonly receives one of several possible states sent by alice , and measures it .the most general measurement bob can perform is to add an ancilla , perform a unitary transformation on the joint system , perform a complete measurement , and potentially `` forget '' some of the outcomes . however ,once alice s space is larger than , the extra dimensions provided by alice could be used by bob for his measurement , _ instead of _ adding an ancilla . interestingly , by his measurement bob might be _ extending _ the space vulnerable to eve s attack well beyond .this is possible since in many cases the realistic space , , is embedded inside a larger space .[ def : m ] the space is the space in which is embedded , . the space is the actual space available for alice and an eavesdropper . due to the presence of an eavesdropper ,bob s choice whether to add an ancilla or to use the extended space is vital for security analysis . in the first casethe ancilla is added by bob , inside his lab , while in the second it is controlled by alice , transferred through the quantum channel and exposed to eve s deeds .eve might attack the extended space , and thus have a different effect on bob , considering his measurement method .for example , suppose alice sends two non - orthogonal states of a qubit , and , with a fixed and known angle .bob would like to distinguish between them , while allowing inconclusive results sometimes , but no errors .bob can add the ancilla and perform the following transformation : where .this operation leads to a conclusive result with probability ( when the measured ancilla is ) , and inconclusive result otherwise .it is simple to see that the same measurement can be done , _ without the use of an ancilla _, if the states and are embedded at alice s lab in a larger space , e.g. , using bob s transformation in the general case , the space might be very large , even infinite .bob might use only parts of it , for his measurements .a complication in performing security analysis is due to bob s option to _ both _ use an ancilla and extend the space used by alice .our analysis in the following sections starts with the space extension only ( sections [ sec : hb][sec : hp ] ) , and later on deals with the general case ( sections [ sec : hb+anc][sec : hp+anc ] ) .let us formulate the spaces involved in the protocol , as described above .assume alice uses the space according to definition [ def : ha ] , which is embedded in a ( potentially larger ) space .ideally , in the bb84 protocol , bob would like to measure just the states in , but in practice he usually can not do so .each one of alice s states is transformed by bob s equipment into some pure . for the notion of mixed states or quantum mixture see .] 
state .the space which is spanned by those states contains all the information about alice s states .more important , bob might be measuring un - needed subspaces of which alice s states do not span .for instance , examine the case where bob uses detectors to measure the fock states and .bob is usually able to distinguish a loss ( the state ) or an error ( e.g. , one horizontal photon and one vertical photon ) , from the two desired states , but he can not distinguish between other states containing multiple photons .this means that bob measures a much larger subspace of the entire space , but ( inevitably ) interprets outcomes outside as legitimate states ; e.g. the states , , etc .are ( mistakenly ) interpreted as .see further discussion in section [ sec : photonicextension ] .we denote bob s setup ( beam splitters , phase shifters , etc . ) by the unitary operation , followed by a measurement ; all these operations are operating on the space ( or parts of it ) .bob might have several different setups ( e.g. a different setup for the -basis and for the -basis ) .let be the set of unitary transformations in all bob s setups .[ def : hb ] * [ this definition is temporary . ] * given a specific setup - transformation , let be the subsystem actually measured by bob , having basis states .the set of * bob s measured spaces * is the set of spaces .we have already seen that bob might be measuring un - needed dimensions . on the other hand he might not measure certain subspaces of , even when alice s state might reach there . in either case , the deviation is commonly due to limitations of bob s equipment .the `` quantum space of the protocol '' ( qsop ) is in fact alice s _ extended _ space , taking into consideration its _ extensions _ due to bob s measurements .the security analysis of a protocol depends on the space defined below .[ def : hb-1 ] * [ this definition is temporary . ] * * the reversed space * is the hilbert space spanned by the states , for each possible setup , and for each basis state of the appropriate .the space usually resides in a larger space than .for instance , using photons , the ideal space consists of two modes with 2 basis states , see section [ sec : qsaphotonicworld ] .now could have an infinite space in each mode , but also could have more modes . in order to derive the quantum space of the protocol we need to define the way alice s spaceis extended according to , for this simple case where bob does not add an ancilla . in this case, the space simply extends alice s space to yield the qsop via . formally speaking [ def : hp ]* [ this definition is temporary . ] * * the quantum space of the protocol * , , is the space spanned by the basis states of the space and the basis states of the space . if alice s realistic space is fully measured by bob s detection process , then is a subspace of , hence . 
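definitions [ def : hb-1 ] and [ def : hp ] amount to a span computation : collect the states actually sent by alice together with the reversed states obtained by applying the inverse setup transformation to the basis states bob measures , stack them as vectors , and extract an orthonormal basis . the python sketch below does this numerically ; the ambient dimension , the states and the setup unitary are random placeholders standing in for a concrete implementation :

import numpy as np

rng = np.random.default_rng(3)
dim_M = 8                                            # dimension of the ambient space

def random_unitary(d, rng):
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# placeholder states sent by alice (spanning a small subspace of the ambient space)
alice_states = [np.eye(dim_M)[i] for i in (0, 1)]

# placeholder setup unitary and the basis states bob actually measures
U = random_unitary(dim_M, rng)
measured = [np.eye(dim_M)[i] for i in (0, 1, 2)]
reversed_states = [U.conj().T @ b for b in measured]   # inverse setup applied to measured states

# quantum space of the protocol = span of alice's states and the reversed states
stack = np.vstack(alice_states + reversed_states)
u, s, vh = np.linalg.svd(stack)
rank = int(np.sum(s > 1e-10))
qsop_basis = vh[:rank]                                 # orthonormal basis (rows) of the qsop

print('dim alice =', len(alice_states), ' dim reversed =', len(reversed_states),
      ' dim qsop =', rank)

the ancilla - assisted case of definition [ def : hp+anc ] , treated next , would additionally require tracing out bob s ancilla before the span is taken .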
in the general case , one must consider bob s option to add an ancilla during his measurement process .this addition causes a considerable difficulty in analyzing a protocol , however it is often an inherent part of the protocol , and can not be avoided .we denote the added ancilla as the state that resides in the space .[ def : m+anc ] is the space that includes the physical space used by alice as defined in definition [ def : m ] , in addition to bob s ancilla , .bob measures a subspace of the space , so the ( permanent ) definitions of his measured spaces and the reversed space should be modified accordingly .[ def : hb+anc ] given a specific setup - transformation let be the subsystem actually measured by bob , having basis states .the set of * bob s measured spaces * , is the set of spaces .the quantum space of the protocol is still alice s _ extended _ space , while considering its _ extensions _ due to bob s measurements .yet , the added ancilla makes things much more complex .the security analysis of a protocol depends now _ not _ on the space defined below , but on a ( potentially _ much larger _ ) space obtained from it by tracing - out bob s ancilla .as before , we first define the reversed space .[ def : hb-1+anc ] * the reversed space * is the hilbert space spanned by the states , for each possible setup , and for each basis state of the appropriate .once a basis state of one of bob s measured spaces is reversed by we result with a state that might , partially , reside in bob s ancillary space .since eve has no access to this space it must be traced - out ( separated out ) , for deriving the qsop .let us redefine the qsop given the addition of the ancilla : [ def : hp+anc ] * the quantum space of the protocol , * , is the space spanned by * ( a ) * the basis states of the space ; and * ( b ) * the states $ ] , ( namely , after tracing out bob ) , for each possible setup , and for each basis state of the appropriate space . whenever entangles bob s ancilla with the system sent from alice , tracing out bob s ancilla after performing might cause an increase of the qsop to the dimension of bob s ancillary space . for instance , assume alice s state is embedded in an -qubit space to which bob adds an ancilla of -qubits and performs a unitary transformation , such that for one state measured by bob , . tracing out bob from this state yields the maximally mixed state , so that in this example the whole -qubits space is spanned .when alice and bob use qubits , in theoretical qkd , eve can attack the protocol in many ways . in her simplest attack , the so - called `` measure - resend attack '' ,eve performs any measurement ( of her choice ) on the qubit , and accordingly decides what to send to bob .a generalization of that attack is the `` translucent attack '' , in which eve attaches an ancilla , in an initial state ( and in any dimension she likes ) , and entangles the ancilla and alice s qubit , using where is a basis for alice s qubit , and eve s states after the unitary transformation are . using this transformationone can define the most general `` individual - particle attack '' , and also the most general `` collective attack '' . in the individual - particle attack evedelays the measurement of her ancilla till after learning anything she can about the qubit ( e.g. , its basis ) , while in the collective attack eve delays her measurements further till she learns anything she can about _ all _ the qubits ( e.g. 
, how the final key is generated from the obtained string of shared bits ) , so she attacks directly the _final key_. the most general attack that eve could perform on the channel is to attack all those qubits transmitted from alice to bob , using _one _ large ancilla .this is the `` joint attack '' .security , in case eve tries to learn a maximal information on the final key , was proven in via various methods .the attack s unitary transformation is written as before , but with a binary string of bits , and so is , . by replacing the qubit space by alice s realistic `` qubit '' in the space , and by defining eve s attack on the entire space of the protocol , we can generalize each of the known attacks on theoretical qkd to a `` quantum space attack '' ( qsa ) .we can easily define now eve s most general _ individual - transmission qsa _ on a realistic `` qubit '' , which generalizes the individual - particle attack earlier described .eve prepares an ancilla in a state , and attaches it to alice s state , but actually her ancilla is now attached to the entire qsop .eve performs a unitary transformation on the joint state .if eve s attack is only on , we write the resulting transformation on any basis state of , , as , where the sum is over the dimension of .the photon - number - splitting attack ( see section [ sec : pnsasqsa ] ) is an example for such an attack .the most general individual - transmission qsa is based on a translucent qsa on the qsop , where the sum is over the dimension of .the subsystem in is then sent to bob while the rest ( the subsystem ) is kept by eve .we write the transformation on any basis state of , , but note that it is sufficient to define the transformation on the different states in , namely for all states of the form , since other states of the qsop are never sent by alice ( any other additional subsystem of the qsop is necessarily at a known state when it enters eve s transformation ) .attacks that are more general than the _ individual transmission qsa _ , the _ collective qsa _ and the _ joint qsa _ , can now be defined accordingly . in the most general collective qsa ,eve performs the above translucent qsa on many ( say , ) realistic `` qubits '' ( potentially a different attack on each one , if she likes ) , waits till she gets all data regarding the generation of the final key , and she then measures all the ancillas together , to obtain the optimal information on the final key or the final secret .the most general attack that eve could perform on the channel is to attack all those realistic `` qubits '' transmitted from alice to bob , using _one _ large ancilla .this is the `` joint qsa '' . the attack s unitary transformation is written as before , but with a string of digits rather than a single digit ( digits of the relevant dimension of ) , and so is , eve measures the ancilla , after learning all classical information , to obtain the optimal information on the final key or the final secret . as before ,it is sufficient to define the transformation on the different input states from .we would like to emphasize several issues : 1. when analyzing specific attacks , or when trying to obtain a limited security result , it is always legitimate to restrict the analysis to the relevant ( smaller ) subspace of the qsop , for simplicity , e.g. , to , or to , etc .2. 
any bi - directional protocol will have a much more complicated qsop , thus it might be extremely difficult to analyze any type of qsa ( even the simplest ones ) on such protocols .this remark is especially important since bi - directional protocols play a very important role in qkd , since they appear in many interesting protocols such as the plug - and - play , the ping - pong , and the classical bob protocols .specifically they provided ( via the plug - and - play ) the only commerical qkd so far .3. it is well known that the collective or joint attack is only finished after eve gets all quantum and classical information , since she delays her measurements till then ; if she expects more information , she better wait and attack the final secret rather than the final key ; it is important to notice that if the key will be used to encode quantum information ( say , qubits ) then the quantum - space of the protocol will require a modification , potentially a major one ; it is interesting to study if this new notion of qsop has an influence on analysis of such usage of the key as done ( for the ideal qubits ) in .since most of the practical qkd experiments and products are done using photons , in this section we demonstrate our qsop and qsa definitions and methods via photons .our analysis uses the fock - space notations for describing photonic quantum spaces . for clarity , states written using the fock notation are denoted with the superscript ` f ' , e.g. , , and .a photon can not be treated as a quantum system in a straightforward way .for instance , unlike dust particles or grains of sand , photons are indistinguishable particles , meaning that when a couple of photons are interacting , one can not define the evolution of the specific particle , but rather describe the whole system .let us examine a cavity , for instance .it can contain photons of specific wavelengthes ( , , etc . ) and the energy of a photon of wavelength is directly proportional to . while one can not distinguish between photons of the same wavelength, one can distinguish between photons of different wavelengths .therefore , it is convenient to define distinguishable `` photonic modes '' , such that each wavelength corresponds to a specific mode ( so a mode inside a cavity can be denoted by its wavelength ) , and then count the number of photons in each mode . if a single photon in a specific mode carries some unit of energy , then such photons of the same wavelength carry times that energy .if the cavity is at its ground ( minimal ) energy level , we say that there are `` no photons '' in the cavity and denote the state as vacuum state . the convention is to denote only those modes that are potentially populated , so if we can find photons in one mode , and no photons in any other mode , we write , . if two modes are populated by and photons , and all other modes are surely empty , we write ( or ) .when there is no danger of confusion , and the number of photons per mode is small ( smaller than ten ) , we just write for photons in one mode and in the other .in addition to its wavelength , a photon also has a property called polarization , and a basis for that property is , for instance , the horizontal and vertical polarizations mentioned earlier .thus , two modes ( in a cavity ) can also have the same energy , but different polarizations . outsidea cavity photons travel with the speed of light , say from alice to bob , yet modes can still be described , e.g. 
, by using `` pulses '' of light .the modes can then be distinguished by different directions of the light beams ( or by different paths ) , or by the timing of pulses ( these modes are denoted by non - overlapping time - bins ) , or by orthogonal polarizations . a proper description of a photonic qubit is commonly based on using two modes ` ' and ` ' which are populated by exactly a single photon , namely , a photon in mode , so the state is , or a photon in mode , so the state is .however , a quantum space that consists of a single given photonic mode ` ' is not restricted to a single photon , and can be populated by any number of photons . a basis for this space is with , so that the quantum space is infinitely large , .theoretically , a general state in this space is can be written as the superposition , with , .similarly , a quantum space that consists of two photonic modes has the basis states , for and a general state is of the form with , .this quantum space is described as a tensor product of two `` systems '' . using _exactly _ two photons in two different ( and orthogonal ) modes assists in clarifying the difference between photons and dust particles ( or grains of sand ) : due to the indistiguishability of photons , only 3 different states can exist ( instead of 4 ) : , and .the last state has one photon in mode ` ' and another photon in ` ' , however , exchanging the photons is meaningless since one can never tell one photon from another .a realistic model of a photon source ( in a specific mode ) is of a coherent pulse ( a poissonian distribution ) including terms that describe the possibility of emitting any number of photons . as the number of photons increases beyond some number , the probability decreases , so it is common to neglect the higher orders . in qkd , experimentalists commonly use a `` weak '' coherent state ( such that ) and then terms with can usually be neglected . there is also a lot of research about sources that emit ( to a good approximation ) single photons , and then , again , terms with can usually be neglected .while the theoretical qubit lives in , a realistic view defines the space actually used by alice to be much larger .the possibility to emit empty pulses increases alice s realistic space into , due to the vacuum state . when alice sends a qubit using two modes , using a weak coherent state ( or a `` single - photon '' source ) , her realistic space , , is embedded in .terms containing more than two photons can be neglected , so these are excluded from alice s space .the appropriate realistic quantum space of alice , , is now a quhexit : the six - dimensional space spanned by , , , , , . the pns attack demonstrated in section [ sec : pnsasqsa ] , is based on attacking this 6 dimensional space .note also that terms with more than two photons still appear in , and thus could potentially appear in the qsop ( and then used by eve ) . at times ,alice s realistic space is even larger , due to extra modes that are sent through the channel , and are not meant to be a part of the protocol .these extra modes might severely compromise the security of the protocol , since they might carry some vital information about the protocol .a specific qsa based on that flaw is the `` tagging attack '' ( section [ sec : tagasqsa ] ) . 
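the practical weight of the multi - photon terms is easiest to see from the photon - number statistics of a weak coherent pulse ; the short python sketch below lists the six basis states of the quhexit just described and prints , for a few illustrative mean photon numbers , the probability of an empty pulse , of a single photon , and of two or more photons :

from itertools import product
from math import exp, factorial

# basis of alice's realistic space: two polarization modes, at most two photons in total
basis = [(nh, nv) for nh, nv in product(range(3), repeat=2) if nh + nv <= 2]
print('quhexit basis |n_h, n_v>^f :', basis, ' dimension =', len(basis))

# poissonian photon-number statistics of a coherent pulse with mean photon number mu
for mu in (0.1, 0.2, 0.5):
    p = [exp(-mu) * mu ** n / factorial(n) for n in range(2)]
    p_multi = 1.0 - sum(p)
    print(f'mu = {mu:.1f}: P(0) = {p[0]:.3f}  P(1) = {p[1]:.3f}  P(>=2) = {p_multi:.3f}  '
          f'P(>=2 | non-empty) = {p_multi / (1.0 - p[0]):.3f}')

the last column is the fraction of non - empty pulses that carry two or more photons , which is precisely the resource exploited by the pns attack described below .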
note that even if alice uses exactly two modes , the quantum space where is embedded , certainly contains other modes as well .let us discuss bob s measurement of photonic spaces .there are ( mainly ) two types of detectors that can be used .the common detector can not distinguish a single photon from more than one photon ( these kind of detectors are known as _ threshold detectors _ ) .the hilbert space where bob s measurement is defined is infinite by some large number . ] , since a click in the detector tells bob that the number of photons occupying the mode is `` not zero '' i.e.the detector clicks when is detected , for .this means that bob measures the state , or he measures , , but then `` forgets '' how many photons were detected .bob might severely compromise the security , since he inevitably interprets a measurement of a state containing multiple photons as the `` legal '' state that contains only a single photon .an attack based on a similar limitation is the `` trojan - pony '' attack described below , in section [ subsec : trojan ] . in order to avoid false interpretations of the photon number reaching the detector, bob could use an enhanced type of detector known as the _ photon - number resolving detector _ or a _ counter _ ( which is still under development ) .this device distinguishes a single photon from photons , hence any eavesdropping attempt that generates multi - photon states can potentially be noticed by bob .a much enhanced security can be achieved now , although the qsop is infinite also in this case , due to identifying correctly the legitimate state , from various legitimate states .the number of modes in the qsop depends on bob s detectors as well .bob commonly increases the number of measured modes by `` opening '' his detector for more time - bin modes or more frequency modes .for instance , suppose bob is using a detector whose detection time - window is quite larger than the width of the pulse used in the protocol , since he does not know when exactly alice s pulse might arrive .the result is an extension of the space used by alice , so that the qsop includes the subspace of that contains all these measured modes .when a single detector is used to measure more than one mode _ without distinguishing them _ , the impact on the security might be severe , see the `` fake state '' attack ( section [ sec : fakeasqsa ] ) .in addition to the known attacks described in the following subsection , a new qsa is analyzed in section [ sec : interferobb84 ] , where we examine the more general case of qsa , in which bob adds an ancilla during the process .all known attacks can be considered as special cases of the quantum - space attack . in this sectionwe show a description of several such attacks using qsa terms . for each and every attack we briefly describe the specific protocol used , the quantum space of the protocol , and a realization of the attack as a qsa .* the protocol . 
* consider a bb84 protocol , where alice uses a `` weak pulse '' laser to send photons in two modes corresponding to the vertical and horizontal polarizations when using the basis ( the diagonal polarizations then relate to using the basis ) .bob uses a device called a pockel cell to rotate the polarization ( by ) for measuring the basis , or performs no rotation if measuring the basis .the measurement of the state is then done using two detectors and a `` polarization beam splitter '' that passes the first mode to one detector and the second mode to the other detector ( for a survey of polarization - based qkd experiments , see ) .* the quantum space of the protocol . * every pulse sent by alice is in one of four states , each in a superposition of the 6 orthogonal states , , ,, , , where the space used by alice is .bob uses two setups , for the basis , and for the basis , which is more complex and described in appendix [ app : polu ] .the detectors used by bob can not distinguish between modes having single photon and multiple photons .each one of his two detectors measures the basis elements for ( of the specific mode directed to that specific detector ) , where bob interprets the states with as measuring the state of the same mode .bob s measured space is thus infinite and spanned by the states for .the qsop is equal to ( ) since performing does not change the dimensionality of the spanned space ( in both setups ) . *the attack . * eve measures the number of photons in the pulse , using non - demolition measurement . if she finds that the number of photons is , she blocks the pulse and generates a loss . in the caseshe finds that the pulse consists of 2 photons , she splits one photon out of the pulse and sends it to bob , keeping the other photon until the bases are revealed , thus getting full information of the key - bit .eve sends the eavesdropped qubits to bob via a lossless channel so that bob will not notice the enhanced loss - rate . as is common in experimental qkd , bob is willing to accept a high loss - rate ( he does not count losses as errors ) , since most of alice s pulses are empty .see the precise mathematical description of this attack in appendix [ app : mathpns ] .* the protocol . * consider a bb84 qkd protocol in which alice sends an enlarged state rather than a qubit .this state contains , besides the information qubit , a _ tag _ giving eve some information about the bit .the tag can , for example , tell eve the basis being used by alice . for a potentially realistic example , let the tag be an additional qutrit indicating if alice used the -basis , or the -basis , or whether the basis is _ unknown _ : whenever alice switches basis , a single photon comes out of her lab prior to the qubit - carrying pulse , telling the basis , say using the states and , and when there is no change of basis , what comes out prior to the qubit is just the vacuum . * the quantum space of the protocol . * in this example, alice is using the space .bob , unaware of the enlarged space used by alice , expects and receives only the subspace .we assume that bob ideally measures this space with a single setup , therefore .since bob s setup does not change the space , as well . however , the tag is of a much use to eve , and indeed the qsop following definition [ def : hp ] , defined to be . 
* the attack .* eve uses the tag in order to retrieve information about the qubit without inducing error ( e.g.via cloning the qubit in the proper basis ) .the attack is then an intercept - resend qsa .we mention that this attack is very similar to a side - channel cryptanalysis of classic cryptosystems .12 pt * a short summery . * it can be seen that the pns attack described above is actually a special case of the tagging attack , where the _ tag _ in that case is in fact another copy of the transmitted qubit .this copy is kept by eve until the bases are revealed , then it can be measured so the the key - bit value is exposed with certainty .both those qsa attacks are based on the fact that alice ( realistic ) space is larger than the theoretical one .although in the pns example , the qsop is further extended due to bob s measurement , the attack is not based on that extension but on the fact that is larger than . in the following attacksbob s measurements cause the enlargement of the qsop , allowing eve to exploit the larger qsop for her attack . in trojan - pony attackseve modifies the state sent to bob in a way that gives her information .in contrast to a `` trojan - horse '' that goes in - and - out of bob s lab , the `` pony '' only goes in , therefore , it is not considered an attack on the lab , but only on the channel .we present here an interesting example . *the protocol .* assume a polarization - encoded bb84 protocol , in which alice is ideal , namely , sending perfect qubits ( ) .however , bob uses realistic threshold detectors that suffer from losses and dark counts , and that can not distinguish between one photon and photons for . in order to be able to `` prove '' security , for a longer distance of transmission bob wants to keep the error - rate low although the increase of dark counts impact with the distance .therefore , bob assumes that eve has no control over dark counts , and whenever both detectors click , alice and bob agree to consider it as _ a loss _ since it is outside of eve s control ( i.e. the qsop is falsely considered to be ) .namely , they assume that _ an error _ occurs only when bob measures in the right basis , and only one detector clicks , ( which is the detector corresponding to the wrong bit - value ) .* the quantum space of the protocol . * same as in section [ sec : pnsasqsa ] , bob s measured spaces , , the reversed space as well as the qsop , are merely the spaces describing two modes ( with up to photons ) , .bob s detectors can not distinguish between receiving a single - photon pulse from a multi - photon pulse , so his measurement is properly described as a projection of the received state onto the space containing followed by `` forgetting '' the exact result , and keeping only one of three results : `` detector-1 clicks '' , `` detector-2 clicks '' , and else it is , a `` loss '' . in formal , _ generalized - measurements _ language ( called povm , see ) these three possible results are written as : , , , and their sum is the identity matrix . * the attack . * eve s attack is the following : ( a ) randomly choose a basis ( b ) measure the arriving qubit in that specific chosen basis ( c ) send bob -photons identical to the measured qubit , where . 
obviously , when eve chooses the same basis as alice and bob then bob measuresthe exact value sent by alice , and eve gets full information .otherwise , both of his detectors click , implying a `` loss '' , except for a negligible probability , , thus eve induces no errors .the main observation of this measure - resend qsa is that treating a count of more than a single photon as a loss , rather than as an error , is usually not justified .a second conclusion is that letting bob use counters instead of threshold detectors ( to distinguish a single photon from multiple photons ) , together with treating any count of more than one photon as an error , could be vital for proving security against qsa .the price is that dark counts put severe restrictions on the distance to which communication can still be considered secure , as suggested already by . *the protocol .* in this example , we examine a polarization encoded bb84 protocol , and an ideal alice ( ) .this time bob s detectors are imperfect so that their detection windows do not fully overlap , meaning that there exist times in which one detector is blocked ( or it has a low efficiency ) , while the other detector is still regularly active .thus , if eve can control the precise timing of the pulse , she can control whether the photon will be detected or lost .the setup is built four detectors and a rotating mirror ( since bob does not want to spend money on a pockel cell ( polarization rotator ) , he actually uses 2 fixed different setups ) .using the rotating mirror bob sends the photon into a detection setup for basis or a detection setup for basis .suppose the two detection setups use slightly different detectors , or slightly different delay lines , or slightly different shutters , and eve is aware of this ( or had learnt it during her past attacks on the system ) . for simplicity, we model the non - overlapping detection windows , as additional two modes , one slightly prior to alice s intended mode ( the pulse ) , and one right after it . * the quantum space of the protocol . *the original qubit is sent in a specific time - bin ( namely , ) .the setup is a set of two detectors and a polarized beam splitter , separating the horizontal and the vertical modes to the detectors , where separate the diagonal modes into a set of two ( different ) detectors .let the detectors for one basis , say , be able to measure a pulse arriving at or , while the detectors for the other basis ( ) measure pulses arriving at or . for simplicity ,we degenerate the space to contain one or less photons . ] , so that is , i.e. two possible time - bins consisting each of two ( polarization ) modes of one or less photons .the measured space of the -setup has two possible time - bins and two possible polarization modes , thus as well , however , the two time - bins for this setup are and .following definition [ def : hb-1 ] we get that that the reversed space contains three time - bins ( , and ) with two polarization modes in each , therefore , under the single - photon assumption .the qsop , following definition [ def : hp ] equals since . * the attack . *eve exploit the larger space by sending `` fake '' states using the external time bins ( and ) .eve randomly chooses a basis , measures the qubit sent by alice , and sends bob the same polarization state she found , but at if she have used the basis , or at if she have used the basis . 
since no ancilla is kept by eve , this is an intercept - resend qsa . bob will get the same result as eve if he uses the same basis , or _ a loss _ otherwise . the mathematical description of the attack is as follows : eve can generate superpositions of states of the form , where the index denotes whether the mode has vertical or horizontal polarization , and its subscript denotes the time - bin of the mode . eve s measure - resend attack is described as measuring alice s qubit in the basis , creating a new copy of the measured qubit , and performing the transformation ; or as performing a measurement in the basis , and performing the transformation ; on the generated copy . * a short summary . * we see that eve can `` force '' a desired value ( or a loss ) on bob , thus gaining all the information while inducing no errors ( but increasing the loss rate ) . bob can use a shutter to block the irrelevant time - bins , but such a shutter could generate a similar problem in the frequency domain . this attack is actually a special case of the trojan - pony attack , in which the imperfections of bob s detectors allow eve to send states that will be un - noticed unless the measured basis equals eve s chosen basis . in order to demonstrate the power of qsa , and to see its advantages , this section presents a partial security analysis of some interferometric bb84 and 6-state schemes . interferometric schemes are more common than any other type of implementation in qkd experiments and products . in this section we define the specific equipment used by bob , and we formulate the setup transformations and bob s measurements . we then find the spaces and the qsop . finally , we demonstrate a novel attack which is found to be very successful against a specific variant of the bb84 interferometric scheme ; this specific qsa , which we call the `` reversed - space attack '' , is designed using the tools developed in sections [ sec : qsop ] and [ sec : qsa ] . we begin with a description of interferometric ( bb84 and six - state ) schemes , which is based on sending phase - encoded qubits arriving in two time - separated modes . alice encodes her qubit using two time - bins and , where a photon in the first mode , , represents the state , and a photon in the other mode , , represents . the bb84 protocol of ( and many others ) uses the and bases , meaning that alice ( ideally ) sends one of the following four states : ; ; ; and . bob uses an interferometer built from two beam splitters with one short path and one long path ( figure [ fig : lab - xy ] ) . a pulse of light travels through the short arm of the interferometer in seconds , and through the long arm in seconds , where is also _ precisely _ the time separation between the two arriving modes of the qubit . a controlled phase shifter is placed in the long arm of the interferometer . it performs a phase shift by a given phase , i.e. . the phase shifter is set to ( ) when bob measures the ( ) basis . [ figure [ fig : lab - xy ] : bob s interferometric setup for measuring the and bases . ( a ) alice sends a qubit ; ( b ) vacuum states are added in the interferometer ; ( c ) , ( d ) beam - splitters ; ( e ) phase shifter .
] each beam splitter interferes two input arms ( modes 1 , 2 ) into two output arms ( modes 3 , 4 ) , in the following way ( for a single photon ) : , and .the photon is transmitted / reflected with a probability of ; the transmitted part keeps the same phase as the incoming photon , while the reflected part gets an extra phase of , if it carries a single photon .when a single mode , carrying at least a single photon , enters a beam splitter from one arm , and nothing enters the other input arm , we must consider the other entry to be an additional mode ( an ancilla ) in a vacuum state .when a single mode ( carrying one or more photons ) enters the interferometer at time , see figure [ fig : lab - xy ] , it yields two modes at time due to traveling through the short arm , and two modes at time due to traveling through the long arm .those four output modes are : times , in the ` ' ( straight ) arm of the interferometer , and times , in the ` ' ( down ) arm .a basis state in this fock space is then . in the case of having that single mode carrying exactly a single photon , the transformation , which requires three additional empty ancillas .] , is . note that a pulse which is sent at a different time ( say , ) results in the same output state , but with the appropriate delays , i.e. where the resulting state is defined in the fock space whose basis states are .let us now examine any superposition of two modes ( and ) that enter the interferometer one after the other , with exactly the same time difference as the difference lengths of the arms .the state evolves in the following way ( see appendix [ app : modesevo ] ) : describing the evolution for any possible bb84 state sent by alice ( , , , determined by the value of , , , respectively , when ) . as a result of this precise timing, these two modes are transformed into a superposition of 6 possible modes ( and not 8 modes ) at the outputs , due to interference at the second beam splitter .only four vacuum - states ancillas ( and not six ) are required for that process .the resulting 6 modes are , , in the ` ' arm and in the ` ' arm of the interferometer .denote this fock space as , with basis elements .the measurement is performed as follows : bob opens his detectors at time in both output arms of the interferometer .a click in the `` down '' direction means measuring the bit - value , while a click in the `` straight '' direction means .the other modes are commonly considered as a loss ( they are not measured ) since they give an inconclusive result regarding the original qubit .we refer this bb84 variant as `` -bb84 '' .one might want to use the basis in his qkd protocol ( using , and or ) , for instance , in order to avoid the need for a controlled phase shifter or for another equipment - related reason , or in order to perform `` qkd with classical bob '' .a potentially more important reason might be to perform the 6-state qkd protocol , due to its improved immunity against errors ( 27.4% errors versus only 20% in bb84 ) .a possible and easy to implement variant for realizing a measurement in the basis is the following : bob uses the setup ( i.e. he sets to ) , and opens his detectors at times and , corresponding to the bit - values and respectively ( see equation ( [ eqn : interf_evu ] ) ) . unfortunately, technological limitations , e.g. 
of telecommunication wavelength ( ir ) detectors , might make it difficult for bob to open his detectors for more than a single detection window per pulse .bob could perform a measurement of _ just _ the states , opening the arm detector at time ( to measure ) and the arm detector at time ( to measure ) .we refer this variant as `` -six - state '' .we assume alice to be almost ideal , having the realistic space ( a qubit or a vacuum state ) , using two time - bin modes .as we have seen , four ancillary modes in vacuum states are added to each transmission .therefore , the interferometer setups and transform the 2-mode states of into a subspace that resides in the 6 modes space . for simplicity, we assume that eve does not generate -photon states , with , so we can ignore high photon numbers in the space , this assumption is not legitimate when proving unconditional security of a protocol . ] .therefore , we redefine , the space spanned by the vacuum , and the six single - photon terms in each of the above modes . using the and bases , bob measures only time - bin , so his actual measured spaces consist of two modes : time - bin in the ` ' arm and the ` ' arm . in that case , the measured spaces are , spanned by the states , , .when bob uses the basis , he measures two different modes , so is spanned by the states , , .let us define the appropriate space for the 6-state protocol , according to definition [ def : hb-1+anc ] .the space is spanned by the states given by performing on , , , as well as the states given by performing on , , .interestingly , once applying , the resulting states are embedded in an 8-mode space defined by the two incoming arms of the interferometer , ` ' ( from alice ) and ` ' ( from bob ) , at time bins , , , and .the basis states of are listed in appendix [ app : interf_hb-1_basis ] . following definition [ def : hp+anc ] , the qsop of this implementation for the 6-state protocol , is the subsystem of which is _ controlled _ by eve .it is spanned by the 8-mode states spanning after tracing out bob .the space that contains those `` traced - out '' states has only four modes that are controlled by eve , specifically , input ` ' of the interferometer at times to , having a basis state of the form . given the single - photon restriction , we get , namely , the space spanned by the vacuum state , and a single photon in each of the four modes , i.e. , , , , . this same result is obtained also if bob measures all the six modes in .bob might want to see how the basis states of the 4-mode qsop , , evolve through the interferometer in order to place detectors on the resulting modes , which will be used to identify eve s attack .it is interesting to note , that those basis states result in _ 10 different non - empty modes ( ! ) _ .if bob measures all these modes , he _ increases _ the qsop , and maybe allows eve to attack a larger space , and so on and so forth .therefore , in order to perform a security analysis , one must first fix the scheme and only then assess the qsop . otherwise , a `` ping - pong '' effect might increase the spaces dimensions to infinity . a similar ,yet reversed logic , hints that it could actually be better for bob , in terms of the simplicity of the analysis for the `` -bb84 '' scheme , to measure _ just _ the two modes at ( i.e. 
the space spanned by ) , thus reducing the qsop to a 2-mode space , , see appendix [ app : qsop2modesintf ] . although eve is allowed to attack a larger space than this two - mode space , she has no advantage in doing so : pulses that enter the interferometer on different modes ( i.e. other time - bins than and ) never interfere with the output pulses of time - bin measured by bob . therefore , states occupying different modes cannot be distinguished from the states in which those modes are empty . consider a bb84 variant in which bob uses only the and the bases , using a single interferometer , where the -basis measurement is performed according to the description in the last few lines of section [ sec : xy - setup ] . we refer to this variant as `` -bb84 '' . the qsop of this scheme is the space described above for the `` -six - state '' protocol . the following attack , which we call `` the reversed - space attack '' , allows eve to acquire information about the transmitted qubits without inducing _ any _ errors . the states denote eve 's ancilla , which is not necessarily a photonic system . the states and are the regular states sent by alice , where we added the relevant extension of in . when is sent by alice , the attacked state reaches bob 's interferometer , and interferes in a way such that it can never reach bob 's detector at time , i.e. . although the attacked state reaches modes that alice 's original state can never reach , bob never measures those modes , and cannot notice the attack . a similar argument applies when alice sends . as for the basis , etc . , this attack satisfies : the first element in the sum results in the desired interference in bob 's lab , while the second is not measured by bob 's detectors at time . by letting eve 's probes and be orthogonal states , eve gets a lot of information while inducing no errors at all . yet , we find that eve increases the loss rate with this attack to 87.5% , but a very high loss rate is anyhow expected by bob ( as explained in the analysis of the pns and the tagging attacks ) .
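since the mode bookkeeping above is easy to lose track of , the following short python sketch propagates single - photon time - bin amplitudes through the delay - line interferometer described in this section . it assumes 50/50 beam splitters in which the reflected amplitude picks up a factor i , and a long arm delayed by exactly one time - bin and carrying the phase shift ; these conventions , the mode labels and the function names are illustrative assumptions rather than the exact formulation used in the original analysis .

```python
import numpy as np

def interferometer(input_amps, phi):
    """propagate single-photon time-bin amplitudes through a delay-line
    interferometer: two 50/50 beam splitters, with the long arm delayed
    by one time-bin and carrying a phase shift phi.

    input_amps : dict {time_bin: amplitude} on the input arm.
    returns    : dict {(output_arm, time_bin): amplitude}, with output
                 arms '+' (straight) and '-' (down).
    convention (an assumption): transmission keeps the phase,
    reflection multiplies the amplitude by 1j.
    """
    t_c, r_c = 1 / np.sqrt(2), 1j / np.sqrt(2)
    out = {}

    def add(key, amp):
        out[key] = out.get(key, 0) + amp

    for t, a in input_amps.items():
        short = t_c * a                      # transmitted into the short arm
        long_ = r_c * a * np.exp(1j * phi)   # reflected into the long (delayed) arm
        # second beam splitter: the short arm arrives at t, the long arm at t + 1
        add(('+', t), t_c * short)
        add(('-', t), r_c * short)
        add(('-', t + 1), t_c * long_)
        add(('+', t + 1), r_c * long_)
    return out

# a qubit spread over time-bins 0 and 1, e.g. the state (|0> + |1>)/sqrt(2)
alice = {0: 1 / np.sqrt(2), 1: 1 / np.sqrt(2)}
for phi in (0.0, np.pi):
    probs = {k: round(abs(v) ** 2, 3)
             for k, v in interferometer(alice, phi).items() if abs(v) > 1e-12}
    print("phi =", phi, "->", probs)
```

for a qubit spread over two time - bins , the sketch reproduces the six output modes mentioned above : the four satellite modes carry basis - independent probabilities , while the interference ( and hence the basis - dependent click statistics ) is confined to the middle time - bin that bob actually measures .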
in conclusion, this attack demonstrates the risk of using various setups without giving full security analysis for the _ specific _ setup .we are not familiar with any other security analysis that takes into account the enlarged space generated by the inverse - transformation of bob s space .in this paper we have defined the qsa , a novel attack that generalizes all currently known attacks on the channel .this new attack brings a new method for performing security analysis of protocols .the attack is based on a realistic view of the quantum spaces involved , and in particular , the spaces that become larger than the theoretical ones , due to practical considerations .although this paper is explicitly focused on the case of uni - directional implementations of a few schemes , its main observations and methods apply to any uni - directional qkd protocol , to bi - directional qkd protocols , and maybe also to any realistic quantum cryptography scheme beyond qkd .the main conclusion of this research is that the quantum space which is attacked by eve can be assessed , given a proper understanding of the experimental limitations .this assessment requires a novel cryptanalysis formalism analyzing the states generated in alice s lab , as well as the states that are to be measured by bob ( assessing them as if they go backwards in time from bob s lab ) ; this type of analysis resembles the two - time formalism in quantum theory .open problems for further theoretical research include : 1. generalization of the qsa to other conventional protocols ( such as the two - state protocol , epr - based protocols , d - level protocols , etc . ); such a generalization should be rather straightforward .2. proving unconditional security ( or more limited security results such as `` robustness '' ) against various qsas .this is especially important for the interferometric setup , where the qsop is much larger than alice s six - dimensional space ( the one spanned by ) .3. describing the qsa for more complex protocols , such as two - way protocols in which the quantum communication is bi - directional , and protocols which use a larger set of states such as data - rejected protocols or decoy - state protocols . 4. extend the analysis and results to composable qkd .5(a). in some cases , if bob uses `` counters '' and treats various measurement outcomes as errors , the effective qsop relevant for proving security is potentially _ much smaller _ than the qsop defined here .5(b). adding counters on more modes increases the qsop defined here , but might allow analysis of a smaller `` attack s qsop '' , if those counters are used to identify eve s attack .more generally , the connection between the way bob interprets his measured outcomes , and the `` attack s qsop '' is yet to be further analyzed .[ [ acknowledgments . ] ] * acknowledgments . * + + + + + + + + + + + + + + + + + + we thank michel boyer , dan kenigsberg and hoi - kwong lo for helpful remarks . 10 d. z. albert , y. aharonov , and s. damato .curious new statistical prediction of quantum mechanics ., 54(1):57 , jan 1985 .s. m. barnett , b. huttner , and s. j. d. phoenix . . ,40:25012513 , dec .1993 . h. bechmann - pasquinucci and n. gisin . ., 59(6):42384248 , jun . 1999 .m. ben - or , m. horodecki , d. w. leung , d. mayers , and j. oppenheim .the universal composable security of quantum key distribution . in _tcc 2005 : second theory of cryptography conference _ , pages 386406 , jan . 2005 . c. h. bennett and g. brassard . ., pages 175179 , dec . 1984 .e. 
biham , m. boyer , p. o. boykin , t. mor , and v. p. roychowdhury .a proof of the security of quantum key distribution . in _ proceedings of the 32nd annual acm symposium on theory of computing ( stoc ) _ , pages 715724 , new york , 2000 .acm press .e. biham , m. boyer , p. o. boykin , t. mor , and v. p. roychowdhury .a proof of the security of quantum key distribution ., 19(4):381439 , 2006 .e. biham , m. boyer , g. brassard , j. van de graaf , and t. mor . . , 34:372388 ,e. biham and t. mor .security of quantum cryptography against collective attacks ., 78(11):22562259 , mar 1997 .k. bostrm and t. felbinger .deterministic secure direct communication using entanglement ., 89(18):187902 , oct 2002 .m. boyer , d. kenigsberg , and t. mor . quantum key distribution with classical bob .arxiv quantum physics e - prints , 2007 .quant - ph/0703107 .g. brassard , n. ltkenhaus , t. mor , and b. c. sanders . .eurocrypt 2000 : international conference on the theory and application of cryptographic techniques _, lncs 1807:289299 , 2000 g. brassard , n. ltkenhaus , t. mor , and b. c. sanders . . , 85:13301333 ,k. j. blow , r. loudon , s. phoenix and t. j. shepherd ., 42(7):41024114 , oct . 1990 . d. bru . . , 81:30183021 ,h. f. chau . ., 66(6):060302 , dec ._ for different ( slightly smaller ) numbers , see ._ m. dusek , n. lutkenhaus , and m. hendrych . .arxiv quantum physics e - prints , jan .quant - ph/0601207 . c. elliott , d. pearson , and g. troxel .quantum cryptography in practice . in _sigcomm 03 : proceedings of the 2003 conference on applications , technologies , architectures , and protocols for computer communications _ , pages 227238 , new york , ny , usa , 2003 .acm press .a. ekert , b. huttner , g. palma and a. peres .eavesdropping on quantum - cryptographical systems ., 50(2):10471056 , aug .1994 . c. fuchs , n. gisin , r. griffiths , c.s .niu and a. peres .optimal eavesdropping in quantum cryptography .i. information bound and optimal strategy ., 56(2):11631172 , aug . 1997 .n. gisin , s. fasel , b. kraus , h. zbinden , and g. ribordy . ., 73(2):022320+ , feb .n. gisin , b. kraus , and r. renner .lower and upper bounds on the secret key rate for qkd protocols using one way classical communication ., 95:080501 , 2005 .n. gisin , g. ribordy , w. tittel , and h. zbinden . . ,74:145195 , jan .d. gottesman , h .- k .lo , n. ltkenhaus , and j. preskill .security of quantum key distribution with imperfect devices ., 5:325360 , 2004 .quantum key distribution with high loss : toward global secure communication ., 91(5):057901 , aug .hwang , i .-lim , and j .- w .no - clicking event in quantum key distribution .arxiv quantum physics e - prints , 2004 .quant - ph/0412206 .h. inamori , n. ltkenhaus , and d. mayers . ., 41:599627 , mar .lo and h. f. chau . unconditional security of quantum key distribution over arbitrarily long distances . , 283:20502056 , 1999 ., 1(2):8194 , aug .lo , x. ma , and k. chen . . ,94(23):230504+ , jun . 2005 .v. makarov , a. anisimov , and j. skaar .effects of detector efficiency mismatch on security of quantum cryptosystems . , 74:022313 , 2006 . c. marand and p. townsend , 20:16951697 , aug .a. muller , t. herzog , b. huttner , w. tittel , h. zbinden and n. gisin . ., 70:793395 , feb 1997 . v. makarov and d. r. hjelme . . , 52:691705 ,may 2005 .d. mayers .unconditional security in quantum cryptography ., 48(3):351406 , 2001 , _ based on . a. niederberger , v. scarani , and n. 
gisin .photon - number - splitting versus cloning attacks in practical implementations of the bennett - brassard 1984 protocol for quantum cryptography ., 71:042316 , 2005 .m. a. nielsen and i. l. chuang . .cambridge university press , cambridge , uk , 2000 .a. peres . ., 128:19 , mar 1988 .a. peres . .kluwer , dordrecht , 1993 .v. scarani , a. acn , g. ridbory and n. gisin .quantum cryptography robust against photon number splitting attacks for weak laser pulse implementations ., 92(5):057901+ , feb .m. scully and m. s. zubairy . .cambridge university press , cambridge , united kingdom , 1997 .p. w. shor and j. preskill .simple proof of security of the bb84 quantum key distribution protocol ., 85:441444 , 2000 , _ based on . p. d. townsend . . , 30:809811l. vaidman , y. aharonov , and d. z. albert . how to ascertain the values of , , and of a spin-1/2 particle . , 58(14):13851387 , apr .x. b. wang . ., 94:230503 , jun . 2005 ., pages 6775 , las vegas , navada , united states , 1995 .z. l. yuan , a. w. sharpe , and a. j. shields . ., 90:1118+ , jan . 2007 .the pns attack can be realized using ( an infinite set of ) polarization independent beams splitters .eve uses a beam splitter to split photons from alice s state . using a non - demolition measurement eve measures the number of photons in one output of the beam splitter , and repeat the splitting until she acquires exactly one photon .formally is defined : whenever alice sends a pulse with two photons of the same polarization , eve and bob end up , each , with having a single photon of the original polarization .eve s pns attack for a pulse of 2 photons , gives eve full information while inducing no errors .according to its definition it is trivial to verify the attack for the horizontal and vertical polarizations and ( where means photons having polarization ) . using the standard creation and annihilation operators ( and ) ,we can write the state of two photons in the diagonal polarization ( basis ) : , similarly . which completes the proof .a polarization based qkd protocol makes a use of a pockel cell ( ) , rotating the polarization of the photons going through it . for a single photon ,its action is trivial , for a state that contains multiple photons , the transformation is not intuitive , and most simply defined using the creation and annihilation operators . in a somewhat simplified way , the pokcel cell can be considered as performing and , so that a state is transformed in the following way order to simplify the analysis ( a simplification that is not allowed when proving the full security of a scheme ) we look at the ideal case in which exactly one photon ( or none ) is sent by alice .the basis states are then the vacuum , and the six states ( that we denote for simplicity by ) ; ; ; ; and .the full transformation of a single photon pulse through the interferometer is given by equation .alice sends photons at time bins and only , so the interferometer transformation on alice s basis states is , and where denotes ancilla added during the process ) are originated by alice extended space and by bob ( ) . performing reveals the exact origin of those ancillas . ] .equation [ eqnb+-on01 ] can be used to describe the interferometer effect on a general qubit , shown in equation .the states sent by alice during the `` -bb84 '' protocol evolve in the interferometer as follows : bob can distinguish the computation basis elements of bases and , measuring time - bin , i.e. 
the states for and for in the measured basis .other states give bob no information about the state sent by alice .let bob be using interferometric setups and measuring 6 modes ( corresponding the space with a basis state ) with one or less photons . following definition [ def : hb-1+anc ] , the states spanning the space can be derived using equation ( adjusted to the appropriate space ) : defined over the space with basis state .note that performing requires an additional ancilla , since the modes number increases from six to eight .assume bob measures only time - bin in both output arms of the interferometer , i.e. the measured space is subspace spanned by . assuming a single - photon restriction , the reversed space , of that measured space that is spanned by : as can be verified using equation .the space is embedded in a 4-mode space , having the basis element , i.e. alice modes at times and and bob s added ancillary modes at times and respectively .the resulting six states span a 4-dimensional space , i.e. .the qsop in this special case is , spanned by with one or less photons .
theoretical quantum key distribution ( qkd ) protocols commonly rely on the use of qubits ( quantum bits ) . in reality , however , due to practical limitations , the legitimate users are forced to employ a larger quantum ( hilbert ) space , say a quhexit ( quantum six - dimensional ) space , or even a much larger quantum hilbert space . various specific attacks exploit these limitations . although security can still be proved in some very special cases , a general framework that considers such realistic qkd protocols , _ as well as _ attacks on such protocols , is still missing . we describe a general method of attacking realistic qkd protocols , which we call the ` quantum - space attack ' . the description is based on assessing the enlarged quantum space actually used by a protocol , the ` quantum space of the protocol ' . we demonstrate these new methods by classifying various ( known ) recent attacks against several qkd schemes , and by analyzing a novel attack on interferometry - based qkd .
the number of dark spots in the sun s surface has been counted in a systematic way since rudolf wolf introduced the concept , in the first half of the nineteenth century .more than any other solar observable , the sunspot number is considered the strongest signature of the 22-year magnetic cycle . moreover , since the sunspot number is the longest time series from all solar observables , it makes it the preferred proxy to study the variability and irregularity of the solar magnetic cycle . in the suns interior the large scale magnetic field is generated by a magnetohydrodynamic dynamo that converts part of the kinetic energy of the plasma motions into magnetic energy .polarity reversals occur every 11 years approximately , as it can be observed directly in the sun s dipolar field , and taking a full 22-years to complete a magnetic cycle .in fact during each magnetic cycle , the sun experiences two periods of maximum magnetic activity , during which magnetic flux tubes created in the tachocline layer , rise to the sun s surface by the action of buoyancy , emerging as sunspots pairs .the polarity switch is also observed in the change of polarity alignment of these bipolar active regions .although we know that the solar dynamo resides within the convection zone , we still do nt have a complete picture where all the physical mechanisms operate .there is a strong consensus that the physical mechanism behind the production of the large scale toroidal field component , the so called -effect , is located in the tachocline , a shear layer created by differential rotation and located at the base of the convection zone .the major source of uncertainty is the location of the -effect , the physical mechanism responsible to convert toroidal into poloidal field and close the system . in truth, this effect could be in fact a collection of several physical mechanisms that operate at different places and with different efficiencies .some examples are the babcock - leighton mechanism that operates in the solar surface and converts the product of decaying active regions into poloidal field , or the action of the turbulent magnetic helicity that takes place in the bulk of the convection zone .one of the main questions that is still being debated is the quantification of the importance and relative contribution of each component to the operation of the solar dynamo . because different authors choose to give the leading role to one or another source term , there is vast number of dynamo models .most of these are two dimensional models ( usually referred as 2.5d because they include two spatial coordinates plus time ) and are constructed using the mean - field theory framework proposed by . despite some short - comes , fruit of the approximations and formulation used , this type of models running in the kinematic regime , i.e. with prescribed large scale flows , has been very popular within the solar community because they can explain many of the observable features of the solar cycle . a detailed discussion on solar dynamo models , stellar magnetism and corresponding references to the vast literature on this subject can be found in the reviews by and . another way of tackling the solar dynamo problem is by producing 3d magnetohydrodynamic ( mhd ) simulations of the solar convection zone . these computer intensive simulationssolve the full set of the mhd equations ( usually under the anelastic approximation ) and are fully dynamical in every resolved scale , i.e. 
they take into consideration the interactions between flow and field and vice - versa unlike the kinematic regime usually used in mean field models , where only the flow influences the field .recently these simulations have started to show stable large scale dynamo behaviour and they are starting to emerge as virtual laboratories for understanding in detail some of the mechanisms behind the dynamo . on the other end of the modelling spectrum , we can find oscillator models , that use simplified parameterizations of the main physical mechanisms that participate in the dynamo process .although in the sun s interior the magnetic field generated by the dynamo has a very rich and complex structure , as a consequence of the structure of the magnetohydrodynamic differential equations , some of its main properties can be understood by analyzing low order differential equations obtained by simplification and truncation of their original mhd counterparts. then , several properties of the cycle that can be extracted by studying these non - linear oscillator models , as is usually done in nonlinear dynamics .these models have a solid connection to dynamical systems and are , from the physics point of view the most simple .this does not mean that they are the easiest to understand because the reduction in the number of dimensions can sometimes be difficult to interpret ( viz .introduction section of ) . these low order dynamo models ( lodm ) ,as they are some times called , allow for fast computation and long integration times ( thousands of years ) when compared to their 2.5d and 3d counterparts .they can be thought as a first order approximation to study the impact of certain physical mechanisms in the dynamo solution , or some of the properties of the dynamo itself as a dynamical system .the variability exhibited by the sunspot number time series , inspired researchers to look for chaotic regimes in the equations that describe the dynamo . for a complete review on this subject consult and references therein .some of the first applications of lodm were done in this context ( e.g. 
) .these authors found solutions with cyclic behaviour and variable amplitude , including extended periods of low amplitude reminiscent of the grand minima behaviour we see in the sun .the downside of these initial works was the fact that although the proposed model equations made sense from a mathematical point of view , the physics realism they attained was small .these low order models , with higher or lower degrees of physical complexity , can be used in many areas and several of their results have been validated by 2.5d spatially distributed mean field models , which grants them a certain degree of robustness .this happens specially in lodm whose formulation is directly derived from mhd or mean - field theory equations .some examples of the results obtained with lodm that have been validated by more complex mean field models are : the study of the parameter space , variability and transitions to chaos in dynamo solutions ; the role of lorentz force feedback on the meridional flow ; and the influence of stochastic fluctuations in the meridional circulation and in the -effect .some models even include time delays that embody the spatial segregation and communication between the location of source layers of the - and -effects .these have been applied to a more general stellar context by and recently , showed that one of this type of time delay lodm that incorporates two different source terms working in parallel , can explain how the sun can enter and exit in a self - consistently way from a grand minimum episode .a couple of lodm even ventured in the `` dangerous '' field of predictions .for example combined his lodm with an autoregressive model in order to forecast the amplitude of future solar cycles . in this articlewe show how can one of these lodm be used as a tool to study the properties of the solar magnetic cycle . for this purposewe use the international sunspot number time series during the past 23 solar magnetic cycles .nevertheless , the main focus of this work is to present a strategy inspired by helioseismology , were an _ inversion methodology _is used to infer variations of some of the lodm parameters over time .since these parameters are related to the physical mechanisms that regulate the solar dynamo , this should in principle , allow for a _ first order _ reconstruction of the main dynamo parameters over the last centuries . in a similar manner to helioseismology , the comparison between model solutions and data can be done by means of a _ forward method _ in which solar observational data is directly compared with the theoretical predictions , or by means of a _backward method _ in which the data is used to infer the behaviour of leading physical quantities of the theoretical model .naturally , it is necessary to develop an inversion technique or methodology that allows to reconstruct the quantities that have changed during the evolution of solar dynamo .this type of studies is well suited to explore several aspects of the solar and stellar dynamo theory .this can be done by : _( i ) _ building a tool to study the dynamo regimes operating in stars ; _ ( ii ) _ establishing an inversion methodology to infer the leading quantities responsible for the dynamics and variability of the solar cycle over time ; _ ( iii ) _ comparing the dynamo numerical simulations with the observational data ; _ ( iv ) _ use this tool as a toy model to test global properties of the solar dynamo . 
herewe particularly focus in discussing the three last items of this list , with special attention on the development of an inversion method applied here to the sunspot number time series .this is used to infer some of the dynamics of the solar dynamo back - in - time . in principlethis should allow us to determine the variation profiles of the quantities that drive the evolution of the magnetic cycle during the last few centuries . in section [ sec : lodmlsmf ], we present a non - linear oscillator derived from the equations of a solar dynamo that is best suited to represent the sunspot number . in section [ sec : lodminversion ] , we discuss how the non - linear oscillator analogue can be used to invert some of the leading quantities related with solar dynamo . in section[ sec : smc ] is discussed how solar observational data is use to infer properties of the solar magnetic cycle . in section [ sec : numericmodels ] we present a discussion about how the low order dynamo model can be used to test the basic properties of modern axisymmetric models and numerical simulations , as well to infer some leading properties of such dynamo models . in section [ sec :outlook ] , we discuss the outlook for the sun and other stars .the basic equations describing the dynamo action in the interior of a star are obtained from the magnetic - hydrodynamic induction , and the navier - stokes equations augmented by a lorentz force . under the usual kinematic approximation the dynamo problem consists in finding a flow field with a velocity that has the necessary properties capable of maintaining the magnetic field , against ohmic dissipation . for a star like the sunsuch dynamo models should be able to reproduce well - known observational features such as : cyclic magnetic polarity reversals with a period of 22 years , equatorward migration of during the cycle ( dynamo wave ) , the phase lag between poloidal and toroidal components of , the antisymmetric parity across the equator , predominantly negative / positive magnetic helicity in the northern / southern hemisphere , as well as many of the empirical correlations found in the sunspot records , like the waldmeier rule anti - correlation between cycle duration and amplitude ; the gnevyshev - ohl rule alternation of higher - than - average and lower - than - average cycle amplitude and grand minima episodes ( like the maunder minimum ) epochs of very low surface magnetic activity that span over several cycles . given the amount of complex features that a solar dynamo model has to reproduce , the task at hand is far from simple .the vast majority of dynamo models currently proposed to explain the evolution of the solar magnetic cycle ( kinematic mean - field models ) became very popular with the advance of helioseismology inversions and the inclusion of the differential rotation profile . in the kinematic regime approximation , the flow field is prescribed and only the magnetic induction equation is used to determine the evolution of . generally , the large scale magnetic field , the one responsible for most of the features observed in the sun is modelled as the interaction of field and flow where two source terms ( and ) naturally emerge from mean - field theory ( e.g. , * ? ? ? * ; * ? ? ?* ; * ? ? 
?from the mean - field electrodynamics , the induction equation reads where is the large - scale mean flow , and is the total magnetic diffusivity ( including the turbulent diffusivity and the molecular diffusivity ) .currently , as inferred from helioseismology , can be interpreted as a large - scale flow with at least two major flow components , the differential rotation throughout the solar interior , and the meridional circulation in the upper layers of the solar convection .given all the points above , and based on their popularity among the community , we start our study by considering a reference model based in the kinematic mean - field flux transport framework .although the results obtained here are based on this specific type of model , most of the analysis method used , as well the conclusions reached , can easily be extended to other models . as usual , under the simplification of axi - symmetry the large - scale magnetic field can be conveniently expressed as the sum of toroidal and poloidal components , that in spherical polar coordinates can be written as similarly , the large - scale flow field as probed by helioseismology can be expressed as the sum of an axisymmetric azimuthal ( differential rotation ) and poloidal ( meridional flow ) components : where , in the angular velocity and is the velocity of the meridional flow . accordingly, such decomposition of ( that satisfy the induction equation [ eq : vecb ] ) and leads to the following set of equations : \cdot \nabla \omega -\gamma ( b_\phi ) b_\phi \label{eq : bphi}\end{aligned}\ ] ] where is the magnetic diffusivity and is the source term of ( the mechanism to convert toroidal to poloidal field ) .moreover , following the suggestions of we also considered that the toroidal field can be removed from the layers where it is produced by magnetic buoyancy and obeying , where is a constant related to the removal rate and is the plasma density .as the sun s magnetic field changes sign from one solar cycle to the next it is a plausible idea to attribute alternating signs in odd / even cycles also to other solar activity indicators such as the sunspot number ( ssn ) .the resulting time series displays cyclic variations around zero in the manner of an oscillator .this suggests an oscillator as the simplest mathematical model of the observed ssn series . as , however, the profile of sunspot cycles is known to be markedly asymmetric ( a steep rise in 34 years from minimum to maximum , followed by a more gradual decline to minimum in years ) , a simple linear oscillator would be clearly a very poor representation of the sunspot cycle .a _ damped _ linear oscillator will , on the other hand , naturally result on asymmetric profiles similar to what is observed .the obvious problem that the oscillation will ultimately decay due to the damping could be remedied somewhat artificially by applying a periodic forcing or by reinitializing the model at each minimum . a much more natural way to counteract the damping , however ,is the introduction of nonlinearities into the equation indeed , such nonlinearities are naturally expected to be present in any physical system , see below .as long as the nonlinearity is relatively weak , the parameters and can be expanded into taylor series according to . due to the requirement of symmetry ( i.e. the behaviour of the oscillators should be invariant to a sign change in ) only terms of even degree will arise in the taylor series . 
to leading order , then , we can substitute into equation ( [ eq : linosc ] ) resulting in in the particular case when ( i.e. the nonlinearity affects the damping only ) and the other parameters are positive , the system described by equation ( [ eq : nonlinosc ] ) is known as a van der pol oscillator .the alternative case when nonlinearity affects the directional force / frequency only , i.e. , and , in turn , represents a _duffing oscillator_. due to their simplicity and universal nature these two systems are among those most extensively studied in nonlinear dynamics .it is straightforward to see that the oscillator is non - decaying , i.e. the origin is repeller , whenever ( negative damping ) in the case of a van der pol oscillator and/or and in a duffing oscillator.when a nonlinearity is present in both paramters ( i.e. and are both non - zero ) a combined _ van der pol - duffing oscillator _ results .the van der pol duffing oscillator , however , is more than just a good heuristic model of the solar cycle .in fact , an oscillator equation of this general form can be derived by a truncation of the dynamo equations . as noted before, we are especially interested in capturing the temporal dynamics associated with the large scale magnetic field . in order to construct a low order model aimed at capturing this dynamics, we follow the procedures described in .it has been suggested by and that a dimensional truncation of the dynamo equations ( [ eq : ap ] ) and ( [ eq : bphi ] ) is an effective method to reduce the system s dimensions and capture phenomena just on that scale . following that ansatz , gradient and laplacian operators are approximated by a typical length scale of the system ( e.g. convection zone length or width of the tachocline ) , leading to and .analogously this can be interpreted as a collapse of all spatial dimensions , leaving only the temporal behaviour . in terms of dynamical systems, we are projecting a higher dimensional space into a single temporal plane .after grouping terms in and ( now functions only dependent of the time ) we get where we have defined the _ structural coefficients _ , , as we now concentrate in creating an expression for the time evolution of since it is the field component directly associated with the productions of sunspots .we derive expression ( [ eq : dbdt ] ) in order to the time , and substitute ( [ eq : dadt ] ) in it to take away the dependence yielding where , , and are model parameters that depend directly on the structural coefficients .the name used to describe comes from the fact that these coefficients contain all the background physical structure ( rotation , meridional circulation , diffusivity , etc . )in which the magnetic field evolves .this oscillator ( equation [ eq : vdp ] ) is a van der pol - duffing oscillator and it appears associated with many types of physical phenomena that imply auto - regulated systems .this equation is a quite general result which should satisfy . in this case , unlike in the classical van der pol - duffing oscillator , the parameters are interconnected by a set of relations that link the present oscillation model with the original set of dynamo equations ( [ eq : ap]-[eq : bphi ] ) .this interdependency between parameters will eventually constrain the solution s space . 
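to make the behaviour of equation ( [ eq : vdp ] ) concrete , the short python sketch below integrates a van der pol - duffing oscillator written in the generic form $\ddot b + \mu(\xi b^2 - 1)\dot b + \omega_0^2 b + \lambda b^3 = 0$ . the buoyancy - loss term of the full lodm is omitted , and the parameter values are placeholders chosen only to give a roughly 22-year magnetic cycle ; they are not the fitted values discussed below .

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed generic form (placeholder parameters, not fitted values):
#   b'' + mu*(xi*b**2 - 1)*b' + w0**2*b + lam*b**3 = 0
w0  = 2 * np.pi / 22.0   # rad / yr -> roughly a 22-year magnetic cycle
mu  = 0.2                # strength of the nonlinear (negative) damping
xi  = 1.0                # sets the amplitude of the limit cycle
lam = 0.001              # duffing (cubic) term, kept small here

def rhs(t, y):
    b, bdot = y
    return [bdot, -mu * (xi * b**2 - 1.0) * bdot - w0**2 * b - lam * b**3]

sol = solve_ivp(rhs, (0.0, 300.0), [0.01, 0.0], dense_output=True, max_step=0.05)

t = np.linspace(100.0, 300.0, 4000)           # drop the transient, keep the attractor
b = sol.sol(t)[0]
upward = t[1:][(b[:-1] < 0) & (b[1:] >= 0)]   # zero crossings of the toroidal proxy
print("mean magnetic-cycle period ~", round(float(np.mean(np.diff(upward))), 1), "yr")
```

plotting b against its time derivative shows the clockwise limit cycle of figure ( [ fig:1 ] ) ; increasing the cubic term or the damping changes the period and the rise / decline asymmetry of the cycle , which is what the cycle - by - cycle fits described later exploit .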
as in the classical case , controls the frequency of the oscillations , i.e. the period of the solar magnetic cycle , controls the asymmetry ( or non - linearity ) between the rising and falling parts of the cycle , and directly affects the amplitude . the parameter , related to the buoyancy loss mechanism , sets the overall peak amplitude of the solution . figure ( [ fig:1 ] ) shows the solution of equation ( [ eq : vdp ] ) in a time vs. amplitude diagram ( left ) and in a ( , ) phase space ( right ) . from this figure we find that this dynamo solution , under suitable parametrization ( viz . next section ) , is a self - regulated system that rapidly relaxes to a stable 22-year oscillation . in the phase space the solution tends to a limit cycle or attractor . a complete ( clockwise ) turn in the phase space corresponds to a complete solar magnetic cycle . [ figure [ fig:1 ] : left , the solution obtained from equation ( [ eq : vdp ] ) with parameters , , , . right , a ( , ) phase space representation of the solution . the blue arrows indicate the direction of increasing time and the red dot the initial value used . also indicated in this panel are the regions corresponding to the maxima and minima of the cycle . _ adapted from _ . ] in order to estimate values for the coefficients in equation ( [ eq : vdp ] ) , and fitted this oscillator model either to a long period of the solar activity ( several solar cycles ) or to each magnetic cycle individually . we shall return to this point in subsequent sections . a more general approach to the problem of finding the parameter combinations with which the classical van der pol - duffing oscillator returns solar - like solutions was taken by . the authors mapped the parameter space of the oscillator by adding stochastic noise to its parameters using different methods . the objective was to constrain the parameter regime where this nonlinear model shows the observed attributes of the sunspot cycle , the most important requirement being the presence of the waldmeier effect according to the definition of . noise was introduced either as an ornstein - uhlenbeck process or as a piecewise constant function keeping a constant value for the interval of the correlation time . the effect of this noise was assumed to be either additive or multiplicative . the amplitudes and correlation times of the noise defined the phase space . the attributes of the oscillator model were first examined in the case of the van der pol oscillator ( no duffing cubic term ) , with a perturbation either in the damping parameter , , or in the nonlinearity parameter , , as shown in the equations below : \begin{aligned} \ddot x &= -\omega_0 ^ 2x-\mu(t)\left [ \xi_0 x^2 - 1\right]\dot{x } , \\ \label{e : vdp - xi}\ddot x &= -\omega_0 ^ 2x-\mu_0\left [ \xi(t ) x^2 - 1\right]\dot{x } . \end{aligned} the first equation perturbs the damping and the second ( [ e : vdp - xi ] ) perturbs the nonlinearity . the constant parameters , , and used were taken from the fitted values listed by . note that in this simple case , variations in these parameters were assumed to be independent from each other , whereas in reality they are interrelated ( see equation [ eq : vdp ] ) . the results show that the model presents solar - like solutions when a multiplicative noise is applied to the nonlinearity parameter , as in equation ( [ e : vdp - xi ] ) . an example of a time series produced by this type of oscillator is shown in figure ( [ fig : nmpk2013 ] ) . [ figure [ fig : nmpk2013 ] : example time series produced by the oscillator of equation ( [ e : vdp - xi ] ) ; ssn values were defined as . the noise applied ( piecewise constant in this case ) is shown in the top panel . ] [ figure [ fig : nmpk201x ] : ssn values were here defined as , following . the noise applied is shown in the top and middle panels . the duffing parameter was here given a constant value . ]
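a minimal numerical sketch of this perturbation study is given below , for the variant of equation ( [ e : vdp - xi ] ) in which the nonlinearity parameter is multiplied by a piecewise - constant random factor held fixed over one correlation time . the noise amplitude , the correlation time and the oscillator parameters are illustrative guesses , not the values used in the studies cited above , and the ssn - like proxy is simply taken proportional to the square of the oscillator variable .

```python
import numpy as np

w0, mu0 = 2 * np.pi / 22.0, 0.3
t_corr  = 2.0      # noise correlation time in years (assumed)
sigma   = 0.5      # relative noise amplitude (assumed)

def xi_of_t(t):
    """piecewise-constant multiplicative noise on the nonlinearity parameter:
    one random factor per correlation-time interval (seeded per interval so
    the same value is returned every time the interval is revisited)."""
    k = int(t // t_corr)
    # kept positive to avoid runaway solutions
    return abs(1.0 + sigma * np.random.default_rng(k).standard_normal())

def rhs(t, y):
    b, bdot = y
    return np.array([bdot, -mu0 * (xi_of_t(t) * b**2 - 1.0) * bdot - w0**2 * b])

# fixed-step rk4, so that the stochastic parameter is sampled in a controlled way
dt, t_end = 0.01, 500.0
y, ssn_like = np.array([0.1, 0.0]), []
for t in np.arange(0.0, t_end, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    ssn_like.append(y[0] ** 2)                  # ssn-like proxy ~ b**2

ssn_like = np.array(ssn_like[int(100 / dt):])   # discard the transient
print("variability of the proxy (std/mean):",
      round(float(ssn_like.std() / ssn_like.mean()), 2))
```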
as a next step towards a fully general study , let us consider the case where both and are simultaneously perturbed and the duffing term is also kept in the oscillator equation ( [ eq : nonlinosc ] ) .noise is applied to but it also affects as the values of and are assumed to be related as ; here , and are constants .a mapping of the parameter space shows that in this case solar - like solutions are more readily reproduced compared to the case when only one parameter was assumed to vary ( see fig .[ fig : nmpk201x ] ) .this finding is in line with the information derived from the lodm developed in the previous section .an ongoing study shows that the corresponding time dependence in the duffing parameter , as predicted by the lodm , has a significant effect on the character of the solution .we note that additive noise was first applied to one parameter ( ) of a van der pol oscillator by but the focus of that work was on reproducing cycle to cycle fluctuations , withour considering the waldmeier effect .this model was further analysed by ( and references therein ) who studied the behavior of the hurst exponent of this system and concluded that this type of fluctuations implies that the stochastic process which underlies the solar cycle is not simply brownian .this means that long - range time correlations could probably exist , opening the way to the possibility of forecasts on time scales comparable to the cycle period .an attempt to introduce the effects of such non - gaussian noise statistics into the lodm was made by who suggest that this may contribute to cyclic variations of solar activity on time scales shorter than 11 years .in the previous example a perturbation method was studied in order to find solar like solutions for this non - linear oscillator .another way of thinking is to pair the oscillator directly to some solar observable and try to constraint its parameters . as mentioned in the introduction we choose the international sunspot number , and we use it to build a proxy of the toroidal magnetic component .since the is usually taken to be proportional to the toroidal field magnetic energy that erupts at the solar surface ( ) , , this makes it ideal for compare with solutions of the lodm . taking this in consideration , have built a toroidal field proxy based on the sunspot number by following the procedure proposed by , i.e. .details about the construction of the toroidal proxy ( see fig . [ fig:2 ] ) can be found in and .is obtained by calculating , changing the sign of alternate cycles ( represented in gray ) , and smoothing it down using an fft low pass filter of 6 months .the vertical thin dotted lines represent solar cycle minima . ]the solution obtained for equation ( [ eq : vdp ] ) presented in figure ( [ fig:1 ] ) shows that the solar cycle is a self - regulated system that tends to a stable solution defined by an attractor ( limit cycle ) .if we allow for the different physical processes responsible for the solar dynamo and embedded in the structural coefficients ( [ eq : c1 ] , [ eq : c2 ] and [ eq : c3 ] ) , i.e. the differential rotation , the meridional circulation flow , the mechanism , and the magnetic diffusion , to change slowly from cycle to cycle then we start to observe deviations from the equilibrium state . 
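the construction of the toroidal - field proxy described earlier in this section can be sketched in a few lines of python : take the monthly sunspot number , form its square root , flip the sign on alternate cycles ( delimited by the cycle minima ) , and smooth the result . the minima indices and the smoothing used below ( a plain moving average instead of the 6-month fft low - pass filter of the original works ) are stand - ins for illustration .

```python
import numpy as np

def toroidal_proxy(ssn_monthly, minima_idx, smooth_months=6):
    """toroidal-field proxy from a monthly sunspot-number series:
    proxy ~ sqrt(ssn), sign flipped on alternate cycles, then smoothed
    (moving average here; the original works use an fft low-pass filter)."""
    b = np.sqrt(np.asarray(ssn_monthly, dtype=float))
    edges = [0] + list(minima_idx) + [len(b)]        # cycles delimited by minima
    for n, (i0, i1) in enumerate(zip(edges[:-1], edges[1:])):
        if n % 2 == 1:                               # flip every other cycle
            b[i0:i1] *= -1.0
    kernel = np.ones(smooth_months) / smooth_months
    return np.convolve(b, kernel, mode="same")

# toy usage: two idealized 11-year "cycles" of synthetic monthly ssn values
months = np.arange(264)
ssn = 80.0 * np.abs(np.sin(np.pi * months / 132.0))
proxy = toroidal_proxy(ssn, minima_idx=[132])
print(proxy[60], proxy[190])    # opposite signs on the two sides of the minimum
```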
if deviations from this sort of dynamical balance occur , such that if one of these processes changes due to an external cause , the other mechanisms also change to compensate this variation and ensure that the solar cycle finds a new equilibrium .to test this idea of an equilibrium limit cycle , we fit the lodm parameters to the \{, } phase space of the built toroidal proxy ( see fig .[ fig:4 ] ) . :the crosses correspond to local area averaged values found by dividing the data into 32 temporal intervals .the red dashed curve is a fit to the crosses .the continuous black curves correspond to a fit to all data points ( not grouped in intervals ) . for the red curvewe have that , , and .figure _ adapted from _ . ]if one of the parameter s variation is very large the system can be dramatically affected , leading to a quite distinct evolution path like the ones found during the solar grand minima .we will develop this subject in a subsequent section .solutions with fluctuation similar to those we see on the solar cycle are easily set by variations in the parameter ( and the physical processes associated with it ) . by definition the structural coefficient that regulates this parameter ( ) also has an important role in the other parameters ( , and ) . in the lodm equation ( [ eq : vdp ] ) the and quantitiesregulate the strength and the non - linearity of the damping .moreover , an occasional variation on , like a perturbation on the meridional flow amplitude , ( see structural coefficients [ eq : c1 ] ) will affect all sets of parameters leading to the solar dynamo ( equation [ eq : vdp ] ) to find a new equilibrium , which will translate into the solar magnetic cycle observable like the sunspots number , showing an irregular behaviour .the well - known relation discover by max waldmeier , that the time that the sunspot number takes to rise from minimum to maximum is inversely proportional to the cycle amplitude in naturally captured by the lodm assuming discrete variations in .notice that the waldmeier effect occurs as a consequence of the limit cycle becoming increasingly sharp as increases , i.e. , the sunspot number amplitude increases as the cycle s rising times gets shorter .from the physical point of view , based on observations , we know that in the sun some of the physical background structures that are taken as constant in our standard dynamo solution are nt so . in order to test that specific changes of the background state lead to the observed changes in the amplitude of the solar cycle , the following strategy was devised . at a first approximationwe assume that the structural coefficients can change only discretely in time , more specifically from cycle to cycle while the magnetic field is allowed to evolve continuously .the idea is that changing coefficients will generate theoretical solutions with different amplitudes , periods and eigen - shapes at different times and by comparing these different solution pieces with the observed variations in the solar magnetic field , we are able to infer information about the physical mechanisms associated with the coefficients . to do this we compare our theoretical solution with a proxy built from the international monthly averaged ssn since 1750 to the present .as mentioned before we assume that .the proxy data is separated into individual cycles and fitted using equation ( [ eq : vdp ] ) , considering that the buoyancy properties of the system are immutable , i.e. 
is constant throughout the time series . this means that when we fit the lodm to solar cycle , we will retrieve the set of coefficients that best describe that cycle . this allows us to probe how these coefficients vary from cycle to cycle and , consequently , how the physical mechanisms associated with them evolve in time . equation ( [ eq : vdp ] ) is afterwards solved by changing the parameters to their fitted values , at every solar minimum , using a stepwise function ( similar to that presented in figure ( [ fig : vp_c1_var ] ) for ) . figures ( [ fig : vp_c1_var ] ) and ( [ fig : fit ] ) highlight this procedure . the fact that such a simplified dynamo model can reach this degree of resemblance with the observed data just by controlling one or two parameters is an indication that it captures the most important physical processes occurring in the sun . the simple procedure previously described allows us to reconstruct the behavior of solar parameters back in time . using an improved fitting methodology , the reconstruction of the variation levels of the solar meridional circulation for every solar ( sunspot ) cycle over the last 250 years was obtained with this model . one must notice that in this specific lodm the amplitude of the cycle depends directly on the amplitude of the meridional flow during the previous cycle . it is entirely possible that other models that consider a different theoretical setup might return a different behaviour . looking at equation ( [ eq : c1 ] ) , we can see that the coefficient depends on two physical parameters , the magnetic diffusivity , , and the amplitude of the meridional circulation , . the magnetic diffusivity of the system is a property tightly connected with turbulent convection and is generally believed to change only on time scales of the order of stellar evolution . this leaves variations in as the only plausible explanation for the variation observed from cycle to cycle . therefore , by looking at the evolution of we can effectively assume that we are looking at the variation in the strength of the meridional circulation . the results obtained are presented in figure ( [ fig : vp_c1_var ] ) . [ figure [ fig : vp_c1_var ] : the inverted ( black line ) compared to the smoothed sunspot number ( gray ) . _ adapted from _ . ] although this result is in itself interesting , a more important concept came from this study . when presented their results for the first time , they introduced the idea that coherent long term variations ( of the order of the cycle period ) in the strength of the meridional circulation could provide an explanation for the variability observed in the solar cycle ( see fig . [ fig : lodm_surya_comp ] ) . this result was also _ a posteriori _ numerically validated using a 2.5d flux transport model ( and ) . only a couple of years later , presented meridional circulation measurements spanning over the last solar cycle . their measurements confirmed that the amplitude of this plasma flow changes considerably from cycle to cycle . recently , two other groups have tested this idea with their 2.5d dynamo models , finding additional features based on this effect , c.f . and . for example , it was found that the instant at which the change in the meridional flow takes place has an influence on the duration of the following solar cycle . this was used as an explanation for the abnormally long duration of the last minimum . just for reference , the numbering of solar cycles only started after 1750 , with solar cycle 1 beginning in 1755 . at this moment we are in the rising phase of solar cycle 24 .
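the cycle - by - cycle fitting procedure outlined above can be sketched as a small least - squares problem : for each observed cycle , integrate the oscillator for a trial set of coefficients , starting from the value and slope of the proxy at the cycle minimum , and minimise the misfit over that cycle while keeping the buoyancy - related term fixed . the assumed oscillator form , parameter names , bounds and initial guesses below are illustrative ; the published reconstructions use a more careful fitting methodology .

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def cycle_model(params, t, b0, bdot0, lam_fixed=0.001):
    """integrate the assumed van der pol-duffing form over one cycle."""
    w0, mu, xi = params
    def rhs(_, y):
        b, bdot = y
        return [bdot, -mu * (xi * b**2 - 1.0) * bdot - w0**2 * b - lam_fixed * b**3]
    return solve_ivp(rhs, (t[0], t[-1]), [b0, bdot0], t_eval=t, max_step=0.05).y[0]

def fit_one_cycle(t, proxy, p0=(2 * np.pi / 22.0, 0.3, 1.0)):
    """least-squares fit of (w0, mu, xi) to the proxy of a single cycle,
    keeping the buoyancy-related term fixed from cycle to cycle."""
    b0, bdot0 = proxy[0], (proxy[1] - proxy[0]) / (t[1] - t[0])
    resid = lambda p: cycle_model(p, t, b0, bdot0) - proxy
    return least_squares(resid, p0,
                         bounds=([0.05, 0.0, 0.1], [1.0, 5.0, 10.0])).x

# toy usage: recover the parameters of a synthetic "cycle" generated by the model
t = np.linspace(0.0, 22.0, 200)
proxy = cycle_model((2 * np.pi / 21.0, 0.5, 1.2), t, b0=0.1, bdot0=0.3)
print("recovered (w0, mu, xi):", np.round(fit_one_cycle(t, proxy), 3))
```

repeating the fit for each observed cycle , minimum to minimum , gives one set of coefficients per cycle , from which the time variation of the underlying physical quantities can then be read off .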
solar grand minima correspond to extended periods ( a few decades ) where very low or no solar activity occurs . during these periods no sunspots ( or very few ) are observed in the solar photosphere , and it is believed that other solar phenomena also exhibit low levels of activity . the most famous grand minimum on record is the maunder minimum , which occurred between 1645 and 1715 ( ) . a possible explanation for the origin of these quiescent episodes was put forward by . using a lodm , they showed that a steep decrease in the meridional flow amplitude can lead to grand minima episodes like the maunder minimum ( see figs . [ fig : grandmin ] and [ fig : bgrandmin ] ) . this effect presents the same visual characteristics as the observed data , namely a rapid decrease of magnetic intensity and a gradual recovery into normal activity ( see fig . [ fig : grandmin ] ) after the meridional circulation amplitude returns to its normal values . a similar result was later obtained by , again using a more complex 2.5d numerical flux transport model . nevertheless , the reasons that could lead to a decrease of the meridional flow amplitude were not explored . this served as a motivation to study the behavior of this lodm in the non - kinematic regime , explained in section 5.2 . some examples mentioned in the introduction hint that fluctuations in the mechanism can also trigger grand minima . we focus now on a specific example , the lodm developed by . in this work the authors used a time - delay lodm similar to that presented in , but expanded with the addition of a second effect . this model incorporates two of these mechanisms , one that mimics the surface babcock - leighton mechanism ( bl ) , and another one analogous to the classical mean - field ( mf ) -effect that operates in the bulk of the convection zone . this setup captures the idea that the bl mechanism should only act on strong magnetic fields that reach the surface , and that weak magnetic fields that diffuse through the convection zone should feel the influence of the mf . the authors subject these two effects to different levels of fluctuations and find that , in certain parameter regimes , the solution of the system shows the same characteristics as a grand minimum . these results were also validated by implementing a similar setup in a 2.5d mean - field flux transport dynamo model . again , this shows the usefulness of low order models to probe ideas before their implementation in more complex models . for the near future , perhaps one of the most interesting applications of this lodm is its use in the predictability of future solar cycle amplitudes . the first step towards this objective is presented in . the authors studied the correlations between the lodm fitted structural coefficients and the cycle characteristics ( amplitude , period and rising time ) . they found very useful relationships between these quantities measured for cycle n and the amplitude of cycle n+1 . these relationships were put to the test by predicting the amplitude of the current solar cycle 24 ( see fig . [ fig : predict24 ] ) .
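the grand - minimum behaviour discussed above can be caricatured with the oscillator alone : in the sketch below the self - excitation ( negative damping ) term is switched off for a few decades and replaced by a weak ordinary damping , as a crude stand - in for a dynamo pushed below criticality by a weakened meridional flow , and is then restored . the switching times , the damping values , the omission of the cubic term and the mapping from the meridional flow onto the damping term are all illustrative assumptions , not the prescription of the published models .

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, xi, mu = 2 * np.pi / 22.0, 1.0, 0.25

def damping(t, b, bdot):
    """normal self-excited (van der pol) damping, except during an imposed
    'grand minimum' (here 150-220 yr) when the dynamo is made sub-critical."""
    if 150.0 < t < 220.0:
        return 0.2 * bdot                    # plain decay: sub-critical dynamo
    return mu * (xi * b**2 - 1.0) * bdot     # self-excited limit cycle

def rhs(t, y):
    b, bdot = y
    return [bdot, -damping(t, b, bdot) - w0**2 * b]

sol = solve_ivp(rhs, (0.0, 400.0), [0.5, 0.0], max_step=0.05, dense_output=True)
t = np.linspace(0.0, 400.0, 8000)
b = np.abs(sol.sol(t)[0])
for label, window in (("before", t < 150), ("during", (t > 160) & (t < 220)),
                      ("recovery", (t > 230) & (t < 260)), ("after", t > 320)):
    print(f"{label:9s} peak |b| ~ {b[window].max():.2f}")
```

the printed peak amplitudes drop quickly once the self - excitation is removed and climb back only gradually after it is restored , which is the qualitative signature described above for the maunder - minimum - like solutions .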
in recent years there has been strong development of different types of dynamo models to compute the evolution of solar magnetic activity and to explore some of the causes of magnetic variability . two classes of models have been quite successful : the kinematic dynamo models and , more recently , the global magnetohydrodynamical models . the two types of dynamo models take quite distinct approaches to the dynamo problem ; the first one solves the magnetic induction equation for a prescribed velocity field ( which is consistent with helioseismology ) , and the second one performs global magnetohydrodynamical simulations of the solar convection zone . many of these models are able to reproduce some of the many observational features of the solar magnetic cycle . nevertheless , it remains quite a difficult task to successfully identify which are the leading physical processes in current dynamo models that actually drive the dynamo in the solar interior . the usual method to test these dynamo models is to compare their theoretical predictions with the different sets of data , including the sunspot numbers ; however , in many cases the conclusions obtained are very limited , as different physical mechanisms lead to very similar predictions . this problem also arises in the comparison between different dynamo models , including different types of numerical simulations . a possible solution to this problem is to use inverted quantities ( obtained from observational data ) to test the quality of the different solar dynamo models , rather than making a direct comparison of data . for those familiar with helioseismology , there is a good analogue : it is the equivalent of comparing the inverted sound speed profile ( obtained from observational data ) with the sound speed profile predicted by solar models ( _ backward approach _ ) , rather than comparing predicted frequencies with observational frequencies ( _ forward approach _ ) . the former method to test physical models is more insightful than the latter one . at the present level of our understanding of solar dynamo theory , as a community we could gain a more profound understanding of the mechanisms behind the solar magnetic variability if we start developing backward methods to analyse solar observational data and test dynamo models . [ figure : phase space of ( a ) a kinematic dynamo model in which is constant , and ( b ) a variable kinematic dynamo model in which the for each magnetic cycle corresponds to the value obtained from the sunspot number temporal series . both simulations correspond to 130-year time series . the small variability present in the left panel is due to the stabilization of the numerical solution . figure _ adapted from _ . ] using the meridional velocity inverted from the sunspot number time series ( see fig .
[fig : vp_c1_var ] ) , showed that most of the long term variability of the sunspot number could be explained as being driven by the meridional velocity decadal variations , assuming that the evolution of the solar magnetic field is well described by an axisymmetric kinematic dynamo model .figure ( [ fig : lodm_surya_comp ] ) shows a reconstructed sunspot times series that has been obtained using the meridional velocity inverted from the sunspot observational time series , and figure ( [ fig : compsuryssn ] ) shows the phase space of a standard axisymmetric kinematic dynamo model ( with the same for all cycles ) ( ) and a solar dynamo model where the changes from cycle to cycle as inverted from the sunspot times series ( ) .it is quite encouraging to find that such class of dynamo models for which the changes overtime successfully reproduced the main features found in the observational data .moreover , in their article tested two different methods of implementing the velocity variation for each magnetic cycle , namely , by considering that amplitude variations in that take place at sunspot minima or at sunspot maxima .all the time series show a few characteristics that are consistent with the observed sunspot records .in particular , all the simulations show the existence of low amplitudes on the sunspot number time series between 1800 and 1840 and between 1870 and 1900 .the simulation that best reproduces the solar data corresponds to the model ssnrec[3 ] ( see fig . [fig : lodm_surya_comp ] ) , in which was implemented a smoothed variation profile between consecutive cycles and taking place at the solar maximum .this clearly highlights the potential of such methodology . here, we discuss the same methodology as the one used in the previous section , but instead of applying it to observational sunspot number records , it is used to reconstruct the sunspot time series .the results obtained clearly show that the present kinematic dynamo models can reproduce in some detail the observed variability of the solar magnetic cycle .the fact that for one of the sunspot models model ssnrec[3 ] , it presents a strong level of correlation with the observational time series , lead us to believe that the main idea behind this _ backward approach _ is correct and it is very likely that the inverted variation is probably very close to the variation that happens in the real sun .clearly , under the assumed theoretical framework the meridional circulation is the leading quantity responsible for the magnetic variability found in the sunspot number time series and current solar dynamo models are able to reproduce such variability to a certain degree .so far , the vast majority of the lodm applications presented here followed the traditional assumption that the solar dynamo can be correctly modeled in the kinematic regime , where only the plasma flows influence the production of magnetic field , and not the other way around .this kinematic approximation is used in the vast majority of the present 2.5d spatially resolved dynamo models . 
in the last couple of yearsthough , evidence started to appear supporting the claim that this kinematic regime might be overlooking important physical mechanisms for the evolution of the dynamo .the idea that the meridional flow strength can change over time and affect the solar cycle amplitude coupled with the measurements of and indicate that the observed variation in this flow is highly correlated with the levels of magnetic activity .this leads to the fundamental question : _ `` is the flow driving the field or is the field driving the flow ? ''_ the first clues are starting to appear from 3d mhd simulations of solar convection .the recent analysis of the output of one of the large - eddy global mhd simulations of the solar convection zone done by shows interesting clues .these simulations solve the full set of mhd equations in the anelastic regime , in a broad , thermally - forced stratified plasma spherical shell mimicking the scz and are fully dynamical on all spatiotemporally - resolved scales .this means that a two way interaction between field and flow is always present during the simulation .the analysis shows that the interaction between the toroidal magnetic field and the meridional flow in the base of the convection zone indicates that the magnetic field is indeed acting on the equatorward deep section of this flow , accelerating it .this observed relationship runs contrary to the usually assumed kinematic approximation . in order to checkif this non - kinematic regime has any impact in the long term dynamics of the solar dynamo , implemented a term that accounts for the lorentz force feedback in a lodm similar to the one presented here .this allows to fully isolate the global aspects of the dynamical interactions between the meridional flow and magnetic field in a simplified way .they assumed that the large - scale meridional circulation , , is divided into a `` kinematic '' constant part , ( due to angular momentum distribution ) and a time dependent part , , that encompasses the lorentz feedback of the magnetic field .therefore they redefine as where the time dependent part evolves according to the first term is a magnetic nonlinearity representing the lorentz force and the second is a `` newtonian drag '' that mimics the natural resistance of the flow to an outside kinematic perturbation . 
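Schematically, the decomposition just described can be written as

\[
v_p(t) \;=\; v_0 \;+\; v_1(t),
\qquad
\frac{\mathrm{d}v_1}{\mathrm{d}t} \;=\; F_L(A,B)\;-\;\kappa\, v_1(t),
\]

where \(v_0\) is the constant kinematic part of the flow, \(F_L(A,B)\) denotes the magnetic nonlinearity (quadratic in the field variables) representing the Lorentz force, and \(\kappa\) is the Newtonian drag coefficient. The precise field combination entering \(F_L\) in the published model is not reproduced here and is deliberately left generic.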
under these conditions the lorentz force associated with the cyclic large - scale magnetic fieldacts as a perturbation on the otherwise dominant kinematic meridional flow .this idea was not new and it was used before in the context of magnetically - mediated variations of differential rotation in mean - field dynamo models , and .the modified lodm equation they end up defining are where , is defined as and takes the role of magnetic diffusivity , while the other coefficients remain the same .while the values used for the structural coefficients , are mean values extracted from the works presented in the previous in sections , the parameters associated with the meridional flow evolution , , and deserved now the attention .these parameters have an important role in the evolution of the solution space .the behavior observed in the solutions range from fixed - amplitude oscillations closely resembling kinematic solutions , multiperiodic solutions , and even chaotic solutions .this is easier to visualize in figure ( [ fig - bifurcation_maps ] ) where are presented analogs of classical bifurcation diagrams by plotting successive peak values of cycle amplitudes , for solutions with fixed ( , ) combinations but spanning through values of .transitions to chaos through bifurcations are also observed when holding fixed and varying instead . between and for different and .( a ) single period regime , , ; ( b ) appearance of period doubling , , and ( c ) shows signatures of chaotic regimes with multiple attractors and windows , obtained with , . _adapted from _.,title="fig:",width=143 ] between and for different and .( a ) single period regime , , ; ( b ) appearance of period doubling , , and ( c ) shows signatures of chaotic regimes with multiple attractors and windows , obtained with , ._ adapted from _.,title="fig:",width=143 ] between and for different and .( a ) single period regime , , ; ( b ) appearance of period doubling , , and ( c ) shows signatures of chaotic regimes with multiple attractors and windows , obtained with , . _adapted from _.,title="fig:",width=143 ] the authors expanded the methodology used and applied stochastic fluctuations to parameter , the one that controls the influence of the lorentz force . as a result , and depending on the range of fluctuations , they observed that the short term stochastic kicks in the lorentz force amplitude create long term modulations in the amplitude of the cycles ( hundreds of years ) and even episodes where the field decays to near zero values , analog to the previously mentioned grand minima .the duration and frequency of these long quiescent phases , where the magnetic field decays to very low values , is determined by the level of fluctuations of and the value of .the stronger this drag term is , the shorter the minima are and the higher the level of fluctuation of , the more common these intermittency episodes become .figure ( [ fig - lodm_stochastic_a ] ) shows a section of a solution that spanned for 40000 years and that presents all the behaviors described before .$ ] , and .all other model parameters are the same as in the reference solution .panel ( a ) shows a section of the simulation where the long term modulation can be seen . in blackis , red and blue a scaled version of the meridional flow , in this case 5 . in panel( b ) the same quantities but this time zooming in into a grand minimum ( off phase ) period . 
_adapted from _.,width=377 ] in this specific example they used 100% fluctuation in and maintaining all the other parameters constant . in the the parameter space used to produce this figure , the solution without stochastic forcing is well behaved in the sense that it presents a single period regime .therefore , the fluctuations observed in this solution are a direct consequence of the stochastic forcing of the lorentz force and not from a chaotic regime of the solution s space . to understand how the grand minima episodes arise they resort to visualizing one of these episodes with phase space diagrams of \{ , ,}. this allows to see how these quantities vary in relation to each other and try to understand the chain of events that trigger a grand minimum .the standard solution for the lodm without stochastic forcing , i.e. with fixed at the mean value of the random number distribution used , is the limit cycle attractor , i.e. , a closed trajectory in the \{ , } phase space .this curve is represented as a black dashed trajectory in the panels of figure ( [ fig - lodm_phasespace ] ) .the gray points in this figure are the stochastic forced solution values sampled at 1 year interval .these points scatter around the attractor representing the variations in amplitude of the solution .occasionally the trajectories defined by these points collapse to the center of the phase space ( the point \{0,0, } is also another natural attractor of the system ) indicating a decrease in amplitude of the cycle , i.e. a grand minimum .the colored trajectory evolving in time from purple to red represents one of those grand minimum .this happens when the solution is at a critical distance from the limit cycle attractor and gets a random kick further away from it .this kick makes the field grow rapidly . in turn , since the amplitude of the field grows fast , the lorentz force will induce a similar growth in eventually making change sign .when this occurs , behaves as a sink term quenching the field growth very efficiently .this behavior is seen in the two bottom panels of figure ( [ fig - lodm_phasespace ] ) where decays to its imposed value after the fields decay . after this collapse of it starts behaving has a source term again and the cyclic activity proceeds . ._ adapted from _.,width=377 ] one clear advantage of low order models emerges from this example .currently 3d mhd simulations of solar convection spanning a thousand years take a couple of months to run in high efficiency computational clusters or in supercomputers .longer simulations are at the moment prohibitive not only for the amount of time they take but also for the huge amount of data they generate .statistical studies on grand minima originated by the kind of magnetic back - reaction described here , require long integration times where many thousands of cycles need to be simulated .the lodm calculations can be done in a few minutes or hours in any current desktop . 
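A sketch of the kind of inexpensive parameter sweep this makes possible is given below. It assumes a user-supplied routine integrate(p) that returns a long LODM time series (t, B) for one value p of a control parameter (for example, the Lorentz-force coupling); successive cycle peak amplitudes are then collected and plotted against p, producing bifurcation-style diagrams of the kind described above. The routine name and the transient cut are assumptions of this sketch.

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks

def bifurcation_diagram(integrate, param_values, t_transient=2000.0):
    for p in param_values:
        t, B = integrate(p)                      # assumed user-supplied LODM integration
        mask = t > t_transient                   # discard the initial transient
        amp = np.abs(B[mask])
        peaks, _ = find_peaks(amp)
        plt.plot(np.full(peaks.size, p), amp[peaks], ',k', alpha=0.3)
    plt.xlabel('control parameter')
    plt.ylabel('cycle peak amplitude')
    plt.show()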
the grand minima mechanism presented in this sectionis now being studied by looking at the data available from 3d simulations .some effects are easier to find when you know what to look for .so far we have shown that low order dynamo models ( for which the approximation must be carefully chosen to keep the relevant physics within ) could lead the way to explore some features of the solar magnetic activity including the long - term variability .the study of the phase diagram clearly shows that on a scale of a few centuries the solar magnetic cycle shows evidence for a van der pool attractor put in evidence by the mean solar magnetic cycle , although on a time - scale of a few solar magnetic cycles the phase space trajectory changes dramatically . in some cases the trajectory collapses completely for several magnetic cycles as in the periods of grand minimathis gives us an indication about the existence of a well defined self regulated system under all this observed magnetic variability , for which we still need to identify the leading physical mechanisms driving the solar dynamo to extreme activity scenarios like periods of grand minima .actually , the fact that a well - defined averaged van der pool limit curve exists for all the sunspot records , can be used to test different solar dynamo models , including numerical simulations , against observational data or between different dynamo models .moreover , the fact that such well - defined attractor exists in the phase space , and several dynamo models are able to qualitatively reproduce the solar variability ( as observed in the phase space gives us hope that in the near future we will be able to make quite reliable short term predictions of the solar magnetic cycle variability , at least within certain time intervals of solar magnetic activity .a significant contribution can be done by the utilization of more accurate sunspot time series in which many of the historical inaccuracies were corrected . in the future ,similar inversion techniques could be developed , namely to study the possible asymmetry between the north and south hemispheres using the sunspot areas , either by treating each of the sunspot areas as two distinct times series or by attempting two - dimension inversions of sunspot butterfly diagrams . in the former case ,recently have analysed these long - term sunspot areas time series and found that turbulent convection and solar granulation are responsible by the stochastic nature of the sunspot area variations . in the last case , we could learn about the evolution of the solar magnetic cycle in the tachocline during the last two and a half centuries . moreover ,most of the inversion methods used for the sunspot number can be easily extended to other solar magnetic cycles proxies such as tsi , h and magnetograms .the oscillator models , as a first order dynamo model are particularly suitable to study the magnetic activity in other stars . a good proxy of magnetic activity in stars in the chromospheric variations of ca ii h and k emission lines . have found many f2 and m2 stars which seem to have cyclic magnetic cycle activity , as observed in the sun ( see fig . [fig : msstar ] ) . 
in some of these starsthe observational time series covers several cycles of activity .in particular , it will be interesting to identify how the dynamo operating in these stars differs from the solar case .more recently , the corot and kepler space missions have observed photometric variability associated with solar - like activity in a very large number of main sequence and sub - giant stars . while the time coverage is too short to derive cycle periods for stars very close to the sun , the overall _ level _ of activity and its dependence on various stellar parameters can be studied on a large statistical sample .nevertheless , with so many stars with quite distinct masses and radius , it is reasonable to expect that we will find quite different type of dynamos and regimes of stellar magnetic cycle .actually , we think it is likely to find a magnetic diversity identical to the one found in the acoustic oscillation spectra measured for the more than 500 sun - like stars already discovered , some of which have already shown evidence of a magnetic cycle activity . have obtained a proxy of the starspots number for the star hd49933 from amplitudes and frequencies of the acoustic modes of vibration . as in the nonlinear oscillator modelsthe activity level is determined by the structural parameters which in turn depends on the dynamo model .these studies potentially offer a simple theoretical scheme against which to test the observational findings .the authors thank the anonymous referee for the suggestions made to improve the quality of the article .thanks the convenors of the workshop and the international space science institute for the invitation and financial support . i.l . and d.p .would also like to thank arnab choudhuri and his collaborators for making the surya code publicly available .i.l . would like to thank his collaborators in this subject of research : ana brito , elisa cardoso , hugo silva , amaro rica da silva and sylvaine turck - chize .the work of i.l . was supported by grants from `` fundao para a cincia e tecnologia '' and `` fundao calouste gulbenkian '' .d.p . acknowledges the support from the fundao para a cincia e tecnologia ( fct ) grant sfrh / bpd/68409/2010 .m. n. acknowledges support from the hungarian science research fund ( otka grant no .
This article reviews some of the leading results obtained in solar dynamo physics by using temporal oscillator models as a tool to interpret observational data and dynamo model predictions. We discuss how solar observational data such as the sunspot number are used to infer the leading quantities responsible for solar variability over the last few centuries. Moreover, we discuss the advantages and difficulties of using inversion (or backward) methods rather than forward methods to interpret solar dynamo data. We argue that this approach could give us better insight into the leading physical processes responsible for the solar dynamo, much as helioseismology has improved our understanding of the thermodynamic structure and flow dynamics in the Sun's interior.
what is irreversibility of a process ?this question , in this form , does not make much sense .we first have to specify `` irreversibility with respect to what '' .it means we first need to decide a set of rules e. a set of allowed transformations together with some free resource to which one has to conform when trying to revert the process .we can then say that irreversibility basically measures the deterioration of some resource that does not come for free , within the rules we specified .when studying quantum error correction , one usually considers an extremely strict scenario , where legitimate corrections only amount to a fixed quantum channel applied after the action of the noise .this scenario corresponds to the task of trying to restore the entanglement initially shared by the input system ( undergoing the noise ) with an inaccessible reference , only by using local actions on the output system , being any kind of communication between the two systems impossible .being quantum error correction a basic task in quantum information theory , the literature on the subject grew rapidly in the last 15 years .it is however possible to devise two main sectors of research : the first one is devoted to the design of good quantum error correcting codes , and directly stems from an algebraic approach to _ perfect _ quantum error correction ; the second one tries to understand conditions under which _ approximate _ quantum error correction is possible . usually , while the former is more practically oriented , the latter is able to give information theoretical bounds on the performance of the optimum correction strategy , even when perfect correction is not possible , while leaving unspecified the optimum correction scheme itself .our contribution follows the second approach : we will derive some bounds relating the loss of entanglement due to the local action of a noisy channel on a bipartite state with the possibility of undoing such a noise . the original point in our analysisis that we will consider many inequivalent ways to measure entanglement in bipartite mixed states , hence obtaining many inequivalent measures of irreversibility . after reviewing the main results of ref . , we will show how we can relate such entropic quantities with different norm - induced measures of irreversibility , like those exploiting the cb - norm distance or the channel fidelity , therefore providing measures of the overall i .e. state independent irreversibility of a quantum channel .in the following , quantum systems will be often identified with the ( finite dimensional ) hilbert spaces supporting them , that is , the roman letter [ resp . , rigorously denoting the system only , will also serve as a shorthand notation instead of the more explicit [ resp . .the ( complex ) dimension of [ resp . will be denoted as [ resp . .the set of possible states of the system [ resp . , that is , the set of positive semi - definite operators with unit trace acting on [ resp . , will be equivalently denoted with [ resp . or [ resp . . a general quantum noise is described as a completely positive trace - preserving map i .channel_. if the input system is initially described by the state , we will write to denote . the aim of this section is to understand how one can measure the coherence of the evolution induced by on .( we will see in the following how to get rid of the explicit dependence on the input state and obtain a quantity measuring the overall invertibility of a given channel , as a function the channel only . 
) before continuing the discussion , we should clarify what we mean with the term `` coherence '' .imagine that the input system is actually the subsystem of a larger bipartite system , where the letter stands for _ reference _ , initially described by a pure state , such that =\rho^a.\ ] ] the situation is depicted in fig .[ fig:3 ] . is purified with respect to a reference system into the state .the noise acts on the system only , in such a way that is mapped into .,width=302 ] notice that the input state is mixed if and only if the pure state is entangled .then , the coherence of the evolution ( [ eq : evolution ] ) can be understood as the amount of residual entanglement survived in the bipartite output ( generally mixed ) state after the noise locally acted on only .however , any naive attempt to formalise such an intuitive idea is soon frustrated by the fact that there exist many different and generally inequivalent ways to measure the entanglement of a mixed bipartite system .this well - known phenomenon turns out in the existence of many different and generally inequivalent , but all in principle valid , ways to measure the coherence of an evolution .one possibility to overcome such a problem was considered already in ref . . there, schumacher introduced the quantity called _ entanglement fidelity _ of a channel with respect to an input state , defined as such a quantity ( which does not depend on the particular purification considered ) accurately describes how close the channel is to the noiseless channel on the support of .however , it was noticed that , as defined in eq .( [ eq : ent_fid ] ) , is _ not _ related to the coherence of the evolution , in that it is easy to see that a unitary channel e. completely coherent can result in a null entanglement fidelity .we then have to consider a more general situation , like the one depicted in fig . [ fig:1 ] ., here , after the noise , we apply a subsequent correction via a local restoring channel .the corrected bipartite output state is denoted by .,width=377 ] after the local noise produced the bipartite state , we apply a local restoring channel to obtain notice that in general the restoring channel can explicitly depend on the input state and on the noise .however , for sake of clarity of notation , we will leave such dependence understood , and make it explicit again , by writing , only when needed .we now compute the _ corrected _ entanglement fidelity and take the supremum over all possible corrections this is now a good measure of the coherence of the noisy evolution : by construction it is directly related to the degree of invertibility of the noise on the support of .the maximisation over all possible correcting channels in eq .( [ eq : corr_ent_fid ] ) can be extremely hard to compute . moreover , we are still interested in understanding how the coherence of a transformation is related to the theory of bipartite entanglement .the idea is that of finding some quantity ( typically an entropic - like function ) which is able to capture at one time both the amount of coherence preserved by the channel as well as the invertibility of the channel itself , possibly bypassing the explicit evaluation of , for which accurate upper and lower bounds would suffice .a key - concept in the theory of approximate quantum error correction is that of _ coherent information _ , which , for a bipartite state , is defined as where ] , is the so - called _ entropy of pure - state entanglement_. 
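As a concrete, if minimal, numerical illustration of these two quantities, the sketch below evaluates Schumacher's entanglement fidelity, F_e(rho, N) = sum_k |tr(rho A_k)|^2, and the coherent information of the corresponding output state, for a qubit depolarizing channel specified by its Kraus operators. The channel and the maximally mixed input state are chosen here only as an example.

import numpy as np

def entropy(rho):
    # von Neumann entropy in bits (zero eigenvalues ignored)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Kraus operators of a qubit depolarizing channel with error probability p
p = 0.1
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3*p/4) * I2, np.sqrt(p/4) * X, np.sqrt(p/4) * Y, np.sqrt(p/4) * Z]

rho = 0.5 * I2                                   # maximally mixed input state

# entanglement fidelity: F_e = sum_k |tr(rho A_k)|^2
F_e = sum(abs(np.trace(rho @ A))**2 for A in kraus)

# coherent information: purify rho against a reference R, send subsystem A through the channel
evals, evecs = np.linalg.eigh(rho)
psi = sum(np.sqrt(lam) * np.kron(evecs[:, i], I2[:, i]) for i, lam in enumerate(evals))
rho_AR = np.outer(psi, psi.conj())
rho_out = sum(np.kron(A, I2) @ rho_AR @ np.kron(A, I2).conj().T for A in kraus)
rho_B = sum(A @ rho @ A.conj().T for A in kraus)
I_c = entropy(rho_B) - entropy(rho_out)
print(F_e, I_c)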
here we refrain from provide even a short review of the other entropic - like entanglement measures we mentioned , which would be far beyond the scope of the present contribution .the interested reader is directed to refs . and . for our purposes , we are content with recalling that , given a bipartite state , the following inequalities hold where is the _ quantum mutual information_. moreover notice that it is commonly found that and , as dimensions of subsystems and increase , a mixed state picked up at random in the convex set of mixed bipartite states almost certainly ( that is , with probability approaching one exponentially fast in the dimension ) displays an even more dramatic separation our motivation is to work out a result analogous to theorem 1 , where , instead of the coherent information loss introduced in eq .( [ eq : coherent - info - loss ] ) , we would like to use some other entanglement measure loss where the letter `` '' could stand , for example , for `` '' ( squashed entanglement loss ) or `` '' ( entanglement of formation loss ) .already at a first glance , we can already say that , thanks to eqs .( [ eq : bounds]-[eq : hashing ] ) , the second part of theorem 1 can be extended to other entanglement loss measures , that is for every channel . instead , the generalisation of the first part of theorem 1 is not straightforward : because of the typical entanglement behaviour summarised in eq .( [ eq : extreme - bounds ] ) , we could easily have , for example , a channel causing a _ vanishingly small _ entanglement of formation loss with , at the same time , a relatively _ severe _ coherent information loss .still , the following argument suggests that _ there must be _ an analogous of eq .( [ eq : miao ] ) for alternative entanglement losses : in fact , when evaluated on pure states , all mentioned entanglement measures coincide with the entropy of pure - state entanglement .moreover , many of these entanglement measures are known to be continuous in the neighbourhood of pure states .this is equivalent to the fact that , in the neighbourhood of pure states , they have to be reciprocally boundable .therefore , if the action of the noise is `` sufficiently gentle '' and the output state exhibits an entanglement structure which is `` sufficiently close '' to pure - state entanglement itself being pure .a trivial example of a mixed state with pure - state entanglement structure is given by , where and are two subsystems of .] , then it should be possible to write the analogous of eq .( [ eq : miao ] ) in terms of or , for example , as well .the problem is to explicitly write down such analogous formula . in ref . , the interested reader can find the proof of the following theorem let be the input state for a channel .let and be the corresponding losses of squashed entanglement and entanglement of formation , respectively . then and notice the large numerical factor , depending on the dimensions of the underlying subsystems , in front of the entanglement of formation loss : this feature is reminiscent of the previously mentioned irreversibility gap between distillable entanglement and entanglement of formation , and makes it possible the situation where the noise causes a vanishingly ( in the dimensions ) small entanglement of formation loss , even though its action is extremely dissipative with respect to the loss of coherent information . 
on the contrary, the loss of squashed entanglement seems to be an efficient indicator of irreversibility , almost as good as the coherent information loss in fact , only an extra constant factor of appears in eq .( [ eq : direct1 ] ) with respect to eq .( [ eq : miao]) ; on the other hand , it is symmetric under the exchange of the input system with the output system , a property that does not hold for the coherent information loss .summarising this section , the important thing is that there always exist a threshold ( which is strictly positive for finite dimensional systems ) below which all entanglement losses become equivalent , in the sense that they can be reciprocally bounded ( it is noteworthy that , in the case of squashed entanglement loss and coherent information loss , we can have dimension - independent bounds , which is a desirable property when dealing with quantum channels alone , see section 5 below ) . it is interesting now to forget for a moment about the channel itself , and see what eqs .( [ eq : miao ] ) , ( [ eq : direct1 ] ) , and ( [ eq : direct2 ] ) mean in terms of a given bipartite mixed state only .first of all , notice that , for every mixed state , there exist two pure states , and , and two channels , and , such that and .now , for a given state , let us define and where the letter is used as before are defined in the same way , by simply exchanging subsystems labels , as . ] .then , theorems 1 and 2 tell us that there exist channels and , and two pure states , and , with =\tau^a ] , such that where , , and . in a sense , either or being small . ] , it means that the entanglement present in the state is basically pure - state entanglement , even if is itself a mixed state .this is the reason for which we can establish a quantitative relation between typically inequivalent entanglement measures , as the following corollary of theorems 1 and 2 clearly states for an arbitrary bipartite mixed state , with , the following inequality holds where is a function as in eq .( [ eq : converse ] ) in theorem 1 . this corollary is in a sense the quantitative version of the intuitive argument given before theorem 2 , and it represents a first attempt in complementing the findings of ref . , summarised in eq .( [ eq : extreme - bounds ] ) . ) shows the behaviour , for a bipartite system of two qutrits , of the lower bound in eq .( [ eq : fgprime ] ) for coherent information as a function of entanglement of formation .coherent information , and hence distillable entanglement , are bounded from below by the thick curve .notice that has to be extraordinarily close to its maximum value in order to have a non trivial bound from eq .( [ eq : fgprime ] ) .this fact suggests that the bound itself could be improved.,width=377 ] it is also possible to invert eq .( [ eq : gap ] ) and obtain a function such that for all bipartite state .the plot of is given in fig .[ fig:2 ] for ( for qubits every entangled state is also distillable ) , for a state for which . the plotted curve displays the typical behaviour of the bound ( [ eq : fgprime ] ) .notice from fig .[ fig:2 ] that entanglement of formation has to be extremely close to its maximum attainable value in order to obtain a non trivial bound from eq .( [ eq : fgprime ] ) .this is a strong evidence that the bound itself could probably be improved . 
nonetheless , we believe that such an improvement , if possible , would only make smaller some ( unimportant ) constants which are independent of the dimension , while leaving the leading order of dependence on in the right hand side of eq .( [ eq : gap ] ) untouched .the previous analysis , following ref . , was done in order to quantify the invertibility of a noisy evolution with respect to _ a given _ input state . in this section ,we want to derive quantities characterising the `` overall '' invertibility of a given channel . in other words, we would like to get rid of the explicit dependence on the input state and obtain the analogous of eqs .( [ eq : miao ] ) , ( [ eq : miao2 ] ) , ( [ eq : direct1 ] ) , and ( [ eq : direct2 ] ) as functions of the channel only .intuitively , to do this , we should quantify how close the corrected channel can be to the noiseless channel , for all possible corrections .however , in doing this , we have to be very careful about which channel distance function we adopt in order to measure `` closeness '' . a safe choice consists in using the distance induced by the so - called _ norm of complete boundedness _ , for short _ cb - norm _ , defined as where is the identity channel on density matrices , and \le 1}\tr\left[\,|\mn(\rho)|\,\right].\ ] ] ( we put the absolute value inside the trace because in literature one often deals also with non completely positive maps , so that the extension can be non positive . ) notice , that , in general , , and the two norms can be inequivalent . a part of the rather technical definition of cb - norm ( the extension in eq .( [ eq : def - cb ] ) is necessary , basically for the same reasons for which we usually consider complete positivity instead of the simple positivity ) , we will be content with knowing that , for channels , and , and that the following theorem holds let be a channel , with . then where the infimum of the entanglement fidelity is done over all normalised states . it is then natural to define a cb - norm based measure of the overall invertibility of a given channel as with the infimum taken over all possible correcting channels . for a moment , let us now go back to the other functions we introduced before. we will be able to relate them , in some cases with dimension independent bounds , to the cb - norm based invertibility . given the loss function , where is used to denote the coherent information loss , the squashed entanglement loss , and the entanglement of formation loss , respectively , we define the following quantity where the supremum is taken over all possible input states .analogously , from eq .( [ eq : corr_ent_fid ] ) , let us define such quantities are now functions of the channel only , and we want to understand how well and capture the `` overall '' invertibility of a channel .first of all , let us understand how they are related .let be the state for which is achieved .then , on the other hand , let be achieved with . then , where , as usual , , , and .we are now in position , thanks to theorem 3 , to show how , , and are related to each other .let the value be achieved by the couple .then , {2k_x\delta_x(\mn ) } , \end{split}\ ] ] where in the second line we used theorem 3 , since the channel has equal input and output spaces .conversely , let be achieved by .then , thanks to eq .( [ eq : converse2 ] ) for all channels .let be the channel achieving the infimum in eq .( [ eq : cb - invert ] ) .then , summarising , we found that {2k_x\delta_x(\mn)}. 
\end{split}\ ] ] in the function , the dependence on the dimension is present ( see theorem 1 ) , however only inside a logarithm : this is not bad , in view of coding theorems .the dependence on can instead be dramatic in ; on the contrary , both and are independent on the dimension .this work is funded by japan science and technology agency , through the erato - sorst project on quantum computation and information .the author would like to thank in particular m hayashi , and k matsumoto for interesting discussions and illuminating suggestions .m gregoratti and r f werner , j. mod .opt . * 50 * , 915 ( 2003 ) ; f buscemi , g chiribella , and g m dariano , phys .lett . * 95 * , 090501 ( 2005 ) ; j a smolin , f verstraete , and a winter , phys . rev . a * 72 * , 052317 ( 2005 ) ; f buscemi , phys . rev . lett . *99 * , 180501 ( 2007 ) .m a nielsen and i l chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , 200 ) ; j kempe , in _ quantum decoherence , poincar seminar 2005 _ , progress in mathematical physics series ( birkhauser verlag , berlin , 2006 ) .
The action of a channel on a quantum system, when nontrivial, always causes deterioration of the initial quantum resources, understood here as the entanglement initially shared by the input system with some reference purifying it. One effective way to measure such deterioration is the loss of coherent information, namely the difference between the initial and the final coherent information: this difference is ``small'' if and only if the action of the channel can be ``almost perfectly'' corrected with probability one. In this work, we generalise this result to different entanglement loss functions, notably including the entanglement of formation loss, and prove that many inequivalent entanglement measures lead to equivalent conditions for approximate quantum error correction. In doing this, we show how different measures of bipartite entanglement give rise to corresponding distance-like functions between quantum channels, and we investigate how these induced distances are related to the cb-norm.
all astronomers recognize that spectroscopy offers a wealth of information that can help characterize the properties of the observing target . in the context of stellar astrophysics, spectroscopy plays many fundamental roles .the relative strengths and widths of stellar absorption lines provide access to physical properties like effective temperature ( ) and surface gravity ( ) , enabling model comparisons in the hertzsprung - russell diagram to estimate the masses and ages so crucial to understanding stellar evolution , as well as individual elemental abundances or the collective metallicity " ( typically parameterized as } ] . in the most direct case of spectral synthesis ,a model atmosphere structure is assembled and simulations of energy transport through it are conducted with a radiative transfer code ( e.g. , * ? ? ?* ; * ? ? ?* ) . however , in general this approach is often computationally prohibitive for most iterative methods of probabilistic inference .one partial compromise is to interpolate over a library of atmosphere structures that were pre - computed for a discrete set of parameter values , , for some arbitrary .then , perform a radiative transfer calculation with that interpolated atmosphere to synthesize ( e.g. , sme ; * ? ? ?a more common variant is to interpolate over a pre - synthesized library of model spectra , ( e.g. , * ? ? ?* ; * ? ? ?although the former approach is preferable , the computational cost of repeated spectral synthesis is enough to make a detailed exploration of parameter space less appealing ( although see section [ sec : discussion ] ) .although the framework we are advocating is applicable for _ any _ back - end " that generates a model spectrum , it is illustrated here using the latter approach with the phoenix library . in practice , this reliance on spectral interpolation within a model library requires a sophisticated treatment of associated uncertainties .the key problems are that the spectra themselves do not vary in a straightforward way as a function of ( especially within spectral lines ) , and that the typical model library is only sparsely sampled in . because of these issues , standard interpolation methods necessarily result in some information loss .the practical consequence is that the inferred posteriors on the model parameters are often sharply peaked near a grid point in the library , , potentially biasing the results and artificially shrinking the inferred parameter uncertainties ( e.g. , ) . to mitigate these effects ,we develop a spectral emulator " that smoothly interpolates in a sparse model library and records a covariance term to be used in the likelihood calculation that accounts for the associated uncertainties .the emulator is described in detail in appendix [ sec : appendix ] .we first decompose the model library into a representative set of eigenspectra using a principal component analysis . at each gridpoint in the library, the corresponding spectrum can be reconstructed with a linear combination of these eigenspectra .the weights associated with each eigenspectrum contribution vary smoothly as a function of the parameters , and so are used to train a gaussian process to interpolate the weights associated with any arbitrary . in this way , the emulator delivers a probability distribution that represents the range of possible interpolated spectra . by then marginalizing over this distribution, we can modify the likelihood function to propagate the associated interpolation uncertainty . 
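A minimal sketch of such an emulator is given below, using scikit-learn and assuming two pre-existing arrays: grid, the library parameters with shape (n_models, 3) for (Teff, logg, [Fe/H]), and flux, the corresponding synthetic spectra with shape (n_models, n_pix). The kernel choice, length scales, and number of eigenspectra are illustrative rather than those adopted in the emulator described in the appendix.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def train_emulator(grid, flux, n_components=5):
    # decompose the library into eigenspectra and train one GP per eigenspectrum weight
    pca = PCA(n_components=n_components)
    weights = pca.fit_transform(flux)                    # (n_models, n_components)
    kernel = ConstantKernel() * RBF(length_scale=[300.0, 0.5, 0.5])
    gps = [GaussianProcessRegressor(kernel, normalize_y=True).fit(grid, weights[:, k])
           for k in range(n_components)]
    return pca, gps

def emulate(theta, pca, gps):
    # interpolated spectrum at arbitrary theta = (Teff, logg, [Fe/H]),
    # plus the 1-sigma uncertainty of each interpolated weight
    preds = [gp.predict(np.atleast_2d(theta), return_std=True) for gp in gps]
    mu_w = np.array([m[0] for m, _ in preds])
    sd_w = np.array([s[0] for _, s in preds])
    spec = pca.inverse_transform(mu_w.reshape(1, -1))[0]
    return spec, sd_w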
in the remainder of this section , the details of generating the reconstructed ( interpolated ) spectrum are not especially relevant ( see appendix [ sec : appendix ] ) .typically , the raw " interpolated model spectrum that was generated above is highly over - sampled , and does not account for several additional observational and instrumental effects that become important in comparisons with real data .therefore , a certain amount of post - processing is required before assessing the model quality .we treat that post - processing in two stages . the first stage deals with an additional set of extrinsic " parameters , , that incorporate some dynamical considerations as well as observational effects related to geometry and the relative location of the target .the second stage employs a suite of nuisance parameters , , designed to forward model some imperfections in the data calibration .we can further divide into those parameters that impact the model primarily in the spectral or flux dimensions . for the former ,we consider three kernels that contribute to the line - of - sight velocity distribution function .the first , , treats the instrumental spectral broadening . for illustrative purposes , we assume is a gaussian with a mean of zero and a constant width at all , although more sophisticated forms could be adopted .the second , , characterizes the broadening induced by stellar rotation , parameterized by as described by ( * ? ? ?* his eq . 18.14 ) , the rotation velocity at the stellar equator projected on the line of sight ( where is the inclination of the stellar rotation axis ) . andthe third , , incorporates the radial velocity through a doppler shift .the model spectrum is modified by the parameters ] are then applied as with simplified notation such that ] .some spectral libraries provide spectra as with peak fluxes normalized to a constant value , in that case , will simply serve as an arbitrary scaling parameter .the procedure so far is composed of straightforward operations demanded by practical astronomical and computing issues . if the data were _ perfectly _ calibrated , we could proceed to a likelihood calculation that makes a direct comparison with .however , the calibration of the continuum shape for data with reasonably large spectral range is often not good enough to do this .a common example of this imperfect calibration can be readily seen when comparing the overlaps between spectral orders from echelle observations .even if such imperfections ( e.g. , in the flat field or blaze corrections , or perhaps more likely in the flux calibration process ) induce only minor , low - level deviations in the continuum shape , they can add up to a significant contribution in the likelihood function and thereby potentially bias the results . the traditional approach to dealing with this issue has been avoidance ; a low - order polynomial or spline function is matched ( separately ) to the model and the data and then divided off to normalize the spectra . while this is straightforward to do for earlier type stars , it only masks the problem .this normalization procedure disposes of _ useful _ physical information content available in the continuum shape , and can be considerably uncertain in cases where the spectral line density is high ( e.g. , for cooler stellar photospheres ) .moreover , it can not propagate the uncertainty inherent in deriving the normalization functions into a proper inference framework . 
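The extrinsic post-processing steps described above can be sketched compactly. The sketch assumes the model is sampled uniformly in log-wavelength (so velocity operations map onto constant pixel kernels), fixes the limb-darkening coefficient at an illustrative value, and uses the standard Gaussian and Gray rotational profiles; the default parameter values are placeholders.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import interp1d

C_KMS = 2.99792458e5

def apply_extrinsic(wl, flux, v_inst=6.0, vsini=5.0, v_z=0.0, omega=1.0, eps=0.6):
    dv = C_KMS * np.median(np.diff(wl) / wl[:-1])        # km/s per pixel (log-uniform grid)

    # instrumental broadening: Gaussian kernel with sigma = v_inst (km/s)
    flux = gaussian_filter1d(flux, sigma=v_inst / dv)

    # rotational broadening: Gray (2005) profile with linear limb darkening eps
    if vsini > dv:
        v = np.arange(-vsini, vsini + dv, dv)
        x = np.clip(v / vsini, -1.0, 1.0)
        kern = 2*(1 - eps)*np.sqrt(1 - x**2) + 0.5*np.pi*eps*(1 - x**2)
        flux = np.convolve(flux, kern / kern.sum(), mode="same")

    # radial-velocity shift: resample the Doppler-shifted model back onto wl
    wl_shift = wl * np.sqrt((1 + v_z/C_KMS) / (1 - v_z/C_KMS))
    flux = interp1d(wl_shift, flux, bounds_error=False, fill_value="extrapolate")(wl)

    return omega * flux                                  # overall flux scaling

The continuum-calibration imperfections just discussed are not addressed by these operations, which motivates the forward-modeling treatment described next.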
instead, we employ a more rigorous approach that forward - models the calibration imperfections with a set of nuisance parameters that modify the shape of the model spectrum . by later marginalizing over these nuisance parameters ,we properly account for any uncertainties that these kinds of calibration imperfections induce on the stellar parameters of interest while also maintaining the useful information in the continuum shape . in practice , this is achieved by distorting segments of the model with polynomials , ( e.g. , * ? ? ?* ; * ? ? ?figure [ fig : chebyshev ] demonstrates how these nuisance parameters are applied to the model . for spectral orders ,each denoted with index , the model spectrum can be decomposed as where is an degree chebyshev function .the coefficients are considered a set of nuisance parameters , ] , an amplitude ( ) and a scale ( ) .the are termed _hyperparameters _ here ; because a gaussian process describes a population of functions generated by random draws from a probability distribution set by a mean vector and a covariance matrix , the kernel parameters are naturally part of a hierarchical model . in this specific case ,the functions described by these hyperparameters represent many realizations of covariant residuals from a spectral fit .figure [ fig : matrix ] shows an example of the gaussian process kernel and the covariant residuals that can be generated from it . to ensure that remains a relatively sparse matrix ( for computational expediency ) , we employ a hann window function to taper the kernel .the truncation distance can be set to a multiple of the scale ( we set ) .in addition to the global covariance structure , there can be local regions of highly correlated residuals .these patches of large are usually produced by pathologically incorrect spectral features in the model , due to systematic imperfections like missing opacity sources or poorly constrained atomic / molecular data ( e.g. , oscillator strengths ) .some representative examples are shown in figure [ fig : badlines ] .to parameterize such regions in , we introduce a sequence of non - stationary kernels that explicitly depend on the actual wavelength values of a pair of pixels ( and ) , and not simply their separation ( ) . assuming that these local residual features are primarily due to discrepancies in the spectral line depth ( rather than the line shape or central wavelength ) , a simple gaussian is a reasonable residual model . 
in that case, the pixel residuals of the -th such local feature could be described as \ ] ] with peak amplitude , central wavelength , and width .we assume that the amplitude of this gaussian feature is drawn from a normal distribution with mean 0 and variance .the pixels in this gaussian - shaped residual are correlated because each pixel shares a common random scale factor ( ) .then , the covariance of any two pixels in this region is given by eq .[ eqn : expectation ] , where the expectation value is taken with respect to the probability distribution in eq .[ eqn : amplitude ] a_k \exp \left [ - \frac{r^2(\lambda_j , \mu_k)}{2 \sigma_k^2 } \right ] \right \rangle \nonumber \\ & = & \langle a_k^2 \rangle \exp \left [ - \frac{r^2(\lambda_i , \mu_k ) + r^2(\lambda_j , \mu_k)}{2 \sigma_k^2 } \right ] \nonumber \\ & = & a_k^2 \exp \left [ - \ , \frac{r^2(\lambda_i , \mu_k ) + r^2(\lambda_j , \mu_k)}{2 \sigma_k^2}\right ] .\label{eqn : kregion}\end{aligned}\ ] ] the full local covariance kernel covering all of the possible gaussian residuals is composed of a linear combination of kernels , with a corresponding set of hyperparameters ] .the factor is a parameter that scales up the poisson noise in each pixel by a constant factor to account for additional detector or data reduction uncertainties ( e.g. , read noise , uncertainties in the spectral extraction procedure , etc . ) ; typically for well - calibrated optical spectra .if there are local covariance patches ( see section [ subsec : mcmc ] on how this is determined ) , then there are elements in the set of covariance hyperparameters , .figure [ fig : matrix ] provides a graphical illustration of how the kernels that comprise the covariance matrix are able to reproduce the structure present in a typical residual spectrum .the bayesian framework of this inference approach permits us to specify prior knowledge about the model parameters , .as will be discussed further in sections [ sec : examples ] and [ sec : discussion ] , in most cases it is necessary to utilize some independent information ( e.g. , from asteroseismology constraints or stellar evolution models ) as a prior on the surface gravity .but otherwise we generally recommend a conservative assignment of uniform priors , such that is flat over the spectral library grid ( and zero elsewhere ) and is flat for physically meaningful values ( e.g. , , and ). for ( early type ) stars with a clear continuum , it makes sense to assume flat priors on the polynomial parameters .however , information about the calibration accuracy ( e.g. , from comparisons of multiple calibration sources in the same observation sequence ) can be encoded into a simple prior on the chebyshev coefficients ; for example , gaussian priors with widths that represent the fractional variance between different derived calibration functions would be reasonable . for ( late type )stars with a poorly defined continuum , some judicious tapering of the priors ( such that small coefficients at high are preferred ) may be required to ensure that broad spectral features are not absorbed into the polynomial ( see section [ sec : examples ] ) . in general , uniform ( non - negative ) priorsare recommended for the global kernel hyperparameters . 
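Putting the pieces together, the covariance matrix can be assembled as in the following sketch. The equations above fix the form of the local kernels; the Matérn-3/2 form of the global kernel, the factor of four in the taper truncation, and the use of velocity separations are choices made here for concreteness and should be treated as assumptions of the sketch.

import numpy as np

C_KMS = 2.99792458e5

def global_kernel(wl, a_g, l_g, r0_factor=4.0):
    # stationary kernel in velocity separation, tapered by a Hann window beyond
    # r0 = r0_factor * l_g so that the matrix stays sparse
    r = C_KMS * np.abs(np.subtract.outer(wl, wl)) / (0.5 * np.add.outer(wl, wl))
    k = a_g**2 * (1 + np.sqrt(3) * r / l_g) * np.exp(-np.sqrt(3) * r / l_g)
    r0 = r0_factor * l_g
    return np.where(r < r0, 0.5 * (1 + np.cos(np.pi * r / r0)), 0.0) * k

def local_kernel(wl, a_k, mu_k, sigma_k):
    # non-stationary kernel for one outlier feature centred at mu_k (velocity units)
    r2 = (C_KMS * (wl - mu_k) / mu_k)**2
    return a_k**2 * np.exp(-(r2[:, None] + r2[None, :]) / (2.0 * sigma_k**2))

def full_covariance(wl, sigma_pix, b, a_g, l_g, local_params):
    # C = b * diag(sigma_pix^2) + K_global + sum_k K_local,k
    C = np.diag(b * sigma_pix**2) + global_kernel(wl, a_g, l_g)
    for a_k, mu_k, sigma_k in local_params:
        C += local_kernel(wl, a_k, mu_k, sigma_k)
    return C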
for the local kernels , we typically adopt uniform priors for the amplitudes and means \{ , } , but construct a logistic prior for the widths \{ } that is flat below the width of the line - of - sight velocity distribution function ( defined as the convolution of three broadening kernels in eq .[ eqn : broadening ] ) , , and smoothly tapers to zero at larger values : such a prior formulation prevents local kernels from diffusing to large and low , since that kind of behavior is better treated by the global kernel .when modeling real data , there is no _ a priori _ information about the locations of the local kernels ; they are instantiated as needed ( see section [ subsec : mcmc ] ) .however , using the knowledge gained from previous inferences of similar targets , one could instead start by instantiating kernels at the outset with priors on where there are known to be systematic issues with the synthetic spectra .the inference framework developed here has a natural blocked structure between the collections of interesting " parameters , ] posterior is only marginally broadened ( by % ) . upon closer inspection of the latter ,it becomes clear that the posterior has an artificially sharp peak located at a grid point in the model library ( }= -0.5 ] , and inflates the associated uncertainty by a factor of 2.5 compared to the standard " inference approach .the uncertainty on is now 5 larger than in the original test . finally ,in a fourth test we fold in the methodology for the local covariance kernels described in section [ subsec : local_covariance ] .this has little effect on the widths of the parameter posteriors ( % increase ) , but does shift their peaks to slightly higher values in both and } ] , produced because the phoenix models tend to have more ` outlier ' spectral lines with _ over - predicted _ line depths . without the local covariance kernels to downweight these outliers , the models tend toward lower metallicity to account for them .but when the local kernels are included , this bias is reduced and a more appropriate higher } ] are shown together in figure [ fig : gl51_posterior ] .like wasp-14 , we find that a more appropriate treatment of the covariance matrix results in a substantial broadening of the parameter posteriors ; the uncertainties on and } ] .but when we consider the more sophisticated versions of that employ gaussian processes to treat correlated residuals , the contribution of these features to the likelihood calculation is reduced , and therefore } ] or for cases influenced by prominent ` outlier ' lines ) .this issue of implausibly small formal uncertainties has long been recognized in the stellar spectroscopy community .the standard solution has been to add ( in quadrature ) a ` floor ' contribution , imposed independently on each parameter and meant to be representative of the systematics ( e.g. , see or for clear and open discussions of this approach ) .the key problems with this tactic are that these systematics are in reality degenerate ( and so should not be applied independently ) and that they dominate the uncertainty budget , but are in a large sense arbitrary they are not self - consistently derived in the likelihood framework .our goal here has been to treat one aspect of this systematic uncertainty budget internal to the forward - modeling process , by employing a non - trivial covariance matrix that accounts for generic issues in the pixel - by - pixel inference problem . 
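Two of the ingredients just described are summarized in the sketch below: the multivariate-Gaussian log-likelihood of the residual spectrum under the full covariance matrix, evaluated through a Cholesky factorization, and the logistic taper used as a prior on the local-kernel widths. The steepness of the taper is an assumption of this sketch.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ln_likelihood(resid, C):
    # ln L = -0.5 * (r^T C^{-1} r + ln det C + N ln 2*pi), with r = data - model
    cf = cho_factor(C, lower=True)
    log_det = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return -0.5 * (resid @ cho_solve(cf, resid) + log_det
                   + resid.size * np.log(2.0 * np.pi))

def ln_prior_local_width(sigma_k, sigma_v, steepness=0.2):
    # flat (log-prior ~ 0) for sigma_k well below the line-of-sight broadening
    # width sigma_v, smoothly tapering toward -inf at larger values
    return -np.log1p(np.exp((sigma_k - sigma_v) / (steepness * sigma_v)))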
given the results above , we have demonstrated that this procedure successfully accounts for a substantial fraction of the ( empirically motivated ) _ ad hoc _ systematic ` floor ' contribution typically adopted in inference studies . however , although a likelihood function that can properly account for the character of the residuals is important , it does not by itself treat _ all _ of the important kinds of systematics in the general spectroscopic inference problem . in future work that can build on the flexible likelihood formalism we have advocated here ,there are three other important sources of systematic uncertainty that should be considered : ( 1 ) data calibration ; ( 2 ) optimized parameter sensitivity ; and ( 3 ) model assumptions , or flexibility .we discuss each of these issues briefly , with attention paid to potential remedies that fit within the likelihood framework developed here .perhaps the most familiar source of systematics lies with issues in the data calibration . in the idealized case of perfect calibration, the physical parameters inferred from different observations of the same ( static ) source should be indistinguishable . but given the complexity of a detailed spectroscopic calibration , that is not typically the case in practice . the common approach to quantify the systematic uncertainties contributed by calibration issuesis to compare the inferences made using different spectra ( e.g. , from different instruments and/or observations ) .the final parameter values are usually presented as an average of these separate inferences , with the uncertainties inflated by adding in quadrature some parameter - independent terms that account for their dispersion .the more appropriate way of combining these inferences is to model the individual spectra simultaneously in a hierarchical framework like the one discussed in section [ sec : method ] : in that way , the dispersion is appropriately propagated into the parameter uncertainties while any intrinsic degeneracies are preserved ( which is not possible in the standard ` weighted average ' approach ) .ultimately , one could also introduce some empirically - motivated nuisance parameters that are capable of forward - modeling imperfections in the data calibration , similar to the approach adopted in section [ subsec : postprocess ] ( e.g. , see fig . [fig : chebyshev ] ) .another important source of systematic _ bias _ comes from the fact that certain physical parameters have only a relatively subtle effect on the spectrum .stellar spectroscopists are familiar with this being an issue when inferring the surface gravity , , since it is primarily manifested as low - level modifications to the wings of certain spectral lines like mg b and in the equivalent widths of lines from singly - ionized elements like and .when modeling data with a large spectral range , the effects of varying are small compared to the residuals introduced by the many other model imperfections .consequently , the surface gravity will not be constrained well , and inferences on ( and therefore other degenerate parameters ) can be substantially biased . as an example , when fitting the wasp-14 data in section [ subsec : wasp ] without prior information on the surface gravity , we find a shift of .9 dex to lower ( and accompanying shifts in and } ] are in excellent agreement with those derived by using the spc method , but are shifted by 150k ( higher ) and 0.15dex ( higher ) , respectively , compared to the phoenix results . 
while the relevant physics included in these models is very similar for these temperatures and the inferred stellar parameters are similar in an absolute sense , it is still striking that the systematic shift between models is several times larger than the statistical uncertainties derived from our likelihood function . at this point, there is little to be done to rectify these model - dependent differences ; in the future , one hopes that the model inputs can be refined based on feedback from the data ( see sect . [ sec : discussion ] ) .any inferences of physical parameters should only be considered in the context of the assumed models . aside from these different assumptions and inputs ,the limited _ flexibility _ of these models certainly also contributes to the systematic uncertainty budget , and is possibly also a source of systematic bias .model spectral libraries typically have neglected dimensions in parameter - space that , if made available , would be expected to broaden and possibly shift the posteriors for the primary physical parameters .one typical example lies with element - specific abundance patterns , often distilled to the enhancement of -elements ( i.e. , [ /fe ] ) . if the target star has a non - zero [ /fe ] ( an enhancement or deficit relative to the solar ratios ) , but is fit with a single , global metallicity pattern , it is not clear that the sophisticated covariance formalism developed here would be capable of appropriately capturing such residual behavior .another prominent example of an important hidden parameter dimension is the microturbulence , which for some spectral types and spectral resolution may impact the spectrum in a similar way as the surface gravity ( and may be partly responsible for the bias discussed above ; * ? ? ?* ) . to mitigate the resulting deficiencies in precision ( and potentially accuracy ) on the inference of other parameters , we would ideally employ libraries or modeling front - ends that can incorporate some flexibility in these hidden ( i.e. , ignored ) dimensions of parameter - space ( e.g. , individual elemental or group - based abundance patterns , microturbulence , etc . )astronomers exploit spectroscopy to retrieve physical information about their targets .ideally , such inferences are made with the maximal precision afforded by the measurement noise , and accurately reflect the uncertainties with minimal systematic bias .but in practice , the spectral models used as references are never perfect representations . even modest mismatches between data and model can propagate substantial systematic uncertainty into the inference problem . in high - sensitivity applications ( e.g. , stellar and exoplanetary astrophysics ) , ignoring these systematicscan give a false sense of both precision and accuracy in the inferences of key parameters .typically , the more egregious of these imperfections are mitigated " by dismissal ( explicitly not considering a subset of the data ; e.g. , masking , clipping ) .rarely , they are confronted directly with painstaking , computationally expensive fine - tuning of more general ( nuisance ) parameters in the model ( e.g. 
, oscillator strengths , opacities ) , albeit only over a very limited spectral range and region of physical parameter - space .we have presented an alternative approach to dealing with this fundamental issue , grounded in a generative bayesian framework .the method advocated here constructs a sophisticated likelihood function , employing a non - trivial covariance matrix to treat the correlated pixel - to - pixel residuals generated from intrinsically imperfect models . that matrix is composed of a linear combination of _ global _ ( stationary ) and _ local _ ( non - stationary ) gaussian process kernels , which parameterize an overall mild covariance structure as well as small patches of highly discrepant outlier features .in the context of a given model parameterization ( i.e. , synthetic spectral library , or a more complex and flexible model generator ) , the framework we have developed provides a better inference than the standard ( or cross - correlation ) comparison .we have built up a series of tests that demonstrates how the emulator , global kernels , and local kernels affect the nature of the inference on the stellar parameters . to demonstrate how the framework is used , we determined the surface parameters of main - sequence stars with mid - f and mid - m spectral types from high - s / n optical and near - infrared data , with reference to pre - computed model libraries ( sect .[ sec : examples ] ) .the source code developed here is open and freely available for use : see http://iancze.github.io/starfish .the novelty of employing this kind of likelihood function in the spectroscopic inference problem is that the treatment of data model mismatches ( in essence , the fit quality ) is explicitly built into the forward - modeling framework .this offers the unique advantage that discrepant spectral features ( outliers ) , which may contain substantial ( even crucial ) information about the parameters of interest , can still effectively propagate their useful information content into the posteriors with a weighting that is determined self - consistently . from a practical standpoint ,this means that a larger spectral range can be used and model imperfections can be downweighted by the usage of covariance kernels .the global covariance framework provides more appropriate estimates of the posterior probability distribution functions ( i.e. , the precision or uncertainty estimates ) for the model parameters . the automated identification and disciplined downweighting of problematic outlier " spectral lines ( those that can not be reproduced with any combination of the model parameters ) with local covariance kernels can prevent them from overly influencing ( and possibly biasing , especially in cases with few spectral features available ) the inferences . in many cases ,the underlying physical problem lies with incorrect ( or inaccurate ) atomic and/or opacity data used in the models . in this sense, the posteriors of the hyperparameters of the local covariance kernels can actually indicate in what sense and scale these inputs need to be modified to better reproduce observational reality .the approach we describe is generally applicable to any spectroscopic inference problem ( e.g. , population synthesis in unresolved star clusters or galaxies , physical / chemical models of emission line spectra in star - forming regions , etc . 
) .moreover , it has the flexibility to incorporate additional information ( as priors ) or parametric complexity ( if desired ) , and could be deployed as a substitute for a simplistic metric in already - established tools ( e.g. , sme ). another potential application might be in the estimation of radial velocities using traditional doppler - tracking pipelines for exoplanet or binary star research .poorly modeled micro - tellurics can lead to incorrect measurements of radial velocities for certain contaminated chunks of the spectrum , causing them to give unrealistically precise but biased velocity measurements .a flexible noise model would broaden the posteriors on these points and allow them to be combined into a more accurate systemic velocity .ultimately , the benefits of employing covariance kernels to accommodate imperfect models could be extended well beyond modeling the spectra of individual targets . in principle , the approach we have described here can be used to systematically discover and quantify imperfections in spectral models and eventually to build data - driven improvements of those models that are more appropriate for spectroscopic inference . by fitting many stellar spectra with the same family of models, we can catalog the covariant structure of the fit residuals especially the parameters of the local covariance kernels to collate quantitative information about where and how the models tend to deviate from observational reality . that information can be passed to the spectral synthesis community , in some cases enabling modifications that will improve the quality of the spectral models . on a large enough scale, this feedback between observers and modelers could be used to refine inputs like atomic and molecular data ( oscillator strengths , opacities ) , elemental abundance patterns , and perhaps the stellar atmosphere structures .if one has access to the radiative synthesis process that generates the model spectra , there are many possible means to improve their quality .in particular , a process of history matching can be used to rule out regions of parameter space where the models do not fit well ( e.g. , for a use in galaxy formation simulations , see ) .for example , if one had full control over the radiative synthesis code , stellar structure code , and atomic line database , one could improve the performance of the spectral emulator by ruling out regions of parameter space for these separate components that are inconsistent with a collection of observed spectra , such as a set of standard stars spanning the full range of spectral classifications . in a similar vein, we could also simultaneously use several synthetic spectral libraries to infer the stellar parameters while also identifying discrepant regions of the spectrum .a treatment using multiple synthetic libraries would likely reveal interesting correlations between model discrepancies , such as a specific signature among many lines ( e.g. deviations in spectral line shape that can not be explained by variations in ) .conversely , if a discrepant feature is seen for all models , it could be due to either an anomaly with the given star ( e.g. , a chromospheric line due to activity or perhaps an intervening interstellar absorption line ) or a correlated difficulty among all models ( e.g. 
, an incorrect atomic constant ) .alternatively , this kind of feedback could be used to make data - driven modifications to the already existing models , creating a new semi - empirical model library .this could be accomplished by linking the parameters of the covariance kernels while fitting many stars of similar spectral type in a hierarchical bayesian model , which would add confidence to the assessment that certain spectral features are _systematic _ outliers and offer general quantitative guidance on how to weight them in the likelihood calculation . rather than simply assembling an empirical spectral library using only observations , this combined machine - learning approach would naturally provide a physical anchoring for the key physical parameters , since they are reflected in the spectra based on the physical assumptions in the original models .this kind of large - scale analysis holds great promise in the ( ongoing ) era of large , homogeneous high resolution spectroscopic datasets ( e.g. , like those being collected in programs like the apogee and hermes surveys ; ) , since they provide enormous leverage for identifying and improving the underlying model systematics .+ the authors would like to acknowledge the following people for many extraordinarily helpful discussions and key insights : daniel foreman - mackey , guillermo torres , david latham , lars buchhave , john johnson and the exolab , daniel eisenstein , rebekah dawson , tom loredo and the exostat group , allyson bieryla , and maxwell moe .two anonymous reviewers provided encouraging and very useful comments on the manuscript draft that greatly improved its clarity and focus .ic is supported by the nsf graduate fellowship and the smithsonian institution .is supported at harvard by nsf grant ast-1211196 .this research made extensive use of astropy and the julia language .the spectral emulator is designed to serve as an improved interpolator for the synthetic spectral library . rather than a ( tri-)linear interpolator , which would deliver a single spectrum for a given , the spectral emulator delivers a probability distribution of possible interpolated spectra .in this manner , it is possible to incorporate realistic uncertainties about the interpolation process into the actual likelihood calculation . in the limit of moderate to high signal - to - noise spectra, these interpolation uncertainties can have a significant effect on the posterior distribution of .a schematic of the emulator is shown in figure [ fig : flowchart_appendix ] , which is a continuation of figure [ fig : flowchart ] .briefly , the emulator consists of a set of eigenspectra , representing the synthetic spectral library , that can be summed together with different weights to reproduce any spectrum originally in the library . to produce spectra that have in between ,the weights are modeled with a smooth gaussian process ( gp ) .this gp delivers a probability distribution over interpolated spectra , which can then be incorporated into the covariance matrix introduced in section [ subsec : likelihood ] .here we describe the design and construction of our spectral emulator .model library spectra are stored as ( 1-dimensional ) arrays of fluxes , sampled on high resolution wavelength grids . 
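as a minimal sketch of the decomposition just outlined (with the standardization and truncation choices that are detailed in the following paragraphs simplified), the snippet below whitens a library of synthetic spectra, extracts a truncated set of eigenspectra by principal component analysis, and reconstructs a library spectrum from its weights; the number of retained components and the toy library are placeholders:

```python
import numpy as np

def build_emulator_basis(flux_grid, m=4):
    # flux_grid: (n_spectra, n_pix) array of library spectra on a common wavelength grid.
    # returns the mean spectrum, standard-deviation spectrum, m eigenspectra and the
    # (n_spectra, m) array of weights that reproduce each standardized library spectrum
    mu = flux_grid.mean(axis=0)
    sigma = flux_grid.std(axis=0)
    standardized = (flux_grid - mu) / sigma
    # principal components via the singular value decomposition of the standardized grid
    _, _, Vt = np.linalg.svd(standardized, full_matrices=False)
    eigenspectra = Vt[:m]                          # shape (m, n_pix)
    weights = standardized @ eigenspectra.T        # w_k = standardized spectrum . eigenspectrum_k
    return mu, sigma, eigenspectra, weights

def reconstruct(mu, sigma, eigenspectra, w):
    # lossy reconstruction of a spectrum from its eigenspectra weights
    return mu + sigma * (w @ eigenspectra)

# example with a fake library of 60 spectra and 1000 pixels
rng = np.random.default_rng(1)
grid = 1.0 + 0.01 * rng.standard_normal((60, 1000))
mu, sigma, xi, w = build_emulator_basis(grid, m=4)
print(np.max(np.abs(reconstruct(mu, sigma, xi, w[0]) - grid[0])))   # truncation error
```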
in the case of interest here ,the sets of model parameters }\}] ] ; the library steps by 250k in , but the phoenix library has finer coverage in 100k increments .the first step in designing a spectral emulator is to break down the library into an appropriate basis .we chose the principal component basis to decompose the library into a set of eigenspectra " , following the techniques of . prior to this decomposition ,we isolate a subset of the library ( containing spectra ) with parameter values that will be most relevant to the target being considered ( e.g. , for gl 51 , this means considering only effective temperatures below ) .we then standardize these spectra by subtracting off their mean spectrum and then whiten " them by dividing off the standard deviation spectrum measured in each pixel across the grid .the mean spectrum is and the standard deviation spectrum is ^ 2 } , \ ] ] where denotes the full collection of the sets of stellar parameters under consideration in the library and denotes a single set of those parameters drawn from this collection .both and are vectors with length , the same size as a raw synthetic spectrum ( ) . in effect, all library spectra are standardized by subtracting the mean spectrum and dividing by the standard deviation spectrum the eigenspectra are computed from this standardized grid using principal component analysis ( pca ; * ? ? ?each eigenspectrum is a vector with length , denoted as , where is the principal component index .we decided to truncate our basis to the first eigenspectra , where is decided by the minimum number of eigenspectra required to reproduce any spectrum in the grid to better than 2% accuracy for all pixels ( the typical error for any given pixel is generally much smaller than this , ) .as an example , the eigenspectra basis computed for gl 51 using the phoenix library is shown in the top panel of figure [ fig : pca_reconstruct ] . using the principal component basis, we can lossily reconstruct any spectrum from the library with a linear combination of the eigenspectra where is the weight of the eigenspectrum .these weights are 3-dimensional scalar functions that depend on the stellar parameters . any given weight , which is generally a smooth function of the stellar parameters ( see the left panel of figure [ fig : gp_interp ] ) , can be determined at any grid point in the library by taking the dot product of the standardized synthetic spectrum with the eigenspectrum to simplify notation , we can write the collection of eigenspectra weights in a length- column vector and horizontally concatenate the eigenspectra into a matrix with rows and columns then , we can rewrite eq .( [ eqn : reconstruct ] ) as where represents the element - wise multiplication of two vectors . to recapitulate, the framework described above can be used to decompose the synthetic spectra in a model library into a principal component basis , allowing us to reconstruct any spectrum in the library as a ( weighted ) linear combination of eigenspectra .the weights corresponding to each eigenspectrum are moderately - smooth scalar functions of the three stellar parameters , .therefore , to create a spectrum corresponding to an arbitrary set of these parameters that is not represented in the spectral library , we must interpolate the weights to this new set . in practice, it may be possible to use a traditional scheme like spline interpolation to do this directly .however , we found that with sensitive spectra ( e.g. 
, for gl 51 the s / n is ) , the uncertainty in the interpolated representation of the spectrum can constitute a significant portion of the total uncertainty budget .this , combined with the under - sampling of the synthetic grid can cause artificial noding " of the posterior near grid points in the synthetic library , because the interpolated spectrum is not as good as the raw spectrum at the grid point . even explicitly accounting for interpolation error by doing drop - out " interpolation tests and empirically propagating it forward does not relieve this noding issue .so instead , we address this problem by employing a gaussian process to model the interpolation of the eigenspectra weights over , thereby encapsulating the range of possible interpolated spectra .each weight is modeled by a gaussian process for each eigenspectrum . for a single eigenspectrum with index , we denote the collection of evaluated for all the spectra in the library as a length vector .the gaussian process treats as a collection of random variables drawn from a joint multi - variate gaussian distribution , with denoting the covariances .the kernel that describes the covariance matrix for this distribution is assumed to be a 3-dimensional squared exponential , \nonumber \\ & & \times \exp \left [ -\frac{(\log g_i - \log g_j)^2}{2 \ , \ell_{{\log g}}^2 } \right ] \\ & & \times \exp \left [ -\frac { ( { [ { \rm fe}/{\rm h}]}_i - { [ { \rm fe}/{\rm h}]}_j)^2}{2 \ ,\ell_{{[{\rm fe}/{\rm h}]}}^2 } \right ] \nonumber , \label{eqn : emulator_kernel}\end{aligned}\ ] ] with hyperparameters , , , }}$ ] } representing an amplitude and length scale for each dimension of .unlike the matrn kernel used in section [ sec : method ] ( which produces a more structured behavior reminiscent of the spectral residuals ) , this squared exponential kernel has a smooth functional form that is more appropriate to represent the behavior of the eigenspectra weights across the library grid , as demonstrated in figure [ fig : gp_interp ] .the -dimensional covariance matrix is the evaluation of the covariance kernel for all pairings of stellar parameters at library gridpoints .once the gaussian processes for each are specified , we can construct the joint distribution . we use to denote the concatenation of vectors into a single length vector , and as the covariance matrix , although we could optimize the hyperparameters of each gaussian process independently based upon how well it reproduces the collection of weights for that eigenspectrum , ideally we would like to optimize the hyperparameters according to a metric that describes how well the emulator actually reproduces the original library of synthetic spectra .following , we write down a likelihood function describing how well the reconstructed spectra match the entirety of the original synthetic grid \label{eqn : em_data_likelihood}\end{gathered}\ ] ] here , represents a length vector that is the collection of all of the synthetic flux vectors concatenated end to end . the precision of the eigenspectra basis representation , or the statistical error in the ability of the emulator to reproduce the known eigenspectrais represented by . because we have truncated the eigenspectra basis to only components , where is much smaller than the number of raw spectra in the library, the emulator will not be able to reproduce the synthetic spectra perfectly . 
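the smooth squared-exponential kernel over the three stellar parameters can be sketched as follows; the single shared amplitude, the per-dimension length scales and the example grid are schematic hyperparameters, and the exact parameterization written above may differ in detail:

```python
import numpy as np

def sq_exp_3d(theta_i, theta_j, amp, ell):
    # squared-exponential kernel over theta = (Teff, logg, FeH) with one length scale
    # per dimension; smooth, as appropriate for slowly varying eigenspectra weights
    d2 = sum(((a - b) / l) ** 2 for a, b, l in zip(theta_i, theta_j, ell))
    return amp**2 * np.exp(-0.5 * d2)

def weight_covariance(grid_params, amp, ell):
    # covariance matrix of one eigenspectrum's weights, evaluated for all pairs of
    # stellar-parameter points in the library grid
    grid_params = np.asarray(grid_params, dtype=float)
    n = len(grid_params)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = sq_exp_3d(grid_params[i], grid_params[j], amp, ell)
    return K

# example: a coarse grid in (Teff, logg, [Fe/H]) and plausible (assumed) length scales
grid = [(5500, 4.0, -0.5), (5750, 4.0, -0.5), (5500, 4.5, -0.5), (5500, 4.0, 0.0)]
K = weight_covariance(grid, amp=1.0, ell=(300.0, 1.0, 0.5))
print(K.shape, np.all(np.linalg.eigvalsh(K) > 0))
```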
by including this nugget " term in the emulator , we are also forward propagating the interpolation uncertainty for near or at values of .we specify a broad function prior on because we expect it to be well constrained by the data . where shape and rate . to facilitate the manipulation of eqn [ eqn : em_data_likelihood ], we create a large matrix that contains the all of the eigenspectra \ ] ] where is the kronecker product .this operation creates a matrix , which , when multiplied by the vector , enables ( lossy ) reconstruction of the entire synthetic library up to truncation error in the eigenspectrum basis ( ) . for a given ,the maximum likelihood estimate for eqn [ eqn : em_data_likelihood ] is .using , we can factorize eqn [ eqn : em_data_likelihood ] into \\ \times \lambda_\xi^{m ( n_\textrm{pix } - m ) /2 } \exp \left [ -\frac{\lambda_\xi}{2 } { \cal f}^{\mathsf{t}}\left ( i - \phi(\phi^{\mathsf{t}}\phi)^{-1 } \phi^t \right ) { \cal f } \right ] \\\end{gathered}\ ] ] now , only the middle line of this distribution depends on , so we can reformulate this equation into a dimensionality reduced likelihood function and absorb the other terms into a modified prior on .\end{gathered}\ ] ] to summarize , we have reduced the dimensionality of the distribution from to although we introduced the likelihood function in eqn [ eqn : em_data_likelihood ] , we have yet to include the gaussian processes or the dependence on the emulator parameters .we do this by multiplying eqn [ eqn : dimensionality_reduced ] with our prior distribution on the weights ( eqn [ eqn : weight_prior ] ) , and integrate out the dependence on .we perform this integral using eqn a.7 of for the product of two gaussians , which yields \end{gathered}\ ] ] the dimensionality reduction operation changes the priors on ( eqn [ eqn : gamma_priors ] ) to to complete the posterior distribution for the emulator , we specify function priors on the gaussian process length scale kernel parameters .typically , these priors are broad and peak at lengths corresponding to a few times the spacing between grid points , which helps the gaussian process converge to the desired emulation behavior .the full posterior distribution is given by where the prior is given by } } , b_{{[{\rm fe}/{\rm h}]}}).\end{gathered}\ ] ] now that we have fully specified a posterior probability distribution , we can sample it and find the joint posteriors for the parameters and the for all simultaneously .once we have identified the best - fit parameters for the emulator , we fix these parameters for the remainder of the spectral fitting . now , the emulator is fully specified and can be used to predict the values of the weights at any arbitrary set of stellar parameters by considering them drawn from the joint distribution where is an augmented covariance matrix that includes the point . to simplify notation, we let \ ] ] with this notation , the matrix is the region of the covariance matrix that describes the relations between the set of parameters in the grid , .the matrix ( and its transpose ) describe the relations between the set of parameters in the grid and the newly chosen parameters to interpolate at .the structure of is set by evaluating ( eqn [ eqn : emulator_kernel ] ) across a series of rows of like in , for , and across columns of . 
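the block structure exploited in the dimensionality reduction can be sketched as below. whether the identity factor sits on the left or the right of the kronecker product depends on how the flux and weight vectors are stacked, so the spectrum-major ordering assumed here is an illustrative choice, and the least-squares solve is simply the standard normal-equations analogue of the maximum-likelihood weight estimate referred to above:

```python
import numpy as np

def build_phi(eigenspectra, n_spectra):
    # Phi maps the stacked weight vector (length n_spectra * m) to the stacked flux
    # vector (length n_spectra * n_pix): one identical block of eigenspectra per spectrum
    Xi = np.asarray(eigenspectra).T                 # (n_pix, m)
    return np.kron(np.eye(n_spectra), Xi)           # (n_spectra*n_pix, n_spectra*m)

def ml_weights(phi, flux_stacked):
    # ordinary least-squares estimate of the stacked weights (normal equations)
    return np.linalg.solve(phi.T @ phi, phi.T @ flux_stacked)

# tiny consistency check: 3 standardized spectra of 50 pixels, 2 eigenspectra
rng = np.random.default_rng(2)
xi = rng.standard_normal((2, 50))
w_true = rng.standard_normal(3 * 2)
F = build_phi(xi, 3) @ w_true
print(np.allclose(ml_weights(build_phi(xi, 3), F), w_true))
```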
is a diagonal matrix that represents evaluated at the zero - spacing parameter pair ( ) , .then , to predict a vector of weights at the new location , we use the conditional probability where these equations are also commonly referred to as kriging equations . though the notation is complex , the interpretation is straightforward : the probability distribution of a set of eigenspectra weights is a -dimensional gaussian distribution whose mean and covariance are a function of , conditional upon the ( fixed ) values of and the squared exponential hyperparameters ( an example for a single is shown in figure [ fig : gp_interp ] , right panel ) . if we desired actual values of the interpolated weights , for example to reconstruct a model spectrum , we could simply draw a gaussian random variable from the probability distribution in eq .( [ eqn : weight_conditional ] ) .however , because we now know the probability distribution of the weight as a function of , we can rewrite our data likelihood function ( eq . [ eqn : lnlikelihood ] ) in such a way that it is possible to analytically marginalize over all possible values of , and thus all probable spectral interpolations .up until this point , we have described the reconstruction of a spectrum as a linear combination of the eigenspectra that characterize the synthetic library ( figure [ fig : pca_reconstruct ] ) .but in practice , that reconstructed spectrum must be further post - processed as detailed in section [ subsec : postprocess ] . fortunately , because convolution is a linear operation , we can first post - process the raw eigenspectra according to , and then represent the reconstructed spectrum as a linear combination of these modified eigenspectra without loss of information . unfortunately , the doppler shift and resampling operations are not linear operations , and there will be some loss of information when trying to approximate them in this manner .however , we find that in practice when the synthetic spectra are oversampled relative to the instrument resolution by a reasonable factor , the flux error due to resampling is smaller than 0.2% across all pixels , and thus any effect of that information loss is negligible . for notational compactness , we let , , and represent the post - processed eigenspectra , with an implied dependence on the current values of the extrinsic observational parameters ( ) and the polynomial nuisance parameters . now , the model spectrum is a function of the vector of eigenspectra weights where because the gaussian process describes a probability distribution of the weights , we now have a distribution of possible ( interpolated ) models and the likelihood function ( eq . [ eqn : likelihood ] ) is specified conditionally on the weights , the final task of designing the spectral emulator is to combine this data likelihood function with the posterior predictive distribution of the eigenspectra weights ( eq . [ eqn : weight_conditional ] ) and then marginalize over the weights such that we are left with a modified posterior distribution of the data that incorporates the range of probable interpolation values for the model . to perform this multidimensional integral, we use a convenient lemma found in ( * ? ? ?* their appendix a ) : if the probability distributions of and are specified conditionally as in eq .[ eqn : weight_conditional ] and [ eqn : likelihood_conditional ] , respectively , then the marginal distribution ( eq . 
[ eqn : marginal ] ) is where the dependence on the model parameters is now made explicit .we can couch this modified likelihood function in the form of eqn [ eqn : lnlikelihood ] by rewriting where can be thought of as the mean model spectrum " given the model parameters , and the covariance matrix has been modified to account for the various probable manifestations of the model spectrum about that mean spectrum ., a. s. , reyl , c. , schultheis , m. , & allard , f. 2010 , in sf2a-2010 : proceedings of the annual meeting of the french society of astronomy and astrophysics , ed . s. boissier , m. heydari - malayeri , r. samadi , & d. valls - gabaud , 275 , d. b. , de silva , g. , freeman , k. , bland - hawthorn , j. , & hermes team .2012 , in astronomical society of the pacific conference series , vol .458 , galactic archaeology : near - field cosmology and the formation of the milky way , ed .w. aoki , m. ishigaki , t. suda , t. tsujimoto , & n. arimoto , 421
we present a modular , extensible likelihood framework for spectroscopic inference based on synthetic model spectra . the subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints ( pixels ) into the residual spectrum . for the high signal - to - noise data with large spectral range that is commonly employed in stellar astrophysics , that covariant structure can lead to dramatically underestimated parameter uncertainties ( and , in some cases , biases ) . we construct a likelihood function that accounts for the structure of the covariance matrix , utilizing the machinery of gaussian process kernels . this framework specifically addresses the common problem of mismatches in model spectral line strengths ( with respect to data ) due to intrinsic model imperfections ( e.g. , in the atomic / molecular databases or opacity prescriptions ) by developing a novel local covariance kernel formalism that identifies and self - consistently downweights pathological spectral line outliers . " by fitting many spectra in a hierarchical manner , these local kernels provide a mechanism to learn about and build data - driven corrections to synthetic spectral libraries . an open - source software implementation of this approach is available at http://iancze.github.io/starfish , including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters . we demonstrate some salient features of the framework by fitting the high resolution -band spectrum of wasp-14 , an f5 dwarf with a transiting exoplanet , and the moderate resolution -band spectrum of gliese 51 , an m5 field dwarf .
this paper is devoted to nonlinear schrdinger equations ( nls ) of the form here , is a complex valued function , is a possibly rough / discontinuous potential and is a smooth function ( in terms of the density ) that describes the nonlinearity .a common example is the cubic nonlinearity given by , for , for which the equation is known as the gross - pitaevskii equation modeling for instance the dynamics of bose - einstein condensates in a potential trap . in this paperwe study galerkin approximations of the nls using a finite element space discretization to account for missing regularity due to a possibly discontinuous potential and we use a crank - nicolson time discretization to conserve two important invariants of the nls , namely the mass and the energy. we aim at deriving rate - explicit a priori error estimates and the influence of rough potentials on these rates .the list of references to numerical approaches for solving the nls ( both time - dependent and stationary ) is long and includes and the references therein .a priori error estimates for finite element approximations for the nls have been studied in , where an implicit euler discretization is considered in , a mass conservative one - stage gauss - legendre implicit runge - kutta scheme is analyzed in , mass conservative linearly implicit two - step finite element methods are treated in and higher order ( dg and cg ) time - discretizations are considered in ( however these higher order schemes lack conservation properties ) . the only scheme that is both mass and energy conservative at the same time is the modified crank - nicolson scheme analyzed by sanz - serna and akrivis et al . , which is also the approach that we shall follow in this contribution .the analysis of this modified crank - nicolson scheme is devoted to optimal -error estimates for sufficiently smooth solutions in both papers and .sanz - serna treats the one - dimensional case and periodic boundary conditions and akrivis et al .consider and homogeneous dirichlet boundary conditions .although the modified crank - nicolson scheme is implicit , in both works , optimal error estimates require a constraint on the coupling between the time step and the mesh size . in constraint reads whereas a relaxed constraint of the form is required in .the results are related to the case of the earlier mentioned cubic nonlinearity of the form and a potential is not taken into account .the present paper generalizes the results of akrivis et al . to the case of a broader class of nonlinearities and , more importantly , accounts for potential terms in the nls .if the potential is sufficiently smooth , even the previous constraints on the time step can be removed without affecting the optimal convergence rates . to the best of our knowledge , the only other paper that includes potential terms in a finite element based nls discretization is which uses a one - stage gauss - legendre implicit runge - kutta scheme that is not energy - conserving .while these results essentially require continuous potentials , many physically relevant potentials are discontinuous and very rough .typical examples are disorder potentials or potentials representing quantum arrays in the context josephson oscillations . 
as the main result of the paper, we will also prove convergence in the presence of such potentials with convergence rates .the rates are smaller than the optimal ones for smooth solutions and a coupling condition between the discretization parameters shows up again .while the sharpness of these results remains open , we shall stress that we are not aware of a proof of convergence of any discretization ( finite elements , finite differences , spectral methods , etc . ) of the nls in the presence of purely -potentials and that we close this gap with our paper .the structure of this article is as follows .section [ sec : problem ] introduces the model problem and its discretization .the main results and the underlying assumptions are stated in section [ sec : main - results ] .sections [ s : errorsemi][s : errorfull ] are devoted to the proof of these results .we present numerical results in section [ sec : numexp ] .some supplementary material regarding the feasibility of our assumptions is provided as appendix [ appendix - b ] .let ( for ) be a convex bounded polyhedron that defines the computational domain .we consider a real - valued nonnegative disorder potential . besides being bounded, can be arbitrarily rough . given such , some finite time and some initial data , we seek a wave function ,h^1_0({\mathcal{d}})) ] such that and for all and almost every ] so that makes sense .the nonlinearity in the problem is described by a smooth ( real - valued ) function with and the growth condition observe that this implies by sobolev embeddings that is finite for any .we define then , for any , the ( non - negative ) energy is given by [ prop - exist - and - unique ] there exists at least one solution to problem .for a corresponding result we refer to ( * ? ? ?* proposition 3.2.5 , remark 3.2.7 , theorem 3.3.5 and corollary 3.4.2 ) . however , uniqueness is only known in exceptional cases. if and the solution is unique locally in time , i.e. , on a subinterval ( cf .* theorem 3.6.1 ) ) . for further settings that guarantee uniqueness ,see ( * ? ? ?* corollary 3.6.2 , remark 3.6.3 and remark 3.6.4 ) . _ temporal discretization ._ we consider a time interval ] and if .furthermore , we assume that the family of partitions is quasi - uniform , i.e. if denotes the time step size and if the maximum is denoted by , then there exists a ( discretization independent ) constant such that for all partitions from the family . _ spatial discretization ._ for the space discretization we consider a finite dimensional subspace of that is parametrized by a mesh size parameter .we make two basic assumptions on which are fulfilled for lagrange finite elements on quasi - uniform meshes .let us for this purpose introduce the ritz - projection . for ritz - projection is the unique solution to the problem in the following , we make an assumption on the approximation quality of , that is that there exists a generic -independent constant such that the second assumption is the availability of an inverse estimate , i.e. we assume that there exists a generic -independent constant such that in addition , we assume the existence of -independent with the above assumptions are standard in the context of finite elements if quasi - uniformity is available . 
for instance , for simplicial lagrange finite elements on a quasi - uniform mesh , the estimates and are fulfilled .the last property can be verified by splitting for some -stable clment - type quasi - interpolation operator .the estimate then follows from inverse inequalities and -estimates for . with these definitions ,we introduce the fully discrete crank - nicolson method as follows .[ crank - nic - gpe ] we consider the space and time discretizations as detailed above .let be the initial value from problem and let .then for , the fully discrete crank - nicolson approximation is given by for all and where .the scheme is mass conserving and energy conserving , i.e. we have for all .the mass conservation is verified by testing with in and taking the real part .the energy conservation is verified by testing in with and taking the imaginary part .the conservation properties do not immediately guarantee robustness with respect to numerical perturbations ( for instance arising from round - off errors ) , however , it still can be proved that even the perturbed approximations remain uniformly bounded .let and let ( for ) be an -perturbation of the discrete problem .we can think of as representing numerical errors .let for be any solution to the ( fully - discrete ) perturbed problem for all .then the solutions remain uniformly bounded in with we test in the problem formulation with and take the real part .this yields hence applying this iteratively gives us this basic stability of the method does not require any additional smoothness assumptions , our quantified convergence and error analysis of the method relies on the regularity of .we will use three types of regularity assumptions . *assume that ,h^2(\mathcal{d})) ] in ( r1 ) implies that for which is crucial for our proofs as they rely on uniform -bounds for the discrete solutions and there is no hope for such thing if the continuous solution is unbounded in .the third assumption ( r3 ) will be used to obtain optimal convergence rates for the time - discretization in the case of smooth potentials .we can not expect it to hold in the case of rough disorder potentials .still , it is possible to show that the assumptions ( r1 ) and ( r2 ) do not conflict with disorder potentials .we discuss this aspect in more detail in appendix [ appendix - b ] .note that even though we can not guarantee uniqueness in general , we have that every smooth solution that satisfies ( r1 ) must be unique .any two solutions of the nls that fulfill ( r1 ) must be identical .let for and let and denote two smooth solutions in the sense that ,h^2(\mathcal{d})) ] .with we obtain for time integration and then yield hence , grnwall s inequality can be applied and shows for all .the first main result of this paper states that , under the assumption of sufficient regularity , the crank - nicolson scheme admits a solution that remains uniformly bounded in and we obtain optimal convergence rates for the -error , independent of the coupling between the mesh size and the time - step size . 
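to illustrate the flavour of the scheme just defined, the following one-dimensional sketch performs an energy- and mass-conserving crank-nicolson step and solves the implicit equation by a simple fixed-point iteration. the finite-difference laplacian is only a stand-in for the finite element space, the cubic nonlinearity is one admissible choice, and the divided-difference treatment of the nonlinear term is our reading of the sanz-serna-type modification; the precise form of the scheme analysed here is given by its defining equation, which is not reproduced in this sketch:

```python
import numpy as np

def laplacian_1d(n, h):
    # second-order finite-difference laplacian with homogeneous dirichlet boundary
    # conditions; a stand-in for the finite element discretization used in the paper
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

def cn_step(u0, dt, L, V, beta=1.0, iters=100, tol=1e-13):
    # one step of a conserving crank-nicolson scheme for i u_t = -u_xx + V u + beta |u|^2 u,
    # solved by fixed-point iteration on the nonlinear term; the divided-difference
    # coefficient (|u0|^2 + |u1|^2)/2 corresponds to F(rho) = rho^2 / 2 in the cubic case
    # (an assumption about the scheme's precise form)
    n = len(u0)
    A = -L + np.diag(V)                      # discrete -laplacian + potential
    Mplus = np.eye(n) + 0.5j * dt * A
    Mminus = np.eye(n) - 0.5j * dt * A
    rhs_lin = Mminus @ u0
    u1 = u0.copy()
    for _ in range(iters):
        u_mid = 0.5 * (u0 + u1)
        rho_mid = 0.5 * (np.abs(u0) ** 2 + np.abs(u1) ** 2)
        u1_new = np.linalg.solve(Mplus, rhs_lin - 1j * dt * beta * rho_mid * u_mid)
        if np.linalg.norm(u1_new - u1) < tol:
            u1 = u1_new
            break
        u1 = u1_new
    return u1

# toy check of mass conservation with a discontinuous model potential
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
V = 50.0 * (x > 0.5)
u = np.sin(np.pi * x).astype(complex)
L = laplacian_1d(n, h)
mass0 = h * np.sum(np.abs(u) ** 2)
for _ in range(20):
    u = cn_step(u, 1e-3, L, V)
print(abs(h * np.sum(np.abs(u) ** 2) - mass0))   # stays at the fixed-point tolerance level
```

the printed mass defect remains at the level of the iteration tolerance, in line with the conservation property stated above.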
[ main - theorem-2-a ] under the regularity assumption ( r1 ) , ( r2 ) and ( r3 ) , there positive constants and such that for all partitions with parameters and there exists a unique solution to the fully discrete crank - nicolson scheme with where and .moreover , the a priori error estimate holds with some constant that may depend on , , , and the constants appearing in - but not on the mesh parameters and .the uniqueness of fully discrete approximations in theorem [ main - theorem-2-a ] is to be understood in the sense that any other family of approximations must necessarily diverge in as .the second main result applies to the case of rough potentials .[ main - theorem-2-b ] assume only ( r1 ) and ( r2 ). then there exists such that for all partitions with paramters and for some there exists a unique solution to the fully - discrete crank - nicolson scheme such that with as defined in theorem [ main - theorem-2-a ] , and the a priori error estimate holds for some constant independent of and .sections [ s : errorsemi][s : errorfull ] below are devoted to the proof of theorems [ main - theorem-2-a ] and [ main - theorem-2-b ] .the results of theorem [ main - theorem-2-b ] are valid under the constraint for some .this means that the mesh size needs to be small enough compared the time step size .observe that this is a rather natural assumption if the potential is indeed a rough potential ( as addressed in the theorem ) .in such a case we wish use a fine spatial mesh to resolve the variations of , whereas the time step size is comparably large . hence, the constraint is not critical .conversely , the constraints appearing in works by sanz - serna and akrivis et al . are of a completely different nature , as they require the time step size to be small compared to the mesh size .therefore , using a fine spatial mesh to resolve the structure of would impose small time steps as well .in this section we shall consider a semi - discrete crank - nicolson approximation given as follows .[ semi - discrete - crank - nic - gpe ] let be the initial value from problem and let .then for , we define the semi - discrete crank - nicolson approximation as the solution to for all and where .we want to prove that the above problem is well - posed and we want to estimate the - and -error between and the exact solution .this requires some auxiliary results that allow us to control the error arising from the nonlinearity .we start with introducing a truncated version of the ( possibly ) nonlinear function . with this truncated function, we introduce an auxiliary problem that is central for our analysis .[ truncation - lemma ] let be a constant with .then there exists a smooth function and generic constants such that for all and all furthermore , for the antiderivative it holds for all with : before we can prove lemma [ truncation - lemma ] we need to introduce an inequality that we will frequently use in the rest of the paper .[ lemma01ap]let be a three times continuously differentiable function with locally bounded derivatives .then , for every with ( w.l.g . ) it holds let us define .first , we observe that hence with that and using taylor expansion for suitable , with we observe since for we obtain in the following , we let denote a generic constant .let us define and let be a curve that fulfills for and for . 
by polynomial interpolation we can chose in such a way that it is a polynomial on the interval ] denote the complex valued ( linear ) curve given by for ] now we investigate where we distinguish three cases .+ case 1 : and .we obtain with the lipschitz continuity of and ( and , otherwise everything is trivial ) . without loss of generalitylet we obtain where we used that ; and that for .+ case 3 : and we can use the results from case 1 and case 2 with the intermediate value to obtain [ lemma - l2-hatuhm - hatu ] suppose , and and for or .let denote a solution of the fully - discrete crank - nicolson method with truncation as stated in definition [ truncated - fully - discrete - crank - nic - gpe ] and let be small enough for the results of theorem [ main - theorem-1 ] to hold . if denotes the unique solution to with the properties stated in theorem [ main - theorem-1 ] , then hoolds with an -independent constant .first , observe that the assumptions imply ( and that it is unique ) .in the following , we denote by any generic constant that depends on , , , and .recall the definition of the continuous function from and let again .consider .from we have and from subtracting the terms from each other and defining gives us testing with and taking the real part yields for the first term we have with theorem [ main - theorem-1 ] the second term can be estimates as where we used again theorem [ main - theorem-1 ] . for the third term we can proceed analogously since is bounded .we obtain straightforwardly ( again with theorem [ main - theorem-1 ] ) that to bound term iv , we use lemma [ truncated - fully - discrete - crank - nic - gpe ] to estimate consequently , using that is uniformly bounded ( theorem [ main - theorem-1 ] ) we can conclude collecting the estimates for i , ii , iii and iv implies that using the inequality which holds for any with finishes the -error estimate .we can now conclude from lemma [ lemma - l2-hatuhm - hatu ] that remains uniformly bounded in which allows us to conclude for appropriately chosen . in summary we obtain theorem [ main - theorem-2-a ] . the detailed proof is given in the following .we choose .let and let denote any constant depending additionally on ( however , both are not allowed to depend on or ) . using the assumptions on , the bounds from theorem [ main - theorem-1 ] and lemma [ lemma - l2-hatuhm - hatu ] we have since and for and some , we conclude that there exists such that for all .hence , for sufficiently small we have .we conclude the existence of and the -independent bound for the -error estimate we split the error into the first term can be estimated with theorem [ main - theorem-1 ] for sufficiently small , the second term is bounded by ( again using theorem [ main - theorem-1 ] ) and the last term can be estimated with lemma [ lemma - l2-hatuhm - hatu ] which now holds with . in the setting of theorem [ main - theorem-2-a ] , this yields for all sufficiently small and . 
in the setting of theorem [ main - theorem-2-b ] ,the order is reduced to .the proof of uniqueness under some uniform bound independent of and is almost verbatim the same as in the semi - discrete case ( see the proof of theorem [ main - theorem-1 ] ) .we shall conclude with some simple and illustrative numerical experiment .the computational domain is given by ^ 2 ] .we wish to approximate \rightarrow { \mathbb{c}} ] ) around zero and we derived -error estimates under various regularity assumptions .all our estimates are valid for general disorder potentials in .however , it is not clear how or if our regularity assumptions might conflict with discontinuities in the potential . therefore we derived two graded results . in the first main result , we assume sufficient regularity of the exact solution and derive error estimates of optimal ( quadratic ) order in and . the novelty with respect to previous works is that our results cover a general class of nonlinearities , potential terms and we show that the method does indeed not require a time step constraint . on the contrary ,the results in are only valid , provided that the time step size is sufficiently small with respect to the spatial mesh size . in our second main result , we relax the regularity assumptions so that they appear not to be in conflict with discontinuous potentials . under these relaxed regularity assumptions, we can still derive -error estimates , however , only of linear order .furthermore , we encounter a time step constraint that was absent in the case of higher regularity .to check the practical performance of the method , we present a numerical experiment for a model problem with discontinuous potential .the corresponding numerical errors seem not to correlate with the pessimistic rates predicted for the low - regularity regime .we could neither observe degenerate convergence rates nor a practical time step constraint .instead , we observe the behavior as predicted for the high regularity regime , i.e. , convergence rates of optimal order and good approximations in all resolution regimes , independent of a coupling between mesh size and time step size .10 g. d. akrivis , v. a. dougalis , and o. a. karakashian . on fully discrete galerkin methods of second - order temporal accuracy for the nonlinear schrdinger equation ., 59(1):3153 , 1991 .x. antoine , w. bao , and c. besse .computational methods for the dynamics of the nonlinear schrdinger / gross - pitaevskii equations ., 184(12):26212633 , 2013 .x. antoine and r. duboscq .robust and efficient preconditioned krylov spectral solvers for computing the ground states of fast rotating and strongly interacting bose - einstein condensates . , 258:509523 , 2014 .w. bao and y. cai . mathematical theory and numerical methods for bose - einstein condensation . , 6(1):1135 , 2013 .w. bao and q. du . computing the ground state solution of bose - einstein condensates by a normalized gradient flow ., 25(5):16741697 , 2004 .w. bao and w. tang .ground - state solution of bose - einstein condensate by directly minimizing the energy functional ., 187(1):230254 , 2003 .e. cancs , r. chakir , and y. maday . numerical analysis of nonlinear eigenvalue problems . , 45(1 - 3):90117 , 2010 .t. cazenave ., volume 10 of _ courant lecture notes in mathematics_. new york university , courant institute of mathematical sciences , new york ; american mathematical society , providence , ri , 2003 .d. cruz - uribe and c. j. 
neugebauer .sharp error bounds for the trapezoidal rule and simpson s rule . , 3(4):article 49 , 22 , 2002 .i. danaila and f. hecht . a finite element method with mesh adaptivity for computing vortex states in fast - rotating bose - einstein condensates ., 229(19):69466960 , 2010 .i. danaila and p. kazemi . a new sobolev gradient method for direct minimization of the gross - pitaevskii energy with rotation ., 32(5):24472467 , 2010 .l. gauckler .convergence of a split - step hermite method for the gross - pitaevskii equation ., 31(2):396415 , 2011 .d. gilbarg and n. s. trudinger . .classics in mathematics .springer - verlag , berlin , 2001 .reprint of the 1998 edition .e. p. gross .structure of a quantized vortex in boson systems ., 20:454477 , 1961 .p. henning and a. mlqvist .the finite element method for the time - dependent gross - pitaevskii equation with angular momentum rotation .arxiv e - print 1502.05025 ( submitted ) , 2015 .p. henning , a. mlqvist , and d. peterseim .two - level discretization techniques for ground state computations of bose - einstein condensates . , 52(4):15251550 , 2014 .e. jarlebring , s. kvaal , and w. michiels .an inverse iteration method for eigenvalue problems with eigenvector nonlinearities ., 36(4):a1978a2001 , 2014 .o. karakashian and c. makridakis .a space - time finite element method for the nonlinear schrdinger equation : the discontinuous galerkin method ., 67(222):479499 , 1998 .o. karakashian and c. makridakis .a space - time finite element method for the nonlinear schrdinger equation : the continuous galerkin method ., 36(6):17791807 , 1999 .g. leoni ., volume 105 of _ graduate studies in mathematics_. american mathematical society , providence , ri , 2009 .e. h. lieb , r. seiringer , and j. yngvason. a rigorous derivation of the gross - pitaevskii energy functional for a two - dimensional bose gas ., 224(1):1731 , 2001 .dedicated to joel l. lebowitz . c. lubich . on splitting methods for schrdinger - poisson and cubic nonlinear schrdinger equations ., 77(264):21412153 , 2008 .b. nikolic , a. balaz , and a. pelster .dipolar bose - einstein condensates in weak anisotropic disorder . , 88(1 ) , 2013 .l. p. pitaevskii . .number 13 .soviet physics jetp - ussr , 1961 .j. m. sanz - serna .methods for the numerical solution of the nonlinear schroedinger equation ., 43(167):2127 , 1984 .m. thalhammer .convergence analysis of high - order time - splitting pseudospectral methods for nonlinear schrdinger equations ., 50(6):32313258 , 2012 .m. thalhammer and j. abhau .a numerical study of adaptive space and time discretisations for gross - pitaevskii equations ., 231(20):66656681 , 2012 .y. tourigny .optimal estimates for two time - discrete galerkin approximations of a nonlinear schrdinger equation . , 11(4):509523 , 1991 .j. wang .a new error analysis of crank - nicolson galerkin fems for a generalized nonlinear schrdinger equation ., 60(2):390407 , 2014 .j. williams , r. walser , c. wieman , j. cooper , and m. holland . achieving steady - state bose - einstein condensation ., 57(3):20302036 , 1998 .i. zapata , f. sols , and a. j. leggett .josephson effect between trapped bose - einstein condensates ., 57(1):r28r31 , 1998 . g. e. zouraris . 
on the convergence of a linear two - step finite element method for the nonlinear schrdinger equation ., 35(3):389405 , 2001 .in this section we demonstrate that a rough potential does not exclude the regularity assumptions ( r1 ) and ( r2 ) , however their compatibility will in general rely on the choice for the initial value . for simplicity of the presentationwe consider the case ( i.e. a linear problem ) .the nonlinear case is briefly discussed at the end of this section .let denote a rough disorder potential and let denote a ground state or excited state to the stationary schrdinger equation where is the corresponding eigenvalue ( the chemical potential ) and is -normalized , i.e. . from elliptic regularity theory we know that the solution to problem admits higher regularity , i.e. ( cf . ) . however , since is rough , we can not expect any regularity beyond . in order to investigate the dynamics of , the potential trap is reconfigured . in our casethis means that we set , where is a non - negative smooth perturbation , say ( for simplicity ) . with thiswe seek \rightarrow h^1_0(\mathcal{d})$ ] with and let us now assume that denotes a solution to that is sufficiently regular .then , from equation we conclude that for . taking only the imaginary part of the equationyields by integrating from to , we have this means that we have to verify that the compatibility `` '' is well - defined for rough potentials .for we exploit the initial condition and obtain hence analogously , we obtain for that hence furthermore , since where ( for which we just derived corresponding bounds depending on ) , we can also conclude by elliptic regularity theory that from the equation we also make an important observation : we have , however we do _ not _ have -regularity as this would require , which is clearly not available due to the roughness of .therefore we can not repeat the same argument for .observe that for we have which implies that , but it is not in . in order to obtain would require the disorder potential to be at lest in ( which contradicts the notion of a disorder potential ) .consequently , we can neither hope for nor for .the only thing that we can hope for is to verify . and indeed , analogously to the proof of energy conservation we easily observe that with we conclude that the right - hand side is well - defined and bounded for rough potentials . in the nonlinear case similar arguments can be used .however , the calculations become significantly more technical since we typically do no longer have the conservation properties such as for . still for small timesit is possible to show an inequality which takes a comparable role .for instance , in the case of the cubic nonlinearity ( where is a parameter that characterizes the type and the number of particles ) it is possible to show that there exists a minimum time and constant ( independent of the regularity of the potential ) , such that for and provided that is sufficiently smooth . with thisit is possible to proceed in a similar way as in the linear case and we can draw the same conclusions .
this paper analyses the numerical solution of a class of non - linear schrödinger equations by galerkin finite elements in space and a mass- and energy - conserving variant of the crank - nicolson method due to sanz - serna in time . the novel aspects of the analysis are the incorporation of weak and strong disorder potentials , the consideration of a general class of non - linearities , and the proof of convergence with rates in under moderate regularity assumptions that are compatible with discontinuous potentials . for sufficiently smooth potentials , the rates are optimal without any coupling condition between the time step size and the spatial mesh width . crank - nicolson galerkin approximations to nonlinear schrödinger equations with disorder potentials patrick henning , daniel peterseim
this work is dealing with regularity , which is a property with deep implications in organisms . from the biological point of view regularityhas been related with radial symmetry , and irregularity with bilateral symmetry .the heuristic value of radial and bilateral symmetry in biology account for taxonomic issues , however , symmetry as well as disruption symmetry have been an empirical and intuitive approach accounting for structural properties in organisms . from a mathematical point of view, the property of regularity of a geometric form has not been formalized . based in previous results by , we hypothesize that _ eutacticity _ provides a measure of regularity based in the following argument .a set of vectors in , with a common origin , is called a star and a star is said to be eutactic if it can be viewed as the projection of orthogonal vectors in .it turns out that stars associated with regular polygons , polyhedra or , in general , polytopes , are eutactic and thus regularity and eutacticity are closely linked .a disadvantage of using eutacticity as a measure of regularity is that a star vector must be associated with the geometrical form under study .as we shall see , this is not a problem with echinoids .in fact , found that the flower - like patterns formed by the five ambulacral petals in 104 specimens of plane irregular echinoids ( from clypeasteroidea ) are eutactic .here we present a deeper study that overcome the restriction to plane irregular echinoids , using the five ocular plates ( op ) to define the star vector .additionally , we use a new criterion of eutacticity that provides a measure of the degree of eutacticity of a star which is not strictly eutactic . with these toolswe study the variability of eutacticity during geological time and to analyze pentamery variability during the evolution of sea urchins .sea urchins are pentameric organisms with an apical structure , called the apical disc .this structure includes five ocular plates ( op ) that can fold the vector star associate with each sea urchin species ( see fig .[ fig : fig1 ] and section [ sec : discoap ] for a detailed description ) . in this work ,we show that op can be useful even in ovoid echinoids , such as spatangoids , since the op are almost tangential to the aboral surface ( opposite to oral surface ) . using the op to define the star of vectors ,we analyze the regularity and changes in a macroevolutive and taxonomic level in a collection of 157 extinct and extant sea urchins .we conclude that evolution has preserved a high degree of regularity and , consequently , that the apical disk is a homogeneous and geometrically stable structure through the geological time .low values of regularity were recorded in some specific families and its biological consequences are discussed .this paper is organized as follows . in section [ sec : eutactic ] a mathematical introduction to the concept of eutactic star is presented .section [ sec : discoap ] describes the structure of the apical disc and its biological importance , making it the obvious choice to define a vector star which characterizes each specimen .experimental methods and results are devoted to section [ sec : resultados ] and , finally , discussion and conclusions are presented in section [ sec : discusion ] .our main hypothesis is that the concept of regularity of a biological form may play an important role in the study of phenotipic variation in evolution . 
For this goal, one must first be able to establish a formal criterion defining the regularity of a geometrical form, including a measure of how regular a form is. Mathematically, this property has not been defined and here, as a first step in this direction, we adopt the concept of eutacticity which, as we shall show, is closely related to regularity. We shall deal with a set of N vectors in n-dimensional space, with a common origin, called a _star_. In the cases of interest N > n, so the set of vectors cannot be linearly independent. The star is called _eutactic_ if its vectors are orthogonal projections of N orthogonal vectors in N-dimensional space, that is, there exist N orthogonal vectors in N-dimensional space and an orthogonal projector mapping them onto the vectors of the star.
The notion of eutacticity (from the Greek _eu_ = good and _taxy_ = arrangement) was first introduced by the Swiss mathematician L. Schläfli (about 1858) in the context of regular polytopes. Later, Hadwiger noticed that the vectors of a eutactic star are projections of an orthogonal basis of a higher-dimensional space and proved that the star associated with a regular polytope is eutactic. Thus, eutacticity is associated with regularity, and the remarkable properties of eutactic stars have been useful in different realms such as quantum mechanics, sphere packings, quasicrystals, graph and frame theory and crystal faceting (see the references therein).
A well-known necessary and sufficient condition for a star to be eutactic is due to Hadwiger himself, who proved that a star \(\{\mathbf{a}_1,\dots,\mathbf{a}_N\}\) in n-dimensional space is eutactic if and only if there is a real number \(\lambda\) such that
\[\sum_{i=1}^{N}(\mathbf{x}\cdot\mathbf{a}_i)\,\mathbf{a}_i=\lambda\,\mathbf{x}\]
is fulfilled for all vectors \(\mathbf{x}\). In the special case where \(\lambda=1\), the star is said to be _normalized eutactic_. A more practical form of the eutacticity criterion is obtained if the so-called structure matrix is introduced. Let \(A\) be the \(n\times N\) matrix whose columns are the components of the vectors \(\mathbf{a}_i\) with respect to a given fixed orthonormal basis of the space. In this case, the matrix form of Hadwiger's theorem states that the star represented by \(A\) is eutactic if and only if
\[AA^{t}=\lambda I,\]
for some scalar \(\lambda\) (here \(I\) is the \(n\times n\) unit matrix).
In this work we are dealing with stars measured in digital images of sea urchins, and thus a reliable numerical criterion of eutacticity, suitable for working with experimental measurements, is needed. Notice that a criterion such as ([eqn:aat]) is not useful since experimental errors may produce a matrix \(AA^{t}\) which is not exactly a multiple of the identity matrix. Thus, it is desirable to obtain a numerical criterion capable of measuring the degree of eutacticity of a star which is not strictly eutactic. Such a criterion has already been proposed and asserts that a star in n-dimensional space, represented by the structure matrix \(A\), is eutactic if and only if
\[\varepsilon(A)=\frac{\mathrm{tr}(AA^{t})}{\sqrt{n}\,\|AA^{t}\|}=1,\]
where \(\|\cdot\|\) denotes the Frobenius norm. Notice that the closer \(\varepsilon\) is to one, the more eutactic the star is. In the particular case of two-dimensional stars (\(n=2\)), it can be proved that \(\varepsilon\ge 1/\sqrt{2}\).
In previous work, vector stars were associated with the petaloid ambulacra of plane irregular echinoids. It was reported that, for 104 specimens of the Natural History Museum of London, the pentagonal stars thus defined fulfill an eutacticity criterion very accurately. The calculations carried out in that work present two main restrictions: a) stars are associated with plane, or almost plane, sea urchin specimens, and b) the eutacticity criterion used depends on the coordinate system and does not allow a measurement of the degree of eutacticity of stars which are not strictly eutactic.
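As a concrete illustration of these conditions (a standard calculation, not taken from the works discussed above), consider the star formed by the five unit vectors of a regular pentagon,
\[\mathbf{a}_k=\left(\cos\tfrac{2\pi k}{5},\;\sin\tfrac{2\pi k}{5}\right),\qquad k=0,\dots,4 .\]
Since \(\sum_{k}\cos^2\tfrac{2\pi k}{5}=\sum_{k}\sin^2\tfrac{2\pi k}{5}=\tfrac{5}{2}\) and \(\sum_{k}\cos\tfrac{2\pi k}{5}\,\sin\tfrac{2\pi k}{5}=0\), the structure matrix \(A=(\mathbf{a}_0\,\cdots\,\mathbf{a}_4)\) satisfies
\[AA^{t}=\tfrac{5}{2}\,I_2 ,\]
so Hadwiger's condition holds with \(\lambda=\tfrac{5}{2}\) and the numerical measure above gives
\[\varepsilon(A)=\frac{\mathrm{tr}(AA^{t})}{\sqrt{2}\,\|AA^{t}\|}=\frac{5}{\sqrt{2}\,(5/\sqrt{2})}=1 ,\]
as expected for the star of a regular polygon. A measured pentagonal star that deviates from perfect regularity yields a value strictly between \(1/\sqrt{2}\) and \(1\).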
Here we overcome these restrictions by using the eutacticity criterion of ([eqn:cosfi]) and by using the five ocular plates (OP) to associate a star of vectors to each sea urchin. As we shall see in what follows, besides the biological importance of the OP, using them to define a star of vectors allows us to study non-planar echinoids.
The apical disc in sea urchins, encircled in Fig. [fig:fig1], is a crown of biological structures at the apex of the test. It is positioned on the aboral surface of the test and is formed by five genital plates, five ocular plates and the madreporite. The OP are located at the point of origin of the ambulacral zones. The biological relevance of the ambulacra is given by the following reasons. Firstly, each ambulacrum consists of two, or even more, columns of plates extending from the margin of an OP to the edge of the mouth. In most echinoids each mature plate is perforated by two pores forming a pore pair; each pore pair gives passage to one tube foot, which is connected internally with the water vascular system. Secondly, the five ambulacra are a conspicuous sign of body-plan pentamery in all extant and extinct sea urchins and, finally, ambulacral rays are seen as homologous structures in echinoderms. We thus conclude that the OP, as the origin of the ambulacra, have more biological implications than the end of the petaloid ambulacra. In addition, the OP are almost tangential to the aboral surface, and thus they are useful even for ovoid echinoids. We then define the star associated with a particular echinoid as the set of five vectors pointing to the OP with origin at the centroid. The star thus defined allows us to test eutacticity in a wide range of echinoids and to study changes at a macroevolutionary and taxonomic level.
Table: list of the 47 families studied in this work; the number of specimens considered is indicated in parentheses (the table itself is not reproduced here).
We have analyzed 157 extant and extinct specimens of sea urchins from the collection of the Instituto de Ciencias del Mar (Universidad Nacional Autónoma de México) and from images of the Natural History Museum of London web site (http://www.nhm.ac.uk). As shown in Table [table:table1], the analyzed sea urchins belong to 47 families out of a taxonomic group of 95 families, according to the classification adopted in this work. Eleven of these families are radial and thirty-six bilateral. To each sea urchin we associate a vector star, with vectors pointing to the OP (shown in Fig. [fig:fig1]) and origin at the centroid. Measurements were carried out on digital images of the aboral surfaces, analyzed using the morphometric software packages _makefan6_ and _tpsdig2_. With the former, the OP are digitized, and the vector star coordinates are then obtained with the second program. Once the coordinates of the star are available, Eq. ([eqn:cosfi]) is used to calculate the eutacticity value of the star, _i.e._, ε.
Figure: (a) standard deviation of ε per order: a) Clypeasteroida, b) Cassiduloida, c) Holectypoida, d) Pygasteroida, e) Spatangoida, f) Holasteroida, g) Disasteroida. (b) Standard deviation of ε per family (only the bilateral families with the largest variations are displayed
): a) Pygasteridae, b) Holectypidae, c) Cassiduloidae, d) Clypeasteridae, e) Arachnoidae, f) Fibulariidae, g) Neolaganidae, h) Rotulidae, i) Echinarachniidae, j) Scutellidae, k) Collyritidae, l) Toxasteridae, m) Micrasteridae, n) Brissidae, o) Hemiasteridae, p) Schizasteridae.
Since one of our goals is to analyze the regularity of sea urchins through geological time, we must choose a taxonomic group that provides a phylogenetic reference in time. Taxonomic keys use the apical disc as a reference to describe the family level; thus the family level is the best choice, since it should show low variability of the apical disc. In fact, as shown in Fig. [fig:fig2], the family level shows lower variability in eutacticity values than the order level. Hence, this taxonomic level was used to represent regularity through geological time. By organizing the values of eutacticity per family, we are able to carry out a formal statistical analysis. From the properties of eutacticity, we can deduce that radial families have a high degree of eutacticity (their stars form regular pentagons) and, consequently, no variability (up to experimental errors). On the contrary, high variability is expected in bilateral families.
Before proceeding with a statistical analysis of the eutacticity values per family, we have to take into account the possibility of a stochastic nature of eutacticity. In order to reject this possibility, an experimental set of two hundred randomly generated bilateral stars was considered. A formal statistical analysis must then include three groups: radial, bilateral and random stars. The values of ε of the random sample yield a mean of 0.891026 with a population standard error of 0.00604. Our experimental sample of radial stars yields a mean of 0.995187423 and a standard deviation of 0.00526708. Finally, the experimental sample of bilateral stars gives a mean of 0.96499158 and a standard deviation of 0.062079301. The Shapiro–Wilk test applied to the joined radial and bilateral experimental sample and to the random sample shows that neither the experimental nor the random distribution is normal, and thus a non-parametric statistical analysis is needed. This non-parametric test shows that the probability that the differences between radial, bilateral and random stars arise by chance is lower than 0.0001. The possibility of a stochastic origin of regularity is thus rejected.
Now, concerning the analysis of the experimental sample per family, Fig. [fig:fig3] shows a scatter plot of the eutacticity values of the 47 families. From the figure, it is observed that the lowest degrees of regularity are recorded in Holasteridae, Corystidae, Collyritidae, Pourtalesiidae, Toxasteridae and Nucleolitidae. This observation will be revisited in the next section. A Wilcoxon/Kruskal–Wallis analysis of the 47 families was also carried out and, as a consequence, differences between families are likewise accepted.
Here the variability of eutacticity per family through geological time is studied. The experimental sample includes extant and extinct specimens. Once again we have to take into account that radial echinoids are associated with nearly eutactic stars; the most primitive groups, like the Paleozoic ones, are almost always totally radial. On the contrary, the eutacticity values of species from post-Paleozoic groups are less uniform, and thus more than two specimens per family are required. Fig. [fig:fig4] shows the mean values of ε at four geological time intervals, namely Paleozoic, Triassic–Jurassic, Cretaceous and Cenozoic. These intervals were chosen because, according to the paleontological record, at the beginning of each of them there was a rise in the speciation rate, that is, an increase in the number of families. As shown in Fig. [fig:fig4], post-Paleozoic sea urchins show the highest degree of variability. A statistical analysis, however, implies that there are no statistical differences in regularity through geological time.
Figure: mean values of ε of the experimental sample in four intervals of geological time.
In Fig. [fig:fig5] a plot of the eutacticity values, per family, through geological time is shown. The lowest values of eutacticity are recorded first in the early Mesozoic Collyritidae, and low values continue with the late Mesozoic and Cenozoic Holasteridae, Pourtalesiidae and Corystidae. As a matter of fact, all these families belong to the order Holasteroida, which turns out to be responsible for the prominent peak (f) in the standard deviation plot of Fig. [fig:fig2](a). In order to have a better understanding of the singularity of the order Holasteroida, Fig. [fig:fig6] shows an evolutionary cladogram with the phylogenetic relationships between orders. In this cladogram a representative star, and the mean value of eutacticity per order, is included. It clearly shows that the lowest values of regularity come from Disasteroida and are recorded in Spatangoida and Holasteroida. In fact, from the measured values, we can say that Holasteroida is an anti-eutactic group, _i.e._, mathematically irregular. Most living representatives of Holasteroida are deep-water inhabitants with exceedingly thin and fragile tests. Besides our view that regularity and irregularity constitute two important parameters for approaching ecological and evolutionary questions, the observed departure from regularity could have been a way to increase the amount of complexity in sea urchin morphology.
Figure: eutacticity values (ε) per family through geological time.
Traditionally, radial symmetry has been associated with regularity, and bilateral symmetry with irregularity. In this work we propose eutacticity as a measure of the regularity of a biological form which is independent of the radial or bilateral condition. With this hypothesis, we have shown that regularity has dominance over irregularity in sea urchin evolution; despite the fact that variability increases over time, statistically sea urchins show a high degree of regularity. This regularity is nearly perfect in the most primitive groups, belonging to the Paleozoic era, which were almost totally radial. A slight decrease of regularity is observed in post-Paleozoic sea urchins, with the notable exception of the order Holasteroida, which seems to constitute a critical event in sea urchin evolution.
*Acknowledgments*.
This work was inspired by Manuel Torres, whose death does not diminish the memory of his achievements and creativity; he will be greatly missed by us. Useful suggestions from M.E. Alvarez-Buylla, G. Cocho, A. Laguarda and F. Sols are gratefully acknowledged. This work was financially supported by the Mexican CONACyT (grant no. 50368) and the Spanish MCyT (grant no. FIS2004-03237).
N. Holland (1988) The meaning of developmental asymmetry for echinoderm evolution: a new interpretation. In C.R.C. Paul and A.B. Smith (eds.), Echinoderm Phylogeny and Evolutionary Biology. Oxford University Press, Oxford, pp. 13-25.
P. Lebrun (2000) Une histoire naturelle des échinides. 2ème partie: anatomie, ontogenèse et dimorphisme, locomotion, paléoécologie, origine et évolution des échinides. Minéraux & Fossiles, hors-série 10.
Melville and J.W. Durham (1966) Skeletal morphology. In R.C. Moore (ed.), Treatise on Invertebrate Paleontology, Part U, Echinodermata 3. The Geological Society of America and University of Kansas Press, Lawrence, Kansas.
An eutactic star in n-dimensional space is a set of N vectors which can be viewed as the projection of N orthogonal vectors in N-dimensional space. By suitably associating a star of vectors to a particular sea urchin, we propose that a measure of the eutacticity of the star constitutes a measure of the regularity of the sea urchin. We then study changes of regularity (eutacticity) at a macroevolutionary and taxonomic level in sea urchins belonging to the class Echinoidea. An analysis considering changes through geological time suggests a high degree of regularity in the shape of these organisms throughout their evolution. Rare deviations from regularity, measured in the order Holasteroida, are discussed.
Keywords: eutactic stars, bilateral symmetry, regularity, sea urchins.
test data generation ( tdg ) aims at automatically generating test - cases for interesting test _coverage criteria_. the coverage criteria measure how well the program is exercised by a test suite .examples of coverage criteria are : _ statement coverage _ which requires that each line of the code is executed ; _ path coverage _ which requires that every possible trace through a given part of the code is executed ; etc .there are a wide variety of approaches to tdg(see for a survey ) .our work focuses on _ glass - box _testing , where test - cases are obtained from the concrete program in contrast to _ black - box _testing , where they are deduced from a specification of the program . also , our focus is on _ static _ testing , where we assume no knowledge about the input data , in contrast to _ dynamic _ approaches which execute the program to be tested for concrete input values .the standard approach to generating test - cases statically is to perform a _symbolic _ execution of the program , where the contents of variables are expressions rather than concrete values .the symbolic execution produces a system of _ constraints _ consisting of the conditions to execute the different paths .this happens , for instance , in branching instructions , like if - then - else , where we might want to generate test - cases for the two alternative branches and hence accumulate the conditions for each path as constraints . the symbolic execution approach is usually combined with the use of _ constraint solvers _ in order to : handle the constraints systems by solving the feasibility of paths and , afterwards , to instantiate the input variables .tdg for declarative languages has received comparatively less attention than for imperative languages . in general ,declarative languages pose different problems to testing related to their own execution models , like laziness in functional programming ( fp ) and failing derivations in constraint logic programming ( clp ) .the majority of existing tools for fp are based on black - box testing ( see e.g. ) .an exception is where a glass - box testing approach is proposed to generate test - cases for curry . in the case of clp ,test - cases are obtained for prolog in ; and very recently for mercury in .basically the test - cases are obtained by first computing constraints on the input arguments that correspond to execution paths of logic programs and then solving these constraints to obtain test inputs for such paths . in recent work ,we have proposed to employ existing _ partial evaluation _ ( pe ) techniques developed for clpin order to automatically generate _ test - case generators _ for glass - box testing of bytecode .pe is an automatic program transformation technique which has been traditionally used to specialise programs w.r.t . 
a known part of its input data and , as futamura predicted , can also be used to compile programs in a ( source ) language to another ( object ) language ( see ) .the approach to tdgby pe of consists of two independent clp pe phases .( 1 ) first , the bytecode is transformed into an equivalent ( decompiled ) clp program by specialising a bytecode interpreter by means of existing pe techniques .( 2 ) a second pe is performed in order to supervise the generation of test - cases by execution of the clp decompiled program .interestingly , it is possible to employ control strategies previously defined in the context of clp pe in order to capture _ coverage criteria _ for glass - box testing of bytecode .a unique feature of this approach is that , this second pe phase allows generating not only test - cases but also test - case _ generators_. another important advantage is that , in contrast to previous work to tdgof bytecode , it does not require devising a dedicated symbolic virtual machine . in this work, we study the application of the above approach to tdgby means of pe to the prolog language .compared to tdgof an imperative language , dealing with prolog brings in as the main difficulty to generate test - cases associated to failing computations .this happens because an intrinsic feature of pe is that it only produces results associated to the _ non - failing _ derivations .while this is what we need for tdgof an imperative language ( like bytecode above ) , we now want to capture non - failing derivations in prolog and still rely on a standard partial evaluator .our proposal is to transform the original prolog program into an equivalent prolog program with explicit failure by partially evaluating a prolog interpreter which captures failing derivations w.r.t .the input program .this transformation is done in the phase ( 1 ) above . as another difference , in the case of bytecode , the underlying constraint domain only manipulates integers .however , the above phase ( 2 ) should properly handle the data manipulated by the program in the case of prolog .compared to existing approaches to tdgof prolog , our approach basically is of interest for bringing the advantages which are inherent in tdgby pe to the field of prolog : * it is _ more powerful _ in that we can produce test - case generators which are clp programs whose execution in clp returns further test - cases on demand without the need to start the tdg process from scratch ; * it is more _ flexible _ , as different coverage criteria can be easily incorporated to our framework just by adding the appropriate local control to the partial evaluator .* it is _ simpler _ to implement compared to the development of a dedicated test - case generator , as long as a clp partial evaluator is available .the rest of the paper is organized as follows . 
in the next section ,we give some basics on pe of logic programs and describe in detail the approach to tdgby pe proposed in .[ sec : control_flow ] discusses some fundamental issues like the prolog control - flow and the notion of computation path .then , sect .[ sec : explicit_failure ] describes the program transformation to make failure explicit , sect .[ sec : gener - test - cases ] outlines existing methods to properly handle symbolic data during the tdg phase , and finally sect .[ sec : future ] concludes and discusses some ideas for future work .in this section we recall the basics of partial evaluation of logic programming and summarize the general approach of relying on partial evaluation of clp for tdg of an imperative language , as proposed in .we assume familiarity with basic notions of logic programming and partial evaluation ( see e.g. ) .partial evaluation is a semantics - based program transformation technique which specialises a program w.r.t .given input data , hence , it is often called _program specialisation_. essentially , partial evaluators are non - standard interpreters which evaluate goals as long as termination is guaranteed and specialisation is considered profitable . in logic programming ,the underlying technique is to construct ( possibly ) _ incomplete _ sld trees for the set of atoms to be specialised . in an incomplete tree , it is possible to choose _ not _ to further unfold a goal .therefore , the tree may contain three kinds of leaves : failure nodes , success nodes ( which contain the empty goal ) , and non - empty goals which are not further unfolded .the latter are required in order to guarantee termination of the partial evaluation process , since the sld being built may be infinite . even if the sld trees for fully instantiated initial atoms ( as regards the _ input _ arguments ) are finite , the sld trees produced for partially instantiated initial atoms may be infinite .this is because the sld for partially instantiated atoms can have ( infinitely many ) more branches than the actual sld tree at run - time .the role of the _ local control _ is to determine how to construct the ( incomplete ) sld trees . in particular , the _ unfolding rule_ decides , for each resolvent , whether to stop unfolding or to continue unfolding it and , if so , which atom to select from the resolvent . on the other hand ,partial evaluators need to compute sld - trees for a number of atoms in order to ensure that all atoms which appear in non - failing leaves of incomplete sld trees are `` covered '' by the root of some tree ( this is known as the closedness condition of partial evaluation ) .the role of the _ global control _ is to ensure that we do not try to compute sld trees for an infinite number of atoms .the usual way of achieving this is by applying an _ abstraction operator _ which performs `` generalizations '' on the atoms for which sld trees are to be built .the global control returns a set of atoms .finally , the partial evaluation can then be systematically extracted from the set ( see for details ) .traditionally , there have been two different approaches regarding the way in which control decisions are taken , _ on - line _ and _ off - line _ approaches . in _ online _ pe , all control decisions are dynamically taken during the specialisation phase . 
in _ offline _ pe , a set of previously computed annotations ( often manually provided ) gives information to the control operators to decide , 1 ) when to stop unfolding ( _ memoise _ ) in the local control , and 2 ) how to perform generalizations in the global control .the development of pe techniques has allowed the so - called `` interpretative approach '' to compilation which consists in specialising an interpreter w.r.t . a fixed object code .interpretive compilation was proposed in futamura s seminal work , whereby compilation of a program written in a ( _ source _ ) programming language into another ( _ object _ ) programming language is achieved by partially evaluating an interpreter for written in w.r.t .the advantages of interpretive ( de-)compilation w.r.t.dedicated ( de-)compilers are well - known and discussed in the pe literature ( see , e.g. , ) . very briefly , they include : _ flexibility _ , it is easier to modify the interpreter in order to tune the decompilation ( e.g. , observe new properties of interest ) ; _ easier to trust _ , it is more difficult to prove that ad - hoc decompilers preserve the program semantics ; _ easier to maintain _ , new changes in the language semantics can be easily reflected in the interpreter . in recent work, we have proposed an approach to test data generation ( tdg ) by pe of clp and used it for tdgof bytecode .the approach is generic in that the same techniques can be applied to tdg other both low and high - level imperative languages . in figure[ fig : overview ] we overview the main two phases of this technique . in * phasei * , the input program written in some ( imperative ) language is compiled into an equivalent clp program .this compilation can be achieved by means of an ad - hoc decompiler ( e.g. , an ad - hoc decompiler of bytecode to prolog ) or , more interestingly , can be achieved automatically by relying on the first futamura projection by means of pe for logic programs as explained above ( e.g. , ) .now , the aim of * phase ii * is to generate test - cases which traverse as many different execution paths of as possible , according to a given coverage criteria . from this perspective, different test data will correspond to different execution paths . with this aim , rather than executing the program starting from different input values ,the standard approach consists in performing _ symbolic execution _ such that a single symbolic run captures the behavior of ( infinitely ) many input values .the central idea in symbolic execution is to use constraint variables instead of actual input values and to capture the effects of computation using constraints .hence , the compilation from to clp allows us to use the standard clp execution mechanism to carry out this phase .in particular , by running the program without input values , each successful execution corresponds to a different computation path in . rather than relying on the standard execution mechanism , we have proposed in to use pe of clp to carry out * phase ii*. 
essentially , we can rely on a clp partial evaluator which is able to solve the constraint system , in much the same way as a symbolic abstract machine would do .note that performing symbolic execution for tdgconsists in building a finite ( possibly unfinished ) evaluation tree by using a non - standard execution strategy which ensures both a certain coverage criterion and termination .this is exactly the problem that _ unfolding rules _ , used in partial evaluators of ( c)lp , solve .in essence , partial evaluators are non - standard interpreters which receive a set of partially instantiated atoms and evaluate them as determined by the so - called unfolding rule .thus , the role of the unfolding rule is to supervise the process of building finite ( possibly unfinished ) sld trees for the atoms .this view of tdgas a pe problem has important advantages .first , we can directly apply existing , powerful , unfolding rules developed in the context of pe .second , it is possible to explore additional abilities of partial evaluators in the context of tdg . in particular , the generation of a residual program from the evaluation tree returns a program which can be used as a _ test - case generator _, i.e. , a clp program whose execution in clp returns further test - cases on demand without the need to start the tdgprocess from scratch .in the rest of the paper , we study the application of this general approach to tdgof prolog programs .as we have already mentioned , test data generation is about producing test - cases which traverse as many different execution paths as possible . from this perspective, different test data should correspond to different execution paths .thus , a main concern is to specify the computation paths for which we will produce test - cases .this requires first to determine the control flow of the considered language . in this section, we aim at defining the control flow of prolog programs that we will use for tdg .test data will be generated for the computation paths in the control flow .as usual a prolog program consists of a set of predicates , where each predicate is defined as a sequence of clauses of the form : - with .a predicate is univocally determined by its _ predicate signature _ , being the name of the predicate and its arity . throughout the rest of the paper we will consider prolog programs with the following features : * rules are normalized , i.e., arguments in the head of the rule are distinct variables .the corresponding bindings will appear explicitly in the body as unifications .* atoms appearing in the bodies of rules can be : unifications ( considered as builtins ) , calls to defined predicates , term checking builtins ( ` = = /2 ` , ` \==/2 ` , etc ) , and arithmetic builtins ( ` is/2 ` , ` < /2 ` , ` = < /2 ` , etc ). other typical prolog builtins like ` fail/0 ` , ` ! /0 ` , ` if/3 ` , etc , have been deliberately left out to simplify the presentation . *all predicates must be moded and well - typed .we will assume the existence of a `` ` : - pred ` '' declaration associated with each predicate specifying the type expected for each argument ( see as example the declarations in fig .[ fig : cfgs ] ) . 
note that this assumption is sensible in the context of tdg(as the aim is the automatic generation of test _ input _ ) .also , it should not be a limitation as analyses that can automatically infer this information exist .the control flow in prolog programs is significantly more complex than in traditional imperative languages .the declarative semantics of prolog implies some additional features like : 1 ) several forms of backtracking , induced by the failure of a sub - goal , or by non - deterministic predicates ; or 2 ) forced control flow change by the predicate `` cut '' . traditionally , control - flow graphs ( cfgs for short ) are used to statically represent the control - flow of programs . typically , in a cfg, nodes are blocks containing a set of sequential instructions , and edges represent the flows that the program can follow w.r.t .the semantics of the corresponding programming language . in the literature ,cfgs for prolog ( and mercury ) have been used for the aim of tdgin ( for mercury ) . in particular , cfgsdetermine the computation paths for which test - cases will be produced .our framework relies on the cfgs of which are known as _ p - flowgraph _ s .as will be explained later , there are some differences between these cfgs and the ones in which lead to different test - cases . [ cols="^,^ " , ]figure [ fig : cfgs ] depicts the prolog code together with the corresponding cfgs for predicates ` foo/2 ` and ` sorted/1 ` .predicate ` foo/2 ` , given a number in its first argument , returns , in the second one , the value ` pos ` if the number is positive and ` zero ` if it is zero .if the number is negative , it just fails .predicate ` sorted/1 ` , given a list of numbers , checks whether the list is strictly sorted , in that case it succeeds , otherwise it fails .the cfgs contain the following nodes : * a non - terminal node associated to each atom in the body of each clause , * a set of terminal nodes `` t '' representing the success of the -th clause , and * the terminal node `` f '' to represent failure .as regards edges , in principle all non - terminal nodes have two output flows , corresponding to the cases where the builtin or predicate call succeeds or fails respectively .they are labeled as `` yes ''or `` no '' for builtins ( including unifications ) , and as `` * * rs * * '' ( _ return - after - success _ ) or `` * * rf * * '' ( _ return - after - failure _ ) for predicate calls . there is an exception in the case of unifications where one of the arguments is a variable , in which case the unification can not fail .this can be known statically by using the mode information .see for example nodes `` ` z = pos ` '' and `` ` z = zero ` '' in the ` foo/2 ` cfg . both `` yes '' and `` * * rs * * '' edges point to the node representing the next atom in the clause or to the corresponding `` t '' node if the atom is the last one . finally , each `` t '' node has an output edge labeled as `` redo '' to represent the case in which the predicate is asked for more solutions .all `` no '' , `` * * rf * * '' and `` redo '' edges point either to the node corresponding to the first previous non - deterministic call in the same clause , or the first node of the following clause , or the `` f '' node if no node meets the above conditions . see as an example the `` * * rs * * '' and `` * * rf * * '' edges from the non - terminal node for sorted([y|r ] ) . 
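Since the figure itself is not reproduced in this text, a plausible rendering of the two running examples, consistent with the behaviour and the CFG nodes described above, is the following (the exact syntax of the `:- pred` declarations and the type names are assumptions, not taken from the original figure):
....
% foo/2: first argument is an input integer, second an output atom.
:- pred foo(int, atm).
foo(X, Z) :- X > 0, Z = pos.
foo(X, Z) :- X = 0, Z = zero.

% sorted/1: succeeds iff the input list of integers is strictly sorted.
:- pred sorted(list(int)).
sorted(L) :- L = [].
sorted(L) :- L = [_].
sorted(L) :- L = [X,Y|R], X < Y, sorted([Y|R]).
....
Note that the rules are written in the normalized form assumed above: head arguments are distinct variables and all bindings appear as explicit unifications in the bodies, which is what makes nodes such as `Z = pos` and `Z = zero` visible in the CFGs.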
in order to define the computation paths determined by the cfgs, every edge in every cfg is labeled with a unique natural number . an special edge labeled with `` 0 '' and represents the entry of predicate .given the cfg for predicate , a _ computation sub - path _ is a sequence of numeric labels ( natural numbers ) s.t .: * corresponds to either an entry , an `` * * rs * * '' , an `` * * rf * * '' or a `` redo '' edge , * leads to a terminal node or to a predicate call , and * for all consecutive labels , there exists a node corresponding to a builtin in the cfg of , for which is an input flow and is an output flow .given the cfgs corresponding to the set of predicates defining a program , a _ computation path _( cp for short ) for predicate is a concatenation ( ) of computation sub - paths such that : * first label in is either , in which case we say it is a _ full _ cp , or corresponds to a `` redo '' edge , in which case we say it is a _ partial _ cp ( pcp for short ) .* last label in leads to a terminal node in the cfg of .if it is a node the cp is said to be _ successful _ otherwise it is called _failing_. * for all whose last label leads to a node corresponding to a predicate call , , is a cp for the called predicate , and : * * if is successful then the first label in corresponds to an `` * * rs * * '' edge , * * otherwise ( is failing ) , it corresponds to an * rf * edge .* for all whose first label corresponds to a `` redo '' edge flowing from a `` t '' node in the cfg of predicate , , , whose first label corresponds either to an entry edge or to a `` redo '' edge flowing from `` t '' , , of the cfg of .if a cp contains at least one label corresponding to a `` redo '' flow , then the cp is said to be an _ after - retry _ cp .the rest of the cps are _ first - try _ cps .for example in ` foo/2 ` , = and = are first - try successful cps ; = is a first - try failing branch ; = is an after - retry successful cp ( although this one is unfeasible as and are disjoint conditions ) , and = is an after - retry failing branch . in ` sorted/1 ` , = is a first - try successful cp and = is a first - try failing cp .it is interesting to observe the correspondence between the cps and the test data that make the program traverse them . in ` foo/2 ` , is followed by goal ` foo(1,z ) ` , by goal `foo(0,z ) ` , by ` foo(-1,z ) ` , is an unfeasible path , and is followed by ` foo(0,z ) ` when we ask for more solutions . as regards ` sorted/1 ` , is followed by the goal ` sorted([0,1 ] ) ` and by ` sorted([0,1,0 ] ) ` . as we will see in sect .[ sec : gener - test - cases ] , these will become part of the test - cases that we automatically infer .a key feature of our cfgs is that they make explicit the fact that after failing with a clause the computation has to re - try with the following clause , unless a non - deterministic call is left behind .e.g. , in ` foo/2 ` the cfg makes explicit that the only way to get a first - try failing branch is through the cp , hence traversing , and failing in , both conditions and .therefore , a test data to obtain such a behavior will be a negative number for argument .other approaches , like the one in , do not handle flows after failure in the same way .in fact , in , edge `` 3 '' in ` foo/2 ` goes directly to node `` f '' .it is not clear if these approaches are able to obtain such a test data . 
as another difference with previous approaches to tdgof prolog, we want to highlight that we use cfgs just to reason about the program transformation that will be presented in the following section and , in particular , to clarify which features we want to capture .however , in previous approaches , test - cases are deduced directly from the cfgs .as we outlined in sect .[ sec : intro ] , an intrinsic feature of the second phase of our approach is that it can only produce results associated to non - failing derivations .this is the main reason why the general approach to tdg by pe sketched in sect .[ sec : basics - tdg - partial ] is directly applicable only to tdg of imperative languages . to enable its application to prolog , we propose a program transformation which makes failure explicit in the prolog program .the specialisation of meta - programs has been proved to have a large number of interesting applications .futamura projection s to derive compiled code , compilers and compiler generators fall into this category .the specialization of meta - interpreters for non - standard computation rules has also been studied .furthermore , language extensions and enhancements can be easily expressed as meta - interpreters which perform additional operations to the standard computation . in short, program specialisation offers a general compilation technique for the wide variety of procedural interpretations of logic programs . among them , we propose to carry out our transformation which makes failure in logic programs explicit by partially evaluating a prolog meta - interpreter which captures failing derivations w.r.t . the original program .first , in sect . [ sec : interpreter ]we describe such a meta - interpreter emphasizing the prolog control features which we want to capture . then , sect .[ sec : control_pe ] describes the control strategies which have to be used in pe in order to produce an effective transformation . given a prolog program and given a goal , our aim is to define an interpreter in which the computation of the program and goal produces the same results as the ones obtained by using the standard prolog computation but with the difference that failure is never reported . instead, an additional argument will be bound to the value `` yes '' , if the computation corresponds to a successful derivation , and to `` no '' if it corresponds to a failing derivation .predicate ` solve/4 ` is the main predicate of our meta - interpreter whose first and second arguments are the predicate signature and arguments of the goal to be executed ; and its third argument is the answer ; by now we ignore the last argument .for instance , the call ` solve(foo/2,[0,z],answer , _ ) ` succeeds with and , and ` solve(foo/2,[-1,z],answer , _ ) ` also succeeds , but with .the interpreter has to handle the following issues : 1 .the prolog _ backtracking _ mechanism has to be explicitly implemented .to this aim , a stack of _ choice points _ is carried along during the computation so that : * if the derivation fails : ( 1 ) when the stack is empty , it ends up with success and returns the value `` no '' , ( 2 ) otherwise , the computation is resumed from the last choice point , if any ; * if it successfully ends : ( 1 ) when the stack is empty , the computation finishes with answer `` yes '' , ( 2 ) otherwise , the computation is resumed from the last choice point .2 . 
When backtracking occurs, all variable bindings between the current point and the choice point to resume from have to be undone. 3. The interpreter has to be implemented in a _big-step_ fashion. This is a requirement for obtaining an effective decompilation. More details are given in Sect. [sec:control_pe].
....
solve(P/Ar, Args, Answer, TNCPs) :-
    pred(P/Ar, _),
    build_s0(P/Ar, Args, S0, OutVs),
    exec(Args, S0, Sf),
    Sf = st(_, _, _, OutVs1, Answer, TNCPs/_),
    OutVs1 = OutVs.

exec(_, S, Sf) :-
    S = st(_, [], [], OutVs, yes, NCPs),
    Sf = st(_, _, _, OutVs, yes, NCPs).
exec(_, S, Sf) :-
    S = st(_, [], [_|_], OutVs, yes, NCPs),
    Sf = st(_, _, _, OutVs, yes, NCPs).
exec(_, S, Sf) :-
    S = st(_, _, [], OutVs, no, TNCPs/0),
    Sf = st(_, _, _, OutVs, no, TNCPs/0).
exec(Args, S, Sf) :-
    S = st(_, [], [CP|CPs], _, yes, TNCPs/0),
    build_retry_state(Args, CP, CPs, TNCPs, S1),
    exec(Args, S1, Sf).
exec(Args, S, Sf) :-
    S = st(_, _, [CP|CPs], _, no, TNCPs/0),
    build_retry_state(Args, CP, CPs, TNCPs, S1),
    exec(Args, S1, Sf).
....
....
exec(Args, S, Sf) :-
    S = st(PP, [A|As], CPs, OutVs, yes, TNCPs/ENCPs),
    PP = pp(P/Ar, ClId, Pt),
    internal(A),
    functor(A, AF, AAr),
    A =.. [AF|AArgs],
    next(Pt, Pt1),
    solve(AF/AAr, AArgs, Ans, ENCPs1),
    TNCPs1 is TNCPs + ENCPs1,
    ENCPs2 is ENCPs + ENCPs1,
    PP1 = pp(P/Ar, ClId, Pt1),
    S1 = st(PP1, As, CPs, OutVs, Ans, TNCPs1/ENCPs2),
    exec(Args, S1, Sf).
exec(Args, S, Sf) :-
    S = st(PP, [A|As], CPs, OutVs, yes, NCPs),
    PP = pp(P/Ar, ClId, Pt),
    builtin(A),
    next(Pt, Pt1),
    run_builtin(PP, A, Ans),
    PP1 = pp(P/Ar, ClId, Pt1),
    S1 = st(PP1, As, CPs, OutVs, Ans, NCPs),
    exec(Args, S1, Sf).
....
Figure [fig:interpreter] shows an implementation of a meta-interpreter which handles the above issues. The fourth argument of the main predicate `solve/4`, named `TNCPs`, contains upon success the total number of choice points not yet considered, whose role will be explained later. The interpreter assumes that the program is represented as a set of `pred/2` and `clause/3` facts. There is a `pred/2` fact per predicate, providing its predicate signature, number of clauses and mode information, and a `clause/3` fact per clause, providing the actual code and a clause identifier. Predicate `solve/4` basically builds an initial state `S0`, by calling `build_s0/4`, and then delegates to `exec/3` to obtain the final state `Sf` of the computation.
the output information , `outvs ` , is taken from ` sf ` .the state carried along is of the form ` st(pp , g , cps , outvs , ans , ncps ) ` , where ` pp ` is the current program point , ` g ` the current goal , ` cps ` is the stack of choice points ( list of program points ) , ` outvs ` the list of variables in ` g ` corresponding to the output parameters of the original goal , ` ans ` the current answer ( `` yes ''or `` no '' ) and ` ncps ` the number of choice points left behind .a program point is of the form ` pp(p / ar , clid , pt ) ` , where ` p / ar ` , ` clid ` and ` pt ` are the predicate signature , the clause identifier and the program point of the clause at hand .predicate ` exec/3 ` implements the main loop of the interpreter .given the current state in its second argument it produces the final state of the computation in the third one .it is defined by the seven clauses which are applied in they following situations : : : _ the current goal is empty , the answer `` yes '' and there are no pending choice points . _ then , the computation finishes with answer `` yes '' . the current answer is actually used as a flag to indicate whether the previous step in the computation succeeded or failed ( see the last two ` exec/3 ` clauses ) . : : _ as but having at least one choice point ._ this clause represents the solution in which the computation ends .the 4 clause takes the other alternatives . : : _ the previous step failed and there are no pending choice points_. then , the computation ends with answer `` no '' . : : _ the current goal is empty , the answer `` yes '' and there is at least one pending choice point ._ this is the same situation as in the clause , however in this case the alternative of resuming from the last choice point is taken .the corresponding state ` s ` is built by means of ` build_retry_state/5 ` and the computation is resumed from ` s ` by recursively calling ` exec/3 ` . : : _ the previous step failed and there is at least one pending choice point . _ then , the computation is resumed from the last choice point in the same way as in the previous clause . : : _ the first atom to be solved is user - defined . _ a call to ` solve/4 ` handles the atom , and the computation proceeds with the next program point of the same clause which was the current one before calling ` solve/4 ` .this way of solving a predicate call makes the interpreter _ big - step _ ( issue ( 3 ) above ) . : : _ the first atom to be solved is a builtin ._ then , ` run_builtin/3 ` produces the corresponding answer , and the computation proceeds with the following program point .an interesting observation ( also applicable for the previous clause ) is that the answer obtained from ` run_builtin/3 ` ( or ` solve/4 ` ) is now set up as the answer of the next state .this will make the computation go through the 3 or 5 clauses in the following step , if the obtained answer was `` no '' .the correspondence between these clauses and the flows in the cfgs is as follows : clauses , and represent the output edges from every `` t '' node .clause represents the `` no '' edges to `` f '' nodes and the `` no '' edges to non - terminal nodes. finally clauses and represents the execution of builtins and predicate calls in non - terminal nodes and their corresponding `` yes '' edges .let us now explain how the interpreter handles the above three issues . 
to handle ( 1 ) , a stack of choice points is carried along within the state , initialised to contain all initial program points of each clause defining the predicate to be solved , except for the first one .e.g. , the initial stack of choice points for ` sorted/1 ` is ` [ pp(sorted/1,2,1),pp(sorted/1,3,1 ) ] ` .how this stack is used to perform the backtracking is already explained in the description of the 4 and 5 ` exec/3 ` clauses above . as regards issue ( 2 ) , a quite simple way to implement this in prolog is to produce the necessary fresh variables every time the computation is resumed .this is done inside ` build_retry_state/5 ` .the corresponding unification to link the fresh variables with the original goal variables is made at the end ( see last line of ` solve/4 ` ) .this is the reason why 1 ) the list of the actual variables used in the current goal needs to be carried along within the state ; and 2 ) the original arguments are carried along as the first argument of ` exec/3 ` , as the original ground arguments provided , have to be used when resuming from a choice point .finally , it is worth mentioning that ` solve/4 ` does not return the actual stack of choice points but only the number of them .this means that during a computation the interpreter only considers choice points of the predicate being solved .the question is then , how can the interpreter backtrack to the last choice point , including those induced by other computations of ` solve/4 ` ?e.g. , how can the interpreter follow edge `` 13 '' in the cfg of ` sorted/1 ` ? the interpreter performs the backtracking in the following way : 1 ) the total number of choice points left behind , ` tncps ` , is carried along within the state and finally returned in the last argument of ` solve/4 ` .2 ) the number of choice points corresponding to invoked predicates , ` encps ` , is also carried along .it is updated right after the call to ` solve/4 ` in the 6 clause of ` exec/3 ` .both numbers are stored in the last argument of the state as ` tncps / encps ` .3 ) execution is resumed from choice points of the current predicate only if , as it can be seen in the 4 and 5 clauses .otherwise , the computation just fails and prolog s backtracking mechanism is used to ask the last invoked predicate for more solutions .this indeed means that the non - determinism of the program is still implicit .the specialisation of interpreters has been studied in many different contexts , see e.g. .very recently , proposed control strategies to successfully specialise low - level code interpreters w.r.t . non trivial programs . herewe demonstrate how such guidelines can be , and should be , used in the specialisation of non - trivial prolog meta - interpreters .they include : 1 ._ big - step _ interpreter .this solves the problem of handling recursion ( see ) and enables a compositional specialisation w.r.t .the program procedures ( or predicates ) . note that an effective treatment of recursion is specially important in prolog programs where recursion is heavily used .optimality _ issues .optimality must ensure that : a ) the code to be transformed is traversed exactly once , and b ) residual code is emitted once in the transformed program . to achieve optimality , during unfolding ,all atoms corresponding with _ divergence _ or _ convergence points _ in the cfg of the program to be transformed , has to be _ memoised _ ( see sect . [sec : pe - basics ] ) . 
a divergence ( convergence )point is a program point from ( to ) which two or more flows originate ( converge ) .we already explained that the interpreter in fig .[ fig : interpreter ] is big - step . as regards optimality , by looking at the cfgs of fig .[ fig : cfgs ] , we can observe : 1 ) all program points are divergence points except those corresponding with unifications in which one argument is a variable , and 2 ) the first program point of every clause , except for the one of the first clause , is a convergence point .we assume that and denote , respectively , the set of convergence points and divergence points of a predicate .we follow the syntax of for pe annotations .an annotation is of the form `` \rightarrow ann ~pred$ ] '' where is an optional precondition defined as a logic formula , is the kind of annotation ( only * memo * in this case ) , and is a predicate descriptor , i.e. , a predicate function and distinct free variables .then , to achieve an effective transformation , we specialise the interpreter in fig . [fig : interpreter ] w.r.t .the program to be transformed by using the following annotation for each predicate ` p / ar ` in the program : additionally ` solve/4 ` and ` run_builtin/3 ` are also annotated to be memoised always to avoid code duplications .this already describes how the specialisation has to be steered in the local control .as regards the global control , the only predicate which can introduce non - termination is ` exec/3 ` .its first and third arguments contain a fixed structure with variables .the second one might be problematic as it ranges over the set of all computable states at specialisation time .note that the number of computable states remains finite thanks to the big - step nature of the interpreter .still , it can happen that the same program point is reached with different values for the ` ncps ` sub - term of the state .therefore , if one wants to achieve the optimality criterion above , such argument has to be always generalised in global control .figure [ fig : transformed ] depicts the transformed code we obtain for predicate ` foo/2 ` .it can be observed that there is a clear correspondence between the transformed code and the cfg in fig .[ fig : cfgs ] .thus , predicate ` solve/4 ` represents the node `` ` x>0 ` '' , ` exec_1/5 ` implements its continuation , whose three clauses correspond to the three sub - paths , and respectively .predicate ` exec_2/4 ` represents the node `` ` x=0 ` '' and ` exec_3/5 ` implements its continuation , whose two clauses correspond to the sub - paths and .note that edge `` 8 '' is not considered in the meta - interpreter ( nor in the transformed program ) as it is meaningless for tdg .it is worth mentioning that the transformed program captures the way in which variable bindings are undone .for instance in ` solve(foo/2,[c , d],\ldots ) ` , if we keep track of variables ` c ` and ` d ` , it can be seen that ` d ` , which corresponds to variable ` z ` in the original code , is only used for the final unification ` f=[d ] ` , while new fresh variables are used for the unifications with ` pos ` and ` zero ` .however , variable ` c ` , which corresponds to variable ` x ` in the original code , is actually used for the checks in ` run_builtin_1/2 ` and ` run_builtin_2/2 ` .this turns out to be fundamental when trying to obtain test data associated to the _ first - try failing _. 
It must be the same variable which, at the same time, fails both the `>0` check and the `=0` check. Otherwise we cannot obtain a negative number as test data for such a CP. Finally, observe that the original Prolog arithmetic builtins have been (automatically) transformed into their `clpfd` counterparts.
....
solve(foo/2, [C,D], A, B) :-
    run_builtin_1(E, C),
    exec_1(C, E, F, A, B),
    F = [D].

exec_1(A, no, F, G, H) :- exec_2(A, F, G, H).
exec_1(_, yes, [pos], yes, 1).
exec_1(A, yes, F, G, H) :- exec_2(A, F, G, H).

exec_2(A, G, H, I) :-
    run_builtin_2(K, A),
    exec_3(K, G, H, I).

exec_3(no, [_], no, 0).
exec_3(yes, [zero], yes, 0).

run_builtin_1(yes, A) :- A #> 0.
run_builtin_1(no, A)  :- \+ A #> 0.
run_builtin_2(yes, A) :- A #= 0.
run_builtin_2(no, A)  :- \+ A #= 0.
....
Once the original Prolog program has been transformed into an equivalent Prolog program with explicit failure, we can use the approach outlined above to carry out *Phase II* (see Fig. [fig:overview]) and generate test data both for successful and failing derivations. As explained in Sect. [sec:general-scheme], the idea is to perform a second PE over the CLP transformed program, where the unfolding rule plays the role of the coverage criterion. An unfolding rule implementing the _block-count(k)_ coverage criterion was proposed in previous work. A set of computation paths satisfies the _block-count criterion_ if it includes all terminating computation paths which can be built such that the number of times each block is visited does not exceed the given k. The blocks the criterion refers to are the blocks or nodes in the CFGs of the original Prolog program. As the only form of loops in Prolog are recursive calls, the k in _block-count_ actually corresponds to the number of recursive calls which are allowed.
Unfortunately, the presence of Prolog's negation in our transformed programs complicates this phase. The negation appears in the transformed program for the "no" branches originating from nodes corresponding to a (possibly) failing builtin. See for example predicates `run_builtin_1` and `run_builtin_2` in the transformed code of `foo/2` in Fig. [fig:transformed]. While Prolog's negation works well for ground arguments, it gives no information for free variables, as is required in the evaluation performed during this TDG phase. In particular, in the `foo/2` example, given the computation which traverses the calls `\+ A#>0` and `\+ A#=0` (corresponding to the first-try failing path in the CFG), we need to infer that `A<0`.
In other words, we somehow need to turn the _negative_ information into _positive_ information. This transformation is straightforward for arithmetic builtins: we just have to replace `\+ E1#=E2` by `E1#\=E2`, `\+ E1#>E2` by `E1#=<E2`, etc. This transformation allows us to obtain the following set of test-cases for `foo/2`:
\[\left\{\begin{array}{ll}
\langle\mbox{\tt [1],[pos],yes/first-try}\rangle , & \langle\mbox{\tt [1],[\_],no/after-retry}\rangle ,\\
\langle\mbox{\tt [0],[zero],yes/first-try}\rangle , & \langle\mbox{\tt [-100],[\_],no/first-try}\rangle
\end{array}\right\}\]
They correspond (reading by rows) to the different CPs of `foo/2` described in Sect. [sec:control_flow]. Each test-case is represented as a 3-tuple containing the list of input arguments, the list of output arguments and the answer. The answer takes the form Answer/Kind, with Answer in {yes, no} and Kind in {first-try, after-retry} (for simplicity, we decided not to include in the interpreter the support needed to compute the first-try/after-retry value), so that we obtain sufficient information about the kind of CP to which the test-case corresponds (see Sect. [sec:control_flow]). As there are no recursive calls in `foo/2`, such test-cases are obtained using the _block-count_ criterion for any k (greater than zero). A finite domain is used for the integer arguments.
However, it can be the case that negation involves unifications with symbolic data. For example, the transformed code for `sorted/1` includes the negations `\+ L=[]` and `\+ L=[_]`. As before, we might write transformations for the negated unifications involving lists, so that at the end it is inferred that `L=[_,_|_]`. However, this would be too ad-hoc a solution, as many distinct term structures, different from lists, can appear in negated unifications. A solution to this problem has recently been proposed for Mercury in the same context. It roughly consists of the following: 1) it is assumed that each predicate argument is well-typed; 2) a domain is initialised for each variable, containing the set of possible functors the variable can take; 3) when a negated unification involving an output variable is found (in their terminology, a negated _decomposition_), the corresponding functor is removed from the variable's domain; at this point, the assumption that complex unifications are broken down into simple ones is crucial; 4) finally, a search algorithm is described to generate particular values from the type definition and the final domain of the variable. The technique is implemented using CHR and can in principle be directly used for our purposes as well. On the other hand, advanced declarative languages like Toy make possible the co-existence of different constraint domains. In particular, the co-existence of boolean and numeric constraint domains makes it possible to use _disequalities_ involving both symbolic data and numbers. This allows, for example, expressing the negated unifications `\+ L=[]` and `\+ L=[_]` as the disequality constraints `L/=[]` and `L/=[_]`.
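As an illustration of the arithmetic case discussed above, the following sketch shows how the negation-based checks of the transformed `foo/2` (Fig. [fig:transformed]) could be rewritten positively; the rewritten clauses are our own illustration of the replacement rules just described (assuming a standard `clpfd` library), not code taken from the original figure:
....
% Negation-based version, as in the transformed program:
%   run_builtin_1(no, A) :- \+ A #> 0.
%   run_builtin_2(no, A) :- \+ A #= 0.
%
% Positive version, obtained by applying the replacements
% \+ E1#>E2  ==>  E1#=<E2   and   \+ E1#=E2  ==>  E1#\=E2 :
run_builtin_1(yes, A) :- A #> 0.
run_builtin_1(no,  A) :- A #=< 0.
run_builtin_2(yes, A) :- A #= 0.
run_builtin_2(no,  A) :- A #\= 0.
....
With this version, the first-try failing path of `foo/2` accumulates the constraints `A #=< 0` and `A #\= 0` even when `A` is a free variable, from which a labeling step can produce a negative test input such as the value -100 appearing in the test-cases above.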
additionally , by relying on the boolean constraint solver , the negated arithmetic builtins `` ` \+ a#>0 ` '' and `` ` \+a#=0 ` '' can be encoded as `` ` ( a#>0 ) = = false ` '' and `` ` ( a#=0 ) = = false ` '' .this is in principle a more general solution that we want to explore , although a thorough experimental evaluation needs to be carried out to demonstrate its applicability to our particular context .now , by using any of the techniques outlined above , we obtain the following set of test - cases for ` sorted/1 ` , using _ block - count _ as the coverage criterion : ,[],yes / first - try}\rangle , & \langle\mbox{\tt [ [ 0]],[],yes / first - try}\rangle,\\ \langle\mbox{\tt [ [ 0,1]],[],yes / first - try}\rangle , & \langle\mbox{\tt [ [ 0,1,2]],[],yes / first - try}\rangle,\\ \langle\mbox{\tt [ [ 0,1,2,0\textbar\_]],[],no / first - try}\rangle , & \langle\mbox{\tt [ [ 0,1,0\textbar\_]],[],no / first - try}\rangle,\\ \langle\mbox{\tt [ [ 0,0\textbar\_]],[],no / first - try}\rangle \end{array}\right\}\ ] ] they correspond respectively ( reading by rows ) to the cps `` '' , `` '' , `` '' , `` '' , `` '' , `` '' , `` '' .they are indeed all the paths that can be followed with no more than recursive calls .this time the domain has been set up to .very recently , we proposed in a generic approach to tdg by pe which in principle can be used for any imperative language . however , applying this approach to tdg of a declarative language like prolog introduces some difficulties like the handling of failing derivations and of symbolic data . in this work, we have sketched solutions to overcome such difficulties . in particular , we have proposed a program transformation , based on pe , to make failure explicit in the prolog programs . to handle prolog s negation in the transformed programs ,we have outlined existing solutions that make it possible to turn the negative information into positive information .though our preliminary experiments already suggest that the approach can be very useful to generate test - cases for prolog , we plan to carry out a thorough practical assessment .this requires to cover additional prolog features like the module system , builtins like ` cut/0 ` , ` fail/0 ` , ` if/3 ` , etc . and also to compare the results with other tdg systems .we also want to study the integration of other kinds of coverage criteria like _ data - flow _ based criteria .finally , we would like to explore the use of static analyses in the context of tdg .for instance , the information inferred by a _ failure analysis _ can be very useful to prune some of the branches that our transformed programs have to consider .this work was funded in part by the information society technologies program of the european commission , future and emerging technologies under the ist-15905 _ mobius _ project , by the spanish ministry of education under the tin-2005 - 09207 _ merit _ project , and by the madrid regional government under the s-0505/tic/0407 _ promesas _ project .e. albert , m. gmez - zamalloa , and g. puebla .est data generation of bytecode by clp partial evaluation . in _18th international symposium on logic - based program synthesis and transformation ( lopstr08 ) _ , lncs .springer - verlag , july 2008 . to appear .f. degrave , t. schrijvers , and w. vanhoof .automatic generation of test inputs for mercury . in _18th international symposium on logic - based program synthesis and transformation ( lopstr08 ) _ , lncs .springer - verlag , 2008 . to appear .m. gmez - zamalloa , e. albert , and g. 
puebla .odular decompilation of low - level code by partial evaluation . in _8th international working conference on source code analysis and manipulation ( scam08)_. ieee computer society , september 2008 . to appear .kim s. henriksen and john p. gallagher . abstract interpretation of pic programs through logic programming . in _scam 06 : proceedings of the sixth ieee international workshop on source code analysis and manipulation _ , pages 184196 .ieee computer society , 2006 .g. luo , g. bochmann , b. sarikaya , and m. boyer .control - flow based testing of prolog programs . in _ in proc . of the 3rd international symposium on software reliability engineering _ ,pages 104113 , 1992 .m. mndez - lojo , j. navas , and m. hermenegildo .lexible ( c)lp - based approach to the analysis of object - oriented programs . in _17th international symposium on logic - based program synthesis and transformation ( lopstr07 ) _ , august 2007 .
in recent work , we have proposed an approach to test data generation ( tdg ) of imperative bytecode by _ partial evaluation _ ( pe ) of clp which consists in two phases : ( 1 ) the bytecode program is first transformed into an equivalent clp program by means of interpretive compilation by pe , ( 2 ) a second pe is performed in order to supervise the generation of test - cases by execution of the clp decompiled program . the main advantages of tdgby pe include flexibility to handle new coverage criteria , the possibility to obtain test - case generators and its simplicity to be implemented . the approach in principle can be directly applied for tdgof any imperative language . however , when one tries to apply it to a declarative language like prolog , we have found as a main difficulty the generation of test - cases which cover the more complex control flow of prolog . essentially , the problem is that an intrinsic feature of pe is that it only computes non - failing derivations while in tdgfor prolog it is essential to generate test - cases associated to failing computations . basically , we propose to transform the original prolog program into an equivalent prolog program with _ explicit failure _ by partially evaluating a prolog interpreter which captures failing derivations w.r.t . the input program . another issue that we discuss in the paper is that , while in the case of bytecode the underlying constraint domain only manipulates integers , in prolog it should properly handle the symbolic data manipulated by the program . the resulting scheme is of interest for bringing the advantages which are inherent in tdgby pe to the field of logic programming .
predicting the macroscopic properties of composite or porous materials with random microstructures is an important problem in a range of fields .there now exist large - scale computational methods for calculating the properties of composites given a digital representation of their microstructure ; eg .permeability , conductivity and elastic moduli .a critical problem is obtaining an accurate three - dimensional ( 3d ) description of this microstructure .for particular materials it may be possible to simulate microstructure formation from first principles .generally this relies on detailed knowledge of the physics and chemistry of the system , with accurate modeling of each material requiring a significant amount of research .three - dimensional models have also been directly reconstructed from samples by combining digitized serial sections obtained by scanning electron microscopy , or using the relatively new technique of x - ray microtomography . in the absence of sophisticated experimental facilities , or a sufficiently detailed description of the microstructure formation ( for computer simulation ) ,a third alternative is to employ a statistical model of the microstructure .this procedure has been termed `` statistical reconstruction '' since the statistical properties of the model are matched to those of a two - dimensional ( 2d ) image .statistical reconstruction is a promising method of producing 3d models , but there remain outstanding theoretical questions regarding its application .first , what is the most appropriate statistical information ( in a 2d image ) for reconstructing a 3d image , and second , is this information sufficient to produce a useful model ? in this paper we address these questions , and test the method against experimental data . modeling a composite and numerically estimating its macroscopic properties is a complex procedure .this could be avoided if accurate analytical structure - property relations could be theoretically or empirically obtained .many studies have focussed on this problem . in general , the results are reasonable for a particular class of composites or porous media . the self - consistent ( or effective medium )method of hill and budiansky and its generalization by christensen and lo is one of the most common for particulate media .no analogous results are available for non - particulate composites .a promising alternative to direct property prediction has been the development of analytical rigorous bounds ( reviewed by willis , hashin and torquato ) .there is a whole hierarchy of these bounds , each set tighter than the next , but depending on higher and higher order correlation functions of the microstructure .the original hashin and shtrikman bounds that have been widely used by experimentalists implicitly depend on the two - point correlation function of the microstructure , although the only quantities appearing in the formulas are the individual properties of each phase and their volume fractions . to go beyond these bounds to higher - order, more restrictive ( i.e. 
, narrower ) bounds , it is necessary that detailed information be known about the composite in the form of three - point or higher statistical correlation functions , which do appear explicitly in the relevant formulas .evaluation of even the three point function is a formidable task , so use of these bounds has in the past been restricted to composites with spherical inclusions .it is now possible to evaluate the bounds for non - particulate composites , and it is interesting to compare the results with experimental and numerical data . if the properties of each phase are not too dissimilar the bounds are quite restrictive and can be used for predictive purposes .sometimes experimental properties closely follow one or the other of the bounds , so that the upper or lower bound often provides a reasonable prediction of the actual property even when the phases have very different properties .it is useful to test this observation . in this studywe test a generalized version of quiblier s statistical reconstruction procedure on a well - characterized silver - tungsten composite .computational estimates of the young s moduli are compared to experimental measurements .the composite is bi - continuous ( both phases are macroscopically connected ) and therefore has a non - particulate character .as such the microstructure is broadly representative of that observed in open - cell foams ( such as aerogels ) , polymer blends , porous rocks , and cement - based materials . by comparing our computations of the moduli to the results of the self - consistent method we can test its utility for non - particulate media .an advantage of the reconstruction procedure we use is that it provides the statistical correlation functions necessary for evaluating the three - point bounds .comparison of the young s modulus to the bounds therefore allows us to determine the bounds range of application for predictive purposes .the two basic models we employ to describe two - phase composite microstructure are the overlapping sphere model and the level - cut gaussian random field ( grf ) model . in this sectionwe review the statistical properties of these models which are useful for reconstructing composites .the simplest , and most common , quantities used to characterize random microstructure are , the volume fraction of phase 1 , , the surface area to total volume ratio and , the two - point correlation function ( or /[p - p^2]$ ] the auto - correlation function ) . represents the probability that two points a distance apart lie in phase 1 .here we only consider isotropic materials where does not depend on direction . also note that and .realizations of the overlapping sphere model are generated by randomly placing spheres ( of radii ) into a matrix .the correlation function of the phase exterior to the spheres ( fraction ) is for and for where and the surface to volume ratio is . with modificationit is also possible to incorporate poly - dispersed and/or hollow spheres .the overlapping sphere model is the most well characterized of a wider class called boolean models , which have been recently reviewed by stoyan _et al . _ .the internal interfaces of a different class of composites can be modeled by the iso - surfaces ( or level - cuts ) of a stationary correlated gaussian random field ( grf ) ( so called because the value of the field at randomly chosen points in space is gaussian distributed ) . 
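the two - point function of the phase exterior to the spheres has a simple closed form for the boolean model : the probability that both points avoid every sphere is the exponential of minus the expected number of sphere centres falling in the union of the two exclusion spheres around them . the Python sketch below evaluates this expression and cross - checks it against a brute - force monte carlo realization ; the sphere radius and number density are illustrative values only , not parameters taken from the composite studied later .

....
import numpy as np

# Two-point correlation function S2(r) of the matrix phase (the phase
# exterior to the spheres) in the overlapping-sphere (Boolean) model.
# R and rho are illustrative values.

R, rho = 1.0, 0.06          # sphere radius and number density of centres

def s2_exact(r):
    """exp(-rho * volume of the union of two spheres a distance r apart)."""
    v1 = 4.0 / 3.0 * np.pi * R**3
    v_int = np.pi * (4*R + r) * (2*R - r)**2 / 12.0 if r < 2*R else 0.0
    return np.exp(-rho * (2*v1 - v_int))

def random_direction(rng):
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def s2_monte_carlo(r, L=20.0, n_pairs=20000, rng=np.random.default_rng(1)):
    """Brute-force check: drop Poisson spheres in a periodic box, sample pairs."""
    centres = rng.uniform(0, L, size=(rng.poisson(rho * L**3), 3))
    def in_matrix(p):
        d = np.abs(centres - p)
        d = np.minimum(d, L - d)            # periodic minimum-image distance
        return np.all(np.sum(d * d, axis=1) > R * R)
    hits = 0
    for _ in range(n_pairs):
        p = rng.uniform(0, L, size=3)
        q = (p + r * random_direction(rng)) % L
        if in_matrix(p) and in_matrix(q):
            hits += 1
    return hits / n_pairs

for r in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(f"r = {r:3.1f}   exact {s2_exact(r):.3f}   monte carlo {s2_monte_carlo(r):.3f}")
....

at coincident points the expression reduces to the matrix volume fraction , and beyond one sphere diameter it levels off at the square of that fraction , as it must for the boolean model .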
moreover ,if is fixed , the distribution over an ensemble will also be gaussian .correlations in the field are governed by the field - field correlation function which can be specified subject to certain constraints [ , .invariably is taken as unity .a useful general form for is the resulting field is characterized by a correlation length , domain scale and a cut - off scale .the cut - off scale is necessary to ensure as ; fractal iso - surfaces are generated if .there are many algorithmic methods of generating random fields .a straight forward method is to sum ( ) sinusoids with random phase and wave - vector where is a uniform deviate on and is uniformly distributed on a unit sphere .the magnitude of the wave vectors are distributed on with a probability ( spectral ) density .the density is related to by a fourier transform [ = .note that specifies an additional constraint on .although this formulation of a grf is intuitive , the fast fourier transform method is more efficient . following berk onecan define a composite with phase 1 occupying the region in space where and phase 2 occupying the remainder .the statistics of the material are completely determined by the specification of the level - cut parameters and the function ( or ) .the volume fraction of phase 1 is berk and teubner have shown that the two point correlation function is where .\end{aligned}\ ] ] the auxiliary variables and are needed below .the singularity at can be removed with the substitution .the specific surface is where with given by eqn .( [ modelg ] ) . many more models ( for which can be simply evaluated ) can be formed from the intersection and union sets of the overlapping sphere and level - cut grf models . herewe define a few representative models which have been shown to be applicable to composite and porous media .a normal model ( n ) corresponds to berk s formulation .models can also be formed from the intersection ( i ) and union ( u ) of two statistically identical level - cut grf s .another model , i , formed from the intersection of primary models , has also been found useful .the statistical properties ( , and ) of each model are given in terms of the properties of berk s model [ eqns .( [ gamma ] ) , ( [ h2 ] ) and ( [ hdash ] ) ] in table [ tabmods ] . .the volume fraction , two - point correlation function and surface to volume ratio of modelsn , i , u and i in terms of the properties [ , and of berk s two - level cut gaussian random field model . the formula is used for calculating the level - cut parameters ( see table [ tabab ] ) .[ tabmods ] [ cols="^,^,^,^,^",options="header " , ] for the purposes of computing the elastic properties of the model we maintain the length scale parameters ( , and ) and alter the level cut parameters ( and ) of the model such that =20% ( in accord with the experimental composite ) .the young s modulus , computed using the finite element method , is compared with the experimental data of umekawa _ et al . _ in fig .[ cf_young ] .for the temperature region below the melting point of silver the maximum error is 4% , a very good result . 
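the construction just described is easy to prototype numerically : a zero - mean , unit - variance gaussian field is synthesised as a sum of sinusoids with random phases and random directions , and phase 1 is taken wherever the field lies between the two level - cut parameters . in the sketch below the spectral density is collapsed onto a single shell |k| = k0 purely for illustration ; in practice the wavenumbers would be drawn from the spectral density corresponding to the chosen field - field correlation function .

....
import numpy as np

# Level-cut Gaussian random field: sum of sinusoids with random phase
# and direction, thresholded between alpha and beta.  The single-shell
# spectrum |k| = k0 is an illustrative choice only.

rng = np.random.default_rng(0)
N_waves, k0 = 256, 2.0 * np.pi        # number of sinusoids, dominant wavenumber

phases = rng.uniform(0.0, 2.0 * np.pi, N_waves)
dirs = rng.normal(size=(N_waves, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
kvecs = k0 * dirs                     # replace by draws from P(k) for a general g(r)

def field(points):
    """Unit-variance Gaussian random field at an (n, 3) array of points."""
    acc = np.zeros(len(points))
    for kv, ph in zip(kvecs, phases):
        acc += np.cos(points @ kv + ph)
    return np.sqrt(2.0 / N_waves) * acc

# digitise a small cube and apply the level cut
alpha, beta = -np.inf, 0.0            # one-sided cut at zero
side = np.linspace(0.0, 2.0, 64)
grid = np.stack(np.meshgrid(side, side, side, indexing="ij"), axis=-1)
y = field(grid.reshape(-1, 3)).reshape(64, 64, 64)
phase1 = (alpha <= y) & (y <= beta)

print("volume fraction of phase 1:", phase1.mean())
....

for the one - sided cut at zero the measured volume fraction comes out close to one half ( up to finite - box fluctuations ) , consistent with the error - function relation between the cut levels and the volume fraction of a unit - variance gaussian field .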
above the melting point of silver , when the silver phase is taken to have a zero bulk and shear modulus , the error is only 3% .the agreement may actually be better than that , however .since the elastic measurements were dynamic measurements , the liquid silver can be considered as being trapped on the time scale of the experimental measurement , before any significant flow could take place , and so could still contribute to the effective moduli via its non - zero liquid bulk modulus . just before melting, the silver had a bulk modulus of about 35gpa .if we take the bulk modulus to be somewhat lower , in analogy to the ice - water difference around 0 , then a bulk modulus of 23.1gpa causes the n ( =0 ) model to agree perfectly with experiment at temperature points above the melting point of silver . the bounds are also shown in fig .[ cf_young ] . for model n ( =0 )the microstructure parameters are and .the results bound the experimental data and provide a reasonable prediction of the young s modulus below the melting point of silver . note that even if the silver phase is given a non - zero bulk modulus past its melting point , the zero shear modulus causes the lower bounds for shear modulus and therefore young s modulus to be identically zero .unfortunately , there was no reported poisson s ratio results for the composite , so we can not compare to the model results for this quantity .we now compare the different analytical predictions of effective moduli with the finite element data .we have chosen to study the two most common models and only consider the moduli appropriate for the w - ag composite studied above .the results are shown in fig . [cf_thy](a ) for overlapping spheres and in fig .[ cf_thy](b ) for the single level - cut grf or excursion set of quiblier [ model n(=0 ) ] .the self - consistent method provides a very good estimate of for model n(=0 ) , but not for overlapping spheres .this might be expected because the n(=0 ) model is symmetric with respect to phase - interchange ( like the scm ) while the overlapping sphere model is not . as stated above the application of the generalizedscm is difficult because it is not obvious which phase should be chosen as the ` inclusion ' phase . for the overlapping sphere modelthe tungsten phase is comprised of spheres ( at 80% volume fraction ) , so is the more likely choice for the inclusion phase .nevertheless , we report both estimates ( 80% w inclusions and 20% ag matrix or 20% ag inclusions and 80% w matrix ) for both models . for either choice, the gscm fails to provide an accurate estimate .indeed , above the melting point of silver the gscm vanishes for the case 20% ag matrix case since the matrix phase is now completely soft . for the overlapping sphere model the beran , molyneux ,milton and phan thien bounds are calculated using the microstructure parameters , .below the melting point of silver ( where the contrast between the phases is moderate ) the upper bounds provide a very good estimate of the effective moduli . 
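as a simpler point of reference it is instructive to evaluate the two - point hashin - shtrikman bounds mentioned earlier , which involve only the volume fractions and the phase moduli . the sketch below does this for a 20/80 silver - tungsten mixture ; the bulk and shear moduli are approximate room - temperature handbook values ( in gpa ) , not the temperature - dependent data used in the comparison above , so the numbers are purely illustrative .

....
# Two-point Hashin-Shtrikman bounds for an isotropic two-phase
# composite.  Phase moduli below are approximate room-temperature
# values for silver (phase 1) and tungsten (phase 2), in GPa.

def hs_bounds(K1, G1, K2, G2, f1):
    """Bounds on (K, G); phase 2 is assumed to be the stiffer phase."""
    f2 = 1.0 - f1
    K_lo = K1 + f2 / (1.0/(K2 - K1) + 3.0*f1/(3*K1 + 4*G1))
    K_hi = K2 + f1 / (1.0/(K1 - K2) + 3.0*f2/(3*K2 + 4*G2))
    G_lo = G1 + f2 / (1.0/(G2 - G1) + 6.0*f1*(K1 + 2*G1)/(5*G1*(3*K1 + 4*G1)))
    G_hi = G2 + f1 / (1.0/(G1 - G2) + 6.0*f2*(K2 + 2*G2)/(5*G2*(3*K2 + 4*G2)))
    return (K_lo, K_hi), (G_lo, G_hi)

def young(K, G):
    return 9.0 * K * G / (3.0 * K + G)

(K_lo, K_hi), (G_lo, G_hi) = hs_bounds(K1=100.0, G1=30.0,     # silver, 20%
                                        K2=310.0, G2=160.0,   # tungsten, 80%
                                        f1=0.20)
print(f"K bounds: {K_lo:.0f} - {K_hi:.0f} GPa")
print(f"G bounds: {G_lo:.0f} - {G_hi:.0f} GPa")
print(f"E bounds: {young(K_lo, G_lo):.0f} - {young(K_hi, G_hi):.0f} GPa")
....

since the young's modulus is an increasing function of both the bulk and the shear modulus , combining the lower ( upper ) bounds on k and g gives a valid , though not optimal , lower ( upper ) bound on e ; the tighter three - point bounds used above require in addition the microstructure parameters obtained from the three - point correlation function .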
a brief discussion of the effect of elastic contrast is necessary here .we have already noted that the analytical predictions of effective moduli do not explicitly depend on microstructure , but have a built - in " microstructure .the elastic contrast , the ratio between the phase moduli , will determine how sensitive the effective moduli actually are to microstructure .for example , in the case of a two - phase composite having equal shear moduli but different bulk moduli , there is a simple exact formula for the effective bulk modulus which is totally insensitive to microstructure . in the case of small contrast , the effective moduli can be expressed exactly as a power series in the moduli differences . up to second order in this difference , at any volume fraction, the coefficients of the power series are not dependent on anything but the volume fractions and the individual phase properties .therefore at small contrast , analytical predictions of effective moduli that explicitly depend only on volume fractions and phase moduli should all work well .( a ) ( b )throughout this paper , we have treated the finite element computation method as being perfectly accurate , so that comparisons of elastic results to experiment were solely a test of how well the reconstructed microstructure compared to the real microstructure .this is not exactly true , since there are numerical errors in the finite element method .these are small , however , and are generally of about the same size or less than the differences seen between model computations and experimental data for the elastic moduli .there are also statistical sampling errors associated with the finite size ( m ) of the models we employ to estimate the elastic properties .since this is much greater than the correlation length of the samples ( m - see fig .[ cf_p2 ] ) we again assume these errors to be small . therefore , the good agreement between model prediction and experimental data seen in this paper is good evidence that the model considered is indeed capturing the main aspects of the experimental microstructure .we have compared various theoretical results to finite element computations of the effective young s modulus for non - particulate media : a w - ag composite and two model media ( overlapping spheres and a single - cut gaussian random field ) .the generalized self - consistent method ( derived for particulate composites ) did not provide a good estimate of for the bi - continuous materials considered here . the standard self - consistent method provided a good estimate for the single - cut grf and w - ag composite .since the method predicts zero moduli for porosity above 50% but the solid phase of the single - cut grf remains connected up to porosities of around 90% such agreement can not be general .upper bounds , calculated using three - point statistical correlation functions , provided a good prediction at low contrast ( / 6 ) for each composite .when one of the phases was completely soft the bounds lost predictive value . therefore , for general composites , it is important to employ numerical computations of the effective moduli . 
for accurate numerical prediction of composite propertiesit is important that a realistic model be used .model - based statistical reconstruction , based on the joshi - quiblier - adler approach , appears to be a viable route for microstructural simulation .however , it is important that the models underlying the procedure be capable of mimicking the composite microstructure .we have shown how several different models can be employed to find a useful reconstruction .thanks the australian - american educational foundation ( fulbright commission ) for financial support and the department of civil engineering and operations research at princeton university where this work was completed .we also thank the partnership for high - performance concrete program of the national institute of standards and technology for partial support of this work .
we statistically reconstruct a three - dimensional model of a tungsten - silver composite from an experimental two - dimensional image . the effective young s modulus ( ) of the model is computed in the temperature range 251060 using a finite element method . the results are in good agreement with experimental data . as a test case , we have reconstructed the microstructure and computed the moduli of the overlapping sphere model . the reconstructed and overlapping sphere models are examples of bi - continuous ( non - particulate ) media . the computed moduli of the models are not generally in good agreement with the predictions of the self - consistent method . we have also evaluated three - point variational bounds on the young s moduli of the models using the results of beran , molyneux , milton and phan thien . the measured data were close to the upper bound if the properties of the two phases were similar ( ) . = -27.0 mm = -30 mm * keywords : * structure - property relationships ; microstructures ( a ) ; inhomogeneous material ( b ) ; finite elements ( c ) ; probability and statistics ( c ) .
in human societies social life consists of the flow and exchange of norms , values , ideas , goods as well as other social and cultural resources , which are channeled through a network of interconnections . in all the social relations between people_ trust _ is a fundamental component , such that the quality of the dyadic relationships reflects the level of trust between them . from the personal perspective social networkscan be considered structured in a series of layers whose sizes are determined by person s cognitive constraints and frequency and quality of interactions , which in turn correlate closely with the level of trust that the dyad of individuals share . as one moves from the inner to the outer layers of an individual s social network , emotional closeness diminishes , as does trust . despite its key role in economics , sociology , and social psychology ,the detailed psychological and social mechanisms that underpin trust remain open . in order to provide a systematic framework to understand the role of trust, one needs to create metrics or quantifiable measures as well as models for describing plausible mechanisms producing complex emergent effects due to social interactions of the people in an interconnected societal structure .one example of such social interaction phenomena , in which trust plays an important role , is trading between buyers and sellers .such an economic process is influenced by many apparently disconnected factors , which make it challenging to devise a model that takes them into account .therefore , models that have been proposed , necessarily select a subset of factors considered important for the phenomena to be described .for instance , there are studies of income and wealth distribution , using gas like models , life - cycle models , game models , and so on . for a review of various agent based models we refer to .in addition , we note that detailed studies of empirical data and analysis of the distribution functions seem to lend strong support in favour of gas - like models for describing economic trading exchanges . in order to consider the role of trust in trading relations we focus on the simplest possible situation in which trust clearly plays a definite role .this is the case of trading goods or services for money through dyadic interactions or exchange , which takes place either as a directional flow of resources from one individual to another individual or vice versa .when an agent is buying , trust plays a role , as people prefer to buy from a reliable and reputable selling agent , i.e. agent they trust .it should be noted that the dyadic relationship does not have to be symmetric , i.e. a seller does not need to trust the buyer .a key ingredient in the trading interactions is profit that an agent makes when providing goods or services , and it can realistically be assumed that a seller wants to put the highest possible price to its goods , while the buyer tends to perform operations with agents offering a low price . in this studywe propose an agent based `` gas - like '' model to take into account the above mentioned important features of trading .the model describes dyadic transactions between agents in a random network .the amount of goods and money are considered conserved in time , but the price of goods and trust , we measure as reputation , vary according to the specific situation in which trade is made . in section [ model ]we describe the model and set up the dynamic equations of the system . 
in section [ results ]we present the results of extensive numerical calculations and explore their dependence on the parameters of the model . herewe also compare our numerical results with available real data and discuss the predictions of the model as well as possible extensions to it .finally , in section [ conclusion ] we conclude by making remarks concerning the role of trust in trade and social relations .first we introduce the basic model , which describes the dynamic development of a random network of agents such that the state of agent is defined by two time - dependent state variables , , where stands for the amount of money and for the amount of goods or services .the pairwise connectivities between agents in the network are described by adjacency matrix .it is necessary to distinguish the direction of the flow of goods and money in the network , since agent could buy from agent , or vice versa . at time we define two symmetric matrices , and , with an average of random entries per row , for the flow of money or goods , respectively .then the adjacency matrix is simply , and stands for the mean degree .the elements of and are defined as the normalised probabilities of transactions per unit time and , respectively and they could become asymmetric .these matrices represent the buying or selling transactions , according to the individual agent instantaneous situation .the dynamic equations for the state variables ( money ) and ( goods ) initialised randomly ] .the appropriate step size for convergence was found to be , reached within 200000 iterations . at time set randomly using a flat distribution with the mean and the width , and set a time scale and . for this system reaches dynamical equilibrium within the running time .the diagonal elements of the trading matrices and were set to zero . in fig .[ fig1 ] we depict the structure of the final network by showing only the active links , where the height of the 3d plot is stratified according to the wealth of the agent , i.e. . hereit is seen that there are poor agents and only a few very rich ones .in addition we observed in numerous calculations that if the width of the distribution of prices is increased , the rich become richer and the number of poor agents increases . and , red and , and blue and .the colour of the nodes is chosen according to the wealth ( such that light green , red , dark green , blue , grey , black are in decreasing order of .,width=226 ] in fig .[ fig2 ] we show the time histories of the variables for the typical calculation of the previous figure .note that most agents have converged to certain amounts of money and goods , but the ones who posses more wealth are prevented from taking it all , which is an effect of trust . also observe oscillations of prices and apparently chaotic behaviour of trust . with the chosen value of ( reluctance for the agent to lower the prices ) the average priceis maintained around a constant value , but with increasing the average price diminishes , because one is less reluctant to lower its prices .if all prices are set to the same value , the dynamical behaviour of turns out to be less chaotic , and shows less variations . in fig .[ fig3 ] the behaviour of the average reputation of sellers and price of goods are depicted .here we see that the mean value of reputation of agents decreases with time , which reflects the fact that there are less successful transactions , while the reputation of some selected individuals increases with time . 
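because the update equations themselves are not reproduced legibly above , the following Python sketch should be read only as a schematic re - implementation of the kind of dynamics described : money and goods are conserved and exchanged in dyadic transactions on a random network , buyers prefer trusted and cheap sellers , and successful transactions raise the seller's reputation and price while failed ones lower them . the network size , step sizes , initial distributions and the precise functional form of the update rules are all assumptions made for illustration , not the model of the paper .

....
import numpy as np

# Schematic conserved money/goods exchange on a random network with
# price and trust (reputation) variables.  All update rules below are
# illustrative assumptions.

rng = np.random.default_rng(0)
N, mean_degree, steps = 200, 6, 50_000
money = rng.uniform(0.5, 1.5, N)
goods = rng.uniform(0.5, 1.5, N)
price = rng.uniform(0.8, 1.2, N)              # each agent's asking price
trust = np.full((N, N), 0.5)                  # trust[i, j]: i's trust in seller j
adj = rng.random((N, N)) < mean_degree / N
adj = np.triu(adj, 1)
adj = adj | adj.T                             # symmetric random network

eps, eta = 0.01, 0.05                         # traded quantity, adjustment step
m0, g0 = money.sum(), goods.sum()

for _ in range(steps):
    buyer = rng.integers(N)
    sellers = np.flatnonzero(adj[buyer])
    if sellers.size == 0:
        continue
    w = trust[buyer, sellers] / price[sellers]          # prefer trusted, cheap sellers
    seller = rng.choice(sellers, p=w / w.sum())
    cost = eps * price[seller]
    if money[buyer] >= cost and goods[seller] >= eps:   # successful transaction
        money[buyer] -= cost
        money[seller] += cost
        goods[seller] -= eps
        goods[buyer] += eps
        trust[buyer, seller] += eta * (1.0 - trust[buyer, seller])
        price[seller] *= 1.0 + eta                      # successful seller raises price
    else:                                               # failed transaction
        trust[buyer, seller] *= 1.0 - eta
        price[seller] *= 1.0 - eta                      # reluctantly lowers price

wealth = money + goods * price
print("drift in total money and goods:", money.sum() - m0, goods.sum() - g0)
print("wealth share of the richest 10%:",
      np.sort(wealth)[-N // 10:].sum() / wealth.sum())
....

even this crude version exhibits the basic bookkeeping of the model : the totals of money and goods stay constant up to floating - point roundoff , while the price and trust variables evolve and the wealth share of the richest agents can be monitored directly .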
in order to assess the effect of including trust in the transactions, we made a calculation in which no trust variables were included , that is and , , and another calculation including the dynamics of the trust variables . in fig .[ fig4 ] we show a representation of the network as in fig . [ fig1 ] and a 2d graph of the two calculations .here we observe that trust has two main effects : 1 ) it reinforces the trade network , the number of active links is increased noticeably , while in the calculation without trust ( indiscriminate trading ) the network is dismembered in isolated subgraphs with very few active links .2 ) the wealth distribution between agents turns out to be more fair , in the sense that there are less poor people and quite a large proportion in the middle class .the backbone network , in which active trading took place , is noticeably larger and with more `` black '' or active interactions ( and ) , the dramatic effect of trust is : a disconnected trading network without trust becomes a robust and connected network when trust is affecting agents in deciding transactions .we have also observed that in the calculations with trust the state variables remain positive or zero , while in the calculations without trust a small number of agents have negative values .this means that these agents are not only poor but in debt but the number of such agents is very small , such that on average there are 160 agents with negative values in a population of 5000 agents after 200000 time steps . as one of our main research fociwe compare the distribution of wealth calculated from our model with that of the actual statistical data . for comparisonwe depict in fig .[ fig5a ] the histograms of the actual wealth distribution in the usa ( in blue ) , together with our results from 10 realisations of the network with 500 agents , without and with trust included .it should be noted that our model predicts a concentration of wealth in the hands of few agents , regardless of the role of trust , although the concentration of such agents is much less pronounced than what we see in reality . in the calculation including trust the distribution amongst middle and bottom percentages agree quantitatively with the data , but we are not able to find a good match with data for the upper 40 of the wealth .this could be due to the small size and randomness of our model network , nevertheless the trend is already noticeable . and the spread . ] in ref . one finds an interesting exercise in which people are asked to construct wealth distribution that they consider ideal , and also an estimated distribution , based on their information , and these data are compared with actual data for wealth distribution in the usa . in fig .[ fig5b ] we compare the published results with our calculations . herewe find that our calculation results seem to compare very well with the ideal distribution when trust is included and no spread of prices is allowed .this situation probably represents a country that has strict control of unique prices for goods , set by the government or some monopoly .the estimated distribution is well reproduced with a rather small spread of prices and having trust included . however , the actual situation is somewhat disappointing , since the best fit is with a large spread of prices . 
in the bottom right panelwe show the situation in which the network agents lack trust in their transactions thus rendering the outcome unrealistic .the situation is quite different for such egalitarian societies as denmark , in which case we have found that for very small spread of prices and including trust our model agrees extremely well with the actual data ( taken from ) of the income distribution for the 1992 statistics . using data from the money variableonly , we show the comparison in fig .[ fig6 ] , which turns out to be very favourable . .] another maybe better way to compare our results with real statistical data is to investigate the distribution of money in different systems , since it depends on the actual mechanisms of acquiring money .for instance , there are data on the annual income of people in europe , which ranges from 0 to millions of euros . in fig .[ fig7](a ) we compare the actual data ( in red ) with a histogram from our numerical calculation with 1000 agents but without including trust . in fig .[ fig7](b ) we compare the same data with the results of a calculation with 250 agents and a dispersion of prices twice as large as in ( a ) but once again without including trust .both these calculations do not seem to fit with the real data , neither do averages over many realisations .+ however , a rather different situation is encountered in a closed system as we have found out when we investigated data from the annual salaries that all the players in the nfl earned in two different years . in fig .[ fig7](c ) we show in red the distribution of salaries in 1998 and compare it with a numerical result without including trust . in fig . [ fig7](d ) we show the numerical results for the same system including trust and compare them with the nlf data from 2011 . herewe can observe that both distributions fit the data fairly well , which allows us to think that trust does not play much of a role in the mechanisms of deciding salaries in a system like nfl , in which a few of the star players began to receive exaggeratedly good salaries . in order to investigate such a situation in more detailwe have adopted the approach presented by jun - ichi inoue et al . , in which they define indices to measure social inequality in various fields , including income and trading . in fig .[ fig8 ] we present the inequality for our model calculations .we see that these lorentz inequality curves vary quite sensitively with the spread of prices of the goods .we also see that the real situation presents itself as rather unequal such that for the same spread of prices , the lack of trust generates more inequity .for the curve tagged with an arrow we found and , which is very similar to the indexes calculated for usa : and .we have also detected that the results do not vary much when the size of the network is increased to 1000 .as for the dynamical behaviour of trading there are data available for the wealth share of various countries . 
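the inequality measures referred to above are straightforward to reproduce . the sketch below computes the lorenz curve , the gini index and the top 1 , 5 and 10% wealth shares from an array of agent wealths ; the pareto - distributed sample is only a stand - in for the wealth vector produced at the end of a model run .

....
import numpy as np

# Lorenz curve, Gini index and top-percentile wealth shares from a
# vector of agent wealths.  The Pareto sample is an illustrative
# stand-in for the model output.

def lorenz(wealth):
    """Cumulative wealth share versus cumulative population share."""
    w = np.sort(np.asarray(wealth, dtype=float))
    return np.insert(np.cumsum(w) / w.sum(), 0, 0.0)

def gini(wealth):
    w = np.sort(np.asarray(wealth, dtype=float))
    n = w.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * w) / (n * w.sum()) - (n + 1.0) / n

def top_share(wealth, fraction):
    w = np.sort(np.asarray(wealth, dtype=float))
    k = max(1, int(round(fraction * w.size)))
    return w[-k:].sum() / w.sum()

rng = np.random.default_rng(0)
wealth = rng.pareto(2.0, size=5000) + 1.0     # heavy-tailed illustrative sample

L = lorenz(wealth)
print(f"Lorenz curve at half the population: {L[wealth.size // 2]:.2f}")
print(f"Gini index: {gini(wealth):.2f}")
for f in (0.01, 0.05, 0.10):
    print(f"top {f:.0%} wealth share: {top_share(wealth, f):.2f}")
....

replacing the synthetic sample by the wealth vector of the simulated agents gives directly the model quantities that are compared with the usa indices quoted above and with the country income shares discussed next .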
in fig .[ fig9 ] we show the top 1 , 5 , and 10% countries income share , and compare it with the numerical calculations for a network size of 500 agents .all the calculations were set to run up to 200000 iterations , of which only half of them were selected for the comparisons .the parameters of the model and were varied to find the best fit .for all the cases a value of in the model calculation is found to fit well with the data , except in the case of italy , where we chose for the best fit .it is interesting to notice that the case of china is the only one that fits better with the 100000 initial iterations of the calculation , and the dispersion of prices is smaller ( ) .all the other countries are best fitted once the variables attain a final distribution , which occurs during the last 100000 iterations .this could reflect the fact that china is a newly emergent economic power and that their rules of trading are tighter . for japan and australiathe dispersion of prices is larger than for china ( ) .this could be interpreted such that these countries have more free trading rules , in which case one could vary prices more widely without loosing competitiveness . for developed countries with long history in economic traditions, is quite large , probably reflecting the influence of many strategies to allow prices to vary without the loss of competitiveness .for instance the dispersion of gas prices nationally in the usa ranges from 1.31 to 2.37 dollars per gallon , and the prices are unevenly distributed geographically .observe that usa and canada are practicable indistinguishable , which is to be expected as they are similar and tightly linked .the case of italy , the only european country selected , is interesting , since it is the only one in which a good fit is obtained by increasing the time scale for price changes .this seems to suggest that in italy and similar european countries prices tend to change more slowly than in the very dynamic american economies .interestingly , the fit for the 1% top population is not as good as the others , meaning that our model predicts less extremely rich people than in reality there is .the extreme social and economic inequality of the present world is most likely due to the fact that the economic alliances are not random ( as in our model ) and impose some trading preferences , other than the ones considered in our model .also , our approximation of a conserved system result in a constraint on the amount of goods or money an individual could gather , and this constraint is not present in the actual economic picture , in which money could be printed and goods could be produced and destroyed . in the real world countries can not be considered as closed systems , although the global economy could be considered as such .in conclusion our agent - based trading model gives rise to results that overall seem to compare very favourably with the findings from the real data , even though the model takes into account only a subset of the known factors affecting trading .one striking result is the effect of trust in trading relations .first of all , it was found that trust reinforces trading transactions in such a way that the network with active links remains fully connected when trust is included , and becomes a set of disconnected graphs if it is not .secondly , trust helps to make society more even and it is seen that the distribution of wealth is fairer when trust plays a role in trading . 
if trust is not included then the society seems to have a number of poor people in debt , unlike in a society with trust .one important conclusion of this work is that even in the simplified case of having a conserved system , agreement with real data on the distribution of wealth is not only possible but also quite good .this could be interpreted indicating that including the production and deterioration of goods and money , which is essential to the idea of creation of wealth , does not seem to be a fundamental property of the economy in general .furthermore , as far as the distribution of wealth is concerned , the fundamental issue seems to be the spread of prices , rather than the production of wealth .it also turns out that our model predicts inequality in a closed economy without production from simple rules of buying and selling between agents , which illustrates the fact that inequality can arise naturally in a rudimentary economy .it should be noted that contrary to the general trend in classical economic models to consider various representative classes of agents ( consumers or producers ) we have here considered a single class of heterogeneous trading agents .the behaviour emerging from such economy is an aggregate of individual decisions .hence it seems that an unbalanced economy emerges from particular decisions of individuals .it is also important to recognise that the structure of real trading networks is more similar to a scale free networks than to random networks , which stays fixed during the dynamic trading process . in order to test the effect of topologically changing network structure we have introduced a rewiring scheme in which the agents whose connections are not workingare deleted from the network , and new agents are added following a scale free method . as a resultwe have found that the final network structure evolved after a long run of the dynamics resembling roughly a scale free network .it is found that the structure of the final network still depends very much on the trust variable , namely if for all agents , then agents that become `` hubs '' at a certain time are very likely to be deleted , and eventually the network is fragmented into small isolated trading groups . on the other hand , if one includes in the dynamics , the hubs that are formed remain trading and the network , although changing , remains cohesive .the network that one obtains after many rewiring processes has always all the links working perfectly ( black lines ) .however , the wealth distribution in the network is very similar to the ones reported here without rewiring , and thus we decided to leave the detailed study of the rewiring problem for the future , since the above - described rewiring scheme breaks the conservation of wealth , which is an essential assumption in our current model .our justification to do so is that in the calculations shown here the time span for transactions is not long enough to rewire the network .+ as the main message of this work we would like to suggest that in trade the prices seem to be the main cause of impoverishment. wide spread of prices tends to augment the differences between poor and rich people , and the results seem much more sensitive to a change in the allowed spread of prices than to the average price .also we conclude that trust seems to be important in regulating trading , since it is a way in which agents decide to trade preferentially amongst the agents they are linked with . 
in indiscriminate transactions without including trust , the network turns out to be disrupted , while with trust few links are always reinforced . in this way trust could be considered to favour the appearance of monopolies , since the agents that have a good history seem better off regardless of the high prices and quality of goods , and are thus in position of engulfing small traders . as a final remarkwe could conclude that our simple model conserving the amount of goods and money is able to reproduce some salient features found in trading networks , with the additional advantage that we could fairly easily add new features to the model and analyse them in depth .rb , erg , and tg would like to acknowledge financial support from conacyt ( mexico ) through project 179616 and kk from academy of finland through project 276439 .we are grateful to prof .larissa adler , whose original ideas were the primary source of inspiration for this work .we are also grateful to prof .juan m. hernndez for careful reading of the manuscript and valuable comments and criticisms .chakraborti a , challet d , chatterjee a , marsili m , zhang y , chakrabarti bk , _ statistical mechanics of competitive resource allocation using agent - based models _ , physics reports , * 552 * , 1 - 25 ( 2015 ) , issn 0370 - 1573 , doi:10.1016/j.physrep.2014.09.006 inoue j , ghosh a , chatterjee a , chakrabarti bk , _ measuring social inequality with quantitative methodology : analytical estimates and empirical data analysis by gini and indices _ , physica a * 429(1 ) * 184 - 204 ( 2015 ) issn 0378 - 4371 , doi:10.1016/j.physa.2015.01.082
we present a simple dynamical model for describing trading interactions between agents in a social network by considering only two dynamical variables , namely money and goods or services , that are assumed conserved over the whole time span of the agents trading transactions . a key feature of the model is that agent - to - agent transactions are governed by the price in units of money per goods , which is dynamically changing , and by a trust variable , which is related to the trading history of each agent . all agents are able to sell or buy , and the decision to do either has to do with the level of trust the buyer has in the seller , the price of the goods and the amount of money and goods at the disposal of the buyer . here we show the results of extensive numerical calculations under various initial conditions in a random network of agents and compare the results with the available related data . in most cases the agreement between the model results and real data turns out to be fairly good , which allow us to draw some general conclusions as how different trading strategies could affect the distribution of wealth in different kinds of societies . + : social networks , agent - based model , wealth distribution , nonlinear dynamical systems , price effects , trust , reputation +
observations of fluctuations in the temperature of the cosmic microwave background ( cmb ) are now providing us with a direct view of the primordial inhomogeneities in the universe .the power spectrum of temperature fluctuations yields a wealth of information on the nature of the primordial perturbations , and the values of the cosmological parameters .mapping the polarization of the cosmic microwave sky is an important next step , offering a great deal of complementary information , especially regarding the character of the primordial inhomogeneities .one of the most interesting questions to resolve is whether the primordial perturbations possessed a tensor ( gravitational wave ) component , as predicted by simple inflationary models . here ,polarization measurements offer a unique probe .polarization of the cosmic microwave sky is produced by electron scattering , as photons decouple from the primordial plasma .linear polarization is produced when there is a quadrupole component to the flux of photons incident on a scattering electron .scalar ( density ) perturbations generate an ` electric ' ( gradient ) polarization pattern on the sky due to gradients in the velocity field on the surface of last scattering . for scalar perturbationsthe velocity field is curl - free , and this leads directly to the production of an entirely ` electric ' pattern of linear polarization .in contrast , tensor perturbations ( gravitational waves ) produce polarization by anisotropic redshifting of the energy of photons through decoupling . in this casethe polarization has ` magnetic ' ( i.e. curl ) and ` electric ' ( i.e. gradient ) components at a comparable level .a magnetic signal can also be produced by weak lensing of the electric polarization generated by scalar modes .detection and analysis of the lensing signal would be interesting in itself , but a detection of an additional tensor component would provide strong evidence for the presence of primordial gravitational waves , a generic signature of simple inflationary models . detecting or excluding a magnetic component is clearly of fundamental significance in cosmology .but there is a significant obstacle to be faced .the problem is that for the foreseeable future , the primordial sky polarization will only be observable over the region of the sky which is not contaminated by emission from our galaxy and other foreground sources of polarization .thus we shall only be able to measure the polarization over a fraction of the sky .but the electric / magnetic decomposition is inherently _ non - local _ , and _ non - unique _ in the presence of boundaries . to understand this ,consider the analogous problem of representing a vector field ( in two dimensions ) as a gradient plus a curl : the electric and magnetic components respectively . from this equation, one has , and . 
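the two - dimensional analogy can be made concrete with a short numerical experiment : on a periodic patch , which plays the role of the boundary - free full sky , the split of a vector field into its gradient ( electric ) and curl ( magnetic ) parts is unique , and in fourier space it amounts to projecting each mode onto the direction of its wavevector . the potentials used below are arbitrary illustrative functions .

....
import numpy as np

# Unique gradient/curl split of a periodic 2D vector field, done by
# projecting each Fourier mode onto its wavevector.  The potentials
# phi and psi are arbitrary illustrative choices.

n, L = 128, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")

def grad(f):
    fk = np.fft.fft2(f)
    return np.fft.ifft2(1j * KX * fk).real, np.fft.ifft2(1j * KY * fk).real

phi = np.cos(3 * X) * np.sin(2 * Y)           # "electric" potential
psi = np.sin(X) * np.sin(4 * Y)               # "magnetic" potential
gx, gy = grad(phi)
cx, cy = grad(psi)
vx, vy = gx - cy, gy + cx                     # v = grad(phi) + curl(psi)

# project v(k) onto k to recover the gradient part; the remainder is the curl part
vkx, vky = np.fft.fft2(vx), np.fft.fft2(vy)
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                # the k = 0 mode carries no E/B information
par = (vkx * KX + vky * KY) / k2
ex, ey = np.fft.ifft2(par * KX).real, np.fft.ifft2(par * KY).real
bx, by = vx - ex, vy - ey

print("gradient part recovered:", np.allclose(ex, gx), np.allclose(ey, gy))
print("curl part recovered    :", np.allclose(bx, -cy), np.allclose(by, cx))
....

the same projection applied to a field that is known only on a sub - region of the patch is no longer unique , which is precisely the ambiguity at issue for a cut sky .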
for a manifold without a boundary , like the full sky, the laplacian may be inverted up to a constant zero mode , and the two contributions to are uniquely determined .but for a finite patch , one can always think of adding charged sources for the potentials and outside of the patch on which is measured , which alter and without changing .for example one can add to and pieces with equal but perpendicular gradients so there is no net contribution to .since full sky observations are unrealistic , so is the hope of a unique decomposition of the sky polarization into electric and magnetic components .however , this does not at all mean that the hypothesis of a magnetic signal can not be tested .one possibility is to construct a local measure of the magnetic signal by differentiating the measured polarization ( i.e. vanishes if is pure electric in the analogue example above ) , but this is problematic for noisy , sampled data .a more promising alternative , which avoids differentiating the data , is to construct line integrals of the polarization .for example , in the vector analogy above , any line integral is guaranteed to vanish if is purely electric . however, the problem with these line integrals is that there are an infinite number of them , and they are not statistically independent .one would therefore prefer a set of ` magnetic ' variables to which the ` electric ' component does not contribute , but which are finite in number and statistically independent , for a rotationally symmetric statistical ensemble . since polarization from a primordial scale invariant spectrum of gravitational wavesis predominantly generated on scales of a degree or so ( the angle subtended by the horizon at last scattering ) , we would expect to be able to characterize the cosmic magnetic signal by a set of statistically independent variables roughly equal in number to the size of the patch in square degrees. however the signal within a degree or so of the boundary can not be unambiguously characterized as magnetic , and hence one loses a number of variables proportional to the length of the boundary .the amount of information about the magnetic signal therefore increases as the patch area minus the area of this boundary layer . in this paper we shall find the set of observable ` magnetic ' variables explicitly for circular sky patches : the method may be generalized to non - circular patches if necessary . as mentioned above , the electric component of the polarization ( due primarily to scalar perturbations ) is expected to be much larger than the magnetic signal . therefore to detect the latter it may be useful to construct observables which suffer no electric contamination . we show how to construct such variables , and use them to estimate what magnitude of magnetic signal the planned planck satellite might be able to detect .we also discuss the optimal survey size for future experiments aimed at detecting tensor modes via magnetic polarization , including the effects of ` magnetic noise ' due to weak lensing of the dominant electric polarization . even for observations that do not expect to detect the magnetic signal the magnetic - only observables are likely to be very useful in checking consistency of any residual polarization with noise or indeed in identifying foreground contamination .they may also be useful for studying the small scale weak lensing signal . 
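the closed - loop test mentioned above is equally simple to check numerically : the line integral of a purely electric ( gradient ) field around any closed curve vanishes , whereas a magnetic ( curl ) component generally contributes the enclosed `` magnetic charge '' . the two fields below are arbitrary illustrative choices .

....
import numpy as np

# Line integral of a 2D vector field around a circle.  A pure gradient
# field gives zero for every closed loop; a curl component does not.

def loop_integral(vfield, centre, radius, n=2000):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = centre[0] + radius * np.cos(t)
    y = centre[1] + radius * np.sin(t)
    vx, vy = vfield(x, y)
    dlx, dly = -radius * np.sin(t), radius * np.cos(t)   # dl = (dx/dt, dy/dt) dt
    return np.sum(vx * dlx + vy * dly) * (2.0 * np.pi / n)

grad_field = lambda x, y: (2.0 * x * y, x**2)            # gradient of phi = x^2 y
curl_field = lambda x, y: (-y, x)                        # purely "magnetic" field

print("electric-only loop integral :", loop_integral(grad_field, (0.3, -0.2), 1.0))
print("with a magnetic component   :", loop_integral(curl_field, (0.3, -0.2), 1.0))
....

the first value is zero to machine precision for any choice of centre and radius , while the second equals twice the enclosed area for this particular field ; carrying out such one - dimensional integrals on noisy pixelized maps is , however , exactly the practical difficulty noted above .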
to construct variables that depend only on the electric or magnetic polarizationwe integrate the polarization field over the observed patch with carefully chosen spin - weight 2 window functions .we present a harmonic - based approach for constructing these window functions which is exact in the limit of azimuthally - symmetric patches .the method is expected still to perform well for arbitrary shaped patches of the sky , but the separation will no longer be exact in that case . constructing the window functions with our harmonic methodautomatically removes redundancy due to the finite size of the patch , keeps the information loss small ( except for very small patches ) , and ensures that for idealized noise in the polarization map ( isotropic and uncorrelated ) , the noise on the electric and magnetic variables preserves these properties . in this respect the construction is analogous to the orthogonalized harmonics approach used in the analysis of temperature anisotropies .however in the polarized case there is no simple interpretation in terms of a set of orthogonalized harmonics . in ref . it was shown how the lossless quadratic estimator technique can be applied to polarization . there , no attempt was made to separate the electric and magnetic contribution to the estimators , so the resulting window functions for the power displayed considerable leakage between the electric and magnetic modes .the authors of ref . showed how the leakage could be reduced , but it is arguably still too large to allow robust estimation of the magnetic signal in the presence of an electric signal that is orders of magnitude larger .we are able to perform a much cleaner separation at the level of the harmonic components in the map , and as we shall see the information loss in our approach is quite small for full sky surveys with a galactic cut .the electric - magnetic decomposition of the polarization field is exactly analogous to the corresponding decomposition of projected galaxy ellipticities induced by weak lensing . shows how to construct local real - space correlation functions for measuring the magnetic component .these are useful for distinguishing the purely electric signal due to gravitational lensing from intrinsic correlations in galaxy alignments , and the method has the advantage of working for arbitrarily shaped regions of sky .however the method assumed a flat sky approximation , and includes only the two - point information .for polarization observations the sky curvature will be important and we aim to extract a set of statistically independent observables that contain as much of the magnetic information as possible .this may also prove useful for weak lensing studies .the paper is arranged as follows . in sec .[ sec : ebpol ] we present the spin - weight 2 window technique for separating electric and magnetic polarization on the sphere , generalizing results in refs .section [ sec : sep ] describes our harmonic - based technique for constructing window functions with the properties required to ensure separation of the electric and magnetic modes while keeping information loss small .classical techniques for testing the hypothesis that there is no magnetic signal are discussed in sec .[ sec : hyp ] , and estimates of the detection limits with the planck satellite and future experiments are also given .lossless methods for estimation of the polarization power spectra are contrasted with methods using the separated variables in sec .[ lossless ] . 
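the leakage problem can be illustrated with a flat - sky toy calculation : a map containing only electric polarization is decomposed into electric and magnetic parts , first on the full periodic patch and then after a sharp sky cut . the fourier - space rotation by twice the wavevector angle used below is the standard flat - sky analogue of the spin-2 decomposition ( up to sign conventions ) , and the gaussian spectrum and the crude half - patch mask are arbitrary illustrative choices .

....
import numpy as np

# Flat-sky toy model of E/B leakage: build a pure-E map of Q and U,
# then measure the recovered B power with and without a sharp mask.

rng = np.random.default_rng(2)
n = 256
l = np.fft.fftfreq(n)
LX, LY = np.meshgrid(l, l, indexing="ij")
phi = np.arctan2(LY, LX)
c2, s2 = np.cos(2.0 * phi), np.sin(2.0 * phi)

# pure-E polarization: smooth Gaussian E modes, B identically zero
E = np.fft.fft2(rng.normal(size=(n, n)))
E *= np.exp(-(LX**2 + LY**2) / (2.0 * 0.05**2))      # smooth on ~20-pixel scales
Q = np.fft.ifft2(E * c2).real
U = np.fft.ifft2(E * s2).real

def b_power(Q, U):
    qk, uk = np.fft.fft2(Q), np.fft.fft2(U)
    return np.mean(np.abs(-qk * s2 + uk * c2)**2)

e_power = np.mean(np.abs(E)**2)
mask = np.ones((n, n))
mask[:, : n // 2] = 0.0                              # sharp half-patch cut

print("B/E power, full patch :", b_power(Q, U) / e_power)
print("B/E power, cut patch  :", b_power(Q * mask, U * mask) / e_power)
....

on the full patch the recovered magnetic power is at the level of numerical roundoff , while the cut map acquires a spurious magnetic component many orders of magnitude larger ; avoiding exactly this kind of electric contamination is what the window - function construction developed in the paper is designed to achieve .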
in a series of appendices we outline our conventions for spin weight functions and their spherical harmonics .in addition we present a number of the standard integral theorems on 2-dimensional manifolds in convenient spin weight form , and present recursive methods for the fast computation of the spin weight spherical harmonics and their inner products over azimuthally symmetric patches of the sphere .a further appendix discusses the statistics of detecting weak signals from tensor modes .the observable polarization field is described in terms of the two stokes parameters and with respect to a particular choice of axes about each direction on the sky . in this paperwe take these axes to form a right - handed set with the incoming radiation direction ( following ref .the real stokes parameters are conveniently combined into a single complex field that represents the observed polarization the values of stokes parameters depend on the choice of axes ; since is the difference of the intensity in two orthogonal directions it changes sign under a rotation of .the field is related to the field by a rotation of .more generally under a right - handed rotation of the axes by an angle about the incoming direction the complex polarization transforms as and is therefore described as having spin minus two ( see appendix [ app : eth ] for our conventions for spin weight functions ) .the analysis of polarized data is therefore rather more complicated than for the temperature which does not depend on making a choice of axes in each direction on the sky .as described in appendix [ app : eth ] , one can define spin raising and lowering operators that can be used to relate quantities of different spin .the spin raising operator is denoted and the lowering operator .since the polarization has spin - weight -2 it can be written as the action of two spin lowering operators on a spin zero complex number the underlying real scalar ( spin - zero ) fields and describe electric and magnetic polarization respectively .they are clearly non - local functions of the stokes parameters .one can define a spin zero quantity which is local in the polarization by acting with two spin raising operators .using some results from appendix [ app : eth ] one obtains where is the covariant derivative on the sphere .the real and imaginary parts of this equation can therefore be used to determine the electric and magnetic parts of the polarization . performing a surface integral we define where is a complex window function defined over some patch of the observed portion of the sky .it follows that provide a measure of the electric and magnetic signals .note that with an equivalent result for . using the integral theorem in appendix [ app : eth ] we can write where is now a spin 2 window function, is a spin window function , and is the spin 1 element of length around the boundary of . clearly we do not wish to take derivatives of noisy observed data and hence it is usually useful to choose the window function to eliminate the derivative terms on the boundary . 
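as a small numerical check of the spin-weight-two behaviour just described (a sketch only; the sign convention below is one common choice and is not meant to fix the conventions of appendix [app:eth]), the stokes parameters mix under a rotation of the polarization axes in such a way that the complex combination only picks up a phase:

```python
import numpy as np

def rotate_stokes(Q, U, psi):
    # stokes parameters in axes rotated by psi (one common sign convention)
    Qp = Q * np.cos(2 * psi) + U * np.sin(2 * psi)
    Up = -Q * np.sin(2 * psi) + U * np.cos(2 * psi)
    return Qp, Up

Q, U, psi = 0.3, -0.7, 0.4
Qp, Up = rotate_stokes(Q, U, psi)

# the complex combination transforms only by a phase exp(-2 i psi),
# illustrating its spin-weight-two character
assert np.allclose(Qp + 1j * Up, np.exp(-2j * psi) * (Q + 1j * U))
```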
for cmb polarimetrywe are interested in the polarization defined on the spherical sky .the surface integrals vanish if we choose such that , which will be true if is a linear combination of the spherical harmonics with or 1 , since these possess no spin 2 component .if we then set on the boundary , so as to eliminate the derivatives of the polarization , we are forced to consider circular patches , in which case a combination of the two harmonics works .this implies that the electric and magnetic signals can be probed by performing line integrals around circles , as emphasized in refs .these line integrals can be performed around any circle that is enclosed in the observed region of the sky , and it is unclear how to obtain a complete set of statistically independent observables in order to extract all of the available information . also for current experiments , performing one - dimensional line integrals on pixelized maps is unlikely to be a good way to extract information robustly . in this paper , we suggest choosing the window functions so that the line integrals around that appear in the construction of and contain no contribution from the magnetic and electric polarization respectively . in the absence of special symmetries ( see below for exceptions that arise in the case of circular patches ) this requires that , , and all vanish on the boundary .these conditions are equivalent to demanding that the window function and its normal derivative vanish on . with such a choice of windowwe can measure the electric and magnetic signals using only the surface integrals since the window functions are scalar functions on the sphere we can expand them in spherical harmonics , ( the square root factor is included for later convenience . )we need not include and 1 spherical harmonics since they do not contribute to the spin - weight window functions , and the boundary integral terms automatically separate for these multipoles . in practice , we are only interested in probing scales to some particular ( e.g. the magnetic signal from tensor modes has maximal power for and decreases rapidly with ) , so the sum in eq .( [ eq : hwindow ] ) can be truncated at some finite .we shall focus on the case where the observed sky patch is azimuthally symmetric in which case the construction of exact window functions becomes particularly simple .the harmonic - based method we describe in sec .[ sec : sep ] provides a practical solution to constructing a non - redundant set of window functions that separate the electric and magnetic modes exactly .in addition , for the special case of isotropic , uncorrelated noise on the observed polarization , these simple properties are preserved in the variables and . for observations over non - azimuthally symmetric patches our method can , of course , be used over the largest inscribed circular patch , but in this case there is inevitable information loss since we use only a subset of the observed data .however , we expect that the method presented in sec .[ sec : sep ] could also be applied directly to the full observed region to construct window functions that achieve approximate separation of electric and magnetic polarization .consider the case of an azimuthally - symmetric patch so the boundary consists of one or two small circles .for each azimuthal dependence on we can construct combinations that satisfy the necessary boundary conditions .for it is easy to see that and contain no contribution from and respectively for any choice of the [ i.e. 
the boundary integrals that distinguish ( ) from ( ) vanish if the polarization is pure magnetic ( electric ) ] .it follows that for there are linearly independent window functions that satisfy the boundary conditions . for will be shown in the next section that there is only one independent linear constraint per boundary circle , so there are possible window functions ( for a boundary composed of two circles ) . for are two linear constraints per boundary circle which can be taken to be the vanishing of and its normal derivative . in this casethere are ( ) window functions for boundaries consisting of one ( two ) small circles .since we are only considering a fraction of the sky not all of the window functions counted above may return observables and containing independent information .this arises because for large , or small patches , there will generally arise non - zero window coefficients that produce spin 2 window functions that are poorly supported over the patch .( see e.g. ref . for a discussion of the equivalent problem in the case of scalar functions . ) the redundancy in the set of acceptable window functions can be removed by expanding the spin 2 window functions in a smaller set of functions which are ( almost ) complete for band - limited signals over the region .the construction of such a set by singular value methods ( e.g. refs . ) forms the starting point of the method we present in the sec .[ sec : sep ] .we construct window functions in harmonic space , so as a useful preliminary we consider the harmonic expansion of spin - weight 2 fields over the full sphere .the polarization is spin and can be expanded over the whole sky in terms of the spin two harmonics ( see appendix [ app : harmonics ] for our conventions and some useful results ) reality of and requires , so that with an equivalent result for . under parity transformations but , since . from the orthogonality of the spherical harmonics over the full sphereit follows that in a rotationally - invariant ensemble , the expectation values of the harmonic coefficients define the electric and magnetic polarization power spectra : if the ensemble is parity - symmetric the cross term is zero , .the form of the harmonic expansion ( [ eq : hwindow ] ) of the window function ensures that the spin - weight windows are where the sum is over and .evaluating the surface integrals in eq .( [ eq : ewbw ] ) we find where the pseudo - harmonics are obtained by restricting the integrals in eqs .( [ eq : elm ] ) and ( [ eq : blm ] ) to the region : , \label{nosepeqe } \\\tilde{b}_{lm } & = & \frac{i}{2}\sum_{l'm'}\int_s \text{d}s\ , \left [ ( e_{l'm ' } - i b_{l'm'}){}_{-2}y_{l'm'}{}_{-2}y_{lm}^\ast - ( e_{l'm ' } + i b_{l'm ' } ) { } _ 2y_{l'm'}{}_2y_{lm}^\ast \right ] .\label{nosepeqb}\end{aligned}\ ] ] defining hermitian coupling matrices where we can write in the limit , become projection operators as a consequence of the completeness of the spin - weight harmonics .the matrix controls the contamination of and with magnetic and electric polarization respectively .our aim is to construct window functions that remove this contamination for all and .some elements of the matrices are shown in fig .[ windows ] . for azimuthally - symmetric patches the coupling matricesare block diagonal ( ) , and so window functions can be constructed for each separately [ see eq .( [ eq : hwindowm ] ) ] . 
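the mixing structure encoded by these coupling matrices can be illustrated with a toy linear-algebra sketch. the matrices below are invented placeholders (they are not computed from spin-weighted harmonics), and the mixing relations assumed are of the form e-tilde = w_+ e + i w_- b and b-tilde = w_+ b - i w_- e, consistent with the equations above; the sketch simply shows how the w_- block leaks electric power into the magnetic pseudo-multipoles and how the leakage disappears in the full-sky limit where w_+ tends to the identity and w_- to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                              # toy number of (l,m) modes

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Wp = np.eye(n) - 0.05 * (A + A.conj().T)           # placeholder for W_+ (hermitian)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Wm = 0.05 * (M + M.conj().T)                       # placeholder for W_- (hermitian)

E = rng.normal(size=n) + 1j * rng.normal(size=n)   # true electric multipoles, pure-E sky

# cut-sky pseudo-multipoles for a pure-E sky: W_- leaks E into B-tilde
E_tilde = Wp @ E
B_tilde = -1j * Wm @ E
print("E -> B leakage on the cut sky:", np.linalg.norm(B_tilde))   # non-zero

# full-sky limit: W_+ -> identity, W_- -> 0, so the leakage disappears
Wm_full = np.zeros((n, n))
print("full-sky leakage:", np.linalg.norm(-1j * Wm_full @ E))      # exactly zero
```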
for have so and we have clean separation for any azimuthally - symmetric window function .the set of azimuthally symmetric window functions gives separated variables that contain the same information as would be obtained by computing line integrals around all those circles concentric with the boundary of the azimuthal patch . for general is leakage of into ; for parity - symmetric cuts there is only leakage between modes with different parity ( i.e. for even the pseudo - harmonics depend on only for odd ) .we showed in the previous section that , for a general window function , the contamination of e.g. by the magnetic polarization is due entirely to boundary terms .this implies that can always be written as a line integral around the boundary of .( we show in appendix [ app : ints ] that the matrices can be transformed into line integrals for . however can be written as a line integral for all and . ) making use of eq .( [ eq : appcdint ] ) , it is straightforward to show that \nonumber\\ & & \mbox{}+ \oint_{\partial s } { { } _ 1\text{d}l\:}\ , \left[\sqrt{l(l+1 ) } { } _ { -1}y_{lm}^\ast { } _ { -2}y_{l'm ' } + \sqrt{(l'-1)(l'+2 ) } y_{lm}^\ast { } _ { -1}y_{l'm ' } \right ] \bigr ) .\label{eq : wminusint}\end{aligned}\ ] ] this can be put in manifestly hermitian form using the recursion relation derived from the action of on for a circular boundary at constant latitude ( i.e. the boundary of an azimuthal patch ) , we find , \label{eq : wminusdcmp}\ ] ] where the vectors , \\v_l(m ) & = & \sqrt{\frac{(l-2)!}{(l+2)!}}\frac{\sqrt{(m^2 - 1)}}{\sin\theta } y_{lm}(\theta,\phi)\end{aligned}\ ] ] for and some arbitrary .[ note that and will not generally be orthogonal so eq . ( [ eq : wminusdcmp ] ) is not the spectral decomposition of . ]any window whose inner products with and both vanish , i.e. will achieve clean separation of electric and magnetic polarization . for such window functions and their normalderivative necessarily vanish on the boundary . as noted earlier , for is actually only one constraint to be satisfied which now follows from the fact that .in this section we give a practical method for constructing a non - redundant set of window functions where labels the particular window , that achieve exact separation for azimuthal patches .the corresponding ( cleanly separated ) electric and magnetic observables will be denoted and .we will make use of a notation where vectors are denoted by bold roman font , e.g. has components , and has components , and matrices are denoted by bold italic font , e.g. have components .we present the method in a form that is applicable ( though no longer exact ) to arbitrary shaped regions ; for azimuthal patches the method is exact . for the azimuthal caseall matrices are block diagonal and the window functions can be constructed for each separately . in matrix form ,( [ eq : ewbw_harm ] ) is where is the matrix whose row contains the harmonic coefficients of the window function , and recall for an azimuthally - symmetric sky patch the block - diagonal matrices ( with components ) from which are constructed can be computed very quickly using the recursion relations given in appendix [ app : ints ] .alternatively , can be computed directly from eq .( [ eq : wminusdcmp ] ) . 
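the constraint just described, that an acceptable window must have vanishing inner products with the two boundary vectors, can be imposed with elementary linear algebra. the sketch below uses random placeholders for u_l(m) and v_l(m) (in the real construction they come from harmonics evaluated on the boundary circle) and extracts the null space of the constraint matrix, leaving the expected number of admissible window vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_l = 12                          # number of multipoles retained at this m
u = rng.normal(size=n_l)          # placeholder for u_l(m)
v = rng.normal(size=n_l)          # placeholder for v_l(m)

constraints = np.vstack([u, v])   # 2 x n_l constraint matrix
_, s, vh = np.linalg.svd(constraints)
windows = vh[len(s):]             # n_l - 2 vectors spanning the null space

# every admissible window is orthogonal to both boundary constraint vectors
assert np.allclose(windows @ u, 0.0) and np.allclose(windows @ v, 0.0)
```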
in the limit of full sky coverage and .we know that for , the range of submatrix of is two - dimensional [ spanned by and ] , so that all but two of the eigenvalue of the submatrix are zero .equivalently , all but two linear combinations of the are independent of .the submatrices of have only one non - zero eigenvalue ; the associated eigenvectors are .the submatrix is identically zero .the essence of our method for constructing the window functions is to choose to project out of the range of .we first diagonalize by performing a singular value decomposition . here, is a positive ( semi-)definite diagonal matrix whose elements are the eigenvalues of .the columns of the unitary matrix are the normalized eigenvectors of .the singular value decomposition allows us to identify the linear combinations that are poorly determined by those corresponding to the small diagonal elements of .the eigenvectors with very small eigenvalues correspond to polarization patterns that essentially have no support inside the observed patch of the sky , and would lead to a set of redundant window functions if not removed from the analysis .the distribution of eigenvalues of is approximately bimodal as illustrated in fig .[ evals ] , and the exact definition of ` small ' is not critical when considering the range of the matrix .this bimodality arises because are approximately projection operators for large , and the fact that the range of is a rather small subspace . to remove redundant degrees of freedom from the spin 2 window functions ,we define an operator which projects onto the eigenvectors of whose eigenvalues are close to one .this amounts to removing the appropriate columns of .since is orthogonal , is column orthogonal and hence ( but ) .the matrix is the corresponding smaller square diagonal matrix , and we have we now multiply by , defined by {ij } \equiv \delta_{ij } [ { \tilde{{\bm{d}}}}_+]_{ii}^{-1/2} ] .since the noise correlation is proportional to the identity matrix for isotropic noise we can perform any rotation , where is unitary , and still have a set of variables with uncorrelated errors .the rotated variables are derived from window functions . for a particular theoretical model we can rotate to the frame where the signal matrix is diagonal .the rotated will then be fully statistically independent . in fig .[ realspace ] we plot the window functions for the which give the largest contributions to the signal for a typical flat model with a scale invariant tensor initial power spectrum and no reionization .the window functions are plotted as line segments of length at angle to the direction where the real quantities and are defined in terms of the real part of the ( rotated ) scalar window function as this definition ensures that for the imaginary part of , and should be defined as . for the case of azimuthal patches , as considered in fig .[ realspace ] where the windows are constructed for each , the imaginary part would produce a plot that is rigidly rotated by ( ) about the centre .plotting the window functions in this form is useful since the length of the line segment gives the sampling weight assigned to that point , and the orientation gives the direction of the linear polarization that contributes at each point .we could repeat the exercise for the in which case for the real part we would define . 
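the diagonalize-project-whiten steps described in this section can be summarized in a few lines of linear algebra. in the sketch below a random symmetric, positive semi-definite matrix stands in for w_+, the eigenvalue threshold is an arbitrary placeholder, and the noise covariance of the pseudo-multipoles is assumed to be proportional to w_+ (as holds for idealized isotropic map noise in this kind of construction); none of the numbers are the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
M = rng.normal(size=(n, n))
Wp = M @ M.T / n
Wp /= np.max(np.linalg.eigvalsh(Wp))   # placeholder W_+, eigenvalues in [0, 1]

evals, evecs = np.linalg.eigh(Wp)      # Wp = U D U^T

keep = evals > 0.5                     # keep "well-supported" modes (threshold is a placeholder)
U_tilde = evecs[:, keep]
D_tilde = evals[keep]

# whitening by D~^{-1/2}: the projected variables then carry uncorrelated,
# equal-variance errors if the pseudo-multipole noise covariance is prop. to W_+
T = np.diag(D_tilde ** -0.5) @ U_tilde.T
assert np.allclose(T @ Wp @ T.T, np.eye(keep.sum()), atol=1e-8)
```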
in fig .[ contribs ] we show the signal to noise in the magnetic variables for two azimuthal patches .as the patch size increases the signal in the modes with large also increases , reflecting the fact that for small patches the diagonalization of removes a greater relative fraction of the modes at each as increases .for small patches of the sky most of the signal at each is compressed into a small number of modes , whereas for larger patches the signal is distributed more uniformly . for cosmological models with reionizationthe signal for large patches is distributed less uniformly , with a small number of modes giving big contributions due to the greater large scale power .we are now in a position to use the magnetic observable to constrain the magnetic signal without having to worry about contamination with the much larger electric signal .the simplest thing to do would be to test the null hypothesis that the magnetic signal is due entirely to noise ( this hypothesis is unlikely to be ruled out pre - planck ) .if the signal were not consistent with noise it could indicate various things : the presence of cmb magnetic polarization , the presence of polarized foregrounds , that havent been removed successfully , systematic leakage into the magnetic mode in the analysis ( e.g. due to unaccounted for pointing errors , or pixelization effects ) , or - leakage in the observation ( e.g. due to unaccounted for cross - polarization in the instrument optics ) .magnetic polarization can originate from tensor modes , but also by weak lensing of the scalar electric polarization .the lensing signal should be dominant on small scales , and the magnetic variables could certainly be used to observe this signal . of more interest here is the larger scale contribution from tensor modes . in order to identify this componentwe shall have to model the lensing contribution , which becomes increasingly important as one tries to observe smaller tensor contributions .in the first three of the following subsections we assume that the magnetic signal is generated purely from the tensor modes , then in subsection [ lensing ] we show how our results can be adapted to account for the lensing signal .if the noise and signal are gaussian the will be gaussian and the simplest thing to do is a test by computing ( for isotropic noise this is just ) . whilst the cmb magnetic polarization signal is from tensor modesis expected to be gaussian , the lensing signal and any spurious or unexpected signal may not be .one may therefore also wish to do a more sophisticated set of statistical tests at this point .assuming that the signal is as expected any signal present is gaussian and would have a power spectrum as predicted for a near scale - invariant tensor initial power spectrum one can account for the expected form of the power spectrum and thereby increase the chance of a detection .we assume that the main parameters of the universe are well determined by the time magnetic polarization comes to be observed , so the shape of the magnetic polarization power spectrum is known to reasonable approximation ( the only significant freedom arising from the shape of the primordial tensor power spectrum ) .we compute the expected signal correlation for some particular tensor amplitude and say that the real signal is . assuming gaussian signal and noise the likelihood in this case is then given by }{|{\bm{n}}+ r{\bm{s}}|^{1/2}}. 
\label{likelihood}\ ] ] the likelihood distribution can be computed numerically from the observed , and gives the posterior probability distribution on the value of after multiplying by the prior . the magnetic signal is expected to be weak , and the detailed statistics for analysing such a signal are given in appendix [ app : stats ] .there we show that gives a measure of the number of ` sigmas ' of the detection the number of standard deviations of the maximum likelihood from pure noise ( ) assuming low signal to noise .we use this as a test statistic in monte - carlo simulations to compute detection probabilities at a given significance .we have checked at isolated points that using optimal statistics gains very little except for very small sky patches ( where there are only a small number of magnetic modes , each of which must have fairly high signal to noise in order to get a detection ) . using the variablesis clearly not optimal as we have thrown away some well determined linear combinations of and .however in the idealized situation considered here they should provide a robust way for testing for magnetic polarization .the number of modes thrown away is in any case quite small not more than two per mode for azimuthal patches .we quantify this information loss further in sec .[ lossless ] . of the current funded experiments ,only planck is likely to detect magnetic polarization if the levels are as predicted by standard cosmological models . as a toy model we consider the and ghz polarized channels of the planck high frequency instrument .we approximate the noise as isotropic and ignore the variation of beam width ( 7.1 and 5.0 arcmin full width at half maximum respectively ) between these channels .combining maps from these two channels with inverse variance weighting , we find , where and are expressed as dimensionless thermodynamic equivalent temperatures in units of the cmb temperature .we apply an azimuthally - symmetric galactic cut of degrees either side of the equator . the expected magnetic polarization power spectrum peaks at , and there is therefore no need to consider high resolutions so we can use without significant loss of power . in fig .[ planck_detect ] we show the probability of obtaining a detection with planck as a function of the true underlying scale - invariant tensor power spectrum amplitude ( defined as in ref . ) assuming a standard flat model .a tensor amplitude of would contribute about 1/10 of the large scale temperature detected by cobe , and is likely to be detected by planck if our model is at all realistic .this corresponds to being able to detect the signal from inflationary models with energy scale at horizon crossing .such models include the simple potentials , with . for a given detector sensitivity the magnitude of the signal that can be detected depends on the size of the sky patch that is observed .the signal to noise in each observable increases in proportion to the observation time per unit area .the noise covariance is proportional to which varies in proportion to the observed area for a given survey duration .for large areas the number of observables varies approximately in proportion to the area , which would make the number of ` sigmas ' of a chi - squared detection scale with the square root of the area . 
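the amplitude likelihood written above is one-dimensional and easy to evaluate numerically. the sketch below works in a rotated frame where both the noise and the template signal covariances are diagonal, with invented numbers throughout, and quotes a rough significance as the maximum-likelihood amplitude divided by a curvature-based error; it is only meant to illustrate the procedure, not to reproduce the paper's nu statistic exactly.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n_modes = 200
N = np.full(n_modes, 1.0)                        # noise variances (placeholder)
S = 0.5 * np.exp(-np.arange(n_modes) / 40.0)     # signal template at r = 1 (placeholder)

r_true = 0.3
z = rng.normal(scale=np.sqrt(N + r_true * S))    # simulated magnetic variables

def minus_log_like(r):
    c = N + r * S
    return 0.5 * np.sum(z ** 2 / c + np.log(c))

r_hat = minimize_scalar(minus_log_like, bounds=(0.0, 10.0), method="bounded").x

# crude 1-sigma error from the curvature of ln L near the maximum
eps = 1e-3
curv = (minus_log_like(r_hat + eps) - 2.0 * minus_log_like(r_hat)
        + minus_log_like(r_hat - eps)) / eps ** 2
print("r_hat = %.3f, significance ~ %.1f sigma" % (r_hat, r_hat * np.sqrt(curv)))
```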
combining these two effects, the expected detection is therefore proportional to one over the square root of the area , and is larger if a fixed survey time is spent observing a smaller area .however for smaller areas the signal to noise on each observable becomes larger , and the number of variables decreases . with fewer variables the probability of obtaining no detection increases significantly .this is just the fact that if you observe a small patch of sky you have a larger chance of being unlucky and having a patch which has a small magnetic polarization signal everywhere .also the existence of the boundary becomes increasingly important for small patches and a larger fraction of the information is lost in order to obtain clean separation of the magnetic observables .the question of ` optimal ' survey size is somewhat delicate , as it depends on the probability distribution for the detection significance that one thinks is optimal . in fig .[ patchsize_probs ] we plot the probability of detecting various tensor amplitudes at 95 per cent and 99 per cent confidence for different survey sizes . in fig .[ patchsize ] we show the minimum gravitational wave ( tensor ) amplitude that might be detected at 99 per cent confidence as a function of the radius of the survey size .it is clear that radii in the range are optimal , though one can not be more precise without defining more specifically the aims of the observation .a radius of about would be a good compromise between being able to place good upper limits if there was no detection ( which favours radii closer to ) and having a better chance of detecting small amplitudes ( which favours smaller radii ) . the solid curves in fig .[ patchsize ] fully take account of the need to separate the magnetic signal from the ( much larger ) electric signal . by way of comparison ,the dashed curves show the minimum detectable amplitude obtainable if one could do perfect lossless separation , which is clearly impossible on an incomplete sky ( see sec .[ lossless ] ) . with lossless separation ,the best upper bounds are obtained for smaller patches since the size of the boundary is no longer important .the dashed curves in fig .[ patchsize ] can be compared with those given in ref . where perfect separation was assumed , the effects of finite sky coverage were treated only approximately , and a less rigorous approach to hypothesis testing was employed . ref . gives an improved analysis along the lines of ref . , and also performs calculations properly taking account of the mixing of electric and magnetic polarization through a ( brute - force ) fisher analysis in pixel space .unlike most of the foreground signals that might contaminate the observation , the magnetic signal from the lensing of scalar electric polarization has the same frequency spectrum as the primordial magnetic signal and so can not be removed easily by use of multi - frequency observations . in order to isolate the tensor contribution to the magnetic signal we can incorporate knowledge of the expected lensing power spectrum into the null - hypothesis covariance matrix ( we neglect the non - gaussianity of the lensed polarization ) . for the multipoles of interest for the tensor signalthe lensing signal is approximately white , with if the cobe signal is entirely generated by scalar modes . 
for large patches of sky , where the matrix is nearly proportional to the identity matrix, the lensing signal contributes like an additional isotropic noise with .we have checked this approximation by computing the following results exactly in particular cases , with agreement to within a few percent for patch sizes with .the effect of the lensing is therefore simply to increase the effective noise by a constant amount . for the planck satellite the effect is small , reducing the that could be observed by about per cent .however for the smaller surveys with better sensitivity , considered in figs .[ patchsize_probs ] and [ patchsize ] , the effect is much more important . for a one year survey of radius with sensitivity the noise varianceis given by where is the fraction of the sky which is observed and .this noise gives the tensor amplitudes plotted in fig .[ patchsize ] . incorporating the lensing effect the actual tensor amplitude one could detect in an experiment with sensitivity and duration is where is the amplitude for a one year mission with and lensing ignored ( i.e. the amplitude plotted in fig .[ patchsize ] ) .this allows our previous results to be modified for inclusion of the lensing signal .we have plotted the modified results for various survey sensitivities in fig .[ patchsize_lens ] .the optimal survey size now depends on the sensitivity as sensitivity improves the lensing signal becomes more important and one needs to survey larger scales in order to accurately measure the difference in variance expected with the tensor signal . for large patchsizes the tensor amplitude that can be detected in the absence of lensing is proportional to . allowing for lensing, there is an optimal survey size at [ if there is a solution with , in other words when the variance of the instrument noise is equal to the lensing signal .there is a lower limit of that can be measured even with perfect sensitivity , when the tensor contribution can not be distinguished from random sampling variations in the lensing signal distribution .this corresponds to an inflation model with energy scale , in broad agreement with ref . for a three sigma detection .this situation could only be improved if one could find ways to obtain information about the particular realization of the lensed signal .we have assumed that component separation and source subtraction can be performed exactly so that the observed signal is only lensed cmb .polarized thermal dust emission is expected to generate a significant magnetic signal at roughly the level shown by the line in fig . [ patchsize_lens ] at .separation of this signal from the cmb signal should be possible with multi - frequency observations , and it should then not have a significant effect on our results .we now compare the above analysis with truly lossless methods .lossless , likelihood analysis for cmb polarization in pixel space has been considered recently in ref . . in this sectionwe consider lossless and nearly lossless methods in harmonic space .a simple way to incorporate most of the magnetic signal for constraining the tensor modes is to use the unprojected variables , where and were defined in sec .[ sec : sep ] .the null - hypothesis covariance matrix can be computed including the expected signal from the electric polarization , and the analysis can be performed as before .this is marginally superior to using the projected variables if the tensor amplitude is quite high , as shown in fig .[ planck_detect ] for the planck mission . 
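the statements above about the optimal survey size can be illustrated with a back-of-envelope calculation. everything in the sketch below is a placeholder: the instrument-noise scaling simply encodes "noise variance proportional to sky fraction for a fixed survey duration", the lensing term is a constant magnetic-noise floor, and the optimum is taken to be where the two are comparable, as stated in the text; the numbers bear no relation to any real experiment.

```python
import numpy as np

N_LENS = 4.0e-2     # effective white lensing "noise" (arbitrary units, placeholder)

def instrument_noise(radius_deg, sensitivity=1.0, duration_years=1.0):
    # variance grows with sky fraction for a fixed total integration time
    f_sky = 0.25 * np.radians(radius_deg) ** 2        # small circular patch
    return sensitivity ** 2 * f_sky / duration_years

radii = np.linspace(1.0, 60.0, 600)
inst = instrument_noise(radii)
r_opt = radii[np.argmin(np.abs(inst - N_LENS))]       # instrument noise ~ lensing floor
print("rough optimal radius: %.1f deg" % r_opt)
print("effective noise there:", instrument_noise(r_opt) + N_LENS)
```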
for smaller tensor amplitudes the entangled linear combinations of and modesare dominated by the electric component and performing the projection looses very little .using the projection gives one clean separation , and there is no need to know the electric polarization power spectrum . by identifying variables that depend only on the electric and magnetic signals at the level of the map we also do not need to assume gaussianity , so we could for example perform gaussianity tests on the two physically distinct polarization types independently .we now consider the full joint analysis of the electric and magnetic polarization , with the pseudo - multipoles and forming our fundamental data vector since we are no longer worrying about - separation so we can equally well use the block - diagonal frame where performing a singular value decomposition of the block - diagonal matrix ( where the matrices and should not be confused with those defined in sec .[ sec : sep ] ) , we can identify the well determined linear combinations as before this diagonalization is equivalent to defining harmonic coefficients with respect to a complete set of spin two harmonics which are orthonormal over the patch of sky , in the same way as for the spin zero cut - sky temperature analysis .as before , this construction ensures that for isotropic noise the noise correlation is diagonal the signal correlation is given by where the and are the diagonal electric and magnetic power spectrum matrices respectively , and we have assumed that . if the noise and signal are gaussian we can proceed to do a likelihood analysis for the power spectra using }{|{\bm{n}}+{\bm{s}}|^{1/2}}.\ ] ] the coupling matrices can be computed quickly for an azimuthally symmetric patch of sky , as described in appendix [ app : ints ] , and modes with different decouple .the problem is therefore tractable .however it is not nearly so simple to find a maximum likelihood estimate of the magnetic amplitude , and in general there will be complicated correlations between the recovered power spectra . by using the lossy projection in the previous sections we have essentially shown that this likelihood function is ` nearly ' separable . making it separable costs something in terms of lost information , butit significantly simplifies the problem . using the projected variables also reduces the size of the matrices , so performing the matrix inversions is significantly faster .if the signal is determined to be negligible one would want to apply an efficient nearly - lossless method to estimate the electric power spectrum ( or to do parameter estimation ) , so we now consider the case when one polarization type is absent .if the signal can be neglected we have where we have done a singular value decomposition as before so that we can find the well determined linear combinations of the : the matrices one has to invert to do a likelihood analysis are now one half the size of those in the optimal method when both polarization types are present , and so the problem is numerically much faster . 
however isotropic noise no longer gives the simple diagonal noise covariance , though this can always be rectified by using .in practice a nearly optimal method would probably be more appropriate using only the unprojected variables , where and were defined in sec .[ sec : sep ] .these variables have diagonal noise properties like the for isotropic noise , and the computational saving may be significant when analysing high resolution polarization maps .we have checked numerically that including in the analysis gains very little even for low and small patches . for large area surveys at high resolutionthe information loss will probably be negligible .there are exactly equivalent relations for the well determined magnetic variables in the case when vanishes .this case is of little practical interest , since the signal would have to be removed to within the magnitude of the signal , and this is impossible on an incomplete sky since the two are not unambiguously distinguishable without accurate boundary data .however , supposing that could be removed is useful theoretically as we can then compute the best obtainable magnetic signal to compare with what we obtain using our projected variables . the information lost due to the projection depends on the cosmology .models with reionization have more power on large scales and a greater fraction of the power is lost due to removal of the boundary terms . for our toy model of the planck satellitewe find that the amplitude that could be detected at given significance and probability is reduced by about per cent by the projection for a cosmology with reionization at , but only by per cent for a zero reionization model . in the reionization model oneis loosing a lot of the additional information in the low multipoles that in the absence of the projection would have high signal to noise .the net result is that the reionization model has an only slightly higher chance of giving a tensor detection despite having more large scale power .by using the unprojected variables and incorporating the expected electric polarization contamination as an extra noise term one can approximately halve this loss .the lossless result is compared to the realistic projected result for general circular sky patches in fig .[ patchsize ] for an observation with much higher sensitivity .the cost we incur by using the non - optimal method in terms of slightly larger error bars on the signal , or a less powerful test of detection at a given significance , is small for large survey areas though it does increase for small sky patches .for these sensitive observations the electric signal is much larger than the magnetic signal and essentially nothing is lost by performing the projection rather than including electric contamination as a large extra noise . 
to make a detection of the magnetic signal on such a small sky patch with the planned long duration balloon observations the tensor / scalar ratio would need to be significantly larger than one , which is too large to be allowed by the current temperature anisotropy observations course , seeing if there is only a small magnetic signal is an important consistency check for current models with low tensor / scalar ratio to pass .one simple way to reduce the information loss in our method would be to use data objects that include not only the surface integrals , but also those parts of the boundary terms in eqs .( [ eq : i2wp ] ) and ( [ eq : im2wp ] ) that do not depend on on the boundary .such objects would separate electric and magnetic polarization exactly if the scalar window functions were constructed to vanish on the boundary .the problem of producing a non - redundant set of such windows could be tackled with a simple variant of the harmonic - based method presented in sec .[ sec : sep ] . the additional boundary contribution would cancel that part of that couples to the normal derivative of the window function on the boundary , leaving a single non - zero singular value ( for ) to project out .the net effect would be that for azimuthal patches we would gain one extra variable per for , though the noise properties of these variables would not be as simple as if the line integrals were not included , and the problem of performing line integrals with pixelized data is non - trivial .for reionization models ( which have significant large scale power ) the reduction in information loss may be worth the effort required to overcome these obstacles , though a full analysis with the non - separated variables would probably work better .we have considered the problem of producing statistically independent measures of the electric and magnetic polarization from observations covering only a portion of the sky .although the separation of the polarization field into electric and magnetic modes is not unique in the presence of boundaries , we have shown how to construct window functions that are guaranteed to probe separately the electric and magnetic polarization exactly over azimuthally - symmetric patches of the sky .we presented a harmonic - based method for efficient construction of the windows that automatically removes redundancy due to the finite sky coverage .in addition , our window functions return separated electric and magnetic variables that have very simple diagonal noise correlations for idealized noise on the polarization map . for azimuthal patches separating the electric and magnetic polarizationcomes at the cost of losing two pieces of information per mode , or roughly twice the number of pixels of area on the boundary of the patch . 
for large patchesthis information loss is small unless there is large scale power due to reionization , but for smaller patches it can be more severe due to the limited support of the high spin - weight 2 harmonics in the patch .although we have proved that our method gives exact separation for azimuthal patches , the harmonic - based construction should produce window functions that give approximate separation for arbitrarily shaped patches with similar information loss to the azimuthal case .we showed how the variables constructed from our window functions could be used to constrain the amplitude of the magnetic signal without contamination from the much larger electric signal .for the first time , we made predictions for the tensor amplitude that planck should be able to detect taking proper account of excluding the galactic region .if other non - negligible foregrounds can be removed using the other frequency channels , planck should be able to detect the magnetic signal predicted by some simple inflationary models . for less sensitive observations, our window functions should nevertheless be useful to set upper limits on the magnetic signal , and may also aid the identification of systematic effects in the instrument or analysis pipeline .if the magnetic signal is shown to be consistent with noise then we showed how one can use all the well determined polarization pseudo - multipoles to analyse the electric polarization power spectrum without loss of information .the analysis using these variables is no more complicated than the analysis of temperature anisotropies using cut - sky orthogonalized scalar harmonic functions .we have only considered isotropic noise here , however , as long as the noise is azimuthally symmetric the separation of modes will still work , and the problem remains computationally tractable though rather less simple . in practice , there will be several other complications in real - life cmb polarimetry observations that will impact on the map - making and subsequent analysis stages .further careful investigation of the propagation of instrument effects such as beam asymmetries , straylight , cross - polar contamination , and pointing instabilities through the map - making stage will be required before the programme for analysing magnetic polarization outlined in this paper will be realizable .we acknowledge use of the lapack package for performing the matrix decompositions .we acknowledge the support of pparc via a pptc special program grant for this work .ac acknowledges a pparc postdoctoral fellowship .we also thank pparc and hefce for support of the cosmos facility .in general a spin - weight quantity is defined over a two - dimensional riemannian manifold with respect to an orthonormal diad field . the local freedom in the choice of diad amounts to the transformations of the ( complex ) null vectors .a quantity is defined to be of spin - weight if under the transformation ( [ eq : appa0a ] ) . to every spin - weight object we can associate a ( complex ) symmetric trace - free , rank- tensor : for , where the irreducible tensor product .the inverse relation is for we define .the spin raising and lowering operators and are defined by the null diad components of the covariant derivatives of : ( the minus signs are conventional ). in cmb polarimetry we are concerned with fields defined over the sphere , in which case the transformation in eq .( [ eq : appa0a ] ) corresponds to a _ left_-handed rotation of the diad about the outward normal . 
choosing the orthonormal diad to be aligned withthe coordinate basis vectors and of a spherical polar coordinate system , we have and .it follows that for this choice of diad the spin raising and lowering operators reduce to an elegant interpretation of the spin raising and lowering operators on the sphere can be obtained by considering spin - weight objects defined on a diad at ( position - dependent ) angle to the coordinate directions , so that where is defined on and . in this case , the spin raising and lowering operators can be related to the angular momentum operators for a rigid body .working in a representation where the orientation of the body is specified in terms of euler angles follows ref . , i.e. , successive right - handed rotations by , , and about the , , and -axes respectively . the use of , which is minus the third euler angle , as a configuration variable for the rigid body is necessary to relate the angular momentum operators directly to the spin raising and lowering operators with the ( consistent ) conventions we have adopted here . ] , the angular momentum operators on the _ body - fixed axes _take the form where .these operators satisfy the commutation relations = \mp k_\pm , \qquad [ k_+,k_-]=-2 k_z , \label{eq : appa0}\ ] ] so that are lowering / raising operators with respect to the eigenvalues of .note that the signs in these commutation relations are different from those for angular momentum operators on a fixed frame since on the body - fixed axes we have = - i k_z ] , the fisher curvature for small . in this limitthe null - buster remains a good statistic and .however in general performs significantly better when the eigenvalues are distributed less evenly , or when there are not that many eigenvalues . as the signal to noise increases also performs much less sub - optimally than the null - buster. for the case of magnetic polarization observationsthe statistic outperforms the null - buster for a wide range of patch sizes in realistic reionization models .the large scale magnetic signal coming from low redshift ( ) reionization gives a small number of modes with relatively high signal to noise ( see fig .[ contribs ] ) , and the conditions under which the null - buster is a good statistic are therefore not satisfied .the qualitative reason that the null - buster performs poorly is that the position of the maximum and the curvature of the likelihood function are correlated , so dividing the actual maximum by the expected curvature does not give you an accurate measure of the number of ` sigmas ' from zero for a particular observation .this makes the null hypothesis distribution unnecessarily broad at large values , and therefore makes it harder to rule out the null hypothesis with good significance .the statistic has a much sharper distribution than the null - buster ( which has a distribution similar to chi - squared ) in the alternative hypothesis , and the value of corresponds much more closely to the significance ( measured in gaussian - like ` sigmas ' ) of the result .a few points can be made in the null - buster s defence .firstly it is slightly easier to compute than .secondly , since we motivated the by assuming a gaussian signal it is conceivable that the null - buster could perform better with certain non - gaussian signals .lastly , the null - buster is quadratic which makes it easy to calculate the mean and variance analytically .however it is clear that with gaussian signals using the null - buster is in general significantly sub - optimal .
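the comparison described in this appendix is easy to reproduce in a small monte-carlo experiment. the sketch below uses an invented diagonal noise and signal template with a few high signal-to-noise modes (the regime in which the text says the quadratic statistic suffers), a likelihood-ratio based significance as a stand-in for nu, and one common normalization of a null-buster-like quadratic statistic; it illustrates the kind of comparison made here rather than reproducing the paper's numbers.

```python
import numpy as np

rng = np.random.default_rng(4)
n_modes, n_sims, r_true = 100, 2000, 0.4
N = np.ones(n_modes)
S = np.exp(-np.arange(n_modes) / 10.0)        # a few modes carry most of the signal

r_grid = np.linspace(0.0, 5.0, 501)

def nu_like(z):
    # likelihood-ratio based significance against r = 0 (stand-in for nu)
    c = N[None, :] + r_grid[:, None] * S[None, :]
    loglike = -0.5 * np.sum(z ** 2 / c + np.log(c), axis=1)
    return np.sqrt(max(2.0 * (loglike.max() - loglike[0]), 0.0))

def nu_quad(z):
    # a null-buster-like quadratic statistic, unit variance under the null
    return (np.sum(z ** 2 * S / N ** 2) - np.sum(S / N)) / np.sqrt(2.0 * np.sum((S / N) ** 2))

null = np.array([(nu_like(z), nu_quad(z))
                 for z in rng.normal(size=(n_sims, n_modes)) * np.sqrt(N)])
alt = np.array([(nu_like(z), nu_quad(z))
                for z in rng.normal(size=(n_sims, n_modes)) * np.sqrt(N + r_true * S)])

thresholds = np.quantile(null, 0.99, axis=0)    # 99 per cent points under the null
print("detection probability (likelihood, quadratic):", (alt > thresholds).mean(axis=0))
```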
the full sky cosmic microwave background polarization field can be decomposed into 'electric' and 'magnetic' components. working in harmonic space, we construct window functions that allow clean separation of the electric and magnetic modes from observations over only a portion of the sky. we explicitly demonstrate the method for azimuthally symmetric patches, but also present it in a form in principle applicable to arbitrarily-shaped patches. from the window functions we obtain variables that allow robust estimation of the magnetic component without risk of contamination from the probably much larger electric signal. the variables have very simple noise properties, and further analysis using them should be no harder than analysing the temperature field. for an azimuthally-symmetric patch, such as that obtained from survey missions when the galactic region is removed, the exactly-separated variables are fast to compute. we estimate the magnetic signal that could be detected by the planck satellite in the absence of extra-galactic foregrounds. we also discuss the sensitivity of future experiments to tensor modes in the presence of a magnetic signal generated by weak lensing, and give lossless methods for analysing the electric polarization field in the case that the magnetic component is negligible. a series of appendices reviews the spin-weight formalism and gives recursion relations for fast computation of the spin-weighted spherical harmonics and their inner products over azimuthally-symmetric patches of the sphere. a further appendix discusses the statistics of weak signal detection.
cooperation has played a fundamental role in the early evolution of our societies and continues playing a major role still nowadays . from the individual level , where we cooperate with our romantic partner , friends , and co - workers in order to handle our individual problems , up to the global level where countries cooperate with other countries in order to handle global problems ,our entire life is based on cooperation .given its importance , it is not surprising that cooperation has inspired an enormous amount of research across all biological and social sciences , spanning from theoretical accounts to experimental studies and numerical simulations .since the resolution of many pressing global issues , such as global climate change and depletion of natural resources , requires cooperation among many actors , one of the most relevant questions about cooperation regards the effect of the size of the group on cooperative behavior . indeed , since the influential work by olson , scholars have recognized that the size of a group can have an effect on cooperative decision - making .however , the nature of this effect remains one of the most mysterious areas in the literature , with some scholars arguing that it is negative , others that it is positive , and yet others that it is ambiguous or non - significant .interestingly , the majority of field experiments seem to agree on yet another possibility , that is , that group size has a curvilinear effect on cooperative behavior , according to which intermediate - size groups cooperate more than smaller groups and more than larger groups .the emergence of a curvilinear effect of the group size on cooperation in real life situations is also supported by data concerning academic research , which in fact support the hypothesis that research quality of a research group is optimized for medium - sized groups . herewe aim at shedding light on this debate , by providing evidence that a single parameter can be responsible for all the different and apparently contradictory effects that have been reported in the literature .specifically , we show that the effect of the size of the group on cooperative decision - making depends critically on a parameter taking into account different ways in which the notion of cooperation itself can be defined when there are more than two agents . indeed , while in case of only two agents a cooperator can be simply defined as a person willing to pay a cost to give a greater benefit to the other person , the same definition , when transferred to situations where there are more than two agents , is subject to multiple interpretations .if cooperation , from the point of view of the cooperator , means paying a cost to create a benefit , what does it mean from the point of view of the _ other _player__s _ _ ? does get earned by each of the other players or does it get shared among all other players , or none of them ?in other words , what is the marginal return for cooperation ? 
of course , there is no general answer and , in fact , previous studies have considered different possibilities .for instance , in the standard public goods game it is assumed that gets earned by each player ( including the cooperator ) ; instead , in the n - person prisoner s dilemma ( as defined in ) it is assumed that gets shared among all players ; yet , the volunteer s dilemma and its variants using critical mass rest somehow in between : one or more cooperators are needed to generate a benefit that gets earned by each player , but , after the critical mass is reached , new cooperators do not generate any more benefit ; finally , it has been pointed out that a number of realistic situations can be characterized by a marginal return which increases linearly for early contributions and then decelerates , reflecting the natural decrease of marginal returns that occurs when output limits are approached . in order to take into account this variety of possibilities , we consider a class of _ social dilemmas _ parametrized by a function describing the marginal return for cooperation when people cooperate in a group of size .more precisely , our _ general public goods game _ is the n - person game in which n people have to simultaneously decide whether to cooperate ( c ) or defect ( d ) . in presence of a total of cooperators ,the payoff of a cooperator is defined as ( represents the cost of cooperation ) and the payoff of a defector is defined as . in order to have a social dilemma ( i.e. , a tension between individual benefit and the benefit of the group as a whole ) we require that : * full cooperation pays more than full defection , that is , , for all ; * defecting is individually optimal , regardless of the number of cooperators , that is , for all , one has . the aim of this paper is to provide further evidence that the function might be responsible for the confusion in the literature about group size effect on cooperation . in particular , we focus on the situation , inspired from realistic scenarios , in which the natural output limits of the public good imply that increases fast for small s and then stabilizes . indeed , in our previous work , we have shown that the size of the group has a positive effect on cooperation in the standard public goods game and has a negative effect on cooperation in the n - person prisoner s dilemma .a reinterpretation of these results is that , if increases linearly with ( standard public goods game ) , then the size of the group has a positive effect on cooperation ; and , if is constant with ( n - person prisoner s dilemma ) , then the size of the group has a negative effect on cooperation .this reinterpretation suggests that , in the more realistic situations in which the benefit for full cooperation increases fast for early contributions and then decelerates once the output limits of the public good are approached , we may observe a curvilinear effect of the group size , according to which intermediate - size groups cooperate more than smaller groups and more than larger groups .to test this hypothesis , we have conducted a lab experiment using a general public goods game with a piecewise function , which increases linearly up to a certain number of cooperators , after which it remains constant . 
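as an illustration (not part of the original study's materials), the sketch below encodes the piecewise payoff scheme used in the experiment reported in the methods section, namely a benefit of 5 cents per cooperator capped at 10 cooperators, with defectors always earning 10 cents more than cooperators, and checks the two social-dilemma conditions above, interpreting the second one as a unilateral-deviation comparison.

```python
# payoffs in cents; a = number of cooperators (members of group A) in the cohort
def payoff_cooperator(a):
    return 5 * min(a, 10)

def payoff_defector(a):
    return 10 + 5 * min(a, 10)

def is_social_dilemma(n):
    # (1) full cooperation pays more than full defection
    full_coop_better = payoff_cooperator(n) > payoff_defector(0)
    # (2) a unilateral switch from cooperation to defection never lowers one's payoff
    defection_dominant = all(payoff_defector(a - 1) >= payoff_cooperator(a)
                             for a in range(1, n + 1))
    return full_coop_better and defection_dominant

print(is_social_dilemma(3), is_social_dilemma(40), is_social_dilemma(100))
```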
while it is likely that realistic scenarios would be better described by a smoother function , this is a good approximation of all those situationsin which the natural output limits of a public good imply that the increase in the marginal return for cooperation tends to zero as the number of contributors grows very large .the upside of choosing a piecewise function is that , in this way , we could present the instructions of the experiment in a very simple way , thus minimizing random noise due to participants not understanding the decision problem at hand ( see method ) .our results support indeed the hypothesis of a curvilinear effect of the size of the group on cooperative decision - making .taken together with our previous work , our findings thus ( i ) shed light on the confusion regarding the group size effect on cooperation , by pointing out that different values of a single parameter might give rise to qualitatively different group size effects , including positive , negative , and even curvilinear ; and ( ii ) they help fill the gap between lab experiments and field experiments . indeed , while lab experiments use either the standard public goods game or the n - person prisoner s dilemma , _ real _ public goods game are mostly characterized by a marginal return of cooperation that increases fast for early contributions and then approaches a constant function as the number of cooperators grows very large - and our results provide evidence that these three situations give rise to three different group size effects .we have recruited participants through the online labour market amazon mechanical turk ( amt ) . after entering their turkid ,participants were directed to the following instruction screen . _ welcome to this hit . __ this hit will take about 5 minutes and you will earn 20c for participating . _ _ this hit consists of a decision problem followed by a few demographic questions . __ you can earn an additional bonus depending on the decisions that you and the participants in your cohort will make ._ _ we will tell you the exact number of participants in your cohort later ._ _ each one of you will have to decide to join either group a or group b. _ _ your bonus depends on the group you decide to join and on the size of the two groups , a and b , as follows : _ * _ if the size of group a is 0 ( that is , everybody chooses to join group b ) , then everybody gets 10c _ * _ if the size of group a is 1 , then the person in group a gets 5c and each person in group b gets 15c _ * _ if the size of group a is 2 , then each person in group a gets 10c and each person in group b gets 20c _ * _ if the size of group a is 3 , then each person in group a gets 15c and each person in group b gets 25c _ * _ if the size of group a is 4 , then each person in group a gets 20c and each person in group b gets 30c _ * _ and so on , up to 10 : if the size of group a is 10 , then each person in group a gets 50c and each person in group b gets 60c _ * _ however , if the size of group a is larger than 10 , then , independently of the size of the two groups , each person in group a will still get 50c and each person in group b will still get 60c . _ after reading the instructions , participants were randomly assigned to one of 12 conditions , differing only on the size of the cohort ( ) .for instance , the decision screen for the participants in the condition where the size of the cohort is 3 was : _ you are part of a cohort of 3 participants. _ _ which group do you want to join ? 
_ by using appropriate buttons , participants could select either group a or group b. we opted for not asking any comprehension questions .we made this choice for two reasons .first , with the current design , it is impossible to ask general comprehension questions such as `` what is the strategy that benefits the group as a whole '' , since this strategy depends on the strategy played by the other players .second , we did not want to ask particular questions about the payoff structure since this may anchor the participants reasoning on the examples presented .of course , a downside of our choice is that we could not avoid random noise .however , as it will be discussed in the results section , random noise can not be responsible for our findings . instead, our results would have been even cleaner , if we had not had random noise , since the initial increase of cooperation and its subsequent decline would have been more pronounced ( see results section for more details ) . after making their decisions ,participants were asked to fill a standard demographic questionnaire ( in which we asked for their age , gender , and level of education ) , after which they received the `` survey code '' needed to claim their payment . after collecting all the results ,bonuses were computed and paid on top of the participation fee , that was $ 0.20 . in casethe number of participants in a particular condition was not divisible by the size of the cohort ( it is virtually impossible , in amt experiments , to decide the exact number of participants playing a particular condition ) , in order to compute the bonus of the remaining people we formed an additional cohort where these people where grouped with a random choice of people for which the bonus had been already computed .additionally , we anticipate that only 98 subjects participated in the condition with n=100 . this does not generate deception in the computation of the bonuses since the payoff structure of the game does not depend on ( as long as ) . as a consequence of these observations , no deception was used in our experiment . according to the dutch legislation ,this is a non - wmo study , that is ( i ) it does not involve medical research and ( ii ) participants are not asked to follow rules of behavior .see http://www.ccmo.nl / attachments / files / wmo- engelse-vertaling-29-7-2013-afkomstig-van-vws.pdf , section 1 , article 1b , for an english translation of the medical research act .thus ( see http://www.ccmo.nl / en / non - wmo- research ) the only legislations which apply are the agreement on medical treatment act , from the dutch civil code ( book 7 , title 7 , section 5 ) , and the personal data protection act ( a link to which can be found in the previous webpage ) .the current study conforms to both . in particular , anonymity was preserved because amt `` requesters '' ( i.e. , the experimenters ) have access only to the so - called turkid of a participant , an anonymous i d that amt assigns to a subject when he or she registers to amt . additionally , as demographic questions we only asked for age , gender , and level of education .a total of 1.195 _ distinct _ subjects located in the us participated in our experiment . 
_distinct _ subjects means that , in case two or more subjects were characterized by either the same turkid or the same ip address , we kept only the first decision made by the corresponding participant and eliminated the rest .these multiple identities represent usually a minor problem in amt experiments ( only 2% of the participants in the current dataset ) .participants were distributed across conditions as follows : 101 participants played with , 99 with , 102 with , 101 with , 98 with , 103 with , 97 with , 99 with , 97 with , 101 with , 99 with , 98 with .1 summarizes the main result .the rate of cooperation , that is the proportion of people opting for joining group a , first increases as the size of the group increases from to , then it starts decreasing .the figure suggests that the relation between the size of the group and the rate of cooperation is _ not _ quadratic : while the initial increase of cooperation is relatively fast , the subsequent decrease of cooperation seems extremely slow .this is confirmed by linear regression predicting rate of cooperation as a function of and , which shows that neither the coefficient of nor that of are significant ( , resp . ) .for this reason we use a more flexible econometric model than the quadratic model , consisting of two linear regressions , one with a positive slope ( for small s ) and the other one with a negative slope ( for large s ) . as a switching point, we use the , corresponding to the size of the group which reached maximum cooperation .doing so , we find that both the initial increase of cooperation and its subsequent decline are highly significant ( from to : coeff , ; from to : coeff , ) . to : coeff , ; from to : coeff , )._,title="fig : " ] [ fig : intermediate ] we conclude by observing that not only random noise can not explain our results , but , without random noise , the effect would have been even stronger . indeed , first we observe that there is no a priori worry that random noise would interact with any condition and so we can assume that it is randomly distributed across conditions .then we observe that subtracting a binary distribution with average from a binary distribution with average , one would obtain a distribution with average .similarly , subtracting a binary distribution with average from a binary distribution with average one would obtain a distribution with average .thus , if the s are the averages that we have found ( containing random noise ) and the s are the _ true _ averages ( without random noise ) , the previous inequalities allow us to conclude that the initial increase of cooperation and its following decrease would have been stronger in absence of random noise .here we have reported on a lab experiment providing evidence that the size of a group can have a curvilinear effect on cooperation in one - shot social dilemmas , with intermediate - size groups cooperating more than smaller groups and more than larger groups . 
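The two-segment fit reported above can be sketched as follows. In the actual analysis the regressions are presumably run on the individual 0/1 decisions; this illustration, for brevity, fits condition-level cooperation rates, and the numbers in the demo are made up rather than the experimental averages.

```python
# Sketch of the two-segment fit: one line for group sizes up to the switching
# point (the size with maximal cooperation), a second line from that point on.

import numpy as np

def two_segment_fit(sizes, coop_rates):
    sizes = np.asarray(sizes, dtype=float)
    rates = np.asarray(coop_rates, dtype=float)
    n_star = sizes[np.argmax(rates)]                                    # switching point
    slope_up = np.polyfit(sizes[sizes <= n_star], rates[sizes <= n_star], 1)[0]
    slope_down = np.polyfit(sizes[sizes >= n_star], rates[sizes >= n_star], 1)[0]
    return n_star, slope_up, slope_down

if __name__ == "__main__":
    sizes = [2, 3, 5, 10, 20, 40, 100]                  # illustrative values only
    rates = [0.35, 0.45, 0.55, 0.60, 0.57, 0.55, 0.52]
    print(two_segment_fit(sizes, rates))                # positive slope, then negative slope
```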
joining the current results with those of a previously published study of us , we can conclude that group size can have qualitatively different effects on cooperation , ranging from positive , to negative and curvilinear , depending on the particular decision problem at hand .interestingly , our findings suggest that different group size effects might be ultimately due to different values of a single parameter , the number , describing the benefit for full cooperation .if is constant in , then group size has a negative effect on cooperation ; if increases linearly with , then group size has a positive effect on cooperation ; in the _ middle _ , all sorts of things may a priori happen .in particular , in the realistic situation in which is a piecewise function that increases linearly with up to a certain and then remains constant , then group size has a curvilinear effect , according to which intermediate - size groups cooperate more than smaller groups and more than larger groups .see table 1 ..summary of the different group size effects on cooperation depending on how the benefit for full cooperation varies as a function of the group size .[ cols="<,^,^",options="header " , ] to the best of our knowledge , ours is the first study reporting a curvilinear effect of the group size on cooperation in an experiment conducted in the ideal setting of a lab , in which confounding factors are minimized .previous studies reporting a qualitatively similar effect used field experiments , in which it is difficult to isolate the effect of the group size from possibly confounding effects . in our case , the only possibly confounding factor is random noise due to a proportion of people that may have not understood the rules of the decision problem .as we have shown , our results can not be driven by random noise and , in fact , the curvilinear effect would have been even stronger , without random noise .moreover , since our experimental design was inspired by a tentative to mimic all those _ real _ public goods games in which the natural output limits of the public good imply that the increase of the marginal return for cooperation , when the number of cooperators diverges , tends to zero , our results might explain the apparent contradiction that field experiments tend to converge on the fact that the effect of the group size is curvilinear , while lab experiments tend to converge on either of the two linear effects .our contribution is also conceptual , since we have provided evidence that a single parameter might be responsible for different group size effects : the parameter , describing the way the benefit for full cooperation varies as a function of the size of the group . of course, we do not pretend to say that this is the only ultimate explanation of why different group size effects have been reported in experimental studies .in particular , in real - life situations , which are typically repeated and in which communication among players is allowed , other factors , such as within - group enforcement , may favor the emergence of a curvilinear effect of the group size on cooperation , as highlighted in . 
if anything , our results provide evidence that the curvilinear effect on cooperation goes beyond contingent factors and can be found also in the ideal setting of a lab experiment using one - shot anonymous games .we believe that this is a relevant contribution in light of possible applications of our work .indeed , the difference between and the total cost of full cooperation can be interpreted has the incentive that an institution needs to pay to the contributors in order to make them cooperate .since institutions are interested in minimizing their costs and , at the same time , maximizing the number of cooperators , it is crucial to understand what is the `` lowest '' such that the resulting effect of the group size on cooperation is positive .this seems to be an non - trivial question .for instance , does give rise to a positive effect or is it still curvilinear or even negative ?the technical difficulty here is that it is hard to design an experiment to test people s behavior in these situations , since one can not expect that an average person would understand the rules of the game when presented using a logarithmic functions . in terms of economic models , our results are consistent with utilitarian models such as the charness & rabin model and the novel cooperative equilibrium model .both these models indeed predict that , in our experiment , cooperation initially ( i.e. , for ) increases with ( see for the details ) , and then starts decreasing .this behavioral transition follows from the simple observation that free riding when there are more than 10 cooperators costs zero to each of the other players and benefits the free - rider .thus , cooperation in larger groups is not supported by utilitarian models , which then predict a decrease in cooperative behavior whose speed depends on the particular parameters of the model , such as the extent to which people care about the group payoff versus their individual payoff , and people s beliefs about the behavior of the other players .thus our results add to the growing body of literature showing that utilitarian models are qualitatively good descriptors of cooperative behavior in social dilemmas .however , we note that while theoretical models predict that the rate of cooperation should start decreasing at , our results show that the rate of cooperation for is marginally significantly higher than the rate of cooperation for ( rank sum , ) .although ours is a between - subjects experiment , this finding seems to hint at the fact that there is a proportion of subjects who would defect for and cooperate for .this is not easy to explain : why should a subject cooperate with and defect with ?one possibility is that there is a proportion of `` inverse conditional cooperators '' , who cooperate only if a small percentage of people cooperate : if these subjects believe that the rate of cooperation decreases quickly after , they would be more motivated to cooperate for than for .another possibility , of course , is that this discrepancy is just a false positive . in any case , unfortunately our experiment is not powerful enough to detect the reason of this discrepancy between theoretical predictions and experimental results and thus we leave this interesting question for future research .v.c . 
is supported by the dutch research organization ( nwo ) grant no .this material is based upon work supported by the national science foundation under grant no .0932078000 while the first author was in residence at the mathematical science research institute in berkeley , california , during the spring 2015 semester . 1 kaplan h , gurven m. the natural history of human food sharing and cooperation : a review and a new multi - individual approach to the negotiation of norms .in : gintis h , bowles s , boyd r , fehr e , editors .moral sentiments and material interests : the foundations of cooperation in economic life .cambridge , ma : mit press ; 2005 .tomasello m. a natural history of human thinking .cambridge , ma : harvard university press ; 2014 .trivers r. the evolution of reciprocal altruism .q rev biol .1971 ; 46 : 35 - 57 .axelrod r , hamilton wd .the evolution of cooperation . science .1981 ; 211 : 1390 - 1396 .fehr e , fischbacher u. the nature of human altruism . nature .2003 ; 425 : 785 - 791 .five rules for the evolution of cooperation . science . 2006 ; 314 , 1560 - 1563 .perc m , szolnoki a. coevolutionary games - a mini review .biosystems 2010 ; 99 : 109 - 125 .press wh , dyson fj .iterated prisoner s dilemma contains strategies that dominate any evolutionary opponent .proc natl acad sci usa .2012 ; 109 : 10409 - 10413 .perc m , gmez - gardees j , szolnoki a , flora lm , moreno y. evolutionary dynamics of group interactions on structured populations : a review .j roy soc interface .2013 ; 10 : 20120997 .capraro v. a model of human cooperation in social dilemmas .plos one 2013 ; 8 : e72427 .hilbe c , nowak ma , sigmund k. the evolution of extortion in iterated prisoners dilemma games .proc natl acad sci usa .2013 ; 110 : 6913 - 6918 .rand dg , nowak ma .human cooperation .trends cogn sci .2013 ; 17 : 413 - 425 .capraro v , halpern jy .translucent players : explaining cooperative behavior in social dilemmas . 2014 .available : http://ssrn.com/abstract=2509678 .andreoni j. why free ride ?j public econ .1988 ; 37 : 291 - 304 .fischbacher u , gchter s , fehr e. are people conditionally cooperative ?evidence from a public goods experiment .econ lett .2001 ; 71 : 397 - 404 .milinski m , semmann d , kranbeck hj .reputation helps solve the ` tragedy of the commons ' .2002 ; 415 : 424 - 426 .frey bs , meier s. social comparisons and pro - social behavior .testing ` conditional cooperation ' in a field experiment .am econ rev . 2004 ; 94 : 1717 - 1722 .fischbacher u , gchter s. social preferences , beliefs , and the dynamics of free riding in public goods experiments .am econ rev . 2010 ; 100 : 541 - 556 .traulsen a , semman d , sommerfeld rd , krambeck h - j , milinski m. human strategy updating in evolutionary games .proc natl acad sci usa .2010 ; 107 : 2962 - 2966 .apicella cl , marlowe fw , fowler jh , christakis na . social networks and cooperation in hunter - gatherers . nature .2012 ; 481 : 497 - 501 .capraro v , jordan jj , rand dg .heuristics guide the implementation of social preferences in one - shot prisoner s dilemma experiments .2014 ; 4 : 6790 .capraro v , smyth c , mylona k , niblo ga .benevolent characteristics promote cooperative behaviour among humans .plos one . 2014 ; 9 : e102881 .capraro v , marcelletti a. do good actions inspire good actions in others ?2014 ; 4 : 7470 .hauser op , rand dg , peysakhovich a , nowak ma . cooperating with the future .2014 ; 511 : 220 - 223 .gallo e , yan c. 
the effects of reputational and social knowledge on cooperation .proc natl acad sci usa .doi : 10.1073/pnas.1415883112 .nowak ma , may rm .evolutionary games and spatial chaos . nature .1992 ; 359 : 826 - 829 .boyd r , gintis h , bowles s , richerson pj .the evolution of altruistic punishment .proc natl acad sci usa .2003 ; 100 : 3531 - 3535 .santos fc , pacheco jm .scale - free networks provide a unifying framework for the emergence of cooperation .phys rev lett .2005 ; 95 : 098104 .perc m , szolnoki a. social diversity and promotion of cooperation in the spatial prisoner s dilemma game .phys rev e. 2008 ; 77 : 011904 .roca cp , cuesta ja , snchez a. evolutionary game theory : temporal and spatial effects beyond replicator dynamics .phys life rev .2009 ; 6 : 208 - 249 .gmez - gardees j , reinares i , arenas a , flora lm . evolution of cooperation in multiplex networks .2012 ; 2 : 620 .jiang l - l , perc m. spreading of cooperative behaviour across interdependent groups .2013 ; 3 : 2483 .olson m. the logic of collective action : public goods and the theory of groups .cambridge : harvard university press ; 1965 .dawes r , mctavish j , shaklee h. behavior , communication , and the assumptions about other people s behavior in a commons dilemma situation .j pers soc psychol .1977 ; 35 : 1 - 11 .komorita ss , lapwortb wc .cooperative choice among individuals versus groups in an n - person dilemma situation .j pers soc psychol .1982 ; 42 : 487 - 496 .baland jm , platteau jp .the ambiguous impact of inequality on local resource management .world dev .1999 ; 27 : 773 - 788 . ostrom e. understanding institutional diversity .princeton : princeton university press ; 2005 gruji j , eke b , cabrales a , cuesta ja , snchez a. three is a crowd in iterated prisoner s dilemmas : experimental evidence on reciprocal behavior .sci rep . 2012; 2 : 638 .vilone d , giardini f , paolucci m. partner selection supports reputation - based cooperation in a public goods game .preprint/ available : arxiv:1410.6625 .nosenzo d , quercia s , sefton m. cooperation in small groups : the effect of group size .2015 ; 18 : 4 - 14 .mcguire mc .group size , group homogeneity , and the aggregate provision of a pure public good under cournot behavior .public choice .1974 ; 18 : 107 - 126 .isaac rm , walker jm , williams aw .group size and the voluntary provision of public goods : experimental evidence utilizing large groups .j public econ .1994 ; 54 : 1 - 36 .haan m , kooreman p. free riding and the provision of candy bars .j public econ . 2002 ; 83 : 277 - 291 . agrawal a , chhatre a. explaining success on the commons : community forest governance in the indian himalaya .world dev .2006 ; 34 : 149 - 166 .masel j. a bayesian model of quasi - magical thinking can explain observed cooperation in the public good game .j econ behav organ . 2007 ; 64 : 216 - 231 .zhang xq , zhu f. group size and incentives to contribute : a natural experiment at chinese wikipedia .am econ rev .2001 ; 101 : 1601 - 1615 .szolnoki a , perc m. group - size effects on the evolution of cooperation in the spatial public goods game .phys rev e. 2011 ; 84 : 047102 .esteban j , ray d. collective action and the group size paradox .am polit sci rev .2001 ; 95 : 663 - 672 .pecorino p , temimi a. the group size paradox revisited .j public econ theory .2008 ; 10 : 785 - 799 . oliver pe , marwell g. the paradox of group - size in collective action - a theory of the critical mass .ii . 
am sociol rev .1988 ; 53 : 1 - 8 .chamberlin jr .provision of collective goods as a function of group size .am polit sci rev .1974 ; 68 : 707 - 716 .todd s. collective action : theory and applications .ann arbor : university of michigan press ; 1992 gautam ap .group size , heterogeneity and collective action outcomes : evidence from community forestry in nepal .int j sustain dev world ecol . 2007 ;14 : 574 - 583 .rustagi d , engel s , kosfeld m. conditional cooperation and costly monitoring explain success in forest commons management . science .2010 ; 330 : 961 - 965 .poteete ar , ostrom e. heterogeneity , group size and collective action : the role of institutions in forest management .dev change .2004 ; 35 : 435 - 461 .agrawal a , goyal s. group size and collective action - third - party monitoring in common - pool resources .comp polit stud .2001 ; 34 : 63 - 93 .agrawal a. small is beautiful , but is larger better ?forest management institutions in the kumaon himalaya , india . in : gibson c , mckean ma , ostrom e , editors . people and forests : communities , institutions , and governance .cambridge , ma : mit press ; 2000 .yang w , liu w , via a , tuanmum - n , he g , dietz t , et al .nonlinear effects of group size on collective action and resource outcomes .proc natl acad sci usa .2013 ; 110 : 10916 - 10921 .je , macneil ma , basurto x , gelcich s. looking beyond the fisheries crisis : cumulative learning from small - scale fisheries through diagnostic approaches . global environ change .kenna r , berche b. critical mass and the dependency of research quality on group size . scientometrics .2011 ; 86 : 527 - 540 .kenna r , berche b. critical masses for academic research groups and consequences for higher education research policy and management .high educ manag pol .2011 ; 23 : 9 - 29 .kenna r , berche b. managing research quality : critical mass and academic research group size .j manag math . 2012 ; 23 : 195 - 207 .barcelo h , capraro v. group size effect on cooperation in one- shot social dilemmas .2015 ; 5 : 7937 .diekmann a. volunteer s dilemma .j confl resolut .1985 ; 29 : 605 - 610 .szolnoki a , perc m. impact of critical mass on the evolution of cooperation in spatial public goods games .2010 ; 81 : 057101 .marwell g , oliver p. the critical mass in collective action : a micro - social theory .cambridge , england : cambridge university press ; 1993 .heckathorn dd .the dynamics and dilemmas of collective action .am soc rev .1996 ; 6 : 250 - 277 .paolacci g , chandler j ipeirotis pg .running experiments on amazon mechanical turk .judgm decis mak .2010 ; 5 : 411 - 419 .horton jj , rand dg , zeckhauser rj .the online laboratory : conducting experiments in a real labor market .2011 ; 14 : 399 - 425 .mason w , suri s. conducting behavioral research on amazon s mechanical turk .behav res meth .2012 ; 44 : 1 - 23 .charness g , rabin m. understanding social preferences with simple tests .q j econ . 2002 ; 117 : 817 - 869 .capraro v , venanzi m , polukarov m , jennings nr .cooperative equilibria in iterated social dilemmas . in : proceedings of the 6th international symposium on algorithmic game theory .lecture notes in computer science ; 2013 .146 - 158 .
in a world in which many pressing global issues require large scale cooperation , understanding the group size effect on cooperative behavior is a topic of central importance . yet , the nature of this effect remains largely unknown , with lab experiments variously reporting that it is positive , negative , or null , and field experiments suggesting that it is instead curvilinear . here we shed light on this apparent contradiction by considering a novel class of public goods games inspired by the realistic scenario in which the natural output limits of the public good imply that the benefit of cooperation increases fast for early contributions and then decelerates . we report on a large lab experiment providing evidence that , in this case , group size has a curvilinear effect on cooperation , according to which intermediate - size groups cooperate more than smaller groups and more than larger groups . in doing so , our findings help fill the gap between lab experiments and field experiments and suggest concrete ways to promote large scale cooperation among people .
quantum key distribution is the art of distilling a secret key between two distant parties , alice and bob , who have access to an untrusted quantum channel . in this scenario ,one typically assumes that the equipment in alice and bob s labs can be trusted , and moreover , that its behavior is accurately described by a given theoretical model .unfortunately , this often turns out to be a very strong assumption which is not justified in practice .in particular , many loopholes can be exploited by an eavesdropper to get around the usual security proofs : for instance , the state preparation might be imperfect , or the eavesdropper might perform a blinding attack to take control of the detectors at a distance . one way around such problems consists in exhaustively listing all the potential mismatches between the theoretical model and the real implementation and taking care of each one of them individually .however , this approach is dubious as it is impossible to be sure that all loopholes have really been addressed .another , more promising , approach is inspired by the recent framework of device - independent quantum information processing . here, the idea is that if alice and bob are able to experimentally violate a bell inequality , it means that their data exhibit intrinsic randomness as well as secrecy , independently of the internal operation of the devices . in the recent years, this framework has been used to prove the security of device - independent key distribution , to certify randomness expansion , self - testing of quantum computers and states , and guarantee the presence of entanglement . in the present work ,we focus on the cryptographic task of key distribution , which has been the subject of many very recent developments . until recently, security proofs were restricted to scenarios where alice and bob have access to a pair of memoryless devices or independent pairs of devices , thus ensuring that the measurements inside their own labs were causally disconnected or commuting .this is reminiscent of the notion of collective attacks in standard qkd , where some independence assumption is required . ideally , one would like a protocol where only one device is required per party , and for which no assumption is needed for the device .this is indeed the motivation for doing device - independent cryptography in the first place .recent works have been able to get rid of this assumption . in ref . , the authors introduced a protocol based on the chained bell inequality and established its security against arbitrary adversaries .the protocol , however , only produces a single secret bit and does not tolerate any noise . in ref . 
, the authors proved a strong converse of tsirelson s optimality result for the clauser - horne - shimony - holt ( chsh ) game , based on the chsh inequality : the only way using quantum resources to win the game as predicted by tsirelson s bound is to use a strategy close to the optimal one for independent and identically distributed states , that is , applying the optimal measurements on copies of a two - qubit maximally entangled state .this theorem provides a security proof for diqkd based on the chsh inequality .unfortunately , the security proof does not resistant any constant amount of noise .while this work was completed , vazirani and vidick gave a universally composable security proof of diqkd against arbitrary attacks .their protocol , based again on the chsh inequality , is both reasonably efficient ( the key length scales linearly with the number of measurements ) and tolerant to a constant fraction of noise .a drawback , however , is that the maximum amount of noise tolerated is of the order of , significantly lower than the bounds obtained for protocols using pairs of devices . in the present paper ,we present a security proof that ( i ) works for only two devices , that is , does not require commuting measurements or memoryless devices , ( ii ) can be applied to generic diqkd protocols based on arbitrary bell inequalities , ( iii ) has the same efficiency and tolerance to noise than previous proofs using memoryless devices .all these nice properties , however , come at the price of assuming that the adversary only holds classical information .while this may seem a strong requirement , it can be easily enforced in any realistic implementation by delaying the reconciliation process , since the best existing quantum memories have very short coherence times .another advantage of our general framework is that it can also provide security beyond quantum theory , that is , against eavesdroppers that are only limited by the no - signalling principle .the outline of the paper is the following .we first give a brief reminder of the relation between non locality , that is , violation of a bell inequality , and randomness .we then describe the quantum key distribution protocol and present its secret key rate .we prove the security of the protocol under the assumption that the eavesdropper does not have access to a long - term quantum memory .we conclude by briefly comparing our results with the existing security proofs , and discussing some rather natural follow - up questions .in the following , we consider a bipartite scenario where alice and bob input random variables and in their respective devices and obtain classical outputs and , respectively .we denote the sizes of the alphabets of , respectively .moreover , we denote by the probability of getting the specific results when the inputs are , and the vector with components .a bell inequality can be written as : = \sum_{a , b , x , y } \beta ( a , b , x , y)\ , p(a , b|x , y ) \leq i_\mathrm{cl}\ , \ ] ] where is the classical upper - bound . to any such bell inequality, one can associate a bound on the randomness of the output given the input through a function such that )\quad \mbox { for all } a\in\lambda_a\ .\ ] ] such a function can be computed using the techniques given in , as explained in . without loss of generality , this function can be assumed to be monotonically non - increasing and such that is convex . for simplicity , we consider the case where there exist an input - independent bound , i.e. 
a function such that for all .examples of bell inequalities satisfying this property are : the chsh inequality , the chained inequality , and the collins - gisin - linden - massar - popescu ( cglmp ) inequality . our results , however ,can easily be generalised to cover the case of input - dependent bounds .the diqkd protocol that we consider in this paper is very general in the sense that it is compatible with arbitrary bell inequalities , in particular with the various examples of bell inequalities mentioned above .our protocol consists of four steps : measurements , estimation of the bell violation , error correction and privacy amplification .we note the number of times each device is used during the protocol. 1 . * measurements . *alice and bob respectively generate the random variables with distribution for . if then alice measures round with input obtaining outcome . if then alice generates with uniform distribution and measures round with input obtaining outcome does the analog with , input , and outcome .in other words , events where are used to establish a raw key , while events where are used to test the bell inequality and guarantee that a secret key can indeed be extracted from the raw key .* estimation . * alice and bob publish for all , and discard the data corresponding to the rounds with .the data corresponding to the post - selected rounds with is relabeled with the index keeping the time order .the data corresponding to the rounds of the set is also published and used to estimate the bell - inequality violation .more specifically , alice and bob can use the public data to compute the following quantity : the data of the rounds not in constitutes the raw key of alice and bob .* error correction .* alice and bob publish bits in order to correct bobs errors . for sufficiently large , all errors are corrected with high probability .note that some of the published bits are used to estimate how many more bits need to be publish for a successful error correction . for large ,publishing bits is enough .for more details about the functioning of error correction , we refer to .* privacy amplification .* alice generates and publishes a two - universal random function which maps to an -bit string .the number depends on the published information as where is the largest integer not bigger than .alice and bob then compute , obtaining two copies of the secret key .note that if the adversary holds a quantum memory , but can not keep it for an arbitrary long time , the honest parties should implement the protocol in two steps : ( i ) they receive the quantum systems from the source and perform the measurements , ( ii ) some time later they perform the rest of the protocol involving the public communication for the estimation , error correction , and privacy amplification .we show security under the assumption that the adversary can not keep a quantum memory for a time . 
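The measurement and estimation steps of the protocol can be illustrated with a toy simulation for the CHSH case and honest, noiseless devices. The bookkeeping below (fixed inputs on key rounds, uniformly random inputs on test rounds, estimation from the rounds in which both parties flagged a test) is one natural reading of the description above; the measurement angles, the test probability q and the empirical estimator are standard textbook choices stated as assumptions, and the paper's exact expressions for the Bell estimator and the final key length are not reproduced here.

```python
# Toy illustration (CHSH case, honest devices) of the measurement and
# estimation steps. All concrete choices below are assumptions for the sketch.

import math
import random

def p_ideal(a, b, x, y):
    """p(a,b|x,y) for the CHSH-optimal two-qubit strategy (Tsirelson bound)."""
    corr = math.cos([0.0, math.pi / 2][x] - [math.pi / 4, -math.pi / 4][y])
    return 0.25 * (1 + (-1) ** (a + b) * corr)

def sample_outcome(p, x, y, rng):
    r, acc, a, b = rng.random(), 0.0, 1, 1
    for aa in (0, 1):
        for bb in (0, 1):
            acc += p(aa, bb, x, y)
            if r < acc:
                a, b = aa, bb
                break
        if r < acc:
            break
    return a, b

def run_rounds(m, q, p, rng=random.Random(1)):
    """Key rounds use fixed inputs; test rounds (probability ~q^2) use uniform inputs."""
    raw_key, test_data = [], []
    for _ in range(m):
        u, v = rng.random() < q, rng.random() < q
        x = rng.randint(0, 1) if u else 0
        y = rng.randint(0, 1) if v else 0
        a, b = sample_outcome(p, x, y, rng)
        if u and v:
            test_data.append((a, b, x, y))     # published, used for the Bell estimate
        elif not u and not v:
            raw_key.append((a, b))             # kept secret, forms the raw key
    return raw_key, test_data

def chsh_estimate(test_data):
    """Empirical CHSH value; inputs are uniform on test rounds, hence the factor 4."""
    return 4.0 * sum((-1) ** (a + b + x * y) for a, b, x, y in test_data) / len(test_data)

if __name__ == "__main__":
    key, test = run_rounds(50000, q=0.25, p=p_ideal)
    print(len(key), len(test), chsh_estimate(test))   # estimate close to 2*sqrt(2) ~ 2.83
```

In a real run the devices are untrusted, so nothing in the analysis may rely on p_ideal: the value estimated from the published test rounds is the only handle on the devices, and the rest of the security argument hinges on the memory assumption stated just above.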
according to current and near - future technology, this assumption can be enforced by taking of the order of a few minutes .to prove security , we will not make any assumption on the behaviour of the devices of alice and bob , except that they do not broadcast information about the inputs and outputs towards the adversary ( a condition without which there is no hope of ever establishing any secret ) .modulo this requirement , we can even assume for simplicity that the devices have been built by the adversary .the eavesdropper could in particular hold quantum systems that are entangled with the systems in the users devices .however , our proof of security only holds under the condition that the eavesdropper can not store this quantum information past the measurement step of the protocol .after this step , she should thus perform a measurement on his quantum system , which would give him some classical information about the behaviour of alice s and bob s devices .but since until this point no public communication has been exchanged between alice and bob , we can as well assume that the eavesdropper has performed his measurement before the users received their devices from the source .the fact that our proof of security holds independently of the behaviour of the devices , then implies that it holds independently of the prior classical information that eve holds on the devices , and we can thus forget in the following . at the end of the protocol, alice holds the secret key , and eve holds the information published in the estimation step ] holds against an adversary limited by quantum theory and ) ] and that the raw key is .let and note that and .* lemma 1 . *the no - signaling constraints imposed by the causal structure of the protocol imply \right)\ , \ ] ] for all , where : = \frac{1}{m } \sum_{i=1}^m i\ ! \left [ p(a_i , b_i|x_i , y_i , t^{i-1},z^{i-1 } ) \right]\ .\ ] ] note that above , in , the symbols are upper - case while are lower - case , meaning that is the vector with components for all values of but fixed ._ this proof is based on an argument introduced in .a useful observation is that bound ( [ tau i ] ) implies )\quad \mbox { for all } a , b , x , y\ .\ ] ] the following chain of equalities and inequalities follows from : bayes rule , no - signaling to the future , bounds ( [ tau i ] ) and ( [ extra ] ) , and the concavity of the function . \right ) \\ & \leq & \tau^{m}\ ! \left ( \bar i [ t^m , z^{m } ] \right)\end{aligned}\]] * lemma 2 . * the numbers , , are functions of the random variable , and satisfy where .( here a comment is in order .actually , is not only a function of but also depends on the global probability distribution .but we think of this distribution as given , fixed and unknown .this dependence prevents the straight generalization of the results in this paper to a quantum adversary . )_ the function satisfies \ = \ \frac{i_{\rm est } [ t^m , z^m]\ , |{\cal e}|}{\pr\{u=1|u = v\}}\ , \ ] ] and = i[p(a_i , b_i|x_i , y_i , t^{i-1 } , z^{i-1})]\ , \ ] ] for all .consider the sequence of functions of defined by \ , \ ] ] for .the fact that = \alpha_{l-1 } ( t^{l-1 } , z^{l-1})\ ] ] implies that the sequence of random variables is a martingale with respect to the sequence . also , using the fact that and \geq q^2 \bar i , . 
note that also depends on the global distribution , which prevents the generalization of this results to the case of quantum adversary .fortunately , according to lemma 2 , the probability of is large note the abuse of notation .define the set and note that . using this and obtain .recall and note that and . define the set and note that where we have used .the good event mentioned in the statement of this lemma is , and has probability , as in ( [ g ] ) .we assume , since it is a premise of the lemma . if then .hence , the non - trivial case happens for , which we assume in what follows .using bayes rule , the definition of and , lemma 1 , and ( [ g1 ] ) , we obtain \right ) \\ & \leq & 2\ , ( \lambda_a \lambda_b)^{2|{\cal e}|}\ , \tau^m \!\ ! \left ( \frac{|{\cal e}|\ , i_{\rm est}(g , z^m)}{m\pr\{u=1|u = v\ } } -n^{-1/8 } \right)\ , \end{aligned}\ ] ] which shows the lemma . * theorem . *the distance between the secret key generated by the protocol and an ideal key is _ proof . _ using definitions ( [ n_k ] ) and ( [ pguess ] ) , lemma 3 , and , we obtain : the symbol denotes the knowledge of with respect to ( see appendix ) when the statistics is conditioned on the events and .next , we use the identity with the event introduced in lemma 3 . noticing that is a function of , using ( [ hg ] ) , the triangular inequality , and lemma 4, we see that which concludes the proof .in this work , we provide a novel security proof for diqkd .contrary to most of the existing proofs , it applies to the situation in which alice and bob generate the raw key using two devices . in particular, it does not need to assume that the devices are memoryless or , equivalently , that each raw - key symbol is generated using a different device . while there exist other recent proofs that also work without this assumption , they tolerate zero or rather small amounts of noise .another important feature of our proof is that it can also be applied to non - signalling supra - quantum eavesdroppers .all these advantages come at the price of making an extra assumption on eve : she does not have access to a long - term quantum memory and , therefore , effectively she can not store quantum information .while this may at first be considered a strong assumption ( and is actually not needed in new security proofs for diqkd ) , it is a very realistic assumption taking into account current technology .the natural open question is to understand how the assumption on the memory can be removed within the framework presented here , or how the other proofs could be improved to tolerate realistic noise rates . in the case of no - signalling eavesdroppers, there is some evidence suggesting that the fact that eve can store information and delay her measurement prevents any form of privacy amplification between the honest parties .however , the recent results of imply that privacy amplification is indeed possible against quantum eavesdroppers .a good understanding of privacy amplification in the device - independent quantum scenario is probably the missing ingredient to get robust and practical fully device - independent security proofs .we acknowledge useful discussion with serge massar . 
this work is supported by the erc sg percent , by the eu projects q - essence and qcs , by the chist - era diqip project , by the spanish fis2010 - 14830 projects , by the snf through the national centre of competence in research `` quantum science and technology '' , by catalunyacaixa , by the interuniversity attraction poles photonics programme ( belgian science policy ) , by the brussels - capital region through a bb2b grant , and from the frs - fnrs under project diqip .a random function is two - universal if for all with .the following is a simple extension of the main result in .* lemma 4 .* let be two ( possibly correlated ) random variables where takes values in the set , and let be a two - universal random function .the random variable satisfies where the two main approaches for quantum memories are based on ensemble of atoms or on crystals . to our knowledge ,the best existing quantum memories with ensemble of atoms have coherence times of the order of 100 milliseconds , a. g. radnaev _et al . _ , nature phys . * 6 * , 894 ( 2010 ) .moving to crystals , coherence times of the order of a few seconds have been reported for classical light , see j. j. longdell , e. fraval , m. j. sellars and n. b. manson , phys .95 , 063601 ( 2005 ) . while in principlethe method should be scalable to light at the quantum level , this has not been demonstrated yet .of course , improvements on these coherence times may be expected in the foreseeable future , however there is no evidence that these improvements will be significant .s. pironio , a. acn , s. massar , a. boyer de la giroday , d. n. matsukevich , p. maunz , s. olmschenk , d. hayes , l. luo , t. a. manning and c. monroe , _ random numbers certified by bell s theorem _ nature * 464 * 1021 ( 2010 ) .f. magniez , d. mayers , m. mosca and h. ollivier , _ self - testing of quantum circuits _ , proceedings of 33rd international colloquium on automata , languages and programming , volume 4051 , series lecture notes in computer science , 72 ( 2006 ) .m. navascus , s. pironio and a. acn , _ bounding the set of quantum correlations _ _ physlett . _ * 98 * 010401 ( 2007 ) ; _ a convergent hierarchy of semidefinite programs characterizing the set of quantum correlations _ , new j. phys . * 11 * , 045021 ( 2009 ) .r. arnon - friedman , e. hnggi and a. ta - shma , _ towards the impossibility of non - signalling privacy amplification from time - like ordering constraints _, arxiv:1205.3736 ; r. arnon - friedman and a. ta - shma , _ on the limits of privacy amplification against non - signalling memory attacks _ , arxiv:1211.1125 .
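As a concrete illustration of the two-universal functions defined in the appendix, the sketch below uses random GF(2)-linear maps, a standard family for which two distinct inputs collide with probability exactly 2^(-r); it is given as an example of the definition, not as the family actually used in the protocol's privacy-amplification step.

```python
# Example of a two-universal family: random GF(2)-linear maps f_M(z) = M z
# from n raw-key bits to r output bits. The demo checks the collision bound.

import random

def random_binary_matrix(rows, cols, rng):
    return [[rng.randint(0, 1) for _ in range(cols)] for _ in range(rows)]

def hash_bits(matrix, bits):
    """Matrix-vector product over GF(2): the hashed (shorter) key."""
    return tuple(sum(m * z for m, z in zip(row, bits)) % 2 for row in matrix)

if __name__ == "__main__":
    rng = random.Random(7)
    n_raw, n_out, trials = 16, 4, 20000
    z1 = [rng.randint(0, 1) for _ in range(n_raw)]
    z2 = [rng.randint(0, 1) for _ in range(n_raw)]
    if z1 == z2:
        z2[0] ^= 1
    collisions = sum(
        hash_bits(M, z1) == hash_bits(M, z2)
        for M in (random_binary_matrix(n_out, n_raw, rng) for _ in range(trials))
    )
    print(collisions / trials, 2 ** -n_out)   # the two numbers should be close
```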
device - independent quantum key distribution ( diqkd ) is a formalism that supersedes traditional quantum key distribution , as its security does not rely on any detailed modelling of the internal working of the devices . this strong form of security is possible only using devices producing correlations that violate a bell inequality . full security proofs of diqkd have been recently reported , but they tolerate zero or small amounts of noise and are restricted to protocols based on specific bell inequalities . here , we provide a security proof of diqkd that is both more efficient and noise resistant , and also more general as it applies to protocols based on arbitrary bell inequalities and can be adapted to cover supra - quantum eavesdroppers limited by the no - signalling principle only . it requires , however , the extra assumption that the adversary does not have a long - term quantum memory , a condition that is not a limitation at present since the best existing quantum memories have very short coherence times .
detailed investigation of geophysical flows involves experimental campaigns in which buoys , in the ocean , or balloons , in the atmosphere , are released in order to collect lagrangian data against which theories and models can be tested .questions concerning oil spill fate , fish larvae distribution or search and rescue operations are only a few examples that make the study of advection and diffusion properties not only a challenging scientific task , but also a matter of general interest . in the past years , an amount of lagrangian data about the south atlantic ocean ( sao ) was collected thanks to the first global atmospheric research program ( garp ) global experiment ( fgge ) drifters , released following the major shipping lines , the southern ocean studies ( sos ) drifters , deployed in the brazil - malvinas confluence ( bmc ) and the programa nacional de bias ( pnboia ) drifters [ brazilian contribution to the global oceans observing system ( goos ) ] , released in the southeastern brazilian bight ( sbb ) .these data allowed estimates of eddy kinetic energy ( eke ) , integral time scales and diffusivities ( piola et al .1987 ; figueroa and olson 1989 ; schfer and krauss 1995 ) . despite the relatively uniform coverage , the boundary currents resulted poorly populated by buoys ; furthermore , all previous studies about drifters in the south atlantic have concerned one - particle statistics only . in this regard , in the framework of monitoring by ocean drifters ( mondo ) project , a recent lagrangian experiment , consisting in the release of a set of 39 world ocean circulation experiment ( woce ) surface velocity program ( svp ) drifters , was planned in relationship with an oil drilling operation in proximity of the coast of brazil , around ( , ) .part of the drifters were deployed in 5-element clusters , some of them with initial drifter separations smaller than 1 km .this set of satellite - tracked lagrangian trajectories offers , now , the opportunity to revisit advective and diffusive properties characterizing the current systems explored by the drifters . from the analysis of trajectory pair dispersionwe can extract , in principle , information about the dominant physical mechanism acting at a certain scale of motion ( e.g. chaotic advection , turbulence , diffusion ) . a thorough description of the oceanography of the south atlantic ocean , particularly of the main circulation patterns and of the mass transport properties , can be found in peterson and stramma ( 1991 ) ; campos et al .( 1995 ) ; stramma and england ( 1999 ) .the major feature characterizing the central region of the sao is the large anticyclonic ( anticlockwise ) circulation known as subtropical gyre ( sg ) .other relevant surface current systems are : south equatorial current ( sec ) , brazil current ( bc ) , malvinas current ( mc ) , south atlantic current ( sac ) and benguela current ( bgc ) , as shown in fig .[ fig : sao ] . in the quasigeostrophic ( qg ) approximation ,valid for relative vorticities much smaller than the ambient vorticity because of the earth s rotation , some theoretical arguments would predict that , from the scale of the forcing at which eddies are efficiently generated by instability , e.g. the rossby radius of deformation , both a down - scale enstrophy cascade and an up - scale energy cascade take place , corresponding to energy spectra and , respectively ( kraichnan 1967 ; charney 1971 ) . 
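The link between the spectral slopes quoted above and the growth of pair separation can be made explicit with a small dimensional-analysis helper. The general formula below is the standard kinematic relation rather than something stated in the paper: for a local spectrum E(k) ~ k^(-beta) with 1 < beta < 3, the mean square separation grows as t^(4/(3-beta)), which reduces to Richardson's t^3 growth for beta = 5/3, while for beta >= 3 the velocity field is smooth at small scales and the separation grows exponentially.

```python
# Standard kinematic relation between a spectral slope and the dispersion
# regime (not taken from the paper): velocity differences at separation delta
# scale as delta^((beta-1)/2) for 1 < beta < 3, giving <D^2> ~ t^(4/(3-beta));
# for beta >= 3 the dispersion is nonlocal and exponential in time.

def dispersion_regime(beta):
    if beta >= 3:
        return "nonlocal: exponential growth of the pair separation"
    if beta <= 1:
        return "spectrum too shallow for this simple argument"
    exponent = 4.0 / (3.0 - beta)
    return f"local: <D^2(t)> ~ t^{exponent:.2f}"

if __name__ == "__main__":
    for beta in (5.0 / 3.0, 2.0, 3.0):
        print(f"E(k) ~ k^-{beta:.2f}: {dispersion_regime(beta)}")
```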
from a relative dispersion perspective , in the forward cascade range , the mean square relative displacement between two trajectories grows exponentially fast in time ( lin 1972 ) , whereas , in the inverse cascade range , it follows the power law ( obhukov 1941 ; batchelor 1950 ) .possible deviations from this ideal picture may reasonably come from the non homogeneous and non stationary characteristics of the velocity field : for example , in the case of boundary currents , as well as from ageostrophic effects . at this regard ,one presently debated issue is the role of submesoscale vortices ( mcwilliams 1985 ) [ velocity field features of size km ] in determining the shape of the energy spectrum at intermediate scales between the rossby deformation radius [ in the ocean typically km ] and the dissipative scales ( much smaller than 1 km ) .a thorough discussion about submesoscale processes and dynamics can be found in thomas et al .recent high - resolution 3d simulations of upper layer turbulence ( capet et al .2008a , b ; klein et al .2008 ) have shown that the direct cascade energy spectrum flattens from to for order rossby number , where is the typical velocity difference on a characteristic length of the flow and is the coriolis parameter .our main purpose is to exploit the mondo drifter trajectories , shown in fig .[ fig : drifters ] , to examine relative dispersion by means of several indicators , and discuss the consistency of our data analysis in comparison with classical turbulence theory predictions , model simulations and previous drifter studies available for different regions of the ocean .this paper is organized as follows : in section [ sec : diffusion ] we recall the definitions of the major indicators of the lagrangian dispersion process ; in section [ sec : data ] we give a description of the mondo drifter lagrangian data ; in section [ sec : results ] the outcome of the data analysis is presented ; in section [ sec : model ] , we report the analysis of the ocean model lagrangian simulations in comparison with the observative data ; and , in section [ sec : conclusions ] , we outline and discuss the main results we have obtained in the present work .let be the position vector of a lagrangian particle , in a 3d space , evolving according to the equation , where is a 3d eulerian velocity field , and let us indicate with the lagrangian velocity along the trajectory .let us imagine , then , a large ensemble of lagrangian particles , passively advected by the given velocity field , and refer , for every statistically averaged quantity , to the mean over the ensemble .the autocorrelation function of a lagrangian velocity component can be defined , for , as : / \left [ \langle v(t_0)^2 \rangle - \langle v(t_0 ) \rangle^2 \right ] \label{eq : autocorr}\ ] ] in case of stationary statistics , depends only on the time lag .the integral lagrangian time is the time scale after which the autocorrelation has nearly relaxed to zero .typically can be estimated as the time of the first zero crossing of , or , alternatively , as the time after which remains smaller than a given threshold .absolute dispersion can be defined as the variance of the particle displacement relatively to the mean position at time : ^ 2 \rangle - \langle \left [ \mathbf{r}(t)-\mathbf{r}(0 ) \right ] \rangle^2 \label{eq : absdisp}\ ] ] in the limit of very small times , absolute dispersion is expected to behave as follows : where is the lagrangian autocorrelation time and is the total lagrangian velocity variance 
.the ballistic regime ( [ eq : ballistic ] ) lasts as long as the trajectories save some memory of their initial conditions . in the opposite limit of very large times ,when the autocorrelations have relaxed to zero and the memory of the initial conditions is lost , we have : where is the absolute diffusion coefficient ( taylor 1921 ) . although single particle statistics give information about the advective transport , mostly because of the largest and most energetic scales of motion , two ( or more ) particle statistics give information about the physical mechanisms acting at any scale of motion , compatibly with the available resolution .let us indicate with the distance between two trajectories at time .relative dispersion is defined as the second order moment of : where the average is over all the available trajectory pairs . in the small scale range , the velocity field between two sufficiently close trajectoriesis reasonably assumed to vary smoothly .this means that , in nonlinear flows , the particle pair separation typically evolves following an exponential law : where , from the theory of dynamical systems , is the generalized lyapunov exponent of order 2 ( bohr et al .1998 ) . when fluctuations of the finite - time exponential growth rate around its mean value are weak , one has , where is the ( lagrangian ) maximum lyapunov exponent ( mle ; boffetta et al .notice that for ergodic trajectory evolutions the lyapunov exponents do not depend on the initial conditions .if ( except for a set of zero probability measure ) we speak of lagrangian chaos . the chaotic regime holds as long as the trajectory separation remains sufficiently smaller than the characteristic scales of motion . in the opposite limit of large particle separations , when two trajectories are sufficiently distant from each other to be considered uncorrelated , the mean square relative displacement behaves as : where we indicate with the asymptotic eddy - diffusion coefficient ( richardson 1926 ) . at any time , the diffusivity can be defined as : with for . if the velocity field is characterized by several scales of motion , relative dispersion in the intermediate range , i.e. between the smallest and the largest characteristic length , depends on the properties of local velocity differences , i.e. 
the mean gradients on finite scale .for instance , in 3d fully developed turbulence ( frisch 1995 ) , relative dispersion follows the so - called richardson s law : with , as long as the trajectory separation lies in the inertial range of the energy cascade ( richardson 1926 ) from large to small scales .it is worth to remark that richardson s law also holds in the inverse cascade range ( from small to large scales ) of 2d turbulence because , in that case as well , the energy spectrum follows kolmogorov s scaling , exactly as in the inertial range of 3d turbulence ( kraichnan 1967 ) .any power law of the type ( [ eq : richardson ] ) for with is known as super - diffusion .the finite - scale lyapunov exponent ( fsle ) has been formerly introduced as the generalization of the mle for non - infinitesimal perturbations ( aurell et al .if is the size of the perturbation on a trajectory in the phase space of a system , and is the phase space averaged time that takes to be amplified by a factor , then the fsle is defined as the quantity is the exit time of the perturbation size from the scale , and it is defined as the first arrival time to the scale , with .the computation of the expectation value of the growth rate at a fixed scale , which justifies the definition ( [ eq : fsle ] ) , is described in boffetta et al .as far as lagrangian dynamics are concerned , the evolution equations of the lagrangian trajectories form a dynamical system whose phase space is the physical space spanned by the trajectories . in this context , the analysis of relative dispersion can be treated as a problem of finite - size perturbation evolution , with scale - dependent growth rate measured by the fsle .the first who had the idea to measure the relative dispersion , or , equivalently , the diffusivity , as a function of the trajectory separation was richardson ( 1926 ) .the fsle is fundamentally based on the same principle .recently , the use of fixed - time and fixed - scale averaged indicators of relative dispersion in various contexts , from dynamical systems to observative data in ocean and atmosphere , have been reviewed and discussed in several works ( artale et al .1997 ; boffetta et al . 2000; lacorata et al . 2001 , 2004 ; lacasce and ohlmann 2003 , lacasce 2008 ) . by a dimensional argument ,if relative dispersion follows a scaling law , then the fsle is expected to scale as .for example , in the case of standard diffusion we expect ; in richardson s super - diffusion , ; in ballistic or shear dispersion we have .chaotic advection means exponential separation between trajectories . in terms of fslethis amounts to a scale - independent : that is , . in the limit of infinitesimal separation ,the fsle is nothing but the mle , i.e. ( aurell et al . 1996 ) . 
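A minimal sketch of the exit-time average that defines the FSLE is given below; the array layout, the amplification factor r = sqrt(2) and the synthetic exponentially separating pairs used in the demo are illustrative assumptions, not the settings used for the MONDO drifters.

```python
# FSLE estimate from pair-separation time series, following the exit-time
# definition: lambda(delta) = ln(r) / <tau(delta)>, where tau is the time
# taken by the separation to grow from delta to r*delta.

import numpy as np

def fsle(separations, dt, delta0, r=np.sqrt(2.0), n_scales=12):
    """separations: array (n_pairs, n_times) of pair distances sampled every dt."""
    scales = delta0 * r ** np.arange(n_scales)
    lam = np.full(n_scales, np.nan)
    for k, delta in enumerate(scales):
        exit_times = []
        for d in separations:
            if not (d >= delta).any():
                continue
            start = np.argmax(d >= delta)              # first arrival at scale delta
            grown = np.nonzero(d[start:] >= r * delta)[0]
            if grown.size:
                exit_times.append(grown[0] * dt)       # first arrival at r*delta
        if exit_times:
            lam[k] = np.log(r) / np.mean(exit_times)
    return scales, lam

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    dt, steps, n_pairs, rate = 0.1, 3000, 50, 0.5      # e.g. time in days, rate in 1/days
    wiggle = rng.normal(0.0, 0.05, size=(n_pairs, steps)).cumsum(axis=1)
    d = 1e-3 * np.exp(rate * dt * np.arange(steps) + wiggle)   # exponentially separating pairs
    scales, lam = fsle(d, dt, delta0=2e-3)
    print(np.c_[scales, lam])
```

For separations growing exponentially at a fixed rate, the estimated lambda(delta) comes out roughly flat and close to that rate, which is the scale-independent plateau associated with chaotic advection.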
under these conditions ,relative dispersion is said to be a non - local process , since it is determined by velocity field structures with a characteristic scale much larger than the particle separation .on the contrary , when the growth of the distance between two particles is mainly driven by velocity field structures of the same scale as the particle separation , relative dispersion is said to be a local process .the super - diffusive processes occurring in 2d and 3d turbulence are phenomena of this type .an indicator related to the fsle is the mean square velocity difference between two trajectories as function of their separation .indicating with , , , the positions and the lagrangian velocities , respectively , of two particles and at a given time , we define the finite - scale relative velocity ( fsrv ) at scale : ^ 2 \rangle = \langle \left [ { \mathbf v}^{(1 ) } - { \mathbf v}^{(2 ) } \right]^2 \rangle \label{eq : fsrv}\ ] ] where the average is over all trajectory pairs fulfilling the condition at some time . from the fsrva scale - dependent diffusivity can be formed as ^ 2 \rangle^{1/2} ] can be defined , by dimensional arguments , replacing with , fig .[ fig : spectrum ] . the same scenario formerly indicated by the fsrv is reproduced , in space , by as well . ) , direct cascade ( ) and the submesoscale ( ) spectra are plotted as reference .the rossby radius 30 km corresponds to a wavenumber ., scaledwidth=70.0% ] we compare now the diffusivity ( see fig . [fig : diff ] ) computed in both ways : as a fixed - time average ( [ eq : diffusivity ] ) and as a fixed - scale average from the fsrv .vs and fixed - scale average vs .the and correspond to and spectra , respectively .the scaling corresponds to a spectrum.,scaledwidth=70.0% ] both quantities are plotted as functions of the separation between two drifters : ^ 2 \rangle^{1/2} ] .this only partially agrees with the results found for real drifters , namely only in the scale range km km . at subgrid scales , velocity field featuresare not resolved and relative dispersion is necessarily a nonlocal exponential process driven by structures of size of the order of ( at least ) the rossby radius .correspondingly , the fsle computed on model trajectories does not display the higher plateau level at scales smaller than km .km and amplification factor for mondo drifters and virtual drifters from numerical experiments e1 and e2 . for virtual drifters errorsare of the order of point size .the large - scale saturation ( e1 ) depends on the value of the trajectory integration time.,scaledwidth=70.0% ] km for mondo drifters and virtual drifters from numerical experiments e1 and e2 . for virtual drifters errorsare of the order of point size.,scaledwidth=70.0% ] finally , the behaviour of the relative diffusivity as a function of the separation is shown in fig .[ fig : diffu_vd ] for both mondo and virtual drifters . here is computed from the mean square velocity difference , as described in sections [ sec : diffusion]c and [ sec : results]b .model and experimental data again are in agreement at scales larger than the numerical space resolution ( km ) .indeed , in both numerical experiments e1 and e2 we find scaling behaviours compatible with a qg double cascade : ( corresponding to richardson s super - diffusion in an inverse cascade regime ) for , and ( corresponding to a direct cascade smooth flow ) for . 
at variance with the outcome of the real experiment with mondo drifters , here we are unable to detect any significant deviations from the qg turbulence scenario at small scales . as function of the separation for mondo drifters and virtual drifters from numerical experimentse1 and e2 ; here km . , scaledwidth=70.0% ]the failure of the model to reproduce flow features at very small scale is of course due to its finite spatial resolution , which is of the order of km . below this length scale , the velocity field computed in the model is smooth , while the one measured in the real ocean clearly displays active scales also in the range km .nevertheless , the overall conclusion we can draw from the above comparisons is that the characteristics of the relative dispersion process found with mondo drifters agree with those obtained with an ocean general circulation model ( ogcm ) for scales km .lagrangian dispersion properties of drifters launched in the southwestern corner of the south atlantic subtropical gyre have been analyzed through the computation of time - dependent and scale - dependent indicators .the data come from a set of 37 woce svp drifters deployed in the brazil current around ( , ) during monitoring by ocean drifters ( mondo ) project , an oceanographic campaign planned by prooceano and supported by eni oil do brasil in relationship with an oil drilling operation .the experimental strategy of deploying part of the drifters in 5-element clusters , with initial separation between drifters smaller than 1 km , allows to study relative dispersion on a wide range of scales .single - particle analysis has been performed by computing classic quantities like lagrangian autocorrelation functions and absolute dispersion , defined as the variance around the drifter mean position , as a function of the time lag from the release .velocity variances ( cm s ) and integral time scales ( days ) are compatible with the estimates obtained in the analysis of the fgge drifters ( figueroa and olson 1989 ; schfer and krauss 1995 ) .anisotropy of the flow is reflected in the different behavior of the zonal and meridional components of the absolute dispersion .being the mondo drifters advected mostly by the boundary currents surrounding the subtropical gyre , the brazil current first and the south atlantic current later , the meridional component of the absolute dispersion is dominant as long as the mean drifter direction is nearly southward ( bc ) , while the zonal component dominates ( see the appearance of the late ballistic regime ) as the mean drifter direction is nearly eastward ( sac ) .early time advection is modulated , also , by the response of the currents to a wind forcing of period days , a characteristic meteorological feature of the bc dynamics .two - particle analysis has been performed by means of both fixed - time and fixed - scale averaged quantities .classic indicators like the mean square relative displacement and the relative diffusivities between two drifters as functions of the time lag from the release give loose information about early phase exponential separation , characterized by a mean rate day , and long time dispersion approximated , to some extent , by richardson super - diffusion before the cut - off due to the finite lifetime of the trajectories .evidence of a small scale exponential regime for relative dispersion is common to other drifter studies for different ocean regions ( lacasce and ohlmann 2003 ; ollitrault et al .2005 , koszalka et al .2009 , lumpkin and elipot 
2010 ) .scale - dependent indicators return back a cleaner picture , compatibly with the limited statistics allowed by the experimental data and the non homogeneous and non stationary characteristics of the flow .the fsle displays a mesoscale [ km ] regime compatible with richardson super - diffusion , lagrangian counterpart of the 2d inverse cascade scenario characterized by a energy spectrum . at scales smaller than 100 kmthe fsle has a step - like shape , with a first plateau at level day in the submesoscale range km , and a second plateau at level day for scales comparable with the rossby radius of deformation ( km ) .constant fsle in a range of scales corresponds to exponential separation .the plateau could be related to the 2d direct cascade characterized by a energy spectrum , while the origin of the km plateau is likely related to the existence of submesoscale features of the velocity field , the role of which has been recently assessed by means of high - resolution 3d simulations of upper ocean turbulence at rossby numbers ( capet et al . 2008a , b ; klein et al .the fsle does not display a clean continuous cascade scaling , corresponding to a energy spectrum , from sub to mesoscales ; however , it highlights the existence of scales of motion hardly reconcilable with the qg turbulence scenario , as analogously assessed also by lumpkin and elipot ( 2010 ) for drifter dispersion in the north atlantic .the fsrv measures the mean square velocity difference at scale .the scaling of the fsrv is related to the turbulent characteristics of relative dispersion .the fsrv behavior is , under a certain point of view , cleaner than that of the fsle , but it is affected by the presence of two `` valleys '' , roughly at km and at km , likely associated to trapping events ( the same features are present also in the fsle behavior ) .coherent structures on scales and km may be responsible of the `` fall '' in the relative dispersion rate .these scales are of the order of the rossby radius and of the mesoscale rings that detach from the current systems , respectively .we must also consider the role of the brazil - malvinas confluence which tends to inhibit the growth of the dispersion as the drifters , initially flowing southwestward along the brazil current , encounter the northeastward flowing malvinas current before being , eventually , transported eastward along the south atlantic current .the fsrv displays a mesoscale scaling ( except for the `` valley '' ) , compatible with a inverse cascade ; a scaling in the two subranges km and km , which correspond to the step - like shape of the fsle ; a scaling which , to some extent , is compatible with a regime connecting scales of the order of the rossby radius to the submesoscale ( below which the velocity field becomes approximately smooth ) .the equivalent lagrangian spectrum , formed by dividing the fsrv ( essentially the relative kinetic energy at scale ) by the wavenumber , returns back the same scenario in space .last , we have compared the scale - dependent relative diffusivity constructed from the fsrv , for which is the independent variable , with the classic relative diffusivity seen as a function of the mean separation between two drifters , for which is the independent variable .what emerges from the analysis of the diffusivities is that , in the mesoscale range , loosely from the rossby radius up to scales km , , corresponding to the inverse cascade ; in the submesoscale range km , and in a limited subrange from about the rossby radius 
down to km , , corresponding to exponential separation ; in the subrange km , the scaling is compatible with a regime .the spectrum is a characteristic of upper ocean turbulence when the rossby number is order , as recently assessed with high - resolution 3d model simulations ( capet et al .2008a , b ; klein et al . 2008 ) , which connects the mesoscale to the submesoscale km ( mcwilliams 1985 ) .below the submesoscale the velocity field is reasonably assumed to vary smoothly. a rough estimate of the rossby number associated to the mondo drifter dynamics gives a value ( at least ) , taking m / s , km and 1/s , which is , nonetheless , considerably larger than the typical rossby number in open ocean .although the drifter data analysis does not show a clear evidence of a relative dispersion regime corresponding to the spectrum , the presence of velocity field features of size comparable to submesoscale vortices is reflected , to some extent , by the behavior of the small scale relative dispersion process .numerical simulations of the lagrangian dynamics have been performed with an ogcm of the south atlantic ( huntley et al .the results concerning the relative dispersion essentially agree with the data analysis of the mondo drifters , within the limits of the available numerical resolution .in particular , two - particle statistical indicators such as the fsle , the fsrv and the scale - dependent relative diffusivity , computed on the trajectories of virtual drifters , display the same behaviour found for mondo drifters for scales larger than approximatley km , that is larger than the numerical grid spacing .below this length scale , evidently , the model flow field is smooth , hence relevant departures from qg turbulence and the role of submesoscale structures can not be assessed .further investigations on the modeling of submesoscale processes would provide extremely useful in order to make a clearer picture of the small scale dynamics of the surface ocean circulation in the region .+ fds thanks eni oil do brasil s.a . in the person of ms .tatiana mafra for being the financial promoter of mondo project and for making the experimental data available to the scientific community .sb acknowledges financial support from cnrs .we thank d. iudicone and e. zambianchi for useful discussions and suggestions .the authors are grateful , also , to three anonymous reviewers who have helped to improve the substance and the form of this work with their critical remarks .v. artale , g. boffetta , a. celani , m. cencini , a. vulpiani , physics of fluids * 9 * , 3162 ( 1997 ) .a. t. assireu , _ estudo das caractersticas cinemticas e dinmicas das guas de superfcie do atlntico sul ocidental a partir de derivadores rastreados por satlite _ ( phd thesis , instituto nacional de pesquisas espaciais , 2003 ). g. k. batchelor , quarterly journal of the royal meteorological society * 76 * , 133 ( 1950 ) .r. bleck , s. benjamin , monthly weather reviews * 121 * , 1770 ( 1993 ) .g. boffetta , a. celani , m. cencini , g. lacorata , a. vulpiani , chaos * 10 * ( 1 ) , 50 ( 2000 ) . t. bohr , m. h. jensen , g. paladin , a. vulpiani , _ dynamical systems approach to turbulence _ ( cambridge university press , 1998 ). e. j. d. campos , j. l. miller , t. j. mller , r. g. peterson , oceanography * 8 * , 87 ( 1995 ) .x. capet , j. c. mcwilliams , m. j. molemaker , a. f. shchepetkin , journal of physical oceanography * 38 * , 29 ( 2008 ) .x. capet , j. c. mcwilliams , m. j. molemaker , a. f. 
shchepetkin , journal of physical oceanography * 38 * , 44 ( 2008 ) .j. g. charney , journal of atmospheric sciences * 28 * , 1087 ( 1971 ) .j. a. cummings , quarterly journal of the royal meteorological society * 131 * , 3583 ( 2005 ) .b. m. de castro , j. a. lorenzetti , i. c. a. da silveira , l. b. de miranda , _ estrutura termohalina e circulao na regio entre o cabo de so tom ( rj ) e o chu ( rs ) _ , in _ o ambiente oceanogrfico da plataforma continental e do talude na regio sudeste - sul do brasil _ , edited by c. l. d. b. rossi - wongtschowski and l. s .- p .madureira ( edusp , 2006 ) .a. gordon , c. greengrove , deep sea research part a * 33 * ( 5 ) , 573 ( 1986 ) .d. v. hansen , p. m. poulain , journal of atmospheric and oceanic technology * 13 * ( 4 ) , 900 ( 1996 ) .s. houry , e. dombrowsky , p. de mey , j. minster , journal of physical oceanography * 17 * ( 10 ) , 1619 ( 1987 ) .h. s. huntley , b. l. lipphardt , a. d. kirwan , ocean modelling , doi : 10.1016/j.ocemod.2010.11.001 ( 2010 ) .p. klein , b. l. hua , g. lapeyre , x. capet , s. le gentil , h. sasaki , journal of physical oceanography * 38 * , 1748 ( 2008 ) .i. koszalka , j. h. lacasce , k. a. orvik , journal of marine research * 67 * , 411 ( 2009 ) .r. h. kraichnan , physics of fluids * 10 * , 1417 ( 1967 ) . j. h. lacasce , progress in oceanography * 77 * , 1 ( 2008 ) .j. h. lacasce , c. ohlmann , journal of marine research * 61 * , 285 ( 2003 ) .g. lacorata , e. aurell , b. legras , a. vulpiani , journal of the atmospheric sciences * 61 * , 2936 ( 2004 ) .g. lacorata , e. aurell , a. vulpiani , annales geophysicae * 19 * , 121 ( 2001 ) .r. legeckis , a. gordon , deep sea research * 29 * , 375 ( 1982 ) . c. lentini , d. olson , g. podest , geophysical research letters * 29 * , 1811 ( 2002 ) .i. d. lima , c. a. e. garcia , o. o. mller , continental shelf research * 16 * ( 10 ) , 1307 ( 1996 ) .j. t. lin , journal of atmospheric sciences * 29 * , 394 ( 1972 ) .r. lumpkin , s. elipot , journal of geophysical research * 115 * , c12017 ( 2010 ) .j. c. mcwilliams , reviews of geophysics * 23 * , 165 ( 1985 ) .t. j. mller , y. ikeda , n. zangenberg , l. v. nonato , journal of geophysical research * 103 * ( c3 ) , 5429 ( 1998 ) .a. m. obhukov , izvestiya akademii nauk sssr , seriya geograficheskaya i geofizichaskaya * 5 * , 453 ( 1941 ) .l. r. oliveira , a. r. piola , m. m. mata , i. d. soares , journal of geophysical research * 114 * ( c10 ) , c10006 ( 2009 ) .m. ollitrault , c. gabillet , a. c. d. verdiere , journal of fluid mechanichs * 533 * , 381 ( 2005 ) .r. g. peterson , l. stramma , progress in oceanography * 26 * ( 1 ) , 1 ( 1991 ) .l. p. pezzi , r. b. souza , m. s. dourado , c. a. e. garcia , m. m. mata , m. a. f. silva - dias , geophysical research letters , * 32 * ( 22 ) ( 2005 ) .a. piola , h. figueroa , a. bianchi , journal of geophysical research * 92 * ( c5 ) , 5101 ( 1987 ) .a. piola , o. o. mller , r. a. guerrero , e. j. d. campos , continental shelf research * 28 * ( 13 ) , 1639 ( 2008 ) .l. f. richardson , proceedings of the royal society a * 110 * , 709 ( 1926 ) .h. schfer , w. krauss , journal of marine research * 53 * , 403 ( 1995 ) .j. stech , j. lorenzetti , journal of geophysical research * 97 * ( c6 ) , 9507 ( 1992 ) . m. stevenson , woce newsletter * 22 * , 1 ( 1996 ) .l. stramma , m. england , journal of geophysical research * 104 * ( c9 ) , 20863 ( 1999 ) .a. sybrandy , p. p. niiler , _ woce / toga lagrangian drifter construction manual _ ( university of california , 1991 ) .g. i. 
taylor , proceedings of the london mathematical society * 20 * , 196 ( 1921 ) .l. n. thomas , a. tandon , a. mahadevan , _ submesoscale processes and dynamics _ , in _eddy resolving ocean modeling _ , edited by m. w. hecht , h. hasumi , amer .union , 17 ( 2008 ) .
in the framework of the monitoring by ocean drifters (mondo) project, a set of lagrangian drifters was released in proximity of the brazil current, the western branch of the subtropical gyre in the south atlantic ocean. the experimental strategy of deploying part of the buoys in clusters offers the opportunity to examine relative dispersion on a wide range of scales. adopting a dynamical systems approach, we focus our attention on scale-dependent indicators, like the finite-scale lyapunov exponent (fsle) and the finite-scale (mean square) relative velocity (fsrv) between two drifters as a function of their separation, and compare them with classic time-dependent statistical quantities like the mean square relative displacement between two drifters and the effective diffusivity as functions of the time lag from the release. we find that, depending on the observable considered, the quasigeostrophic turbulence scenario is overall compatible with our data analysis, with discrepancies from the expected behavior of 2d turbulent trajectories likely to be ascribed to the non-stationary and non-homogeneous characteristics of the flow, as well as to possible ageostrophic effects. submesoscale features of km are considered to play a role, to some extent, in determining the properties of relative dispersion as well as the shape of the energy spectrum. we also present numerical simulations of an ogcm of the south atlantic, and discuss the comparison between experimental and model data about mesoscale dispersion.
since the beginning of financial science , stock prices , option prices and other quantities have been described by stochastic and partial differential equations . since the 1980s however , the path integral approach , created in the context of quantum mechanics by richard feynman , has been introduced to the field of finance .earlier , norbert wiener , in his studies on brownian motion and the langevin equation , used a type of functional integral that turns out to be a special case of the feynman path integral ( see also mark kac , and for a general overview see kleinert and schulman ) .the power of path - integration for finance ( , ) lies in its ability to naturally account for payoffs that are path - dependent .this makes path integration the method of choice to treat one of the most challenging types of derivatives , the path - dependent options .feynman and kleinert showed how quantum - mechanical partition functions can be approximated by an effective classical partition function , a technique which has been successfully applied to the pricing of path - dependent options ( see ref . and references therein , and refs . for recent applications ) .there exist many different types of path - dependent options .the two types which are considered in this paper are asian and barrier options .asian options are exotic path - dependent options for which the payoff depends on the average price of the underlying asset during the lifetime of the option .one distinguishes between _ average price _ and _ average strike _ asian options . the average price asian option has been treated in the context of path integrals by linetsky .the payoff of an average price is given by and for a call and put option respectively . here is the strike price and denotes the average price of the underlying asset at maturity . can either be the arithmetical or geometrical average of the asset price .average price asian options cost less than plain vanilla options .they are useful in protecting the owner from sudden short - lasting price changes in the market , for example due to order imbalances .average strike options are characterized by the following payoffs : and for a call and put option respectively , where is the price of the underlying asset at maturity .barrier options are options with an extra boundary condition .if the asset price of such an option reaches the barrier during the lifetime of the option , the option becomes worthless , otherwise the option has the same payoff as the option on which the barrier has been imposed .( for more information on exit - time problems see ref . and the references therein ) in section [ average strike option ] we treat the geometrically averaged asian option . in section[ 1 ] the asset price propagator for this standard asian option is derived within the path integral framework in a similar fashion as in ref . for the weighted asian option .the underlying principle of this derivation is the effective classical partition function technique developed by feynman and kleinert . in section[ 2 ] we present an alternative derivation of this propagator using a stochastic calculus approach .this propagator now allows us to price both the average price and average strike asian option . for both types of optionsthis results in a pricing formula which is of the same form as the black - scholes formula for the plain vanilla option .our result for the option price of an average price asian option confirms the result found in the literature . 
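Before turning to the analytic results, a plain Monte Carlo pricer for the two geometric-average payoffs just introduced is easy to write down and is the kind of benchmark used later in the text; the sketch below assumes risk-neutral Black-Scholes dynamics (drift mu, discount rate r), and its parameter values are purely illustrative.

```python
import numpy as np

def geometric_asian_mc(S0, K, mu, sigma, r, T, n_steps=100, n_paths=100_000, seed=1):
    """Monte Carlo prices of geometric-average Asian calls under Black-Scholes dynamics.

    Simulates the log-return on a discrete grid, forms its time average, evaluates the
    payoff per path and discounts (the benchmark in the text uses 5e5 paths, 100 steps).
    Returns (average-price call, average-strike call); parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dx = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    x = np.cumsum(dx, axis=1)              # log-return paths x(t_i) = log(S(t_i)/S0)
    x_bar = x.mean(axis=1)                 # time-averaged log-return
    S_T = S0 * np.exp(x[:, -1])
    A_geo = S0 * np.exp(x_bar)             # geometric average of the asset price
    avg_price = np.exp(-r * T) * np.maximum(A_geo - K, 0.0).mean()
    avg_strike = np.exp(-r * T) * np.maximum(S_T - A_geo, 0.0).mean()
    return avg_price, avg_strike

print(geometric_asian_mc(S0=100.0, K=100.0, mu=0.05, sigma=0.2, r=0.05, T=1.0))
```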
for the average strike option no formula of this simplicity exists as far as we know .our derivation and analysis of this formula is presented in section [ 3 ] , where our result is checked with a monte carlo simulation . in section [ 4 ] we impose a boundary condition on the asian option in the form of a barrier on a control process , and check whether the method used in section [ average strike option ] is still valid when this boundary condition is imposed on the propagator for the normal asian option , using the method of images . finally in section [ 5 ] we draw conclusions .the path integral propagator is used in financial science to track the probability distribution of the logreturn at time , where is the initial value of the underlying asset .this propagator is calculated as a weighted sum over all paths from the initial value at time to a final value at time \,dt\right)\ ] ] the weight of a path , in the black - scholes model , is determined by the lagrangian = \frac{1}{2\sigma^{2}}\left [ \dot { x}-\left ( \mu-\frac{\sigma^{2}}{2}\right ) \right ] ^{2 } \label{black - scholes lagrangiaan}\ ] ] where is the drift and is the volatility appearing in the wiener process for the logreturn . for asian options ,the payoff is a function of the average value of the asset .therefore we introduce as the logreturn corresponding to the average asset price at maturity .when is the geometric average of the asset price , then is an algebraic average . the key step to treat asian options within the path integral framework is to partition the set of all paths into subsets of paths , where each path in a given subset has the same average . summing over only these paths that have a given average defines the conditional propagator : \,dt\right ) \label{conditionele propagator}\ ] ] this is indeed a partitioning of the sum over all paths: the delta function in the sum over all paths picks out precisely all the paths that will have the same payoff for an asian option .the calculation of is straightforward ; when the delta function is rewritten as an exponential , + \frac{1}{t}ikx(t)\right ) dt\right ) , \ ] ] the resulting lagrangian is that of a free particle in a constant force field in 1d .the resulting integration over paths is found by standard procedures : ^{2}\right .\nonumber \\ & \left .-\frac{6}{\sigma^{2}t}\left ( \bar{x}_{t}-\frac{x_{t}}{2}\right ) ^{2}\right \ } , \label{conditionele propagator uitgerekend}\ ] ] and corresponds to the result found by kleinert and by linetsky . the conditional propagator is interpreted in the framework of stochastic calculus as the joint propagator of and its average .the calculation of here is similar to the derivation presented in ref . where this joint propagator is calculated for the vasicek model .the main point is that in a gaussian model the joint distribution of the couple has to be gaussian too . as a consequence this joint distributionis fully characterized by the expectation values and the variances of and and by the correlation between these two processes .the expectation value of is given by , its variance by and the correlation between the two processes by .the density function of such a gaussian process is then known to be ^{2}+3\left [ \bar { x}_{t}-\left ( \mu-\frac{\sigma^{2}}{2}\right ) \frac{t}{2}\right ] ^{2}\right .\nonumber \\ & \left .\left . 
-3\left[ x_{t}-\left ( \mu-\frac{\sigma^{2}}{2}\right ) t\right ] \left [ \bar{x}_{t}-\left ( \mu-\frac{\sigma^{2}}{2}\right ) \frac{t}{2}\right ] \right \ } \right)\end{aligned}\ ] ] this agrees with eq .( [ conditionele propagator uitgerekend ] ) for . if the payoff at time of an asian option is written as , then the expected payoff is = { \displaystyle \int \limits_{-\infty}^{\infty } } dx_{t}{\displaystyle \int \limits_{-\infty}^{\infty } } d\bar{x}_{t}\text { } v_{t}^{asian}(x_{t},\bar{x}_{t})\mathcal{k}\left ( x_{t},t\ , \left \vert 0,0\right \vert \bar{x}_{t}\right ) \label{algemene vorm waarde van de aziatische optie}\ ] ] the price of the option , is the discounted expected payoff , \label{6}\ ] ] where is the discount ( risk - free ) interest rate . using expression ( [algemene vorm waarde van de aziatische optie ] ) the price of any option which is dependent on the average of the underlying asset during the lifetime of the option can be calculated .we will now derive the price of an average strike geometric asian call option explicitly . in order to do this , expression ( [ algemene vorm waarde van de aziatische optie ] ) has to be evaluated using the payoff : substituting ( [ payoff ] ) in ( [ 6 ] ) yields where the lower boundary of the integration now depends on .when considering an average price call , the payoff ( for a call option ) is leading to a constant lower boundary for the integration , and the integrals are easily evaluated . in the present casehowever , the integration boundary is more complicated and it is more convenient to express this boundary through a heaviside function , written in its integral representation : now the two original integrals have been reduced to gaussians at the cost of inserting a complex term in the exponential .expression ( [ waarde van de aziatische optie 1 ] ) can be split into two terms denoted and , where ^{2}\right .\nonumber \\ & \left .-\frac{6}{\sigma^{2}t}\left ( \bar{x}_{t}-\frac{x_{t}}{2}\right ) ^{2}+i\left ( x_{t}-\bar{x}_{t}\right ) \tau+x_{t}\right \}\end{aligned}\ ] ] and has the same form , except with instead of in the last term of the argument of the exponent . as a first step , the gaussian integrals over and are calculated , yielding with\ ] ] now the integral has been reduced to a form which can be rewritten by making use of plemelj s formulae . taking into account symmetry ,this reduces to\ ] ] with{l}\smallskip a=\dfrac{\sigma^{2}t}{6}\\ \smallskip b=\left ( \mu+\dfrac{\sigma^{2}}{2}\right ) \dfrac{t}{2}\end{array } \right.\ ] ] the first term thus becomes + 1\right \}\ ] ] the second term , is evaluated similarly , leading to + 1\right \ } \right .\nonumber \\ & \left .-\exp \left [ \left ( \mu-\frac{\sigma^{2}}{6}\right ) \frac{t}{2}\right ] \left \{ \operatorname{erf}\left [ \sqrt{\frac{3t}{8\sigma^{2}}}\left ( \mu-\frac{\sigma^{2}}{6}\right ) \right ] + 1\right \ } \right)\end{aligned}\ ] ] using the cumulative distribution function of the normal distribution \ ] ] this can be rewritten in a more compact form as with the following shorthand notations{c}\smallskip d_{1}=\sqrt{\dfrac{3t}{4\sigma^{2}}}\left ( \mu+\dfrac{\sigma^{2}}{2}\right ) \\\smallskip d_{2}=\sqrt{\dfrac{3t}{4\sigma^{2}}}\left ( \mu-\dfrac{\sigma^{2}}{6}\right ) \end{array } \right.\ ] ] expression ( [ priceform ] ) is the analytic pricing formula for an average strike geometric asian call option , obtained in the present work with the path integral formalism . to the best of our knowledge ,no pricing formula of this simplicity exists . 
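A quick way to make the joint propagator used above concrete is to recall the standard Brownian-motion facts it encodes (the explicit coefficients are partly garbled in the source): for x_t = (mu - sigma^2/2) t + sigma W_t, the time average x_bar_T has variance sigma^2 T / 3 and correlation sqrt(3)/2 with x_T. The following simulation is only a sanity check of these values, not part of the original derivation.

```python
import numpy as np

# Sanity check of the Gaussian statistics encoded in the joint density above:
# for x_t = (mu - sigma^2/2) t + sigma W_t the time average x_bar_T satisfies
#   Var(x_bar_T) = sigma^2 T / 3   and   corr(x_T, x_bar_T) = sqrt(3)/2,
# which are the standard Brownian-motion values the quadratic form relies on.
mu, sigma, T, n_steps, n_paths = 0.05, 0.2, 1.0, 500, 100_000
rng = np.random.default_rng(2)
dt = T / n_steps
x = np.zeros(n_paths)
x_sum = np.zeros(n_paths)
for _ in range(n_steps):
    x += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    x_sum += x
x_bar = x_sum / n_steps
print(np.var(x) / (sigma**2 * T))                        # ~ 1
print(np.var(x_bar) / (sigma**2 * T / 3.0))              # ~ 1
print(np.corrcoef(x, x_bar)[0, 1], np.sqrt(3.0) / 2.0)   # ~ 0.866 in both cases
```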
to check this formula, we compared its results to those of a monte carlo simulation .the monte carlo scheme used is as follows : first , the evolution of the logreturn is simulated for a large number of paths .this evolution is governed by a discrete geometric brownian motion for a number of time steps .using the value for the logreturn at each time step , the average logreturn can be calculated for every path .subsequently the payoff per path can be obtained , which is then used to calculate the option price by averaging over all payoffs per path en discounting back in time . the analytical result andthe monte carlo simulation agree to within a relative error of 0.3% when 500 000 samples and 100 time steps are used .this means that our analytical result lies within the error bars at every point .we also obtained the result for an average price asian option ; in contrast to the new result for the average strike option this could be compared to the existing formula , and was found to be the same .[ h ] figure1.eps using the propagator ( [ propagator voor aziaat met barriere ] ) the price of an asian option with a barrier can be calculated .the general pricing formula is given by: this calculation was done for an average price option : .the calculation , though rather cumbersome , is essentially the same as for the asian options in section [ average strike option ] .the integral over is a gaussian integral , and the remaining two integrals can be transformed into a standard bivariate cumulative normal distribution , defined by:=\frac{1}{2\pi \sqrt{1-\chi^{2}}}\int_{-\infty}^{a}\int_{-\infty } ^{b}\exp \left ( -\frac{1}{2\left ( 1-\chi^{2}\right ) } \left ( x^{2}+y^{2}-2\chi xy\right ) \right ) dxdy \label{cumnormdistr}\ ] ] this eventually leads to the following pricing formula for an asian option with a barrier: } \left ( \frac{b}{s_{0y}}\right ) ^{\frac{2\left [ \frac{4}{\xi } \left ( \nu-\frac{\xi^{2}}{2}\right ) -3\frac{\rho}{\sigma}\left ( \mu -\frac{\sigma^{2}}{2}\right ) \right ] } { \xi \left ( 4 - 3\rho^{2}\right ) } } n\left ( d_{5},d_{6},-\sqrt{\frac{3}{4}}\rho \right ) \nonumber \\ & \left .+ k~e^{\frac{3}{\sigma^{2}}x_{s}\left [ \frac{2x_{s}}{t}+\left ( \mu-\frac{\sigma^{2}}{2}\right ) \right ] } \left ( \frac{b}{s_{0y}}\right ) ^{\frac{2\left [ \frac{4}{\xi}\left ( \nu-\frac{\xi^{2}}{2}\right ) -3\frac{\rho}{\sigma}\left ( \mu-\frac{\sigma^{2}}{2}\right ) \right ] } { \xi \left ( 4 - 3\rho^{2}\right ) } } n\left ( d_{7},d_{8},-\sqrt{\frac{3}{4}}\rho \right ) \right ] \label{optieprijs aziaat met barriere}\ ] ] where the following shorthand notations were used:{l}d_{1}=-\dfrac{\ln \left ( \dfrac{k}{s_{0x}}\right ) -\dfrac{t}{2}\left ( \mu+\dfrac{\sigma^{2}}{6}\right ) } { \sqrt{\dfrac{\sigma^{2}t}{3}}}\\ d_{2}=\dfrac{\ln \left ( \dfrac{b}{s_{0y}}\right ) -t\left ( \nu-\dfrac{\xi ^{2}}{2}+\dfrac{\sigma \xi \rho}{2}\right ) } { \sqrt{\xi^{2}t}}\\ d_{3}=-\dfrac{\ln \left ( \dfrac{k}{s_{0x}}\right ) -\dfrac{t}{2}\left ( \mu-\dfrac{\sigma^{2}}{2}\right ) } { \sqrt{\dfrac{\sigma^{2}t}{3}}}\\ d_{4}=\dfrac{\ln \left ( \dfrac{b}{s_{0y}}\right ) -t\left ( \nu-\dfrac{\xi ^{2}}{2}\right ) } { \sqrt{\xi^{2}t}}\\ d_{5}=-\dfrac{\ln \left ( \dfrac{k}{s_{0x}}\right ) -t\left [ 2\dfrac{x_{s}}{t}+\dfrac{1}{2}\left ( \mu+\dfrac{\sigma^{2}}{6}\right ) \right ] } { \sqrt{\dfrac{\sigma^{2}t}{3}}}\\ d_{6}=\dfrac{\ln \left ( \dfrac{b}{s_{0y}}\right ) -\dfrac{t}{\sigma}\left [ 3\xi \rho \dfrac{x_{s}}{t}+\sigma \dfrac{x_{s}}{t}+\sigma \left ( \nu-\dfrac { \xi^{2}}{2}+\dfrac{\sigma \xi \rho}{2}\right ) 
\right ] } { \sqrt{\xi^{2}t}}\\ d_{7}=-\dfrac{\ln \left ( \dfrac{k}{s_{0x}}\right ) -t\left [ \dfrac{2x_{s}}{t}+\dfrac{1}{2}\left ( \mu-\dfrac{\sigma^{2}}{2}\right ) \right ] } { \sqrt{\dfrac{\sigma^{2}t}{3}}}\\ d_{8}=\dfrac{\ln \left ( \dfrac{b}{s_{0y}}\right ) -\dfrac{t}{2\sigma}\left [ \dfrac{1}{t}\left ( 6\rho x_{s}\xi+2\sigma y_{s}\right ) + 2\sigma \left ( \nu-\dfrac{\xi^{2}}{2}\right ) \right ] } { \sqrt{\xi^{2}t}}\end{array}\ ] ] fig . ( [ versch_corr_artikel ] ) shows the option price for an asian option with a barrier as a function of the initial asset price belonging to the process , defined by : .[ h ] figure2.eps this figure shows that the analytical result derived in section [ derivation of the option price ] deviates from the monte carlo simulation with increasing correlation .the approximate nature of our approach can be understood as follows .the essence of the approach presented here is that to calculate the price of asian barrier options , two steps need to be taken .first , a partitioning of paths according to the average along the path must be performed , and second , the method of images must be used in order to cancel out paths which have reached the barrier .the difficulty combining these two steps , is that mirror paths have a different average than the original paths , and thus belong to a different partition .this difficulty can apparently be overcome by treating the average itself as a separate , correlated process ( as proposed in ref .this procedure , relating to , leads to the correct propagator ( and price ) in the case of a plain asian option as shown in section [ 2 ] . however , from the results shown in fig .[ versch_corr_artikel ] it is clear that this is no longer the case for an asian option with a barrier on a correlated control process .this is because the exact average of the process does not behave as a separate , correlated process ( the average described by this process is henceforth called the approximate average ) .this approach is exact for a plain asian option , where all paths contribute , but when a barrier is implemented using the method of images , and thus eliminating some of the paths , the following approximation is made .when the process hits the barrier and is thus eliminated , its corresponding and processes are eliminated as well .but the process considered in our derivation is only approximate , so the wrong paths are eliminated .the central question is whether this will lead to a difference between the distribution of contributing paths for the exact averages and the corresponding distribution for the approximate averages , when a barrier has been implemented .figure 3 shows that this is indeed the case , and that this difference increases when correlation increases .[ h ] figure3.eps when the correlation is zero , the paths which are eliminated for both the exact and the approximate average are randomly distributed ( because the behavior of has nothing to do with the behavior of ) , which means that both distributions remain the same gaussian as they would be without a barrier .this is the reason why our result is exact when correlation is zero .another source of approximation lies in the use of the black - scholes model which has well - known limitations .several other types of market models propose to overcome such limitations , for example by introducing additional ad hoc stochastic variables or by improving the description of the behavior of buyers / sellers .the extension of the present work to for example the heston model 
lies beyond the scope of this article .in this paper , we derived a closed - form pricing formula for an average price as well as an average strike geometric asian option within the path integral framework .the result for the average price asian option corresponds to that found by linetsky , using the effective classical partition function technique developed by feynman and kleinert .the result for the average strike asian option was compared to a monte carlo simulation .we found that the agreement between the numerical simulation and the analytical result for an average strike asian option is such that they coincide to within a relative error of less than 0.3 % for at least 500 000 samples and 100 time steps .furthermore , a pricing formula for an asian option with a barrier on a control process was developed .this is an asian option with the additional condition that the payoff is zero whenever the value of control process crosses a certain predetermined barrier .the pricing of this option was performed by constructing a new propagator which consisted of a linear combination of two propagators for a regular asian option .the resulting pricing formula is exact when the correlation is zero , and is approximate when the correlation increases .the central approximation made in our derivation , is that the process for the average logreturn is treated as a stochastic process , which is correlated with the process of the logreturn .this assumption is correct whenever all price - paths contribute to the total sum , but becomes approximate when a boundary condition is applied .the authors would like to thank dr .sven foulon and prof .karel in t hout for the fruitful discussions .this work is supported financially by the fund for scientific research - flanders , fwo project g.0125.08 , and by the special research fund of the university of antwerp bof noi ua 2007 . d. lemmens , m. wouters , j. tempere , s. foulon , _ a path integral approach to closed - form option pricing formulas with applications to stochastic volatility and interest rate models , _ physical review .e 78 , 016101 ( 2008 ) .
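As a companion to the barrier construction summarized above, a brute-force Monte Carlo benchmark of the type used for the comparison with the analytic formula can be sketched as follows; the barrier is implemented here as an up-and-out condition on the control process (the source does not spell out the barrier direction, so this is an assumption), and all parameter names and values are illustrative rather than those of the paper.

```python
import numpy as np

def asian_barrier_on_control_mc(S0x, S0y, K, B, mu, nu, sigma, xi, rho, r, T,
                                n_steps=200, n_paths=200_000, seed=3):
    """Monte Carlo benchmark for the Asian option with a barrier on a control process.

    The asset (volatility sigma, drift mu) and the control process (volatility xi,
    drift nu) are driven by correlated Brownian motions (correlation rho).  The payoff
    is that of a geometric average-price call on the asset, set to zero on paths where
    the control crosses the barrier B, implemented here as an up-and-out condition.
    Returns the discounted price and the knock-out fraction.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    batches, price, knocked = 10, 0.0, 0
    m = n_paths // batches
    for _ in range(batches):                               # batches keep memory modest
        z1 = rng.standard_normal((m, n_steps))
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal((m, n_steps))
        x = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z1, axis=1)
        y = np.cumsum((nu - 0.5 * xi**2) * dt + xi * np.sqrt(dt) * z2, axis=1)
        alive = (S0y * np.exp(y)).max(axis=1) < B          # control never reached B
        A_geo = S0x * np.exp(x.mean(axis=1))               # geometric average of the asset
        payoff = np.where(alive, np.maximum(A_geo - K, 0.0), 0.0)
        price += np.exp(-r * T) * payoff.sum()
        knocked += int((~alive).sum())
    return price / (batches * m), knocked / (batches * m)

print(asian_barrier_on_control_mc(S0x=100, S0y=100, K=100, B=130, mu=0.05, nu=0.05,
                                  sigma=0.2, xi=0.2, rho=0.5, r=0.05, T=1.0))
```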
we derive a closed-form solution for the price of an average price as well as an average strike geometric asian option, by making use of the path integral formulation. our results are compared to a numerical monte carlo simulation. we also develop a pricing formula for an asian option with a barrier on a control process, combining the method of images with a partitioning of the set of paths according to the average along the path. this formula is exact when the correlation is zero, and is approximate when the correlation increases.
in this paper we study the doubly nonlinear ( dnl ) reaction - diffusion problem posed in the whole euclidean space we want to describe the asymptotic behaviour of the solution for large times and for a specific range of the parameters and .we recall that the -laplacian is a nonlinear operator defined for all by the formula and we consider the more general diffusion term called `` doubly nonlinear''operator . here, is the spatial gradient while is the spatial divergence .the doubly nonlinear operator ( which can be though as the composition of the -th power and the -laplacian ) is much used in the elliptic and parabolic literature ( see and their references ) and allows to recover the porous medium operator choosing or the -laplacian operator choosing .of course , choosing and we obtain the classical laplacian . before proceeding ,let us fix some important restrictions and notations .we define the constants and we make the assumption : that we call `` fast diffusion assumption''(cfr . with ) .note that the shape of the region depends on the dimension .two examples are reported in figure [ fig : simulfastcaserange ] ( note that the region in the case is slightly different respect to the case and we have not displayed it ) .we introduce the constant since its positivity simplifies the reading of the paper and allows us to make the computations simpler to follow .-plane.,title="fig : " ] -plane.,title="fig : " ] the case , i.e. , has been recently studied in . in thissetting , the authors have showed that the equation in possesses a special class of travelling waves which describe the asymptotic behaviour for large times of more general solutions ( see subsection [ sectionpreviousresultsfast ] for a summary of the results of the case ) .our main goal is to prove that the case presents significative departs in the asymptotic properties of the solutions of problem . in particular, we will see that general solutions do not move with constant speed but with exponential spacial propagation for large times .this fact is the most interesting deviance respect to the classical theory in which tws play an important role in the study of the asymptotic behaviour .the function is a reaction term modeled on the famous references by fisher , and kolmogorov - petrovski - piscounoff in their seminal works on the existence of traveling wave propagation .the classical example is the logistic term , .more generally , we will assume that \to { \mathbb{r}}\text { and } f \in c^1([0,1 ] ) \\f(0 ) = 0 = f(1 ) \ ; \text { and } \ ; f(u ) > 0 \text { in } ( 0,1 ) \\f \text { is concave in } [ 0,1 ] \end{cases}\ ] ] see for a more complete description of the model .moreover , we will suppose that the initial datum is a lebesgue - measurable function and satisfies note that the previous assumption is pretty much general than the more typical continuous with compact support initial data .moreover , since , all data satisfying are automatically integrable , .[ [ main - results - and - organization - of - the - paper . ] ] main results and organization of the paper .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the paper is divided in parts as follows : in section [ sectionpreviousresultsfast ] we present some known theorems about problem . our goal is to give to the reader a quite complete resume on the previous work and related bibliography , to connect it with the new results contained in this paper . 
in section [ convergencetozerofast ]we begin the study of the asymptotic behaviour of the solutions of problem -- , with restriction . in particular, we firstly introduce the critical exponent by giving a formal motivation and , later , we prove the following theorem .[ convergencetozerofastdiffusion ] fix .let and such that .then for all , the solution of problem with initial datum satisfies for all , we call `` exponential outer set '' or , simply , `` outer set '' .the previous theorem shows that , for large times , the solution converges to zero on the `` outer set '' and represents the first step of our asymptotic study . in section [ sectionexponentialexpansionsuperlevelsets ]we proceed with the asymptotic analysis , studying the solution of problem with initial datum where and are positive real numbers and .we show the following crucial proposition .[ expanpansionofminimallevelsets ] fix .let and such that and let .then there exist , and such that the solution of problem with initial datum satisfies this result asserts that for all initial data `` small enough '' and for all , the solution of problem is strictly greater than a fixed positive constant on the `` exponential inner sets '' ( or `` inner sets '' ) for large times .hence , it proves the non existence of travelling wave solutions ( tws ) since `` profiles '' moving with constant speed of propagation can not describe the asymptotic behaviour of more general solutions ( see section [ sectionpreviousresultsfast ] for the definition of tws ) .moreover , this property will be really useful for the construction of sub - solutions of general solutions since , as we will see , it is always possible to place an initial datum with the form under a general solution of and applying the maximum principle ( see lemma [ lemmaplacingbarenblattundersolution ] ) . in section [ sectionasymptoticbehaviourfast ]we analyze the asymptotic behaviour of the solution of problem , in the `` inner sets '' . along with theorem [ theoremboundsforlevelsetsfast ]the next theorem is the main result of this paper .[ convergencetoonefastdiffusion ] fix .let and such that .then for all , the solution of problem with initial datum satisfies this theorem can be summarized by saying that the function converges to the steady state 1 in the `` inner sets '' for large times . from the point of view of the applications, we can say that the density of population invades all the available space propagating exponentially for large times ., for all .,title="fig : " ] , for all .,title="fig : " ] in section [ sectionboundsforlevelsetsfast ] we consider the classical reaction term .we find interesting bounds for the level sets of the solution of problem , .in particular , we prove that the information on the level sets of the general solutions is contained , up to a multiplicative constant , in the set , for large times .[ theoremboundsforlevelsetsfast ] fix .let and such that , and take .then for all , there exists a constant and a time large enough , such that the solution of problem with initial datum and reaction satisfies for all . 
in particular, we have an important feature of this result is that for all , the set does not depend on some , while in theorem [ convergencetozerofastdiffusion ] and theorem [ convergencetoonefastdiffusion ] the `` outer sets '' and the `` inner sets '' depend on and , respectively .moreover , taking a _ spatial logarithmic scale _ we can write the estimate for large enough .actually , this result was not known for `` fast '' nonlinear diffusion neither for the porous medium case , nor for the -laplacian case .however , it was proved by cabr and roquejoffre for the fractional laplacian in , in dimension . in order to fully understand the importance of theorem [ theoremboundsforlevelsetsfast ], we need to compare it with the linear case and , see formula .as we will explain later , in the linear case the location of the level sets is given by a main linear term in with a logarithmic shift for large times , see .in other words , the propagation of the front is linear `` up to '' a logarithmic correction , for large times .now , theorem [ theoremboundsforlevelsetsfast ] asserts that this correction does not occur in the `` fast diffusion '' range . using the logarithmic scale , we can compare the behaviour of our level sets with the ones of formula for linear diffusion , noting that there is no logarithmic deviation , but the location of the level sets is approximately linear for large times ( in spatial logarithmic scale , of course ) , and moreover there is a bounded interval of uncertainty on each level set location . in section [ sectionmaxprinccyldomains ]we prove a maximum principle for a parabolic equation of -laplacian type in non - cylindrical domains , see proposition [ maxprinnoncyldomains ] .the idea of comparing sub- and super - solutions in non - cylindrical domains comes from and it will turn out to be an extremely useful technical tool in the proof of theorem [ convergencetoonefastdiffusion ] .section [ appendixselfsimsolincinitdata ] is an appendix .we present some knew results on the existence , uniqueness and regularity for solutions of the `` pure diffusive '' parabolic equation with -laplacian diffusion and non - integrable initial data .in particular , we focus on radial data , and we study some basic properties of the self - similar solutions with datum . the results of this section are needed for proving theorem [ convergencetoonefastdiffusion ] . finally , in section [ sectionfinalremarksfast ]we conclude the paper with some comments and open problems related to our study . in particular , we focus on the range of parameters the case is critical in our study while the range is also known in literature as `` very fast '' diffusion range .one of the problems of this range is the lack of basic tools and basic theory ( existence , uniqueness , regularity of the solutions and estimates ) known for the porous medium equation and for the -laplacian equation , but not in the doubly nonlinear setting .in this brief section , for the reader s convenience , we recall some known results about problem with related bibliography , and we introduce some extremely useful tools such as `` barenblatt solutions '' , we will need through the paper . as we have explained before , we present here the literature and past works linked to our paper , in order to motivate our study . 
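To make the objects of the statements above concrete, here is a rough numerical sketch (not taken from the paper) of the one-dimensional model u_t = (|(u^m)_x|^{p-2}(u^m)_x)_x + u(1-u): an explicit finite-difference scheme with a power-law-tailed initial datum, tracking the position of the right half-level set in time. Grid, time step, clipping floor and the particular fast-diffusion pair (m, p) are choices of this illustration; no claim of stability or accuracy is made for other parameter ranges.

```python
import numpy as np

def dnl_fkpp_1d(m=0.8, p=2.0, L=200.0, nx=2001, T=10.0, dt=2e-4):
    """Explicit finite-difference sketch of  u_t = (|(u^m)_x|^{p-2} (u^m)_x)_x + u(1-u)
    on (-L, L), with a power-law-tailed initial datum and crude zero-flux ends.

    Purely illustrative: dt must stay small because the scheme is explicit and the
    effective diffusivity grows where u is small in the fast-diffusion range.
    Returns the grid, the final profile and the history of the right half-level set.
    """
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    u = np.minimum(1.0, 1.0 / (1.0 + x**2))      # heavy tail, mimicking fast-diffusion tails
    times, front = [], []
    for k in range(int(T / dt)):
        v = u ** m
        dv = np.diff(v) / dx                     # (u^m)_x at the cell interfaces
        flux = np.sign(dv) * np.abs(dv) ** (p - 1.0)
        du = np.zeros_like(u)
        du[1:-1] = np.diff(flux) / dx            # divergence of the nonlinear flux
        u = np.clip(u + dt * (du + u * (1.0 - u)), 1e-14, 1.0)
        if k % int(1.0 / dt) == 0:
            above = np.where(u > 0.5)[0]
            times.append(k * dt)
            front.append(x[above[-1]] if above.size else 0.0)
    return x, u, np.array(times), np.array(front)

x, u, times, front = dnl_fkpp_1d()
# with a fast-diffusion choice of (m, p) the half-level set accelerates in time, in
# contrast with the constant-speed fronts of the slow range; here the heavy tail of the
# datum already mimics the tails that the fast diffusion generates from localized data
print(np.round(front, 1))
```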
basically , the goal is to give to the reader a suitable background on the fisher - kpp theory , so that our new results can be compared and fully understood .[ [ finite - propagation - the - doubly - nonlinear - case . ] ] finite propagation : the doubly nonlinear case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in , we studied problem assuming and initial datum satisfying before enunciating the main results we need to introduce the notion of travelling waves ( tws ) .they are special solutions with remarkable applications , and there is a huge mathematical literature devoted to them .let us review the main concepts and definitions .fix and and assume that we are in space dimension 1 ( note that when , the doubly nonlinear operator has the simpler expression ) .a tw solution of the equation is a solution of the form , where , and the _ profile _ is a real function . in the application to the fisher - kpp problem, the profile is assumed to satisfy in that case we say that is an _ admissible _ tw solution .similarly , one can consider admissible tws of the form with , decreasing and such that and .but it is easy to see that these two options are equivalent , since the the shape of the profile of the second one can be obtained by reflection of the first one , , and it moves in the opposite direction of propagation .+ finally , an admissible tw is said _ finite _ if for and/or for , or _ positive _ if , for all .the line that separates the regions of positivity and vanishing of is then called the _free boundary_. now , we can proceed .we proved that the existence of admissible tw solutions depends on the wave s speed of propagation .in particular , we showed the following theorem , cfr . with theorem 2.1 and theorem 2.2 of .[ theoremexistenceoftws ] let and such that .then there exists a unique such that equation possesses a unique admissible tw for all and does not have admissible tws for .uniqueness is intended up to reflection or horizontal displacement . moreover ,if , the tw corresponding to the value is finite ( i.e. , it vanishes in an infinite half - line ) , while the tws corresponding to the values are positive everywhere . finally , when , any admissible tw is positive everywhere .the concept of admissible tws and the problem of their existence was firstly introduced in and .then aronson and weinberger , see , proved theorem [ theoremexistenceoftws ] in the case of the linear diffusion , i.e. and ( note that the choice and is a subcase of ) .later , the problem of the existence of critical speeds and admissible tws for the fisher - kpp equation has been studied for the porous medium diffusion ( and ) , see and .recently , see , it has been proved the existence of admissible tws and admissible speeds of propagation when and , i.e. -laplacian diffusion . 
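The phase-plane mechanism behind the existence of a minimal admissible speed is easiest to see in the classical case m = 1, p = 2 with f(u) = u(1-u), where the profile solves phi'' + c phi' + phi(1-phi) = 0 and the minimal speed is c* = 2. The shooting sketch below (illustrative only, and restricted to this classical case rather than the doubly nonlinear one) integrates out of the unstable manifold of phi = 1 and checks whether the orbit reaches phi = 0 without becoming negative.

```python
import numpy as np

def min_profile_value(c, xi_max=60.0, h=0.01, eps=1e-6):
    """Shoot the classical FKPP wave ODE  phi'' + c*phi' + phi*(1 - phi) = 0  forward
    from the unstable manifold of phi = 1 and return the minimum of phi reached.

    For c >= c* = 2 the orbit connects to phi = 0 without changing sign (admissible
    monotone front); for c < 2 the origin is a spiral and phi overshoots below zero.
    Plain RK4, illustrative accuracy only.
    """
    mu = 0.5 * (-c + np.sqrt(c * c + 4.0))          # unstable eigenvalue at (phi, phi') = (1, 0)
    y = np.array([1.0 - eps, -eps * mu])            # start slightly below the state phi = 1
    rhs = lambda y: np.array([y[1], -c * y[1] - y[0] * (1.0 - y[0])])
    phi_min = y[0]
    for _ in range(int(xi_max / h)):
        k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        phi_min = min(phi_min, y[0])
    return phi_min

for c in (1.5, 1.9, 2.0, 2.5):
    print(c, min_profile_value(c))    # dips below zero only for speeds under c* = 2
```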
in theorem [ theoremexistenceoftws ]we generalized these results when doubly nonlinear diffusion is considered and .then we focused on the pde part in which we studied the asymptotic behaviour of more general solutions , proving the following theorem ( theorem 2.6 of ) .[ ntheoremconvergenceinneroutersets ] fix .let and such that .\(i ) for all , the solution of the initial - value problem with initial datum satisfies ( ii ) moreover , for all it satisfies , in the case in which the function stands for a density of population , the statement of the previous theorem means that the individuals tend to occupy all the available space and , for large times , they spread with constant speed , see . from the mathematical point of view, we can state that the steady state is asymptotically stable while the null solution is unstable and , furthermore , the asymptotic stability / instability can be measured in terms of speed of convergence of the solution which , in this case , is asymptotically linear in distance of the front location as function of time .again we recall that for and , the previous theorem was showed in , while for and in .we point out that in this last paper , the authors worked with a slightly different reaction term , they called `` strong reaction '' , see also . in the linear case and , the statements of theorem [ ntheoremconvergenceinneroutersets ]were improved .indeed , when , bramson showed an interesting property of the level sets , , of the solution of equation ( with and ) with reaction term satisfying . in particular , in and , it was proved that for all there exist constants , and such that \ ] ] for large enough , where .the previous formula is interesting since it allows to estimate the `` delay '' of the solution from the positive tw with critical speed which , according to , grows in time and consists in a logarithmic deviance .furthermore , he showed that general solutions converge uniformly to the tw with critical speed of propagation ( once it is `` shifted '' of a logarithmic factor ) , for large times .more recently , similar results have been proved in with pdes techniques .[ [ exponential - propagation - for - porous - medium - fast - diffusion . 
] ] exponential propagation for porous medium fast diffusion .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let s now resume briefly the results of king and mccabe ( ) , which have inspired this paper .they considered the porous medium case , which is obtained by taking in the equation in : in the fast diffusion range , ( note that we absorbed a factor by using a simple change of variables ) .they considered non - increasing radial initial data decaying faster than as and studied radial solutions of problem .they showed that when , the radial solutions of the previous equation converge pointwise to 1 for large times with exponential rate , for , and that the `` main variation in concentration occurs on the scale '' , ( see pag .2544 ) where in their notation .we will present an adaptation of their methods to our case in section [ convergencetozerofast ] .note that `` our '' critical exponent generalizes the value to the case and to more general reaction terms than .this is a severe departure from the tw behavior of the standard fisher - kpp model since there are no tw solutions .instead , they found that radial solutions of have non - tws form for large times : where and .the case is studied too and , as we have anticipated , we will discuss this range in the final section with some comments .exponential propagation happens also with fractional diffusion , both linear and nonlinear , see for instance and the references therein .we will not enter here into the study of the relations of our paper with nonlinear fractional diffusion , though it is an interesting topic . finally , we recall that infinite speed of propagation depends not only on the diffusion operator but also on the initial datum . in particular ,in , hamel and roques found that the solutions of the fisher - kpp problem with linear diffusion i.e. , ( and ) propagate exponentially fast for large times if the initial datum has a power - like spatial decay at infinity .the scene is set for us to investigate what happens in the presence of a fast doubly nonlinear diffusion .now we present some basic results concerning the barenblatt solutions of the `` pure diffusive '' doubly nonlinear parabolic equation which are essential to develop our study in the next sections ( the reference for this issue is ) .moreover , we recall some basic facts on existence , uniqueness , regularity and maximum principles for the solutions of problem . [ [ barenblatt - solutions . ] ] barenblatt solutions .+ + + + + + + + + + + + + + + + + + + + + fix and such that and consider the `` pure diffusive '' doubly nonlinear problem : where is the dirac s function with mass in the origin of and the convergence has to be intended in the sense of measures .it has been proved ( see ) that problem admits continuous weak solutions in self - similar form , called barenblatt solutions , where the _ profile _ is defined by the formula : ^{-\frac{p-1}{\widehat{\gamma}}},\ ] ] where is determined in terms of the mass choosing ( see for a complete treatise ) .we point out that there is an equivalent formulation ( see for the case ) in which the barenblatt solutions are written in the form ^{-\frac{p-1}{\widehat{\gamma } } } , \qquad r(t ) = \big[(n/\alpha)t \big]^{\frac{\alpha}{n}},\ ] ] where is a new constant. 
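For orientation, the explicit Barenblatt solution is easy to write down and verify in the porous-medium special case p = 2 with (n-2)/n < m < 1; the doubly nonlinear profile quoted above has the same structure with |xi|^{p/(p-1)} in place of |xi|^2, but its constants are not reproduced here. The check below simply confirms, by finite differences, that the stated p = 2 formula solves u_t = Delta(u^m); the specific values of (n, m, C) are illustrative.

```python
import numpy as np

# Explicit Barenblatt solution in the porous-medium special case p = 2, (n-2)/n < m < 1:
#   U(r,t) = t^(-alpha) * (C + k r^2 t^(-2*beta))^(-1/(1-m)),
#   beta = 1/(n(m-1)+2),  alpha = n*beta,  k = (1-m)*beta/(2m),
# checked here, by finite differences, against  u_t = Laplacian(u^m)  (radial form).
n, m, C = 3, 0.8, 1.0
beta = 1.0 / (n * (m - 1.0) + 2.0)
alpha = n * beta
k = (1.0 - m) * beta / (2.0 * m)

U = lambda r, t: t**(-alpha) * (C + k * r**2 * t**(-2.0 * beta))**(-1.0 / (1.0 - m))
um = lambda r, t: U(r, t)**m

r0, t0, h = 1.3, 2.0, 1e-3
u_t = (U(r0, t0 + h) - U(r0, t0 - h)) / (2.0 * h)
lap_um = ((um(r0 + h, t0) - 2.0 * um(r0, t0) + um(r0 - h, t0)) / h**2
          + (n - 1) / r0 * (um(r0 + h, t0) - um(r0 - h, t0)) / (2.0 * h))
print(u_t, lap_um)        # the two values should agree to several digits
```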
it will be useful to keep in mind that we have the formula which describes the relationship between the barenblatt solution of mass and mass and the estimates on the profile corresponding to the barenblatt solution of mass : for suitable positive constants and depending on . [[ existence - uniqueness - regularity - and - maximum - principles . ] ] existence , uniqueness , regularity and maximum principles .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + before presenting the main results of this paper , we briefly discuss the basic properties of the solutions of problem .results about existence of weak solutions of the pure diffusive problem and its generalizations , can be found in the survey and the large number of references therein .the problem of uniqueness was studied later ( see for instance ) . the classical reference forthe regularity of nonlinear parabolic equations is , followed by a wide literature . for the porous medium case ( ) we refer to , while for the -laplacian case we suggest and the references therein . finally ,in the doubly nonlinear setting , we refer to .the results obtained show the hlder continuity of the solution of problem .finally , we mention for a proof of the maximum principle .in this section we study the asymptotic behaviour of the solution of the cauchy problem with non - trivial initial datum satisfying : for some constant .we recall here the definition of the critical exponent before proceeding , let us see how to formally derive the value of the critical exponent in the case and ( note that ) .we follow the methods used in .first of all , we fix and we consider radial solutions of the equation in , which means note that the authors of worked with a slightly different equation ( they absorbed the multiplicative factor with a simple change of variables ) . we linearize the reaction term and we assume that satisfies now , we look for a solution of of the form for which agrees with the assumption on the initial datum and with the linearization .it is straightforward to see that for such solution , the function has to solve the equation note that since , we have that is well defined and positive , while .equation belongs to the famous bernoulli class and can be explicitly integrated : hence , for all fixed , we obtain the asymptotic expansion for our solution now , for all fixed , we consider a solution of the _ logistic _ equation which describes the state in which there is not diffusion and the dynamics is governed by the reaction term .we assume to have where the leading - order term satisfies for some unknown function , with , as .now , matching with for large and , we easily deduce thus , substituting in and taking for , we have where . the previous formula corresponds to a `` similarity reduction '' ( see , pag .2533 ) of the logistic equation with .note that taking and , we have for while if and we have for .this means that setting , is a `` critical '' curve , in the sense that it separates the region in which the solution converges to to the one which converges to .we will show this property in theorem [ convergencetozerofastdiffusion ] and theorem [ convergencetoonefastdiffusion ] . in what follows , we prove that the solution of problem with initial datum converges uniformly to the trivial solution in the outer set as if . in the nex sections we will prove that this solution converges uniformly to the equilibrium point in the inner set as if . 
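The matching argument sketched above can be mimicked numerically in a few lines: neglect diffusion altogether, let every point evolve with the logistic reaction from a radial power-law tail (the kind of tail the fast diffusion itself produces), and watch the half-level set expand exponentially, at a rate fixed by the tail exponent. The tail exponent g and the amplitude eps below are generic placeholders, not the exponents of the actual Barenblatt tails, so the sketch only illustrates the mechanism behind the critical exponent, not its precise value.

```python
import numpy as np

# Reaction-only caricature of the matching argument: each point evolves with the logistic
# reaction (taking f'(0) = 1) from a radial power-law tail u0(r) = eps * r^(-g), so that
#   u(r,t) = u0 e^t / (1 + u0 (e^t - 1)),
# and u crosses 1/2 roughly where u0(r) e^t ~ 1, i.e. at r ~ (eps e^t)^(1/g):
# the level sets expand exponentially in time, at a rate fixed by the tail exponent g.
eps, g = 1e-3, 4.0
r = np.logspace(0.0, 6.0, 4000)
u0 = np.minimum(1.0, eps * r**(-g))
for t in (10.0, 15.0, 20.0, 25.0):
    u = u0 * np.exp(t) / (1.0 + u0 * (np.exp(t) - 1.0))
    r_half = r[np.where(u > 0.5)[0][-1]]
    print(t, r_half, (eps * np.exp(t))**(1.0 / g))   # the two radii track each other
```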
[ [ proof - of - theorem - convergencetozerofastdiffusion . ] ]proof of theorem [ convergencetozerofastdiffusion ] .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + fix , , and .first of all , we construct a super - solution for problem , using the hypothesis on the function . indeed , since for all , the solution of the linearized problem gives the super - solution we are interested in and , by the maximum principle , we deduce in .now , consider the change of the time variable , \quad \text{for } t \geq 0,\ ] ] with .then the function solves the problem from the properties of the profile of the barenblatt solutions and the hypothesis on the initial datum , it is evident that there exist positive numbers and such that in and so , by comparison , we obtain now , since the profile of barenblatt solutions satisfies for some constant and for all ( see ) , we can perform the chain of upper estimates where we set and we used the first relation in in the third inequality .note that we used that , too .now , supposing in the last inequality , we get since we have chosen , completing the proof . section is devoted to prove that for all and initial data `` small enough '' , the solution of problem lifts up to a ( small ) positive constant on the `` inner sets '' for large times .let and be positive real numbers and , for all , consider the initial datum where . note that has `` tails '' which are asymptotic to the profile of the barenblatt solutions for large ( see formula ). the choice will be clear in the next sections , where we will show the convergence of the solution of problem , to the steady state .the nice property of the initial datum is that it can be employed as initial `` sub - datum '' as the following lemma shows .[ lemmaplacingbarenblattundersolution ] fix and let and such that .then for all , there exist , , and , such that the solution of problem with nontrivial initial datum satisfies where is defined in ._ let the solution of problem with nontrivial initial datum and consider the solution of the purely diffusive cauchy problem : it satisfies in thanks to the maximum principle .let .since is continuous in and non identically zero ( the mass of the solution is conserved in the `` good '' exponent range ) , we have that it is strictly positive in a small ball , and . without loss of generality , we may take .so , by continuity , we deduce in ] .first of all , we construct a barenblatt solution of the form such that since the profile of the barenblatt solution is decreasing , we impose in order to satisfy in the set .moreover , using and noting that , it simple to get and so , it is sufficient to require , so that is valid in . 
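Both the proof of theorem [convergencetozerofastdiffusion] above and the comparison arguments that follow rely on the same mechanism: the linearized reaction term is absorbed into an exponential prefactor together with a rescaled time, so that everything reduces to the pure diffusion flow. The sketch below checks this bookkeeping in the porous medium special case p = 2, n = 1, m = 1/2 (the general doubly nonlinear exponents are the ones written in the proofs; the values here are only for illustration), reusing the explicit fast-diffusion Barenblatt solution: if B solves the pure diffusion equation, then u(x, t) = e^t B(x, tau(t)) with tau'(t) = e^{(m-1)t} solves u_t = (u^m)_xx + u.

```python
import sympy as sp

x, t, s, C = sp.symbols('x t s C', positive=True)   # s plays the role of tau(t)
m = sp.Rational(1, 2)
beta = sp.Integer(1)/(m + 1)                         # n = 1 exponents, as in the earlier sketch
alpha = beta
kappa = (1 - m)*beta/(2*m)

B = s**(-alpha)*(C + kappa*x**2*s**(-2*beta))**(-1/(1 - m))   # Barenblatt at rescaled time s
tau_prime = sp.exp((m - 1)*t)                        # tau(t) = (e^{(m-1)t} - 1)/(m - 1)

u = sp.exp(t)*B                                      # candidate solution u = e^t B(x, tau(t))
u_t = u + sp.exp(t)*tau_prime*sp.diff(B, s)          # chain rule in t, written out by hand
u_m = sp.exp(m*t)*B**m                               # u^m = e^{mt} B^m (both factors positive)

residual = sp.simplify(u_t - sp.diff(u_m, x, 2) - u)
print(residual)                                      # 0: u solves u_t = (u^m)_xx + u
```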
thus , it is simple to obtain the relations now , consider the linearized problem and the change of variable , \quad \text{for } t \geq 0.\ ] ] note that and the function solves the `` pure diffusive '' problem since for all , from the maximum principle we get hence , using the concavity of and the second inequality in we get \ ] ] and so , since implies , we have that is a sub - solution of problem , in ] by and so , following the chain of inequalities as before , we get now , in order to conclude the proof of the case , we must check that the conditions and actually represent a possible choice and the value of , defined at the beginning , performs their compatibility .the compatibility between and can be verified imposing ^{\frac{\alpha p}{\;n\widehat{\gamma}}},\ ] ] which can be rewritten using the definitions as now , it is simple to verify that condition implies and so it holds too .hence , a sufficient condition so that is satisfied and does not depend on is i.e. , our initial choice of in which proves the compatibility between and . _remark 2_. rewriting formula using the definition of , it is simple to deduce and , using the second hypothesis on in , it is straightforward to obtain .in particular , we have shown i.e. , the thesis for .[ [ iteration . ] ] iteration .+ + + + + + + + + + set , and for all and define we suppose to have proved that the solution of problem , satisfies and we show in for the values and . from the induction hypothesis, we have that the solution of the problem is a sub - solution of problem , in which implies in and so , it is sufficient to prove in .since we need to repeat almost the same procedure of the case , we only give a brief sketch of the induction step . _step1_. construction of a sub - solution of problem , , in ] , where , \quad \text{for } t \geq t_{j-1}.\ ] ] in particular , note that , , and _ step2_. we have to study a chain of inequalities similar to the one carried out in _ step2 _ verifying that thus , imposing conditions similar to and and requiring their compatibility , we have to check the validity of the inequality since , we have and so , a sufficient condition so that the previous inequality is satisfied is , which is guaranteed by the initial choice of .finally , following the reasonings of the case it is simple to obtain the relation which implies and we complete the proof . * proof of proposition [ expanpansionofminimallevelsets ] . *the previous lemma proves that for the sequence of times and for any choice of the parameter , the solution of problem , reaches a positive value in the sequence of sets where is chosen large enough ( in particular , we can assume ) . actually , we obtained a more useful result .first of all , note that , for all , lemma [ lemmaexpandingfastlevelsets ] implies for all .moreover , since conditions are satisfied for all , we can repeat the same proof of lemma [ lemmaexpandingfastlevelsets ] , modifying the value of and choosing a different value , which is smaller but strictly positive for all .hence , it turns out that for all , it holds now , iterating this procedure as in the proof of lemma [ lemmaexpandingfastlevelsets ] , it is clear that we do not have to change the value of when grows and so , for all , we obtain then , using the arbitrariness of , we complete the proof . [ [ remark .] 
] remark .+ + + + + + + note that , to be precise , in the proof of proposition [ expanpansionofminimallevelsets ] , we have to combine lemma [ lemmaplacingbarenblattundersolution ] with lemma [ lemmaexpandingfastlevelsets ] as follows .let the solution of problem with initial datum .we wait a time given by lemma [ lemmaplacingbarenblattundersolution ] , in order to have for all and some depending on .now , thanks to the maximum principle , we deduce in , where we indicate with the solution of problem with initial datum . in this way, we deduce the statement of lemma [ lemmaexpandingfastlevelsets ] for more general initial data satisfying and we can prove proposition [ expanpansionofminimallevelsets ] .as mentioned in the introduction , we now address to the problem of showing the convergence of a general solution of problem , to the steady state . as anticipated , we find that the convergence to 1 is exponential for large times , with exponent .this fact represents an interesting deviance , respect to the case ( i.e. ) in which the solutions converge with constant speed for large times and show a tw asymptotic behaviour .[ [ proof - of - theorem - convergencetoonefastdiffusion . ] ] proof of theorem [ convergencetoonefastdiffusion ] .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + fix , and set in .we will prove that for all , there exists such that which is equivalent to the assertion of the thesis . __ fix and consider the inner set , where is initially arbitrary .we recall that proposition [ expanpansionofminimallevelsets ] assures the existence of and such that in the set . in particular , for all ,we have that is bounded from below and above in the inner set : moreover , it is not difficult to see that , setting and , the function solves the problem ^m \qquad\qquad & \text{in } { \mathbb{r}}^n . \end{aligned } \end{cases}\ ] ] using , it is simple to see that where for what concerns , it is bounded from below in : where and depends on and . indeed , if we have that for all .hence , we get our bound from below recalling and noting that if , we have the formula and so , since and arguing as in the case , we deduce for some depending on and . in particular , it follows that satisfies i.e. , is a sub - solution for the equation in problem in the set ._ in this step , we look for a super - solution of problem with in .we consider the solution of the problem according to the resume presented in section [ appendixselfsimsolincinitdata ] , problem is well posed if when .further assumptions are not needed when . furthermore , since can be chosen smaller and larger , we make the additional assumption now , we define the function \;\;\quad & \text{if } 1 < p < 2 \\ \frac{1}{a_1}(t - t_1 ) \qquad\qquad\qquad\qquad\qquad\quad & \text{if } p = 2 \\ \frac{1}{c_0(p-2)}\big[1 - e^{-(c_0/a_1)(p-2)(t - t_1 ) } \big ] \quad & \text{if } p > 2 .\end{aligned } \end{cases}\ ] ] note that is increasing with for all .moreover , we define the limit of as with the formula ^{-1 } \quad & \text{if } p > 2 .\end{aligned } \end{cases}\ ] ] then , the function ( with ) solves the `` pure diffusive '' problem as we explained in section [ appendixselfsimsolincinitdata ] , for all the problem admits self - similar solutions , with self - similar exponents and profile with for all , where we set . note that since we assumed when , the self - similar exponents are well defined with and for all . 
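For reference, the rescaled time used in Step 2 can be evaluated as follows. The branches for p = 2 and p > 2 are the ones displayed above; the branch for 1 < p < 2 is reconstructed (an assumption) from the requirements tau(t_1) = 0 and tau'(t) = (1/a_1) e^{-(c_0/a_1)(p-2)(t - t_1)}, which is consistent with the other two cases. The constants a_1, c_0 and the delay t_1 are the ones appearing in the proof and are given placeholder values here.

```python
import numpy as np

def tau(t, p, a1=1.0, c0=1.0, t1=0.0):
    """Rescaled time used to build the super-solution in Step 2 (placeholder constants)."""
    s = np.asarray(t, dtype=float) - t1
    if p < 2:
        # reconstructed branch, consistent with tau(t1) = 0 and the other two cases
        return 1.0/(c0*(2.0 - p))*(np.exp((c0/a1)*(2.0 - p)*s) - 1.0)
    if p == 2:
        return s/a1
    return 1.0/(c0*(p - 2.0))*(1.0 - np.exp(-(c0/a1)*(p - 2.0)*s))

# tau is increasing and unbounded for p <= 2, while for p > 2 it saturates at the
# finite value tau_infty = 1/(c0*(p-2)), matching the limit given in the text.
```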
finally , recall that it is possible to describe the spacial `` decay '' of the self - similar solutions for large values of the variable , with the bounds for a constant large enough , see formula .now , it is not difficult to see that , for all fixed delays .moreover , we compute the time derivative : \big\ } \\ & = -(\tau + \tau_1)^{-\alpha_{\lambda}-1}e^{-\frac{c_0}{a_1}(t - t_1 ) } \bigg\ { \frac{c_0}{a_1}(\tau + \tau_1)^{\alpha_{\lambda}+1 } + \bigg[\frac{c_0}{a_1}(\tau + \tau_1 ) + \alpha_{\lambda } \tau ' \bigg]f(\xi ) + \beta_{\lambda } \tau ' \xi f'(\xi ) \bigg\ } , \end{aligned}\ ] ] where and stands for the derivative respect with the variable .let s set since , , , and are non - negative and , in order to have , it is sufficient to show for all and a suitable choice of .if , this follows from a direct and immediate computation , choosing large enough .if , we may proceed similarly .it is simple to see that condition for reads ^{(c_0/a_1)(2-p)(t - t_1 ) } \geq 1 - \frac{\tau_1}{\tau_{\infty}}.\ ] ] consequently , since , it is sufficient to choose . finally , when it holds for all .hence , it is simple to see that the choice is a sufficient condition so that for all .we stress that the choice of is independent of .now , using the fact that in and that in , it is straightforward to see that _ step3 ._ now we compare the functions and , applying the maximum principle of section [ sectionmaxprinccyldomains ] .hence , we have to check that the assumptions in proposition [ maxprinnoncyldomains ] are satisfied .it is simple to see that it holds in .indeed , we have while ^m \leq 1 ] to be lipschitz continuous . __ in this last step , we conclude the proof .the following procedure holds for all ( see also ) .thanks to corollary [ conjectureforlevelsetssigmaastfast ] , to deduce the second inclusion in : it is sufficient to prove in , for some .so , in the set , we have note that the bound can be extended to the region , thanks to our assumption on and theorem [ convergencetoonefastdiffusion ] .consequently , applying corollary [ conjectureforlevelsetssigmaastfast ] with , and , we end the proof of the theorem .in this brief section , we give the proof of a maximum principle for a certain class of parabolic equations with -laplacian diffusion .as mentioned in the introduction , a similar result have been introduced in , but proved with different techniques .this comparison principle is crucial in the study of the asymptotic behaviour of the general solutions of the fisher - kpp problem , see theorem [ convergencetoonefastdiffusion ] . before proceeding we need to introduce some definitions .first of all , let be a positive and non - decreasing function , and consider the `` inner - sets '' now , for all , we consider the equation where is a continuous function in , with in , and . 
the next definition is given following .see also , chapter 8 for the porous medium setting .a nonnegative function is said to be a `` local strong '' super - solution of equation in if ( i ) , and ; \(ii ) satisfies \eta + |\nabla \overline{u}|^{p-2}\nabla\overline{u}\,\nabla\eta \geq 0,\ ] ] for all test function , .a nonnegative function is said to be a `` local strong '' sub - solution of equation in if ( i ) , and ; \(ii ) satisfies \eta + |\nabla \underline{u}|^{p-2}\nabla\underline{u}\,\nabla\eta \leq 0,\ ] ] for all test function , .[ maxprinnoncyldomains ] consider two functions and defined and continuous in .assume that : ( a1 ) in .( a2 ) in .( a3 ) finally , assume that is a `` local strong '' super - solution and is a `` local strong '' sub - solution of equation in .then in . _ proof ._ let s fix . for all ,we define the subset of we show that for all , it holds +\big\|_{l^1(\omega_{i , t } ) } \leq \big\|[\underline{u}(0 ) - \overline{u}(0)]_+\big\|_{l^1({\mathbb{r}}^n)},\ ] ] where + ] with , we can take as test function thus , by the definition of sub- and super - solutions , it is simple to deduce (w_j)h + \big<\,|\nabla\underline{u}|^{p-2}\nabla\underline{u } - |\nabla\overline{u}|^{p-2}\nabla\overline{u},\nabla w_j\,\big > p'(w_j)h \,dxdt \leq 0.\ ] ] the second integral converges to thanks to the fact that for all and , see the last section of .hence , taking the limit in the second integral we deduce (\underline{u}-\overline{u})h \,dxdt\leq 0,\ ] ] and , letting +(\cdot) ] , , and we easily get + dx\bigg ) h(t ) \,dt \leq 0 \quad \text { for all } \ ; h \in c_c^1([0,t ] ) , \ ; 0 \leq h \leq 1.\ ] ] thus , thanks to arbitrariness of , we deduce that + dx \leq 0,\ ] ] for all . using assumption ( a2 ) again, it is not difficult to deduce + dx\bigg ) \leq 0,\ ] ] which implies +\big\|_{l^1(\omega_{i , t } ) } \leq \big\|[\underline{u}(0 ) - \overline{u}(0)]_+\big\|_{l^1(\omega_{i,0 } ) } \leq \big\|[\underline{u}(0 ) - \overline{u}(0)]_+\big\|_{l^1({\mathbb{r}}^n)},\ ] ] i.e. , the thesis . [ [ remark.-1 ] ] remark .+ + + + + + + we point out that the functions we use in the proof of theorem [ convergencetoonefastdiffusion ] satisfy the assumptions of regularity required in the statement of proposition [ maxprinnoncyldomains ] , as we have remarked in the introduction .see also the bibliography reported in the next section .in this section , we recall some basic facts about the existence of barenblatt solutions for the cauchy problem where . in particular , we focus on the specific initial datum a more complete analysis of the self - similarity of the -laplacian equation can be found in .we have decided to dedicate an entire appendix to this topic since solutions of problem play a main role in the proof of theorem [ convergencetoonefastdiffusion ] .moreover , we think it facilitates the reading and gives us the occasion to present the related bibliography . before proceeding with our analysis , we need to recall some important properties about problem .[ [ case - boldsymbolp-2 . ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the existence and uniqueness of solutions for the heat equation for continuous non - integrable initial has been largely studied , see tychonov and the references therein . in particular , he proved that if the initial datum satisfies and , then problem , admits a unique ( classical ) solution defined in . 
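As an explicit instance of the case p = 2 just recalled, the heat equation with the increasing quadratic datum |x|^2 has the closed-form solution |x|^2 + 2nt, which already displays the self-similar structure t^{alpha_lambda} f(x t^{-beta_lambda}) with beta_lambda = 1/2 and alpha_lambda = lambda beta_lambda = 1 derived below. A quick symbolic check (dimension n = 3 is chosen only for concreteness):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
n = 3

u = x1**2 + x2**2 + x3**2 + 2*n*t     # = t * f(x/sqrt(t)) with f(xi) = |xi|^2 + 2n
residual = sp.diff(u, t) - (sp.diff(u, x1, 2) + sp.diff(u, x2, 2) + sp.diff(u, x3, 2))
print(sp.simplify(residual))          # 0: u solves the heat equation
print(u.subs(t, 0))                   # x1**2 + x2**2 + x3**2: the increasing initial datum
```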
more work on this issue can be found in .[ [ case - boldsymbolp-2.-1 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + this range was studied in , by dibenedetto and herrero .the authors showed that , under the assumptions for some and , there exists a unique weak solution of problem , defined in ( see theorem 1 , theorem 2 , and theorem 4 of ) .furthermore , they proved that ( i.e. is a `` local strong solution '' ) and the function is locally hlder continuous in ( see also ) .[ [ case - boldsymbol1-p-2 . ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the same authors ( see ) considered problem with and nonnegative initial data without any assumption on the decay at infinity of .first of all , they show existence and the uniqueness of weak solutions of problem , by using the benilan - crandall regularizing effect , see . then they posed their attention on the regularity of these solutions when the initial datum is a non - negative -finite borel measure in , in the range .in particular , they showed the existence and the uniqueness of a locally hlder continuous weak solution in , with ( i.e. they are `` local strong solutions '' ) , with locally hlder continuous in .the sub - critical range was studied later by bonforte , iagar and vzquez in .they proved new local smoothing effects when the initial datum is taken in and sub - critical , and special energy inequalities which are employed to show that bounded local weak solutions are indeed `` local strong solutions '' , more precisely .then , thanks to the mentioned smoothing effect and known regularity theory ( and ) they found that the local strong solutions are locally hlder continuous .[ [ barenblatt - solutions - for - problem - eqcauchyproblemplaplacianincreasinginitialdata - eqassumptionindatumpowertype . ] ] barenblatt solutions for problem ( [ eq : cauchyproblemplaplacianincreasinginitialdata ] ) , ( [ eq : assumptionindatumpowertype ] ) .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + from now on we take , .we do not make any other assumptions on if , whilst when we assume , according to the theory developed in , and presented before . as mentioned before , the assumptions on the parameter guarantees the existence , the uniqueness and the hlder regularity of the solution of problem , , for all .we look for solutions in _ self - similar _form where and are real numbers and is called profile of the solution .let and write .it is not difficult to compute and , by taking we have , and so we obtain the equation of the profile furthermore , since guarantees that the equation in is invariant under the transformation , , we use the uniqueness of the solution of problem , to deduce hence , we get and , combining it with , we obtain the precise expressions for the self - similar exponents we point out that , thanks to the assumption when , we have for all , and so while . [ [ properties - of - the - barenblatt - solutions . 
] ] properties of the barenblatt solutions .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we are going to prove that the profile of the barenblatt solutions is positive and monotone non - decreasing by applying the aleksandrov s symmetry principle .later , we show some asymptotic properties of the profile .let , with and , for all , consider the approximating sequence of initial data note that are both radial non - decreasing and bounded in .now , consider the sequence of initial data and the sequence of solutions of problem with initial data , for all .hence , by applying the aleksandrov s symmetry principle , we deduce that for all times , the solutions are radially non - increasing in space too .finally , we define the sequence which are radially non - decreasing in space and solve problem with initial data , for all .hence , passing to the limit as , we have and the limit , solution of problem with initial datum , inherits the same radial properties of the sequence .now , we show the existence of two constants such that the following asymptotic bounds hold estimates follow directly from that fact that as .indeed , for all fixed , we have that since , the left expression converges to 0 as , we deduce that , as and , from this limit , we get . [ [ aleksandrovs - symmetry - principle . ] ] aleksandrov s symmetry principle .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the aleksandrov - serrin symmetry method was firstly introduced in and to show monotonicity of solutions of both ( eventually nonlinear ) elliptic and parabolic equations . here ,following , we give a short proof for the case of the `` pure diffusive '' -laplacian equation in , for all . before proceeding with the statement , we fix some notations .let be an hyperplane in , and the two half - spaces `` generated '' by , and the reflection with respect to the hyperplane .let be a solution of the initial - value problem with initial datum .suppose that then , for all times it holds in particular , radial initial data generate radial solutions . _proof_. first of all , thanks to the rotation invariance of the equation in , we can assume and .moreover , it follows that solves problem in with initial datum .now , we have in and in . hence , since the solution is continuous , we get the thesis by applying the maximum principle . note that , to be precise , we should consider solutions of the cauchy - dirichlet problem posed in the ball with zero boundary data .these solutions approximate and .consequently , we can apply the maximum principle to these approximate solutions and , finally , pass to the limit as .see chapter 9 of for more details .if is radial , we can apply the statement for all hyperplane passing through the origin of and deducing that for all times , the solution is radial respect with the spacial variable too .we end the paper by discussing some open problems . moreover , we present some final comments and remarks to supplement our work .as we have mentioned in the introduction , nonlinear evolution processes give birth to a wide variety of phenomena .indeed we have seen that solutions of problem exhibit a travelling wave behaviour for large times when , i.e. 
, while infinite speed of propagation when .it is natural to ask ourselves what happens in the range of parameters that we call `` very fast diffusion assumption '' .however , respect to the porous medium and the -laplacian case , we have to face the problem of lack of literature and previous works related to the doubly nonlinear operator ( in this range of parameters ) .for this reason , in the next paragraphs we will briefly discuss what is known for the porous medium and the -laplacian case , trying to guess what could happen in the presence of the doubly nonlinear operator .we stress that our approach is quite _ formal _ , but can be interesting since it gives a more complete vision of the fast diffusion range , and allows us to explain what are ( or could be ) the main differences respect to the range .this critical case was firstly studied by king and later in for the porous medium setting , i.e. and , with .when it follows , choice of parameter which goes out of our range and we avoid it .king studied the asymptotic behaviour of radial solutions of the pure diffusive equation with , , and .actually , he considered a slightly different equation absorbing a factor in the time varible and he studied the cases and , too .note that the choice corresponds to when .in , the author described the asymptotic behaviour of radial solutions of equation , given by the formula where is a constant depending on and on the initial datum ( see formula ( 2.34 ) of ) .in particular , it follows that the solutions of have spacial power like decay `` corrected '' by a logarithmic term for .we are interested in seeing that an analogue decay holds when and in the doubly nonlinear setting .we proceed as in , see section 2 ._ asymptotic behaviour for large ._ let s take for a moment .seeking solutions of equation in separate form as in _ step1 _ of theorem [ theoremboundsforlevelsetsfast ] , it is simple to see that if the initial datum satisfies , then for all we have for some suitable constant .note that it corresponds to fix and take the limit as in the formula of the barenblatt solutions , see subsection [ preliminariesintro ] .now , motivated by the previous analysis , we fix ( in order to remain in the ranges and ) , , and we look for solutions of equation in the form for some correction function as and some constant . in what followswe ask as , too .it is simple to compute as , where we have used the fact that .hence , it is simple to see that solves if and only if now , it is clear that a possible choice is , for some , and a straightforward computation shows that the previous equation is satisfied by taking so that for all , we obtain which generalizes the case and ._ barenblatt solutions for ._ as was observed in ( see pag .346 ) , does not respect the self - similarity reduction of equation .indeed , it admits `` pseudo - barenblatt '' solutions which , following the notation of , can be written in the form where ^{-\frac{(p-1)}{p}n } , \qquad \xi = xr(t)^{-1 } , \qquad r(t ) = e^t,\ ] ] and is a free parameter ( cfr . with for the case and with formula for the range ) .we point out that the profile satisfies the inequalities in with . however , these self - similar solutions ( also called `` of type iii '' , see ) are quite different from the ones in the range .in particular , they are eternal , i.e. defined for all and they do not converge to a dirac delta as ( see also ) . 
finally , for all fixed , these self - similar solutions are _ not integrable _ respect with to the spacial variable and show the spacial decay taking into account these facts , when it seems reasonable to study problem with nontrivial initial datum satisfying for some constant , and trying to extend the techniques used for the range , to the this critical case .first of all , we can define by continuity .thus , it is possible to repeat the proof of theorem [ convergencetozerofastdiffusion ] by using `` pseudo - barenblatt '' solutions instead of the usual ones . in this way , for all , we show the convergence of the solutions to 0 in the `` outer sets '' , as .however , it is clear that the methods employed for showing proposition [ expanpansionofminimallevelsets ] can not be used in this case too .indeed , in the range , this crucial proposition has been proved by constructing barriers from below with barenblatt solutions .this has been possible since the initial datum in shares the same spacial decay of these self - similar solutions . in the critical case , this property would not be preserved as suggests . in particular , `` pseudo - barenblatt '' solutionscannot be placed under an initial datum satisfying and so the validity of proposition [ expanpansionofminimallevelsets ] in this critical case remains an open problem . before discussing the doubly nonlinear diffusion , let us recall what is known in the porous medium setting in the corresponding range of parameters , , , and . consider the porous medium equation where .it has been proved that the corresponding solution extinguishes in finite time ( see for instance and the references therein ) . in other words , there exists a critical `` extinction time '' such that , for all .again , the cases , and , are critical and we refer to , chapters 5 to 8 . _ barenblatt solutions for so , even though there is not literature on the subject ( at least to our knowledge ) , it seems reasonable to conjecture that the doubly nonlinear diffusion shows a similar property in the range , with . in particular , also in this case we have `` pseudo - barenblatt '' solutions written in the form ^{-\frac{p-1}{\widehat{\gamma}}},\ ] ] where and , with a strong departure from , ^{-\frac{|\alpha|}{n}},\ ] ] where is fixed and stands for the `` extinction time''(cfr . with pag . 194 or for the case , and with formula for the range ) .the existence of this kind of self - similar solutions ( also said in `` of type ii '' ) strengthen the idea that a larger class of solutions have an extinction time , i.e. they vanish in finite time ._ application to the fisher - kkp equation . _ in section [ convergencetozerofast ] we have seen that the linearized problem gives a super - solution for the fisher - kpp problem with nontrivial initial datum , . again , with the change of variable , \quad \text{for } t \geq 0,\ ] ] we deduce that the function solves the problem now , set and note that .now , let be the `` extinction time '' of the solution of problem .thus , we deduce , for all and , if , it follows which implies for all $ ] , and so the solution of the fisher - kpp problem with initial datum extinguishes in finite time , too .this conclusion holds under the assumption , which should be guaranteed if the the initial datum is `` small enough '' ( in terms of the mass ) , see chapter 5 , for the porous medium setting .the analysis of the case in which the initial mass is infinite is an interesting open problem .* acknowledgments . 
*both authors have been partially funded by projects mtm2011-24696 and mtm2014-52240-p (spain). work partially supported by the erc advanced grant 2013 n. 339958 `` complex patterns for strongly interacting dynamical systems - compat ''. we thank félix del teso for providing us with the numerical simulations that we display .
the famous fisher - kpp reaction - diffusion model combines linear diffusion with the typical fisher - kpp reaction term , and appears in a number of relevant applications . it is remarkable as a mathematical model since , in the case of linear diffusion , it possesses a family of travelling waves that describe the asymptotic behaviour of a wide class of solutions of the problem posed on the real line . the existence of travelling waves with finite speed of propagation has been confirmed in the cases of `` slow '' and `` pseudo - linear '' doubly nonlinear diffusion too , see . here we investigate the corresponding theory with `` fast '' doubly nonlinear diffusion and we find that general solutions show a non - tw asymptotic behaviour , with exponential propagation in space for large times . finally , we prove precise bounds for the level sets of general solutions , even when we work in general spatial dimension . in particular , we show that the location of the level sets is approximately linear for large times when we adopt a logarithmic spatial scale , finding a strong departure from the linear diffusion case , in which the famous bramson logarithmic correction appears .
the recovery of sparse signals of high dimensions on the basis of noisy linear measurements is an important problem in the field of signal acquisition and processing .when the number of linear observations is significantly lower than the dimension of the signal to be recovered , the signal recovery may exploit the property of sparsity to deliver correct results .the field of research that studies such problems is often referred to as _ compressed sensing _ or _ compressive sensing _ ( cs ) .+ several computationally tractable methods to address cs problems have been developed in the last two decades . among them , greedy methods prove to be valuable choices as their complexity is significantly lower than that of algorithms based on -minimization . + while many cs problems involve only one sparse signal and the corresponding _ measurement vector _ , _i.e. _ , the vector gathering all the linear observations of this signal , some applications either require or at least benefit from the presence of several sparse signals and measurement vectors .examples of such applications are available in section [ subsec : applications ] .models involving one measurement vector are referred to as single measurement vector ( smv ) models while multiple measurement vector ( mmv ) models involve at least two measurement vectors .+ when the supports of the sparse signals are similar , it is possible to improve the reliability of the recovery by making joint decisions to determine the estimated support .thereby , all the measurement vectors intervene in the estimation of the support and the final support is common to all the sparse vectors .algorithms performing joint recovery are also capable to weaken the influence of additive measurement noise on the performance provided that the noise signals are statistically independent and exhibit some degree of isotropy . + orthogonal matching pursuit ( omp ) is one of the most extensively used greedy algorithm designed to solve smv problems . among several greedy algorithmsconceived to deal with multiple measurement vectors , the extension of omp to the mmv paradigm , referred to as simultaneous orthogonal matching pursuit ( somp ) , is of great interest as it remains simple , both conceptually and algorithmically .the classical somp algorithm does not account for the possibly different measurement vector noise levels . in some sense, all the measurement vectors are considered equally worthy .however , it is clear that an optimal joint support recovery method should necessarily take into account the noise levels by accordingly weighting the impact of each measurement vector on the decisions that are taken .the first aim of this paper is to extend somp by gifting it with weighting capabilities .the new algorithm will be referred to as somp with noise stabilization ( somp - ns ) and basically extends the decision metric of somp to weight the impact of each measurement vector onto the decisions that are taken .+ the second objective is to provide theoretical and numerical evidence that the proposed algorithm indeed enables one to achieve higher performance than the other greedy alternatives when the noise levels , or more generally the signal - to - noise ratios ( snr ) , vary from one measurement vector to another .we study partial and full support recovery guarantees of somp - ns for a mmv signal model incorporating arbitrary sparse signals to be recovered and statistically independent additive gaussian noise vectors exhibiting diagonal covariance matrices , _i.e. 
_ , the entries within each vector are statistically independent . it is assumed that the variances of the entries within each noise vector are identical although they may be different for each measurement vector . the signal model is thoroughly detailed in section [ subsec : signalmodel ] .+ our first contribution is the proposal of somp - ns which generalizes somp by weighting the measurement vectors .the second contribution is a novel theoretical analysis of somp and somp - ns in the presence of additive gaussian noise on the measurements . to the best of the authors knowledge, the theoretical analysis in this paper has never been proposed , neither for somp nor for somp - ns .+ finally , numerical simulations will show that the weighting capabilities of somp - ns enable one to improve the performance with regards to somp when the noise vectors exhibit different powers .the numerical results will also provide evidence that the theoretical analysis accurately depicts key characteristics of somp - ns .in particular , closed - form formulas for the optimal weights will be derived from the theoretical analysis and will be compared to the simulation results .several authors have worked on similar problems .the study of full support recovery guarantees for omp with or -bounded noises as well as with gaussian noises has been performed in .the authors of also provided conditions on the stopping criterion to ensure that omp stops after having picked all the correct atoms .+ our analysis is similar to that performed by tropp in for convex programming methods in a smv setting .together with gilbert , they analyzed the probability of full support recovery by means of omp for gaussian measurement matrices in the noiseless case .their result has subsequently been refined by fletcher and rangan in to account for additive measurement noise by means of a high - snr analysis , _i.e. _ , it is assumed that the signal - to - noise ratio scales to infinity .all of the papers discussed so far only focus on the smv framework .+ the theoretical analysis of our paper is partially inspired from and has been generalized to the mmv framework .it is worth pointing out that our analysis does not require the high snr assumption of , properly captures the benefits provided by multiple measurement vectors but nevertheless presents some inaccuracies that are to be discussed at the end of this paper .+ gribonval _ et al . _ have performed an analysis of somp for a problem similar to ours in .they were interested in providing a lower bound on the probability of correct support recovery when the signal to be estimated is sparse and its non - zero entries are statistically independent mean - zero gaussian random variables exhibiting possibly different variances . +while our statistical analysis considers the additive measurement noise as a random variable and the sparse signals to be recovered as deterministic quantities , the results obtained in use the opposite approach , _i.e. 
_ , the sparse signals are random and the noise is deterministic .thus , the problem addressed in our paper differs from that presented in but both papers use similar mathematical tools and the criteria to ensure full support recovery with high probability share analogous properties .this last remark will be further discussed in section [ subsec : gribonvalrelatedthm ] .first of all , section [ sec : sigmodel ] progressively introduces the context , provides a detailed description of the signal model and depicts an associated application .afterwards , section [ sec : sompandsompns ] provides descriptions of somp and somp - ns . + before deriving the theoretical analysis , section [ sec : background ] introduces the mathematical tools necessary for its execution .section [ sec : section3 ] then provides general theoretical results on the proper recovery of sparse vectors by means of somp - ns . on the basis of the results from section [ sec : section3 ] ,we show in section [ sec : recovguarantees ] that , for gaussian noises , the probability of failure of somp - ns decreases exponentially with regards to the number of measurement vectors . + in section [ sec :numresults ] , extensive numerical simulations show that adequate weighting strategies enable somp - ns to outperform somp whenever the noise variances for each measurement vector are different . also , a closed - form weighting strategy is derived from the theoretical analysis of the previous sections and these weights are compared to the optimal ones obtained by simulation .finally , the simulation results show which aspects of the behavior of somp - ns are properly captured by the proposed theoretical analysis .the reasons why our analysis fails to capture some characteristics of somp are discussed and potential workarounds are proposed for investigation .we find preferable to introduce here the common notations used in this paper .first of all , : = \left\lbrace 1 , 2 , \dots , n \right\rbrace ] . for any matrix , denotes the space spanned by its column vectors .also , the trace of is and the frobenius norm of is denoted by . for , denotes the submatrix of that comprises its columns indexed by .likewise , is the subvector of comprising only the components indexed within .the moore - penrose pseudoinverse of any matrix is denoted by .the orthogonal complement of a vector subspace is given by .for any random variable , its cumulative density function ( cdf ) is denoted by while its probability density function ( pdf ) is written ( when it exists ) .similarly , the joint cdf and pdf of the random variables are written as and respectively .the probability measure is given by while the mathematical expectation is denoted by .the statistical independence symbol is written .we define the support of a vector as .the `` norm '' of a vector is defined as . loosely speaking, we say that is sparse whenever .moreover , is said to be -sparse whenever .+ let us consider a collection of signals ( ) that are sparse in an orthonormal basis , _i.e. 
_ , where represents the orthonormal basis and is the sparse coefficient vector of expressed in .+ we now consider a unique linear measurement process to recover each one of the sparse coefficient vector .moreover , the measurement process is assumed to deliver a number of observations significantly lower than .additive measurement noise vectors are also accounted for .formally , the latter statements rewrite where the _ measurement matrix _ denotes the linear measurement process and the _ measurement vectors _ gather the observations .+ since the number of observations is lower than , arbitrary vectors can not be recovered from , even in the noiseless case . however , in the noiseless case , it has been shown that can be recovered provided that it is sparse enough , _i.e. _ , the cardinality of its support is below a certain threshold , and that the measurement matrix exhibits specific properties such as the restricted isometry property that is described afterwards .+ the orthonormal basis is often assumed to be the canonical basis .the reason that explains this simplification relies on the fact the the measurement matrix is usually generated as a realization of a subgaussian random matrix .it can be shown that such random matrices are well - condtionned for sparse support recovery , even when multiplied by orthonormal matrices , _i.e. _ , remains a subgaussian random matrix .this phenomenon is more thoroughly discussed in ( * ? ? ?* section 9.1 ) .thereby , the signal model will be simplified to if several sparse signals share similar supports , then it is interesting to simultaneously recover their joint support instead of performing independent and possibly different estimations of the support of every vector .the reason that explains why such a strategy yields performance improvements is twofold : 1 .it has been shown that when the sparse vectors share a similar support whose associated entries are highly variable from one vector to another , then the probability of correct support recovery increases with the number of available measurement vectors .2 . in the noisy setting , the vectors are often statistically independent and isotropic which suggests that a joint support recovery procedure could be capable of filtering them. this property is part of what the theoretical results and simulations of this paper establish .once the joint support has been recovered , is easily recovered by solving a least squares problem involving only the non - zero entries of and the associated column vectors from .we wish to provide here a formal and precise statement of the signal model to be used in the rest of this paper . +as mentioned previously , we consider measurement vectors that are generated on the basis of sparse signals whose supports are similar .we consider the following mmv model : where is composed of the measurement vectors . comprises the sparse signals .finally , the columns of correspond to measurement errors .each error vector is distributed as and all of them are statistically independent .the purpose of ( [ eq : mmvsignalmodel ] ) is to aggregate the equations defined in ( [ eq : introsigmodel2 ] ) into a single relation .this representation will be preferred throughout the rest of this paper . 
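For concreteness, one instance of the MMV model ([eq:mmvsignalmodel]) could be drawn as follows. The Gaussian dictionary with normalized columns and the Gaussian nonzero entries are illustrative choices only (the analysis treats the sparse signals as arbitrary deterministic quantities), while the per-column standard deviations reproduce the assumption of one noise level per measurement vector.

```python
import numpy as np

def draw_mmv_instance(m, n, s, K, noise_stds, seed=None):
    """Draw Phi (m x n, unit-norm columns), X (n x K, common support of size s) and
    Y = Phi @ X + E, where column k of E is N(0, noise_stds[k]^2 * I).
    All distributional choices here are illustrative, not prescribed by the model."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, n))
    Phi /= np.linalg.norm(Phi, axis=0)                  # unit-norm atoms
    support = np.sort(rng.choice(n, size=s, replace=False))
    X = np.zeros((n, K))
    X[support, :] = rng.standard_normal((s, K))         # arbitrary entries on the joint support
    E = rng.standard_normal((m, K)) * np.asarray(noise_stds, dtype=float)
    return Phi, X, Phi @ X + E, support
```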
+ before going further , we will point out that the mathematical problem of the joint support recovery is equivalent to finding the columns of that enable one to fully express .thereby , may be seen as a _dictionary matrix _ whose columns are the _ atoms _ of the associated dictionary .the problem of joint support recovery then boils down to determining which atoms to choose to simultaneously express the measurement vectors as their linear combinations .although viewing as a measurement process is well suited to the description of typical applications , the dictionary approach is more appropriate for the sake of presenting the mathematical results and will thus be adopted for the rest of this paper .however , the term _ measurement vector _ will not be replaced to stay consistent with the standard mmv terminology .+ the dictionary matrix is assumed to satisfy the restricted isometry property of an order equal to or higher than the cardinality of the joint support .moreover , it will be assumed that each column of this matrix exhibits a unit norm .we briefly review two procedures to obtain a dictionary matrix that satisfies both properties above with high probability : 1 .generate a random gaussian matrix and then normalize the norm of its columns .in such a way , the atoms are uniformly distributed on the unit hypersphere of dimension .it is possible to show that this class of random matrices satisfies the restricted isometry property with high probability ( see for example ) .2 . generate a matrix whose entries are statistically independent rademacher random variables .each column of the resulting matrix is then normalized by multiplying the matrix by to obtain atoms exhibiting unit norms .the typical scenario associated with signal models ( [ eq : introsigmodel2 ] ) and ( [ eq : mmvsignalmodel ] ) is depicted in figure [ fig : firstscenario ] .the idea is to observe a physical quantity , _e.g. _ , a chemical composition , a wireless signal , etc . at different locations and/or time instants by means of sensing nodes _ whose only purpose is to acquire observations , _i.e. _ , measurement vectors , and repatriate them to a central node ( cn ) that will simultaneously process all the data .in such a configuration , the sensing nodes are generally cheap and exhibit very limited computational capabilities and power consumption while the central node is more costly because it has to deliver higher performance. + nodes with different noise levels generate measurement vectors on which joint estimation of the sparse support can be performed ] although several applications of the signal model ( [ eq : introsigmodel2 ] ) are presented in ( * ? ? ?* section 3.3 ) , we propose to take as an example the spectrum sensing problem .spectrum sensing aims at scanning multi - gigahertz electromagnetic ( em ) spectrums at a rate that is below that of nyquist .the reason that motivates this objective is that , although most of the available frequency bands are licensed to specific users and thus costly to acquire , it has been observed that the spectrum occupancy is limited , _i.e. 
_ , the spectrum is sparse in the frequency domain at a given time instant and location .therefore , it would be interesting to observe this spectrum through an appropriate linear measurement process , as described in equation ( [ eq : introsigmodel2 ] ) , and then use algorithms tailored for cs problems to determine , in real - time , which frequency bands are free and can be used to transmit information .+ in this application , each entry of the sparse signals represents the power of a given frequency band .since most of the spectrum is assumed to be unused at a given time instant and location , the vectors are expected to be sparse . although the nodes should ideally be exposed to the same spectrum , this is not the case in practice because of the rayleigh fading that strongly attenuates some frequency bands , thus being invisible to some nodes . in multiple input multiple output ( mimo )wireless communications , this issue is circumvented by placing the receivers sufficiently far away from one another so that the fading becomes statistically independent for each node and thus highly unlikely to strongly attenuate the same frequency band for every node . in a likewise fashion, the same solution will work for our framework since occasional `` holes '' will reveal to be a nonissue when performing joint decisions .+ finally , the different nodes may exhibit different noise levels because of discrepancies in the fabrication process or because the hardware components ( _ e.g. _ , amplifiers , multipliers , filters ) of each node are different .this last remark justifies the multiple noise variances hypothesis of this paper .the two next subsections present two methods for addressing the problem of joint support recovery envisioned in section [ subsec : signalmodel ] .the first method , somp , is standard in the literature but does not include noise stabilization .the second method , our contribution , generalizes the first one by multiplying each measurement vector ( ) by a weight .the original omp algorithm has been generalized in several ways to deal with matrix signals , _ i.e. _ , mmv problems. simultaneous orthogonal matching pursuit ( somp ) is a possible generalization of omp .+ somp is a greedy algorithm that provides approximate solutions to the joint support recovery problem by successively picking atoms from to simultaneously approximate the measurement vectors .somp is described in algorithm [ alg : somp ] .+ algorithm [ alg : somp ] : + simulatenous orthogonal matching pursuit ( somp ) + , , initialization : and determine the atom of to be included in the support : + update the support : projection of each measurement vector onto : + projection of each measurement vector onto : + we now explain how somp proceeds .the residual at iteration , denoted by , consists of the projection of each one of the original signals onto the orthogonal complement of .in such a way , the residual is orthogonal to every atom that has been chosen so far .initially , the residual is chosen equal to the original signal . the decision on which atom to choose ( step 4 ) is based on the sum of the inner products of every atom with each residual measurement vector ( where refers to the -th column of ) since the index of the atom maximizing the norm is included in the support ( step 5 ) .then , the original signal is projected onto the orthogonal complement of ( steps 6 and 7 ) .+ in this setting , somp stops after exactly iterations . 
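A compact implementation of algorithm [alg:somp] could read as follows. The optional weights argument rescales each residual column before the selection step, which is exactly the modification introduced as SOMP-NS in the next subsection, so that weights=None reproduces plain SOMP. The residual update is done through a least-squares solve rather than explicit orthogonal projectors, and the final coefficients are computed from the unweighted measurements restricted to the estimated support; both are implementation choices, not part of the algorithm statement.

```python
import numpy as np

def somp(Phi, Y, s, weights=None):
    """Simultaneous OMP: returns the estimated support (size s) and the coefficients.

    weights (length K) rescale the measurement vectors in the selection metric;
    weights=None gives plain SOMP, a nontrivial choice gives the weighted variant
    (SOMP-NS) discussed in the next subsection.
    """
    m, K = Y.shape
    w = np.ones(K) if weights is None else np.asarray(weights, dtype=float)
    R = Y * w                                       # weighted residual (second form)
    support = []
    for _ in range(s):
        metric = np.abs(Phi.T @ R).sum(axis=1)      # sum over k of |<phi_j, r_k>|
        metric[support] = -np.inf                   # never pick the same atom twice
        support.append(int(np.argmax(metric)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], Y * w, rcond=None)
        R = Y * w - Phi[:, support] @ coef          # projection onto span(Phi_S)^perp
    X_hat = np.zeros((Phi.shape[1], K))
    coef, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
    X_hat[support, :] = coef                        # least squares on the unweighted data
    return sorted(support), X_hat
```

A natural heuristic is to pass weights inversely proportional to the noise standard deviations when they are known; whether and when such a choice is optimal is precisely the question addressed by the weighting strategies studied later in the paper.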
however , it is worth mentioning that the stopping criterion usually comprises a criterion based on the number of iterations as well as another one relying on the norm of the residual , _i.e. _ , if the norm of the residual is below a certain threshold , then omp stops .different norms can be used for the second criterion but these considerations will not be further discussed in this paper .the interested reader can consult for related matters .+ furthermore , maximizing the norm in step is not the unique choice .other authors have investigated different norms , _e.g. _ , the and norms .nevertheless , some numerical simulations reveal that the choice of the norm has very little effect on the performance ( see ( * ? ? ?* figure 3 ) ) .we now present the development of a noise stabilization strategy to be used in conjunction with somp that has low computational requirements .the equivalent new algorithm is referred to as somp - ns where ns stands for _ noise stabilization_. algorithm [ alg : sompns ] describes the first form of somp - ns . +this novel algorithm is a generalization of somp that weights the impact of each measurement vector within matrix on the decisions performed at each iteration .+ algorithm [ alg : sompns ] : + somp with noise stabilization ( somp - ns ) first form + , , , and determine the atom of to be included in the support : + projection of each measurement vector onto : + projection of each measurement vector onto : + somp - ns is actually very close to somp .both algorithms decide on which atom to pick on the basis of a sum of absolute values of inner products , each term in the sum depending only on one measurement vector .somp gives the same importance to each measurement vector whereas its weighted counterpart introduces weights ( ) so as to give more or less importance to each measurement vector .+ a second form of somp - ns that is more computationally efficient is available in algorithm [ alg : sompnsv2 ] . in the second form , * somp*( )refers to the regular somp algorithm described in algorithm [ alg : somp ] .algorithm [ alg : sompnsv2 ] : + somp with noise stabilization ( somp - ns ) second form + , , , the columns of are weighted beforehand : apply the regular somp algorithm : * somp*( ) both forms are equivalent since and .the last equality holds true because the orthogonal projector is applied to each column of separately .although the residual matrix is different for the two forms of somp - ns , the difference only consists in a multiplicative term for each column of and it does not modify the atoms added to the estimated support .this section briefly explains the mathematical tools needed to conduct the theoretical analysis .let be a matrix composed of column vectors .moreover , ] .the definition above extends the notion of support to matrices , _i.e. _ , the support of a matrix is the union of the supports of its columns .similarly to the vector case , if , then } { \text{supp}}({\boldsymbol{u}}_j ) \right| ] which is not a norm .furthermore , for , is defined as ( * ? ? ?* equation a.8 ) where . for the sake of simplifying the notations, we will adopt the convention .it can be shown that , for , ( * ? ? ?* lemma a.5 ) } \sum_{j=1}^{n } \left| \phi_{i , j } \right| = \|{\boldsymbol{\phi}}^{\mathrm{t } } \|_{1}. \ ] ] the sparse rank of , denoted by , is given by is thus the smallest number of linearly dependent columns of .equivalently , it means that , if , then , for every support such that , the columns of are linearly independent , _i.e. 
_ , has full column rank .+ note that computing for a given matrix is not computationally tractable as this problem is even harder to solve than a norm minimization problem which is known to be np - hard .the matrix satisfies the so - called restricted isometry property ( rip ) of order if there exists a constant such that for all -sparse vectors .the smallest that satisfies equation ( [ eq : ripdef ] ) is called the rectricted isometry constant ( ric ) of order .the ric of order can theoretically be computed by considering \\|s| = s } } \lambda_{\mathrm{max}}({\boldsymbol{\phi}}_{s}^{\mathrm{t } } { \boldsymbol{\phi}}_{s } ) - 1 \\l_s = 1 - \min_{\substack{s \subseteq \left[n \right ] \\|s| = s } } \lambda_{\mathrm{min}}({\boldsymbol{\phi}}_{s}^{\mathrm{t } } { \boldsymbol{\phi}}_{s})\end{aligned}\ ] ] where and denote the smallest and largest eigenvalues respectively . then, evaluating and is not computationally tractable as it requires to determine the smallest and largest eigenvalues of matrices of size .in particular , this problem has been shown to be np - hard in the general case .it is therefore interesting to find an upper bound on that can be easily computed .[ lem : riclambda ] if , then \\ |s| = s } } \lambda_{\mathrm{max}}({\boldsymbol{\phi}}_{s}^{\mathrm{t } } { \boldsymbol{\phi}}_{s } ) \leq 1 + \mu_1(s-1 ) \leq 1 + ( s-1 ) \mu\\\min_{\substack{s \subseteq \left[m \right ] \\|s| = s } } \lambda_{\mathrm{min}}({\boldsymbol{\phi}}_{s}^{\mathrm{t } } { \boldsymbol{\phi}}_{s } ) \geq 1 - \mu_1(s-1 ) \geq 1 - ( s-1 ) \mu.\end{aligned}\ ] ] the first inequality of each line holds if .this result is obtained in .a consequence of lemma [ lem : riclambda ] is that , if , then it is worth noticing that if , then .the reason is that implies that , |s| = s } \lambda_{\mathrm{min}}({\boldsymbol{\phi}}_{s}^{\mathrm{t } } { \boldsymbol{\phi}}_{s } ) > 0 ] . following the steps of ( * ? ? ?* theorem 4.5 ) , the lemma is easily obtained .each column of the matrix can be expressed as a linear combination of the columns of .the reason that explains this last statement is that and .+ moreover , is guaranteed to have full column rank which implies that the moore - penrose pseudoinverse is equal to and consequently that . furthermore , it is easily established ( * ? ? ?* lemma 4.4 ) that for two matrices and , . combining the results above yields finally , using equation ( [ eq : inftytoonenorm ] ) shows that and concludes the proof .we are now ready to provide an erc for somp - ns .[ thm : erc ] let and .if and has full column rank , then a sufficient condition for somp - ns to properly retrieve the support of after exactly iterations is where is the relative complement of with respect to ] and let denote the indexes of the atoms chosen by somp - ns at iteration .it is assumed that , _i.e. _ , only correct decisions have been made before iteration . contains the indexes of the correct atoms yet to be selected at iteration .let where denotes the orthogonal projector onto .then , for any ( ) , where .moreover , if satisfies the rip with -th restricted isometry constant , then denoting the -th column of by , we first observe that the maximum being taken over since is orthogonal to because of the orthogonal projector . since , which implies the triangle inequality yields thus , the first inequality results from the observation that , for any vector , we have .the inequality is available in ( * ? ? ?* lemma 5 ) .the first part of the theorem is now proved . 
+ if satisfies the rip with ric , then equation ( [ eq : ricasmax ] ) yields , which proves the second part of the theorem .the result above shows that the decision metric in corresponding to the correct atoms , _i.e. _ , , is closely related to the norm of the signal to be recovered and to the singular values of .it is clear that the ability of to conserve the norm of sparse vectors is necessary to ensure that the measurement noise does not absorb .+ through the following corollary , we wish to obtain a simple term that replaces .[ corr : ripbasedbtbound ] let ] and let denote the indexes of the atoms chosen by somp - ns at iteration .it is assumed that , _i.e. _ , only correct decisions have been made before iteration . contains the indexes of the correct atoms yet to be selected at iteration .let where denotes the orthogonal projector onto .if , then moreover , if , then both equation ( [ eq : firstboundvmu1 ] ) and ( [ eq : firstboundvmu ] ) hold true . using lemma [ lem : riclambda ] and the equality , one obtains the first inequality makes sense only if while the last inequality requires .although corollary [ corr : mubasedbtbound ] is less powerful and general than corollary [ corr : ripbasedbtbound ] , it provides an interesting insight into how the coherence of the dictionary influences .the previous section provided a non - probabilistic analysis of the quantity by deriving lower bounds that are more simple to evaluate than the original quantity . regarding the noise - related quantity , we will in this sectionperform a stochastic analysis to derive a lower bound on the probability that it does not exceed a threshold for gaussian noises . + as shown by theorem [ thm : theoframework1 ] , it is possible to examine whether somp - ns succeeds in choosing a correct atom at step by evaluating separately quantities linked to the sparse signal to be estimated and the noise vectors , and respectively . since several simple lower bounds for have been found , it becomes possible to evaluate a lower bound on \ ] ] _ i.e. 
_ , a lower bound on the probability that somp - ns makes correct decisions for signal model ( [ eq : mmvsignalmodel ] ) according to equation ( [ eq : sompanalysis2 ] ) of theorem [ thm : theoframework1 ] .corollaries [ corr : ripbasedbtbound ] and [ corr : mubasedbtbound ] yield \\\geq \mathbb{p}\left [ \|{\boldsymbol{\phi}}^{\mathrm{t } } { \boldsymbol{e}}^{(t ) } { \boldsymbol{q } } \|_{\infty } < 0.5 \left(1 - \|{\boldsymbol{\phi}}_s^+ { \boldsymbol{\phi}}_{\overline{s } } \|_{1 } \right ) \|{\boldsymbol{\phi}}_s^{\mathrm{t } } { \boldsymbol{z}}^{(t ) } { \boldsymbol{q } } \|_{\infty}^{\mathrm{(rip ) } } \right ] \\\geq \mathbb{p}\left [ \|{\boldsymbol{\phi}}^{\mathrm{t } } { \boldsymbol{e}}^{(t ) } { \boldsymbol{q } } \|_{\infty } < 0.5 \left(1 - \|{\boldsymbol{\phi}}_s^+ { \boldsymbol{\phi}}_{\overline{s } } \|_{1 } \right ) \|{\boldsymbol{\phi}}_s^{\mathrm{t } } { \boldsymbol{z}}^{(t ) } { \boldsymbol{q } } \|_{\infty}^{(\mu ) } \right ] \end{aligned}\ ] ] where a statistical analysis of is proposed when and for .the advantage of our approach is to take into account the isotropic nature of statistically independent gaussian random vectors .our main result is theorem [ thm : finalgaussianthm ] , which shows that the probability of making incorrect decisions from iteration to iteration included decreases exponentially with regards to a certain number of parameters .this theorem is then particularized so as to make use of the coherence of instead of the rip and the erc .+ first of all , the statistical properties of for a single and arbitrary atom are investigated in section [ subsec : oneatom ] .these properties are then extended to } \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right) ] .the analysis conducted hereafter mainly relies on the notion of lipschitz functions and the related concentration inequalities . in this section ,we are interested in providing a lower bound for for an arbitrary atom .the main result of this section is lemma [ lem : finalboundindivprob ] .[ thm : concentrationineqlipschitz ] ( * ? ? ? * theorem 8.40 . 
)let be a lipschitz function ( with regards to the metric ) with lipschitz constant .let be a vector of independent standard gaussian random variables .then , for all and consequently the theorem above shows that lipschitz functions tend to concentrate around their expectations when is distributed as a standard gaussian random vector .moreover , the concentration gets better as the lipschitz constant decreases .+ this theorem is intended to be used in conjunction with the function where ( ) .this function will be shown to be equivalent to when is gaussian .+ we now wish to establish that is a lipschitz function , compute the associated lipschitz constant and determine its expectation .let , then , using the reverse triangle inequality and the cauchy - schwarz inequality , therefore , a valid lipschitz constant of is equal to .this is the best lipschitz constant since , for and , .+ using the concentration inequalities for lipschitz functions requires to know the value of ] is higher than a threshold instead of the same probability obtained for .the issue we are facing is that the random variables ( for ) are not statistically independent which implies that the probability of upper bounding all these random variables simultaneously is not equal to the product of the probability to upper bound them separately .however , it remains possible to find a workaround using the union bound , as demonstrated by the following theorem .[ thm : finalboundmanyatomsprob ] let where ( ) .let ( ) be independent random variables respectively distributed as where .it is also assumed that .then , for , \leq n \exp \left ( - \kappa({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) \varepsilon^2 \right).\ ] ] equivalently , \geq 1 - n \exp \left ( - \kappa({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) \varepsilon^2 \right).\ ] ] first of all , we observe that } \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right)$ ] .then , by union bound , } \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right ) \geq b({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) + \varepsilon \right ] \\ = & \mathbb{p } \left [ \bigcup_{j=1}^{n } \left [ \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right ) \geq b({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) + \varepsilon \right ] \right ] \\\leq & \sum_{j=1}^{n } \mathbb{p } \left [ \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right ) \geq b({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) + \varepsilon \right ] \\\leq & n \exp \left ( - \kappa({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) \varepsilon^2 \right).\end{aligned}\ ] ] the first inequality results from the union bound while the second inequality holds because of lemma [ lem : finalboundindivprob ] .theorem [ thm : finalboundmanyatomsprob ] implicitly provides a lower bound on the probability of making correct decisions during iteration .it is indeed possible to use either equation ( [ eq : eqripbound ] ) or ( [ eq : eqmubound ] ) in conjunction with equation ( [ eq : probinequalities ] ) and theorem [ thm : finalboundmanyatomsprob ] to obtain the desired lower bound .nevertheless , a lower bound on the probability that somp - ns makes correct decisions from iteration to iteration remains to be found .this is the purpose of this section .+ 
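before moving on, the single-iteration bound just derived can be checked numerically. the sketch below estimates, by monte carlo, the upper-tail probability of the weighted sum of absolute inner products between one fixed unit-norm atom and independent gaussian noise vectors, and compares it with the generic lipschitz-concentration envelope exp(-eps^2 / (2 ||q*sigma||_2^2)). the weights, noise levels and dimensions are illustrative, and the envelope constant used here is the generic lipschitz one rather than the exact kappa(q, sigma) of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
m, K, trials = 64, 4, 20000
phi_j = rng.standard_normal(m)
phi_j /= np.linalg.norm(phi_j)                      # one unit-norm atom
q = np.array([1.0, 0.8, 0.6, 0.4])                  # weights (assumed values)
sigma = np.array([0.5, 0.7, 1.0, 1.2])              # per-vector noise std (assumed values)

# evaluate g(E) = sum_k q_k |<phi_j, e_k>| over many independent noise draws
samples = np.empty(trials)
for t in range(trials):
    e = rng.standard_normal((m, K)) * sigma         # column k ~ N(0, sigma_k^2 I)
    samples[t] = np.sum(q * np.abs(phi_j @ e))

mean = samples.mean()
lipschitz_sq = np.sum((q * sigma) ** 2)             # squared Lipschitz constant of g
for eps in (0.5, 1.0, 1.5, 2.0):
    empirical = np.mean(samples >= mean + eps)
    envelope = np.exp(-eps ** 2 / (2.0 * lipschitz_sq))
    print(f"eps = {eps:3.1f}  empirical tail = {empirical:.4f}  envelope = {envelope:.4f}")
```

as expected, the empirical tail sits below the exponential envelope, and multiplying the envelope by the number of atoms n reproduces the union-bound estimate of the theorem above.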
before establishing the main result , one needs lemmas [ lem : fullsupportlem1 ] and [ lem : fullsupportlem2 ] . + [lem : fullsupportlem1 ] if is distributed as and is a fixed orthogonal projector matrix , then is distributed as where .we have an auxiliary atom can be defined for index , .this atom satisfies the inequality .let us now observe that if the entries of are i.i.d .random variables distributed as , then . the result immediately follows .lemma basically establishes that , for a fixed projection matrix , is distributed as a mean - zero gaussian random variable whose variance is always lower than that of the entries of .this result will enable us to apply a statistical analysis similar to that of the previous section by keeping the gaussian hypothesis .[ lem : fullsupportlem2 ] we consider the random variables , and where .it is assumed that and .then , for all , \leq \mathbb{p } \left [ x + |y_2| \leq \varepsilon \right].\ ] ] we know that and where since is a monotonically increasing function , one obtains .thus , & = \int_{-\infty}^\varepsilon \mathbb{p } \big [ |y_1| \leq \varepsilon - x \big ] f_x(x ) \mathrm{d}x \\ & \leq \int_{-\infty}^\varepsilon \mathbb{p } \big [ |y_2| \leq \varepsilon - x \big ] f_x(x ) \mathrm{d}x\\ & = \mathbb{p } \left[x + |y_2| \leq \varepsilon \right ] .\qedhere\end{aligned}\ ] ] in the rest of this paper , the random variable of lemma [ lem : fullsupportlem2 ] will be replaced with a sum of independent random variables exhibiting half - normal distributions , _i.e. _ , where the ( ) are independent and .thereby , an immediate corollary of the lemma above is that the probability of upper bounding by the sum of statistically independent random variables exhibiting half - normal distributions is always decreased whenever the variance of at least one of the random variables is increased .+ let us now state the key theoretical result of this paper , which provides a lower bound on the probability that somp - ns selects correct atoms during the first iterations .[ thm : finalgaussianthm ] let ( ) be statistically independent gaussian random vectors respectively distributed as .let .let be the signal matrix whose support is .let also be unit - norm vectors in and be the corresponding matrix which is assumed to satisfy the rip with -th ric .let . then, somp - ns with dictionary matrix , weights ( ) and signal is ensured to make correct decisions during the first iterations , _i.e. _ , from iteration to iteration included , with probability higher than whenever where and being defined in equation ( [ eq : defkappa ] ) and ( [ eq : defb ] ) , respectively .first , we observe that where it is convenient to define the -th auxiliary atom at iteration as . according to ( [ eq : probinequalities ] ) , at iteration ( for ) , a sufficient condition for to make a correct decision at iteration is given by } \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j^{(t ) } , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right ) \leq 0.5 ( 1 - \|{\boldsymbol{\phi}}_s^+ { \boldsymbol{\phi}}_{\overline{s } } \|_{1 } ) ( 1-\delta_{|\mathcal{j}_t| } ) \min_{j \in s } \sum_{k=1}^{k } | x_{j , k } | q_k . 
\end{aligned}\ ] ] since implies , then which shows that a less tight sufficient condition for somp - ns to make a correct decision at iteration is } \left ( \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{\phi}}_j^{(t ) } , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \right ) \leq \varepsilon ( { \boldsymbol{q } } ) = 0.5 ( 1 - \|{\boldsymbol{\phi}}_s^+ { \boldsymbol{\phi}}_{\overline{s } } \|_{1 } ) ( 1-\delta_{|s| } ) \min_{j \in s } \sum_{k=1}^{k } | x_{j , k } | q_k .\end{aligned}\ ] ] therefore , if the condition above holds for , somp - ns is guaranteed to pick a correct atom at iteration . at iteration , the orthogonal projector can take different values ( each value of corresponds to a specific support ) .therefore , if ( [ eq : suffcondwsompstept ] ) holds for every possible projector matrix at iteration , somp - ns is guaranteed to pick a correct atom at iteration . thus , we need to satisfy equations of the form ( where ) to ensure that somp - ns picks correct atoms during the first two iterations .+ extending the previous train of thought from iteration to iteration , one easily comes to the conclusion that equations of the form ( where ) should be satisfied since , at iteration , there exist possible realizations of the orthogonal projector . usingthe union bound as in the proof of theorem [ thm : finalboundmanyatomsprob ] shows that the probability of satisfying the equations is lower bounded by \ ] ] where is the -th possible realization of the orthogonal projector at iteration assuming that only correct atoms have been picked before iteration .+ lemmas [ lem : fullsupportlem1 ] and [ lem : fullsupportlem2 ] imply that \leq \mathbb{p } \left [ \sum_{k=1}^{k } \left| \left\langle { \boldsymbol{{\boldsymbol{\phi}}_j } } , { \boldsymbol{e}}_{k } \right\rangle \right| q_k \geq \varepsilon({\boldsymbol{q } } ) \right ] .\end{aligned}\ ] ] furthermore , since , lemma [ lem : finalboundindivprob ] shows that \leq \exp \left ( - \kappa({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) \overline{\varepsilon}({\boldsymbol{q } } , { \boldsymbol{\sigma}})^2 \right).\ ] ] whenever .finally , the probability that somp - ns chooses correct atoms from iteration to iteration ( let us denote this event by ) satisfies \geq 1 - n \mathcal{c}_s \exp \left ( - \kappa({\boldsymbol{q } } , { \boldsymbol{\sigma } } ) \overline{\varepsilon}({\boldsymbol{q } } , { \boldsymbol{\sigma}})^2 \right ) .\qedhere\ ] ] the theorem above translates several intuitive realities .first , by examining the expression of , one concludes that several quantities influence the probability of correct recovery : * the expression quantifies the robustness of the recovery in the noiseless case .it is evident that the margin of error when deciding which atom to choose in the noiseless case will contribute to determine the admissible noise level in the noisy case .* the term translates the ability of the dictionary matrix to maintain the norm of -sparse signals , which ensures that the norm of the columns of remain sufficiently high to avoid being absorbed by the noise matrix . 
*the minimized sum depicts the idea that the weighted sum of the coefficients associated with every atom should be high enough to allow somp - ns to identify them when noise is added to the measurements .this term simultaneously captures the influence of the signal to be recovered as well as that of the weights .moreover , indicates that increasing the noise variances decreases the probability of support recovery .also , increasing the weights naturally augments the power of the noise signal .nevertheless , it should be expected that this effect is counterbalanced by as the latter variable is also a function of the weights .finally , translates the fact that the noise affects every atom while takes into account the existence of iterations .+ it is worth noticing that so that it indicates that the probability of full support recovery is lower bounded by on the basis of past results in the literature , it will be suggested in section [ subsec : gribonvalrelatedthm ] that is an artifact of our proof and should ideally be replaced with a function that increases more slowly as is augmented .+ also , as a last remark , theorem [ thm : finalgaussianthm ] can be rephrased by stating that the probability of failure of somp - ns from iteration to iteration , _i.e. _ , at least one wrong atom is chosen during the first iterations , admits the upper bound [ thm : finalgaussianthm ] is now to be particularized using the coherence of instead of the rip and the erc .[ thm : finalgaussianthmcoherence ] let ( ) be statistically independent gaussian random vectors respectively distributed as .let .let also be the signal matrix whose support is .let also be unit - norm vectors in and be the corresponding matrix whose coherence is assumed to satisfy .let .then , somp - ns with dictionary matrix , weights ( ) and signal is ensured to make correct decisions during the first iterations , _i.e. _ , from iteration to iteration included , with probability higher than whenever , where and being defined in equation ( [ eq : defkappa ] ) and ( [ eq : defb ] ) , respectively . as it has already been pointed out in section [ subsec : ercnoiseless ] , if , then for all the supports of size . moreover , equation ( [ eq : rictocoherence ] ) shows that whenever thus , using the two inequalities above in conjunction with theorem [ thm : finalgaussianthm ] yields elementary algebraic manipulations show that the expression above simplifies to a similar although stronger result can be obtained by means of the cumulative coherence function . however , the resulting expression of is more complicated .moreover , coherence - based bounds only have a theoretical interest as they prove to be pessimistic for practical cases , even when using the cumulative coherence function .a result similar to theorem [ thm : finalgaussianthm ] has already been obtained in .the striking similarities between our result and that obtained in motivate this section .they prove the following theorem .* theorem 7 ) [ thm : gribonval ] let with a matrix of standard gaussian random variables , and an error term orthogonal to the atoms in .assume that the dictionary matrix satisfies the restricted isometry property ( rip ) with restricted isometry constant ( ric ) ( of order ) and then , the probability that iterations of somp fail to exactly recover the support on the basis of does not exceed with the number of measurement vectors , the number of atoms and in , the statistical analysis has been focused on the sparse signal to be estimated , _i.e. 
_ , , while no particular statistical distribution is assumed for .since the noise vectors are assumed to be orthogonal to the columns of , their purpose is to model an approximation noise , _i.e. _ , the part of the signal that can not be mapped by .conversely , the noise signals envisioned in our paper represent additive measurement noises that can not be assumed to be orthogonal to any vector subspace . + in the noiseless case , a result similar to theorem [ thm : gribonval ] has been obtained in ( * ? ? ?* theorem 6.2 ) for a variant of somp entitled 2-somp , _i.e. _ , a somp algorithm where the decision on which atom to pick is performed on the basis of the maximization of a norm instead of a norm . +although the problem addressed in this paper is different from that of theorem [ thm : gribonval ] , the authors of have used approaches similar to ours and they also noticed that the parasitic term seems difficult to remove .the rationale of this discussion is thus that it is difficult to avoid the suboptimal term when conducting developments similar to that presented in this paper .theorem [ thm : finalgaussianthm ] provides interesting insights into how successful somp - ns is whenever some parameters are modified .however , the theoretical developments presented in this paper do not properly capture all the characteristics of somp - ns .+ first of all , regarding , the value of should be lowered in practice .the reason why should be replaced by is because our analysis assumes that all the atoms not to be picked ( of index ) are such that while , in practice , only a few atoms are likely to exhibit a significant value of . + we conjecture that may be replaced by a linear function of .the reason why we failed at obtaining such a result is probably linked to the proof of theorem [ thm : finalgaussianthm ] . all the possible supports at each iteration are considered and it is ensured that the sufficient condition for making a correct decision is satisfied for each support .this is very pessimistic as only one support out of all the numerous possibilities actually matters .as indicated in section [ subsec : gribonvalrelatedthm ] , other researchers working on similar problems have stumbled upon this issue and no solution has been found so far to the best of the authors knowledge .the simulation results presented in the end of this paper will however not address this problem .+ furthermore , as it will be explained in section [ subsec : summarynumres ] , the bias should be removed in order to deliver results compatible with what has been observed in our numerical simulations .this term is most likely an artifact linked to equation ( [ eq : firstbigapprox ] ) and equation ( [ eq : secondbigapprox ] ) .equation ( [ eq : firstbigapprox ] ) basically assumes that and always have opposite signs for atoms whose indexes belong to the support .conversely , equation ( [ eq : secondbigapprox ] ) assumes that and always have identical signs for atoms whose indexes do not belong to the support .thus , a statistical analysis directly performed on may prevent the bias term from appearing .+ therefore , considering all the conjectures above , a better bound may be given by where has been replaced with , has been substituted to and has been canceled out . + finally , it is worth pointing out that equation ( [ eq : b1 ] ) will likely not deliver perfect results . 
indeed ,equation ( [ eq : trueequationmodel ] ) suggests that the probability of sparse support recovery success is actually a sum of exponential functions , each function corresponding to a single atom , iteration and support .moreover , possibly different values of should be chosen for each exponential function .it is also probably suboptimal to make use of the union bound .this section aims at demonstrating that : 1 .somp - ns provides significant performance improvements when compared to somp provided that the noise vectors exhibit different variances and that the weights are properly chosen .2 . depending on the signal model that is chosen , it is possible to accurately estimate the optimal weights by using simple closed - formed formulas derived from equation ( [ eq : b1 ] ) .3 . equation ( [ eq : b1 ] ) properly predicts the performance improvements obtained when the number of measurement vectors increases .the purpose of the first point is to demonstrate numerically that the gains provided by somp - ns are significant .the last two points rather focus on the numerical validation of the theoretical analysis presented in this paper . with these goals in mind, a particular signal model will be chosen .in particular , this signal model will be sufficiently simple to allow the computation of the optimal weights when the model ( [ eq : b1 ] ) is assumed to be correct .the objective of this section is to particularize theorem [ thm : finalgaussianthm ] to models for which ( , ) and ( , ) where the terms denote rademacher random variables , _i.e. _ , random variables that return either or with probability for both outcomes .+ two different sign patterns will be distinguished .sign pattern 1 refers to the case where the sign pattern is identical for all the sparse vectors to be recovered , _i.e. _ , for all , and whenever .sign pattern 2 corresponds to the situation where the sign pattern is independent for each and within each , _i.e. _ , whenever and/or .+ in both cases , it is worth mentioning that the absolute values of the entries of each are equal to . thus , and , according to theorem [ thm : finalgaussianthm ] , the probability of full support recovery is always higher than where .also , the bound above only holds if . + by using the conjecture about the elimination of the bias term described in section [ subsec : conjectures ] ,one obtains equation ( [ eq : modelconjecture ] ) , which is reminiscent of equation ( [ eq : b1 ] ) , except that ( [ eq : modelconjecture ] ) does not include the conjectures linked to the term . if our only objective is to derive the optimal weights according the theoretical model , only the argument of the exponential matters so that adjusting the term has no effect . let denote the arithmetic mean of the entries of vector .then , and .thus , the probability of full support recovery is always higher than as a particular case , if ( ) , the latter probability also rewrites } \rangle } \varepsilon'^2 \right).\ ] ] let us now focus our attention on the weights that maximize equation ( [ eq : b2 ] ) or , equivalently , .we can restrict our attention to the maximization of we define so that the expression above also reads the quantity represents the direction of so that we know that a global maximizer is obtained whenever and have the same direction .it means that a global maximizer is obtained if and only if where .this is equivalent to requiring since . 
by choosing ,one concludes that provides an optimal weighting strategy according to ( [ eq : b2 ] ) .now that the optimal weighting strategy for the signal models envisioned in section [ subsec : specialsigmodel ] has been derived , it becomes possible to determine whether the bound ( [ eq : b2 ] ) properly predicts the impact of the weights on the performance that is achieved .+ our matlab simulation software is available in .all the scripts needed to generate the figures presented in this section are also available . the reader should know that every simulation result exposed hereafter has been performed by using single precision floating point representations .the reason for such a choice is that single precision arithmetic is faster and thus preferred for algorithms intended to run on real - time platform such as somp - ns .for the same reason , the simulation results are obtained faster when using the single precision format . + it is assumed that and .the simulation framework consists of a fixed dictionary matrix whose entries were generated on the basis of independent and identically distributed gaussian random variables and then normalized in such a way that each column of the matrix exhibits a unit norm .this matrix is fixed for all the simulations and is available in .+ two simulation frameworks have been envisioned to demonstrate the three points introduced at the very beginning of section [ sec : numresults ] .the first framework consists of simulations for the case and addresses the first two objectives while the last framework examines how the probability of successful support recovery evolves as increases .first of all , it is worth pointing out that the performance achieved by is invariant if the weight vector is multiplied by a positive constant .therefore , only the angle is investigated .the weights are thus generated on the basis of the polar coordinate system , where . in practice ,the grid of weighting angles will consist of uniformly spaced angles from to .+ the noise standard deviation vector is generated on the basis of the polar coordinate system where describes the orientation of the noise vector .the grid of values for is composed of uniformly spaced angles ranging from to .extremely high or low angles have been avoided because they correspond to situations for which the noise concentrates essentially on one measurement vector .therefore , appropriate weighting strategies would be able to cancel most of the noise and would lead to probabilities of correct support recovery that are too high to be reliably estimated on the basis of a limited number of monte carlo cases . 
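the protocol just described can be condensed into a short monte carlo sketch: weight vectors are generated from a grid of polar angles, rademacher-signed sparse signals are drawn with independent sign patterns (sign pattern 2), gaussian noise with per-vector standard deviations set by a fixed noise angle is added, and the empirical probability of full support recovery by somp-ns is recorded. the routine below is self-contained and purely illustrative; the dictionary size, noise level and number of trials are placeholders rather than the values of table [ tab : configssimk2 ].

```python
import numpy as np

def somp_ns(phi, y, q, n_iter):
    """Sketch of SOMP-NS (second form): scale each measurement vector by its
    weight, then run plain SOMP for a fixed number of iterations."""
    yw = y * q[np.newaxis, :]
    residual, support = yw.copy(), []
    for _ in range(n_iter):
        metric = np.abs(phi.T @ residual).sum(axis=1)   # weighted SOMP decision metric
        metric[support] = -np.inf                       # never pick an atom twice
        support.append(int(np.argmax(metric)))
        phi_s = phi[:, support]
        residual = yw - phi_s @ np.linalg.lstsq(phi_s, yw, rcond=None)[0]
    return set(support)

rng = np.random.default_rng(1)
m, n, s, K, trials = 32, 128, 3, 2, 500
phi = rng.standard_normal((m, n))
phi /= np.linalg.norm(phi, axis=0)                      # unit-norm atoms

noise_angle = np.deg2rad(30.0)                          # orientation of (sigma_1, sigma_2)
sigma = 0.4 * np.array([np.cos(noise_angle), np.sin(noise_angle)])

def recovery_rate(theta):
    q = np.array([np.cos(theta), np.sin(theta)])        # weights from the polar grid
    hits = 0
    for _ in range(trials):
        support = rng.choice(n, size=s, replace=False)
        x = np.zeros((n, K))
        x[support, :] = rng.choice([-1.0, 1.0], size=(s, K))   # sign pattern 2
        y = phi @ x + rng.standard_normal((m, K)) * sigma
        hits += somp_ns(phi, y, q, s) == set(support)
    return hits / trials

for theta in np.linspace(0.05, np.pi / 2 - 0.05, 9):
    print(f"theta = {theta:.2f} rad  estimated P(full recovery) = {recovery_rate(theta):.3f}")
```

setting theta = pi/4, i.e., equal weights, recovers plain somp, which makes the comparison with the weighted variants immediate.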
+ a total of six simulation configurations have been run .each configuration corresponds to one of the two sign patterns described in section [ subsec : specialsigmodel ] , to a support size and to a value of .simulation cases have been generated for each value of belonging to the grid defined beforehand .once the support size is fixed , the actual support is randomly and uniformly chosen among all the possibilities .the support that is simulated is independent for each case .+ although , the input signal - to - noise ratio ( snr ) , referred to as and defined in equation ( [ eq : defsnrinput ] ) , is to be modified by means of the quantity .+ table [ tab : configssimk2 ] describes all the configurations that have been investigated numerically .the values of have been chosen in such a way that the probability of full support recovery when is approximately equal to and for the sign pattern 1 and sign pattern 2 respectively .these probabilities have been chosen in such a way that the probability of successful full support recovery over the grid defined for never reaches values so high that it can not be reliably estimated on the basis of a limited number of simulation experiments .the lower value of the probability of successful recovery for sign pattern is linked to the fact that having a sign pattern independent for each measurement vector , as for sign pattern 2 , provides performance improvements .this observation is actually reminiscent to what has been established in theorem [ thm : gribonval ] .+ in table [ tab : configssimk2 ] , the input snr , _i.e. _ , , is estimated by generating cases for the configuration of interest , then the input snr ( in db ) is computed for each case and the results are finally averaged .the matlab script implementing this estimation is available in ..simulation configurations | [ cols= " < , < , < , < , < , < " , ] as an example , figure [ fig : simanglesplotk2conf2 ] depicts the probability of full support recovery for configuration as a function of .+ ) for configuration ( sign pattern , , and db ) probability of full support recovery as a function of and the black curve refers to the optimal weights derived from equation ( [ eq : b2 ] ) each pixel of the figure has been computed on the basis of simulation cases , width=491 ] the first objective we would like to fulfill is demonstrating that somp - ns is capable to outperform somp whenever the noise standard deviations are not identical for each measurement vector . to do so , we will examinate , for each configuration and for each value of , what is the ratio of the probability of failure obtained for , _i.e. _ , the weights corresponding to somp , to the lowest probability of failure , _i.e. _ , that obtained for the truly optimal weights . figure [ fig : simanglessompvsoptimal ] plots the aforementioned quantity for the all the configurations of table [ tab : configssimk2 ] .note that the optimal weights are determined on the basis of the numerical results , the formula obtained on the basis of equation ( [ eq : b2 ] ) is not used .+ it is observed that the gains provided by the proper application of somp - ns are significant , especially for low values of the support size .the only case for which the gain is almost nonexistent is for sign pattern , and , _ i.e. 
_ , configuration .this may actually be a consequence of the normalization procedure of described previously which ensures that the probability of successful support recovery for is equal to .the value of had to be chosen significantly higher than that of the other cases to attain the goal , which limits the impact of the noise and thus hinders the improvement of the performance by modifying the weights .+ let us now attack the second objective of section [ sec : numresults ] .we wish to show that the formula obtained in section [ subsec : specialsigmodel ] delivers reliable estimates of the optimal weights .figure [ fig : simanglesoptimalvstheory ] summarizes the results .first of all , it is observed that , for the first sign pattern , the solution always corresponds to the truly optimal weights while this is not true for the second sign pattern . in particular , the discrepancy between the numerical results and the theoretical formula ( [ eq : b2 ] ) increases as the size of the support augments .+ the first observation is explained by the different sign patterns for each measurement vector .although the weights have been introduced to better filter the influence of the noise , they also have an impact on the relative importance of each in the decisions that are taken . given that the sparse vectors to be recovered have identical distributions , it is to be expected that , without noise , the optimal weights are obtained by choosing for symmetry reasons .figure [ fig : simanglesplotk2conf6nonoise ] displays the simulation results obtained for a configuration identical to configuration except that db , _i.e. _ , the influence of the noise is negligible .it is observed that the interaction of the weights and the sparse vectors to be recovered exists and that the optimal weighting angle is equal to .a possible interpretation of the observations above is that the optimal weighting strategy is a mixture of the strategy that optimizes the support recovery in the noiseless case and of that which minimizes the impact of the noise on the decisions that are taken , _i.e. _ , .nevertheless , further theoretical developments should be conducted to assess whether the proposed interpretation is correct .finally , it is worth pointing out that the phenomenon described above is not observed for sign pattern because and are identical in this case .+ the reasons that explain why the optimal weights get closer to when the support size increases , as shown in figure [ subfig : sp2optangles ] , is not clear and would require additional theoretical investigations that fall outside of the framework of this paper . + ) for configuration with probability of full support recovery as a function of and sign pattern # cases ,width=491 ] the final question of whether the proposed theoretical analysis properly conveys the properties of somp - ns whenever increases is to be discussed in this section .our main objective is to show that , as predicted by equation ( [ eq : b2 ] ) , the probability of failure of somp - ns decreases linearly with when it is plotted is semi - logarithmic axes . indeed ,equation ( [ eq : b2 ] ) yields where denotes the probability of failure of correct support recovery . + the simulation framework consists of a fixed weighting strategy for which all the weights are equal , _i.e. _ , the weighting strategy corresponds to somp . 
for each value of ,the noise vector is given by so that it is reminiscent of the noise vector defined in section [ subsec : simresk2 ] for .the results are plotted in figure [ fig : simplotk ] .the configurations that have been chosen are directly inspired of those presented in table [ tab : configssimk2 ] .the number of cases simulated for each curve and each value of is equal to .some configurations have been discarded because they do not exhibit a probability of failure equal to without noise , which is incompatible with the implicit assumption of our theoretical model that no errors are committed in the noiseless case . indeed , if , _i.e. _ , the erc is the noiseless case is not satisfied , then equation ( [ eq : sompanalysis2 ] ) can not hold .for example , figure [ fig : simanglesplotk2conf6nonoise ] shows that configuration from table [ tab : configssimk2 ] exhibits a non - zero probability of failure without noise .+ sp refers to sign pattern , width=491 ] the principal observation for figure [ fig : simplotk ] is that the slope of in semi - logarithmic axes is linear with regards to .this observation provides evidence that the theoretical model conveys the behavior of somp - ns when increases .the numerical results have revealed the following interesting facts : 1 .somp - ns provides significant performance improvements when compared to somp provided that the weights are properly chosen and that the noise variances are different for each measurement vector .the formula derived from equation ( [ eq : b2 ] ) corresponds to the truly optimal weights whenever * the sparse vectors to be recovered are identical . *the support size is low enough .+ the exact reason why the formula gets less accurate as the size of the support increases remains an open question .the theoretical analysis properly predicts the characteristics of the decrease of the probability of failure of somp - ns whenever the number of measurement vectors increases . besides the three points above , which answer the three questions enumerated at the beginning of section [ sec : numresults ] , the close fitting of and the numerical results for sign pattern suggests that the bias term is indeed an artifact of our developments as adding it would change the theoretically optimal weights and we would then observe a mismatch between them and those obtained by simulation .as suggested in section [ subsec : conjectures ] , the bias could be removed by avoiding to make use of the inequalities ( [ eq : firstbigapprox ] ) and ( [ eq : secondbigapprox ] ) . using a more subtle approach than the use of the union bound could also yield performance improvements .+ although it appears to be difficult , replacing the term by a function that depends linearly on would be of great interest , especially since it would close a hole in the literature regarding the performance of the well - known somp algorithm that is a particular case of ours .+ finally , extending the presented analysis by performing a joint statistical analysis of both the noise and the sparse signals to be recovered would be of great interest . 
to begin with, it would provide a theoretical model that predicts the truly optimal weights by simultaneously taking into account how they impact the sparse signals to be recovered and the noise vectors .next , it would also enable one to comprehend why the discrepancy between the formula and the truly optimal weights increases as the support size augments ( see figure [ fig : simanglesoptimalvstheory ] ) .finally , the statistical analysis of the sparse signals could replace by ( where is significantly lower than ) as conjectured in section [ subsec : conjectures ] .a novel algorithm entitled somp - ns that generalizes somp by associating weights with each measurement vector has been proposed . a theoretical framework to analyzethis algorithm has been built .lower bounds on the probability of full support recovery by means of somp - ns have been developed in the case where the noise corrupting the measurements is gaussian .numerical simulations have revealed that the developed theoretical results accurately depict key components of the behavior of somp - ns while they also fail to capture some of its properties . in particular , it has been shown that , under the right circumstances , the weights of somp - ns can be efficiently optimized on the basis of the proposed theoretical bounds . finally , the reasons that explain why some characteristics of somp - ns are not properly conveyed by the theoretical analysis have been discussed and potential workarounds to be investigated have been suggested .r. adamczak , a. e. litvak , a. pajor , and n. tomczak - jaegermann , `` restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling , '' _ constructive approximation _ , vol .1 , pp . 6188 , 2011 , springer .e. j. cands , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ information theory , ieee transactions on _ , vol .2 , pp . 489509 , 2006 .j. f. determe , j. louveaux , l. jacques , and f. horlin , `` somp - ns software package , '' version 0.3.2 , available online at https://opera-wireless.ulb.ac.be/owncloud/index.php/s/lhkmncgou66mtes or http://bit.ly/1qg0y1g .d. l. donoho , and m. elad `` optimally sparse representation in general ( nonorthogonal ) dictionaries via minimization , '' _ proceedings of the national academy of sciences _ , vol . 100 , no .5 , pp . 21972202 , 2003 .d. l. donoho , m. elad , and v. n. temlyakov `` stable recovery of sparse overcomplete representations in the presence of noise , '' _ information theory , ieee transactions on _ , vol .1 , pp . 618 , 2006 .r. gribonval , h. rauhut , k. schnass and p. vandergheynst , `` atoms of all channels , unite ! average case analysis of multi - channel sparse recovery using greedy algorithms , '' _ journal of fourier analysis and applications _ , vol .14 , no . 5 - 6 , pp .655687 , 2008 .d. needell , and r. vershynin , `` signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit , '' _ selected topics in signal processing , ieee journal of _ , vol .2 , pp . 310316 , 2010 .j. a. tropp , and a. c. gilbert , `` simultaneous sparse approximation via greedy pursuit , '' _ acoustics , speech , and signal processing , 2005 .proceedings.(icassp05 ) .ieee international conference on _ , vol . 5 ,pp . 721724 , 2005 .a. m. tillmann , and m. e. 
pfetsch , `` the computational complexity of the restricted isometry property , the nullspace property , and related concepts in compressed sensing , '' _ information theory , ieee transactions on _ , vol . 2 , pp . 1248 - 1259 , 2013 .
this paper studies the joint support recovery of similar sparse vectors on the basis of a limited number of noisy linear measurements, i.e., in a multiple measurement vector (mmv) model. the additive noise signals on each measurement vector are assumed to be gaussian and to exhibit different variances. the simultaneous orthogonal matching pursuit (somp) algorithm is generalized to weight, according to the noise level of each measurement vector, its impact on the choice of the atoms to be picked. the new algorithm is referred to as somp-ns, where ns stands for noise stabilization. first, a theoretical framework to analyze the performance of the proposed algorithm is developed. this framework is then used to build conservative lower bounds on the probability of partial or full joint support recovery. numerical simulations show that the proposed algorithm outperforms somp and that the theoretical lower bound provides valuable insight into how somp-ns behaves when the weighting strategy is modified.
multicomponent systems are of great theoretical and practical importance .an example of an important ternary system , that inspired the current paper , is the formation of polymer membranes through immersion precipitation . in this processa polymer - solvent mixture is brought in contact with a non - solvent . as the non - solvent diffuses into the mixture , the mixture phase - separates , leaving behind a complex polymer morphology which depends strongly on the processing conditions .the dependence of the morphology on the parameters of the system is as yet poorly understood .preliminary lattice boltzmann simulations of this system exist . however , this work did not recover the correct drift diffusion equation .a general fully consistent lattice boltzmann algorithm with an underlying free energy to simulate multicomponent systems is still lacking .this paper strives to bring us a step nearer to achieving this goal .there are several previous lattice boltzmann methods for the simulation of multi - component systems .there are three main roots for these approaches. there are those derived from the rothmann - keller approach that attempt to maximally phase - separate the different components .a second approach by shan and chen is based on mimicking the microscopic interactions and a third approach after swift , orlandini and yeomans is based on an underlying free energy .all of these have different challenges .since we are interested in the thermodynamics of phase - separation we find it convenient to work with a method based on a free energy .this allows us to easily identify the chemical potentials of the components .this is convenient since the gradients of the chemical potentials drive the phase separation as well as the subsequent phase - ordering .the challenge for the lb simulation of a multicomponent system lies in the fact that momentum conservation is only valid for the overall system but not for each component separately , and diffusion occurs in the components . for a binary system of components and with densities and , the simulation usually traces the evolution of the total density and the density difference .although this scheme is successful in the simulation of a binary system , its generalization for the lb simulations of systems with an arbitrary number of components is asymmetric .for instance , to simulate a ternary system of components , , and with densities , and , the total density of the system , , should be traced , and the other two densities to be traced may be chosen as , e.g. , and . this approach is likely to be asymmetric because the three components are treated differently as is the case of lamura s model .if an lb method is not symmetric , it will lose generality an will only be adequate for special applications . in this paper , we established a multicomponent lattice boltzmann method based on an underlying free energy that is manifestly symmetric .the equation of motion for a multicomponent system are given by the continuity and navier - stokes equations for the overall system and a drift diffusion equation for each component separately .the continuity equation is given by where is the mass density of the fluid , is the mass flux which is given by , and is the macroscopic velocity of the fluid .the navier - stokes equation describes the conservation of momentum : where and are the pressure and viscous stress tensors respectively , is the component of an external force on a unit mass in a unit volume , and the einstein summation convention is used . 
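for reference, the two conservation laws just quoted take the standard forms below (with the einstein summation convention, and with the body force written per unit volume as a density times a force per unit mass; this way of writing the force term is an assumption of notation only):

```latex
\partial_t \rho + \partial_\alpha (\rho u_\alpha) = 0, \qquad
\partial_t (\rho u_\alpha) + \partial_\beta (\rho u_\alpha u_\beta)
  = -\partial_\beta P_{\alpha\beta} + \partial_\beta \sigma_{\alpha\beta} + \rho F_\alpha .
```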
for newtonian fluids , the viscous stress tensor is given by where is the shear viscosity , and is bulk viscosity , and is the spacial dimension of the system .free energy , chemical potential , and pressure are key thermodynamic concepts to understand the phase behavior of a system .the chemical potential of each component can be obtained by a functional derivative as where is the chemical potential of component ; is the number density of component ; and is the total free energy of the system .the pressure in a bulk phase in equilibrium is given by the pressure tensor is determined by two constraints : in the bulk and everywhere . in multicomponent systems , there are two mechanisms for mass transport : convection and diffusion .convection is the flow of the overall fluid , while diffusion occurs where the average velocities of components are different .the velocity of the overall fluid is a macroscopic quantity because it is conserved , but the average velocities of the components are not .the macroscopic velocity of the fluid can be expressed in terms of the density and velocity of each component in the form of with the notation the flux of each component can be divided into a convection part and a diffusion part : because mass conservation still holds for each component , the continuity equation for each component is valid : substituting eq .( [ jjj ] ) into eq .( [ onecon ] ) , the convection diffusion equation for a component can be obtained . from eqs .( [ uaverage ] ) and ( [ dsl ] ) , we see that which ensures the recovery of the continuity equation for the overall system .the diffusion process between two components is related to the difference of the chemical potential of the two components , which is also called the exchange chemical potential . recognizing that the gradient of the exchange chemical potential determines the diffusion processes , we obtain a first order approximation for the diffusion flux of one component into all other components as where and enumerate the components ; and are the chemical potentials of components and ; and is a symmetric positive definite mobility tensor .a simple model for the diffusion process assumes that a diffusion flux between two components is proportional to the overall density and the concentration of each component .then mobility tensor can be expressed as where is the constant diffusion coefficient between components and .it depends on components but is independent of the total densities and concentration of each component . substituting eq .( [ mss ] ) into eq .( [ akak ] ) , we have substituting eq .( [ bfjj ] ) into eq .( [ condieq ] ) , the general form of a convection diffusion equation is obtained as simulate a multicomponent fluid using lb we set up a lb equation for each component .the lbe for a component of a multicomponent system is given by , \label{sslb}\end{aligned}\ ] ] where is the particle distribution function with velocity for component , is its equilibrium distribution and is the forcing term of component due to the mean potential field generated by the interaction of the component with the other components . 
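as a concrete, deliberately minimal illustration of the per-component update in eq. ( [ sslb ] ), the sketch below implements a single bgk collision-plus-streaming step on a one-dimensional d1q3 lattice, the model used later in the paper. the second-order equilibrium with weights (2/3, 1/6, 1/6) and c_s^2 = 1/3 is the standard choice and is an assumption here; the component-specific forcing term is passed in as an array because its exact form is only derived further below.

```python
import numpy as np

# D1Q3 lattice: velocities and weights
c = np.array([0, 1, -1])
w = np.array([2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0])
cs2 = 1.0 / 3.0

def equilibrium(rho, u):
    """Standard second-order equilibrium distribution for one component."""
    f_eq = np.empty((3, rho.size))
    for i in range(3):
        cu = c[i] * u
        f_eq[i] = w[i] * rho * (1.0 + cu / cs2 + 0.5 * cu**2 / cs2**2 - 0.5 * u**2 / cs2)
    return f_eq

def lb_step(f, u, forcing, tau):
    """One collision + forcing + streaming step for a single component.

    f       : (3, nx) distributions of this component
    u       : (nx,)  velocity of the overall fluid
    forcing : (3, nx) component-specific forcing term (supplied externally)
    tau     : relaxation time of this component
    """
    rho = f.sum(axis=0)
    f_post = f - (f - equilibrium(rho, u)) / tau + forcing   # BGK collision + forcing
    for i in range(3):                                       # periodic streaming
        f_post[i] = np.roll(f_post[i], c[i])
    return f_post
```

iterating lb_step for every component, with the common fluid velocity recomputed from the summed momenta after each step, reproduces the coupled per-component update described in the text.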
the main task in setting up this lattice boltzmann method is to determine the correct form of the forcing term which will recover the convection diffusion equation ( [ gfcd ] ) .the density of each component and the total density are given by the average velocity of one component and the overall fluid can be defined as where is the average velocity of the component , and is the average velocity of the overall fluid .the moments of equilibrium distributions for one component are chosen to be the moments for the forcing terms of one component are to utilize the analysis of the one component system we can establish a lb equation for the total density by defining similar to the counterparts of the one - component system , the moments for the overall equilibrium distribution function are given by the moments for the overall force terms are then given by using eq .( [ ff3 ] ) , we obtain where the second term of eq .( [ olso ] ) is of a higher order smallness than the first terms , and therefore does not enter the hydrodynamic equations to second order . for the third momentwe have by summing eq .( [ sslb ] ) over , an effective lb equation for the total density is , \label{sglb}\end{aligned}\ ] ] this is identical to the lb equation for a system of one component .therefore , the continuity equation and the navier stokes equation of the overall fluid of a multicomponent system are recovered as where is the macroscopic velocity of the fluid . the navier stokes equation for the overallfluid is : to recover the convection diffusion equation of each component , we performed a taylor expansion on the left of eq .( [ sslb ] ) to second order : because of the recursive nature of eq .( [ s2 lb ] ) , can be expressed by and derivatives of as substituting eq .( [ bfe ] ) into the left side of eq .( [ s2 lb ] ) we obtain summing eq .( [ s3 lb ] ) over gives , the first moment of and are not identical , and the continuity equation can not be obtained .( [ sgao ] ) shows that is of order , and is of order .therefore is of order , and we get + o(\epsilon^3 ) .\end{aligned}\ ] ] so eq .( [ sgao ] ) can be simplified to = 0 .\label{ssim}\end{aligned}\ ] ] eqs .( [ snse ] ) yields from eq .( [ sgao ] ) it follows that inserting eqs .( [ stu ] ) and ( [ sxx ] ) into eq .( [ ssim ] ) we get substituting eq .( [ jeep ] ) into eq .( [ ssim ] ) results in , \label{geq}\end{aligned}\ ] ] from this we deduce that the correct form of the forcing term is where the coefficient is .this coefficient approaches 1 as approaches 0 , as one would expect from the continuum limit .this constitutes the main result of this paper .plugging eq .( [ definef ] ) into eq .( [ geq ] ) , we then obtain the convection diffusion equation . \label{woche}\end{aligned}\ ] ] the diffusion flux of component is so that the in eq .( [ gogogo ] ) is equivalent to in eq .( [ bfjj ] ) .we examined the equilibrium behavior of phase separated binary and ternary systems .we used the flory - huggins free which is a very popular model to study polymer solutions .it is given by where is the polymerization of the component , is its number density , and is its volume fraction .it is defined as where is the mer density of component and is the mer density of the system , which is a constant in the flory - huggins model . 
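for a binary mixture the flory-huggins free energy referred to above, written per lattice site and in units of k_b t, is f = (phi_a / n_a) ln phi_a + (phi_b / n_b) ln phi_b + chi phi_a phi_b; the short routine below evaluates it together with the exchange chemical potential df/dphi_a. this is the textbook binary form, stated here as an assumption consistent with, though not identical in notation to, the paper's multicomponent free energy.

```python
import numpy as np

def flory_huggins(phi_a, n_a, n_b, chi):
    """Mixing free energy per site (units of k_B T) for a binary blend and the
    exchange chemical potential mu = d f / d phi_a, with phi_b = 1 - phi_a."""
    phi_b = 1.0 - phi_a
    f = phi_a / n_a * np.log(phi_a) + phi_b / n_b * np.log(phi_b) + chi * phi_a * phi_b
    mu = (np.log(phi_a) + 1.0) / n_a - (np.log(phi_b) + 1.0) / n_b + chi * (1.0 - 2.0 * phi_a)
    return f, mu
```

the spinodal and critical point follow from the second and third derivatives of f; for a symmetric monomer blend (n_a = n_b = 1) the critical interaction parameter is chi_c = 2.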
to validate our algorithm we compared the binodal lines obtained by our algorithm to the theoretical ones obtained by minimizing the free energy . we used the interfacial tension parameter in all our lb simulations of binary and ternary systems , because there is an intrinsic surface tension in the lb simulation due to higher order terms , which did not appear explicitly in the second order taylor expansion presented in this paper . since we are only evaluating the phase behavior here , we use a one dimensional model known as d1q3 . this model has the velocity set . this is an important test since all other frequently used higher dimensional models have this d1q3 model as a projection . we consider two binary systems : a monomer system with and , and a polymer system with and . for both systems , the total density was . throughout this paper we choose the self interaction parameters to vanish : . the critical volume fractions for the monomer system are and , and for the polymer system are and . to induce phase separation , a small sinusoidal perturbation was added to the initial conditions . the amplitude of the perturbation is 0.1 and its wavelength is the lattice size . the initial volume fraction of component a is given by . the initial volume fraction of component b is given by . the monomer system was simulated with different inverse relaxation times . in figure [ binary_binodal ] we show results for , and . we see that the equilibrium densities have only a very slight dependence on the relaxation time , although the range of stability depends noticeably on the relaxation time . the polymer system was simulated with only one inverse relaxation time of . starting from the critical point and increasing the value for each initial condition until the simulations were numerically unstable , we obtained a pair of binodal points for each initial condition . the system reached a stable state after about 5000 time steps . the measurements were taken after 50000 time steps to be sure that an equilibrium state had been reached . for the polymer system , figure [ lbchem ] shows the comparison of the total density , the volume fractions , and the chemical potentials of each component to the corresponding theoretical values . the total density of a system in equilibrium in the lb simulation is essentially constant , with a variation of . the volume fractions of each component in the lb simulation agree well with the theoretical values . the chemical potential of each component obtained from the lb simulation was very close to the theoretical value . the chemical potential , corresponding to the polymer component , varied slightly , with a difference of about for the bulk values and a variation of about at the interface . this is the underlying reason for the small deviation from the theoretically predicted concentration . the potential was nearly constant , with a variation of less than in the bulk and a variation of about at the interface .
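the theoretical binodal lines used for comparison can be generated by the usual common-tangent construction: the coexisting volume fractions share the same exchange chemical potential and the same value of f minus mu times phi. the self-contained sketch below solves these two conditions numerically for the binary flory-huggins free energy; the solver, the initial guess and the chi value are illustrative only.

```python
import numpy as np
from scipy.optimize import fsolve

def f_mu(phi, n_a, n_b, chi):
    """Flory-Huggins free energy per site and exchange chemical potential."""
    f = phi / n_a * np.log(phi) + (1 - phi) / n_b * np.log(1 - phi) + chi * phi * (1 - phi)
    mu = (np.log(phi) + 1) / n_a - (np.log(1 - phi) + 1) / n_b + chi * (1 - 2 * phi)
    return f, mu

def binodal(n_a, n_b, chi, guess=(0.05, 0.95)):
    """Coexisting volume fractions from the common-tangent conditions
    mu(phi1) = mu(phi2) and f(phi1) - mu(phi1)*phi1 = f(phi2) - mu(phi2)*phi2."""
    def residuals(p):
        (f1, mu1), (f2, mu2) = f_mu(p[0], n_a, n_b, chi), f_mu(p[1], n_a, n_b, chi)
        return [mu1 - mu2, (f1 - mu1 * p[0]) - (f2 - mu2 * p[1])]
    return fsolve(residuals, guess)

# symmetric monomer blend above its critical point chi_c = 2
print(binodal(1, 1, 2.5))       # roughly (0.145, 0.855)
```

initial guesses placed well on either side of the critical composition help the solver avoid the trivial solution with both volume fractions equal.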
for large values of , the discrepancy in the chemical potential increases , leading to the noticeable variation of the equilibrium densities of the polymer system , as shown in figure [ binary_binodal ] . we also performed lb simulations with two ternary systems : a monomer system with , , and , and a polymer system with , , and . the parameters for both systems were , , and . the other parameters were zero . the inverse relaxation time constant for both simulations was . the critical point for the monomer system was , , and . the critical point for the polymer system was , , and . the initial state of each simulation was set from the critical points towards the end point ( , , ) . initially a small sinusoidal wave perturbation with an amplitude of 0.1 and a wavelength of the lattice size was superimposed on the initial volume fraction of the a component . this perturbation was subtracted from the b component , and the c component was constant . we performed an lb simulation for each set of initial volume fractions and obtained the volume fractions of the two phases in the equilibrium state , resulting in two binodal points . the simulation reached a stable state after about 20,000 time steps . the measurements were taken after 200,000 time steps to make sure the equilibrium state was reached . figure [ trilbm10phase ] shows the comparison of the binodal points from the lb simulation to the theoretical binodal lines of both systems . the binodal points obtained by the lb simulation agree fairly well with the theoretical binodal lines for the monomer and polymer systems . the simulation becomes unstable when is close to zero , i.e. when one component is nearly depleted . in this region the simulation results also deviate noticeably from the theoretical binodal lines . immediately near the critical point , the evolution of the system becomes extremely slow , so the slight deviation between the binodal points obtained through the lb simulation and the theoretical ones probably indicates that the lb simulation was not yet fully equilibrated . for the polymer system , figure [ trilbm10ch ] shows a comparison of the volume fractions and chemical potentials of each component . the total density of the system is again nearly constant , with a variation of less than in the bulk . at the interface there is a small variation of . the volume fractions of each phase in the simulation were very close to their theoretical values . the chemical potential of component a was slightly different in the two phases , with a variation of about , while the chemical potentials of components b and c were much closer in the two phases , with a variation of less than . we have presented a general lattice boltzmann algorithm for systems with an arbitrary number of components which is based on an underlying free energy .
in this algorithm the key thermodynamic quantities , such as the chemical potentials of the components , are immediately accessible . it is also manifestly symmetric with respect to all components . we tested the equilibrium behavior of the new algorithm for two and three component systems , examining in each case both monomer and polymer mixtures with an underlying flory - huggins free energy . we obtained the expected phase diagrams to good accuracy , and the chemical potentials were constant to good accuracy for the monomer systems . polymer systems were more challenging to simulate , but we still obtained acceptable results for . higher polymerizations , however , become increasingly difficult to realize with the current algorithm . there are three directions in which we hope to extend this algorithm in the future . first , the current algorithm does not allow for component dependent mobility ; we are working on developing an algorithm that can recover an arbitrary mobility tensor . second , the chemical potential is only approximately constant ; recent progress for liquid - gas systems makes us hopeful that we will be able to ensure that the chemical potential is constant up to machine accuracy . and lastly , we hope to extend the model so that it can simulate polymer systems with significantly larger polymerization .
we present a lattice boltzmann algorithm based on an underlying free energy that allows the simulation of the dynamics of a multicomponent system with an arbitrary number of components . the thermodynamic properties , such as the chemical potential of each component and the pressure of the overall system , are incorporated in the model . we derived a symmetrical convection diffusion equation for each component as well as the navier stokes equation and continuity equation for the overall system . the algorithm was verified through simulations of binary and ternary systems . the equilibrium concentrations of components of binary and ternary systems simulated with our algorithm agree well with theoretical expectations .
one of the most common ways to investigate the properties of a dynamical system is to study how it responds to controlled external perturbations . the response of a system to a weak perturbing field is related to its equilibrium fluctuations by the celebrated fluctuation dissipation relation . the response provides a direct measure of system dynamics and fluctuations . in a time - domain response measurement one uses a series of impulsive perturbations ( fig . [ fig1 ] ) and records some property of the system as a function of their time delays . impulsive perturbations make it possible to study the free dynamical evolution of the system during the time delays , unmasked by the time profile of the perturbing field . furthermore , the joint dependence on several time delays can be used to separate the contributions of different dynamical pathways . due to its dependence on multiple time delays this method is termed multidimensional . the response of a system is typically measured by the expectation value of some operator . this is a linear functional of the system probability density ( or the density matrix for quantum systems ) . multidimensional response measurements have had considerable success in nonlinear spectroscopy , due to the ability to control and shape optical fields . applications range from spin dynamics in nmr and vibrational dynamics of proteins in the infrared to electronic energy transfer in photosynthetic complexes as probed by visible pulses . these span a broad range of timescales , from milliseconds to femtoseconds . interestingly , there exist nonlinear functionals of probability densities which have interesting physical interpretations . one such quantity is the von neumann entropy . a related quantum nonlinear measure , the concurrence , serves as a measure of quantum entanglement . the kullback - leibler distance ( kld ) or relative entropy , , which compares one probability distribution to another , is a nonlinear measure that has been found useful in many applications . this paper aims at developing multidimensional measures based on the kld . the numerous applications of the kld differ in the probability distributions involved . the ratio of the probability of a stochastic path and its reverse at a steady state has been connected to a change of entropy . for an externally driven system a similar quantity was found to be related to the work done on the system . as a result , the kld which compares the path distribution to a distribution of reversed paths is a measure of the lack of reversibility of a thermodynamical process .
for distributions in phase - space ( as opposed to path - space ) , the kld between the density of a driven system and the density of a reversed process , or the distance between the driven density and the corresponding equilibrium density ( for the same value of parameters ) were shown to be bounded by dissipated work in the process .the transfer of information through a stochastic resonance is quantified by the kld between the probability distributions with and without the external input .the ability of neuronal networks to retain information about past events was characterized by the fisher - information , which is closely related to the kld between a distribution and the one obtained from it by a small perturbation .we shall examine the response of a system to impulsive perturbations which drive it out of a stationary ( steady state or equilibrium ) state .the kld between the distribution before and after the perturbation does not correspond to an entropy , or work .however , since it compares the perturbed and unperturbed densities , it characterize how `` easy '' it is to drive the system away from its initial state .in ordinary response theory , one compared the expectation values of some operator taken over the perturbed and unperturbed probability density .this depends on the specific properties of the observed operator .the kld is a more robust measure for the effect of the impulsive perturbation on the probability density . by expanding the kld in the perturbation strengthwe obtain a hierarchy of kullback - leibler response functions ( klrf ) .these differ qualitatively from the hierarchy of ordinary response functions ( orf ) , since they are nonlinear in the probability density .the klrf serve as a new type of measures characterizing the dynamics and encoding different information than the orf .for example , the second order klrf , which we connect to the fisher - information , is found to exhibit qualitatively different dependence on the time delays , depending on whether the system is perturbed out of a steady state or out of thermal equilibrium .this is in contrast to the corresponding orf .the fluctuation dissipation relation , which is linear in the density matrix , can also distinguish between systems driven out - of - equilibrium and out of a steady state .the kld offer a different window into this aspect .the structure of the paper is as follows . in sec .[ formalsec ] we describe the multidimensional measures and present the two heirarchies of orf and klrf response functions .these are then calculated using a formal perturbation theory in the coupling strength to the external perturbation . in sec .[ overdampedsec ] we show that for systems undergoing overdamped stochastic dynamics the non - linear klrf are naturally described using a combination of the stochastic dynamics and its dual dynamics . in sec .[ mastersec ] we extend the results of sec .[ overdampedsec ] to discrete markovian systems with a finite number of states .our results are discussed in sec .[ discsec ] .we consider a system initially at a stationary state ( either equilibrium or a steady state ) , which is perturbed by a series of short pulses , as depicted in fig .[ fig1 ] . andsubjected to impulsive perturbations .the pulse is centered at and its strength is denoted by . 
are the time intervals between successive pulses .[ fig1 ] ] the probability distribution describing the driven system at time , , depends parametrically on , the strength of the pulse , as well as on the time differences between the pulses with .ordinary response theory focuses on the expectation value of some observable ,\ ] ] and its dependence on the parameters .the lowest ordinary response functions ( orf ) are and so forth .the time differences in eqs .( [ defr2])-([defr4 ] ) can be expanded in terms of the time delays between the pulses , . are used to investigate various properties of the unperturbed dynamics , such as the existence of excited modes , and the relaxation back to a steady state . here, we focus on different , but closely related quantity . instead of studying an expectation value of an observable ,we focus on a quantity that compares the perturbed and unperturbed probability distributions .the kld , also known as the relative entropy , is defined as the kld vanishes when the two distributions are equal , and is positive otherwise , for .note that the kld is not a true distance since and it does not satisfy the triangle inequality .the kld measures the dissimilarity between two distributions .it had found many applications in the field of information theory .for instance , the mutual information between two random variables is , where is the joint distribution while ( ) is the marginal distribution of ( ) . in the present applicationthe kullback - leibler distance is a measure for the deviation of the system from its initial state . in a manner similar to the definition of the orf, we define a klrf hierarchy by taking derivatives of the kld with respect to pulse strengths , and displaying them with respect to the time delays all the derivatives are calculated at , and we have used the relation . this is also true for all other derivatives in the following . 
to keep the notation simplewe will not state this explicitly .higher order klrf are defined similarly .it is important to note that the orf are linear in whereas the kld are nonlinear .we thus expect the kld to carry qualitatively different information about the dynamics .the second derivative ( [ second ] ) , known as the fisher memory ( or information ) matrix , plays an important role in information theory , since the cramer-rao inequality means that it is a measure of the minimum error in estimating the value of a parameter of a distribution .the fisher information has been used recently to analyze the survival of information in stochastic networks .conservation of probability implies that the first klrf vanishes the second derivative , the fisher memory matrix , is given by a straightforward calculation allows to recast the third order derivative of in terms of products of lower order derivatives in what follows the derivatives will be calculated perturbatively in .it is important to note that the derivative of has contributions from interaction with at most pulses .the contribution from the linear component , which interacts with pulses , has the same structure of the perturbation theory for observables ( which is also linear ) .however , since is a non linear function of the derivative contains a non - linear contribution which is a product of lower order contributions for .the klrf encode qualitatively new information about the system dynamics in comparison to the orf .the time evolution of the probability distribution is given by this formal equation is quite general , and can describe either hamiltonian ( unitary ) or stochastic dynamics where the operator will accordingly be the liouville , or the fokker - planck operator . for a system subjected to a time dependent weak perturbation we can write where we assume that the unperturbed system is time independent and is intially in a steady state , , so that .we consider an impulsive perturbation of the form where describes the action of a pulse on the probability distribution , and is the overall strength of the pulse .using these definitions , the state of the system at time can be expanded as a power series in the number of interactions with the pulses the partial corrections for the density , , appearing in eq .( [ pseries ] ) , contain all the information necessary for computing both the klrf and orf .they are given by here $ ] is the free propagator of the unperturbed system .conservation of probability requires that , which , in turn , means that .( [ pseries])-([defs2 ] ) can be used to calculate the logarithmic derivatives , which then determine the klrfs .we will only need the first two logarithmic derivatives , which are given by and to calculate the orf , we substitute eq .( [ pseries ] ) in eq .( [ avga ] ) , resulting in it is interesting to compare the expressions of the klrf the orf .we calculate and for perturbing pulses . to leading order, we find at the next order , we compare the fisher information to the second order orf , and the non - diagonal elements of depend on the two delay times .expressions for the third order response functions are given in app .[ thirdorderformal ] . both and on the same set of time intervals with some important differences . vanishes , while the linear response does not . 
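numerically , the kld and the fisher memory matrix defined above can be estimated directly from the perturbed densities . the following is a minimal finite - difference sketch for discretised distributions ; the function rho_of_eps , which returns the density after a pulse sequence of strengths eps , is a hypothetical user - supplied propagator and is not part of the formalism above .

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """kullback-leibler distance d(p||q) = sum_x p(x) log(p(x)/q(x))
    for two discretised, normalised densities."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    mask = p > eps                       # convention: 0 log 0 = 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def fisher_memory(rho_of_eps, n_pulses, d=1e-4):
    """finite-difference estimate of the fisher memory matrix
    q_ij = sum_x (drho/deps_i)(drho/deps_j) / rho0, i.e. the second
    derivative of the kld at vanishing pulse strengths."""
    eps0 = np.zeros(n_pulses)
    rho0 = rho_of_eps(eps0)
    grad = []
    for i in range(n_pulses):
        e = eps0.copy()
        e[i] = d
        grad.append((rho_of_eps(e) - rho0) / d)   # first-order density change
    q = np.empty((n_pulses, n_pulses))
    for i in range(n_pulses):
        for j in range(n_pulses):
            q[i, j] = np.sum(grad[i] * grad[j] / rho0)
    return q
```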
and have a different structure : can be calculated from the second order correction to the density ( or ) while is determined from a product of s describing the first order interaction with different pulses .this difference reflects the non - linear dependence of the klrf on , and also applies to higher orders .a comment is now in order regarding our choice of the kld ( [ defkld ] ) .we have chosen to use as the measure for the effect of the perturbations . would have been equally suitable . however , as discussed in app .[ hamiltonianapp ] , the leading order of both klds in the strength of the perturbation , i.e. their fisher informations , coincide . therefore all the following results pertaining to the fisher information would hold for either choice .in the following we use the formal results of sec . [ formalsec ] to calculate the leading order orf and klrf for a system undergoing overdamped stochastic dynamics .we show that the fisher information is related to a forward - backward stochastic process .the backward part is driven by the -dual process , which will be simply referred to as the dual in what follows .the fisher information is found to exhibit qualitatively different properties for systems perturbed from equilibrium , or from a steady state .we also use the eigenfunctions and eigenvalues of the dynamics to derive explicit expressions for several low order orf and klrf . in stochastic dynamicsthe probability density plays the role of a reduced density matrix , which depends on a few collective coordinates . in this reduced descriptionthe entropy typically increases with time .this should be contrasted with a description which includes all the degrees of freedom , where the dynamics is unitary and the entropy does not change in time . for completeness unitary dynamicsis discussed in app .[ hamiltonianapp ] .the fisher information can be represented in terms of the dual stochastic dynamics .this interesting property reflects its non - linear dependence on .we examine a stochastic dynamics of several variables , given by here we use the ito stochastic calculus . ) .both methods are equally viable as long as they are used in a consistent manner .details can be found is ref .the noise terms are assumed to be gaussian with with a symmetric positive definite matrix . while for many systems this matrix does not depend on the coordinate , , this assumption will not be used in what follows. equation ( [ stochasticoriginal ] ) is equivalent to the fokker - planck equation in what follows we present the dual dynamics , which can be loosely thought as the time reversed dynamics : it have the same steady state , but with reversed steady state current .we consider the current density the fokker - planck equation can be written in terms of the current , the steady - state is the solution of we write which defines . 
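the overdamped dynamics of eq . ( [ stochasticoriginal ] ) can also be realised directly in simulation . a minimal ito ( euler - maruyama ) integrator is sketched below ; for simplicity it assumes a constant noise matrix d , even though the formalism above allows an x - dependent one , and the drift function is user - supplied .

```python
import numpy as np

def euler_maruyama(drift, D, x0, dt, n_steps, rng=None):
    """ito integration of dx = A(x) dt + xi dt with
    <xi_i(t) xi_j(t')> = 2 D_ij delta(t-t'); D constant (an assumption)."""
    rng = rng or np.random.default_rng()
    D = np.atleast_2d(np.asarray(D, float))
    L = np.linalg.cholesky(2.0 * D * dt)       # per-step noise amplitude
    x = np.atleast_1d(np.asarray(x0, float)).copy()
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + np.atleast_1d(drift(x)) * dt + L @ rng.standard_normal(x.size)
        traj.append(x.copy())
    return np.array(traj)
```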
for systems at equilibrium simply the potential .however , this is not the case for general steady states .the steady state current can be written as after some algebra , the generator of the stochastic dynamics can be written in terms of the steady state density and currents the dual dynamics is given by with a straightforward calculation gives with it is a simple matter to verify that the dual dynamics has the same steady state as the original one , but the steady state currents have opposite signs .it can be simulated by integrating the ito stochastic equation the dual dynamics reverses the non conservative forces in the system .this relates the joint probability to go from one place to another in the original dynamics to the joint probability of the reversed sequence of events in the dual dynamics the left hand side of eq .( [ conddual ] ) is the joint steady state probability to first the system at , and at after a time .the right hand side is the joint probability of the reversed sequence of events , but for a modified dynamics . when this modified dynamics is the dual these joint probabilities become equal .we next turn to discuss the system s response to a series of impulsive perturbation .we assume that the perturbation is of the form with as a potential field perturbing the system . using eq .( [ defs1 ] ) we have where with the help of equation ( [ conddual ] ) , we obtain we now have all the tools needed to compare the orf and klrf for overdamped stochastic dynamics .the leading order response function is given by while .at the next order , we have and b({\bf x}_0 ) \rho_0 ( { \bf x}_0 ) .\label{r2deriv}\ ] ] some insight into the structure of different response functions can be gained by representing them as ensemble averages over stochastic trajectories. the first order response function can be simulated directly using stochastic trajectories of the original dynamics .the appearance of a derivative of the conditional probability complicates the direct simulation of . it may be possible to circumvent this difficulty using the finite field method , where one combined simulations with and without a finite , but small perturbation . can be simulated with trajectories which follow the original dynamics for time and then the dual dynamics for time .systems at equilibrium are self dual , allowing to substitute in eq .( [ q12usingdual ] ) . as a resultthe fisher information only depends on a single time variable .this is in contrast to systems which are perturbed from a nonequilibrium steady state , whose fisher information is a two dimensional function of and .the fisher information is therefore qualitatively different for systems which are perturbed out of a steady state , or out of an equilibrium state . for the self - dual case has the same structure as , up to a replacement of with .however , in the general , non self - dual case , the structure of is manifestly different than that of and .an alternative approach for the calculation of the fisher information , as well as higher order klrf , uses eigenfunction expansions for the density .it will be sufficient to examine a simple one dimensional model where is a fokker - planck operator and the perturbation is given by eq .( [ la ] ) . 
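before turning to the eigenfunction treatment of this one dimensional model , note that the forward - backward trajectories entering the fisher information can be generated explicitly by integrating the dual dynamics . a minimal sketch , assuming a constant noise matrix and that the gradient of ln rho_ss is available ( analytically or from an estimate ) ; under these assumptions the dual drift takes the familiar form a*(x) = -a(x) + 2 d grad ln rho_ss(x) .

```python
import numpy as np

def make_dual_drift(drift, grad_log_rho_ss, D):
    """dual drift a*(x) = -a(x) + 2 d grad ln rho_ss(x) for constant d
    (the constant-d restriction and grad_log_rho_ss are assumptions of this
    sketch).  the dual process has the same steady state as the original one,
    but with reversed steady-state currents."""
    D = np.atleast_2d(np.asarray(D, float))
    def dual(x):
        return -np.atleast_1d(drift(x)) + 2.0 * D @ np.atleast_1d(grad_log_rho_ss(x))
    return dual

# usage idea: a forward segment of length t1 integrated with `drift`, followed
# by a backward segment of length t2 integrated with make_dual_drift(...),
# both via the euler_maruyama sketch above, realises one forward-backward
# trajectory contributing to the fisher information.
```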
the propagation of the unperturbed system can be described in terms of the eigenvalues and eigenfunctions of .the right eigenfunctions satisfy similarly , the left eigenfunctions , , satisfy .it is assumed that the right and left eigenfunctions constitute a byorthogonal system , that is any probability density can be expanded in terms of right eigenfunctions , with we consider systems perturbed out of thermal equilibrium , with a probability density . in this case the left eigenvalues are simply related to the right eigenvalues . for variable which is even with respect to time reversal , this relation takes the form as a result , one can use only the left eigenfunctions , which in this case satisfy the orthogonality condition we note that and correspond to the equilibrium distribution .our goal is to calculate the response of the system to a perturbation with a coordinate dependent operator .several types of integrals appear repeatedly in the calculation , and it will be convenient to introduce an appropriate notation .one comes from the need to decompose the probability distribution into eigenstates after each interaction with a pulse = -\int dx \frac{\partial q_m}{\partial x } \frac{\partial a}{\partial x } q_n(x ) \rho_0(x),\label{defb}\ ] ] where integration by parts was used in the second equality .( we assume that falls of fast enough to eliminate boundary terms . )the calculation of the response also involves an evaluation of the average of an observable , which , in the current setting , leads to integrals of the form it is straightforward to calculate and with the help of eq .( [ defb ] ) .we find and we now calculate the first few orf and klrf .the first order response functions is similarly , the second order response function is given by should be compared to the klrf of the same order , namely the fisher information },\ ] ] which is calculated with the help of eqs .( [ fisher2 ] ) and ( [ orthleft ] ) .again , the fisher information of systems perturbed out of equilibrium depends only on . for a system initially in a steady state the fisher information is - \lambda ( m ) t_2 } \int dx \rho_0^{-1 } ( x ) \rho_n ( x ) \rho_m ( x ) .\label{geneigenq}\ ] ] however , since the relation eq .( [ lrrelation ] ) does not hold in this case , the integral in eq .( [ geneigenq ] ) does not vanish for , and the fisher information becomes a two dimensional function of and . and are calculated in app .[ thirdordereigen ] , where it is shown that is a three dimensional function of all its time variables .this qualitatively different signature of systems initially at steady state vs equilibrium is unique to the fisher information , due to its quadratic dependence on . 
it does not apply to higher order quantities , such as .the eigenfunctions for a simple example , of an harmonic oscillator with an exponential perturbation , are presented in app .[ exampleosc ] .the description of the klrf in terms of a combination of the regular stochastic dynamics and its dual holds also for markovian systems with a finite number of states .below we derive a simple expression for the fisher information of a stochastic jump process in terms of the dual dynamics of the original process .consider a system with a finite number of states , undergoing a markovian stochastic jump process described by the master equation is a vector of probabilities to find the system in its states and is the transition rate matrix .its off diagonal elements are positive , for , and express the rate of transitions from state to , given that the system is at .the diagonal elements satisfy .we further assume that there exist a unique steady state , , satisfying and that at this steady state there is a non vanishing probability to find the system at each of the states , .the master equation is one of the simplest models for irreversible stochastic dynamics .below we briefly describe some of its relevant properties , such as the backward equation , and its dual dynamics .one can define an evolution operator which satisfy the equation of motion with the initial condition . for this modela dynamical variable is a vector with a value corresponding to each state of the system .its expectation value is given by instead of calculating this average by propagating , one can define a time dependent dynamical variables , , such that .this is the analogue of the scrdinger and heisenberg pictures in quantum mechanics .the equation of motion for is easily shown to be this equation is known as the backward equation , which turns to be related to the dual dynamics .( the backward equation is also written as a function of an _ initial _ rather than a final time . ) and have the same eigenvalues . however , the roles of the right and left eigenvectors are interchanged and decays to a uniform vector with at long times .let us now define a diagonal matrix , using .the dual evolution is then defined as the fact that is built from the steady state of , , guaranties that is a physically reasonable rate matrix , that is , that is satisfies .the dual dynamics describes a physically allowed process which has the same steady state as the original process it was derived from .however , at this steady state the dual currents have opposite signs compared to the steady state currents of the original dynamics .a process is self - dual , that is , , if and only if it satisfies detailed balance .self duality is therefore related to being in thermal equilibrium .consider a system subjected to several impulsive perturbations where corresponds to some physical perturbation . herethe free evolution is given by similarly the evolution during an impulsive perturbation can be described by where is arbitrarily small .we expand the exponent in eq .( [ impulse ] ) , and collect all terms of the same order in .the expression for the fisher information , following eq .( [ fisher2 ] ) is this expression can be simplified by writing one of the propagators in terms of the dual process , using the relation after some algebraic manipulations , we find in eq .( [ fisherdual ] ) we have defined . is not the dual of since is not composed of the eigenvector of which corresponds to a vanishing eigenvalues .( is composed of the eigenvalue of . 
)equation ( [ fisherdual ] ) is the analogue of eq .( [ q12usingdual ] ) for discrete systems .it demonstrates that for this model the fisher information can be simulated using a combination of the ordinary process and its dual .however one must also include a ( possibly artificial ) dual perturbation , , which can nevertheless be computed from the known physical one .as before , equation ( [ fisherdual ] ) shows that the fisher information , for self - dual systems , is a function of . as a side remark ,the model considered here has only even degrees of freedom under time reversal .more general models may also include odd degrees of freedom , such as momenta . in that caseone may speculate that there would be anti - self - dual systems whose dual turns out to be the time reversed dynamics .this would lead to a fisher information depending on alone , as is the case for unitary dynamics .in this work we have studied a system driven from its steady state by a sequence of impulsive perturbations .we have defined a new set of measures for the response of the system to the perturbation , the klrf , which are given by the series expansion of the kullback - leibler distance between the perturbed and unperturbed probability distributions . at each orderthe klrf and orf depend on the same time differences between the pulses . however , there are important differences stemming from the nonlinear dependence of the klrf on . the expression for the klrf , for instance eqs . ( [ fisher2 ] ) and ( [ q123 ] ) reveal quantities which can be simulated using several trajectories which end at the same point .we have shown that a simpler , but equivalent description exists .it uses the dual dynamics which allows to `` run some of the trajectories backward '' .this description is especially appealing for the fisher information .instead of viewing the fisher information as composed over sum of pair of trajectories joined at their end point one can view it as an average of contributions of a single forward - backward trajectory .another difference between the klrf and the orf has to do with the appearance of derivatives of conditional probabilities , which for _ deterministic _ systems would correspond to groups of very close trajectories .these first appear in , see for instance eqs .( [ r2deriv ] ) for .the nonlinear character of the klrf means that such terms appear in comparatively higher order of the perturbation theory .for example , contributes to but not to the fisher information .instead it first contributes to .we have demonstrated that the fisher information behaves in a qualitatively different way depending on whether the system is perturbed from equilibrium or from an out - of - equilibrium steady state .for the classically stochastic systems considered here we have seen that in the former case .that is , the fisher information is a one dimensional function of the two time delays .this qualitative difference results from the self - duality of equilibrium dynamics , which is another expression of the principle of detailed balance . does not show a similar reduction of dimension [ see eq .( [ eigenq3 ] ) ] .this property is special to the fisher information .we have focused on the properties of the fisher information for overdamped stochastic dynamics .do other types of systems exhibit similar behavior ? in app .[ hamiltonianapp ] we consider deterministic hamiltonian systems . 
in that casethe unitary dynamics results in a fisher information which depends only on the time .it will be of interest to study how ( not overdamped ) stochastic systems bridge between the overdamped and unitary limits .we showed that the klrf can serve as a useful measure characterizing the system s dynamics .they encode information which differs from the information encoded in the orf .this is demonstrated by the ability of the fisher information to distinguish between systems perturbed out of equilibrium or out of a non - equilibrium steady state .we expect other useful properties of the klrf to be revealed by further studies .one of us ( s. m. ) wishes to thank eran mukamel for most useful discussions .the support of the national science foundation ( grant no . che-0745892 ) and the national institutes of health ( grant no .gm-59230 ) is gratefully acknowledged .for completeness , in this appendix we discuss the application of nonlinear response theory to deterministic hamiltonian systems .we use simple examples to clarify the relation between the response of a system and the kullback - leibler distance .the hamiltonian of the system is assumed to be of the form , where is the perturbation .we start with some general comments stemming from the fact that the dynamics in phase space is an incompressible flow .the state of a classical system is described by its phase space density .it is important to note that this is the full probability density , which includes information on all the degrees of freedom .let us denote the propagator of the classical trajectories by , so that where denotes a phase space point ( all coordinates and momenta ) .similarly , the propagator for the probability distribution is denoted by .liouville theorem tells us that phase space volumes do not change in time . as a result, there is a simple relation between the phase space density at different times , the density is simply transported with the dynamics in phase space . the density at at time is equal to the density at at time .this property of the phase space dynamics have an interesting consequence .any integral whose integrand depends locally on alone is time independent , since the values of the integrand are just transported around by the dynamics .an interesting example is the entropy function let us change the integration variable to , which is just the phase space point which would flow to after a time .liouville theorem assures us that the jacobian of the transformation is unity and therefore where we have used eq .( [ transport ] ) .it is clear that this entropy does not depend on time .( [ constent ] ) is a result of the unitary evolution in phase space .it connects with all the intricate problems related to the emergence of macroscopic irreversibility out of microscopic reversible dynamics .this deep problem is beyond the scope of the current paper .such problems are circumvented when one uses a reduced probability density , whose dynamics is irreversible to begin with .unitary dynamics lead to an interesting result for the reverse kullback - leibler distance we have seen that the first term is a constant of the motion . assuming a hamiltonian system intially in equilibrium .we have where is the free energy of the initial equilibrium state . 
is therefore linear in , and thus it is equivalent to a calculation of response functions !the general comments above raise two interesting points .first , the fact that terms such as are constant means that their expansion in powers of pulse strengths turns out to have only the constant term , all other terms in the expansion must vanish .the second point of interest has to do with the relation between the kullback - leibler distance and its reverse .generally , .we are interested in systems pertubed by a series of pulses and compare the initial and final distributions . in that case we can use unitarity to change variables from the phase space points at the final time to the points at the initial time which are connected to it by the dynamics .this gives here , that is , is the density that would evolve to the equilibrium density under the influence of the pulses .it is not equal to , which tells us that the distance and its reverse are not equal .while in general the distance and its reverse are not equal , when the distance between the distributions is small , its easy to show that their leading order expansion in is the same . loosely speaking can deduce that the fisher information , which is determined by this leading order , could be obtained from a calculation of response for systems with unitary evolution .( since it could also be obtained from the reverse distance . )our last general point is also related to unitarity , but has to do with the fisher information . according to eq .( [ fisher2 ] ) the fisher information is built from two partial densities .( more accurately , density differences . )these evolve with respect to the same hamiltonian between interactions with the pulses . as a result , the integral in eq .( [ fisher2 ] ) do not depend on , for the same reason that caused the entropy to be time independent .we find that as a result of the unitarity of the phase space dynamics .similar independence on the final time interval would also appear for higher order terms in the expansion of the kullback - leibler distance . the simplest system that can serve as an example is a single harmonic oscillator we take the perturbation to be for this system it is trivial to solve for the free evolution this relation can be inverted , expressing in terms of eq . ( [ reverseosc ] ) will be useful when one propagates probability distributions in time .the equilibrium distribution of the harmonic oscillator is a gaussian we also note that is the linear order correction for the density just after interaction with one pulse . to calculate we need to propagate eq .( [ tempdrho ] ) . with the help of eqs .( [ transport ] ) and ( [ reverseosc ] ) we find } \rho_0 ( q , p).\ ] ] in the derivation we have used the fact that is invariant under the evolution with respect to . 
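as an aside , the free evolution and the transport property of eq . ( [ transport ] ) are easy to verify numerically for this oscillator . the sketch below uses purely illustrative values of m , omega and beta : it transports the equilibrium gaussian density on a grid and checks that the normalisation is preserved , as liouville s theorem demands .

```python
import numpy as np

m, omega, beta = 1.0, 1.0, 1.0             # illustrative values

def rho0(q, p):
    """equilibrium (gaussian) phase-space density of the oscillator."""
    z = 2.0 * np.pi / (beta * omega)
    return np.exp(-beta * (p**2 / (2*m) + 0.5 * m * omega**2 * q**2)) / z

def backward(q, p, t):
    """phase-space point that evolves into (q, p) after a time t."""
    q0 = q * np.cos(omega*t) - p / (m*omega) * np.sin(omega*t)
    p0 = p * np.cos(omega*t) + m*omega * q * np.sin(omega*t)
    return q0, p0

q, p = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
dqdp = (q[0, 1] - q[0, 0]) * (p[1, 0] - p[0, 0])
rho_t = rho0(*backward(q, p, t=1.3))       # transported density at t = 1.3
print(np.sum(rho_t) * dqdp)                # stays ~1 (liouville / eq. transport)
```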
to calculate we operate with on , and then propagate the resulting correction for the density for a time interval .a straightforward calculation leads to + \frac{1}{q_0 m \omega } \sin \omegat_1 \right ) \right\ } \\ & \times & \exp \left[-\frac{q}{q_0 } \left ( \cos \omega t_2 + \cos \omega ( t_1+t_2)\right ) + \frac{p}{m \omega q_0}\left(\sin \omega t_2 + \sin \omega ( t_1+t_2 ) \right)\right ] \rho_0 \nonumber\end{aligned}\ ] ] we now turn to calculate the first two orf , using eqs .( [ formalr1 ] ) and ( [ formalr2 ] ) .the calculation of is cumbersome but straightforward ..\ ] ] the calculation of is more involved , and we only include the final result \\\times \exp \left [ \frac{1}{m \beta \omega^2 q_0 ^ 2 } \left\ { \frac{3}{2 } + \cos \omega t_1 + \cos \omega t_2 + \cos \omega ( t_1+t_2)\right\}\right].\end{gathered}\ ] ] we would like to compare these response functions to the klrf , and in particular to the fisher information , which can be calculated using eq .( [ fisher2 ] ) .the calculation can be simplified by using the classical coordinates right after the second interaction with the pulse as integration variables . due to unitarity, the fisher information only depends on the time difference , see also the discussion in the previous subsection .we get \exp \left [ \frac{1}{\beta m \omega^2q_0 ^ 2 } \left ( 1+\cos \omega t_1\right)\right].\ ] ] this expression seems similar to the first order response function .one can indeed show that they are related by it will be interesting to check whether this expression could be generalized to any hamiltonian system ( with unitary dynamics ) .in the main text we have calculated the orf and klrf to second order . in the appendix we present the third order quantities , , and .following the calculations performed in sec .[ formalsec ] , the third order response functions are given by and \right .\\ \left . -2\rho_0^{-2}(x ) { \cal s}^{(1)}(x;t_1+t_2+t_3 ) { \cal s}^{(1)}(x ; t_2+t_3 ) { \cal s}^{(1)}(x ; t_3 ) \right\},\label{q123}\end{gathered}\ ] ] where here we present expressions for the third order orf and klrf in term of the eigenvalues and eigenfunctions of the stochastic dynamics . at high orders the nonlinear character of the contributions to the kullback - leibler distance result in integrals with products of several eigenfunctions . herewe will only need the one with three eigenfunctions the third order orf is given by this orf should be compared to the third order klrf , which is calculated using eq .( [ q123 ] ) .we find } \right .+ e^{-\lambda(n ) \left [ t_1 + t_2 \right ] } e^{-\lambda ( m ) \left [ t_2 + 2 t_3\right ] } + e^{-\lambda ( n ) t_1 } e^{-\lambda ( m ) \left [ t_2 + 2 t_3\right ] } \right\ } \\ - 2 \sum_{nml } { \cal b}_{n0 } { \cal b}_{m0 } { \cal b}_{l0 } { \cal j}_{nml } e^{-\lambda ( n ) \left [ t_1+t_2 + t_3\right ] } e^{-\lambda ( m ) \left [ t_2 + t_3\right ] } e^{-\lambda ( l ) t_3}. \label{eigenq3}\end{gathered}\ ] ] the expression for , presented in eq .( [ eigenq3 ] ) , has three terms in which the orthogonality condition ( [ orthleft ] ) has been used , pointing to a reduction of dimension in the time dependence of this specific term . however , the time combinations in these terms are all different . 
in addition , the fourth term in eq .( [ eigenq3 ] ) clearly depends on all its time variables .we conclude that in contract to the fisher information , the higher order klrf depend on all their time variables .the reduction of dimension is therefore specific for the fisher information .it results from the fact that it is built out of a single product of two density corrections .in this appendix we consider a simple example of an overdamped harmonic oscillator with a potential , with a perturbing potential . for this system it is possible to write explicit expressions for the eigenvalues and eigenfunctions of the fokker - planck operator , as well as to perform several of the integral , defined in sec .[ overdampedsec ] . in this case \\\hat{\cal l}_a \rho & = & - \frac{\alpha}{q_0 \gamma } \frac{\partial}{\partial q } \left [ e^{-q / q_0 } \rho \right].\end{aligned}\ ] ] this model has been studied extensively .the equilibrium density is given by the left eigenfunctions , and the eigenvalues , of this model are here , are the hermit polynomials. for instance , one can substitute eq .( [ hn2 ] ) for the hermit polynomial , and use integration by parts .this leads to this is a gaussian integral which is easily evaluated to give one can also easily calculate the integrals by the same technique .one finds comparing eqs .( [ osccalc ] ) and ( [ osccalb0 ] ) , we see that for this model as a result , there is a simple relation between the fisher information and the first order response function , this relation is a special result for this model , and is not expected to hold for other systems .one can also obtain explicit results for with non vanishing indices .however , the calculation and the result are quite combersome , and are omitted .we were not able to calculate explicitly .
we present a lattice boltzmann algorithm based on an underlying free energy that allows the simulation of the dynamics of a multicomponent system with an arbitrary number of components . the thermodynamic properties , such as the chemical potential of each component and the pressure of the overall system , are incorporated in the model . we derived a symmetrical convection diffusion equation for each component as well as the navier stokes equation and continuity equation for the overall system . the algorithm was verified through simulations of binary and ternary systems . the equilibrium concentrations of components of binary and ternary systems simulated with our algorithm agree well with theoretical expectations .
by subjecting a dynamical system to a series of short pulses and varying several time delays we can obtain multidimensional characteristic measures of the system . multidimensional kullback - leibler response functions ( klrf ) , which are based on the kullback - leibler distance between the initial and final states , are defined . we compare the klrf , which are nonlinear in the probability density , with ordinary response functions ( orf ) obtained from the expectation value of a dynamical variable , which are linear . we show that the klrf encode a different level of information regarding the system dynamics . for overdamped stochastic dynamics , the two dimensional klrf show a qualitatively different variation with the time delays between pulses , depending on whether the system is initially in a steady state or in thermal equilibrium .
any network found in the literature is inevitably just a sampled representative of its real - world analogue under study .for instance , many networks change quickly over time and in most cases merely incomplete data is available on the underlying system .additionally , network sampling techniques are lately often applied to large networks to allow for their faster and more efficient analysis . since the findings of the analyses and simulations on such sampled networks are implied for the original ones , it is of key importance to understand the structural differences between the original networks and their sampled variants . a large number of studies on network sampling focused on the changes in network properties introduced by sampling .lee et al . showed that random node and link selection overestimate the scale - free exponent of the degree and betweenness centrality distributions , while they preserve the degree mixing . on the other hand ,random node selection preserves the degree distribution of different random graphs and performs better for larger sampled networks .furthermore , leskovec et al . showed that the exploration sampling using random walks or forest - fire strategy outperforms the random selection techniques in preserving the clustering coefficient , different spectral properties , and the in - degree and out - degree distributions .more recently , ahmed et al . proposed random link selection with additional induction step , which notably improves on the current state - of - the - art .their results confirm that the proposed technique well captures the degree distributions , shortest paths and also the clustering coefficient of the original networks . lately, different studies also focus on finding and correcting biases in sampling process , for example observing the changes of user attributes under the sampling of social networks , analyzing the bias of traceroute sampling and understanding the changes of degree distribution and hubs inclusion under various sampling techniques .however , despite all those efforts , the changes in network structure introduced by sampling and the effects of network structure on the performance of sampling are still far from understood .real - world networks commonly reveal communities ( also link - density community ) , described as densely connected clusters of nodes that are loosely connected between . communities possibly play important roles in different real - world systems , for example in social networks communities represent friendship circles or people with similar interest , while in citation networks communities can help us to reveal relationships between scientific disciplines .furthermore , community structure has a strong impact on dynamic processes taking place on networks and thus provides an important insight into structural organization and functional behavior of real - world systems .consequently , a number of community detection algorithms have been proposed over the last years ( for a review see ) .most of these studies focus on classical communities characterized by higher density of edges .however , some recent works demonstrate that real - world networks reveal also other characteristic groups of nodes like groups of structurally equivalent nodes denoted modules ( also link - pattern community and other ) , or different mixtures of communities and modules . 
although community structure appears to be an intrinsic property of many real - world networks , only a few studies have considered the interplay between community structure and network sampling . salehi et al . proposed page - rank sampling , which improves the performance of sampling for networks with strong community structure . furthermore , expansion sampling directly constructs a sample representative of the community structure , while it can also be used to infer communities of the unsampled nodes . other studies , for example , analyzed the evolution of community structure in collaboration networks and showed that the number of communities and their size increase over time , while network sampling has a potential application in testing for signs of preferential attachment in the growth of networks . however , to the best of our knowledge , the question whether sampling destroys the structure of communities and other groups of nodes , or whether sampled nodes are organized in a similar way as the nodes in the original network , remains unanswered . in this paper , we study the presence of characteristic groups of nodes in different social and information networks and analyze the changes in network group structure introduced by sampling . we consider six sampling techniques , including random node and link selection , network exploration and expansion sampling . the results first reveal that nodes in social networks form densely linked community - like groups , while the structure of information networks is better described by modules . however , regardless of the type of the network and consistently across different sampling techniques , the structure of the sampled networks exhibits a much stronger characterization by community - like groups than the original networks . we therefore conclude that the rich community structure is not necessarily a result of , for example , homophily in social networks . the rest of the paper is structured as follows . in section [ sec : sampl ] we introduce the different sampling techniques considered in the study , while the adopted node group extraction framework is presented in section [ sec : nodegroups ] . the results of the empirical analysis are reported and formally discussed in section [ sec : analys ] , while section [ sec : conclusion ] summarizes the paper and gives some prominent directions for future research . network sampling techniques can be roughly divided into two categories : random selection and network exploration techniques . in the first category , nodes or links are included in the sample uniformly at random or proportionally to some particular characteristic like the degree of a node or its pagerank score . in the second category , the sample is constructed by retrieving the neighborhood of a randomly selected seed node using random walks , breadth - first search or another strategy . for the purpose of this study , we consider three techniques from each of the categories . from the random selection category , we first adopt random node selection by degree ( rnd ) .
here , the nodes are selected randomly with probability proportional to their degrees , while all their mutual links are included in the sample ( fig .[ subfig : rnd ] ) .note that rnd improves the performance of the basic random node selection , where the nodes are selected to the sample uniformly at random .rnd fits better spectral network properties and produces the sample with larger weakly connected component .moreover , it shows good performance in preserving the clustering coefficient and betweenness centrality distribution of the original networks . nevertheless , it can still construct a disconnected sample network , despite a fully connected original network .next , we adopt random link selection ( rls ) , where the sample consists of links selected uniformly at random ( fig .[ subfig : rls ] ) .rls overestimates degree and betweenness centrality exponent , underestimate the clustering coefficient and accurately matches the assortativity of the original network .the samples created with rls are sparse and the connectivity of the original network is not preserved , still rls is likely to capture the path length of the original network .last , we adopt random link selection with induction ( rli ) , which improves the performance of rls . in rli ,the sample consists of randomly selected links as before , while also all additional links between their endpoints ( fig .[ subfig : rli ] ) .rli outperforms several other methods in capturing the degree , path length and clustering coefficient distribution .it selects nodes with higher degree than rls , thus the connectivity of the sample is increased .techniques from random selection category imitate classical statistical sampling approaches , where each individual is selected from population independently from others until desired size of the sample is reached . from the network exploration category ,we first adopt breadth - first sampling ( bfs ) . here , a seed node is selected uniformly at random , while its broad neighborhood retrieved from the basic breadth - first search is included in the sample ( fig .[ subfig : bfs ] ) .the sample network is thus a connected subgraph of the original network .bfs is biased towards selecting high - degree nodes in the sample .it captures well the degree distribution of the networks , while it performs worst in inclusion of hubs in the sample quickly in the sampling process .bfs imitates the snowball sampling approach for collecting social data used especially when the data is difficult to reach .selected seed participant is asked to report his friends , which are than invited to report their friends .the procedure is repeated until the desired number of people is sampled .next , we adopt a modification of bfs denoted forest - fire sampling ( ffs ) . 
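before describing ffs and exs in more detail , the techniques introduced so far are compact enough to sketch directly . the snippet below uses networkx and is illustrative only ; in particular , the way the sample size is specified ( a number of nodes or links ) is a simplification of the setup used later in the paper .

```python
import random
from collections import deque
import networkx as nx

def rnd_sample(G, n_nodes, seed=None):
    """random node selection by degree (rnd): nodes drawn with probability
    proportional to their degree; the sample is the induced subgraph."""
    rng = random.Random(seed)
    nodes, degrees = zip(*G.degree())
    chosen = set()
    while len(chosen) < n_nodes:
        chosen.add(rng.choices(nodes, weights=degrees, k=1)[0])
    return G.subgraph(chosen).copy()

def rls_sample(G, n_links, seed=None):
    """random link selection (rls): links chosen uniformly at random,
    together with their endpoints."""
    rng = random.Random(seed)
    return nx.Graph(rng.sample(list(G.edges()), n_links))

def rli_sample(G, n_links, seed=None):
    """random link selection with induction (rli): as rls, plus all other
    links of G between the sampled endpoints (graph induction)."""
    rng = random.Random(seed)
    links = rng.sample(list(G.edges()), n_links)
    nodes = {u for e in links for u in e}
    return G.subgraph(nodes).copy()

def bfs_sample(G, n_nodes, seed=None):
    """breadth-first sampling (bfs): the broad neighbourhood of one
    randomly chosen seed node."""
    rng = random.Random(seed)
    start = rng.choice(list(G.nodes()))
    visited, queue = {start}, deque([start])
    while queue and len(visited) < n_nodes:
        for v in G.neighbors(queue.popleft()):
            if v not in visited and len(visited) < n_nodes:
                visited.add(v)
                queue.append(v)
    return G.subgraph(visited).copy()
```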
in ffs ,the broad neighborhood of a randomly selected seed node is retrieved from partial breadth - first search , where only some neighbors are included in the sample on each step ( fig .[ subfig : ffs ] ) .the number of neighbors is sampled from a geometric distribution with mean , where is set to .ffs matches well spectral properties , while it underestimates the degree distribution and fails to match the path length and clustering coefficient of the original networks .however , ffs corresponds to a model by which one author collects the papers to cite and include them in the bibliography .the author starts with one paper , explores its bibliography and selects the papers to cite .the procedure is recursively repeated in selected papers until desired collection of citations is reached .last , we adopt expansion sampling ( exs ) , where the seed node is again selected uniformly at random , while the neighbors of the sampled nodes are included in the sample with probability proportional to where is the concerned node , the current sample and the neighborhood of nodes in ( fig .[ subfig : exs ] ) .expression denotes the expansion factor of node for sample and means the number of new neighbors contributed by .the parameter is set to .note that exs ensures that the sample consists of nodes from most communities in the original network and that the nodes that are grouped together in the original network , are also grouped together in the sample .exs imitates the modification of snowball sampling approach mentioned above , where for example we want to gather the data about individuals from different countries .thus , on each step we include in the sample the individuals , which knows larger number of others from various countries .the node group structure of different networks is explored by a group extraction framework with a brief overview below .let the network be represented by an undirected graph , where is the set of nodes and the set of links .next , let be a group of nodes and a subset of nodes representing its corresponding linking pattern ( i.e. , the pattern of connections of nodes from to other nodes ) , . denote and .the linking pattern is selected to maximize the number of links between and , and minimize the number of links between and , while disregarding the links with both endpoints in . for details on the group objective functionsee .the above formalism comprises different types of groups commonly analyzed in the literature ( fig .[ fig : groups ] ) .it consider communities ( i.e. , link - density community ) , defined as a ( connected ) group of nodes with more links toward the nodes in the group than to the rest of the network .communities are characterized by .furthermore , the formalism consider possibly disconnected groups of structurally equivalent nodes denoted modules ( i.e. , link - pattern community ) , defined as a ( possibly ) disconnected group of nodes with more links towards common neighbors than to the rest of the network .modules have . communities and modules represent two extreme cases with all other groups being the mixtures of the two , and/or . the reader may also find it interesting that the core - periphery structure is a mixture with , while the hub & spokes structure is a module with .the type of group can in fact be determined by the jaccard index of and its corresponding linking pattern .the group parameter , $ ] , is defined as communities have , while modules are indicated by .mixtures correspond to groups with . 
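the group parameter defined above is simply the jaccard index of the group and its linking pattern , which makes it trivial to compute once a group has been extracted . a minimal sketch :

```python
def group_parameter(S, T):
    """group parameter tau(S, T) = |S & T| / |S | T|, the jaccard index of
    the group S and its linking pattern T: tau = 1 for a community (T = S),
    tau = 0 for a module (S and T disjoint), intermediate values for mixtures."""
    S, T = set(S), set(T)
    return len(S & T) / len(S | T)

# examples: a pure community and a pure module (made-up node labels)
print(group_parameter({1, 2, 3}, {1, 2, 3}))   # 1.0 -> community
print(group_parameter({1, 2, 3}, {4, 5, 6}))   # 0.0 -> module
```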
for the rest of the paper, we refer to groups with as community - like and groups with as module - like .groups in networks are revealed by a sequential extraction procedure proposed in .one first finds the group and its linking pattern with random - restart hill climbing that maximizes the objective function .next , the revealed group is extracted from the network by removing the links between groups and , and any node that becomes isolated .the procedure is then repeated on the remaining network until the objective function is larger than the percentile of the values obtained under the same framework in a corresponding erds - rnyi random graph .all groups reported in the paper are thus statistically significant at level .note that the above procedure allows for overlapping , hierarchical , nested and other classes of groups .section [ subsec : nets ] introduces real - world networks considered in the study .section [ subsec : orig ] reports the node group structure of the original networks extracted with the framework described in section [ sec : nodegroups ] .the groups extracted from the sampled networks are analyzed in section [ subsec : sampled ] . for a complete analysis , we also observe the node group structure of a large network with more than a million links in section [ subsec : large ] .clrr & & & + _ collab _ & high energy physics collaborations & & + _ pgp _ & pretty good privacy web - of - trust & & + _ p2p _ & gnutella peer - to - peer file sharing & & + _ citation _ & high energy physics citations & & + the empirical analysis in the following sections was performed on four real - world social and information networks .their main characteristics are shown in table [ tbl : nets ] . the _ collab _ is a social network of scientific collaborations among researchers , who submitted their papers to high energy physics theory category on the arxiv in the period from january to april .the nodes represent the authors , while undirected links denote that two authors co - authored at least one paper together . the _ pgp _ is a social network , which corresponds to the interaction network of users of the pretty good privacy algorithm collected in july .the nodes represent users , while undirected links indicate relationships between those , who sign each other s public key .the _ p2p _ is an information network , which contains a sequence of snapshots of the gnutella peer - to - peer file sharing network collected in august .the nodes represent hosts in the gnutella network , which are linked by undirected links if there exist connections between them . the _ citation _ is an information network , again gathered from the high energy physics theory category from the arxiv in the period from january to april and includes the citations among all papers in the dataset .the network consists of nodes , which represent papers , while links denote that one paper cite another .we first analyze the properties of groups extracted from the original networks summarized in table [ tbl : orig ] .the number of groups differs among networks , still the mean group size ( denoted ) is comparable across network types .groups in social networks consist of around nodes , while in information networks exceeds nodes .the mean linking pattern size ( denoted ) of social networks is comparable to .the latter relation is expected due to the pronounced community structure commonly found in social networks . 
on the other hand, is expected for information networks , due to the abundance of module - like groups . the characteristic group structure of networks is reflected in the group parameter . for social networks , its values are around , which indicates the presence of communities , modules and mixtures of these . in contrast to social networks , the information networks have closer to and consist mostly of module - like groups . to summarize , social networks represent people and interactions between them , like a few authors writing a paper together , and therefore we can expect a larger number of community - like groups in these networks . on the other hand , in information networks homophily is less typical and thus the structure of these networks seems better described by module - like groups . the sampling techniques outlined in section [ sec : sampl ] enable setting the size of the sampled networks in advance . we consider sample sizes of of nodes from the original networks , which provides for an accurate fit of several network properties . tables [ tbl : samplsoc ] and [ tbl : samplinf ] present the properties of the node group structure of sampled social and information networks , respectively . notice that rls and ffs show different performance than the other techniques . the samples obtained with rls and ffs contain fewer groups , which consist of no more than nodes . additionally , almost all groups in these samples are modules , which is reflected in the mean group parameter ( denoted ) approaching for all networks . to verify the above findings , we compute externally studentized residuals of the sampled networks that measure the consistency of each sampling technique with the rest . the residuals are calculated for each technique as the difference between the observed value of the considered property and its mean , divided by the standard deviation . the mean value and standard deviation are computed over all sampling techniques , excluding the observed one ( for details see ) . statistically significant inconsistencies between techniques are revealed by a two - tailed student t - test at of , rejecting the null hypothesis that the values of the considered property are consistent across the sampling techniques . a statistical comparison of sampling techniques for the number of groups and the mean group parameter is shown in fig . [ fig : stau ] . we confirm that the samples obtained with rls and ffs reveal significantly fewer groups with significantly smaller than the other sampling techniques . moreover , if we compare the number of links in the sampled networks , rls and ffs create samples that contain on average of the links from the original networks . in contrast , the samples obtained with rnd , rli , bfs and exs consist of around of the links from the original networks . as mentioned before , the sizes of all samples are of the original networks , thus the sampled networks obtained with rls and ffs are much sparser than the others . in addition , the performance of rls and ffs can also be explained by their definitions . since in rls we include only randomly selected links in the sample , the variance is very high , while the sample commonly contains a large number of sparsely linked components , whose structure is best described as module - like .
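The leave-one-out residuals used throughout this comparison follow directly from the description above; a minimal sketch, where the values passed in are whatever property is being compared (for instance the number of extracted groups per technique):

```python
import statistics


def studentized_residuals(values_by_technique):
    """Externally studentized residual for each sampling technique: the observed
    value minus the mean over the remaining techniques, divided by their standard
    deviation (leave-one-out, as described above)."""
    out = {}
    for name, value in values_by_technique.items():
        rest = [v for k, v in values_by_technique.items() if k != name]
        out[name] = (value - statistics.mean(rest)) / statistics.stdev(rest)
    return out
```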
on the other hand , the samples obtained with ffs consist of one connected component with a low average degree of . thus , the sparsely connected nodes also form groups , which are more similar to modules . due to the above reasons , we exclude rls and ffs from further analysis . we focus on rnd , rli , bfs , and exs , whose performance is clearly more comparable . the selected sampling techniques perform similarly across all networks , as shown in table [ tbl : samplsoc ] for social and table [ tbl : samplinf ] for information networks . the samples consist of a varying number of groups , still in most cases fewer than in the original networks . the mean sizes and are around , in contrast to groups with nodes on average in the original networks . still , irrespective of network type and the sampling technique , which implies stronger characterization by community - like groups , as already argued in the case of social networks in section [ subsec : orig ] . indeed , the majority of groups found in sampled social networks are community - like , which is reflected in the parameter . in sampled information networks the number of mixtures decreases and communities appear , thus is larger than in the original networks . fig . [ fig : tau - hists ] shows a clear difference in the distribution of between the original and sampled networks . furthermore , to confirm that differences exist between the structure of the original and sampled networks , we compute externally studentized residuals , where we include the value of the considered property of the original network in computing the mean over different sampling techniques . we compare the number of groups and the parameter for the original networks and their samples ( fig . [ fig : stauo ] ) . the results show that the original networks contain a significantly larger number of groups with significantly smaller than the sampled networks . yet , a larger parameter and consequently more community - like groups in sampled social networks and fewer module - like groups in sampled information networks indicate clear changes in the network structure introduced by sampling . we conclude that these changes occur regardless of the network type or the adopted sampling technique . notice that the largest and thus the strongest characterization by community - like groups is revealed in the sampled networks obtained with both random selection techniques , rnd and rli . in rnd , nodes with higher degrees are more likely to be selected into the sample by definition , while rli is biased in a similar way . thus , densely connected groups of nodes have a higher chance of being included in the sampled network , while sparse parts of the networks remain unsampled .
on the other hand , bfs and exs sample the broad neighborhood of a randomly selected seed node andthus the sampled network represents a connected component . in the case of bfs , all nodes and links of some particular part of the original network are sampled .the latter is believed to be representative of the entire network , yet bfs is biased towards sampling nodes with higher degree and overestimates the clustering coefficient , especially in information networks . on the other hand , exs ensures the smallest partition distance among several other sampling techniques , which means that nodes grouped together in communities of sampled network are also in the same community in the original network .therefore , the stronger characterization by community - like groups in sampled networks can also be explained by the definition and behavior of the sampling techniques .+ due to the relatively high time complexity of the node group extraction framework , we consider only networks with a few thousand nodes .however , our previous study proved that the size of the original network does not affect the accuracy of the sampling .still , for a complete analysis , we also inspect the changes in node group structure introduced by sampling of a large _ notredame _ network with more than a million links . due to the simplicity and execution time , we present the analysis for two sampling techniques , rnd from random selection and bfs from network exploration category. we also limit the number of groups extracted from the networks to ( i.e. , we consider top most significant groups with respect to the objective function ) .the _ notredame _ data are collected from the web pages of the university of notre dame _ nd.edu _ domain in .the network contains , nodes representing individual web pages , while ,, links denote hyperlinks among them .table [ tbl : wnd ] shows the properties of groups , found in the original and sampled networks .the samples consist of smaller groups , still the mean size remains larger than the mean size .the majority of groups extracted from the original network are module - like , which reflects in the parameter slightly larger than . on the other hand ,the changes introduced by sampling are clear , since the samples contain less modules , which is revealed by a larger parameter .these findings are consistent with the results on smaller networks from previous sections .the _ notredame _ as an information network expectedly consists of densely linked groups similar to modules , while the structure of sampled networks exhibits stronger characterization by community - like groups .that is again irrespective of the adopted sampling technique .crrrrrrrr & & & & + & & & & & & + / & & & & & & & & + rnd & & & & & & & & + bfs & & & & & & & & +in this paper , we study the presence of characteristic groups of nodes like communities and modules in different social and information networks .we observe the groups of the original networks and analyze the changes in the group structure introduced by the network sampling .the results first reveal noticeable differences in the group structure of original social and information networks .nodes in social networks form smaller community - like groups , while information networks are better characterized by larger modules . after applying network sampling techniques , sampled networks expectedly contain fewer and smaller groups. 
however , the sampled networks exhibit stronger characterization by community - like groups than the original networks .we have shown that the changes in the node group structure introduced by sampling occur regardless of the network type and consistently across different sampling techniques .since networks commonly considered in the literature are inevitably just a sampled representative of its real - world analogue , some results , such as rich community structure found in these networks , may be influenced by or are merely an artifact of sampling .our future work will mainly focus on larger real - world networks , including other types of networks like biological and technological .moreover , we will further analyze the changes in the node group structure introduced by sampling and explore techniques that could overcome observed deficiencies .this work has been supported in part by the slovenian research agency _ arrs _ within the research program no .p2 - 0359 , by the slovenian ministry of education , science and sport grant no . 430 - 168/2013/91 , and by the european union , european social fund .j. leskovec , j. kleinberg , c. faloutsos , graphs over time : densification laws , shrinking diameters and possible explanations , in : proceedings of the 11th acm sigkdd international conference on knowledge discovery and data mining , acm , 2005 , pp . 177187 .h. park , s. moon , sampling bias in user attribute estimation of osns , in : proceedings of the 22nd international conference on world wide web companion , international world wide web conferences steering committee , 2013 , pp .183184 .a. lakhina , j. w. byers , m. crovella , p. xie , sampling biases in ip topology measurements , in : proceedings of the 22nd annual joint conference of the ieee computer and communications , vol . 1 , ieee , 2003 , pp . 332341 .a. s. maiya , t. y. berger - wolf , benefits of bias : towards better characterization of network sampling , in : proceedings of the 17th acm sigkdd international conference on knowledge discovery and data mining , acm , 2011 , pp. 105113 .l. ubelj , n. blagus , m. bajec , group extraction for real - world networks : the case of communities , modules , and hubs and spokes , in : proceedings of the international conference on network science , 2013 , pp .
any network studied in the literature is inevitably just a sampled representative of its real - world analogue . additionally , network sampling has lately often been applied to large networks to allow for their faster and more efficient analysis . nevertheless , the changes in network structure introduced by sampling are still far from understood . in this paper , we study the presence of characteristic groups of nodes in sampled social and information networks . we consider different network sampling techniques including random node and link selection , network exploration and expansion . we first observe that the structure of social networks reveals densely linked groups like communities , while the structure of information networks is better described by modules of structurally equivalent nodes . however , despite these notable differences , the structure of sampled networks exhibits stronger characterization by community - like groups than the original networks , irrespective of their type and consistently across various sampling techniques . hence , the rich community structure commonly observed in social and information networks is to some extent merely an artifact of sampling .
_ keywords : _ complex networks , network sampling , node group structure , communities , modules
_ pacs : _ 64.60.aq , 89.75.fb , 89.90.+n
an undirected graph is formed by a set of vertices and a set of undirected edges , with each edge connecting between two different vertices . a feedback vertex set ( fvs ) for such a graph is a set of vertices intersecting with every cycle of the graph . in other words , the subgraph induced by the vertices outside the fvs contains no cycle ( it is a forest ) .the feedback vertex set problem aims at constructing a fvs of small cardinality for a given undirected graph .it is a fundamental nondeterministic polynomial - complete ( np - complete ) combinatorial optimization problem with global cycle constraints . in terms of complete algorithms ,whether a graph has a fvs of cardinality smaller than can be determined in time . and an fvs of cardinality at most two times the optimal value can be easily constructed by an efficient polynomial algorithm .an optimal fvs is a feedback vertex set whose cardinality is the global minimum value among all the feedback vertex sets of the graph . for a given graph, an optimal fvs can be constructed in an exact way in time , where denotes the total number of vertices in the graph . applied mathematicians have obtained rigorous lower and upper bounds for the optimal fvs problem and have proved its tractability for graphs with specific structures( see for example , and references cited therein ) . due to the np - complete nature of the fvs problem , in general it is not feasible to construct optimal feedback vertex sets for large cycle - rich graphs .an important question is then to design efficent heurstic algorithms that are able to obtain near - optimal fvs solutions for given graph instances .such a task is quite nontrivial .a major technical difficulty is that cycles are global objects of a graph and therefore the existence of cycles can not be judged by checking only the neighborhood of a vertext .( similar difficulties exist in other combinatorial optimization problems with global constraints , such as the steiner tree problem and the optimal routing problem . ) in ref . , one of the authors succeeded in converting the fvs problem to a spin glass problem with local interactions .the fvs problem was then studied from the spin glass perspective , and a message - passing algorithm , belief propagation - guided decimaton ( bpd ) , was impletmented to solve the fvs problem heuristically for single graph instances .this bpd algorithm is quite efficient in terms of computing time and computer memory ( since there is no need of cycle checking ) , and it can obtain fvs solutions that are very close to the optimal ones when applied on large random graph instances and regular lattices . for the undirected fvs problem it is not yet known whether simple local search algorithms can achieve equally excellent results as the bpd algorithm .motivated by this question , we complement the message - passing approach in this paper by implementing and testing a simulated annealing local searching ( sals ) protocol for the undirected fvs problem . a similar algorithmic study has already been undertaken in for directed graphs . herewe modify the microscopic search rules of to make it applicable to undirected graphs . 
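The defining property of an FVS is easy to verify directly, which is useful when testing heuristics: remove the candidate vertex set and check that the remaining induced subgraph is a forest. A small union-find sketch, assuming the vertices are labelled 0..n-1:

```python
def is_feedback_vertex_set(n, edges, fvs):
    """Check the definition directly: after removing the FVS vertices (and every
    edge touching them), the remaining induced subgraph must contain no cycle."""
    fvs = set(fvs)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in fvs or v in fvs:
            continue                        # edge removed together with an FVS endpoint
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # joining already-connected vertices closes a cycle
        parent[ru] = rv
    return True
```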
in the sals algorithm ,an order is defined for the vertices outside the focal fvs , and this order is constrained by a set of local vertex constraints .our simulation results suggest that this local search algorithm is comparable in performance to the bpd algorithm at least for random graph instances .the feedback vertex set problem has wide practical applications in the field of computer science ( such as integrated circuit design and database management ) .although not yet seriously explored , the fvs problem may have many potential applications in complex systems research as well .for example , if a vertex is contained in a large fraction of the near - optimal feedback vertex sets , we may expect this vertex to play a very significant role for the dynamical processes on the graph .therefore the probability of belonging to a near - optimal fvs can serve as a centrality index of dynamical significance for each vertex of a graph .such a probablity can be computed by sampling many independent near - optimal fvs solutions or be computed directly using the belief propagation iterative equations .the construction of a near - optimal fvs also facilitates the study of a complex dynamical system as a simpler response problem of a cycle - free subsystem ( which is intrinsically simple ) under the influence of the vertices in the fvs vertices .if the subgraph induced by the vertices in the fvs itself contains many cycles , such a decomposition can be applied on this subgraph again . through this iterated process , an initial cycle - rich complex graphis then organized into a hierarchy of forest ( cycle - free ) subgraphs and the edges between these forests . a simple illustration of this hierarchical organization is shown in fig . [ fig : fvshierarchy ] .we believe such a hierarchical representation of a complex graph will be very helpful in future dynamical applications .( color online ) .the four filled points form an optimal feedback vertex set for this graph .the subgraph induced by these four vertices and the cycle - free subgraph induced by all the remaining vertices ( shown as open points ) are connected through many edges ( shown in light blue ) .since the subgraph still contains cycles within itself , we decompose it into a tree subgraph of three vertices ( filled magenta points ) and a subgraph formed by a fvs of one vertex ( the central red point ) . by this way ,the vertices in the orginal graph are arranged into three different layers .the vertices of each layer form a cycle - free subgraph ( a tree or a forest ) , while different layers are connected by edges .an important property of such an organization is that each cycle must involves vertices from at least two layers ., scaledwidth=35.0% ] the next section describes the sals algorithm in detail , and in sec .3 we test the performance of this local search algorithm on random graph instances and compare the results with the results obtained by the bpd algorithm and those obtained by the replica - symmetric mean field theory .we conclude this work in sec .the two appendices are the proofs of the theorems of sec . 2 .for a graph of vertices , let us consider an ordered list formed by vertices of this graph following , we assign to the first vertex of this list an integer rank , to the second vertex an integer rank , ... , and to the last vertex an integer rank .therefore each vertex has an integer rank which marks the position of this vertex in the list . 
for the purpose of the fvs problem , we introduce for each vertex a ranking condition as where denotes an edge of the graph between vertex and vertex , and if and if . the ranking condition ( [ eq : rc ] )is satisfied by vertex if and only if among all the nearest neighboring vertices of vertex that are also contained in the list , at most one of them has a lower rank than that of .a list is referred to as a legal list if all its vertices satisfy the ranking condition ( [ eq : rc ] ) .the link between legal lists and feedback vertex sets is setup by the following two theorems : [ th:1 ] if is a legal list , then the subgraph of the graph induced by all the vertices of this list is cycle - free .therefore the set formed by all the remaining vertices of not included in is a fvs .[ th:2 ] if is a fvs for a graph , then it is possible to form a legal list using all the vertices not contained in .these two theoretms are easy to prove , see the appendices for technical details .they suggest that there is a one - to - many correspondance between a fvs and legal lists . therefore the problem of constructing an optimal fvs is converted to a problem of constructing a legal list of maximal cardinality .notice that judging whether a list is legal or not is algorithmically very easy as eq .( [ eq : rc ] ) involves only the neighborhood a focal vertex but not the connection pattern of the whole graph .a similar conversion from global constraints to local constraints has also been used in the steiner tree problem , which aims at constructing a tree of minimal total edge length connecting a set of specified vertices . just for simplicity of later discussions ,let us define the energy of a legal list as the total number of vertices not contained in it , namely in otherwords , is just the cardinality of the complementary fvs of the list .following ref . we implement a simulated annealing local search algorithm as follows : 1 .input the graph .initialize the legal set as containing only a single randomly chosen vertex of , and the complementary feedback vertex set then contains all the other vertices of .set the temperature to an initial value .2 . choose a vertex ( say ) uniformly at random from the feedback vertex set .+ \(a ) if the list contains no nearest neighbor of the vertex , delete from and insert it to the head of .then vertex has rank in the updated list and the energy of decreases by .+ \(b ) if contains exactly one nearest neighbor ( say with rank ) of the vertex , delete from and insert it to at the position just after vertex .then vertex has rank in the updated list and the energy of decreases by .+ \(c ) if contains two or more nearest neighbors ( say with vertex having the lowest rank among these vertices ) of the vertex , we make a proposal of moving from to the ordered list at the position just after vertex and deleting all those nearest neighbors of from if the insertion of causes the violation of the ranking condition ( [ eq : rc ] ) for these vertices .suppose vertices have to be deleted from ( and be added to ) as a result of inserting vertex to , then the energy increase of is . if we accept the proposed action with probability , otherwise we accept it with probability .if this proposal is accepted , then vertex has rank in the updated list .repeat step 1 until the list has been successfully updated for times . during this process ,record the best fvs so far reached and the corresponding lowest energy , .4 . 
decrease the temperature to , where is a fixed constant , and then repeat steps 1 - 2 .if the energy value does not change in contiguous temperature stages , then we stop the local search process and output the reached best fvs . in our computer experiments we use , and which are identical to the values used in ref .the temperature ratio is set to several different values , and . as shown in fig . 2 , the slower the rate of temperature decrease , the lower is the energy of the constructed best fvs solutions .the bpd algorithm was applied to erds - renyi ( er ) random graphs and regular ( rr ) random graphs in to test its performance .as we want to compare the performance of the present sals algorithm with the bpd algorithm , we apply the sals algorithm on the same er and rr random graphs used in .an er random graph of vertices and edges was generated in by first selecting different vertex pairs uniformly at random from the whole set of candidate vertex pairs and then connecting the two vertices of each selected vertex pair by an edge .a vertex in such a graph has on average nearest neighbors ( i.e. , the mean vertex degree of the graph is ) .when the vertex degree distribution of the graph converges to a poisson distribution of mean value .a rr graph differs from an er graph with the additional constraint that each vertex has exactly the same number of nearest neighbors ( here must be an integer ) .it was generated by first attaching to each vertex a number of half - edges and then randomly connecting two half - edges into a full edge , but prohibiting self - loops or multiple edges between the same pair of vertices ( see ref . for more details of the graph generation process ) .the vertex number of all these random graph instances is equal to the same value .there are independently generated er or rr random graph instances at each fixed ( mean ) degree value .we run the sals process once on each of these instances and then compare the mean value of the obtained final values with the mean value of the final energies obtained in by running the bpd process once on each of these graph instances . the evolution of the minimal energy as a function of the temperature is shown in fig . [ fig : evolution ] for er ( a ) and rr ( b ) random graphs of .when the evolution curves for the three different cooling parameters , and coincide with each other .this indicates that at the typical relaxation time of energy is shorter than the time scale of temperature decreasing . at curve of starts to separate from the other two curves , and the final value of is considerably higher than those of the other two evolutionary trajectories .this indicates that at cooling parameter the simulated annealing process is eventually trapped in a local region of the configuration space whose energy minimal value is extensively higher than that of the optimal solutions .the two evolutionary curves of and again separate from each other at a lower temperature of , and the final value of reached at cooling parameter is slightly lower than the final value of reached at . 
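The ranking condition and the three insertion moves can be assembled into a compact annealing loop as follows. This is only an illustrative Python sketch, not the authors' implementation: the numerical values of the initial temperature, the cooling ratio, the number of updates per temperature stage and the stopping rule are not reproduced above, so placeholders are used, and the list operations are kept naive for readability.

```python
import math
import random


def sals_fvs(n, adj, t_init=1.0, cooling=0.99, updates_per_stage=None, rng=random.Random(1)):
    """Simulated-annealing local search over legal lists.  Vertices are 0..n-1 and
    adj[v] is the set of neighbors of v.  Returns the best feedback vertex set seen."""
    if updates_per_stage is None:
        updates_per_stage = 10 * n
    start = rng.randrange(n)
    order = [start]                          # the legal list, lowest rank first
    in_list = {start}
    best_fvs = set(range(n)) - in_list
    temp = t_init
    while temp > 1e-3:                       # crude stand-in for the paper's stopping rule
        for _ in range(updates_per_stage):
            outside = [v for v in range(n) if v not in in_list]
            if not outside:
                break
            v = rng.choice(outside)
            nbrs_in = [u for u in adj[v] if u in in_list]
            if len(nbrs_in) <= 1:
                # cases (a) and (b): the move always lowers the energy by one
                pos = 0 if not nbrs_in else order.index(nbrs_in[0]) + 1
                order.insert(pos, v)
                in_list.add(v)
                continue
            # case (c): insert v just after its lowest-ranked in-list neighbor w
            rank = {u: i for i, u in enumerate(order)}
            pos = min(rank[u] for u in nbrs_in) + 1
            kicked = []
            for u in nbrs_in:
                if rank[u] < pos:
                    continue                 # this is w itself; it stays in the list
                lower = sum(1 for x in adj[u] if x in in_list and rank[x] < rank[u])
                if lower >= 1:               # v would become a second lower-ranked neighbor of u
                    kicked.append(u)
            delta = len(kicked) - 1          # change of the energy (size of the FVS)
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                for u in kicked:
                    order.remove(u)
                    in_list.discard(u)
                order.insert(pos, v)
                in_list.add(v)
        fvs = set(range(n)) - in_list
        if len(fvs) < len(best_fvs):
            best_fvs = set(fvs)
        temp *= cooling
    return best_fvs
```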
if the value of is set to be more closer to the final value of will decrease slightly further ( at the expense of much longer simulation times ) .the observation that lower final energy values can be reached by lowering the cooling rate strongly indicates the system has a very complicated low - energy landscape with many local minima .this is consistent with the prediction that the undirected fvs problem is in a spin glass phase at low enough energy values .figure [ fig : fvs ] compares the performance of the sals algorithm ( the cooling parameter fixed to ) with that of the bpd algorithm on er ( a ) and rr ( b ) random graphs . we see that bpd slightly outperforms sals for both ensembles of random graphsthis is not surprising , since the bpd algorithm takes into account the global structure of the graph through message - passing , while the sals algorithm considers only the local graph structure .it is interesting to see that the results of the two algorithms are actually very close to each other , especially for er random graphs . as also shown in fig .[ fig : fvs ] , at each value of ( mean ) degree , the result obtained by the sals algorithm is very close to the predicted value of the global minimal energy by the replica - symmetric mean field theory .therefore message - passing algorithms are not the only candidate choices to construct near - optimal feedback vertex sets for undirected random graphs .an advantage of the sals algorithm is that its implementation is very straightforward .it may be the method of choice for many practical applications . , a vertex s probability of being in a feedback vertex set increases with its vertex degree . each star point is the mean result obtained by averaging over the final fvs solutions obtained by the sals algorithm on the random graph instances of vertices .each plus point denotes the result obtained by the rs mean field theory at . , scaledwidth=35.0% ] we also notice from fig .[ fig : fvs ] that the sals algorithm performs poorer in regular random graphs than in er random graphs .this is probably due to the following fact : each vertex in a regular random graph has the same degree so there is local guide as to whether a vertex should be included into the feedback vertex set or not .on the other hand the degree heterogeneity of an er graph will give some local guide to the sals process to arrive at a near - optimal fvs .based on the final fvs solutions obtained by the sals algorithm on the er random graphs with mean vertex degree , we compute the mean probability that a vertex of degree is contained in the fvs . the results shown in fig .[ fig : vertexfvsprob ] demonstrate that is close to for and close to for , and it is an rapidly increasing function of in the range of .the empirically obtained mean value of is found to be very close to the value of computed using the replica - symmetric mean field theory .notice that for the empirical values of are slightly larger than those predicted by the rs mean field .such small differences cause the energies of the construsted fvs solutions to be higher than the corresponding optimal values .the graphs for real - world complex systems usually are highly heterogenous in terms of vertex degree distributions .therefore it is very likely that the sals algorithm to have very good performance for such graphs .in this paper , we implemented and tested a local search algorithm for the undirected feedback vertex set problem . 
similar to the local search algorithm for the directed fvs problem , our algorithm uses the technique of simulated annealing to explore the highly complex landscape of low - energy congiurations .our simulation results demonstrated that this algorithm is very efficient for large random graph instances .the relative sizes of the constructed fvs solutions by the local search algorithm are very close to the predicted values of the replica - symmetric mean field theory , and they are also very similar to the results obtained by the bpd message - passing algorithm . it should be emphasized that the microscopic dynamical rules of our local search algorithm do not obey the detailed balance condition . these dynamical rules therefore need to be appropriately modified if one is interested in the equilibrial fvs solutions at a given fixed temperature .we are currently using a modified set of microscopic dynamical rules to study the spin glass phase transition of the model proposed in .such an equilibrium study will offer more physical insights on the cooling - rate dependent behaviors of fig .[ fig : evolution ] .the numerical simulations were performed at the hpc computer cluster of the authors institute .this work was partially supported by the national basic research program of china ( no .2013cb932804 ) , the knowledge innovation program of chinese academy of sciences ( no . kjcx2-ew - j02 ) , and the national science foundation of china ( grant nos . 11121403 , 11225526 ) .hjz conceived research ; smq performed research ; hjz wrote the paper .consider a generic legal list formed by some vertices of the graph , see eq .( [ eq : glist ] ) .let us consider the subgraph induced by all the vertices of list and all the edges between these vertices . since is a legal list , every vertex must satisfy the rank condition ( [ eq : rc ] ) .consequently , every vertex in the subgraph has at most one nearest neighbor with rank lower than that of itself .this subgraph must be cycle - free .we prove this statement by contradiction .assume there is a cycle in the subgraph involving vertices : if the rank of vertex is lower than that of , then due to the fact of vertex having at most one nearest neighbor with lower rank than itself , the rank of must be higher than that of .continuing this analysis along the cycle , we obtain that the rank of vertex must be higher than that of but lower than that of .but this is impossible since the rank of must be higher than that of .similarly , if the rank of vertex is higher than that of , we will arrive at the contradicting results that the rank of is higher than that of while the rank of is higher than that of .because of these contradictions , the assumption of containing a cycle must be false .therefore must be a tree or a forest .then the set formed by the vertices not contained in must be a feedback vertex set .suppose the set is a fvs of a graph . then the subgraph induced by all the vertices not included in must be cycle - free .if has only one connected component ( i.e. , being a tree ) , we can pick a vertex ( say ) of uniformly at random and specify this vertex as the root of the tree subgraph .we can then construct an ordered list using all the vertices of in such a way : first the root vertex , then all the vertices of unit path length to ( in random order ) , followed by all the vertices of path length two to ( again in random order ) , ... 
, followed by the remaining vertices of the longest path length to ( in random order ) . obviously is a legal list with the ranking condition ( [ eq : rc ] ) satisfied for all its vertices . if is a forest with two or more tree components , we can perform the above - mentioned process for each of its tree components and then concatenate the constructed ordered lists in a random order to form a whole ordered list . this list must also be a legal list .
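The constructive argument above translates into a short routine: order the vertices outside the FVS by breadth-first distance from an arbitrary root of each tree component, so that every vertex has at most its parent as a lower-ranked neighbor. This is one valid instance of the "random order within each layer" used in the proof.

```python
from collections import deque


def legal_list_from_fvs(adj, fvs):
    """Build a legal list from a feedback vertex set: BFS order inside every tree
    component of the subgraph induced by the vertices outside the FVS."""
    outside = set(adj) - set(fvs)
    seen, order = set(), []
    for root in outside:
        if root in seen:
            continue
        seen.add(root)
        queue = deque([root])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if w in outside and w not in seen:
                    seen.add(w)
                    queue.append(w)
    return order
```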
an undirected graph consists of a set of vertices and a set of undirected edges between vertices . such a graph may contain an abundant number of cycles ; a feedback vertex set ( fvs ) is then a set of vertices intersecting with each of these cycles . constructing an fvs of cardinality approaching the global minimum value is an optimization problem in the nondeterministic polynomial - complete complexity class , and it may therefore be extremely difficult for some large graph instances . in this paper we develop a simulated annealing local search algorithm for the undirected fvs problem . by defining an order for the vertices outside the fvs , we replace the global cycle constraints by a set of local vertex constraints on this order . under these local constraints , the cardinality of the focal fvs is then gradually reduced by the simulated annealing dynamical process . we test this heuristic algorithm on large instances of erdos - renyi random graphs and regular random graphs , and find that it is comparable in performance to the belief propagation - guided decimation algorithm .
cake cutting is one the most fundamental topics in fair division ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?it concerns the setting in which a cake is represented by an interval ] .a _ piece of cake _ is a finite union of disjoint subintervals of ] is . as usual ,the set of agents is .each agent has a piecewise continuous _ value density function _ \rightarrow [ 0,\infty] ] ) and additive : where and are disjoint .the basic cake cutting setting can be represented by the set of agents and their valuations functions , which we will denote as _ a profile of valuations_. in this paper we will assume that each agent s valuation function is private information for the agent that is not known to the algorithm designer .each agent reports his valuation function to the designer and the designer then decides how to make the allocations based on the reported valuations . later on we will also consider two important extensions of cake cutting : claims and private endowments .we will assume that agents have the following _ claims _ on the cake respectively : .in the original cake cutting problem agents have equal claims .each agent has a _ private endowment _ which is a segment of the cake privately owned by .the cake is assembled by joining the pieces .therefore the cake cutting setting in its full generality can be represented as a quadruple .an allocation is a partitioning of the cake into pieces of cake such that the pieces are disjoint ( aside from the interval boundaries ) and is allocated to agent .a cake cutting algorithm takes as input and returns an allocation . in this paperwe will only consider _ piecewise uniform _ and _ piecewise constant _ valuations functions .a function is _ piecewise uniform _ if the cake can be partitioned into a finite number of intervals such that for some constant , either or over each interval .a function is _ piecewise constant _ if the cake can be partitioned into a finite number of intervals such that is constant over each interval . in order to report his valuation function to the algorithm designer ,each agent will specify a set of points that represents the consecutive points of discontinuity of the agent s valuation function as well as the constant value of the valuation function between every pair of consecutive s . for a function , we will refer by \} ] into a set , where is the number of agents and is a piece of cake that is allocated to agent .and is the piece of the cake that is not allocated .all of the fairness and efficiency notations that we will discuss next are with respect to the reported valuation functions . in an _ envy - free allocation _, we have for each pair of agent , . an allocation is _ individually rational _ if . in a _proportional _ allocation , each agent gets at least of the value he has for the entire cake .an allocation satisfies _ symmetry or equal treatment of equals _ if any two agents with identical valuations get same utility .clearly , envy - free implies proportionality and also symmetry .an allocation is _ pareto optimal _ if no agent can get a higher value without some other agent getting less value .formally , is pareto optimal if there does not exists another allocation such that for all and for some . for any ] , . 
in other words , an allocation is non - wasteful if every portion of the cake desired by at least one agent is allocated to some agent who desires it .we now define robust analogues of the fairness concepts defined above .an allocation satisfies _ robust proportionality _ if for all and for all , .an allocation satisfies _ robust envy - freeness _ if for all and for all , .notice that both robust envy - freeness and robust proportionality would require each agents to get a piece of cake of the same length if every agent desires the entire cake .we give an example of piecewise constant value density function and demonstrate how the standard concept of envy - freeness is not robust under uncertainty .[ example : piecewise ] consider the cake cutting problem in figure [ figure : piecewise ] . an allocation in which both agents get regions in which their value density function is the highest is envy - free .agent 1 gets utility one for his allocation and has the same utility for the allocation of agent 2 .however if its probability density function is slightly lower in region 0.1 0.3 0.5 1 10 3 2 0.1 0.3 0.5 1 10 3 2 ] interval representation of the cake .notice that this procedure preserves the aforementioned properties of fairness , efficiency and truthfulness .the free disposal assumption that we are making is necessary to ensure strategyproofness for piecewise uniform valuation functions .see for a discussion on the necessity of this assumption .+ before we present our algorithms , we will first take a detour to the literature on random assignments , as some of the algorithms in the random assignment literature are closely related to our algorithms .an assignment problem is a triple such that is a set of agents , is a set of houses and is the preference profile in which denotes the preferences of agent over houses .a _ deterministic assignment _ is a one - to - one mapping from to .a _ random allocation _ is a probability distribution over .a random assignment gives a random allocation to each agent .it can be represented by a bistochastic matrix in which the row is denoted by and all , and , , .holds when there are the same number of agents as there are objects , which can be assumed without lost of generality by adding dummy agents or objects ] the term denotes the probability with which agent gets house .an assignment problem has commonalities with cake cutting with piecewise constant valuations .they also have some fundamental differences .for example , in cake cutting , the agents do not have continuous constant valuations over pre - determined segments of the cake .given two random assignments and , i.e. , a player weakly sd prefers to if for all , another way to see the sd relation is as follows .a player weakly sd prefers allocation to if for all vnm utilities consistent with his ordinal preferences , gets at least as much expected utility under as he does under .furthermore , i.e. , _ stochastically dominates _ if for all and .an algorithm satisfies _ sd - efficiency _ if each returned assignment is pareto optimal with respect to the sd - relation ( see * ? ? 
?an algorithm satisfies _ sd envy - freeness _ if each agent ( weakly ) sd prefers his allocation to that of any other agent .sd envy - freeness is a very demanding notion of fairness .the reader may be able to notice that our notion of robust envy - freeness in cake cutting is based on a similar idea as sd envy - freeness .we will consider random allocations as fractional allocations and random assignments as fractional assignments .viewing the probability of getting a house simply as getting a fraction of the house is especially useful when some houses are not complete but only partial . in this vein , the definition of sd dominanceshould also be considered from the perspective of fractional allocations rather than probability distributions .the most basic assignment problem concerns agents having strict preferences over objects .for this basic setting a simple yet ingenious _ ps ( probabilistic serial ) _ algorithm introduced by and which uses the _ simultaneous eating algorithm ( sea)_. each house is considered to have a divisible probability weight of one , and agents simultaneously and with the same eating rate consume the probability weight of their most preferred house until no house is left . the random allocation allocated to an agent by psis then given by the amount of each object he has eaten until the algorithm terminates .the main result of was that the fractional assignment returned by the ps algorithm is sd envy - free and sd - efficient .the ps algorithm has been extended in various ways .the eps ( extended ps algorithm ) of generalized ps to the case for indifferences using parametric network flows .eps also generalized the _ egalitarian rule _ of for dichotomous preferences . and extended the work of to propose ps generalization which also takes care of private endowments where indicates the endowment of agent . for the case of endowments, introduced the idea of justified envy - freeness .an assignment satisfies justified envy - freeness if for all , or the algorithms in satisfy justified envy - freeness in the presence of private endowments . in our algorithm ccea , we rely on the full power of the controlled - consuming ( cc ) algorithm of which combines almost all the desirable features of other extensions of ps . in particular , we use the following fact ._ there exists an extension of the ps algorithm which can simultaneously handle indifferences in preferences , unacceptable objects in preferences , allocation of multiple objects to agents , agent owning fractions of houses , partial houses being available , and still returns an assignments which satisfies sd justified envy - freeness and sd - efficiency ._ in addition , if there are no private endowments , then the extension can also handle variable eating rates .the controlled - consuming ( cc ) algorithm of can handle the case where each agent owns fractions of the complete houses .we also require that for some houses , only an arbitrary fraction of the house is available to the market .this can be handled by a modification to cc ( page 30 , * ? ? ?finally we require the agents to want to be allocated as many houses as possible .this does not require any modification to cc .in the absence of endowments but presence of variable eating rates , cc is equivalent to the eps algorithm that can also cater for variable eating rates ( section 6.4 , * ? ? ? * ) .ccea is based on cc ( controlled consuming ) algorithm of . 
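For reference, the basic simultaneous eating rule underlying PS (and generalized by EPS and CC) can be sketched for strict preferences as follows; indifferences, endowments and variable eating rates, which CC handles and CCEA relies on, are deliberately left out of this sketch.

```python
def probabilistic_serial(prefs):
    """Simultaneous eating with equal speeds.  prefs[i] lists agent i's acceptable
    houses from most to least preferred.  Returns dicts p[i][h] with the fraction
    of house h eaten by agent i."""
    houses = sorted({h for pref in prefs for h in pref})
    remaining = {h: 1.0 for h in houses}
    shares = [dict.fromkeys(houses, 0.0) for _ in prefs]
    while True:
        # every agent points at their best house that is not yet exhausted
        eaters = {}
        for i, pref in enumerate(prefs):
            best = next((h for h in pref if remaining[h] > 1e-12), None)
            if best is not None:
                eaters.setdefault(best, []).append(i)
        if not eaters:
            break
        # advance time until the first of the currently eaten houses runs out
        dt = min(remaining[h] / len(ag) for h, ag in eaters.items())
        for h, ag in eaters.items():
            for i in ag:
                shares[i][h] += dt
            remaining[h] -= dt * len(ag)
    return shares


# usage: two agents, three houses, strict preferences
print(probabilistic_serial([["a", "b", "c"], ["a", "c", "b"]]))
```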
since the original ps algorithm utilized the simultaneous eating algorithm , hence the name _controlled cake eating algorithm ._ ccea first divides the cake up into disjoint intervals each of whose endpoints are consecutive points of discontinuity of the agents valuation functions .we will refer to these intervals as _ intervals induced by the discontinuity points_. the idea is to form a one - to - one correspondence of the set of cake intervals with a set of houses of an assignment problem .since intervals may have different lengths , we consider the house corresponding to the interval with the maximum length as a complete house where as intervals corresponding to other houses are partial houses .the preferences of agents over the houses are naturally induced by the relative height of the piecewise constant function lines in the respective intervals .if an agent owns a sub - interval , then in the housing setting , is set to and not to one .the reason is that an agent can only own as much of the house as exists .the technical heart of the algorithm is in cc ( controlled consuming ) algorithm of .we recommend the reader to section 3.2 of in which an illustrative example on cc is presented . once cc has been used to compute a fractional assignment , it is straightforward to compute a corresponding cake allocation .if an agent gets a fraction of house , then in the cake allocation agent gets the same fraction of subinterval .piecewise constant value functions . a robust envy - free allocation .divide the regions according to agent value functions .let be the set of subintervals of ] , then one possibility of can be ] formed by consecutive points of discontinuity are identified : , j_2=[0.1,0.3 ] , j_3=[0.3,0.5] ] . is discarded because it is desired by no agent . in set , each house corresponds to subinterval .the preferences of the agents over are inferred from their valuation function height in the subintervals so that and we also set the number of units of each house that is available . since is the biggest interval , we consider as complete house .so , , , and .if we run cc over the housing market instance with the specified set of agents , houses , fraction of houses available to the market , and agent preferences , then the assignment returned by cc is as follows : , , , , , and .the house assignment can be used to divide the subintervals among the agents : , [ 0.7,1]\} ] .ccea satisfies the strong fairness property of robust envy - freeness .[ prop : ccea is robust envy - free ] for piecewise constant valuations , ccea satisfies robust envy - freeness and non - wastefulness . let be the number of relevant subintervals in a cake cutting problem with piecewise constant valuations . [prop : ccea time ] ccea runs in time , where is the number of agents and is the number of subintervals defined by the union of discontinuity points of the agents valuation functions .although ccea satisfies the demanding property of robust envy - freeness , it is not immune to manipulation .we show that ccea is not strategyproof even for two agents . 
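For concreteness, the preprocessing step of CCEA, turning a piecewise constant profile into a housing instance with partial houses and height-induced preferences, can be sketched as below; running the controlled-consuming step on the resulting instance is not shown here.

```python
def ccea_housing_instance(reports):
    """Build the housing instance described above.  reports[i] is agent i's
    piecewise constant density as (a, b, height) triples.  Returns the common
    intervals, the available fraction of each corresponding house (the longest
    interval is the one complete house), and each agent's heights over the houses
    (a zero height means the house is unacceptable to that agent)."""
    points = sorted({p for rep in reports for a, b, _ in rep for p in (a, b)})
    intervals = [(points[k], points[k + 1]) for k in range(len(points) - 1)]

    def height(rep, x, y):
        mid = (x + y) / 2.0
        return next((h for a, b, h in rep if a <= mid < b), 0.0)

    longest = max(y - x for x, y in intervals)
    available = [(y - x) / longest for x, y in intervals]   # partial houses
    heights = [[height(rep, x, y) for x, y in intervals] for rep in reports]
    return intervals, available, heights
```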
in the next section, we will present a different algorithm that is both robust envy - free and strategyproof for two agents .[ prop : ccea is not sp ] for piecewise constant valuations , ccea is not strategyproof even for two agents .if we restricted the preferences to piecewise uniform with no private endowment or variable claims , then ccea is not only strategyproof but group - strategyproof .we first show that in this restricted setting , ccea is in fact equivalent to the algorithm of .[ prop : equivalence ] for piecewise uniform value functions with no private endowments and variable claims , ccea is equivalent to mechanism 1 of .since the set of valuations that can be reported is bigger in cake cutting than in the assignment problem , establishing group strategyproofness does not follow automatically from group - strategyproofness of cc for dichotomous preferences ( theorem 2 , * ? ? ?using similar arguments , we give a detailed proof that ccea and hence mechanism 1 of is group strategyproof for piecewise uniform valuations . in section [ sec : extensions ] , we extend the result to the case where agents may have variable claims . [ prop : gsp1 ] for cake cutting with piecewise uniform value functions , ccea is group strategyproof . for piecewise uniform valuations , ccea is also pareto optimal .the result follows directly from lemma [ prop : equivalence ] along with the fact that mechanism 1 of is pareto optimal .[ prop : pareto optimality ] for cake cutting with piecewise uniform value functions , ccea is pareto optimal .in the previous section we presented ccea which is not pareto optimal for piecewise constant valuations . it turns out that if we relax the robust notion of fairness to envy - freeness , then we can use fundamental results in general equilibrium theory and recent algorithmic advances to formulate a convex program that always returns an envy - free and pareto optimal allocation as its optimal solution . for each valuation profile , let be the intervals whose endpoints are consecutive points in the union of break points of the agents valuation functions . let be the length of any subinterval of that is allocated to agent . then we run a convex program to compute a pareto optimal and envy - free allocation .we will call the convex program outlined in algorithm [ algo : market ] as the _ market equilibrium algorithm ( mea)_. mea is based on computing the market equilibrium via a primal - dual algorithm for a convex program that was shown to be polynomial - time solvable by . notice that if we ignore strategyproofness , or in other words , if we assume that agents report truthfully , then agents are truly indifferent between which subinterval they receive since their valuation function is a constant over any . hence , one we determine the length of to be allocated to an agent , we can allocate any subinterval of that length to the agent .cake - cutting problem with piecewise constant valuations . a proportional , envy - free , and pareto optimal allocation .let be the intervals whose endpoints are consecutive points in the union of break points of the agents valuation functions .let be the length of any subinterval of that is allocated to agent . solve the following convex program . let , be an optimal solution to the convex program .partition every interval into subintervals where the -th subinterval has length . be the allocation of each . 
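The convex program itself is elided above; the surrounding discussion (and the later remark that MEA maximizes the Nash product) suggests an Eisenberg-Gale style formulation, which can be written with a generic convex solver as follows. This is a sketch under that assumption, and it presumes every agent assigns positive value to at least one interval.

```python
import cvxpy as cp   # any convex solver with exponential-cone support would do
import numpy as np


def mea_allocation(heights, lengths):
    """x[i, j] is the length of interval J_j given to agent i; maximise the sum of
    log utilities subject to not over-allocating any interval."""
    heights = np.asarray(heights, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    n, m = heights.shape
    x = cp.Variable((n, m), nonneg=True)
    utilities = cp.sum(cp.multiply(heights, x), axis=1)
    problem = cp.Problem(
        cp.Maximize(cp.sum(cp.log(utilities))),
        [cp.sum(x, axis=0) <= lengths],
    )
    problem.solve()
    return x.value


# two agents, three intervals of lengths 0.1, 0.2, 0.7 (illustrative numbers only)
print(mea_allocation([[10.0, 3.0, 2.0], [2.0, 3.0, 10.0]], [0.1, 0.2, 0.7]))
```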
.[ algo : market ] [ pe , ef , prop ] mea is polynomial - time , pareto efficient and envy free .we mention here that the connection between a fair and efficient algorithm for cake cutting and computing market equilibria was already pointed by . presented an algorithm to compute an approximately envy - free and pareto optimal allocation for cake cutting with general valuations .however their algorithm is not polynomial - time even for piecewise constant valuations .mea requires the machinery of convex programming .it remains open whether mea can be implemented via linear programming . presented an algorithm that uses a linear program to compute an optimal envy - free allocation .the allocation is pareto optimal among all envy - free allocations. however it need not be pareto optimal in general .although mea is not robust envy - free like ccea , it is pareto optimal because it maximizes the nash product .what is interesting is that under uniform valuations , both mea and ccea are equivalent . in the next resultwe demonstrate this equivalence ( proposition [ ccea equivalence ] ) .the proof requires a careful comparison of both ccea and mea under uniform valuations .[ ccea equivalence ] for piecewise uniform valuations , the allocation given by ccea is identical to that given by mea . for piecewise uniform valuations ,mea is group - strategyproof .thus if we want to generalize mechanism 1 of to piecewise constant valuations and maintain robust envy - freeness then we should opt for ccea . on the other hand ,if want to still achieve pareto optimality , then mea is the appropriate generalization . in both generalization, we lose strategyproofness .thus far , we presented two polynomial time algorithms , each of which satisfies a different set of properties .ccea is robust envy - free and non - wasteful , whereas mea is pareto optimal and envy - free .this naturally leads to the following question : does there exist an algorithm satisfies all of the properties that ccea and mea satisfy ?it turns out that the answer is no , as theorem [ pe fair impossibility ] shows that there is no algorithm that is both pareto efficient and robust proportional .similarly , theorem [ pe sp prop impossibility ] argues that there is no algorithm that satisfies the properties ccea satisfies along with strategyproofness .lastly , theorem [ sp rob prop non w impossibility ] argues that there is no algorithm that satisfies the properties ccea satisfies plus strategyproofness .the impossibility results are summarized in table [ table : summary : impossible ] .consequently , we may conclude that the properties satisfied by ccea and mea are respectively maximal subsets of properties that an algorithm can satisfy for piecewise constant valuations .in the previous sections , we saw that ccea and mea are only strategyproof for piecewise uniform valuations . in light of the impossibility results established in the preivous section , it is reasonable to ask what other property along with strategyproofness can be satisfied by some algorithm .it follows from ( * ? ? ?* theorem 3 , ) that the only type of strategyproof and pareto optimal mechanisms are dictatorships . raised the question whether there exists a strategyproof and proportional algorithm for piecewise constant valuations .the algorithm csd answers this question partially . before diving into the csd algorithm ,it is worth noting that there is some fundamental difference between random assignment setting and the cake cutting setting . 
in the random assignmentsetting , the objects that we are allocating are well defined and known to the public . on the other hand , in the cake cutting setting , the discontinuity points of each agent s valuation function is private information for the agent .hence , any algorithm that uses the reported discontinuity points to artificially create the objects runs into the risk of having the objects created by the algorithm be manipulated by the reports of the agents . in order to illustrate this difficulty ,consider the uniform allocation rule .the uniform allocation rule ( that assigns of each house ) is both strategyproof and proportional in the random assignment setting .however it can not be adapted for cake cutting with piecewise constant valuations since strategyproofness is no longer satisfied if allocating of each interval ( induced by the agent valuations ) is done deterministically .[ prop : uniform allocation is not sp ] the uniform allocation rule ( done deterministically ) is not strategyproof .now we are ready to present csd . in order to motivate csd, we will give a randomized algorithm that is strategyproof and robust proportional in expectation .the algorithm is a variant of random dictatorship : each agent has uniform probability of being chosen as a dictator .however , if the whole cake is acceptable to each agent , then each time a dictator is chosen , he will take the whole cake .this approach is not helpful since we return to square one of having to divide the whole cake .we add an additional requirement which is helpful .we require that each time a dictator is chosen , the piece he takes has to be of maximum value length of the total size of the cake. we will call this algorithm constrained random serial dictatorship ( crsd ) . formally speaking , crsd draws a random permutation of the agents .the algorithm then makes the allocation to agents in the order that the lottery is drawn .everytime when it is agent s turn to receive his allocation , crsd looks at the remaining portion of the cake and allocates a maximum value length piece of the cake to agent ( break ties arbitrarily ) .notice that crsd is strategyproof , as in every draw of lottery , it is in the best interest of the agents to report their valuation function truthfully in order to obtain a piece that maximizes his valuation function out of the remaining pieces of cake .later on we will see , through the proof of proposition [ prop : csd is robust prop ] , that crsd is robust proportional in expectation .+ csd is an algorithm that derandomizes crsd by looking at its allocation for all different permutations and aggregate them in a suitable manner . the algorithm is formally presented as algorithm [ csd - algo ] .+ cake - cutting problem with piecewise constant valuations . a robust proportional allocation . ] and agent then takes the remaining piece \} ] and agent then takes the remaining piece ,[0.8,1]\} ] .when we we additionally consider the discontinuities in the players valuations , the set of relevant subintervals is , [ 0.1,0.3],[0.3,0.5],[0.5,0.6 ] , [ 0.6,0.8],[0.8,1]. ] . 
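A sketch of the CRSD step described above: each dictator, in random order, greedily takes the highest-density remaining subintervals up to a length cap. The cap is assumed here to be 1/n of the cake, since the exact bound is elided in the text above; for piecewise constant densities the greedy choice yields a maximum-value piece of that length, because arbitrary fractions of each constant-density subinterval may be taken.

```python
import random


def crsd(reports, rng=random.Random(0)):
    """Constrained random serial dictatorship (sketch).  Each report is a list of
    (a, b, height) triples; the allocation records, for every agent, pairs of
    (common subinterval, length taken).  Only the lengths matter, since the
    density is constant on each subinterval."""
    n = len(reports)
    points = sorted({p for rep in reports for a, b, _ in rep for p in (a, b)})
    atoms = [(points[k], points[k + 1]) for k in range(len(points) - 1)]

    def height(rep, atom):
        mid = (atom[0] + atom[1]) / 2.0
        return next((h for a, b, h in rep if a <= mid < b), 0.0)

    cap = (points[-1] - points[0]) / n            # assumed length bound per dictator
    remaining = {atom: atom[1] - atom[0] for atom in atoms}
    allocation = [[] for _ in range(n)]
    for i in rng.sample(range(n), n):             # random permutation of the agents
        budget = cap
        # greedy is optimal here: take the densest remaining atoms first
        for atom in sorted(atoms, key=lambda a: -height(reports[i], a)):
            if budget <= 1e-12 or height(reports[i], atom) <= 0:
                break
            take = min(remaining[atom], budget)
            if take > 1e-12:
                allocation[i].append((atom, take))
                remaining[atom] -= take
                budget -= take
    return allocation
```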
for ,let if and if .let \ ] ] if and \cup [ \mod(u_j + \sum_{k = 1}^{i-1}p_{nj}(b_j - a_j ) ) , b_j]\ ] ] otherwise .[ csd : subroutine ] we will refer this randomized implemention of csd as _ constrained mixed serial dictatorship _ , or _cmsd _ for short .[ csd : sp ] csd implemented with the aforementioned random allocation rule is strategyproof in expectation .although csd is strategyproof in expectation , it fails to satisfy truthfulness based on group - based deviations no matter how the fractional parts of each interval are allocated .[ csd : not gsp for pwc ] for cake cutting with piecewise constant valuations , csd is not weakly group - strategyproof even for two agents .moreover , for cake cutting with piecewise uniform valuations , csd is not weakly group - strategyproof for at least seven agents .the statement above follows from the fact that rsd is not weakly group - strategyproof for dichotomous preferences when there are at least seven agents .even though csd satisfies both proportionality and symmetry , it does not satisfy the stronger notion of envy - freeness .[ prop : csd is not envy - free ] csd is not necessarily envy - free for three agents even for piecewise uniform valuations .another drawback of csd is that it is not pareto optimal for piecewise constant valuations .the statement follows from the fact that rsd is not sd - efficient .however for the case of two agents , it is robust envy - free and polynomial - time . for two agents and piecewise constant valuations ,csd is robust envy - free , and polynomial - time but not pareto optimal . for piecewise uniform valuations, csd can be modified to be made pareto optimal .the main change is that for each permutation csd , the resultant outcome needs to be made pareto optimal .this can be done by using the idea in .in this section , we show how some of our positive results concerning ccea extend to more general settings where agents may have variable claims or they may have initial endowments ( please see algorithm [ algo : ccea - extensions ] ) . piecewise constant value functions with priority claims or private endowments a robust envy - free and individually rational allocation .let where is the agent owning all public cake but with no interest in any of the cake .join the segments to assemble a cake .divide the regions according to agent value functions .let be the set of subintervals of ] for all .an allocation satisfies _ robust proportionality for variable claims _ if for all and for all , }v_i'(x)dx ] into intervals of the form ] and of ] and ] and agent receives \cup [ 0.75 , 1] ] and agent 2 receives \cup [ 0.75 , 1] ] . by non - wastefulness ,agent must receive ] since agent has a utility of on these intervals .hence , agent in profile 2 would misreport so that he receives the allocation in profile .first of all , ccea is non - wasteful because an agent is never allowed to eat a piece of the cake that he has no desire for . on the other hand ,the algorithm terminates only when every portion of the cake that is desired by at least one agent is completely consumed by some agent who desires it .next , we show that the algorithm is robust envy - free . consider a fractional assignment returned by the cc algorithm . 
without private endowmentscc is equivalent to the eps algorithm of .assignment satisfies justified envy - freeness in presence of variable eating rates : for all utilities consistent with preferences of over the houses .the intuition is that at any point during the running of cc , an agent will be ` eating ' his most favoured object(s ) at the same rate as any other agent even if is also the eating the same object(s ) .hence , for all , it is the case that for , . +for a cake cutting instance , is the input size where is the number of agents and is the number of relevant subintervals . once the lengths of the subintervals in are computed , the size of each house can be computed in linear time .the number of houses in ccea is .we now analyse the running time of cc on agents and houses ( section 3.5 , * ? ? ?* ) . in the cc algorithm ,the flow network consists of vertices and arcs where and .the number of parametric flow problems needed to be solved is .a parametric flow network problem can be solved in time due to .hence , the running time of ccea is .ccea is not strategyproof even if all the piecewise intervals are of equal length , and there are no private endowments or variable claims , and agents have strict preferences over the intervals . in this caseccea is equivalent to the classic ps algorithm .it is known that ps is not strategyproof even for strict preferences when there are more objects than agents .in the absence of private endowments and variable claims , ccea can be solved by invoking eps instead of cc but with the slight modification that in the corresponding flow network of eps , the capacity of each arc is set to in step 2 of eps ( algorithm 1 , * ? ? ?let us refer to this simplified ccea as simpleccea .when simpleccea is run , it invokes eps and solves repeated parametric network flow problems ( step 3 , algorithm 1 , * ? ? ?* ) . in the step , eps computes a bottleneck set of agents and houses at each break - point .simpleccea computes bottleneck sets in the same way as mechanism 1 of and then allocates the resources in the bottleneck sets to the agents in the bottleneck set .the flow networks of the slightly modified eps ( figure 2 , * ? ? ?* ) and that of mechanism 1 ( figure 2 , * ? ? ?* ) are identical with only two insignificant differences namely that in the flow network of mechanism 1 of i ) the source and target are swapped and all the arcs are inverted and ii ) the size of the houses / intervals is not normalized. however , the eventual allocations are same .in light of lemma [ prop : equivalence ] , it suffices to show that simpleccea is gsp .we begin with some notations .+ let denote the length of ] denote the pieces of cake that are truly desired by each agent .+ let ] denote the allocation received by each agent under truthful reports .+ let ] .this contradicts the fact that .it is the case that . in other words ,no agent in is strictly better off when some subset of agents misreport their preference . since we have established that , and , it suffices to show that .suppose not , then for all , we have that and there exists some such that .summing over , we get that where the first two equalities follow from the fact that the s and s are disjoint subsets and the third equality follows from the way the algorithm allocates to the agents in the smallest bottleneck set . 
butthis set of inequalities contradict the fact that , which implies that .hence , it must be the case that for every , we have that , which implies that .no agent in appears in the coalition and is also the first bottleneck set for . by the previous lemma ,no agent in is strictly better off by misreporting his preference .thus , any agent in would potentially be in if by misreporting , he makes himself no worse off and simultaneously make some other agent in strictly better off .the only way that this can happen is by misreporting , the agents in make their collective claim over their desired pieces smaller , so that agents in later bottleneck set can claim some of their desired pieces . on the other hand , the following inequality implies that if any subset of agents of wants to misreport so they are not worse off , then collectively , they must over - report their preference to obtain allocations that together is weakly larger in total than the allocations they would get had they reported truthfully .thus , having a subset of agents in misreport will not benefit the other agents in the coalition .hence , we may conclude that no agent in appears in the coalition .provided that every agent in also reports truthfully in , there is no incentive for an agent that belongs to a subsequent bottleneck set in to misreport and prevent from being the first bottleneck set in since that would make the misreporting agent strictly worse off , as in doing so , he needs to create a bottleneck set such that and he would consequently receive an allocation of .+ since no agent in appears in the coalition and is also the first bottleneck set for , we can remove from and from ] to be allocated and the proof is complete by invoking the inductive hypothesis with being the first bottleneck set in the new instance .consider the following math program where , + notice that the feasible region of the math program contains all feasible allocations .an optimal solution given by the lp is not pareto dominated by any other feasible allocation because that would contradict the optimality of the solution .hence , it is pareto efficient .+ to see that the optimal solution of the math program is also an envy free allocation , if we instead view as the fractional amount of that is allocated to agent , then scaling the s appropriately ( i.e. setting ) , then solving the math program stated in the proposition is equivalent to solving the following math program . in turn equivalent to solving notice that the above math program is a convex program since we are maximizing a concave function ( or equivalently minimizing a convex function ) subject to linear constraints .+ in ( pp 105 - 107 , * ? ? ?* ) , vazirani invites us to consider a market setting of buyers ( agents ) and divisible goods ( intervals ) . each goodis assumed to be desired by at least one buyer ( i.e. for every good , for some buyer ) .there is a unit of each good and each buyer is given the same amount of money say dollar , for which he uses to purchases the good(s ) that maximizes his utility subject to a set of given prices .the task is to find a set of equilibrium prices such that the market clears ( meaning all the demands are met and no part of any item is leftover ) when the buyers seek purchase good(s ) to maximize their utility given the equilibrium prices .+ using duality theory , one can interpret the dual variable associated with the constraints as the price of consuming a unit of good . 
by invoking the kkt conditions, shows the prices given by the optimal dual solution is a unique set of equilibrium prices .moreover , the primal optimal solution for each buyer is precisely the quantity of good(s ) that the buyer ends up purchasing that maximizes his utility given the equilibrium prices .+ now we can argue as to why the optimal primal solution is an envy free allocation . because given the equilibrium prices , if a buyer desires another buyer s allocation , since he has the same purchasing power as any other buyer , he would instead use his money to obtain the allocation of the buyer that he envies .this would result in some surplus and deficit of goods , contradicting the fact that the given prices are equilibrium prices .for piecewise uniform valuations , it is known that ccea is equivalent to mechanism in , which we will refer to as _simpleccea_. the remainder of the proof will focus on showing the equivalence between simpleccea and the convex program for piecewise uniform valuations .to do so , given an allocation of simpleccea , which is a feasible solution of the convex program , we will find a set of prices corresponding to the allocation and show that the prices are in fact the equilibrium prices defined by vazirani on pages 105 - 107 of .moreover , this allocation would be an allocation that maximizes the agents utility given the equilibrium prices .+ using the same notations as those in , given a valuation profile , let be the set of buyers / agents and be the set of goods or intervals .let be the -th bottleneck set computed by simpleccea , i.e. in the -th iteration of the subroutine of simpleccea .let be the set of goods that are distributed amongst the buyers in . in the convex program, since each buyer is endowed with dollar and every buyer in receives units of good(s ) , it is natural to define the price of a unit of each good to be notice that the prices for each good is well defined ( i.e. each good has exactly one nonnegative price ) .this follows from the following observations : 1 . ( or every good has at least one price ) .this follows from the assumption that every good is desired by at least one agent , which means that simpleccea will allocate all of the goods . 2 . for all ( or every good has at most one price ) .this follows from the fact that no fractional parts of any good is allocated to agents from two or more bottleneck sets , which another algorithmic property of simpleccea . to show that the s form a set of equilibrium prices, we will show that given the s , the buyers in every will choose to purchase _ only _ goods from to maximize their utility function .we will do induction on the number of bottleneck sets .consider the first bottleneck set , we will show that are the only items that are desirable by buyers in .simpleccea finds by solving a parametrized max flow problem on a bipartite network .the network has a node for every buyer and a node for every good .there is a directed edge from a buyer to a good with infinite capacity if good is desired by buyer .in addition , there is a source and a sink .there is a directed edge from to each buyer with capacity and a directed edge from edge good to with capacity equaling the quantity of good . is set to initially so that the unique min cut in the network for is . is gradually raised until is no longer the unique min cut , at which point and are found by looking for another min cut of the form . 
since is a min cut , it must be the case that in the network , which proves the lemma .( if , then there will be an infinite capacity edge crossing the cut .on the other hand , if , then replacing with on the side of the cut would give a cut with a smaller capacity . ) since the agents have piecewise uniform valuation , in the setting of the convex program , each buyer in has the same utility for the items that he desire in .moreover , given that the prices of goods are identical in , each buyer is indifferent between choosing among his desirable items for the best item in terms of bang per buck .hence , for all buyers in , the allocation given by simpleccea maximizes buyer s utility given the prices s . moreover , notice that the money of buyers in and goods in are exhausted by the allocation given by simpleccea .+ after the goods in are allocated to the buyers in we repeat the same argument for the remaining buyers and goods and the inductive hypothesis allows to conclude that for every and for all buyers in , the allocation given by simpleccea maximizes buyer s utility given the prices s .moreover , the money of buyers in and goods in are exhausted by the allocation given by simpleccea .there is a slight difference between the inductive step and the base case , as it is possible that some buyer in also desires certain goods in for some .however , state that is a weakly increasing function of , which means that is a weakly decreasing function of .since we are dealing with piecewise uniform valuations , the utility of a buyer over his desirable goods are identical , this means that for any buyer in , the goods that maximize his bang for buck are in .+ putting everything together and we have shown that s constitute the set of equilibrium prices and simpleccea gives an equilibrium allocation , which is an optimal solution to the convex program .consider the following profile of two agents . profile 1 : * , \v_1(x ) = 0 \ \text{if } \ x \in ( 0.2 , 1]. ] the uniform allocation rule gives us the allocation : * , \frac{1}{2}(0.6,1]\} ] .let ] * \backslash a , \v_2(x ) = 1 \ \text{if } \ x \in a. ] . +* , \frac{1}{2}a\} ] induced by the discontinuities in agent valuations and the cake cuts in the cake allocations .we make a couple of claims about that following from the way is constructed .an agent is completely indifferent over each subinterval in .let denote a maximum preference cake piece of size chosen by agent in the serial order .for each either contains completely or it does not contain any part of . now consider a matrix of dimension : such that if and if .since for each , each agent gets of the cake in , then it follows that .hence , also consider a matrix of dimension : such that denotes the fraction of that agent gets in . from the algorithm csd ,we know that where is the number of permutations in which gets .it is immediately seen that each column sums up to .hence each is complete allocated to the agents .we now prove that each agent gets a total cake piece of size .we do so by showing that . hence the allocation returned by csd is a proper allocation of the cake in which each agent gets a total cake piece of size .we first argue for proportionality of csd . 
in the case where all agents have the same valuations as the valuation of , guaranteed of the value of the whole cake because of anonymity of csd .first note that for each and preferences of all agents other than , the reason is that when valuations are not identical , predecessors of in leave weakly better cake for as when their valuations are same as agent .hence , )/n . ] * , \v_2(x ) = 0 \ \text{if } \ x \in ( 0.5 , 1]. ] .+ * , \frac{1}{2}(0.5,1]\} ] * , \v_2(x ) = 1 \ \text{if } \ x \in ( 0.25 , 0.75 ] , v_2(x ) = 0 \\text{if } \ x \in ( 0.75 , 1]. ] .+ * , ( 0.5,0.75 ] , \frac{1}{2}(0.75,1]\} ] to agent 1 and ] in order to receive the allocation given in profile 1 and gain utility in doing so .consider the profiles and , where is a profile where every agent reports truthfully and is a profile where agent misreports while fixing every other agent s report to be the same as that in .let denote a permutation of = \{1,\ldots , n\} ] .let denote the intervals whose fractional allocations are specified to each agent by csd in profile and denote the intervals whose fractional allocations are specified to each agent by csd in profile .let denote agent s total utility derived from receiving the interval and and denote the agent s total utility derived from his allocated pieces when the serial ordering of the agents is in profile and respectively .let denote the probability that interval is assigned to agent .since random serial dictatorship is strategyproof in expectation , we have that now csd views as allocating a fraction of interval to agent . in order for a deviating agent to properly evaluate the utility derived from his allocation in the deviating profile , we have to come up with an allocation rule that actually _ attains _ the utility for agent ( either deterministically or in expectation ) when the profile of reports is .in particular , say if we want to allocate a subinterval of with length times that of to agent at random , then this random allocation rule must satisfy the property that = p_{ij}v_i(j'_j) ] for all valuation functions , where such that .+ notice that is uniformly distributed on ] follows from the following lemma .[ lemma : csd - random ] let be uniformly distributed on the interval ] if and \cup [ u , b] ] , where for any integrable function .define for and for outside ] . consider the following two profiles of valuations .+ profile 1 : * * running csd gives us the allocation : * .+ * . profile 2 : * * running csd gives us the allocation : * .+ * .hence , agents with true valuation in profile 1 would misreport together to profile 2 , which means that csd is not group strategyproof for 2 agents .there are three agents , each with piecewise uniform valuation function . for ] and otherwise .+ for ] , ] .we adopt the following implementation of csd : when it is agent i s turn to pick , out of the pieces of the remaining cake that he likes , he takes the _ left - most _ such piece with length 1/n , where n is the number of agents .if the priority ordering were , then a feasible assignment that respects the preferences is 1 a , 2 c , 3 b. if the priority ordering were 1,3,2 , then a feasible assignment that respects the preferences is , .if the priority ordering were , then a feasible assignment that respects the preferences is , , .if the priority ordering were , then a feasible assignment that respects the preferences is , , .if the priority ordering were , then a feasible assignment that respects the preferences is , , . 
if the priority ordering were , then a feasible assignment that respects the preferences is , , .then , the csd allocation is as follows . , \frac{1}{6}(1/3 , 2/3 ] , \frac{1}{3}(2/3 , 1]\}\\ y_2 = & \quad \{\frac{1}{2}[0,1/3 ] , \frac{1}{2}(2/3,1]\}\\ y_3 = & \quad \{\frac{5}{6}(1/3 , 2/3 ] , \frac{1}{6}(2/3,1]\ } \end{aligned}\ ] ] clearly , agent 1 envies agent 3 in this case .consider a fractional assignment returned by the cc algorithm . without private endowmentscc is equivalent to the eps algorithm of .assignment satisfies justified envy - freeness in presence of variable eating rates : for all utilities consistent with preferences of over the houses .the informal intuition is that at any point during the running of cc , an agent with a higher eating rate than will be ` eating ' his most favoured object(s ) faster than even if is also has the eating the same object(s ) .hence , for all , it is the case that for , .given a variable claim instance of agents where agent has claim rate for .we may assume without lost of generality that the claim rates are integral .if they are not , then we can simply multiple each claim rate by a common denominator to make each integral .doing so will not change the allocation given by the algorithm since only relative claim rates matter to the algorithm . + now consider a cake cutting instance of agents , where the agents have piecewise uniform utility function and there are no private endowments or variable claims . moreover , for every , there are agents in each of whom has the same utility function as that of agent in .it is not difficult to see that if one aggregates the allocation that the agents in who share agent s valuation in , then one would get an equivalent allocation ( in terms of utility ) to agent s allocation .suppose for the sake of contradiction that ccea is not gsp for the case of variable claims , then in some instance , there exists some coalition of the agents that weakly gains in utility by misreporting their preference .now consider the equivalent instance with no variable claims under the aforementioned transformation , then there exists some coalition of the agents in that weakly gains in utility by misreporting their preference , which implies that ccea is not group - strategyproof for the no variable claims case , contradicting the result of proposition [ prop : gsp1 ] .the allocation can be obtained by solving the following convex program . the proof of the desired properties is similar to the case where .consider a fractional assignment returned by the cc algorithm.we know that satisfies justified envy - freeness for the random / fractional assignment problem ( prop . 4 ,* ) . if , then . if , then .hence satisfies justified envy - freeness for private endowments .
cake cutting is one of the most fundamental settings in fair division and mechanism design without money . in this paper , we consider different levels of three fundamental goals in cake cutting : fairness , pareto optimality , and strategyproofness . in particular , we present robust versions of envy - freeness and proportionality that are not only stronger than their standard counter - parts but also have less information requirements . we then focus on cake cutting with piecewise constant valuations and present three desirable algorithms : _ ccea ( controlled cake eating algorithm ) _ , _ mea ( market equilibrium algorithm ) _ and _ csd ( constrained serial dictatorship)_. ccea is polynomial - time , robust envy - free , and non - wasteful . it relies on parametric network flows and recent generalizations of the probabilistic serial algorithm . for the subdomain of piecewise uniform valuations , we show that it is also group - strategyproof . then , we show that there exists an algorithm _ ( mea ) _ that is polynomial - time , envy - free , proportional , and pareto optimal . mea is based on computing a market - based equilibrium via a convex program and relies on the results of and . moreover , we show that mea and ccea are equivalent to mechanism 1 of chen et . al . for piecewise uniform valuations . we then present an algorithm _ csd _ and a way to implement it via randomization that satisfies strategyproofness in expectation , robust proportionality , and unanimity for piecewise constant valuations . for the case of two agents , it is robust envy - free , robust proportional , strategyproof , and polynomial - time . many of our results extend to more general settings in cake cutting that allow for variable claims and initial endowments . we also show a few impossibility results to complement our algorithms . the impossibilities show that the properties satisfied by ccea and mea are maximal subsets of properties that can be satisfied by any algorithm for piecewise constant valuation profiles .
the _ howard's algorithm _ ( also called _ policy iteration algorithm _ ) is a classical method for solving a discrete hamilton-jacobi equation. this technique, developed by bellman and howard, is widely used in applications thanks to its good properties of efficiency and simplicity. it was clear from the beginning that, in the presence of a space of controls with infinitely many elements, the convergence of the algorithm is comparable to that of newton's method. this was shown under progressively more general assumptions up to , where, using the concept of _ slant differentiability _ introduced in , the technique is shown to be of semi-smooth newton type, with all the good qualities in terms of superlinear convergence and, in some cases of interest, even quadratic convergence. in this paper we propose a parallel version of the policy iteration algorithm, discussing the advantages and the weak points of such a proposal. in order to build this parallel algorithm we use a theoretical construction inspired by some recent results on domain decomposition ( for example ). however, for our purposes, thanks to the greater regularity of the hamiltonian, the decomposition can be studied using standard techniques. we focus instead on the convergence of the numerical iteration, discussing some sufficient conditions, the number of iterations necessary, and the speed. parallel computing applied to hamilton-jacobi equations is a subject of current interest because of the strict limitations of classical techniques in real problems, where memory storage restrictions and limits on cpu speed easily make the computation infeasible, even in relatively easy cases. when building a parallel solver, the main problem to deal with is how to manage the information passed between the threads. our analysis is not the first contribution on the topic, but it is an original study of the specific possibilities offered by the policy iteration algorithm. in particular, some non-trivial questions are: is convergence always guaranteed? in finite time? with which rate? what is the gain with respect to the ( already efficient ) classical howard's algorithm? to our knowledge, the first parallel algorithm proposed in the literature was by sun in 1993 on the numerical solution of the bellman equation related to an exit time problem for a diffusion process ( i.e. for second order elliptic problems ); an immediately subsequent work is by camilli, falcone, lanucara and seghini, where an operator of semilagrangian kind is proposed and studied on the interfaces of the splitting. more recently, the issue was also discussed by zhou and zhan where, passing to an equivalent quasi-variational inequality formulation, a domain decomposition becomes possible. our intention is to show a different way to approach the topic. decomposing the problem directly in its differential form, it is possible to give an easy and consistent interpretation to the conditions to impose on the boundaries of the sub-domains. thereafter, passing to a discrete version of the decomposed problem, it becomes relatively easy to show the convergence of the technique to the correct solution, avoiding the technical problems, observed elsewhere, about the manner of exchanging information between the sub-domains. in our technique, as explained later, this exchange is replaced by the resolution of an auxiliary problem living on the interface connecting the sub-domains in the domain decomposition.
in this way, data will be passed implicitly through the sub - problems .the paper is structured as follows : in section 2 we recall the classic howard s algorithm and the relation with the differential problem , focusing on the case of its control theory interpretation . in section 3 , after discussing briefly the strategy of decomposition , we present the algorithm , and we study the convergence .section 4 is dedicated to a presentation of the performances and to show the advantages with respect the non parallel version .we will end presenting some possible extensions of the technique to some problems of interest : reachability problems with obstacle avoidance , max - min problems .the problem considered is the following . let be bounded open domain of ( ) ; the steady , first order , _ hamilton - jacobi equation _ ( hj ) is : where , following its _ optimal control interpretation _ , is the _ discount factor _ , is the _ exit cost _ , and the _ hamiltonian _ is defined by : with ( _ dynamics _ ) and ( _ running cost _ ) .the choice of such hamiltonian is not restrictive but useful to simplify the presentation . as extension of the techniques we are going to present ,it will be shown , in the dedicate section , as the same results can be obtained in presence of different kind of hamiltonians , as in obstacle problems or in differential games .under classical assumptions on the data ( for our purposes we can suppose and continuous , and lipschitz continuous for all and verified the _ soner s condition _ ) , it is known ( see also , ) that the equation admits a unique continuous solution in the _ viscosity solutions _ sense . the solution is the value function to the infinite horizon problem with exit cost , where is the _ first time of exit _form : numerical schemes for approximation of such problem have been proposed from the early steps of the theory , let us mention the classical finite differences schemes , semilagrangian , discontinuous galerkin and many others . in this paperwe will focus on a _ monotone _ , _ consistent _ and _ stable _ scheme ( class including the first two mentioned above ) , which will provide us the discrete problem where to apply the howard s algorithm .considered a discrete grid with points , on the domain , the finite -dimensional approximation of , , will be the solution of the following discrete equation ( ) where , ( maximal diameter of the family of simplices built on ) is the discretization step , and related to a subset of the , there are included the dirichlet conditions following the obvious pattern we will assume on , some hypotheses sufficient to ensure the convergence of the discretization * _ monotony . _ for every choice of two vectors such that , ( component - wise ) then for all . * _ stability ._ if the data of the problem are finite , for every vector , there exists a such that , solution of , is bounded by i.e. independently from .* _ consistency ._ this hypothesis , not necessary in the analysis of the convergence of the scheme , is essential to guarantee that the numerical solution obtained approximates the continuous solution .it is assumed that for every , , with , , and . 
under these assumptionsit has been discussed and proved that , solution of , converges to , viscosity solution of for .the special form of the hamiltonian gives us a correspondent special structure of the scheme , in particular , with a rearrangement of the terms , the discrete problem can be written as a resolution of a nonlinear system in the following form : where is a matrix and is a vector .the name is chosen to underline ( it will be important in the following ) that such vector there are contained information about the dirichlet conditions imposed on the boundaries .policy iteration algorithm _ ( or howard s algorithm ) consists in a two - steps iteration with an alternating improvement of the policy and the value function , as shown in table [ ha ] .it is by now known that under a monotonicity assumption on the matrices , ( we recall that a matrix is monotone if ans only if it is invertible and every element of its inverse are non negative ) , automatically derived from ( h1 * ) ( as shown below ) , the above algorithm is a non smooth newton method that converges superlinearly to the discrete solution of problem .the convergence of the algorithm is also discussed in the earlier work where the results are given in a more regular framework . additionally , if has a finite number of elements , and this is the standard case of a discretized space of the controls , then the algorithm converges in a finite number of iterations .let us state , for a fixed vector the subspace of controls [ p:1 ] let us assume the matrix is invertible .if ( h1 * ) holds true , then is monotone and not null for every with . for a positive vector ,consider a vector such that componentwise , then for h1 * where , therefore suppose now that the column of has a negative entry : choosing ( column of the identity matrix ) multiplying the previous relation for we have a contradiction .then is monotone .( 1,0)380 + howard s algorithm ( ha ) ( 1,0)380 + inputs : , .( implicitly , the values of at the boundary points ) + initialize and + iterate : * find solution of .+ if and , then stop .otherwise go to ( ii ) .. + set and go to ( i ) outputs : .( 1,0)380 + it is useful to underline the conceptual distinction between the convergence of the algorithm and the convergence of the numerical approximation to the continuous function as discussed previously . in general, the howard s algorithm is an acceleration technique for the calculus of the approximate solution , the error with the analytic solution will be depending on the discretization scheme used . to conclude this introductory sectionlet us make two monodimensional basic examples .[ ex1 ] an example for the matrix and the vector is the easy case of an upwind explicit euler scheme in dimension one where is a uniform discrete grid consisting in knots of distance .moreover , and . in this casethe system is } { h\lambda } & -\frac{f^+_1}{h\lambda } & 0 & \cdots & 0 \\ \frac{f^-_2}{h\lambda } & 1+\frac{\left[f^+_2-f^-_2\right]}{h\lambda } & -\frac{f^+_2}{h\lambda } & \cdots & 0\\ 0 & \ddots & \ddots & \ddots & 0\\ 0 & \cdots & \cdots & \frac{f^-_n}{h\lambda } & 1+\frac{\left[f^+_n - f^-_n\right]}{h\lambda } \end{array } \right),\ ] ] and it is straightforward that the solution of howard s algorithm , verifying , is the solution of .[ ex2 ] if we consider the standard 1d semilagrangian scheme , the matrix and the vector are and where and the coefficients are the weights of a chosen interpolation (x_i+h f(x_i,\alpha_j))=\sum_{i=0}^{n+1}b_i(\alpha_j)v_i ] for . 
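before passing to the parallel framework, the following is a minimal numpy sketch of the two-step iteration (i)-(ii) of table [ ha ] applied to the upwind discretization of example [ex1], for the eikonal test problem lambda*v + |v'| = 1 on (0,1) with homogeneous dirichlet data (dynamics f(x,a) = a, running cost l = 1, controls a in {-1, +1}). the variable names, grid size and stopping test are illustrative choices, not part of the scheme above.

import numpy as np

lam, N = 1.0, 99                 # discount factor and number of interior nodes
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)   # interior grid points
controls = np.array([-1.0, 1.0])

def assemble(policy):
    """Build B(alpha) and c(alpha) of example [ex1] for a node-wise policy."""
    B = np.eye(N)
    c = np.full(N, 1.0 / lam)            # l(x,a)/lambda with l = 1
    for i, a in enumerate(policy):
        f = a                            # dynamics f(x_i, a)
        fp, fm = max(f, 0.0), min(f, 0.0)
        B[i, i] += (fp - fm) / (h * lam)
        if i + 1 < N:
            B[i, i + 1] -= fp / (h * lam)
        if i - 1 >= 0:
            B[i, i - 1] += fm / (h * lam)
        # neighbours outside the grid carry the Dirichlet value 0,
        # so no boundary contribution has to be added to c here
    return B, c

def residual(v, a):
    """Row-wise value of B(a)v - c(a) for the constant policy a."""
    B, c = assemble(np.full(N, a))
    return B @ v - c

policy = np.full(N, 1.0)                 # arbitrary initial policy
for k in range(100):
    B, c = assemble(policy)              # (i) policy evaluation: solve B v = c
    v = np.linalg.solve(B, c)
    R = np.vstack([residual(v, a) for a in controls])
    new_policy = controls[np.argmax(R, axis=0)]   # (ii) policy improvement
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
# v now approximates the (Kruzkov-transformed) distance to the boundary

since the control set here has only two elements, the improvement step can change the policy finitely many times, in agreement with the finite-termination property of the howard's algorithm recalled above.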
the hypotheses ( h1 * -h2 * ) will be naturally adapted to the new framework as below : * _ monotony ._ for every choice of two vectors such that , ( component - wise ) then for all , and .* _ stability ._ if the data of the problem are finite , for every vector , and every s.t . , there exists a such that , solution of with and , is bounded by independently from .this will be sufficient , thanks also to h3 , to ensure convergence of solution of to for . from the assumptions on the discretization schemesome specific properties of and can be derived [ pp ] let us assume .let state also * if then , for all , for all .then it holds true the following . 1 .if invertible , the matrices are monotone , not null for every , and for every with .if , we have that for all and for every , there exists a such that the same relation holds for .3 . called the fixed point of , if we have ( resp . ) , then there exists a such that , for all , to prove let us just observing that the monotony of is sufficient end necessary for the monotony of , ( elsewhere is a diagonal block matrix with all the other blocks invertible ) , then the argument is the same of proposition [ p:1 ] , starting from two vectors with the only difference that we need assumption h4 to get or equivalently then the thesis . to prove 2 , it is sufficient to see , then for h2 the thesis .the proof of 3 is a direct consequence of monotony assumption h1 with the definition of as here we introduce a convergence result for the ( pha ) algorithm .[ t:1 ] assume that the function , with invertible , and are continuous on the variable for , is a compact set of , and hold .then there exists a unique in solution of .moreover , the sequence generated by the ( pha ) has the following properties : * every element of the sequence is bounded by a constant , i.e. .* if then for all , vice versa , if then .* when tends to .the existence of a solution comes directly from the monotonicity of the matrices , the existence of an inverse and then the existence of a solution of every system of .let us show that such solution is limited as limit of a sequence of vectors of bounded norm .observing that , without loss of generality we assume that . considering the problem we have for h2 that if is bounded then .adding that is chosen bounded , the thesis follows for induction .let us to pass now to prove the uniqueness : taken two solutions of , we define the vector equal to in the identical arguments of and equal to elsewhere , for a .we have that , for a control ( for proposition [ pp].3 ) , then and for monotonicity . exchanging the role of and , and for the arbitrary choice of ( in some argumentsthe relation above is trivial ) we get the thesis .\(i ) to prove that is an increasing sequence is sufficient to prove that taken solution of with ( the opposite case is analogue ) , for a choice of is such that .let us observe , for a choice of and using of prop .[ pp ] then then . 
+ we need also to prove that : if it should not be true , then , with a similar argument than above then for h4 , which contradicts what stated previously .it is also possible to show that the method stops to the fixed point in a finite time .this is an excellent feature of the technique ; unfortunately , the estimate which is possible to guarantee is largely for excess and , although important from the theoretical point of view , not so effective to show the good qualities of the method .the performances will checked in the through some tests in the section [ s : test ] .if and convergence requests of theorem [ t:1 ] are verified , then converges to the solution in less than iterative steps .the proof is slightly similar to the classic howard s case ( cf .for example ) .let us consider the abstract formulation , where is determined by parameter in , and , where is determined by parameter in .then if we consider the iteration and we suppose ( theorem [ t:1 ] ) , ; than called the variables in associated to we know that there exist a and a where , such that , and again . afterwards is a fixed point of .+ to restrict to our case is sufficient identify the process with the ( parallel ) resolution on the sub - domains and with the iteration on the interfaces between the sub domains .it is worth to notice that the above estimation is worse than the classical howard s case .in fact , the classical algorithm find the solution in , the will have the same number of iterative steps .this number has to be multiplied , called the maximum number of nodes in a sub - domain and the number of nodes belonging to the interface , for getting , at the end , a total number of simple steps equal to , much more than the classical case . in this analysiswe do not consider anyway , the good point of the decomposition technique , the fact that any computational step is referred to a smaller and simpler problem , with the evident advantages in term of time elapsed in every thread and memory storage needed .the performances of the algorithm and its characteristics as speeding up technique will be tested in this section .we will use a standard academic example where , anyway , there are present all the main characteristics of our technique .[ [ d - problem ] ] 1d problem + + + + + + + + + + consider the monodimensional problem it is well known that this equation ( _ eikonal equation _ ) modelize the distance from the boundary of the domain , scaled by an exponential factor ( _ kruzkov transform _ , cf . ) . through a standard euler discretizationis obtained the problem in the form . in table[ tt:2 ] is shown a comparison , in term of speed and efficacy , of our algorithm and the classic howard s one , in the case of a two thread resolution .it is possible appreciate as the parallel technique is not convenient in all the situations .this is due to the low number of parallel threads which are not sufficient to justify the construction . in the successive test , keeping fixed the parameter and tuning number of threads it is possible to notice how much influential is such variable in terms of efficacy and time necessary for the resolution .c|*2c|*4c & & + dx & time & it . & t. ( par .p. ) & it .p. ) & t. ( it .p. ) & total t. + * 0.1 & e-3 & 10 & 1e-4 & 4 & 1e-5 & 1e-3 + * 0.05 & 6e-3 & 20 & 8e-4 & 5 & e-5 & 3e-3 + * 0.025 & 0.09 & 40 & 7e-3 & 6 & 2e-5 & 0.04 + * 0.0125 & 0.32 & 80 & 0.048 & 8 & 1e-4 & 0.36 + * 0.00625 & 2.22 & 160 & 0.34 & 14 & 8e-4 & 3.26 + * * * * * c|*2c|*4c dx=0.0125 & & + threads & t. & it . & t. 
( par .p. ) & it .( par . ) & t. ( it .p. ) & total t. + * 2 & & & 0.48 & 4 & 1e-4 & 0.36 + * 4 & & & 8e-3 & 6 & 1e-4 & 0.086 + * 8 & 0.32 & 80 & 18e-4 & 7 & 6e-4 & 0.014 + * 16 & & & 7e-4 & 10 & 4e-4 & + * 32 & & & 2e-4 & 8 & 6e-3 & 0.011 + * * * * * in table [ tt:2 ] we compare the iterations and the time ( expressed in seconds as elsewhere in the paper ) necessary to reach the approximated solution , analysing in the various phases of the algorithm , time and iterations necessary to solve every sub - problem ( first two columns ) , time elapsed for the iterative part ( which passes the information through the threads , next column ) , finally the total time .it is highlighted the optimal choice of number of threads ( 16 thread ) ; it is evident as that number will change with the change of the discretization step .therefore it is useful to remark that an additional work will be necessary to tune the number of threads accordingly to the peculiarities of the problem ; otherwise the risk is to is to loose completely the gain obtained through parallel computing and to get worse performances even compared with the classical howard s algorithm . as in the rest of the paper all the codesare developed in mathworks matlaband performed on a processor 2,8 ghz intel core i7 ; in the tests the parallelization is simulated .c|*2c|*5c & & + dx & t. & it . & t. ( p.p . ) & it .& t. ( it.p . ) & it .( it.p . ) & total t. + * 0.1 & 0.05 & 11 & 0.009 & 8 & 0.02 & 2 & 0.04 + * 0.05 & 2.41 & 21 & 0.05 & 13 & 0.03 & 2 & 0.14 + * 0.025 & 73.3 & 40 & 2.5 & 22 & 0.15 & 3 & 7.83 + * 0.0125 & & - & 76 & 40 & 1.293 & 5 & 383.3 + * * * * [ [ d - problem-1 ] ] 2d problem + + + + + + + + + + the next test is in a space of higher dimension .let us consider the approximation of the scaled distance function from the boundary of the square , solution of the eikonal equation where is the usual unit ball .for the discretization of the problem is used a standard euler discretization .similar tests than the 1d case are performed , confirming the good features of our technique and , as already shown , the necessity of an appropriate number of threads with respect to the complexity of the resolution .-norm ( left ) and distribution of the error , threads ( right ) ., title="fig:",height=170 ] -norm ( left ) and distribution of the error , threads ( right ) ., title="fig:",height=170 ] in table [ tt:3 ] performances of the classic howard s algorithm are compared with our technique . in this casethe number of threads are fixed to 4 ; the parallel technique is evaluated in terms of : maximum time elapsed in one thread and max number of iterations necessary ( first and second columns ) , time and number of iterations of the iterative part ( third and fourth columns ) and total time . in both the casesthe control set is substituted by a discrete version .it is evident , in the comparison , an improvement of the speed of the algorithm even larger than the simpler 1d case .this justifies , more than the 1d case , our proposal .c|*2c|*4c dx=0.025 & & + threads & t. & it . & t. ( par .p. ) & it .( par . ) & t. ( it .p. ) & total t. + * 4 & & & 2.5 & 22 & 0.15 & 7.83 + * 9 & & & 0.9 & 18 & 0.5 & 5.08 + * 16 & 73.3 & 40 & 0.05 & 13 & 1.6 & + * 25 & & & 0.03 & 12 & 2.4 & 2.52 + * 36 & & & 0.016 & 11 & 6.04 & 6.11 + * * * * * in the table [ tt:4 ] are compared the performances for various choices of the number of threads , for a fixed . 
as in the 1d caseis possible to see how an optimal choice of the number of threads can drastically strike down the time of convergence . in figure[ f : in ] is possible to see the distribution of the error .as is predictable , the highest concentration will correspond to the non - smooth points of the solution .it is possible to notice also how our technique apparently does not introduce any additional error in correspondence of the interfaces connecting the sub - domains .this is reasonable , although not evident theoretically . in fact , it is possible to prove the convergence of the scheme to the solution of using classical techniques but the rate of convergence could be different in the various subproblems , because of the ( possibly different ) local features of the problem .as shown in the tests , an important point of weakness of our technique is represented by the iterative part , which can be smaller and therefore easier than the ones solved in the parallel part , but it is highly influential in terms of general performances of the algorithm . in particular the number of the iterations of the coupling iterative - parallel partis sensible to a good initialization of the `` internal boundary '' points .as is shown in figure [ f : in ] a right initialization , even obtained on a very coarse grid , affects consistently the overall performances . in this section , all the tests are made with a initialization of the solution on a -points grid , with dimension of the domain space .the time necessary to compute the initial solution is always negligeable with respect to the global procedure .( left ) ( right ) ) of the approximated solution obtained with a and an pha . ,title="fig:",height=170 ] ( left ) ( right ) ) of the approximated solution obtained with a and an pha . ,title="fig:",height=170 ] c|*2c|*5c & & + dx & time & it . & t. ( p. p. ) & it .& t. ( it .p. ) & it .p. ) & total t. + * 0.4 & 0.004 & 4 & 0.003 & 4 & 0.002 & 1 & 0.05 + * 0.2 & 0.22 & 6 & 0.026 & 6 & 0.016 & 2 & 0.052 + * 0.1 & 164.2 & 11 & 1.102 & 8 & 2.1 & 4 & 6.78 + * 0.05 & & - & 164 & 10 & 4.98 & 3 & 494 + * * * * [ [ d - problem-2 ] ] 3d problem + + + + + + + + + + analogue results are obtained also in the approximation of a 3d problem .of course the effects of the increasing number of control points produces a greater complexity and will limit , for a same number of processors available , the possibility of a fine discretization of the domain . + let us consider the domain ^ 3 ] .clearly the optimal choice of the number of threads is such that the elements of the iterative part are balanced with the nodes in each subdomain , so it is straight forward to find the following optimal relation between number of splitting and total elements it is evident that for a very high number of elements , ( figure [ fig4 ] ) , it is useless to use a great and non optimal number of threads .this contradiction comes from the bottleneck effect of the resolution on the interfaces of communication between the subdomains , indeed the complexity of such subproblem will grow with the number of threads instead to decrease , reducing our possibilities of resolution .the problem can be overcome with an additional parallel decomposition of the iterative pass , permitting us to decompose each subproblem to a complexity acceptable .imagine to be able to solve ( for computational reasons , memory storage , etc . 
)only problem of dimension `` white square '' ( we refer to figure [ fig4 ] , right ) and to want to solve a bigger problem ( `` square 1 '' ) with an arbitrary number of processors available . through our techniquewe will decompose the problem in a finite number of subproblems `` white square '' and a ( possibly bigger than the others ) problem `` square 2 '' .we will replicate our parallel procedure for the `` square 2 '' obtaining a collection of manageable problems and a `` square 3 '' . through a reiteration of this ideawe arrive to a decomposition in subproblems of dimension desired .in this section there are shown some non trivial extensions to more general situations of the method .we will discuss , in particular , how to adapt the parallelization procedure to the case of a target problem , an obstacle problem and max - min problems , where the special structure of the hamiltonian requires some cautions and remarks .an important class of problems where is useful to extend the techniques discussed is the target problems where a trajectory is driven to arrive in a _ target set _ optimizing a cost functional . a easy way to modify our algorithm to this case is to change the construction procedure for and : :=\left\{\begin{array}{ll } \left[b(\alpha)\right]_i , & \hbox { if } x_i\notin { { \mathcal t } } , \\ \left[\mathbb{i}\right]_i , & \hbox { otherwise;}\end{array } \right . \ ; c'(\alpha)_i:=\left\{\begin{array}{ll } c(\alpha)_i , & \hbox { if } x_i\notin { { \mathcal t } } , \\ 0 , & \hbox { otherwise;}\end{array } \right.\ ] ] this , with the classical further construction of _ ghost nodes _ outside the domain to avoid the exit of the trajectories from , will solve this case . a question arises naturally in this modification : are the convergence results still valid ?the answer is not completely trivial because , for example , a monotone matrix modified as above is not automatically monotone ( the easiest counterexample is the identical matrix flipped vertically : it is monotone because invertible and equal to its inverse , but changing any row as in we get a non invertible matrix ) . to prove the convergence it is sufficient to start from the numerical scheme associated to such modified algorithm .it is quite direct to show verified the hypotheses ( h1-h4 ) getting as consequence the described properties of the algorithm .a well known benchmark in the field is the so - called zermelo s navigation problem , the main feature , in this case , is that the dynamic is driven by a force of comparable power with respect to our control .the target to reach will be a ball of radius equal to centred in the origin , the control is in .the other data are : ^ 2,\quad \lambda = 1 , \quad l(x , y , a)=1.\ ] ] .,title="fig:",height=170 ] .,title="fig:",height=170 ] in table [ tt:6 ] a comparison with the number of threads chosen is made .now we are in presence of characteristics not aligned with the grid , but the performances of the method are poorly effected .convergence is archived with performances comparable to the already described case of the eikonal equation .c|*2c|*4c dx=0.025 & & + threads & t. & it . & t. ( par .p. ) & it .( par . ) & t. ( it .p. ) & total t. + * 4 & & & 1.31 & 11 & 0.13 & 5.4 + * 9 & & & 0.7 & 9 & 0.7 & 4.2 + * 16 & 37.9 & 20 & 0.031 & 7 & 1.38 & + * 25 & & & 0.02 & 7 & 2.7 & 3.9 + * 36 & & & 0.01 & 8 & 5.19 & 5.28 + * * * * * dealing with an optimal problem with constraints using the bellman s approach , various techniques have been proposed . 
in this sectionwe will consider an implicit representation of the constraints through a level - set function .let us to consider the general single obstacle problem where the hamiltonian is of the form discussed in section [ s:1 ] and the standard hypothesis about regularity of the terms involved are verified .the distinctive trait of this formulation is about the term , assumed regular , typically stated as the opposite of the signed distance from the boudary of a subset .the solution of this problem is coincident , where defined , with the solution of the same problem in the space , explaining the name of `` obstacle problem '' ( cf . ) . through an approximation of the problem in a finite dimensional one , in a similar wayas already explained , is found the following variation of the howard s problem where the term is a sampling of the function on the knot of the discretization grid . it is direct to show that changing the definition of the matrix and , is possible to come back to the problem . adding an auxiliary control to the set and re - defying the matrices and as :=\left\{\begin{array}{ll } \left[b(\alpha)\right]_i , & \hbox { if } b(\alpha)v - c_g(\alpha)\geq v - w \\\left[\mathbb{i}\right]_i , & \hbox { otherwise;}\end{array } \right . \\c_g'(\alpha)_i:=\left\{\begin{array}{ll } c_g(\alpha)_i , & \hbox { if } b(\alpha)v - c_g(\alpha)\geq v - w \\w_i , & \hbox { otherwise;}\end{array } \right .\end{split}\ ] ] ( where the is the if is a matrix , and the element if is a vector , and is the identity matrix ) , the problem becomes which is in the form . even in this casethe verification of hypotheses ( h1-h4 ) by the numerical scheme associated to the transformation is sufficiently easy .it is in some cases also possible the direct verification of conditions of convergence in the obstacle problem deriving them from the free of constraints case .for example if we have that the matrix is strictly dominant ( i.e. for every , and there exists a such that for every , ) , then the properties of the terms are automatically verified , ( i.e. since all are strictly dominant and thus monotone ) .a classical problem of interest is the optimization of trajectories modelled by which produces a collection of curves in the plane with a constraint in the curvature of the path .typically this is a simplified model of a car of constant velocity with a control in the steering wheel .+ the value function of the exit problem from the domain , $ ] discretized uniformly in 8 points is presented in figure [ f : dub ] .it is natural to imagine the same problem with the presence of constraints .such problem can be handled with the technique described above producing the results shown in the same figure [ f : dub ] , where there are presented some optimal trajectories ( in the space ) for the exit from in presence of some constraints . from the picture it is possible to notice also the constraint about the minimal radius of curvature contained in the dynamics .the last , more complicated extension of the howard s problem is about max - min problems of the form ( 1,0)380 + pha ( maxmin case ) ( 1,0)380 initialize for all .+ k:=1 ; 1 .iterate _ ( parallel step ) _ for every do : + * find solution of .+ if and , then , and exit ( from inner loop ) .+ otherwise go to ( 1.ii ) .* .+ set and go to ( 1.i ) 2 .( sequential step ) _ for * find solution of .+ if and , then , and go to ( 3 ) .+ otherwise go to ( 2ii ) . * . 
+set and go to ( 2i ) 3 .compose the solution + k:=k+1 ; + if then _ exit _ , otherwise go to ( 1 ) .( 1,0)380 such a non linear equations arises in various contexts , for example in differential games and in robust control .the convergence of a parallel algorithm for the resolution of such problem is also discussed in .also in this case , a modified version of the policy iteration algorithm can be shown to be convergent ( cf .our aim in this subsection is to give some hints to build a parallel version of such procedure .let us introduce the function , for and defined by the problem , in analogy with the previous case , is equivalent to solve the following system of nonlinear equations the parallel version of the howard algorithm in the case of a maxmin problem is summarized in table [ mm ] .it is worth to notice that at every call of the function is necessary to solve a minimization problem over the set , this can be performed in an approximated way , using , for instance , the classical howard s algorithm .this gives to the dimension of this set a big relevance on the performances of our technique .for this reason , if the cardinality of ( in the case of finite sets ) is bigger than , it is worth to pass to the alternative problem ( here there are used the isaacs conditions ) before the resolution , inverting in this way , the role of and in the resolution .one of the most known example of max - min problem is the pursuit evasion game ; where two agents have the opposite goal to reduce / postpone the time of capture .the simplest situation is related to a dynamic where controls are taken in the unit ball and capture happens when the trajectory is driven to touch the small ball , ( , in this case ) .the passage to a target problem is managed as described previously . ,title="fig:",height=170 ] , title="fig:",height=170 ] in figure [ f : pe ] the approximated value function of that problem is shown .the main difficulty in the use of the howard s algorithm , i.e. the resolution of big linear systems can be overcome using parallel computing .this is important despite the fact that we must accept an important drawback : the double loop procedure ( or multi - loop procedure as sketched in remark [ multiloop ] ) does not permit to archive a superlinear convergence , as in the classical case ; we suspect ( as in figure [ f : in ] ) that such rate is preserved looking to the ( external ) iterative step , where we have to consider , anyway , that in every step of the algorithm a resolution of a reduced problem is needed .another point influential in the technique is the manner chosen to solve every linear problem which appears in the algorithm . in this paper , being not in our intentions to show a comparison with other competitor methods rather studying the properties of the algorithm in relation of the classical case , we preferred the simplicity , using a routine based on the exact inversion of the matrix . using of an iterative solver , with the due caution about the error introduced , better performances are expected ( cf . ) . through the paper we showed as some basic properties of the schemes used to discretized the problem bear to sufficient conditions for the convergence of the algorithm proposed, this choice was made to try to keep our analysis as general as possible . 
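the following is a hedged sketch of the nested structure used in the maxmin case: an inner (min-type) policy iteration playing the role of the evaluation of the function introduced above, and an outer (max-type) improvement on the first player's policy. it ignores the domain decomposition of the table above and assumes that the scheme is local, i.e. that row i of the matrix depends only on the controls chosen at node i; the callables B and c and all names are illustrative assumptions, not the authors' interface.

import numpy as np

def inner_howard(B, c, a, Bset, n, maxit=200):
    """Min-type policy iteration: solve min_b { B(a,b) v - c(a,b) } = 0
    for a fixed outer policy a (one control per node)."""
    b = np.full(n, Bset[0])
    v = np.zeros(n)
    for _ in range(maxit):
        v = np.linalg.solve(B(a, b), c(a, b))      # policy evaluation
        # improvement: node-wise minimizing control; locality of the scheme
        # lets each candidate be evaluated as a constant policy
        res = np.vstack([B(a, np.full(n, bb)) @ v - c(a, np.full(n, bb))
                         for bb in Bset])
        b_new = np.array([Bset[j] for j in np.argmin(res, axis=0)])
        if np.array_equal(b_new, b):
            break
        b = b_new
    return v, b

def maxmin_howard(B, c, Aset, Bset, n, maxit=200):
    """Outer max-type policy iteration calling the inner solver at each step."""
    a = np.full(n, Aset[0])
    v = np.zeros(n)
    for _ in range(maxit):
        v, _ = inner_howard(B, c, a, Bset, n)      # evaluate the min problem
        # improvement: node-wise argmax of min_b { B(a,b) v - c(a,b) }
        best = np.vstack([
            np.min(np.vstack([B(np.full(n, aa), np.full(n, bb)) @ v
                              - c(np.full(n, aa), np.full(n, bb))
                              for bb in Bset]), axis=0)
            for aa in Aset])
        a_new = np.array([Aset[j] for j in np.argmax(best, axis=0)])
        if np.array_equal(a_new, a):
            break
        a = a_new
    return v, a

as noted above, when the discretized set of minimizing controls is larger than that of the maximizing ones, it is convenient to exchange the roles of the two players through the isaacs condition before applying this scheme.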
a special treatment about the possibility of a domain decomposition in presence of non monotone schemes is possible , although not investigated here .this work was supported by the european union under the 7th framework programme fp7-people-2010-itn sadco , sensitivity analysis for deterministic controller design .+ the author thanks hasnaa zidani of the uma laboratory of ensta for the discussions and the support in developing the subject . 00 , _ an efficient policy iteration algorithm for dynamic programming equations _ , pamm 13 n.1 ( 2013 ) 467468 ., optimal control and viscosity solution of hamilton - jacobi - bellman equations .birkhauser , boston heidelberg , 1997 . , _ a bellman approach for two - domains optimal control problems in _ , esaim contrva . , 19 n. 3 ( 2013 ) 710739 . , _ a bellman approach for regional optimal control problems in _ , siam j. cont ., 52 no . 3 ( 2014 ) 17121744 . ,stochastic and differential games : theory and numerical methods , birkhuser , boston , 1999 . , _ flow invariance on stratified domains _ , set - valued var ., 21 ( 2013 ) 377403 ., dynamic programming , princeton university press , princeton , nj , 1957 . , _ some convergence results for howard s algorithm _ , siam j. numer ., 47 n. 4 ( 2009 ) 30013026 ., _ a domain decomposition method for bellman equations _ , cont ., 180 ( 1994 ) 477483 . ,_ systems of convex hamilton - jacobi equations with implicit obstacles and the obstacle problem _ , comm ., 8 ( 2009 ) 12911302 ., _ a discontinuous galerkin finite element method for directly solving the hamilton - jacobi equations _ , j. comput .phys . , 223 n. 1 ( 2007 ) 398415 ., two approximations of solutions of hamilton - jacobi equations , math . comp ., 43 n. 167 ( 1984 ) 119 ., partial differential equations : graduate studies in mathematics .american mathematical society 2 , 1998 ., _ semi - lagrangian approximation schemes for linear and hamilton - jacobi equations_ , applied mathematics series , siam , 2013 ., _ advances in parallel algorithms for the isaacs equation _ , in advances in dynamic games .birkhuser boston , 2005 .515 - 544 . , dynamic programming and markov processes , the mit press , cambridge , ma , 1960 . , _ on the convergence of policy iteration in stationary dynamic programming _ , math .res . , 4 no.1 ( 1979 ) 60 - 69 . , _ convergence analysis of some algorithms for solving nonsmooth equations _ ,res . , 18 ( 1993 ) 227244 . ,_ a nonsmooth version of newton s method _ , math ., 58 ( 1993 ) 353367 . , _ hamilton - jacobi - bellman equations on multi - domains _ , in : _ control and optimization with pde constraints _ , birkhauser basel , 164 ( 2013 ) 93116 ., _ convergence properties of policy iteration _ , siam j. contr . opt ., 42 n. 6 ( 2004 ) 2094 - 2115 . ,_ optimal control problems with state - space constraints _ , siam j. contr . opt ., 24 ( 1986 ) 552562 . , _approximation schemes for viscosity solutions of hamilton - jacobi equations _ , j. differ .equations 59 n. 1 ( 1985 ) 143 . , _domain decomposition algorithms for solving hamilton jacobi - bellman equations _ , num .analysis opt ., 14 ( 1993 ) 145166 ., _ on the convergence of policy iteration in stationary dynamic programming _ , math .res . , 4 n. 1 ( 1979 ) 6069 . , _ a new domain decomposition method for an hjb equation _ , j. comput .appl . math ., 159 n. 1 ( 2003 ) 195204 .
the classic howard's algorithm, a resolution technique for discrete hamilton-jacobi equations, is widely used in applications because of its high efficiency and good performance. a particularly beneficial characteristic of the method is its superlinear convergence which, in the presence of a finite number of controls, is reached in finite time. the performance of the method can be significantly improved by using parallel computing; how to build a parallel version of the method is not a trivial point: the difficulties come from the strict relation between the various values of the solution, even those related to distant points of the domain. in this contribution we propose a parallel version of howard's algorithm driven by an idea of domain decomposition. this permits us to derive some important properties and to prove convergence under quite standard assumptions. the good features of the algorithm are shown through some tests and examples. * keywords: * howard's algorithm (policy iterations), parallel computing, domain decomposition + * 2000 msc: * 49m15, 65y05, 65n55
since dorothy denning s seminal 1987 paper on intrusion detection , ml and data mining(dm ) have steadily gained attention in security applications .darpa s 1998 network intrusion detection evaluation , and kdd(conference on knowledge discovery and data mining ) cup s 1999 challenge have raised profile of ml in security contexts . yet ,constrained by hardware and system resources , large - scale ml applications did not receive much attention for many years . in 2008, acm conference on computer and communications security(ccs ) hosted the 1st artificial intelligence in security(aisec ) workshop , which has since been a dedicated venue at a top - level security conference for the intersection of ml and security . from 2008 ,the pace of research and publicity of ml in security started to accelerate in academic communities ( section 2.3 ) , and industry venues ( e.g. black hat , rsa ) also shifted interests .for instance , ml in security was still a topic of minority interest at black hat usa 2014 in august , but at rsa 2016 in february , the majority of vendors claimed to deploy ml in their products . a part of this shift may be motivated by the sudden increase in blackswan events like the discovery of crime , beast and heartbleed vulnerabilities .the discovery of these vulnerabilities suggest that organizations may be attacked via previously unknown classes of attacks . to defend against these types of attacksrequires monitoring not just for known vectors attacks , but also for behavior suggestive of a compromised machine .the latter requires the gathering and analysis of much larger sets of data .advances in hardware and data processing capacities enabled large - scale systems . with increasing amount of data from growing numbers of information channels and devices , the analytic tools and intelligent behaviors provided by mlbecomes increasingly important in security .with darpa s cyber grand challenge final contest looming , research interest in ml and security is becoming even more conspicuous .now is the crucial time to examine research works done in ml applications and security .to do so , we studied the state - of - art of ml research in security between 2008 and early 2016 , and systematize this research area in 3 ways : 1 .we survey cutting - edge research on applied ml in security , and provide a high - level overview taxonomy of ml paradigms and security domains . 2 .we point to research challenges that will improve , enhance , and expand our understanding , designs , and efficacy of applying ml in security .we emphasize a position which treats security as a game theory problem .while we realize there are different ways to classify existing security problems based on purpose , mechanism , targeted assets , and point of flow of the attack , our sok s section structure is based on the `` security and privacy '' category of 2012 acm computing classification system , which is a combination of specific use cases(e.g .malware , phishing ) , technique ( e.g. information flow ) , and targeted assets(e.g .web application , proxies ) .we present the state - of - art ml applications in security as the following : section 3 and table 2 & 3 discusses network security , section 4 and table 4 surveys security services , section 5 and table 5 specifies advances in software & applications security , section 6 and table 6 & 7 lays out taxonomy for system security , and section 7 and table 8 , 9 & 10 summarizes progress since 2008 in malware detection , ids , and social engineering . 
throughout the survey , we share our frameworks for ml system designs , assumptions , and algorithm deployments in security .we focus our survey on security _ applications _ and security - related ml and ai problems on the _ defense _ side , hence our scope excludes theories related to security such as differential privacy and privacy - preservation in ml algorithms , and excludes ml applications in side channel attacks such as . partly because there is already a 2013 sok on evolution of sybil defense in online social networks(osn ) , and partly because we would like to leave it as a small exercise to our readers , we excluded sybil defense schemes in osn as well . still with a broad base , we propose an alternative position to frame security issues , and we also recommend a taxonomy for ml applications in security use cases .yet , we do not conclude with a terminal list of `` right '' or `` correct '' approaches or methods .we believe that the range of the applications is too wide to fit into one singular use case or analysis framework .instead , we intend this paper as a systematic design and method overview of thinking about researching and developing ml algorithms and applications , that will guide researchers in their problem domains on an individual basis .we target our work to security researchers and practitioners , so we assume that our readers have general knowledge for key security domains and awareness of common ml algorithms , and we also define terms when needed .theory + + & 3.1/4/5/6.1/6.2/7.1/7.2/7.3&5/6.1/7.2&3.1/3.2/5/6.1/6.2/7.1/7.2&5/6.1 & + & 3.1/3.2/4/5/6.1/6.2/7.1/7.2/7.3&7.1/7.3&3.1/5&&6.2 + & & & 7.1&&6.2/7.2 + + & 6.1/7.2&7.2&6.2 & & + & 3.1/3.2/5/7.2/7.3&5/7.3&3.1/3.2/6.1/7.1&&6.2 + & 5&&5&5/6.1 & + & 3.1/4/7.2/7.3&7.2&4/7.2&&7.2 + & 6.1/6.2/7.1&6.1/7.1&6.2/7.1 & & + + & 4/5/6.1/7.2&&4/6.2/7.2 & & + & 3.1/6.2/7.3&7.2&3.1&&6.2/7.2 + & 3.2/5/6.1/6.2/7.1/7.2/7.3&5/6.1/7.1/7.2/7.3&3.2/5/6.1/7.1/7.2&5/6.1 & + we agree with assessment of top conferences in .we systematically went through all proceedings between 2008 and early 2016 of the top 6 network- and computer - security conferences to collect relevant papers .because of kdd s early and consistent publication record on ml applications in security , and its status as a top - level venue for ml and dm applications , we also include kdd s 2008 - 2015 proceedings .to demonstrate the wide - ranging research attention drawn to ml applications in security , we also added chosen selections from the workshop aisec , international conference on machine learning(icml ) , neural information processing systems(nips ) , and internet measurement conference(imc ) papers between 2008 - 2015 , mostly in the `` future development '' section .figure 1 shows the generalization of ml system designs when applied in security , that emerged from our survey of the papers(the legend is on the figure s bottom left ) . in different use cases, the system components may embody different names , but their functionalities and positions are captured in the figure .for example : 1 .* _ knowledge base _ * is baseline of known normality and/or abnormality , depending on use cases , they include but are not limited to blacklist(bl ) , whitelist(wl ) , watchlist ; known malware signatures , system traces , and their families ; initial set of malicious web pages ; existing security policies or rules , etc .. 2 . * _ data sources _ * are where relevant data is collected. they can be either off - line or live online data feed , e.g. 
malware traces collected after execution(off - line ) , url stream(online ) .* _ training data _ * are labeled data which are fed to classifiers in training .they can be standard research datasets , new data(mostly from industry ) labeled by human , synthetic datasets , or a mix ._ pre - processor and feature extractor _ * construct features from data sources , for example : url aggregators , graph representations , smtp header extractions , n - gram model builders .dynamic analyzer and static analyzer are used most often in malware - related ml tasks , and human feedback loop is added when the system s design intends to be semi - supervised or human - in - the - loop(hitl ) . theory + + & 58(49%)&7(5.9%)&24(20%)&2(1.7%)&0(0% ) + & 18(15%)&4(3.4%)&3(2.5%)&0(0%)&1(0.85% ) + & 0(0%)&0(0%)&0(0%)&0(0%)&2(1.7% ) + + & 4(3.4%)&1(0.85%)&1(0.85%)&0(0%)&0(0% ) + & 17(14.4%)&4(3.4%)&11(9.3%)&0(0%)&1(0.85% ) + & 4(3.4%)&0(0%)&1(0.85%)&2(1.7%)&0(0% ) + & 31(26%)&2(1.7%)&9(7.6%)&0(0%)&2(1.7% ) + & 20(17%)&4(3.4%)&5(4.2%)&0(0%)&0(0% ) + + & 16(13.6%)&0(0%)&7(6%)&0(0%)&0(0% ) + & 9(7.6%)&1(0.85%)&5(4.2%)&0(0%)&3(2.5% ) + & 51(43.2%)&10(8.5%)&15(12.7%)&2(1.7%)&0(0% ) + table 1 shows a matrix with rows indicating different ways of classifying the security problems , and the columns showing well - understood ml paradigms .based on the threat models and modeling purposes presented in the papers , we qualitatively group the attacker into three groups .if there are multiple attacker types in one section , the section s numbering appears multiple times accordingly . 1 .* _ passive _ * attackers make no attempt to evade detections ; their behaviors fit into descriptions of the threat models ._ semi - aggressive _ * attackers have knowledge of the detectors , and only attempt to evade detections .* _ active _ * attackers do not only have knowledge of the detectors and attempt to evade detections , but also actively try to poison , mislead , or thwart detection ._ knowledge _ * of attackers , is the information in at least one of the five aspects : the learning algorithms themselves , the algorithms feature spaces , the algorithm s parameters , training and evaluation data - regardless of being labeled or not - used by the algorithms , and decision feedback given by the algorithms .influenced by , we extend their definitions , and qualitatively categorize attackers primary purpose as to compromise _ confidentiality , availability _ or _ integrity _ of legitimate systems , services , and users . 1. attacks on * _ confidentiality _ * compromise the confidential or secret information of systems , services , or users ( e.g. password crackers ) .2 . attacks on * _ availability _ * make systems and services unusable with unwanted information , requests , or many errors in defense schemes ( e.g. ddos , spam ) .3 . attacks on * _ integrity _ * masquerade maliciously intentions as benign intentions in systems , services , and users ( e.g. 
malware ) .we also define ml paradigms shown in the matrix : 1 .* _ supervised _ * learning uses labeled data for training ._ semi - supervised _ * learning uses both labeled and unlabeled data for training ._ unsupervised _ * learning has no labeled data available for training ._ human - in - the - loop(hitl ) _ * learning incorporates active human feedback to algorithm s decisions into the knowledge base and/or algorithms ._ game theory(gt)_*-based learning considers learning as a series of strategic interactions between the model learner and actors with conflicting goals .the actors can be data generators , feature generators , chaotic human actors , or a combination . for `` means of attacks '' in table 1 , server , network , and userare straightforward and intuitive , so here we only describe `` client app '' and `` client machine '' . * _ client app _ * is any browser - based means of attack on any client device , and * _ client machine _ * is any non - browser - based means of attack on any client device .as shown in table 1 , the majority of surveyed papers in different security domains use supervised learning to deal with passive or semi - aggressive attackers .however , the core requirement of supervised learning - labeled data - is not always viable or easy to obtain , and authors have repeatedly written about the difficulty of obtaining labeled data for training .based on this observation , we conclude that _ exploring semi - supervised and unsupervised learning approaches would expand the research foundation of ml applications in security domains , because semi - supervised and unsupervised learning can utilize unlabeled datasets which had not been used by supervised learning approaches before ._ moreover , during our survey , we realized that many ml applications in security assume that training and testing data come from the same distribution ( in statistical terms , this is the assumption of stationarity ) .however , in the real world , it is highly unlikely that data are stationary , let alone that the data could very well be generated by an adversarial data generator producing training and/or testing data sets , as the case in , or simply be generated responding to specific models as in .our observation from the comprehensive survey confirmed s statement , and we propose that _ gt - based learning approaches and hitl learning system designs should be explored more , in order to design more efficient security defense mechanisms to deal with active and unpredictable adversaries .at the same time , human knowledge and judgment in htil should go beyond feature engineering , to providing feedback to decisions made by ml models_. some theory - leaning papers have modeled spam filtering as bayesian games or stackelberg games .use cases in data sampling , model training with poisoned or low - confidence data have also been briefly explored in literature . based on seminal works and establishments in notable venues , the gradually increasing levels of interest in ml research applied to security is fairly visible .here we gathered some milestone events : 1 .1987 : denning published `` an intrusion detection system '' , first framing security as a learning problem 2 .1998 : darpa ids design challenge 3 . 1999: kdd cup ids design challenge 4 .2008 : ccs hosted the 1st aisec workshop . continues to operate each year 5 .2007 , 2008 : twice , kdd hosted the international workshop on privacy , security , and trust(pinkdd) 6 . 
2010 , 2012: twice , kdd hosted intelligence and security informatics workshop(isi) 7 .2011 : `` adversarial machine learning '' published in 4th aisec 8 .2012 : `` privacy and cybersecurity : the next 100 years '' by landwehr et al published 9 .2013 : manifesto from dagstuhl perspectives workshop published as `` machine learning methods for computer security '' by joseph et al . 10 .2014 : kdd hosted its 1st `` security & privacy '' session in the main conference program 11 . 2014 : icml hosted its 1st , and so far the only workshop on learning , security , and privacy(lsp) 12 . 2016 : aaai hosted its 1st artificial intelligence for cyber security workshop(aisc) despite the surge of research interests and industry applications in the intersection of ml and security , few surveys or overviews were published after 2008 , the watershed year of increasing interest in this particular domain . in 2013 server - side web application security , surveyed data mining applied to security in the cloud focusing on intrusion detection , discussed an ml perspective in network anomaly detection .while they are helpful and informative , the former two are limited by their scope and perspective , and the latter serves as a textbook , hence absent the quintessential of survey - mapping the progresses and charting the state - of - art .a collection of papers in 2002 and 2012 discussed applications of dm in computer security , but lacks a systematic survey on ml applications in resolving security issues . briefly compared two network anomaly detection techniques , but limited in scope . of 2009 conducted a comprehensive survey in anomaly detection techniques , some involving discussions of security domains . the dagstuhl manifesto in 2013 articulated the status quo and looked to the future of ml in security , butthe majority of the literature listed were published before 2008 . of 2010 highlighted use cases and challenges for ml in network intrusion detection , but did not incorporate a high - level review of ml in security in recent years .research works on botnets among our surveyed literature focuses mainly on designing systems to detect command - and - control(c&c ) botnets , where many bot - infected machines are controlled and coordinated by few entities to carry out malicious activities .those systems need to learn decision boundaries between human and bot activities , therefore ml - based classifiers are at the core of those systems , and are often trained by labeled data in supervised learning environments . the most popular classifier is support vector machines(svms ) with different kernels , while spatial - temporal time series analysis and probabilistic inferences are also notable techniques employed in ml - based classifiers .topic clustering , mostly seen in natural language processing(nlp ) , is used to build a large - scale system to identify bot queries . in botnet detection literature ,3 core assumptions are widely shared : 1 .botnet protocols are mostly c&c 2. individual bots within same botnets behave similarly and can be correlated to each other 3 .botnet behaviors are different and distinguishable from legitimate human user , e.g. human behaviors are more complex other stronger assumptions include that bots and humans interact with different server groups , and content features from messages generated by bots and human are independent . 
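to make the classification step concrete, the sketch below trains an rbf-kernel svm, the classifier most frequently reported in the surveyed botnet work, on labelled per-host behavioural features. the three features, the synthetic data, and the use of scikit-learn are assumptions made purely for illustration; they do not reconstruct any particular surveyed system.

\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Hypothetical per-host feature matrix: each row is one host, columns are
# behavioural statistics (e.g. mean query rate, std of inter-arrival times,
# fraction of failed lookups). Labels: 1 = bot, 0 = human.
rng = np.random.default_rng(0)
X_human = rng.normal(loc=[2.0, 5.0, 0.05], scale=[1.0, 2.0, 0.05], size=(500, 3))
X_bot   = rng.normal(loc=[8.0, 0.5, 0.30], scale=[1.5, 0.3, 0.10], size=(500, 3))
X = np.vstack([X_human, X_bot])
y = np.r_[np.zeros(500), np.ones(500)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM trained on labelled data, as in the supervised setting above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
\end{verbatim}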
while classification techniques differ , wls , bls , hypothesis testing , and a classifier are usual system components .attempts have been made to abstract state machine models of network to simulate real - world network traffic and create honeypots .ground truths are often heuristic , labeled by human experts , or a combination - even at large scale , human labeled ground truths are used , for example in , game masters visual inspections serve as ground truth to detect bots in online games .in retrospect , the evolution of botnet detection is clear : from earlier and more straightforward uses of classification techniques such as clustering and nb , the research focus has expanded from the last step of classification , to the important preceding step of constructing suitable metrics , that measures and distinguishes bot - based and human - based activities . classifying dns domains that distribute or host malware , scams , and malicious contenthas drawn research interest especially in passive dns analysis .there are two main approaches : reputation system and classifier .reputation system scores benign and malicious domains and dns hosts , and a ml - based classifier learns boundaries between the two .nonetheless , both reputation system and classifier use various decision trees , random forest(rf ) , nave bayes(nb ) , svm , and clustering techniques for mostly supervised learning - based scoring and classification .many features used are from protocols and network infrastructures , e.g. border gateway protocol(bgp ) and updates , automated systems(as ) , registration , zone , hosts , and public bls .similar to botnet detectors , variations of bl , wl , and honeypots are used in similar functions as knowledge bases , while ground truths are often taken from public bls , limited wls , and anti - virus(av ) vendors such as mcafee and norton .but before any ml attempts take place , most studies would assume the following : 1 . malicious uses of dns are distinct and distinguishable from legitimate dns services .2 . the data collection process - regardless of different names such as data flow , traffic recorder , or packet assembler - follows a centralized model . in other words ,all the traffic / data / packets flow through certain central node or nodes to be collected .stronger assumptions include that as hijackers can not manipulate as path before it reaches them , and maliciousness will trigger an accurate ip address classifier to fail .besides analyzing the status quo , showed efforts to preemptively protect network measurement integrity and predict potentially malicious activities from web domains and ip address spaces .both offense and defense for access control , authentication , and authorization reside within the domain of security services . defeating audio and visual captchas(completely automated public turing test to tell computers and humans apart) , cracking passwords , measuring password strengths , and uncovering anonymity are 4 major use cases . on the offense , specialized ml domains such as computer vision , signal processing , and nlp automate attacks on user authentication services i.e. textual or visual passwords and captchas , and uncover hidden identities and services . on the defense side ,entropy - based and ml - based systems calculate password strengths .other than traditional user authentication schemes , behavioral metrics of users are also introduced . 
following the generalized ml pipeline shown in figure 1 , the `` classifier '' is replaced by `` recognition engine '' in the password cracking process , and `` user differentiation engine '' in authentic metric engineering .hence the process becomes : `` data source pre - process & feature extraction recognition or user differentiation engine decision '' for ml - based security services . a noteworthy trend to observe ,is that attacks on captchas are getting more generalized - from utilizing svm in 2008 to attack a specific type of text captcha , in 2015 a generic attach approach to attack text - based captcha was proposed .ml - based attacks on textual and visual captcha typically follow the 4-step process : 1 ._ segmentation _ : e.g. signal to noise ratio(snr ) for audio ; hue , color , value(hsv ) for visual 2 ._ signal or image representation _ : e.g. discrete fourier transformation(audio) , letter binarization(visual ) 3 . _ feature extraction _ : e.g. spectro - temporal features , character strokes 4 ._ recognition _ : k - nearest neighbor(knn ) , svm(rbf kernel ) , convolutional neural networks(cnn ) on the side of password - related topics in security services , there are 2 password models : whole - string markov models , and template - based models . concepts in statistical language modeling , such as natural language encoder and n - grams associated with markov models(presented as directed graphs with nodes labeled by n - grams ) , and context - free grammars are common probabilistic foundations to build password strength meters and password crackers .ml research in software and applications security mostly concentrate on web application security in our survey , and have used supervised learning to train popular classifiers such as nb and svm to detect web - based malware and javascript(js ) code , filter unwanted resources and requests such as malicious advertisements , predict unwanted resources and requests(e.g .future blacklisted websites) , and quantify web application vulnerabilities . while explored building web application anomaly detector with scarce training data , most use cases follow the supervised paradigm assuming plentiful labeled data : data source(web applications , static / dynamic analyzers ) feature extraction(often with specific pre - filter , metrics , and de - obfuscator if needed ) classifiers trained with labeled data . apart from this supervised setting , if a human expert s feedback is added after classifiers decisions , it forms a semi - supervised system .regardless of system designs , the usual assumption holds : _ malicious activities or actors are different from normal and benign ones likely do not change much_. the knowledge bases of normality and abnormality can vary , from historical regular expression lists to other publicly available detectors .graph - based algorithms and image recognition are both used in resource filtering , but in detecting js malware and evasions and quantifying leaks , having suitable measurements of similarities is a significant focal point . indeed , from , ml - based classifiers do well in finding similarities between mutated malicious code snippets , while the same code pieces could evade static or dynamic analyzer detections .as landwehr noticed , ml can be applied in spm . 
however , in automatic fingerprinting of operating systems(os ) , c4.5 decision tree , svm , rf , knn - some most commonly used ml - based classifiers in security - failed to distinguish remote machine instances with coarse- and fine - grained differences , as the algorithms can not exploit semantic knowledge of protocols or send multi - packet probes . yet by taking advantage of semantic and syntactic features , plus semi - supervised system design , showed that svm(optimized by sequential minimal optimization[smo ] algorithm ) , knn , and nlp techniques do well in android spm . on the other hand , in vulnerability management , ,clustering techniques have done well in predicting future incidents and infer vulnerability patterns in code , as well as nb , svm , and rf in ranking risks and identifying proper permission levels .both vulnerability management and spm also focus on devising proper metrics for ml applications : from heuristics based on training set , jaro distance , to outside reputation system oracles , metrics are needed to compare dependency graphs , string similarities , and inferred vulnerability patterns . in most use cases , because of the need for labeled data to train supervised learning systems , many systems follow the generalized training process in figure 1 : `` knowledge base offline trainer online or offline classifier '' . when policy management decisions need feedback , a hitl design is in place where end human users feedback is directed to knowledge base .one distinguishing tradition in ml applications research in this domain , is a strong emphasis on measurement - selecting or engineering proper similarity or scoring metrics are often important points of discussion in research literature . from earlier uses of heuristics in clustering algorithms , to more recent semantic connectivity measurement applied in semi - supervised systems , both the metrics and the system designs for vulnerability and security policy management have evolved to not only identify , but also to infer and predict future vulnerable instances .compared to other security domains , ml research in information flow and ddos focus more on evasion tactics and limits of ml systems in adversarial environments .hence we grouped together the two sub - domains , and marked studies in table 7 with `` ( if ) '' and `` ( ddos ) '' accordingly .for ddos , the usual assumption is that _ patterns of attack and abuse traffic are different from normal traffic _ , but challenged it by proposing an adversary who can generate attributes that look as plausible as actual attributes in benign patterns , and caused failure in ml - based automated signature generation to distinguish benign and malicious byte sequences .then , introduced gt to evaluate ddos attack and defense in real - world . for information flow, assumptions can take various forms . in pdf classifiers based on document structural features , it is _ malicious pdf has different document structures than good pdfs _ ; in android privacy leak detector , it is _ _ the majority of an android application s semantically similar peers has similar privacy disclosure scenarios__ . 
but poses semi - aggressive and active attackers with some information about the data , feature sets , and/or algorithms , and then attackers successfully evade ml - based pdf classifiers .another example is , pdf malware could be classified , and then a generic and automated evasion technique based on genetic programming is successfully experimented .overall , while using svm , rf , and decision trees trained with labeled data to detect and predict ddos and malicious information and data flows , ml applications in information flow and ddos challenge the usual assumption of stationary adversary behaviors . from collecting local information only , to proposing a general game theory - based framework to evaluate ddos attacks and defense , and from using static method to detect malicious pdf file to generic automated evasion , the scope of ml applications in both ddos and if have expanded and generalized over the years .program - centric or system - centric , there are 3 areas that draw most ml application research attention in malware : malware detection , classifying unknown malware into families , and auto - extract program or protocol specifications . realizing the signature and heuristic - based malware detectors can be evaded by obfuscation and polymorphism , more behavior - basedmatching and clustering systems and algorithms have been researched .figure 1 already shows a generalized ml system design for malware detection and classification , and a more detailed description is below : 1 .collect malware artifacts and samples , analyze them , execute them in a controlled virtual environment to collect traces , system calls , api calls , etc . . or ,directly use information from already completed static and/or dynamic analyses .2 . decide or devise similarity measurements between generalized binaries , system call graphs(scg ) , function call graphs(fcg ) , etc ., then extract features 3 .classify malware artifacts into families in - sample , or cluster them with known malware families .the classifiers and clustering engines are usually trained with labeled data .popular ones are svm and rf for classification , and hidden markov model(hmm ) and knn alongside different clustering techniques . even in the use case of auto - extract specifications , supervised learning with labeled data is needed when behavior profiles , state machine inferences , fuzzing , and message clustering are present .evasion techniques of detectors and poisoning of ml algorithms are also discussed , and typical evasion techniques include obfuscation , polymorphism , mimicry , and reflecting set generation .malware detection and matching based on structural information and behavior profiles show a tendency to use graph - based clustering and detection algorithms , and similarity measurement used in these algorithms have ranged from jaccard distance to new graph - based matching metrics . while clustering techniqueshave been mostly used in malware detection , a nearest neighbor technique is explored to evade malware detection .spams , malicious webpages and urls that redirect or mislead un - suspecting users to malware , scams , or adult content is perhaps as old as civilian use of the internet .research literature mostly focus on 3 major areas : detecting phishing malicious urls , filtering spam or fraudulent content , and detecting malicious user account behaviors . 
moreover , because phishing is a classic social engineering tactic , it is often the gateway of many studies to detect malicious urls , spam , and fraudulent content . to identify malicious urls ,ml - based classifiers draw features from webpage content(lexical , visual , etc . ) , url lexical features , redirect paths , host - based features , or some combinations of them .such classifiers usually act in conjunction with knowledge bases which are usually in - browser url bls or from web service providers .if the classifier is fed with url - based features , it is common to set an url aggregator as a pre - processor before extracting features . mostly using supervised learning paradigm , nb , svm with different kernels , and lr are popular ml classifiers for filtering spam and phishing .meanwhile , gt - based learning to deal with active attackers is also evaluated in spam filtering . evaluates a bayesian game model where the defense is not fully informed of the attacker s objectives and the active adversary can exercise control over data generation , proposes a stackelberg game where spammer reacts to the learner s moves .stronger assumptions also exist : for example , assumes spammers phone blocks follow a beta distribution as conjugate prior for bernoulli and binomial distribution .another social engineering tactic is spoofing identities with fake or compromised user accounts , and detection of such malicious behaviors utilize features from user profiles , spatial- , temporal- , and spatial - temporal patterns , and user profiles are used in particular to construct normality . graph representation and trust propagation models are also deployed to distinguish genuine and malicious accounts with different behavior and representations .tracing the chronology of applying ml to defend against social engineering , one trend is clear : while content- , lexical- , and syntactic - based features are still being widely used , constructing graph representations and exploring temporal patterns of redirect paths , events , accounts , and behaviors have been on the rise as feature spaces for ml applications in defend against social engineering efforts .accordingly , the ml techniques have also changed from different classification schemes to graphic models .it is also noteworthy that in , addressing adversarial environments challenges to ml systems is elaborated as primary research areas , instead of a short discussion point . from feature sets to algorithms and systems ,ids has been extensively studied . however , as cautioned , ml can be easily conflated with anomaly detection . while both are applied to build ids, important difference is that ml aims to generalize expert - defined distinctions , but anomaly detection focuses on finding unusual patterns , while attacks are not necessarily anomalous .for example , distinguished n - gram model s different use cases : anomaly detection uses it to construct normality(hence more appropriate when no attack is available for learning ) , and ml classifiers learn to discriminate between benign and malicious n - grams(hence more appropriate when more labeled data is present ) .since 2008 , works at top venues have added to the rigor for ml applications in ids .for example , a common assumption of ids is : _ anomalous or malicious behaviors or traffic flows are fundamentally different from normal ones _ , but challenges the assumption by studying low - cardinality intrusions where attackers do nt send a large number of probes . 
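to illustrate the anomaly-detection use of n-grams mentioned above (constructing normality from benign traffic only, with no attack samples needed for learning), the sketch below builds a profile of byte n-grams observed in benign payloads and scores new payloads by the fraction of previously unseen n-grams. the choice n = 3, the toy payloads, and the unseen-fraction score are illustrative assumptions, not a surveyed detector.

\begin{verbatim}
from collections import Counter

def ngrams(payload: bytes, n: int = 3):
    return (payload[i:i + n] for i in range(len(payload) - n + 1))

def build_profile(benign_payloads, n: int = 3) -> Counter:
    """Normality profile: frequencies of byte n-grams seen in benign traffic only."""
    profile = Counter()
    for p in benign_payloads:
        profile.update(ngrams(p, n))
    return profile

def anomaly_score(payload: bytes, profile: Counter, n: int = 3) -> float:
    """Fraction of n-grams in the payload never observed in the benign profile."""
    grams = list(ngrams(payload, n))
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / len(grams)

# Toy usage: profile benign HTTP-like payloads, then score a suspicious one.
benign = [b"GET /index.html HTTP/1.1", b"GET /images/logo.png HTTP/1.1"]
profile = build_profile(benign)
print(anomaly_score(b"GET /index.htm HTTP/1.1", profile))               # low score
print(anomaly_score(b"\x90\x90\x90\x90/bin/sh\x00 exploit", profile))   # high score
\end{verbatim}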
to address adversarial learning environment and minimal labels in training data , semi - supervised paradigms , especially active learning , are also used .heterogeneous designs of ids in different use cases give rise to many ad - hoc evaluations in research works , and a reproducibility and comparison framework was proposed to address the issue .meanwhile , techniques such as graph - based community detection , time series - based methods , and generalized support vector data description in cyber - physical system and adversarial environment for auto - feature selection , have also emerged .although they carry different assumptions of normality and feature representations , the supervised ml system design remains largely the same . besides the fact the more techniques and use caseshave been proposed , the focus of research in ids had evolved from discovering new techniques and use cases , to rigorously evaluating fundamental assumptions and workflows of ids .for example , while feature selection has stayed as a major component , there are re - examination of assumptions and measurements on what constitutes normality and abnormality , alternative to more easily acquire data and use low - confidence data for ml systems , and proposal on validating reproducibility of results from different settings .one key goal of our sok survey is to help researchers look into the future .ml applications in security domains are attracting academic research attention as well as industrial interest , and this presents a valuable opportunity for researchers to navigate the landscapes between ml theories and security applications .there are also opportunities to explore if there are some types of ml paradigms that are especially well suited to particular security problems .apart from highlighting that 1 ) semi - supervised and unsupervised ml paradigms are more effective in utilizing unlabeled data , hence ease the difficulty of obtaining labeled data , and 2 ) gt - based ml paradigms and hitl ml system designs will become more influential in dealing with semi - aggressive and aggressive attackers , we also share the following seven speculations of future trends , based on our current sok . 1 .* metric learning * : measurement has become more and more conspicuous for ml research in security , mostly in similarity measurement for clustering algorithms .proper measurements and metrics are also used to construct ground truths to evaluate ml - based classifiers , and also have important roles in feature engineering . given the ubiquitous presence of metrics and the complex nature of constructing them , ml applications in security will benefit much from metric learning .* nlp * : malicious content , spam , and malware analysis and detections have used tools from statistical language modeling(e.g .n - gram - based representation for strings in code and http request) , as textual information explodes , nlp will become more widely used beyond source filtering and clustering e.g. use n - gram models to infer state machines of protocols .* upstream movement of ml in security defense designs*. 
in malware detection and classifications , behavior- and signature - based malware classifiers have used inputs from static and dynamic binary analysis as features , and already shows rnn can be applied to automatically recognize functions in binary analysis .we also see ml algorithms applied in vulnerability , device , and security policy management , ddos mitigation , information flow quantifications , and network infrastructure .hence , it is reasonable to expect that more ml systems and algorithms will move upstream in more security domains .* scalability * : with increasing amount of data from growing numbers of information channels and devices , scale of ml - based security defenses will become a more important aspect in researching ml applications in security . as a result , large - scale systems will enable * distributed graph algorithms * in malware analysis , as path hijacker tracing , cyber - physical system fault correlation , etc .. 5 .* specialized probabilistic models * will be applied beyond the context of classifiers , e.g. access control .high fp rates have always been a concern for system architects and algorithm researchers . * reducing fp rates * will grow from an ad - hoc component in various system designs , to independent formal frameworks , algorithms , and system designs .* privacy enforcement * was framed as a learning problem recently in , in the light of many publications on privacy - preservation in ml algorithms , and privacy enhancement by probabilistic models .this new trend will become more prominent .in this paper , we analyzed ml applications in security domains by surveying literature from top venues of our field between 2008 and early 2016 .we attempted to bring clarity to a complex field with intersecting expertises by identifying common use cases , generalized system designs , common assumptions , metrics or features , and ml algorithms applied in different security domains .we constructed a matrix showing the intersections of ml paradigms and three different taxonomy structures to classify security domains , and show that while much research has been done , explorations in gt - based ml paradigms and hitl ml system designs are still much desired ( and under - utilized ) in the context of active attackers .we point out 7 promising areas of research based on our observations , and argue that while ml applications can be powerful in security domains , it is critical to match the ml system designs with the underlying constraints of the security applications appropriately .we would like to thank megan yahya , krishnaprasad vikram , and scott algatt for their time and valuable feedback . c. landwehr , d. boneh , j. c. mitchell , s. m. bellovin , s. landau , and m. e. lesk , `` privacy and cybersecurity : the next 100 years , '' _ proceedings of the ieee _special centennial issue , 2012 .t. ahmed , b. oreshkin , and m. coates , `` machine learning approaches to network anomaly detection , '' in _ proceedings of the 2nd usenix workshop on tackling computer systems problems with machine learning techniques _ , 2007 .p. g. kelley , s. komanduri , m. l. mazurek , r. shay , t. vidas , l. bauer , n. christin , l. f. cranor , and j. lopez , `` guess again ( and again and again ) : measuring password strength by simulating password - cracking algorithms , '' in _sp 2012_. r. wang , w. enck , d. reeves , x. zhang , p. ning , d. xu , w. zhou , and a. m. 
azab , `` easeandroid : automatic policy analysis and refinement for security enhanced android via large - scale semi - supervised learning , '' in _ usenix security 2015_. k. lu , z. li , v. p. kemerlis , z. wu , l. lu , c. zheng , z. qian , w. lee , and g. jiang , `` checking more and alerting less : detecting privacy leakages via enhanced data - flow analysis and peer voting . '' in _ ndss 2015_.
the idea of applying machine learning (ml) to solve problems in security domains is almost three decades old. as information and communications grow more ubiquitous and more data become available, many security risks arise, along with the appetite to manage and mitigate such risks. consequently, research on applying and designing ml algorithms and systems for security has grown rapidly, ranging from intrusion detection systems (ids) and malware classification to security policy management (spm) and information leak checking. in this paper, we systematically study the methods, algorithms, and system designs in academic publications from 2008-2015 that applied ml in security domains. 98% of the surveyed papers appeared in the 6 highest-ranked academic security conferences and 1 conference known for pioneering ml applications in security. we examine the generalized system designs, underlying assumptions, measurements, and use cases in active research. our examination leads to 1) a taxonomy of ml paradigms and security domains for future exploration and exploitation, and 2) an agenda detailing open and upcoming challenges. based on our survey, we also suggest a point of view that treats security as a game theory problem instead of a batch-trained ml problem. * keywords *: security, machine learning, large-scale applications, game theory, security policy management
a series of revolutionary technological advances , optical transmission systems have enabled the growth of internet traffic for decades .most of the huge bandwidth of fiber systems is in use and the capacity of the optical core network can not keep up with the traffic growth .the usable bandwidth of an optical communication system with legacy standard single - mode fiber ( smf ) is effectively limited by the loss profile of the fiber and the erbium - doped fiber amplifiers ( edfas ) placed between every span .it is thus of high practical importance to increase the spectral efficiency ( se ) in optical fiber systems .even with new fibers , the transceiver will eventually become a limiting factor in the pursuit of higher se because the practically achievable signal - to - noise ratio ( snr ) can be limited by transceiver electronics .digital signal processing ( dsp ) techniques that are robust against fiber nonlinearities and also offer sensitivity and se improvements in the linear transmission regime are thus of great interest . a technique that fulfills these requirements and that has been very popular in recent years is _ signal shaping_. there are two types of shaping : geometric and probabilistic . in geometric shaping ,a nonuniformly spaced constellation with equiprobable symbols is used , whereas in probabilistic shaping , the constellation is on a uniform grid with differing probabilities per constellation point .both techniques offer an snr gain up to the ultimate shaping gain of 1.53 db for the additive white gaussian noise ( awgn ) channel ( * ? ? ?iv - b ) , ( * ? ? ?viii - a ) .geometric shaping has been used in fiber optics to demonstrate increased se .probabilistic shaping has attracted considerable attention in fiber optics . in particular , use the probabilistic amplitude - shaping scheme of that allows forward - error correction ( fec ) to be separated almost entirely from shaping by concatenating a distribution matcher and an off - the - shelf systematic fec encoder .probabilistic shaping offers several advantages over geometric shaping .using the scheme in , the labeling of the quadrature amplitude modulation ( qam ) symbols can remain an off - the - shelf binary reflected gray code , which gives large achievable information rates ( airs ) for bit - wise decoders and makes exhaustive numerical searching for an optimal labeling obsolete .a further feature of probabilistic shaping that , for fiber - optics , has only been considered in is that it can yield rate adaptivity , i.e. , the overall coding overhead can be changed without modifying the actual fec .probabilistic shaping also gives larger shaping gains than purely geometric shaping ( * ? ? ?4.8 ( bottom ) ) for a constellation with a fixed number of points . given these advantages ,we restrict our analysis in this work to probabilistic shaping on a symbol - by - symbol basis . shaping over several time slotshas been studied theoretically and is beyond the scope of the present study . in this paper, we extend our previous work on probabilistic shaping for optical back - to - back systems and investigate the impact of shaping for qam formats on the nonlinear interference ( nli ) of an optical fiber channel with wavelength division multiplexing ( wdm ) . 
for the analysis , we use a recently developed modulation - dependent gaussian noise ( gn ) model in addition to full - field split - step fourier method ( ssfm ) simulations .this gn model includes the impact of the channel input on the nli by taking into account higher - order standardized moments of the modulation , which allows us to study the impact of probabilistic shaping on the nli from a theoretical point of view .the contributions of this paper are twofold .firstly , we show that one shaped qam input , optimized for the awgn channel , gives large shaping gains also for a multi - span fiber system .this allows potentially for a simplified implementation of probabilistic shaping because just one input pmf can be used for different fiber parameters .secondly , no significant additional shaping gain is obtained for such a multi - span system with 64qam when the pmf is optimized to the optical fiber channel using a gn model .the relevance of this result is that numerical optimizations of the channel input pmf are shown to be obsolete for many practical long - haul fiber systems .in the following , we review the basic principles of probabilistic shaping . the focus is on airs rather than bit - error ratios after fec .both symbol - wise airs and airs for bit - wise decoding are discussed . for a more detailed comparison ,we refer the reader to ( * ? ? ?iii ) , ( * ? ? ?4),, .consider an independent and identically distributed ( iid ) discrete channel input and the corresponding continuous outputs .the channel is described by the channel transition probability density , as shown in the center of fig .[ fig : model ] .the symbol - wise inputs are complex qam symbols that take on values in according to the probability mass function ( pmf ) on . without loss of generality, the channel input is normalized to unit energy , i.e. , =1 ] denotes expectation and is the marginal distribution of .the mi in is an air for a decoder that uses soft metrics based on .since the optical channel is not known in closed form , we can not directly evaluate . a technique called mismatched decoding used in this paper , which gives an air for a decoder that operates with the auxiliary channel instead of the true . in this paperwe consider memoryless auxiliary channels of the form which means that , in the context of fiber - optics , all correlations over polarization and time are neglected at the decoder .we assume a fixed auxiliary channel , i.e. , , and restrict the analysis in this paper to 2d circularly symmetric gaussian distributions where is the noise variance of the auxiliary channel , , and complex . for details on the impact of higher - dimensional gaussian auxiliary channels ,see .irrespective of the particular choice of the auxiliary channel , we get a lower bound to by using instead of ( * ? ? ?vi ) , \triangleq { \ensuremath{\text{r}_{\text{sym}}}\xspace } , \end{aligned}\ ] ] where the expectation is taken with respect to , and .the value of be estimated from monte carlo simulations of input - output pairs of the channel as the symbol - wise air is achievable for a decoder that assumes . for the practical bit - interleaved coded modulation schemes that are also used in fiber - optics , a bit - wise demapperis followed by a binary decoder , as shown in fig .[ fig : model ] . 
in this setup ,the symbol - wise input considered to consist of bit levels that can be stochastically dependent , and the decoder operates on bit - wise metrics .an air for this bit - metric decoding ( bmd ) scheme is the bmd rate ^+ \\ & = \bigl [ { \ensuremath{\mathbb{h}}}({\ensuremath{\boldsymbol{b}}\xspace } ) - \sum_{i=1}^m { \ensuremath{\mathbb{h}}}(b_i|{\ensuremath{y}\xspace } ) \bigr]^+ , \end{aligned}\ ] ] which is the air considered for the simulations in this work . in, the index indicates the bit level , ] is . note that bounded above by the symbol - wise mi , . the first term of is the sum of the mis of parallel bit - wise channels .the term in corrects for a rate overestimate due to dependent bit levels . for independent bit levels , i.e. , , the term is zero and the well - known generalized mutual information calculated with soft metrics that are matched to the channel .we calculate , which is an instance of , in monte carlo simulations of samples as \nonumber \\ & - \frac{1}{n } \sum_{k=1}^{n}\sum_{i=1}^m \left[\log_2\left ( 1+e^{(-1)^{b_{k , i } } \lambda_{k , i } } \right)\right ] , \end{aligned}\ ] ] where are the sent bits .the air a function of the soft bit - wise demapper output . these log - likelihood ratios ( llrs ) are computed with the auxiliary channel as where and denote the set of constellation points whose ^th^ bit is 1 and 0 , respectively .the first term of is the llr from the channel and the second term is the a - priori information .for uniformly distributed input , the a - priori information is 0 . using the 2d gaussian auxiliary channel of, we have these llrs can be computed equivalently in 1d if a symmetric auxiliary channel is chosen , a product labeling is used ( * ? ? ?2.5.2 ) and is generated from the product of 1d constellations .we search for the input distribution that maximizes , \leq 1}{\max}~ { \ensuremath{\text{r}_{\text{bmd}}}\xspace},\ ] ] where the underlying channel is awgn .probabilistic shaping for the nonlinear fiber channel is discussed in sec .[ sec : mismatched_shaping_fiber ] .as the awgn channel is symmetric , the 1d pmfs are also symmetric around the origin , i.e. , which in 2d corresponds to a fixed probability per qam ring .a common optimized input for is to use shaped input distributions from the family of maxwell - boltzmann ( mb ) distributions ( * ? ? ?iv ) , ( * ? ? ?viii - a ) .the method to find an optimized input for a particular snr is discussed in detail in ( * ? ? ?iii - c ) and briefly reviewed in the following paragraph .in bits/4d - sym for uniform qam input ( dashed lines ) and qam with the shaped , snr - dependent mb pmf of ( solid lines ) .the awgn capacity ( dotted line ) is shown as a reference . ]let the positive scalar denote a constellation scaling of with a fixed constellation .furthermore , let the pmf of the input be where is another scaling factor .for each choice of , there exists a scaling that fulfills the average - power constraint =1 $ ] .we optimize the scalings and such that maximized while using a distribution from and operating at the channel snr that is defined as }{\sigma^2}=\frac{{e_\text{s}}}{n_0}=\frac{1}{n_0 } , \end{aligned}\ ] ] where the 1d signal power is normalized to 1 due to the average - power constraint and is the noise variance per dimension .this optimization can be carried out with efficient algorithms , see ( * ? ? ?iii - c ) . 
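as a concrete, hedged illustration of this optimization, the python sketch below builds the maxwell-boltzmann pmf on the 1d amplitudes of a square qam constellation, rescales the constellation to satisfy the unit average-power constraint, estimates an achievable rate by monte carlo with a gaussian auxiliary channel, and sweeps the mb parameter to pick the best value at a given snr. the grid of parameter values, the snr convention (noise variance per 1d symbol at unit signal power), and the use of the symbol-wise rate as a stand-in for the bmd rate (which it closely approximates for gray-labelled qam, as noted above) are assumptions of this sketch, not the efficient algorithms of the cited reference.

\begin{verbatim}
import numpy as np

def mb_pmf(points, nu):
    """Maxwell-Boltzmann pmf, p(x) proportional to exp(-nu * |x|^2), on the given points."""
    w = np.exp(-nu * np.abs(points) ** 2)
    return w / w.sum()

def symbolwise_air(points, pmf, snr_db, n_sym=200_000, rng=None):
    """Monte Carlo estimate of the symbol-wise AIR (bit per 1D symbol) over real AWGN,
    using a Gaussian auxiliary channel q(y|x); common scale factors cancel in the ratio."""
    rng = np.random.default_rng(0) if rng is None else rng
    scale = 1.0 / np.sqrt(np.sum(pmf * points ** 2))    # enforce unit average power
    x_pts = scale * points
    n0 = 10.0 ** (-snr_db / 10.0)                       # noise variance at unit signal power
    idx = rng.choice(len(x_pts), size=n_sym, p=pmf)
    y = x_pts[idx] + rng.normal(scale=np.sqrt(n0), size=n_sym)
    q = np.exp(-(y[:, None] - x_pts[None, :]) ** 2 / n0)        # shape (n_sym, M)
    num = q[np.arange(n_sym), idx]
    den = q @ pmf
    return float(np.mean(np.log2((num + 1e-300) / (den + 1e-300))))

def optimise_mb(points, snr_db, nus=np.linspace(0.0, 0.25, 26)):
    """Sweep the MB parameter and keep the one maximising the estimated AIR."""
    rng = np.random.default_rng(1)
    airs = [symbolwise_air(points, mb_pmf(points, nu), snr_db, rng=rng) for nu in nus]
    k = int(np.argmax(airs))
    return nus[k], airs[k]

# 8-ASK amplitudes, i.e. the 1D components of square 64QAM
ask8 = np.arange(-7.0, 8.0, 2.0)
nu_opt, air_opt = optimise_mb(ask8, snr_db=12.0)
print(nu_opt, air_opt)   # bit per 1D symbol; four such components give the bit/4D-sym rates
\end{verbatim}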
in fig .[ fig : bmd_awgn_uniformshaped ] , in bits per four - dimensional symbol ( bit/4d - sym ) is shown versus the snr of an awgn channel .we choose to plot 4d symbol to have values that are consistent with the dual - polarization airs of sec [ sec : mismatched_shaping_fiber ] .further , only shown as it virtually achieves ( * ? ? ?* table 3 ) , ( * ? ? ?* fig . 1 ) , and the more practical air compared to .the dotted curves represent uniformly distributed input and the solid lines show the qam with optimized mb input .the awgn capacity is given for reference .significant gains are found from probabilistic shaping over uniform input , with sensitivity improvements of up to 0.43 db for 16qam , 0.8 db for 64qam and more than 1 db for 256qam . in order to find the optimized mb input , the snr of the channel over which we transmit , denoted _ channel_ snr , must be known or estimated _ a priori _ at the transmitter .this transmitter - side estimate of the snr is referred to as _ shaping _ snr . in a realistic communication system , it can be difficult to know the channel snr at the transmitter because of varying channel conditions such as the number and properties of co - propagating signals , dsp convergence behavior , and aging of components .hence , shaping without knowledge of the channel snr could simplify the implementation of probabilistic shaping .we will see later that an offset from the shaping snr to the channel snr has a minor effect on the awgn channel if a suitable combination of qam format and shaping snr is used in the proper snr regime .c||c|c||c|c||c|c & * a ) * & * b ) * & * c ) * & * d ) * & * e ) * & * f ) * + -qam & 16 & 16 & 64 & 64 & 256 & 256 + .fixed distributions of -qam ( with ) that lead to at most 0.1 snr loss compared to the full shaping gain [ cols="^ " , ] = dbm ( solid , crosses ) and = 3 dbm ( dashed , circles ) .although these two power levels have approximately the same , the pmf for 3 dbm is not shaped as much in order to avoid increased nli . ] in fig .[ fig : egn_model_optim_2000 km ] , shown versus channel for different input distributions of 64qam .all results are obtained from the spm - xpm model .the dotted curves show a 1d pmf ( red ) and a 2d pmf ( gray ) , both optimized with the spm - xpm model , and the airs for uniform , mb - shaped input , pmf * d ) * are also included .the two optimized pmfs give identical values of , and their shapes are very similar , as the insets in fig .[ fig : egn_model_optim_2000 km ] show .we conclude that , for the considered system , there is virtually no benefit in using the optimized 2d input . additionally , the mb shaped input gives identical gains to the 1d - optimized input at low power and around the optimal one .it is only in the high - power regime that slightly increased airs are obtained with the optimized input .this indicates that , also for a multi - span fiber channel , the shaping gain is very insensitive to variations in the input distribution , and an optimized input gives shaping gains that are no larger than those with an mb pmf .in fact , it is sufficient for the considered system simply to use the fixed input distribution * d ) * from table [ table : input_dists_perfectshaping ] to effectively obtain the maximum shaping gain . in bit/4d - sym for 64qam versus channel in dbm for the spm - xpm model . 
the airs for the 1d - optimized input ( red dotted ) , 2d - optimized pmf ( gray dotted ) and for all other shaped inputs lie on top of each other over a wide range of launch powers .inset : the optimized 1d pmf in its 2d representation and the 2d pmf , each for .5 dbm . ]in this work , we have studied probabilistic shaping for long - haul optical fiber systems via both numerical simulations and a gn model .we based our analysis on awgn results that show that just two input pmfs from the family of maxwell - boltzmann distributions are sufficient per qam format to realize large shaping gains over a wide range of snrs .we have found that these fixed shaped distributions also represent an excellent choice for applying shaping to a multi - span fiber system . using just one input distribution for 64qam ,large shaping gains are reported from transmission distances between 1,400 km to 3,000 km . for a fixed distance of 2,000 km, we have studied the impact of probabilistic shaping with maxwell - boltzmann distributions and other pmfs .the adverse effects of shaping in the presence of modulation - dependent nonlinear effects of a wdm system have been shown to be present .an nli penalty from shaping is found to be relatively minor around the optimal launch power in a multi - span system .this means that , for the considered system , just one input pmf for 64qam effectively gives the maximum shaping gain and an optimization for the fiber channel is not necessary .this could greatly simplify the implementation and design of probabilistic shaping in practical optical fiber systems .we expect similar results for other qam formats such as 16qam or 256qam when they are used in fiber systems that are comparable to the ones in this work .we have also found that the gn model is in excellent agreement with the ssfm results , confirming its accuracy for shaped qam input . for nonlinear fiber links in which the contribution of significant ,e.g. , those with in - line dispersion management or single - span links with high power , further optimizations of the shaping scheme can be both beneficial for a large shaping gain and incur low nli .additionally , instead of shaping on a per - symbol basis , constellation shaping over several time slots to exploit the temporal correlations by xpm is an interesting future step to increase se .also , optimizing distributions in four dimensions could be beneficial for highly nonlinear polarization - multiplexed fiber links .the authors would like to thank prof .frank kschischang ( university of toronto ) for encouraging us to use the spm - xpm model to study probabilistic shaping for the nonlinear fiber channel .the authors would also like to thank the anonymous reviewers for their valuable comments that helped to improve the paper .p. bayvel , r. maher , t. xu , g. liga , n. a. shevchenko , d. lavery , a. alvarado , and r. i. killey , `` maximizing the optical network capacity , '' _ philosophical transactions of the royal society of london a _ , vol .374 , no . 2062 , jan .r. maher , a. alvarado , d. lavery , and p. bayvel , `` modulation order and code rate optimisation for digital coherent transceivers using generalised mutual information , '' in _ proc .european conference and exhibition on optical communication ( ecoc ) _ , valencia , spain , paper mo.3.3.4 , sep .2015 .u. wachsmann , r. f. h. fisher , and j. b. huber , `` multilevel codes : theoretical concepts and practical design rules , '' _ ieee transactions on information theory _ , vol .45 , no . 5 , pp . 
13611391 , jul . 1999 .i. b. djordjevic , h. g. batshon , l. xu , and t. wang , `` coded polarization - multiplexed iterative polar modulation ( pm - ipm ) for beyond 400 gb / s serial optical transmission , '' in _ proc .optical fiber communication conference ( ofc ) _ , san diego , ca , usa , paper omk2 , mar .h. g. batshon , i. b. djordjevic , l. xu , and t. wang , `` iterative polar quantization based modulation to achieve channel capacity in ultra - high - speed optical communication systems , '' _ ieee photonics journal _ , vol . 2 , no . 4 , pp . 593599 , aug .t. h. lotz , x. liu , s. chandrasekhar , p. j. winzer , h. haunstein , s. randel , s. corteselli , b. zhu , and d. w. peckham , `` coded pdm - ofdm transmission with shaped 256-iterative - polar - modulation achieving 11.15-b / s / hz intrachannel spectral efficiency and 800-km reach , '' _ journal of lightwave technology _ , vol .31 , no . 4 , pp . 538545 , feb .j. estaran , d. zibar , a. caballero , c. peucheret , and i. t. monroy , `` experimental demonstration of capacity - achieving phase - shifted superposition modulation , '' in _ proc .european conference on optical communications ( ecoc ) _ , london , uk , paper we.4.d.5 , sep .t. liu and i. b. djordjevic , `` multidimensional optimal signal constellation sets and symbol mappings for block - interleaved coded - modulation enabling ultrahigh - speed optical transport , '' _ ieee photonics journal _ , vol . 6 , no . 4 , pp . 114 , aug . 2014 .a. shiner , m. reimer , a. borowiec , s. o. gharan , j. gaudette , p. mehta , d. charlton , k. roberts , and m. osullivan , `` demonstration of an 8-dimensional modulation format with reduced inter - channel nonlinearities in a polarization multiplexed coherent system , '' _ optics express _ , vol .22 , no .17 , pp . 2036620374 , aug . 2014 .o. geller , r. dar , m. feder , and m. shtaif , `` a shaping algorithm for mitigating inter - channel nonlinear phase - noise in nonlinear fiber systems , '' _ journal of lightwave technology _ , vol .pp , no .99 , jun . 2016 .b. p. smith and f. r. kschischang , `` a pragmatic coded modulation scheme for high - spectral - efficiency fiber - optic communications , '' _ journal of lightwave technology _ , vol .30 , no . 13 , pp .20472053 , jul .2012 .m. p. yankov , d. zibar , k. j. larsen , l. p. christensen , and s. forchhammer , `` constellation shaping for fiber - optic channels with qam and high spectral efficiency , '' _ ieee photonics technology letters _26 , no . 23 , pp .24072410 , dec .t. fehenberger , g. bcherer , a. alvarado , and n. hanik , `` ldpc coded modulation with probabilistic shaping for optical fiber systems , '' in _ proc . optical fiber communication conference ( ofc ) _ , los angeles , ca , usa , paper th.2.a.23 , mar . 2015 .f. buchali , g. bcherer , w. idler , l. schmalen , p. schulte , and f. steiner , `` experimental demonstration of capacity increase and rate - adaptation by probabilistically shaped 64-qam , '' in _ proc .european conference and exhibition on optical communication ( ecoc ) _ , valencia , spain , paper pdp.3.4 , sep .2015 . c. diniz , j. h. junior , a. souza , t. lima , r. lopes , s. rossi , m. garrich , j. d. reis , d. arantes , j. oliveira , and d. a. mello , `` network cost savings enabled by probabilistic shaping in dp-16qam 200-gb / s systems , '' in _ proc . optical fiber communication conference ( ofc ) _ , anaheim , ca , usa , paper tu3f.7 , mar .t. fehenberger , d. lavery , r. maher , a. alvarado , p. bayvel , and n. 
hanik , `` sensitivity gains by mismatched probabilistic shaping for optical communication systems , '' _ ieee photonics technology letters _28 , no . 7 , pp . 786789 , apr . 2016 .f. buchali , f. steiner , g. bcherer , l. schmalen , p. schulte , and w. idler , `` rate adaptation and reach increase by probabilistically shaped 64-qam : an experimental demonstration , '' _ journal of lightwave technology _ , vol .34 , no . 7 , pp . 15991609 , apr .m. p. yankov , f. da ros , e. p. da silva , s. forchhammer , k. j. larsen , l. k. oxenlwe , m. galili , and d. zibar , `` constellation shaping for wdm systems using 256qam/1024qam with probabilistic optimization , '' mar . 2016 .[ online ] .available : http://arxiv.org/abs/1603.07327 g. bcherer , p. schulte , and f. steiner , `` bandwidth efficient and rate - matched low - density parity - check coded modulation , '' _ ieee transactions on communications _, vol . 63 , no . 12 , pp .46514665 , dec . 2015 .r. dar , m. feder , a. mecozzi , and m. shtaif , `` on shaping gain in the nonlinear fiber - optic channel , '' in _ proc .ieee international symposium on information theory ( isit ) _ , honolulu , hi , usa , jun .2014 .a. ganti , a. lapidoth , and i. e. telatar , `` mismatched decoding revisited : general alphabets , channels with memory , and the wide - band limit , '' _ ieee transactions on information theory _46 , no . 7 , pp . 23152328 , nov . 2000 .m. secondini , e. forestieri , and g. prati , `` achievable information rate in nonlinear wdm fiber - optic systems with arbitrary modulation formats and dispersion maps , '' _ journal of lightwave technology _ , vol .31 , no .38393852 , dec . 2013 .t. fehenberger , t. a. eriksson , a. alvarado , m. karlsson , e. agrell , and n. hanik , `` improved achievable information rates by optimized four - dimensional demappers in optical transmission experiments , '' in _ proc .optical fiber communication conference ( ofc ) _ , anaheim , ca , usa , paper w1i.4 , mar .t. a. eriksson , t. fehenberger , p. andrekson , m. karlsson , n. hanik , and e. agrell , `` impact of 4d channel distribution on the achievable rates in coherent optical communication experiments , '' _ journal of lightwave technology _ , vol .34 , no . 9 , pp .22562266 , may 2016 .d. arnold , h .- a .loeliger , p. vontobel , a. kavcic , and w. zeng , `` simulation - based computation of information rates for channels with memory , '' _ ieee transactions on information theory _ , vol .52 , no . 8 , pp . 34983508 , aug .2006 .p. poggiolini , g. bosco , a. carena , v. curri , y. jiang , and f. forghieri , `` the gn - model of fiber non - linear propagation and its applications , '' _ journal of lightwave technology _ , vol .32 , no . 4 ,694721 , feb .2014 .r. dar , m. feder , a. mecozzi , and m. shtaif , `` inter - channel nonlinear interference noise in wdm systems : modeling and mitigation , '' _ journal of lightwave technology _ , vol .33 , no . 5 , pp . 10441053 , mar .2015 .t. koike - akino , k. kojima , d. s. millar , k. parsons , t. yoshida , and t. sugihara , `` pareto - efficient set of modulation and coding based on rgmi in nonlinear fiber transmissions , '' in _ proc .optical fiber communication conference ( ofc ) _ , anaheim , ca , usa , paper th1d.4 , mar .
different aspects of probabilistic shaping for a multi - span optical communication system are studied . first , a numerical analysis of the additive white gaussian noise ( awgn ) channel investigates the effect of using a small number of input probability mass functions ( pmfs ) for a range of signal - to - noise ratios ( snrs ) , instead of optimizing the constellation shaping for each snr . it is shown that if a small penalty of at most 0.1 db snr to the full shaping gain is acceptable , just two shaped pmfs are required per quadrature amplitude modulation ( qam ) over a large snr range . for a multi - span wavelength division multiplexing ( wdm ) optical fiber system with 64qam input , it is shown that just one pmf is required to achieve large gains over uniform input for distances from 1,400 km to 3,000 km . using recently developed theoretical models that extend the gaussian noise ( gn ) model and full - field split - step simulations , we illustrate the ramifications of probabilistic shaping on the effective snr after fiber propagation . our results show that , for a fixed average optical launch power , a shaping gain is obtained for the noise contributions from fiber amplifiers and modulation - independent nonlinear interference ( nli ) , whereas shaping simultaneously causes a penalty as it leads to an increased nli . however , this nonlinear shaping loss is found to have a relatively minor impact , and optimizing the shaped pmf with a modulation - dependent gn model confirms that the pmf found for awgn is also a good choice for a multi - span fiber system . achievable information rates , bit - wise decoders , gaussian noise models , mutual information , nonlinear fiber channel , probabilistic shaping , wavelength division multiplexing .
in many applications and implementations of quantum information processing , one has to compare two different quantum states . in this context ,the quantum fidelity is a very useful tool to measure the `` closeness '' between two states in the hilbert space of a quantum system . for two arbitrary states and , it is defined as for any pair of pure states and , the quantum fidelity reduces to their ( squared ) overlap , . although the fidelity does not define a metric on the state space , it is the core ingredient for several of them , like for instance the bures distance ^{1/2} ] ) , respectively , yields with , and . we then get } } _ { \delta \phi \in [ 0,2\pi [ } \mathcal{n}^2 c_n^k \left|\sum_{j=0}^{k ' } c_j(x , x ' ) e^{i j \delta \phi}\right|^2,\ ] ] with as given by eq .( [ cjxxp ] ) . finally , using the identity and setting yields eq .( [ rsnkkp ] ) .we then define the unnormalized dicke states , which satisfy .\label{eq : decomposition_gd}\end{aligned}\ ] ] inserting eq .( [ bnkeps ] ) in eq .( [ eq : decomposition_gd ] ) and observing that for any and the symmetric state reads in the dicke state basis yields straightforwardly with , , and as given by eq .( [ ajbjcj ] ) and . in the sum over in eq .( [ gnkeps ] ) , the first term merely yields the state , which is nothing but the dicke state [ see eq .( [ unkrec ] ) ] .the rest of the sum from to is by definition the state [ eq . ( [ psieps ] ) ] .we thus get from which eq .( [ psink2 ] ) immediately follows for any . for , and the normalized state not defined . c. d. cenci , d. w. lyons , s. n. walck , arxiv:1011.5229 ; _ theory of quantum computation , communication and cryptography _ , edited by d. bacon , m. martin - delgado , and m. roetteler , lecture notes in computer science , vol .6745 ( springer , berlin , 2014 ) , p. 198 .we recall that the -excitation dicke states ( ) are defined as , where denotes the binomial coefficient , the multiqubit states in the sum contain qubits in state , and denotes all permutations of the qubits leading to different terms in the sum .all dicke states ( ) are symmetric and they form an orthonormal basis in the symmetric subspace of the multiqubit system . for , where denotes the floor function , the dicke states are slocc inequivalent between each others . in contrast ,the and states are lu equivalent .although the -qubit local operation remains defined for , though not invertible in this case , the normalized state is in contrast not defined in this case since ( see appendix [ apb ] ) .
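as a small numerical companion to the fidelity and dicke - state definitions above , the sketch below ( python / numpy , with illustrative function names ) builds n - qubit dicke states , applies the same single - qubit unitary to every qubit , and evaluates the pure - state fidelity ; the random search over unitaries is only meant to illustrate the quantities involved , not to certify the maximum .

```python
import numpy as np
from itertools import combinations

def dicke(n, k):
    """normalized n-qubit dicke state with k excitations."""
    v = np.zeros(2 ** n, dtype=complex)
    for ones in combinations(range(n), k):
        idx = sum(1 << (n - 1 - q) for q in ones)
        v[idx] = 1.0
    return v / np.linalg.norm(v)

def same_local_unitary(u, n):
    """u tensored with itself n times (the same unitary applied to every qubit)."""
    out = np.array([[1.0 + 0j]])
    for _ in range(n):
        out = np.kron(out, u)
    return out

def fidelity_pure(psi, phi):
    """fidelity of two pure states: |<psi|phi>|^2."""
    return abs(np.vdot(psi, phi)) ** 2

def random_unitary_2x2(rng):
    """haar-random single-qubit unitary via qr of a complex gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

if __name__ == "__main__":
    # crude random search for the best same-U overlap between two dicke states
    rng = np.random.default_rng(1)
    n = 4
    psi, phi = dicke(n, 1), dicke(n, 2)
    best = 0.0
    for _ in range(2000):
        u = random_unitary_2x2(rng)
        best = max(best, fidelity_pure(psi, same_local_unitary(u, n) @ phi))
    print("best same-U overlap found: %.4f" % best)
```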
for two symmetric quantum states one may be interested in maximizing the overlap under local operations applied to one of them . the question arises whether this maximal overlap can always be obtained by applying the same local operation to each party . we show that for two symmetric multiqubit states and local unitary transformations this is indeed the case ; the maximal overlap can be reached by applying the same unitary matrix to every qubit . for local invertible operations ( i.e. , equivalence under stochastic local operations assisted by classical communication ) , however , we present counterexamples demonstrating that applying the same operation everywhere is not always enough .
research interest in the origins of the long - range bidirectional movement of particles ( organelles , vesicles , nutrients ) driven by molecular motors is motivated by fundamental questions concerning the nature of interactions between motors and their cargos as transport processes take place .a current explanation for the phenomenon relies on the idea that motors of different polarities act coordinately on the same particle at different times . if , however , they act in parallel , the bidirectional movement would reflect dominance of one or another kind of motor achieved by a _ tug - of - war _ mechanism , , , , . an important question that remains in this context concerns the mechanisms that would promote such coordination .alternatives to the coordination or _ tug - of - war _ models in the literature arise from the possibility of attributing the phenomenon to a dynamic role of the microtubules or to a mechanical coupling between different motors .a general difficulty encountered within any of these views is related to the presence of other particles ( including other motors ) on the microtubule at a given time that are not directly involved with the transfer process .these other particles are expected to impose restrictions on motility and performance of the motors that are directly interacting with cargo at that time .contrarily to these expectations , however , data from observations of beads driven by kinesins in steady - state conditions indicate that the number of long length runs of such beads increases significantly as the density of motors at the microtubule increases , although their velocities remain essentially unaltered within a wide range of motor concentrations , .thus , the reality of traffic jam in crowded microtubules still challenges the current view of long - range cargo transport that presupposes an effective and controllable movement of the motor(s ) arranged into a motor - cargo complex .this , of course , requires a certain degree of stability of motor - cargo interactions and motor processivity .our intention here is to discuss these problems from a different perspective by bringing into this scenario the model introduced in to examine cargo transport as a _ hopping _ process .according to that , motors and cargos would not assemble into complexes to put transport into effect . on the contrary , each motor would function as an active overpass for cargo to step over to a neighboring motor . in this case , the long - range movement of cargo is envisaged as a sequence of these elementary ( short - range ) steps either forwards or backwards . in we examined the conditions under which this may happen , accounting for the fact that motor motility is affected by the interactions with other motors and with cargos on the microtubule .there , we considered the presence of a collection of interacting motors , all of them presenting the same polarity ( kinesins may be thought of as prototypes ) and a single cargo . here , we examine whether it is possible to explain in a similar context the origin of the observed bidirectional movement displayed by cargos .the particular mechanism we propose to substantiate the hopping differs from that suggested in .it keeps , however , the same general ideas of the original .as it will be explained below , we view the hopping of cargo between motors as an effect of thermal fluctuations undergone by motor tails .the flexibility of the tails may promote contact and , eventually , exchange of cargo between neighboring motors . 
as in ,the model dynamics is mapped into an asymmetric simple exclusion process ( asep ) , , whose stationary properties are resolved explicitly in the limit of very large systems .other asep models have already been considered in the literature to study the conditions for motor jamming in the absence of cargo , , .our model is conceived to account explicitly for changes in the dynamics of the motors that at a certain instant of time are interacting with cargos .the model is reviewed here in order to include a second cargo in the system , still keeping the presence of motors of a single polarity .we believe that this approaches more realistic situations in which the simultaneous presence of many cargos and motors on the same microtubule must be the prevailing situation .we show that under these conditions , a cargo may be able to execute long - range bidirectional movement as it moves over clusters of motors assembled either at its back end or at the back end of the cargo in front .one may recognize in this a possibility for explaining the origins of self - regulation in intracellular transport since it has been suggested in the last few years that signaling pathways involved in intracellular traffic regulation can be performed simply by the presence of cargos at the microtubule .we then speculate that the passage of cargos on microtubules does not get blocked by motor jamming . on the contrary ,jamming operates as an allied process to promote long runs of cargos across motor clusters . in this case , the density of motors on the microtubule can be identified as an element of control in intracellular transport since it directly affects the conditions for jamming .it is worth mentioning that the model developed here does not rule out other possibilities , such as the _ tug - of - war _ or competition models .what we suggest is that the presence of motors of different polarities may not be essential to explain the origin of the bidirectional movement .the hopping mechanism is presented in sec.2 .the kinetic properties of the extended version are developed in sec.3 , considering the presence of two cargos .in sec.4 we present our results .additional remarks and conclusions are in sec.5 .the stochastic model in formulated in a lattice describes the dynamics of motors and cargos accounting for ( i ) steric interactions among different particles moving on the same microtubule ; ( ii ) the presence of motors of a single polarity ; ( iii ) the fact that cargos do not move if not driven by motors .the crucial point is in item ( iii ) because it requires a specific model for motor - cargo dynamics as transportation takes place .we offer here a slightly different view from that in keeping however the reliance on the ability of motors to transfer cargo .one way by which this may be achieved is sketched in _ figure ( 1)_. the stepping of cargo would be accomplished as it is released from a motor to which it is attached at a certain instant of time and then get attached to another ( neighboring ) motor either at the left or at the right see _ figure ( 1a)_. this process , like the one discussed in , relies strongly on the flexibility of the motor 's tail .the idea was inspired by experimental results suggesting a dynamic role of the kinesin 's coiled - coil segment in the process , and also by data indicating that under load , kinesin motors display an oscillatory movement . 
here , we think of these oscillations as signaling fluctuations in the position of motor 's tail , not necessarily being correlated with displacement of its center of mass .if this is the case , such oscillations would promote contact between neighboring motors favoring cargo exchange .accordingly , long - range displacements of cargo would reflect a hopping process extended over many neighboring motors which may be accomplished if these motors get jammed into clusters for sufficient long periods of time.notice that the whole mechanism does not require special stability of cargo - motor binding . on the contrary , the transfer of cargo by a motor would be ease by a loose attachment between them . the model in resolved explicitly considering the presence of a single cargo in the system .the averages for the quantities of interest were determined in steady - state conditions .we showed there that the long - range displacements of the cargo would occur predominantly in the backwards direction , i.e. in opposition to the direction of the movement of the considered motors .we shall show here that the same dynamics may lead cargo to display bidirectional movement if the system contains at least one more cargo interfering with the movement of the motors . to examine the properties of the system containing two cargos and an arbitrary number of motors, we map it into the same asep as in goldman1 whose dynamics can also reproduce the scheme in _ figure(1)_. we consider a one - dimensional lattice with sites , representing the microtubule with periodic boundary conditions .this system contains motors and a number of other particles - the cargos - that interact with motors in order to move .each site can be occupied by a motor or by a motor attached to a cargo _ ( see fig.1(a ) ) _ , otherwise it is empty .the total number of sites that remain unoccupied is . here, we analyze the long - time behavior of this system for and determine the average cargo velocity as a function of the parameters .the results indicate conditions for cargos to perform a type of long - range movement that share the characteristics of the observed bidirectional movement . the map of the dynamics shown in _figure.1 _ into the considered asep is carried out as follows ._ _ _ _ first , each site is identified by its position at the lattice .then , to each of these sites is associated a variable that assumes integer values or such that if the site is empty , if it is occupied by a motor ; or if it is occupied by a motor attached to a cargo . with these, a configuration of the lattice is specified by the set the dynamics of the asep that reproduces the elementary steps in _fig.1 _ can now be defined . for this , consider that at each time interval a pair of consecutive sites , say and are selected at random .the occupancy of these two sites is then switched according to the following rules the pair is represented by the values of the corresponding site variables parameters and are the assigned probabilities per unit time ( rates ) for occurrence of the processes indicated .process describes the possibility for a motor ( kinesin ) that carries no cargo to step forward to a neighboring empty site _ ( figure 1b)_. processes and account for the switching of the cargo between two neighboring motors .this accounts either for backward or for forward stepsnotice that the dynamics conserves the number of particles of type-1 as well as those of type-2 . 
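a minimal random - sequential monte carlo sketch of these elementary steps is given below ( python / numpy ) ; the rate names `p_step` , `q_fwd` and `q_bwd` are placeholders for the rate symbols that are garbled in this copy , and the system size and densities are chosen only for illustration .

```python
import numpy as np

def simulate(n_sites=200, n_motors=120, n_cargos=2,
             p_step=1.0, q_fwd=0.3, q_bwd=0.3,
             sweeps=5000, seed=0):
    """random-sequential-update simulation of the three elementary processes on a ring.
    site states: 0 = empty, 1 = motor, 2 = motor carrying a cargo."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(n_sites, dtype=int)
    motors = rng.choice(n_sites, size=n_motors, replace=False)
    lattice[motors] = 1
    lattice[rng.choice(motors, size=n_cargos, replace=False)] = 2

    rmax = max(p_step, q_fwd, q_bwd)
    displacement = 0                                   # pooled net cargo displacement
    for _ in range(sweeps * n_sites):
        i = rng.integers(n_sites)
        j = (i + 1) % n_sites
        a, b = lattice[i], lattice[j]
        u = rng.random() * rmax
        if (a, b) == (1, 0) and u < p_step:            # bare motor steps forward
            lattice[i], lattice[j] = 0, 1
        elif (a, b) == (2, 1) and u < q_fwd:           # cargo transferred forward
            lattice[i], lattice[j] = 1, 2
            displacement += 1
        elif (a, b) == (1, 2) and u < q_bwd:           # cargo transferred backward
            lattice[i], lattice[j] = 2, 1
            displacement -= 1
    return displacement / (n_cargos * sweeps)          # mean cargo velocity, sites per sweep

if __name__ == "__main__":
    for rho in (0.3, 0.6, 0.9):
        v = simulate(n_motors=int(rho * 200))
        print("motor density %.1f -> mean cargo velocity %+.4f" % (rho, v))
```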
in order to investigate the long - time dynamics of a cargo resulting from these elementary steps we use the _ matrix _ _ ansatz _ introduced by derrida , .the idea is to represent the probability of a configuration of the system with sites and particles of type-1 as a trace over a product of non - commuting matrices , each specifying the corresponding site occupancy: the normalization .the sum runs over all configurations for which and . in this product , a site represented by a matrix if it is occupied by a motor ( ) or by a matrix if occupied by a motor with a cargo ( ) ; if the site is empty it is represented by a matrix ( ) . in order to calculate averages over these configurations in the stationary state ,it is necessary at first to find the _ algebra _ that must be satisfied by these matrices such that the probabilities defined in ( [ prob do estado ] ) satisfy the stationary conditions , where the sum extends over all configurations of motors distributed over lattice sites .observe that the nonzero terms on the lhs of the above equation are those for which configurations and differ from each other at most by the positions of a pair of consecutive sites , which can be reversed by any of the elementary processes defined by the dynamics in ( [ dinamica ] ) . in this case , each factor ( or ) must be replaced by the rate , or for the corresponding elementary process that brings back from ( or from ) .the algebra corresponding to the asep defined by the dynamics ( [ dinamica ] ) has been presented in ( ) for with , we shall use this same algebra to evaluate the traces over products of matrices and that appear in calculating averages over the quantities that characterize the movement of a cargo . before proceeding , however , a few remarks are in order .the model with two cargos is not ergodic .the dynamics preserves the number of empty spaces in each of the two partitions defined by the initial positions of the two cargos in the system with periodic boundary conditions . in this case , all configurations and that satisfy equation ( [ mestra ] ) must share the number of empty spaces in each of the partitions .moreover , configurations in which the empty spaces are all concentrated in one of the two partitions must be excluded , for these do not satisfy ( [ mestra ] ) with the algebra ( [ algebra ] ) .we treat the initial conditions ( _ ic _ ) , namely the number of empty spaces - in one of the partitions and in the other , as _ random variables_. this artifact shall account for the uncertainty one has in experimental data regarding the relative positions of the particles , and also for effects of random processes that are not explicitly described by the present model such as motor binding and unbinding at the microtubule . 
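for very small rings the stationary state can also be obtained directly from the master equation , which gives an independent numerical check on matrix - ansatz results . the sketch below does this by brute force for the single - cargo case ( the two - cargo chain is not ergodic , as just discussed , so its null space is degenerate and would have to be handled per ergodic component ) ; rates and sizes are again illustrative placeholders matching the simulation sketch above .

```python
import numpy as np
from itertools import permutations

def configurations(n_sites, n_motors, n_cargos):
    """all occupation patterns on the ring with the prescribed particle numbers."""
    base = [2] * n_cargos + [1] * (n_motors - n_cargos) + [0] * (n_sites - n_motors)
    return sorted(set(permutations(base)))

def stationary_velocity(n_sites=8, n_motors=5, n_cargos=1,
                        p_step=1.0, q_fwd=0.3, q_bwd=0.3):
    states = configurations(n_sites, n_motors, n_cargos)
    index = {s: k for k, s in enumerate(states)}
    n = len(states)
    gen = np.zeros((n, n))                 # generator: gen[m, k] = rate from state k to state m
    flux = np.zeros(n)                     # expected net cargo current out of each state
    for k, s in enumerate(states):
        for i in range(n_sites):
            j = (i + 1) % n_sites
            pair = (s[i], s[j])
            if pair == (1, 0):
                rate, new, step = p_step, (0, 1), 0
            elif pair == (2, 1):
                rate, new, step = q_fwd, (1, 2), +1
            elif pair == (1, 2):
                rate, new, step = q_bwd, (2, 1), -1
            else:
                continue
            t = list(s)
            t[i], t[j] = new
            m = index[tuple(t)]
            gen[m, k] += rate
            gen[k, k] -= rate
            flux[k] += rate * step
    a = gen.copy()
    a[0, :] = 1.0                          # replace one balance equation by the normalization
    rhs = np.zeros(n)
    rhs[0] = 1.0
    pi = np.linalg.solve(a, rhs)           # stationary distribution
    return float(pi @ flux) / n_cargos     # mean cargo velocity

if __name__ == "__main__":
    print("exact stationary cargo velocity on a small ring: %.4f" % stationary_velocity())
```

for matching parameters this exact small - system value can be cross - checked against the simulation sketch given earlier .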
for computing averages , we shall account first for all possible configurations at fixed and then average the results over .the procedure is further specified observing that ( a ) because there is no reason to favor any initial configuration , we may consider that is uniformly distributed and ( b ) in analogy with a situation of equilibrium , we take the average _ annealing _ as the averages over particle configurations are performed in parallel with average over the measure of a configuration of a _subset_- is written as the trace over a product of matrices and that satisfy the algebra in ( [ algebra ] ) : in the expression above , each is a binary variable such that and satisfying the normalization conveniently expressed in terms of the weights .these are defined as the sum over the traces corresponding to the configurations that belong to the subset for which the occupation of the sites ... ... and ... ... in the _n - tuples _ are _ _ _ _ fixed and _ _ _ _ specified by the values of the corresponding site variables and respectively .the defined above must satisfy the stationary conditions the sum extends over all configurations that belong to the subset .the transition rates lead configurations into configurations . consistently with the above definitions we represent the average value of the velocity of any of the two cargos at fixed as configurations associated with are such that the specified cargo has one motor at its right side that allows it to move one step to the right at a rate .similarly , in the configurations associated to there is a motor at the left side of this cargo that allows it to move one step to the left at rate . in both types of configurations the neighborhood of the other cargo is not specified. it shall be convenient to subdivide the above sum into sums over configurations having the same trace .this is achieved by specifying in ( vmedio(a)2h ) the occupation of the sites that precede both cargos .for this , is rewritten as \right .\notag \\ & & \left . -w\left [ w_{\left ( 12\right ) -\left ( 12\right ) } ^{(h)}+w_{\left ( 12\right ) -\left ( 02\right ) } ^{(h)}\right ] \right\ } \label{vmedio(b)2h}\end{aligned}\ ] ] to proceed in the evaluation of ( [ vmedio(b)2h ] ) it is also convenient to replace the site variables by block variables and , that assume integer values to represent , respectively , sequences of motors and empty sites in a configuration with these , the sum over configurations that contribute to for example , can be expressed as . here , indicates a product of matrices .the symbol on the summation signals indicates that these are restricted to the configurations that satisfy the constraints in ( [ vinculo ] ) for .all the traces in the rhs of ( [ vmedio(b)2h ] ) can now be reduced with the aid of the algebra in ( [ algebra ] ) . 
for this, we use the identity also follows directly from ( [ algebra ] ) .the results are quoted as follows the above expressions are independent of .the only dependence on in the evaluation of the weights comes from the multiplicity of the configurations .configurations for which * * * * * * or * * * * * * contribute with a factor with respect to the contributions from all other configurations that result in the same trace .we now take the average of over all realizations of that assumes an integer value within the interval $ ] for , with equal probability .this is performed here as corresponds to the average _ annealing _ in analogy to a situation of equilibrium .the averaged quantities are indicated by the bars over the corresponding symbols representing the weights and normalization .now , observe that because the traces do not depend on , then the sums in the above expression , both in the numerator and in the denominator , account for all possible configurations of arbitrary sequences of empty and occupied sites , keeping fixed just the _ n - tuples _ indicated in each term . with this, the restrictions imposed on the sums in ( [ tracos tabela1 ] ) are removed .we estimate the number of configurations that contribute to for a given _ n - tuple _ by fixing the relative position of the cargos and counting for all possible sequences of and .we then sum over observing the invariance of the trace under cyclic transformations .the results are compiled below . tr(e ) \\ & & & \\( d ) & \overline{w}_{(021)-(12 ) } & \simeq & \displaystyle\sum\limits_{m_{i}=1}^{m-2}\dbinom{n - m_{i}-5}{m - m_{i}-1}\left [ ( n - m_{i}-4)x^{m_{i}}\right ] tr(e ) \\ & & & \\ ( e ) & \overline{w}_{(121)-(02 ) } & \simeq & \displaystyle\sum\limits_{m_{i}=1}^{m-1}\dbinom{n - m_{i}-5}{m - m_{i}-1}\left [ ( n - m_{i}-4)x^{m_{i}}\right ] tr(e ) \\ & & & \\( f ) & \overline{w}_{(021)-(02 ) } & \simeq & \dbinom{n-5}{m-1}\left [ ( n-4)\right ] tr(e ) \\ & & & \\ ( g ) & \overline{w}_{(02)-(02 ) } & \simeq & \frac{1}{2}\dbinom{n-4}{m}tr(e)\end{array } \label{config}\ ] ] .... .... 
variables and indicate the number of possible consecutive motors at the left of the cargos in each of these configurations contributing to a given .the sums over and are estimated here in the limit of very large systems for which and keeping the motor density finite within the range in this limit , the sums converge to integrals and these integrals can be evaluated using laplace 's asymptotic method .consider , for instance , the sum in ( [ config ] ) .we use stirling 's formula to approximate the factorials involving the variables and and define the new variables marchetti and assume continuous values in this limit so that the referred sum converges to the integral function in the expression above depends only on the sum \ln [ 1-\left ( y+z\right ) ] -[\rho -\left ( y+z\right ) ] \ln [ \rho -\left ( y+z\right ) ] .\label{h}\ ] ] thus , by defining ( [ w1212 - 1 ] ) can be rewritten as in order to apply laplace 's method for estimating the above integral , it is convenient to change the order of the integration observing that this change , the integral in becomes trivial and the double integral in ( [ somaw12 - 12 ( 3 ) ] ) reduces to can be estimated by its maximum at large .for this , notice that has a maximum at , if the condition can not be satisfied and the maximum contribution to the integral in ( [ if2 ] ) comes from the extremum of the interval at which is a local maximum .the result is , \frac{1}{n^{2}}\frac{1}{\sqrt{\rho } } \frac{\exp [ -n\rho \ln \rho ] } { \left [ \ln ( x\rho ) \right ] ^{2}}. \label{if2 estimativa a e c}\]]if however , then is localized inside the integration interval so that the integral in ( [ if2 ] ) is estimated as \frac{(x-1)^{2}}{\left ( x\right ) ^{3/2}}\frac{1}{(1-\rho ) ^{5/2}}\frac{(1-x\rho ) } { 1-x}\sqrt{\frac{2\pi } { n}}\exp \left\ { n\left [ \ln x-(1-\rho ) \ln \tfrac{(x-1)}{(1-\rho ) } \right ] \right\ } .\label{if2 estimativa b}\ ] ] we use this same procedure to estimate all the remaining terms in expression ( [ < v(2 ) > ] ) .we merely quote the results below , making some extra comments when necessary . 
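before quoting those results , a generic numerical illustration of the laplace estimates used here may be helpful ; the exponents below are textbook stand - ins ( the stirling integral for an interior maximum and a monotone exponent for a boundary maximum ) rather than the paper 's h , which is garbled in this copy .

```python
import numpy as np

def laplace_interior(n):
    """log of gamma(n+1) = int_0^inf exp(n*ln(t) - t) dt vs the stirling (interior-maximum) estimate."""
    t = np.linspace(1e-9, 20.0 * n, 400_001)
    h = np.log(t) - t / n                       # exponent per unit n, maximum at t = n
    log_exact = np.log(np.trapz(np.exp(n * (h - h.max())), t)) + n * h.max()
    log_estimate = 0.5 * np.log(2.0 * np.pi * n) + n * np.log(n) - n
    return log_exact, log_estimate

def laplace_boundary(n, a=1.0):
    """int_0^1 exp(-n*a*s) ds vs the boundary estimate 1/(n*a) (maximum at the endpoint s = 0)."""
    s = np.linspace(0.0, 1.0, 200_001)
    exact = np.trapz(np.exp(-n * a * s), s)
    return exact, 1.0 / (n * a)

if __name__ == "__main__":
    for n in (10, 50, 200):
        le, la = laplace_interior(n)
        be, ba = laplace_boundary(n)
        print("n=%4d  interior (log): %.4f vs %.4f   boundary: %.6f vs %.6f" % (n, le, la, be, ba))
```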
for estimating the sum indicated as in ( [ config ] ) we notice that the difference between the expression in the rhs of and the sum in is laplace 's method to the resulting integrals after taking the thermodynamic limit, it gives \notag \\ & & \times \left\ { \begin{array}{l } \dfrac{1}{n^{2}}\dfrac{1}{\sqrt{\rho } } \dfrac{\exp \left ( -n\rho \ln \rho \right ) } { \left ( \ln x\rho \right ) ^{2}},\quad \qquad x\rho < 1 \\ \text{or } \\ \dfrac{-\sqrt{\rho } } { \left ( x\right ) ^{5/2}}(1-x\rho )\dfrac{(1-x)^{2}}{(1-\rho ) ^{7/2}}\sqrt{\dfrac{2\pi } { n}}\exp \left\ { n\left [ \ln x-(1-\rho ) \ln \tfrac{(x-1)}{(1-\rho ) } \right ] \right\ } , \text { } \\\hspace{2.5in}\text{\quad\ \ \ \ \ \ \ \ } x\rho > 1\ \hspace{0.1in}\text { } \end{array}\right.\end{aligned}\ ] ] the sum over configurations that contribute to in ( [ config ] ) - is estimated through the asymptotic behavior of a single integral , which gives \right\ } \sqrt{\dfrac{2\pi } { n}},\qquad \text { } x\rho > 1\ \end{array}\right .\notag\end{aligned}\ ] ] analogous procedures are used to estimate the sums over configurations of the kind and in ( [ config ] ) - and coincide in this limit : \right\ } \sqrt{\dfrac{2\pi } { n}},\quad x\rho > 1\end{array}\right .\notag\end{aligned}\ ] ] for the remaining sums indicated in ( [ config ] ) - and it is sufficient to estimate the relevant contributions to which are } \label{w02 - 02 comb(1)}\]]and } \label{somaw021 - 02 ( 0)}\ ] ] in the following , we analyze the results for the average velocity of a cargo obtained in this limit using the estimates above .the average velocity of a cargo in the system of interacting motors and cargos that obey the asep dynamics set in ( [ dinamica ] ) can now be analyzed observing the differences in the expressions obtained above for the integrals in each of the asymptotic regions limited by the range of the product of the two variables and such differences lead to distinct behaviors for characterizing different phases of the system that , in turn , reflect the differences in the distribution of motors along the considered microtubule .a a fixed value of such that and for is obtained from the behavior of the integrals for resulting in -p(1-\rho ) } { 1 + 4\left\vert \ln \rho x\right\vert + \left [ \ln \rho x\right ] ^{2 } } \label{<v(2)>(a)}\ ] ] within the complementary region in which is determined from the behavior of the integrals for we find notice that for any the condition ( a ) is always satisfied so that the results for are given in this case by ( [ < v(2)>(a ) ] ) within the entire interval thus , for small values of the system does not exhibits phase transitions .the behavior of is shown in _fig.2 _ for the whole range of motor density , at fixed and and at various values of ( x ) .we notice in these results that for varying and for slightly above the average velocity of the cargo changes sign .this means that at steady state , which may be achieved at sufficiently short times after an eventual change in motor density at the microtubule , cargos may adjust and change their direction of propagation moving across motor clusters .the mechanism for cargo transfer envisaged here is equivalent to a hopping process in which the associated rates depend on site occupation .because motors move and their movement is affected by the presence of the cargos and all other motors on the microtubule , the long - time dynamics of the system must be examined globally .our results indicate that the existence of mutual interactions and the fact that many cargos 
are allowed to coexist at the microtubule are determinant for reproducing in this context the characteristics of the bidirectional movement . we show that within a certain range of motor density a cargo in this system executes long - range displacements in both directions .we may argue then that long - range cargo transfer is facilitated by traffic and specifically , by the assembly of motors into clusters , which characterizes traffic jam .the presence of the other cargos in the system is essential for this to occur as they function as additional obstacles that interfere in the motor density profile .each cargo induces aggregation of motors at its back end . in turn , this provides the conditions for cargo to execute long - range displacements either backwards , over the aggregate assembled at its back end , as well as forwards , over the aggregate assembled at the back end of the cargo in front .as it was originally formulated the model does not account for the possibility that a motor with one or more attached cargos may move as well .in fact , this is the only mechanism that is usually employed to describe cargo transport and it is the basis for _ coordination _ or _ tug - of - war _ models .also , for simplicity we have considered interactions of a cargo with a single motor at a time . should a set of motors be allowed to interact with the cargo to participate in the transfer process then the map into the asep would need to be modified accordingly .we are currently working on these possibilities by including into the dynamics ( [ dinamica ] ) a process of the kind that recovers ergodicity of the model we should emphasize that the occurrence of the long - range bidirectional movement as a consequence of the hopping processes devised here may happen by the action of motors of just one kind possessing a well defined polarity .changes in motor density and related traffic profile suffice as a mechanism to control cargo direction and the size of the runs determined essentially by the extent of motor clusters at jamming conditions .this offers a rather straightforward explanation for the data mentioned above suggesting that the number of long run - lengths performed by the observed beads increases significantly as the density of motors at the microtubule increases beeg .in addition , the results presented here indicate that , for sufficiently high values of the motor density for which at the point where the model displays a phase transition , cargos would perform a uniform movement ( on average ) since their velocities become independent of such behavior has also been observed in the same set of experiments .as noticed by ma and chisholm , `` little is known regarding motor traffic and how it correlates with the movement of cargo '' . 
here ,we offer a possibility based on the idea that the transport does not require the action of an external agent to coordinate the process , or a _ tug - of - war _ mechanism or even the existence of a mechanical coupling between two kinds of motors as proposed more recently .instead , it suggests that such coordination can be achieved by collective effects on the course of the dynamics as the system `` self - organizes '' so that it presents characteristics that reflect an internal ( and global ) order that does not have its origin in the characteristics of the external medium .it is then possible that the necessary transport in cells is accomplished just by adjusting the density of motors at the microtubule .accordingly , the presence of processive motors of different polarities that are normally required to explain the movement of a putative motor - cargo complex would not be necessary .transportation here is based on a mechanism that requires formation of clusters of motors , not necessarily on their ability to travel along long distances .i would like to thank domingos h.u .marchetti for very helpful and enthusiastic discussions regarding the conditions imposed by fubini 's theorem and the procedure used here to estimate the double integrals ; and also elisa t. sena for pointing to me many of the difficulties in the initial developments of this work .i thank scott hines for kindly editing a preliminary version of the text .this work had integral support from fundao de amparo a pesquisa do estado de so paulo ( fapesp ) - brazil .i. m. kulic , a. e. x. brown , h. kim , c. kural , b. blehm , p. r. selvin , p. c. nelson , v. i. gelfand , the role of microtubule movement in bidirectional organelle transport , proc .usa 105 , 10011 - 10016 ( 2008 ) .b. derrida , m.r .evans , the asymmetric exclusion model : exact results through a matrix approach , nonequilibrium statistical mechanics in one dimension , university press , uk , chapt .14 , 277 - 304 ( 1997 ) ; b. derrida , m.r .evans , v. hakim , v. pasquier , an exact solution of a 1d asymmetric exclusion model using a matrix formulation , j. phys.a26 , 1493 - 1517 ( 1993 ) .* figure 1 - * dynamics of motors and cargos .( a ) * cargo transfer .* it happens here through a mechanism of hopping between neighbor motors .due to the flexibility of the tail , the attached cargo may display small oscillations leading to the possibility of it being caught either by the motor at its left or by the motor at its right .the corresponding processes 12 21 or 21 12 are represented in the figure .( b ) * the step of a motor*. the time spent by the motor with the two heads attached to the microtubule is much larger than the time it spends with just one of the heads attached , as a part of the `` hand - over - hand '' mechanism proposed to explain the kinetics of two - headed motor proteins .occupation of a site by a motor occurs here whenever it is occupied by the two heads of a motor .the motor step is then represented as which is indicated in the figure .
most models designed to study the bidirectional movement of cargos driven by molecular motors rely on the idea that motors of different polarities can be coordinated by external agents if arranged into a motor - cargo complex to perform the necessary work . although these models have provided important insights into the phenomenon , many questions remain open regarding the mechanisms through which the movement of the complex takes place on crowded microtubules . for example : ( i ) how does cargo binding affect motor motility ? and , in connection with that , ( ii ) how does the presence of other motors ( and also other cargos ) on the microtubule affect the motility of the motor - cargo complex ? we discuss these questions from a different perspective . the movement of a cargo is conceived here as a _ hopping process _ resulting from the transfer of cargo between neighboring motors . in light of this , we examine the conditions under which cargo might display bidirectional movement even if driven by motors of a single polarity . the global properties of the model in the long - time regime are obtained by mapping the dynamics of the collection of interacting motors and cargos into an asymmetric simple exclusion process ( asep ) , which can be resolved using the matrix _ ansatz _ introduced by derrida . _ keywords - _ intracellular transport by molecular motors ; bidirectional movement of cargo ; traffic jam on microtubules ; asep models .
in the real world , one single object may have different representations in different domains .for example , the declaration of independence has versions translated into different languages .let denote the number of objects , and be the number of domains .then we have where the object has measurements ; is the representation for object in space .the problem explored in this paper is that for new objects , how to classify their representations given the representations with .for this task , , described above are needed to learn the relation between and so that we can map data from and to a common space .thus are the domain relation learning training data . in our scenario, we are interested in a particular setting that the data to be classified is in separated classes different from the data used to learn the low dimensional manifold .this is shown in figure [ classification ] , where disks represent the domain relation learning training data and squares denote the classifier training and testing data . a classification rule trained on and applied on .we consider one domain relation learning method , canonical correlation analysis ( cca ) , which can be carried out using reduced - rank regression routines .we investigate classification performance in the common space obtained via cca , training the classifier on and testing on .the focus of this paper is not on optimizing the classifier ; rather , we investigate performance for a given clasifier ( 5-nearest neighbor ) as a function of the number of domain relation learning training data observation used to learn . the main contribution of this paper is an investigation of the notion of supplementing the training data of classifier by using data from other disparate sources / spaces .the structure of the paper is as follows : section [ background ] talks about related work .section [ method ] discusses the methods employed , including the manifold matching framework as well as embedding and classification details .experimental setup and results are presented in section [ results ] .section [ conclusion ] is the conclusion .different methods of transfer learning , multitask learning and domain adaptation are discussed in a recent survey .there are algorithms developed on unsupervised document clustering where training and testing data are of different kinds .the problem explored in this paper can be viewed as a domain adaptation problem , for which the training and testing data of the classifier are from different domains .when the classification is on the text documents in different languages , as described in the later sections of this paper , it is called cross - language text classification .there is much work on inducing correspondences between different language pairs , including using bilingual dictionaries , latent semantic analysis ( lsa ) features , kernel canonical correlation analysis ( kcca ) , etc .machine translation is also involved in the cross - language text classification , which translates the documents into a single domain .in this paper , we focus on manifold matching .the whole procedure can be divided into the following steps : 1 . for each single space , calculate the dissimilarity matrix for all domain relation learning training data observations .2 . for each , use multidimensional scaling ( mds ) on the dissimilarity matrix to get a euclidean representation .3 . run cca ( for ) or generalized cca ( ) to map the collection to a common space .4 . pursue joint inference ( i.e. 
classification ) in the common space .this procedure combines mds and ( generalized ) cca in a sequential way .firstly mds is applied to learn low - dimensional manifolds , then ( generalized ) cca is used to match those manifolds to obtain a common space .this paper focuses on manifold matching and it demonstrates the classification improvement via fusing data from additional space to learn the common low dimensional manifold .it is interesting to investigate how to generate the low dimensional space using all data instead of matching separate manifolds . but this requires calculating the dissimilarity information for the objects representation in different spaces properly for the multi - dimensional scaling purpose .this issue had been investigated , e.g. , , but there had not been any clear answer . the framework structure for manifold matchingis shown in figure [ model ] .for each of the objects , there are representations generated by the mappings .manifold matching works to find to map to a low - dimensional common space : after learning the , we can map a new measurement into the common space via : this allows joint inference to proceed in .the work described in this paper is based on dissimilarity measures .let denote the dissimilarity measure in the space , and be the euclidean distance in the common space .there are two kinds of mapping errors induced by the : fidelity error and commensurability error .fidelity measures how well the original dissimilarities are preserved in the mapping , and the fidelity error is defined as the within - condition squared error : commensurability measures how well the matchedness is preserved in the mapping , and the commensurability error is defined as the between - condition squared error : multidimensional scaling ( mds ) works to get a euclidean representation while approximately preserving the dissimilarities . given the dissimilarity matrix $ ] in space , multidimensional scaling generates embeddings for , which attempts to optimize fidelity , that is , .for the case , multidimensional scaling generates matrices from and from .the row vector of is the multidimensional scaling embedding for .canonical correlation analysis is applied to the multidimensional scaling results .canonical correlation works to find matrices and as the linear mapping method to maximize correlation for the mappings into , where two matices satisfy and .that is , for the ( ) dimension , the mapping process is defined by and , the column vector of and respectively .the orthonormal requirement on the columns of ( similarly ) implies that the correlation between different dimensions of the embedding is 0 .the correlation of the mapping data is calculated as which is equivalent to subject to and the constraint can be proved to be equivalent to for cca it holds . for new data , out - of - sample embedding for multidimensional scaling generates dimensional row vector .the final embeddings in the common space are given by and .canonical correlation analysis optimizes commensurability without regard for fidelity . for our work, first we use multidimensional scaling to generate a fidelity - inspired euclidean representation , and then we use canonical correlation analysis to enforce low dimensional commensurability . 
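steps 2 and 3 of the procedure above can be prototyped in a few lines . the sketch below ( python / numpy , illustrative names , with an optional ridge term anticipating the regularized variant mentioned later ) performs classical ( torgerson ) mds on a dissimilarity matrix and then a linear cca between the two resulting configurations ; the toy data at the end merely stand in for two matched embeddings .

```python
import numpy as np

def classical_mds(d, dim):
    """classical (torgerson) mds of an n x n dissimilarity matrix into `dim` dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j                        # double-centered squared dissimilarities
    w, v = np.linalg.eigh(b)
    order = np.argsort(w)[::-1][:dim]
    return v[:, order] * np.sqrt(np.clip(w[order], 0.0, None))

def cca(x, y, dim, reg=1e-6):
    """linear cca of two n x d_k configurations; returns projection matrices and correlations."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    n = x.shape[0]
    cxx = x.T @ x / n + reg * np.eye(x.shape[1])
    cyy = y.T @ y / n + reg * np.eye(y.shape[1])
    cxy = x.T @ y / n
    lx = np.linalg.cholesky(cxx)
    ly = np.linalg.cholesky(cyy)
    u, s, vt = np.linalg.svd(np.linalg.solve(lx, cxy) @ np.linalg.inv(ly).T)
    a = np.linalg.solve(lx.T, u[:, :dim])              # canonical directions for x
    b = np.linalg.solve(ly.T, vt[:dim].T)              # canonical directions for y
    return a, b, s[:dim]

if __name__ == "__main__":
    # toy stand-in for the two mds configurations of matched objects
    rng = np.random.default_rng(0)
    z = rng.normal(size=(300, 6))                      # shared latent structure
    x1 = z @ rng.normal(size=(6, 20)) + 0.1 * rng.normal(size=(300, 20))
    x2 = z @ rng.normal(size=(6, 30)) + 0.1 * rng.normal(size=(300, 30))
    d1 = np.linalg.norm(x1[:, None] - x1[None, :], axis=-1)
    d2 = np.linalg.norm(x2[:, None] - x2[None, :], axis=-1)
    e1, e2 = classical_mds(d1, 10), classical_mds(d2, 10)
    a, b, corr = cca(e1, e2, dim=6)
    print("leading canonical correlations:", np.round(corr, 3))
```

the projections `e1 @ a` and `e2 @ b` then live in the common space , where a classifier trained on one space 's embedding can be applied to the other 's .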
canonical correlation analysis is developed as a way of measuring the correlation of two multivariate data sets , and it can be formulated as a generalized eigenvalue problem .the expansion of canonical correlation analysis to more than two multivariate data sets is also available , which is called generalized canonical correlation analysis ( gcca ) .generalized canonical correlation analysis simultaneously find to map the multivariate data sets in spaces to the common space .similarly for the new data , we can get their representations in the common space as .similar to cca , the correlation of data in the mapping dimension is calculated as subject to gcca can be formulated as a generalized eigenvalue problem .different algorithms have been developed as the solution , e.g. least square regression .for the particular dataset used in our experiments , because it is not very large , we can perform eigenvalue decomposition on the respective matrices directly . given the measurements of new data points , ( generalized ) canonical correlation analysis in section [ embedding ] yields the embeddings in the common space . to classify , instead of using data points from the same space ( i.e. ) ,we consider the problem in which we must borrow the embeddings from another space for training , that is , .this problem is motivated by the fact that in many situations there is a lack of training data in the space where the testing data lie .we investigate the effect of the number of domain relation learning training data observations on the classification performance .our experiments apply canonical correlation analysis and its generalization to text document classification .the dataset is obtained from wikipedia , an open - source multilingual web - based encyclopedia with around 19 million articles in more than 280 languages .each document may have links pointing to other documents in the same language which explain certain terms in its content as well as the documents in other languages for the same subject .articles of the same subject in different languages are not necessarily the exact translations of one another .they can be written by different people and their contents can differ significantly .english articles within a 2-neighborhood of the english article `` algebraic geometry '' are collected .the corresponding french documents of those english ones are also collected .so this data set can be viewed as a two space case : is the english space and is the french space .there are in total 1382 documents in each space .that is , , and .note that includes both domain relation learning training data and new data points ( ) used for classification training and testing .all 1382 documents are manually labeled into 5 disjoint classes ( ) based on their topics .the topics are category , people , locations , date and math things respectively. there are 119 documents in class 0 , 372 documents in class 1 , 270 documents in class 2 , 191 documents in class 3 , and 430 documents in class 4 . the documents in classes are the domain relation learning training data .there are in total documents in those 3 classes ( ) .the ( ) documents in classes are the new data .they are used to train a classifier and run the classification test .the method described in section [ embedding ] starts with the dissimilarity matrix . 
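for more than two spaces , one common formulation of generalized cca ( a sum - of - correlations / maxvar - style generalized eigenvalue problem ) can be sketched as below ; the exact normalization used in this work may differ in detail , and the plain 5 - nearest - neighbor step mimics training on the embedding of one space and testing on the embedding of another for new objects .

```python
import numpy as np

def gcca(views, dim, reg=1e-3):
    """one gcca variant: generalized eigenproblem on the stacked covariance.
    views: list of n x d_k arrays for the same n matched objects."""
    views = [v - v.mean(axis=0) for v in views]
    n = views[0].shape[0]
    dims = [v.shape[1] for v in views]
    offs = np.cumsum([0] + dims)
    c = np.zeros((offs[-1], offs[-1]))                 # covariance of the stacked views
    d = np.zeros_like(c)                               # block-diagonal within-view part
    for i, vi in enumerate(views):
        for j, vj in enumerate(views):
            cij = vi.T @ vj / n
            c[offs[i]:offs[i + 1], offs[j]:offs[j + 1]] = cij
            if i == j:
                d[offs[i]:offs[i + 1], offs[j]:offs[j + 1]] = cij + reg * np.eye(dims[i])
    dw, dv = np.linalg.eigh(d)
    d_isqrt = dv @ np.diag(1.0 / np.sqrt(dw)) @ dv.T   # d^{-1/2}
    w, v = np.linalg.eigh(d_isqrt @ c @ d_isqrt)
    top = d_isqrt @ v[:, np.argsort(w)[::-1][:dim]]
    mats = [top[offs[i]:offs[i + 1]] for i in range(len(views))]
    return [vi @ m for vi, m in zip(views, mats)], mats

def knn_predict(train_x, train_y, test_x, k=5):
    """plain 5-nearest-neighbor majority vote in the common space."""
    dist = np.linalg.norm(test_x[:, None] - train_x[None, :], axis=-1)
    nn = np.argsort(dist, axis=1)[:, :k]
    return np.array([np.bincount(row).argmax() for row in train_y[nn]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=400)
    z = labels[:, None] + 0.5 * rng.normal(size=(400, 4))        # shared class structure
    views = [z @ rng.normal(size=(4, d)) + 0.2 * rng.normal(size=(400, d)) for d in (15, 20, 25)]
    embeds, _ = gcca(views, dim=3)
    # train the classifier on the view-1 embedding, test on the view-2 embedding of held-out objects
    pred = knn_predict(embeds[0][:300], labels[:300], embeds[1][300:])
    print("cross-space 5-nn accuracy: %.3f" % np.mean(pred == labels[300:]))
```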
for our worktwo different kinds of dissimilarity measures are considered : text content dissimilarity matrix and graph topology dissimilarity matrix .both matrices are of dimension , containing the dissimilarity information for all data points .graphs can be constructed to describe the dataset ; represents the set of vertices which are the 1382 wikipedia documents , and is the set of edges connecting those documents in language .the ( ) entry is the number of steps on the shortest path from document to document in . in the english space , , where the 4 comes from the 2-neighborhood document collection . in the french space , is the document in french corresponding to the document , and depends on the french graph connections .it is possible that . at the extreme end , when and are not connected .we set for . is based on the text processing features for documents .given the feature vectors , is calculated by the cosine dissimilarity . for our experiments ,we consider three different features for : mutual information ( mi ) features , term frequency - inverse document frequency ( tfidf ) features and latent semantic indexing ( lsi ) features .the wikipedia dataset used in the experiments are available online .see the paper for more details / description . to choose the dimension for the common space , we pick a sufficiently large dimension and embed and via multidimensional scaling . the scree plot for the mds embeddingis shown in fig [ eigplot_all ] ( term frequency - inverse document frequency features are used for the text dissimilarity calculation ) .based on the plots in figure [ eigplot_all ] , we choose for the dimension of the joint space , which is low but preserves most of the variance .this model selection choice of dimension is an important issue in its own right ; for this paper , we fix throughout . 
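the two dissimilarity measures used above can be prototyped as follows ( python / numpy ; documents are assumed to be token lists and each language graph an adjacency list , both illustrative assumptions about the data layout ) : shortest - path graph distances with disconnected pairs capped at one more than the largest finite distance , and one minus the cosine similarity of tf - idf vectors .

```python
import numpy as np
from collections import deque, Counter

def graph_dissimilarity(adj):
    """all-pairs shortest-path lengths by bfs; disconnected pairs are set to (max finite) + 1."""
    n = len(adj)
    d = np.full((n, n), np.inf)
    for s in range(n):
        d[s, s] = 0.0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not np.isfinite(d[s, v]):
                    d[s, v] = d[s, u] + 1.0
                    queue.append(v)
    cap = d[np.isfinite(d)].max() + 1.0
    d[~np.isfinite(d)] = cap
    return d

def cosine_dissimilarity(docs):
    """1 - cosine similarity of tf-idf vectors built from tokenized documents."""
    vocab = {w: i for i, w in enumerate(sorted({w for doc in docs for w in doc}))}
    tf = np.zeros((len(docs), len(vocab)))
    for k, doc in enumerate(docs):
        for w, c in Counter(doc).items():
            tf[k, vocab[w]] = c
    idf = np.log(len(docs) / np.maximum((tf > 0).sum(axis=0), 1))
    x = tf * idf
    x /= np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1e-12)
    return 1.0 - x @ x.T

if __name__ == "__main__":
    adj = [[1], [0, 2], [1], [4], [3]]           # two components, so the finite cap is applied
    docs = [["algebraic", "geometry", "scheme"],
            ["group", "theory", "algebraic"],
            ["paris", "france", "city"]]
    print(graph_dissimilarity(adj))
    print(np.round(cosine_dissimilarity(docs), 3))
```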
for the canonical correlation analysis step ,since it requires to multidimensional scale the dissimilarity matrices to at the beginning , as described in section [ embedding ] , when we choose different number of domain relation learning training documents , depends on .the choice of dimension is once again an important model selection problem ; for this paper , the values of with different are shown in table [ ccadim ] .we believe that the values of are chosen large enough to preserve most of the structure yet still small enough to avoid dimensions of pure noise which might deteriorate the following ( g)cca step .the second column indicates what percentage of the total manifold matching training data is used ..mds dimensions [ cols="^,^,^",options="header " , ] inferences regarding differences in the relative performance between competing methodologies ( as well as the seemingly non - monotonic performance across for a given methodology ) are clouded by the variability inherent in our performance estimates .however , these real - data experimental results nonetheless illustrate the general relative performance characteristics of cca and gcca and their regularized versions , as a function of .canonical correlation analysis and its generalization are discussed in this paper as a manifold matching method .they can be viewed as reduced rank regression , and they are applied to a classification task on wikipedia documents .we show their performance with manifold matching training data from different domains and different dissimilarity measures , and we also investigate their efficiency by choosing different amounts of manifold matching training data .the experiment results indicate that the generalized canonical correlation analysis , which fuses data from disparate sources , improves the quality of manifold matching with regard to text document classification . also , if we use regularized canonical correlation analysis and its generalization , we further improve performance . finally , increasing the amount of domain relation learning training data from to ( in the figures [ pic_selfidm_stopwordsrm_stem_lsi_cca ] , [ pic_selfidm_stopwordsrm_stem_tfidf_lsidim_cca ] , [ pic_selfidm_stopwordsrm_stem_mi_lsidim_cca ] , [ pic_selfidm_stopwordsrm_stem_lsi_halflsidim_cca ] , [ pic_selfidm_stopwordsrm_stem_tfidf_halflsidim_cca ] and [ pic_selfidm_stopwordsrm_stem_mi_halflsidim_cca ] ) of the available 819 documents yield approximately improvement in classification performance .this improvement is independent of the amount of training data available for the classifier .s. t. dumais , t. a. letsche , m. l. littman and t. k. landauer ._ automatic cross - language retrieval using latent semantic indexing_. 1em plus 0.5em minus 0.4emin aaai symposium on cross language text and speech retrieval , 1997 .b. fortuna and j. shawe - taylor . _the use of machine translation tools for cross - lingual text mining_. 1em plus 0.5em minus 0.4emin proceedings of the icml workshop on learning with multiple views , 2005 .d. r. hardoon , s. r. szedmak , and j. r. shawe - taylor ._ canonical correlation analysis : an overview with application to learning methods_.1em plus 0.5em minus 0.4emneural computation 16(12 ) : 2639 , 2004 .d. karakos , j. eisner , s. khudanpur and c. e. priebe ._ cross - instance tuning of unsupervised document clustering algorithms_. 
proceedings of the main conference human language technology conference of the north american chapter of the association for computational linguistics , 2007 . d. lin and p. pantel . _ concept discovery from text_. in proceedings of the 19th international conference on computational linguistics , ( morristown , nj , usa ) , pp . 1 - 7 , association for computational linguistics , 2002 . z. ma , d. marchette and c. e. priebe . _ fusion and inference from multiple data sources in a commensurate space_. statistical analysis and data mining , accepted for publication , january , 2012 . c. e. priebe , d. j. marchette , y. park , e. j. wegman , j. l. solka , d. a. socolinsky , d. karakos , k. w. church , r. guglielmi , r. r. coifman , d. lin , d. m. healy , m. q. jacobs , and a. tsao . _ iterative denoising for cross - corpus discovery_. in proceedings of the 2004 symposium on computational statistics ( invited talk ) , prague , august 23 - 27 , 2004 . c. e. priebe , z. ma , d. marchette , e. hohman and g. coppersmith . _ fusion and inference from multiple data sources_. the 57th session of the international statistical institute , durban , august 16 - 22 , 2009 . c. e. priebe , d. j. marchette , z. ma and s. adali . _ manifold matching : joint optimization of fidelity and commensurability_. brazilian journal of probability and statistics , accepted for publication , february , 2012 . j. via , i. santamaria and j. perez . _ canonical correlation analysis ( cca ) algorithms for multiple data sets : application to blind simo equalization_. 13th european signal processing conference , antalya , turkey , 2005 .
manifold matching works to identify embeddings of multiple disparate data spaces into the same low - dimensional space , where joint inference can be pursued . it is an enabling methodology for fusion and inference from multiple and massive disparate data sources . in this paper we focus on a method called canonical correlation analysis ( cca ) and its generalization generalized canonical correlation analysis ( gcca ) , which belong to the more general reduced rank regression ( rrr ) framework . we present an efficiency investigation of cca and gcca under different training conditions for a particular text document classification task . manifold matching , canonical correlation analysis , reduced rank regression , efficiency , classification .
the emergence of cooperation in evolving populations with exploitative individuals is still a challenging problem in biological and social sciences .most theories that explain cooperation are based on direct reciprocity , as the famous iterated prisoner s dilemma .cooperation can also arise from indirect reciprocity when agents help others only if these are known as sufficiently altruistic . in most of these modelsa finite population of agents is simulated , pairs of agents meet randomly as potential donator and receiver .a donation involves some cost to the donor while it provides a larger benefit to the receiver .agents reproduce depending on their payoffs after a certain number of such meetings .obviously selfish individuals that do not donate would quickly spread in the population if help is not channeled towards more cooperative players . if agents do not meet repeatedly as in a large population direct reciprocity does not work .indirect reciprocity can solve this problems when donations are given only to those individuals that are known as sufficiently helpful .this mechanism effectively protects a cooperative population against exploiters .riolo et al . introduced a model in which cooperation is not based on reciprocity , but on similarity . in this model donations are channeled towards individuals that are sufficiently similar to the donator . to distinguish between different groups of individuals every agent has a tag $ ] .school ties , club memberships , tribal costumes or religious creeds are all tags that induce cooperation .in addition agents have a tolerance threshold , which determines the tag interval that the agent classifies as its own group .an agent donates to another agent if their tags are sufficiently similar , .the cost of such a donation for is while the benefit for is . for simplicity, is normalized to 1 , since a multiplication of payoffs with a constant factor does not change the game .initially , the tag and the tolerance threshold are uniformly distributed random numbers . in each generationevery agent acts as a potential donor for other agents chosen at random .hence it is on average also chosen times as a recipient .after each generation each agent compares its payoff with the payoff of another randomly chosen agent and adopts and if has a higher payoff .in addition every agent is subject to mutation . with probability agent receives a new drawn from a uniform distribution and also with probability a new which is gaussian distributed with standard deviation around the old .if this new becomes smaller than zero it is set to .obviously , it seems to be the best strategy for an individual to donate as little as possible , i.e. to have a very small .however , the whole population would be better off if everybody would cooperate .this `` tragedy of the commons '' can be solved in different ways , e.g. by volunteering .riolo et al .solve this problem by channeling help towards others that are sufficiently similar to the donator . 
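to make the update rule concrete, a compact simulation sketch of one generation of the tag / tolerance model is given below; the population size, number of pairings, cost, mutation rate and the clipping of negative tolerances to zero are illustrative assumptions rather than the exact published settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, cost, benefit = 100, 3, 0.1, 1.0       # illustrative parameters
mut_rate, sigma = 0.01, 0.01

tag = rng.random(N)                           # tags in [0, 1]
tol = rng.random(N)                           # tolerance thresholds

def one_generation(tag, tol):
    payoff = np.zeros(N)
    for donor in range(N):                    # each agent acts as potential donor P times
        for rec in rng.choice(N, size=P, replace=False):
            if rec != donor and abs(tag[donor] - tag[rec]) <= tol[donor]:
                payoff[donor] -= cost
                payoff[rec] += benefit
    # imitation: adopt tag and tolerance of a random agent with higher payoff
    models = rng.integers(N, size=N)
    better = payoff[models] > payoff
    new_tag = np.where(better, tag[models], tag)
    new_tol = np.where(better, tol[models], tol)
    # mutation of tags and tolerances (negative tolerances clipped to zero)
    flip = rng.random(N) < mut_rate
    new_tag = np.where(flip, rng.random(N), new_tag)
    flip = rng.random(N) < mut_rate
    new_tol = np.where(flip, np.maximum(new_tol + sigma * rng.standard_normal(N), 0.0), new_tol)
    return new_tag, new_tol

for _ in range(10):                           # a few generations
    tag, tol = one_generation(tag, tol)
```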
instead of a cooperative population the formation and decay of cooperative clustersis observed for certain parameter ranges ( high and low , see fig .[ riolo ] ) .the average tolerance of a cooperative cluster grows slowly over time .occasionally it declines sharply .this decline occurs when the cluster is exploited by agents that are sufficiently similar to the cluster s agents to get support , but do not help themselves .however , the mechanism that generates these tides of tolerance remained unclear .[ ] [ ] [ 1][180]tolerance [ ] [ ] [ 1][180]donationrate , , ).,title="fig : " ] , , ).,title="fig : " ] here we develop a minimal model for tag - based cooperation that displays these `` tides of tolerance '' if there is a net average drift towards more cooperation .we find that these fluctuations vanish if such a drift is not included in the model .the importance of this observation stems from the fact that if we have species that can distinguish between themselves and others and donate only to others with the same tag , then this would in the long run lead to a single group of cooperating species having a single tag .but if we introduce a small rate of biased conversions from intolerant to tolerant species we observe a waxing and waning in time of species with different tags .in other words , the small conversion rate leads to a coexistence of different species where different species appear cyclically at different times .this consitutes a new mechanism that generates biodiversity in a group of competing species .this paper is organized as follows .first the model of riolo et al .is simplified in order to allow an analytical treatment .then the system without the effects of mutations is analyzed .thereafter we introduce a drift that increases the tolerance and leads to oscillations of tolerance .we show that the truncated mutations in the model of riolo et al .also lead to such a drift .here we simplify the model of riolo et al . in order to allow for an analytical treatment . in a first step we restrict the game to only two tags , red and blue .similarly we allow only two tolerances .the agents can either only donate to others bearing the same tag if they have zero tolerance or to every other agent ( ) .this leads to four possible strategies .then we allow partners to donate and to receive in an single interaction instead of defining different roles for donators and receivers .we end up with the payoff matrix [ cols="<,^,^,^,^",options="header " , ] which describes the prisoners dilemma .the stability of the fixed points with only one tag can be calculated as follows . for and find the jacobian matrix with the eigenvalues , and .the fixed point is marginal stable as long as , for it becomes unstable .the reasoning can be adopted for the fixed line .a fixed point that is conserved for can be found if all players are tolerant . for jacobian matrix is given by where and .the eigenvalues of this matrix are , and . ( ) is not possible for .hence the fixed line is instable for .for there is an interval of stability given by .if this inequation and are both fulfilled by , the biased conversions ensure stability of the fixed point although the replicator dynamics alone would make this point instable .the first inequation can only be fulfilled for .for it is always fulfilled and the whole fixed line is stable . 
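the stability statements above amount to checking whether the jacobian eigenvalues of the update map lie inside the unit circle. since the payoff matrix is not reproduced here, the sketch below stays generic: a numerical jacobian and a unit-circle test for an arbitrary discrete-time map, with the map g left as a placeholder.

```python
import numpy as np

def numerical_jacobian(G, x, h=1e-7):
    """Central-difference Jacobian of a map G: R^n -> R^n at the point x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((x.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size)
        e[j] = h
        J[:, j] = (np.asarray(G(x + e)) - np.asarray(G(x - e))) / (2.0 * h)
    return J

def linearly_stable(G, x_star):
    """True if every eigenvalue of the Jacobian at the fixed point x_star
    lies strictly inside the unit circle."""
    eig = np.linalg.eigvals(numerical_jacobian(G, x_star))
    return bool(np.all(np.abs(eig) < 1.0)), eig
```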
the fixed point given by reduces to the mixed nash equilibrium for .the jacobi matrix at this fixed point is the eigenvalues of this matrix are where .for we have and .the third eigenvalue corresponds to an instable direction .the corresponding eigenvector is , which is the normal of the separatrix for . in the case of we have for only if .hence becomes stable where it coincides with the fixed points described in appendix [ addfpstab ] . in all other cases ,at least one eigenvalue of is outside the unit circle .numerical simulations show that the additional fixed points for can always be found in the plane spanned by ( ) , ( ) and ( ) .together with and we have three equations that describe these points .two of the solutions are fixed points not described above .the first fixed points can be written as where and . can be calculated by exchanging with and with .these fixed points have only real coordinates for . for have .the eigenvalues of the jacobi matrix at the fixed points can be calculated numerically . for fixed points are only stable if . at collapse with in a supercritical pitchfork bifurcation and form a single stable fixed point .for the fixed points are the only stable attractors and the order measures described in section [ cooperationcost ] can be calculated analytically .we find for for the fixed point becomes stable and we find , and .99 r. axelrod , _ the evolution of cooperation _ ( basic books , new york , 1984 ) m. a. nowak and k. sigmund , nature * 393 * , 573 ( 1998 ) r. l. riolo , m. d. cohen , and r. axelrod , nature * 414 * , 441 ( 2001 ) c. hauert , s. de monte , j. hofbauer and k. sigmund , nature * 296 * , 1131 ( 2002 ) g. szab and c. hauert , phys .lett . * 89 * 118101 ( 2002 ) g. szab and c. hauert , phys .e. * 66 * , 062903 ( 2002 ) k. sigmund and m. a. nowak , nature * 414 * , 403 ( 2001 ) h. g. schuster , _ complex adaptive systems _ ( scator verlag , saarbrcken , 2002 ) j. hofbauer and k. sigmund , _ evolutionary games and population dynamics _( cambridge univ . press ,cambridge , 1998 ) j. guckenheimer and p. holmers , _ nonlinear oscillations , dynamical systems and bifurcation of vector fields _ ( springer , new - york , 1983 ) h. g. schuster , _ deterministic chaos . an introduction . _( wiley - vch , weinheim , 1995 ) m. frean and e. r. abraham , proc .b * 268 * , 1323 ( 2001 ) b. kerr , m. a. riley , m. w. feldman , and b. j. m. bohannan , nature * 418 * , 171 ( 2002 ) a. traulsen and h. g. schuster , in preparation g. roberts and t. n. sherratt , nature * 418 * , 499 ( 2002 )
recently , riolo et al . [ r. l. riolo et al . , nature * 414 * , 441 ( 2001 ) ] showed by computer simulations that cooperation can arise without reciprocity when agents donate only to partners who are sufficiently similar to themselves . one striking outcome of their simulations was the observation that the number of tolerant agents that support a wide range of players was not constant in time , but showed characteristic fluctuations . the cause and robustness of these tides of tolerance remained to be explored . here we clarify the situation by solving a minimal version of the model of riolo et al . it allows us to identify a net surplus of random changes from intolerant to tolerant agents as a necessary mechanism that produces these oscillations of tolerance which segregate different agents in time . this provides a new mechanism for maintaining different agents , i.e. for creating biodiversity . in our model the transition to the oscillating state is caused by a saddle node bifurcation . the frequency of the oscillations increases linearly with the transition rate from tolerant to intolerant agents .
in this paper we study the dynamics of two different incompressible fluids with the same viscosity in a bounded porous medium .this is known as the confined muskat problem . for this problemwe show that there are global in time lipschitz continuous solutions corresponding to initial data that fulfills some conditions related to the amplitude , slope and depth .this problem is of practical importance because it is used as a model for a geothermal reservoir ( see and references therein ) or a model of an aquifer or an oil well ( see ) .the velocity of a fluid flowing in a porous medium satisfies darcy s law ( see ) where is the dynamic viscosity , is the permeability of the medium , is the acceleration due to gravity , is the density of the fluid , is the pressure of the fluid and is the incompressible velocity field . to simplify the notation we assume the motion of a fluid in a two - dimensional porous medium is analogous to the hele - shaw cell problem ( see and the references therein ) .let us consider the spatial domain for .we assume impermeable boundary conditions for the velocity in the walls . in this domainwe have two immiscible and incompressible fluids with the same viscosity and different densities ; fills the upper subdomain and fills the lower subdomain ( see figure [ ivscheme ] ) .the graph is the interface between the fluids .it is well - known that the system is in the ( rayleigh - taylor ) stable regime if the denser fluid is below the lighter one in every point , _ i.e. _ .conversely , the system is in the unstable regime if there is at least a point where the denser fluid is above the lighter one .if the fluids fill the whole plane the contour equation satisfies ( see ) for this equation the authors show the existence of classical solution locally in time ( see and also ) in the rayleigh - taylor stable regime , and maximum principles for and ( see ) .moreover , in the authors show the existence of turning waves and finite time singularities . in authors show an energy balance for the norm and some results concerning the global existence of solutions corresponding to _ small _ initial data .furthermore , they show that if initially , then there is global lipschitz solution and if the initial data has small norm then there is global classical solution . the case where the fluid domain is the strip , with , has been studied in . in this domainthe equation for the interface is for equation the authors in obtain local existence of classical solution when the system starts its evolution in the stable regime and the initial interface does not reach the walls , and the existence of initial data such that blows up in finite time .the authors also study the effect of the boundaries on the evolution of the interface , obtaining the maximum principle and a decay estimate for and the maximum principle for for initial data satisfying the following hypotheses : and these hypotheses are smallness conditions relating , and the depth .we define as the solution of the system then , for initial data satisfying the authors in show that these inequalities define a region where the slope of the solution can grow but it is bounded uniformly in time .this region only appears in the finite depth case . 
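for reference , the two - dimensional form of darcy s law used throughout this family of muskat problems is usually written as below ; this is a standard form stated here as an assumption , since the displayed equation above did not survive extraction , and the normalization mentioned in the text sets the physical constants to one .

\[
\frac{\mu}{\kappa}\, v(x_1,x_2,t) \;=\; -\nabla p(x_1,x_2,t) \;-\; g\,\rho(x_1,x_2,t)\,(0,1),
\qquad \nabla \cdot v = 0 .
\]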
in this paperthe question of global existence of weak solution ( in the sense of definition [ ivdefi ] ) for in the stable regime is adressed .in particular we show the following theorem : [ ivglobal ] let be the initial datum satisfying hypotheses , and or in the rayleigh - taylor stable regime .then there exists a global solution moreover , if the initial data satisfy , and the solution fulfills the following bounds : while , if the initial datums satisfy , the solution satisfies the following bounds : this result excludes the formation of cusps ( blow up of the first and second derivatives ) and turning waves for these initial data , remaining open the existence ( or non - existence ) of corners ( blow up of the curvature with finite first derivative ) during the evolution .notice that in the limit we recover the result contained in .in this paper and the works the effect of the boundaries over the evolution of the internal wave in a flow in porous media has been addressed . when these results for the confined case are compared with the known results in the case where the depth is infinite ( see ) three main differencesappear : 1 .the decay of the maximum amplitude is slower in the confined case .2 . there are smooth curves with finite energy that turn over in the confined case but do not show this behaviour when the fluids fill the whole plane .3 . to avoid the turning effect in the confined case you need to have smallness conditions in and .however , in the unconfined case , only the condition in the slope is required . moreover , in the confined case a new region without turning effect appears : a region without a maximum principle for the slope but with an uniform bound . in both cases (the region with the maximum principle and the region with the uniform bound ) , theorem [ ivglobal ] ensures the existence of a global lipschitz continuous solution . keeping these results in mind , there are some questions that remain open . for instance, the existence of a wave whose maximum slope grows but remains uniformly bounded , or the existence of a wave with small slope such that , due to the distance to the boundaries , its slope grows and the existence ( or non - existence ) of corner - like singularities when the initial data considered is small in .the proof of theorem [ ivglobal ] is achieved using some lemmas and propositions .first , we define _hoc _ diffusive operators and the regularized system ( see section [ ivsec1 ] ) . for this regularized system , we show some _ a priori _ bounds for the amplitude and the slope . with these _a priori _ bounds we show global existence of solution ( see section [ ivsecglobal ] ) .then , we obtain the weak solution to , , as the limit of the regularized solutions ( see sections [ ivsec4 ] and [ ivsec5 ] ) . on the rest of the paperwe take and and we drop in the notation the dependence .we write for a universal constant that can change from one line to another . we denote . ] .thus , we need to obtain _priori _ bounds for the amplitude and the slope .we define positive constants that will be fixed below depending only on the initial datum considered . taking derivatives in , we obtain some terms with positive contribution .so , we attach some diffusive operators to the regularized system . given a smooth function , we define we notice that , if the depth is not , the previous operators should be rescaled and we write the subscript to keep this dependence in mind .these operators are finite depth versions of the classical . 
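as a purely illustrative aside ( the grid, the particular bump function and the names are assumptions ), the mollified initial datum used to start the regularized system can be produced numerically as follows.

```python
import numpy as np

def bump(x):
    """Compactly supported symmetric mollifier on (-1, 1)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def mollify(f0, x, eps):
    """Discrete convolution of samples f0 on a uniform grid x (symmetric about 0)
    with the rescaled mollifier J_eps(x) = eps^{-1} J(x / eps)."""
    dx = x[1] - x[0]
    k = bump(x / eps) / eps
    k /= k.sum() * dx                          # enforce unit mass on the grid
    return np.convolve(f0, k, mode="same") * dx
```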
roughly speaking , there are three different types of _ extra _ terms appearing in the derivatives of and that we need to control to obtain the _ a priori _ bound for the slope : 1. there are terms which have an integrable singularity and they appear multiplied by . in order to handle these termswe add and .these two scales , , appear naturally due to the nonlinearity present in .there are terms which are nonlinear versions of and .these terms go to zero due to the convergence of the operators but they are not multiplied by . in order to handle these terms we add and .3 . to absorb the nonsingular terms we add . we notice that , as , the square root converges to zero less than linearly .this factor will be used because the contribution of some terms is with . once the _a priori _ bounds are achieved , we should prove global solvability in for the regularized system . to get this boundwe add .we also regularize the initial datum . we take , and , a symmetric mollifier and define .given we define the initial datum for the regularized system as putting all together , we define the regularized system where are universal constants that will be fixed below depending only on the initial datum .we remark that for all .notice that , due to the continuity of , uniformly on any compact set in . since , we get and then , as as , we have a.e . thus , we have and . furthermore , we have that if satisfies the hypotheses , and , also satisfy these hypotheses if is small enough .moreover , if satisfy the same remains valid for and if is small enough .we use some properties of the operators .for the reader s convenience , we collect them in the following lemma : [ ivops ] for the operators ( see ) , the following properties hold : 1 . is -symmetric . is positive definite .3 . let be a schwartz function .then , they converge acting on as goes to zero : 4 .let be a schwartz function .then , the derivative can be written in two different forms as the proof of the first two statement follows from .for the proof of the third part we recall some useful facts : if , due to the mean value theorem , we get and now the proof follows in a straightforward way . for the last statement we use the cancellation coming from the principal value to define using the uniform convergence of the derivative , we conclude the result . in this sectionwe prove an _ a priori _ bound for . to simplify notationwe define [ ivmpf ] let be the initial datum in , define as in and let be the classical solution of corresponding to the initial datum .then verifies moreover , if has a sign then this sign is preserved during the evolution of . changing variables and taking the derivative we obtain that is equivalent to if we define .then we have ( see for the details ) . 
if we write and we get .we compute by notational convenience we use the notation and we define evaluating in we have using the definition of and classical trigonometric identities we have putting together all the terms in , we obtain [\tan^2(\sigma)+|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]^{-1}}{[(\tan(\sigma)+\tan(\theta))^2+(1-\tan(\sigma)\tan(\theta))^2|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}\\ + \frac{2\tan^2(\sigma)\tan(\theta)[1-|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}{[\tan^2(\sigma)+|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)][(\tan(\sigma)+\tan(\theta))^2+(1-\tan(\sigma)\tan(\theta))^2|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}\\ + \frac{\tan^2(\sigma)\tan(\theta)[1+\tan^2(\theta)|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}{[\tan^2(\theta)+|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)][(\tan(\sigma)+\tan(\theta))^2+(1-\tan(\sigma)\tan(\theta))^2|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}\\ + \frac{2\tan(\sigma)\tan^2(\theta)[1-|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}{[\tan^2(\theta)+|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)][(\tan(\sigma)+\tan(\theta))^2+(1-\tan(\sigma)\tan(\theta))^2|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)]}\\ + \frac{(\tan(\sigma)+\tan(\theta))\tan(\sigma)\tan(\theta)}{(\tan(\sigma)+\tan(\theta))^2+(1-\tan(\sigma)\tan(\theta))^2|\tanh|^{2 - 2\epsilon}\left(\frac{\eta}{2}\right)}.\end{gathered}\ ] ] assuming that , then and we obtain and . in the case , we have and we get and . integrating this in time , we get where in the last step we use the definition . in order to prove that the initial sign propagates we observe that if is positive ( respectively negative ) the same remains valid for .assume now that and suppose that the line is reached ( if this line is not reached at any time we are done ) .we write .we have , and we get and . if we denote .we have and . integrating in timewe conclude the result . in this sectionwe prove an _ a priori _ bound for .we define where and are defined in and is a critical point for .we will use some bounds for and , for the reader s convenience , we collect them in the following lemma : [ ivlemmu ] let be an initial datum that fulfills , and ( or ) , and let be the solution with initial datum defined in . then for following inequalities hold 1 .if , due to , we have 2 .if , we get latexmath:[\[\label{ivmu1 } 3 . if and is the point where reaches its maximum , 4. if and to prove this lemma we use the following splitting taylor s theorem and the appropriate bounds using proposition [ ivmpf ] .first , we assume .notice that we can take small enough to ensure that defined in also fulfills the hypotheses , and . from ,taking one derivative and using lemma [ ivops ] , we get where is the integral corresponding to , is the integral corresponding to and this extra term appear from the regularization present in both .we have where with and the second term is given by where we compute with where and the second term is given by we need to obtain the local decay for . assuming the classical solvability for with an initial datum fulfilling the hypotheses , and we have that also fulfills , and if is small enough .recall that and .the linear terms in have the appropriate sign and they will be used to control the the positive contributions of the nonlinear terms . we need to prove that . 
for the sake of simplicity , we split the proof of this inequality in different lemmas .[ ivlemdf1 ] if , we have using the linear term to control , we have if . due to , we have .then , the term is the term is this kind of terms will be absorbed by .we have to deal with .we start with the term corresponding to in .we write [ ivlemdf2 ] if , we have we split since is small enough to ensure that the hypotheses , and hold at time , we have that , if , is not singular and can be bounded using and : we compute with and using the mean value theorem ,we bound the inner term as due to , the outer term is putting all together , we obtain then , using the diffusion given by to control , we get due to and , some terms have the appropriate sign : thus we can neglect their contribution .furthermore , we have taking and using the mean value theorem , we get combining these terms we conclude this result .the term corresponding to in is [ ivlemdf3 ] if , we have the proof follows the same ideas as in lemma [ ivlemdf2 ] .we are done with , thus , using the previous bound for , we are done with in .the terms in are not multiplied by and we have to obtain this decay from the integral .we write [ ivlemdf4 ] we have we have with and the term is not singular and can be bounded using and as follows : we can bound in the same way , we split the term as follows where and to bound we need to use the diffusion coming from .notice that , according to lemma [ ivops ] , we have and , when evaluating in the point where reaches its maximum , the first two terms are positive and they can be neglected .we get where in the last step we have used the previous splitting in and , and .this concludes the result .now that we have finished with , the term with is we have [ ivlemdf5 ] if , we have the proof is similar to the proof of lemma [ ivlemdf4 ] and , for the sake of brevity , omit it . in order to finish bounding in , we have to bound the term this term , akin to the singular term in , is bounded using the hypotheses and .[ ivlemdf6 ] using , and , we obtain using classical trigonometric identities we can write and therefore , as in , the sign of is the same as the sign of the roots of are and , so , if we have then we can ensure that this contribution is negative .since , we get using the cancellation when , we obtain where we remark that .we consider the cases given by the sign and the size of . _1 . case : _ in this case , we have and . using the definition of in and the fact that , we have ( see lemma [ ivlemmu ] ) .notice that , in this case , we have and we get .due to and we obtain _ 2 . case : _ in this case we have and .therefore , we get and we can neglect it .case : _ we remark that in this case we have and .we split the last term is now positive due to the definition of .then , in this case , we have and we can neglect its contribution .using taylor s theorem in we obtain the bound and .we are done with in and now we move on to .these terms are easier because the integrals are not singular . with the same ideas as before we can bound the term involving : [ ivlemdf7 ] the contribution of bounded by the proof is straightforward .we are left with in .first , we consider [ ivlemdf8 ] the term is bounded as using classical trigonometric identities , we compute we have to bound the terms containing . 
these terms are to obtain the decay with we split the integral in the regions and as before .[ ivlemdf9 ] the terms and are bounded by using this splitting , , , and , we get with the same ideas and using , we have in order to estimate the decay with of these integrals we compute and we have the following result concerning the evolution of the slope : [ ivmpdf ] let be the initial datum in satisfying , and , define as in and let be the classical solution of corresponding to the initial datum .then verifies for the sake of simplicity we split the proof in different steps .* step 1 ( local decay ) : * combining in and in lemma [ ivlemdf8 ] , and using the bounds and and the hypothesis we obtain we take , , . since we have a term and , we can compare the bounds in lemmas [ ivlemdf1]- [ ivlemdf9 ] with if is chosen big enough .the universal constant in all these bounds can be .we have shown that for every small enough , there is local in time decay .as is positive and arbitrary , we have * step 2 ( from local decay to an uniform bound ) : * then , in the worst case , we have these inequalities ensure that the hypotheses , and hold at time and decays again .* step 3 ( the case where ) : * this case follows the same ideas , and we conclude , thus , the result .[ ivubdf ] let be the initial datum in satisfying and define as in .let be the classical solution of corresponding to the initial datum .then , verifies the region delimited by is below the region with maximum principle ( see ) .then , in the worst case , at some we have that fulfills the hypotheses , and . from themthe result follows .in this section we obtain _ a priori _ estimates in that ensure the global existence for the regularized systems for initial data satisfying hypotheses , and or .first , notice that if the initial datum satisfies hypotheses , and , by propositions [ ivmpf ] and [ ivmpdf ] , the solution satisfies if the initial datum satisfies , by propositions [ ivmpf ] and [ ivubdf ] , the solution to the regularized system again satisfies the bounds .then we have the following proposition : [ existence ] let be the initial datum in satisfying , and or and define as in .then for every and there exists a solution ,h^3({\mathbb r})) ] .moreover , up to a subsequence , in for all compact set .first , notice that , due to propositions [ ivmpf ] , [ ivmpdf ] and [ ivubdf ] and hypotheses , and , the regularized solutions satisfy while , if the initial datum , instead of hypotheses , and , satisfies then due to the banach - alaoglu theorem , these bounds imply that there exists a subsequence such that and with ,w^{1,\infty}({\mathbb r})) ] and every . fixing , due to the uniform bound in and the ascoli - arzela theorem we have that , up to a subsequence , uniformly on any bounded interval .moreover , for all , we have )}\rightarrow0.\ ] ] in order to prove this uniform convergence on compact sets we use the spaces and results contained in . for , we define the norm we define the banach space as the completion of with respect to the norm .we have the embedding is continuous and , due to the ascoli - arzela theorem , the embedding is compact .we use the following lemma [ ivlemapaco ] consider a sequence \times b(0,n)) ] . assume further that the weak derivative is in ,l^\infty(b(0,n))) ] .then there exists a subsequence of that converges strongly in \times b(0,n)). ] ( not uniformly ) and in ,w^{-2,\infty}_*(b(0,n))) ], the linear terms in can be bounded easily with a bound depending on . 
to bound the nonlinear terms we split the integral and we compute where we have used , , and the second term with the kernel involving is the terms with the kernel involving not singular and can be bounded following the same ideas and putting together all these estimates we get thus we conclude with the bound in \times b(0,n)) ]we extend by zero outside of this ball of radius .then , using lemma [ ivops ] , we _ integrate by parts _ and obtain and using we bound the linear terms in as being a universal constant .the nonlinear terms are and using the boundedness of , we get the outer part is not singular and can be bounded ( as it was done before ) applying .we get putting together all these bounds we obtain }\|{\partial_t}f(t)\|_{w^{-2,\infty}_*(b(0,n))}\leq c\left(\|f_0\|_{l^\infty({\mathbb r})}\right).\ ] ] using lemma [ ivlemapaco ] , we conclude the result .looking at we give the following definition first , we deal with the linear terms . using the weak- * convergence in ,w^{1,\infty}({\mathbb r}))$ ] and lemma [ ivops ], we obtain and where , in the last step , we use the dominated convergence theorem and the convergence of the mollifier . to deal with the nonlinear terms we split the integrals for sufficiently small and large enough .these parameters , , that will be fixed below , can depend on but they do nt depend on . for the inner part of the integrals , we get \times{\mathbb r})}.\end{gathered}\ ] ] the outer integral goes to zero as grows .we compute as , the integrals are not singular and we only have to deal with the decay at infinity . using , , , the bound , integrating by parts and using the extra decay coming from the principal value at infinity ( see , for instance , the term in proposition [ existence ] in section [ ivsecglobal ] ) , we have the only thing to check is the convergence of . due to the compactness of the support of , we have with large enough to ensure . since we have ( up to a subsequence ) that uniformly on compact sets ( see lemma [ ivlemafeps ] ) , the uniform convergence if and the continuity of all the functions in this integral , the limit in and the integral commute and we get
in this paper we show global existence of lipschitz continuous solutions for the stable muskat problem with finite depth ( confined ) and initial data satisfying some smallness conditions relating the amplitude , the slope and the depth . the cornerstone of the argument is that , for these _ small _ initial data , both the amplitude and the slope remain uniformly bounded for all positive times . we notice that , for some of these solutions , the slope can grow but it remains bounded . this is very different from the infinitely deep case , where the slope of the solutions satisfies a maximum principle . our work generalizes a previous result where the depth is infinite . * keywords * : darcy s law , inhomogeneous muskat problem , well - posedness . * acknowledgments * : the author is supported by the grant mtm2011 - 26696 from ministerio de economía y competitividad ( mineco ) . the author thanks david paredes and professors diego córdoba and rafael orive for comments that greatly improved the manuscript . the author is grateful to reviewers for their helpful suggestions .
there have been many generalizations of conway s `` game of life '' ( gol ) since its invention in 1970 .almost all attributes of the gol can be altered : the number of states , the grid , the number of neighbors , the rules .one feature of the original gol is the glider , a stable structure that moves diagonally on the underlying square grid .there are also `` spaceships '' , similar structures that move horizontally or vertically .attempts to construct gliders ( as we will call all such structures in the following ) , that move neither diagonally nor straight , have led to huge man - made constructions in the original gol .an other possibility to achieve this has been investigated by evans , namely the enlargement of the neighborhood .it has been called `` larger than life '' ( ltl ) . instead of 8 neighborsthe neighborhood is now best described by a radius , and a cell having neighbors .the rules can be arbitrarily complex , but for the start it is sensible to consider only such rules that can be described by two intervals .they are called `` birth '' and `` death '' intervals and are determined by two values each .these values can be given explicitly as the number of neighbors or by a filling , a real number between 0 and 1 .in the first case , the radius has to be given , too , in the last case , this can be omitted .the natural extension of evans model is to let the radius of the neighborhood tend to infinity and call this the continuum limit .the cell itself becomes an infinitesimal point in this case .this has been done by pivato and investigated mathematically .he has called this model `` reallife '' and has given a set of `` still lives '' , structures that do not evolve with time .we take a slightly different approach and let the cell not be infinitesimal but of a finite size .let the form of the cell be a circle ( disk ) in the following , although it could be any other closed set .then , the `` dead or alive '' state of the cell is not determined by the function value at a point , but by the filling of the circle around that point .similarly , the filling of the neighborhood is considered .let the neighborhood be ring shaped , then with our state function at time we can determine the filling of the cell or `` inner filling '' by the integral and the neighborhood or `` outer filling '' by the integral where and are normalization factors such that the filling is between 0 and 1 . because the function values of lie also between 0 and 1 the factors simply consist of the respective areas of disk and ring .the radius of the disk or `` inner radius '' is given by which is also the inner radius of the ring .the outer radius of the ring is given by .in the original gol the state of a cell for the next time - step is determined by two numbers : the live - state of the cell itself , which is 0 or 1 , and the number of live neighbors , which can be between 0 and 8 .one could model all general rules possible by a matrix containing the new states for the respective combinations .it could be called the transition matrix . now in our case this translates to the new state of the point being determined by the two numbers and .the new state is given by a function .let us call it the transition function .it is defined on the interval \times [ 0,1] ] . 
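a minimal numerical sketch of the two fillings defined above, with the state sampled on a periodic square grid and the disk and ring kernels normalized by their areas; the fft-based convolution and the specific radii are implementation assumptions, not the authors' code.

```python
import numpy as np

def filling_kernels(N, ri, ra):
    """Normalized disk (radius ri) and ring (ri..ra) kernels on an N x N grid."""
    y, x = np.indices((N, N))
    r = np.hypot(x - N // 2, y - N // 2)
    disk = (r <= ri).astype(float)
    ring = ((r > ri) & (r <= ra)).astype(float)
    return disk / disk.sum(), ring / ring.sum()    # fillings then lie in [0, 1]

def fillings(f, disk, ring):
    """Inner filling m and outer filling n at every grid point (periodic domain)."""
    F = np.fft.rfft2(f)
    conv = lambda k: np.fft.irfft2(F * np.fft.rfft2(np.fft.ifftshift(k)), s=f.shape)
    return conv(disk), conv(ring)

N = 256
f = np.random.default_rng(1).random((N, N))        # initial state in [0, 1]
disk, ring = filling_kernels(N, ri=7.0, ra=21.0)   # outer radius = 3 * inner radius
m, n = fillings(f, disk, ring)
```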
to resemble the corresponding situation in gol ,typically is chosen ( the diameter of the neighborhood is 3 cells wide ) .as simple as the theoretical model is , it is not immediately obvious , how to implement it on a computer , as a computer can not handle infinitesimal values , continuous domains , etc .but it can handle real numbers in the form of floating point math , and as it turns out , this is sufficient .we also can model the continuous domain by a square grid , the ideal data structure for computation .so we will be able to implement our function as a array . when implementing the circularly shaped integrals we run into a problem .pixelated circles typically have jagged rims .so either we let the radius of the circle be so huge , that the pixelation due to our underlying square grid is negligible .then the computation time will be enormous .or we use another solution used in many similar situations : anti - aliasing .consider for example the integration of the inner region .for the cell function values are taken at locations .let us define . with an anti - aliasing zone around the rim of width take the function value as it is , when . in the casewhen we take 0 . in betweenwe multiply the function value by . similarly for the inner rim of the ring and the outer rim . in this waythe information on how far the nearest grid point is away from the true circle , is retained .typically , is chosen .we also have to construct the transition function explicitly .luckily we can restrict ourselves like ltl , for the beginning , to four parameters : the boundaries of the birth and death intervals . to make things smooth and to stay in the spirit of the above described anti - aliasing we use smooth step functions instead of hard steps .we call them sigmoid functions to emphasize this smoothness. for example we could define then we can define the transition function as where birth and death intervals are given by ] respectively .the width of the step is given by .as we have two different types of steps we have an and an .note that neither the anti - aliasing nor the smoothness of the transition function are necessary for the computer simulation to work .they just make things smoother and allow one to choose smaller radii for neighborhood and inner region and so achieve faster computation times for the time - steps .so far we have made everything smooth and continuous but one thing : the time - steps are still discrete . at time the function is calculated for every and this gives the new value at time .if we think of the application of as a nonlinear operator we can write \ , f(\vec x , t)\ ] ] to give us the ability to obtain arbitrarily small time steps , we introduce an infinitesimal time and reinterpret the transition function as a rate of change of the function instead of the new function value. 
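one concrete realization of the smooth step construction and of the transition function, together with the discrete update and its rate-of-change reinterpretation; since the displayed formulas above did not survive extraction, the functional forms, the interval values b1, b2, d1, d2 and the step widths below are illustrative assumptions. the helper `fillings` is the one sketched earlier.

```python
import numpy as np

alpha_n, alpha_m = 0.028, 0.147                  # step widths (illustrative)
b1, b2, d1, d2 = 0.278, 0.365, 0.267, 0.445      # birth and death intervals (illustrative)

def sigmoid1(x, a, alpha):
    return 1.0 / (1.0 + np.exp(-4.0 * (x - a) / alpha))

def sigmoid2(x, a, b):
    """Smooth indicator of the interval [a, b]."""
    return sigmoid1(x, a, alpha_n) * (1.0 - sigmoid1(x, b, alpha_n))

def sigmoid_m(x, y, m):
    """Blend the 'dead' value x and the 'alive' value y according to the inner filling m."""
    t = sigmoid1(m, 0.5, alpha_m)
    return x * (1.0 - t) + y * t

def transition(n, m):
    """New state as a function of outer filling n and inner filling m."""
    return sigmoid2(n, sigmoid_m(b1, d1, m), sigmoid_m(b2, d2, m))

def step_discrete(f, disk, ring):
    m, n = fillings(f, disk, ring)               # from the earlier sketch
    return transition(n, m)

def step_smooth(f, disk, ring, dt=0.05):
    """Transition function reinterpreted as a rate of change in [-1, 1]."""
    m, n = fillings(f, disk, ring)
    return np.clip(f + dt * (2.0 * transition(n, m) - 1.0), 0.0, 1.0)
```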
then we can write \ , f(\vec x , t)\ ] ] where we have defined a new , that has values in the range ] .if the transition function in the discrete time - stepping scheme was then the smooth one is .the formula above is also the most trivial integration scheme for the integro - differential equation \ , f(\vec x , t)\ ] ] this equation however leads to a different form of life .the same generic gliders can not be found at the same birth / death values as in the version with discrete time - stepping , but it also leads to gliders , oscillating and stable structures .we have described a model to generalize conway s `` game of life '' to a continuous range of values and a continuous domain .the transition matrix of the gol has been generalized to the transition function . the 8 pixel neighborhood and 1 pixel cell of golhave been generalized to a ring shaped neighborhood and a disk shaped cell .the rule set has been generalized to four real numbers : the boundaries of the birth and death intervals .the last remaining discrete attribute was the time - stepping .we proposed a method for continuous time - stepping which reinterprets the transition function as the velocity of change .the technique with two radii has been used in other contexts , but no gliders were described .there has also been a computer implementation of a continuous version of gol , but without the inner radius technique , and no gliders were found at that time .the goal of finding a glider that can move in arbitrary directions has been achieved . of the original gol it resembles both the glider and the spaceship at the same time .it also resembles similar structures found in ltl .so we think we have found the generic , generalized glider , and call it the `` smooth glider '' .
we present what we argue is the generic generalization of conway s `` game of life '' to a continuous domain . we describe the theoretical model and the explicit implementation on a computer .
the complex error function also widely known as the faddeeva function can be defined as where is the complex argument .it is a solution of the following differential equation where the initial condition is given by the complex error function is principal in a family of special functions .the main functions from this family are the dawsons integral , the complex probability function , the error function , the fresnel integral and the normal distribution function .the dawsons integral is defined as it is not difficult to obtain a relation between the complex error function and the dawson integral . in particular , comparing right sides of equations and immediately yields another closely related function is the complex probability function . in order to emphasize the continuity of the complex probability function at , it may be convenient to define it in form of principal value integral or the complex probability function has no discontinuity at and since according to the principal value we can write where ] from the identity it follows that = \frac{{\sqrt \pi } } { 2}l\left ( x \right).\ ] ] consequently , according to this identity and equation we can write .\ ] ] since the argument is real , it would be reasonable to assume that this equation remains valid for a complex argument as well if the condition is satisfied . therefore , substituting the rational approximation of the dawson integral into identity results in , \qquad { \mathop{\rm im}\nolimits } \left [ z \right ] < < 1 .\end{aligned}\ ] ] the representation of the approximation can be significantly simplified as given by < < 1 , \end{aligned}\ ] ] where the expansion coefficients are and as we can see from equation only the -function needs a nested loop in multiple summation .therefore , most time required for computation of the complex error function is taken for determination of the -function . however , this approximation is rapid due to its simple rational function representation .although the first term in approximation is an exponential function dependent upon the input parameter , it does not decelerate the computation since this term is not nested and calculated only once .consequently , the approximation is almost as fast as the rational approximation .the relatively large area at the origin of complex plane is the most difficult for computation with high - accuracy . according to karbach _ the newest version of the _ roofit _ package , written in c / c++ code , utilizes the equation ( 14 ) from the work in order to cover accurately this area , shaped as a square with side lengths equal to . in the algorithm we have developed , instead of the squarewe apply a circle with radius centered at the origin .this circle separates the complex plane into the inner and outer parts .only three approximations can be applied to cover the entire complex plain with high - accuracy .the outer part of the circle is an external domain while the inner part of the circle is an internal domain consisting of the primary and secondary subdomains .these domains are schematically shown in fig . 
1 .external domain is determined by boundary and represents the outer area of the circle as shown in fig .in order to cover this domain we apply the truncated laplace continued fraction that provides a rapid computation with high - accuracy .it should be noted , however , that the accuracy of the laplace continued fraction deteriorates as decreases .the inner part of the circle bounded by is divided into two subdomains .most area inside the circle is occupied by primary subdomain bounded by . for this domainwe apply the rational approximation that approaches the limit of double precision when we take , and ( see our recent publication for description in determination of the parameter ) .the secondary subdomain within the circle is bounded by narrow band ( see fig . 1 along -axis ) .the rational approximation sustains high - accuracy within all domain required for applications using the hitran molecular spectroscopic database .however , at its accuracy deteriorates by roughly one order of the magnitude as decreases by factor of .the proposed approximation perfectly covers the range .in particular , at , and this approximation also approaches the limit of double precision as the parameter tends to zero . in order to quantify the accuracy of the algorithm we can use the relative errors defined for the real and imaginary parts as follows - { \mathop{\rm re}\nolimits } \left [ { w\left ( { x , y } \right ) } \right]}}{{{\mathop{\rm re}\nolimits } \left [ { { w_{ref}}\left ( { x , y } \right ) } \right ] } } } \right|\ ] ] and - { \mathop{\rm im}\nolimits } \left [ { w\left ( { x , y } \right ) } \right]}}{{{\mathop{\rm im}\nolimits } \left [ { { w_{ref}}\left ( { x , y } \right ) } \right ] } } } \right|,\ ] ] where is the reference . the highly accurate reference values can be generated by using , for example , algorithm 680 , algorithm 916 or a recent c++ code in _package from the cern s library .consider figs . 2 and 3 illustrating logarithms of relative errors of the considered algorithm for the real and imaginary parts over the domain , respectively .the rough surface of these plots indicates that the computation reaches the limit of double precision .in particular , while the accuracy can exceed . however , there is a narrow domain along the along axis in fig . 2 ( red color area ) where the accuracy deteriorates by about an order of the magnitude .apart from this , we can also see in fig .3 a sharp peak located at the origin where accuracy is . 
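the truncated laplace continued fraction used for the external domain, and the relative error check just defined, can be sketched as follows; the truncation order and the use of scipy's wofz as the reference values are assumptions of this illustration.

```python
import numpy as np
from scipy.special import wofz                    # reference values for w(z)

def w_laplace_cf(z, order=12):
    """Truncated Laplace continued fraction for w(z) in the upper half-plane,
    w(z) ~ (i/sqrt(pi)) / (z - (1/2)/(z - 1/(z - (3/2)/(z - ...))))."""
    a = 0.5 * np.arange(order, 0, -1)             # partial numerators n/2, innermost first
    frac = a[0] / z
    for an in a[1:]:
        frac = an / (z - frac)
    return 1j / np.sqrt(np.pi) / (z - frac)

# relative errors of the real and imaginary parts against the reference
x, y = np.meshgrid(np.linspace(8.5, 15.0, 40), np.linspace(0.1, 15.0, 40))
z = x + 1j * y                                    # sample points in the external domain
w_ref, w_cf = wofz(z), w_laplace_cf(z)
delta_re = np.abs((w_ref.real - w_cf.real) / w_ref.real)
delta_im = np.abs((w_ref.imag - w_cf.imag) / w_ref.imag)
print(delta_re.max(), delta_im.max())
```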
figures 4 and 5 depict these areas of the plots magnified for the real and imaginary parts , respectively .it should be noted , however , that the worst accuracy is relatively close to the limitation at double precision computation .furthermore , since the corresponding areas are negligibly small as compared to the entire inner circle area , where the approximations and are applied , their contribution is ignorable and , therefore , does not affect the average accuracy .thus , the computational test reveals that the obtained accuracy at double precision computation for the complex error function is absolutely consistent with _ cernlib _ , _ libcerf _ and _ roofit _ packages ( see the work for specific details regarding accuracy of these packages ) .a matlab code for computation of the complex error function is presented in appendix c.a rational approximation of the dawson integral is derived and implemented for rapid and highly accurate computation of the complex error function .the computational test we performed shows the accuracy exceeding in the domain of practical importance .a matlab code for computation of the complex error function covering the entire complex plane is presented .this work is supported by national research council canada , thoth technology inc . and york university .it is not difficult to show that the complex error function can be expressed alternatively as ( see equation ( 3 ) in and , see also appendix a in for derivation ) consequently , from this equation we have } \\ & = \frac{1}{{\sqrt \pi } } \int\limits _ { - \infty } ^\infty { { e^ { - { t^2}/4}}\left [ { { e^{i\left ( { x + iy } \right)t } } + { e^ { - i\left ( { x + iy } \right)t } } } \right]dt}. \end{aligned}\ ] ] using the eulers identity where , from the equation it follows that t } \right)dt } = 2{e^ { - { { \left ( { x + iy } \right)}^2}}}\ ] ] or we have shown recently in our publication , the real part of the complex error function can be found as = \frac{{w\left ( { x , y } \right ) + w\left ( { - x , y } \right)}}{2}\ ] ] and since = k\left ( { x , y = 0 } \right ) = { e^ { - { x^2}}},\ ] ] the substitution of the approximation at into equation leads to according to the definition of the -function , at we can write consequently , substituting the approximation for the exponential function into equation yields . \end{aligned}\ ] ] the integrand of the integral is analytic everywhere except isolated points on the upper half plane therefore , using the residue theorems formula } } , \ ] ] where denotes a contour in counterclockwise direction enclosing the upper half plain ( for example as a semicircle with infinite radius ) and integrand of the integral , we obtain the rational approximation of the -function ..... function ff = fadfunc(z ) % this program file computes the complex error function , also known as the % faddeeva function .it provides high - accuracy and covers the entire % complex plane . the inner part of the circle |x + i*y| < = 8 is covered by % the equations ( 10 ) and ( 14 ) .derivation of the rational approximation % ( 10 ) and its detailed description can be found in the paper [ 1 ] .the new % approximation ( 14 ) shown in this paper computes the complex error % function at small im[z ] << 1 . the outer part of the circle |x + i*y| > 8 % is covered bythe laplace continued fraction [ 2 ] .the accuracy of this % function file can be verified by using c / c++ code provided in work [ 3 ] .% % references % [ 1 ] s. m. abrarov and b. m. 
quine , a new application methodology of the % fourier transform for rational approximation of the complex error % function , arxiv:1511.00774 .% http://arxiv.org/abs/1511.00774 % % [ 2 ] w. gautschi , efficient computation of the complex error function , % siam j. numer ., 7 ( 1970 ) 187 - 198 .% http://www.jstor.org/stable/2949591 % % [ 3 ] t. m. karbach , g. raven and m. schiller , decay time integrals in % neutral meson mixing and their efficient evaluation , % arxiv:1407.0748v1 ( 2014 ) .% http://arxiv.org/abs/1407.0748 % % the code is written by sanjar m. abrarov and brendan m. quine , york % university , canada , december 2015 .% * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * % all parameters in this table are global and can be used anywhere inside % the program body .n = 1:23 ; % define a row vector sigma = 1.5 ; % the shift constant h = 6/(2*pi)/23 ; % this is the step % ------------------------------------------------------------------------- % expansion coefficients for eq .( 10 ) % ------------------------------------------------------------------------- an = 8*pi*h^2*n.*exp(sigma^2 - ( 2*pi*h*n).^2).*sin(4*pi*h*n*sigma ) ; bn = 4*h*exp(sigma^2 - ( 2*pi*h*n).^2).*cos(4*pi*h*n*sigma ) ; cn = 2*pi*h*n ; % ------------------------------------------------------------------------- % expansion coefficients for eq .( 14 ) % ------------------------------------------------------------------------- alpha = 8*pi*h*n*sigma.*exp(-(2*pi*h*n).^2).*sin(4*pi*h*n*sigma ) ; beta = 2*exp(-(2*pi*h*n).^2).*cos(4*pi*h*n*sigma ) ; gamma = ( 2*pi*h*n).^2 ; % end of the table % * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * ext_d = coeff(end)./z ; % start computing using the last coeff for m = 1:length(coeff ) - 1 ext_d = coeff(end - m)./(z - ext_d ) ; end ext_d = 1i / sqrt(pi)./(z - ext_d ) ; end int_d(secd ) = exp(-z(secd).^2 ) + 2i*h*exp(sigma^2)*z(secd ) . *... theta(z(secd).^2 + sigma^2 ) ; % compute using eq .( 14 ) int_d(~secd ) = rappr(z(~secd)+1i*sigma ) ; % compute using eq .( 10 ) thf = 1./z ; for k = 1:23 thf = thf + ( alpha(k ) + beta(k)*(z - gamma(k)))./ ... ( 4*sigma^2*gamma(k ) + ( gamma(k ) - z).^2 ) ; end end end j.p .boyd , evaluating of dawsons integral by solving its differential equation using orthogonal rational chebyshev functions , appl .comput . , 204 ( 2 ) ( 2008 ) 914 - 919 .g. pagnini and f. mainardi , evolution equations for the probabilistic generalization of the voigt profile function , j. comput ., 233 ( 2010 ) 1590 - 1595 . http://dx.doi.org/10.1016/j.cam.2008.04.040 b.m .quine and j.r .drummond , genspect : a line - by - line code with selectable interpolation error tolerance j. quant .transfer 74 ( 2002 ) 147 - 165 .http://dx.doi.org/10.1016/s0022-4073(01)00193-5 l.e .christensen , g.d .spiers , r.t .menzies and j.c jacob , tunable laser spectroscopy of near : atmospheric retrieval biases due to neglecting line - mixing , j. quant .transfer , 113 ( 2012 ) 739 - 748 .http://dx.doi.org/10.1016/j.jqsrt.2012.02.031 a. berk , voigt equivalent widths and spectral - bin single - line transmittances : exact expansions and the modtran5 implementation , j. quant .transfer , 118 ( 2013 ) 102 - 120 .http://dx.doi.org/10.1016/j.jqsrt.2012.11.026 b.m . 
quine and s.m .abrarov , application of the spectrally integrated voigt function to line - by - line radiative transfer modelling .transfer , 127 ( 2013 ) 37 - 48 .http://dx.doi.org/10.1016/j.jqsrt.2013.04.020 s.j .mckenna , a method of computing the complex probability function and other related functions over the whole complex plane .space sci . ,107 ( 1 ) ( 1984 ) 71 - 83 .http://dx.doi.org/10.1007/bf00649615 s.m . abrarov and b.m .quine , master - slave algorithm for highly accurate and rapid computation of the voigt / complex error function , j. math .research , 6 ( 2 ) ( 2014 ) 104 - 119 .http://dx.doi.org/10.5539/jmr.v6n2p104 s.m .abrarov and b.m .quine , sampling by incomplete cosine expansion of the sinc function : application to the voigt / complex error function , appl .comput . , 258 ( 2015 ) 425 - 435 .10.1016/j.amc.2015.01.072[http://dx.doi.org/ 10.1016/j.amc.2015.01.072 ] s.m .abrarov and b.m .quine , efficient algorithmic implementation of the voigt / complex error function based on exponential series approximation , appl .comput . , 218 ( 5 ) ( 2011 ) 1894 - 1902 .h. amamou , b. ferhat and a. bois , calculation of the voigt function in the region of very small values of the parameter where the calculation is notoriously difficult , amer .chem . , 4 ( 2013 ) 725 - 731 .http://dx.doi.org/10.4236/ajac.2013.412087 l.s .rothman , i.e. gordon , y. babikov , a. barbe , d.c .benner , p.f .bernath , m. birk , l. bizzocchi , v. boudon , l.r .brown , a. campargue , k. chance , e.a .cohen , l.h .coudert , v.m .devi , b.j .drouin , a. fayt , j .-flaud , r.r .gamache , j.j .harrison , j .-hartmann , c. hill , j.t .hodges , d. jacquemart , a. jolly , j. lamouroux , r.j .le roy , g. li , d.a .long , o.m .lyulin , c.j .mackie , s.t .massie , s. mikhailenko , h.s.p .mler , o.v .naumenko , a.v .nikitin , j. orphal , v. perevalov , a. perrin , e.r .polovtseva and c. richard , the hitran2012 molecular spectroscopic database , j. quant .transfer , 130 ( 2013 ) 4 - 50 .http://dx.doi.org/10.1016/j.jqsrt.2013.07.002 s.m .abrarov and b.m .quine , a rational approximation for efficient computation of the voigt function in quantitative spectroscopy , j. math .research , 7 ( 2 ) ( 2015 ) , 163 - 174 .
in this work we show a rational approximation of the dawson integral that can be implemented for high - accuracy computation of the complex error function in a rapid algorithm . specifically , this approach provides accuracy exceeding in the domain of practical importance . a matlab code for computation of the complex error function covering the entire complex plane is presented . * keywords : * complex error function , faddeeva function , dawson s integral , rational approximation
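to complement the matlab listing above , the outer - region computation can be sketched compactly in python . the laplace continued fraction referenced in the code ( gautschi 1970 ) is evaluated backwards , exactly as the loop over the coefficient vector does ; since that vector itself is not shown in the excerpt , the standard coefficient values n/2 are assumed here . this is a minimal sketch under those assumptions , not the implementation used for the reported benchmarks .

```python
import numpy as np

def faddeeva_outer(z, terms=15):
    """laplace continued fraction for w(z); intended for the outer region |z| > 8
    of the upper half-plane, mirroring the backward recurrence in the matlab code.
    the coefficients n/2 are the standard gautschi values (an assumption here)."""
    a = 0.5 * np.arange(terms, 0, -1)      # [terms/2, ..., 1, 1/2]
    cf = a[0] / z                          # innermost term of the fraction
    for an in a[1:]:
        cf = an / (z - cf)
    return 1j / np.sqrt(np.pi) / (z - cf)

# quick sanity check against the large-|z| behaviour w(z) ~ i/(sqrt(pi) z)
print(faddeeva_outer(10.0 + 0.0j))         # imaginary part close to 0.0567
```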
the dynamics of any stellar system may be characterized by the following three dynamical time scales : ( i ) the crossing time , which is the time needed by a star to move across the system ; ( ii ) the two - body relaxation time , which is the time needed by the stellar encounters to redistribute energies , setting up a near - maxwellian velocity distribution ; ( iii ) the evolution time , which is the time during which energy - changing mechanisms operate , stars escape , while the size and profile of the system change .several ( different and precise ) definitions exist for the relaxation time . the most commonly used is the half - mass relaxation time of spitzer ( 1987 , eq . 2 - 62 ) , where the values for the mass - weighted mean square velocity of the stars and the mass density are those evaluated at the half - mass radius of the system ( see meylan & heggie 1997 for a review ) .in the case of globular clusters , , 100 , and 10 . table 1 displays , for open clusters , globular clusters , and galaxies , some interesting relations between the above three time scales . for open clusters , crossing time and relaxation time are more or less equivalent , both being significantly smaller than the evolution time .this means that most open clusters dissolve within a few gigayears . for galaxies , the relaxation time and the evolution time more or less equivalent , both being significantly larger than the crossing time .this means that galaxies are not relaxed , i.e. , not dynamically evolved .it is only for globular clusters that all three time scales are significantly different , implying plenty of time for a clear dynamical evolution in these stellar systems , although avoiding quick evaporation altering open clusters .consequently , star clusters open and globular represent interesting classes of dynamical stellar systems in which some dynamical processes take place on time scales shorter than their age , i.e. , shorter than the hubble time , providing us with unique dynamical laboratories for learning about two - body relaxation , mass segregation from equipartition of energy , stellar collisions , stellar mergers , and core collapse .all these dynamical phenomena are related to the internal dynamical evolution only , and would also happen in isolated globular clusters .the external dynamical disturbances tidal stripping by the galactic gravitational field influence equally strongly the dynamical evolution of globular clusters . .dynamical time scales for open clusters , globular clusters and galaxies [ cols="<,^,^,^ , < " , ]mass segregation was one of the early important results to emerge from computer -body simulations of star clusters .see , e.g. , von hoerner ( 1960 ) who made the first -body calculations with = 4 , 8 , 12 , and 16 bodies . the heavier stars would gradually settle towards the center , increasing their negative binding energy , while the lighter stars would preferentially populate the cluster halo , with reduced binding energy .later , direct integrations using many hundreds of stars showed the same tendency .soon it was also realized that computation of individual stellar motions could be replaced by statistical methods .the same mass segregation was observed in models which integrated the fokker - planck equation for many thousands of stars ( e.g. 
, spitzer & shull 1975 ) .mass segregation is expected from the basic properties of two - body relaxation .the time scale for dynamical friction to significantly decrease the energy of a massive star of mass is less than the relaxation time scale for lighter stars of mass by a factor ( see , e.g. , eq . 14.65 in saslaw 1985 ) . as massive stars in the outer regions of a clusterlose energy to the lighter ones , they fall toward the center and increase their velocity .the massive stars continue to lose the kinetic energy they gain by falling and continue to fall . the lighter stars , on the other hand , increase their average total energy and move into the halo . as light stars rise through the system , their velocity decreases , altering the local relaxation time for remaining massive stars .will this mass segregation process have an end , i.e. will the system reach an equilibrium ? two conditions would have to be satisfied : mechanical equilibrium determined by the scalar virial theorem : and thermal equilibrium determined by equipartition of energy among components of different mass : all species must have the same temperature , so there is no energy exchange among the different species .from a pure observational point of view , mass segregation has now been observed clearly in quite a few open and globular clusters .these observational constraints are essentially photometric : different families of stars , located in different areas of the color - magnitude diagram ( cmd ) , exhibit different radial cumulative distribution functions .such an effect , when observed between binaries and main sequence stars or between blue stragglers and turn - off stars , is generally interpreted as an indication of mass segregation between subpopulations of stars with different individual masses .we present hereafter examples of observations of mass segregation in three different kinds of star clusters : ( i ) in the very young star cluster r136 , ( ii ) in a few open clusters , and ( iii ) in a few globular clusters .the large magellanic cloud star cluster ngc 2070 is embedded in the 30 doradus nebula , the largest hii region in the local group ( see meylan 1993 for a review ) . the physical size of ngc 2070 , with a diameter 40 pc , is typical of old galactic and magellanic globular clusters and is also comparable to the size of its nearest neighbor , the young globular cluster ngc 2100 . with an age of 4 10 yr ( meylan 1993 , brandl 1996 ) ,ngc 2070 appears slightly younger than ngc 2100 which has an age of 12 - 16 10 yr ( sagar & richtler 1991 ) .brandl ( 1996 ) obtained for r136 , the core of ngc 2070 , near - ir imaging in bands with the eso adaptive optics system adonis at the eso 3.6-m telescope .they go down to = 20 mag with 0.15 resolution over a 12.8 12.8field containing r136 off center .they present photometric data for about 1000 individual stars of o , b , wr spectral types .there are no red giants or supergiants in their field .brandl ( 1996 ) estimate from their total magnitude that the total stellar mass within 20 is equal to 3 10 , with an upper limit on this value equal to 1.5 10 . a star cluster with a mass of this range and a typical velocity dispersion of would be gravitationally bound , a conclusion not immediately applicable to ngc 2070 because of the important mass loss due to stellar evolution experienced by a large number of its stars ( see kennicutt & chu 1988 ) .mass segregation may have been observed in r136 , the core of ngc 2070 . 
from their luminosity function ,brandl ( 1996 ) determine , for stars more massive than 12 , a mean mass - function slope = 1.6 [ (salpeter ) = 1.35 ] , but this value increases from = 1.3 in the inner 0.4 pc to = 1.6 for 0.4 pc 0.8 pc , and to = 2.2 outside 0.8 pc .the fraction of massive stars is higher in the center of r136 .brandl ( 1996 ) attribute these variations to the presence of mass segregation .given the very young age of this system , which may still be experiencing from violent relaxation , the cause of this mass segregation is not immediately clear. it may be due to a spatially variable initial mass function , a delayed star formation in the core , or the result of dynamical processes that segregated an initially uniform stellar mass distribution . obviously , the older the cluster , the clearer the mass segregation effect .one of the first such clear cases was observed by mathieu & latham ( 1986 ) in m67 which , with an age of about 5 gyr , is one of the oldest galactic open clusters .they studied the radial cumulative distribution functions of the following three families of stars : single stars , binaries , and blue stragglers , the latter being possibly the results of stellar mergers .the radial cumulative distribution functions of binaries and blue stragglers are similar and significantly more concentrated than the distribution function of the single stars .in such a dynamically relaxed stellar system , this result may be explained only by mass segregation between stars of different individual masses. in one of the most recent such studies , raboud & mermilliod ( 1998 ) have observed some clear presence of mass segregation ( see fig .1 ) in three open clusters ngc 6231 , the pleiades , and praesepe which , with ages equal to 4 , 100 , 800 myr , respectively , are significantly younger than m67 .the presence of mass segregation in the pleiades and praesepe open clusters is expected given the fact that their relaxation times are shorter than their ages .this is not the case for ngc 6231 , where the presence of mass segregation may be as problematic as it is in the case of r136 .because of their very high stellar densities , globular clusters have been hiding for decades any clear observational evidence of mass segregation , expected to be present essentially in their crowded central regions .differences in the radial distributions of stars of different luminosities / masses have finally been definitely observed with hst , providing conclusive observational evidence of mass segregation in the central parts of globular clusters .one of the most serious and detailed such studies is the one by anderson ( 1997 ) who has used hst / foc and hst / wfpc2 data to demonstrate the presence of mass segregation in the cores of three galactic globular clusters : m92 , , and .anderson has first determined the luminosity function of each cluster at two different locations in the core .then he has compared these luminosity functions with those from king - michie multi - mass models , in the cases with and without mass segregation between the different stellar species .3 displays the comparison between the observed luminosity function ( dots ) and the model predictions with ( continuous lines ) and without ( dashed lines ) mass segregation , at the center of ( left panel ) and at one core radius from the center ( right panel ) .the two different models differ strongly over a large range in magnitude ( 18 - 26 mag ) .the observed luminosity function shows a very clear agreement with the 
model containing mass segregation , and rules out completely any model without mass segregation ( anderson 1997 ) .the globular cluster m92 displays results very similar to those obtained for .this is not surprising given the fact that both clusters have rather similar structural parameters and concentrations , providing similar central relaxation times of the order of 100 myr .this is not the case for , which is the most massive galactic globular cluster and has a central relaxation time of about 6 gyr .as expected , the two model luminosity functions ( with and without mass segregation ) computed at the center of differ only slightly , and the observed luminosity function is right between the curves of the two models .the two model luminosity functions computed at 16 from the center ( at about 5 core radii ) do not differ significantly and consequently agree similarly with the observations .as expected , , which has had hardly any time to become dynamically relaxed , even in its center , exhibits a very small amount of mass segregation ( anderson 1997 ) .as seen above , the various stellar species of a star cluster must have the same temperature in order to have equipartition of energy .spitzer ( 1969 ) derived a criterion for equipartition between stars of two different masses and .let us consider the analytically tractable case where the total mass of the heavy stars , , is much smaller than the core mass of the system of the lighter stars , , and the individual heavy stars are more massive than the light stars , .in such a case , equipartition will cause the heavy stars ( e.g. , binaries and/or neutron stars ) to form a small subsystem in the center of the core of the system formed by the light ( e.g. , main sequence ) stars . in equipartition , , where represents the central one - dimensional dispersion of the light stars .it can be easily seen ( e.g. , binney & tremaine 1987 ) that equipartition can not be satisfied unless the following inequality holds : where and are dimensionless constants .when become too large , the inequality is violated , there is the `` equipartition instability '' ( spitzer 1969 ) , which has a simple physical explanation : when the mass in heavy stars is too large , these stars form an independent high - temperature self - gravitating system at the center of the core of light stars . in a realistic system with a distribution of stellar masses ,the chief effect of the equipartition instability is to produce a dense central core of heavy stars , which contracts independently from the rest of the core .however , as this core becomes denser and denser , the gravothermal instability dominates over the equipartition instability ( antonov 1962 , lynden - bell & wood 1968 ) and the cluster experiences core collapse ( makino 1996 ) . from an internal point of view, the dynamical evolution of star clusters is driven by two - body relaxation , mass segregation , equipartition instability , and core collapse . from an external point of view, the dynamical evolution of star clusters is driven by the dynamical disturbances due to the crossing of the galactic plane , which create tidal tails . 
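for reference , the equipartition ( spitzer 1969 ) instability criterion discussed above is usually quoted in the form given below ; the numerical constant depends on the assumed density profile , so this should be read as the commonly cited version rather than the exact expression of the original text :

\[ \frac{M_{2}}{M_{1}}\left(\frac{m_{2}}{m_{1}}\right)^{3/2} \;<\; \beta_{\max } \approx 0.16 \]

when the left - hand side exceeds this bound , the heavy component can no longer reach equipartition with the light stars and contracts into a self - gravitating central subsystem , as described above .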
in whatever location ,these stellar systems are dynamically never at rest .the globular cluster m15 has long been considered as a prototype of the collapsed - core star clusters .high - resolution imaging of the center of m15 has resolved the luminosity cusp into essentially three bright stars .post - refurbishment hubble space telescope star - count data confirm that the 2.2 core radius observed by lauer ( 1991 ) and questioned by yanny ( 1994 ) , is observed neither by guhathakurta ( 1996 ) with hst / wfpc2 data nor by sosin & king ( 1996 , 1997 ) with hst / foc data .this surface - density profile clearly continues to climb steadily within 2 .it is not possible to distinguish at present between a pure power - law profile and a very small core ( sosin & king 1996 , 1997 ) .consequently , among the galactic globular clusters , m15 displays one of the best cases of clusters caught in a state of deep core collapse .sosin & king ( 1997 ) have estimated the amount of mass segregation in the core of m15 from their hst / foc data : the mass functions at 20 and 5 from the center clearly show substantial mass segregation for all stars with masses between 0.55 and 0.80 .-0.5truecm the mf at = 20 is best fit by a power - law with slope = 0.26 , -0.5truecm the mf at = 5 is best fit by a power - law with slope = 0.25 .these two slopes differ at the 5- level . once compared with models ,the amount of mass segregation is somewhat less than predicted by a king - michie model , and somewhat greater than predicted by a fokker - planck model .see also king ( 1998 ) in the case ngc 6397 .mass segregation is also present in kinematical data , i.e. , in the radial velocities and proper motions of individual stars .so far , radial velocities have been obtained essentially only for the brightest stars , giants and subgiants , which have very similar masses .it is only recently that internal proper motions of individual stars have been obtained in globular clusters .the following team ( pi . g. meylan , with cois .d. minniti , c. pryor , e.s .phinney , b. sams , c.g .tinney , joined later by j. anderson , i.r .king , and w. van altena ) have acquired hst / wfpc2 images in the core of in three different epochs ( oct .1995 - nov .1997 - oct .1999 ) defining a total time baseline of 4 years .the choice of the = fw300 filter prevents saturation for the brightest stars and allows simultaneous measurement of proper motions for the brightest stars as well as for stars more than two magnitudes bellow the turn - off ( meylan 1996 ) . for each epochwe have 15 images with careful dithering .each measurement of the position of a star has a different bias since in each pointing the star is measured at a different pixel phase .we use an iterative process on positions and local psf determinations .we achieve a position accuracy of 0.020 pixel for a single image , amounting to 0.006 pixel for the mean of 15 images .this corresponds to 0.3 mas in the pc frame and 0.6 mas in the wf2 , wf3 , and wf4 frames , for about 14,000 stars in the core of ( anderson & king in preparation ) .preliminary results show a clear difference between the proper motions of blue stragglers and stars of similar magnitudes : the former are significantly slower than the latter . since blue stragglers are either binaries or mergers , with masses higher than the turn - off mass , the above difference unveils the first kinematical observation of mass segregation in a globular cluster ( meylan in preparation ) .anderson j. 
, 1997 , phd thesis , university of california , berkeley antonov v.a ., 1962 , vest .7 , 135 ; english translation : antonov , v.a . , 1985 , in dynamics of star clusters , iau symp . 113 , eds .goodman j. & hut p. , ( dordrecht : reidel ) , p. 525binney j. , tremaine s. , 1987 , galactic dynamics , ( princeton : princeton university press ) brandl b. , sams b.j . , bertoldi f. , , 1996 , apj , 466 , 254 guhathakurta p. , yanny b. , schneider d.p . , bahcall j.n ., 1996 , aj , 111 , 267 kennicutt r.c . , chu y .- h . , 1988 ,aj , 95 , 720 king i.r . , anderson j. , cool a.m. , piotto g. , 1998 , apj , 492 , l37 lauer t.r . , holtzman j.a . ,faber s.m . , , 1991 , apj , 369 , l45 lynden - bell d. , wood r. , 1968 , mnras , 138 , 495 makino j. , 1996 , apj , 471 , 796 meylan g. , 1993 , in the globular cluster - galaxy connection , asp conference series vol .smith g.h .& brodie j.p . ,( san francisco : asp ) , p. 588 meylan g. , heggie d.c ., 1997 , a&ar , 8 , 1 - 143 meylan g. , minniti d. , pryor c. , tinney c. , phinney e.s . , sams b. , 1996 , in the eso / stsci workshop on _ science with the hubble space telescope - ii _ ,p. benvenuti , f.d .macchetto , e.j .schreier , ( baltimore : stsci ) , p. 316raboud d. , mermilliod j .- c . , 1998 , a&a , 333 , 897 sagar r. , richtler t. , 1991 , a&a , 250 , 324 saslaw w.c . , 1985 , gravitational physics of stellar and galactic systems , ( cambridge : cambridge university press )sosin c. , king i.r ., 1996 , in dynamical evolution of star clusters : confrontation of theory and observations , iau symp. 174 , eds .hut p. & makino j. ( dordrecht : kluwer ) , p. 343sosin c. , king i.r ., 1997 , aj , 113 , 1328 spitzer l. , 1969 , apj , 158 , l139 spitzer l. , 1987 , dynamical evolution of globular clusters , ( princeton : princeton university press ) spitzer l. jr . , shull j.m . , 1975 ,apj , 201 , 773 von hoerner s. , 1960 , z. f. a. , 50 , 184 yanny b. , guhathakurta p. , bahcall j.n . , schneider d.p . , 1994 , aj , 107 , 1745 * comment by s. portegies zwart * from a theoretical point of view , it is not always clear what observers considered as the center of a star cluster and what theorists should use as the center .one can use , for example , the geometric center , the area with the highest luminosity density , number density , mass density .* comment by g.m . * from an observational point of view , the determination of the center of a globular cluster is also difficult and uncertain . ideally , the algorithm used should determine the barycenter of the stars , not of the light . in the case of a collapsed globular cluster like m15 , which has a very small unresolved core ,the task is difficult because of the very small number of stars detectable in such a small area .the uncertainty in the position of the center is of the order of the core radius value , i.e. , about 0.2 . in the case of a globular cluster like ,various methods give positions differing by 2 - 3 .such a large uncertainty is nevertheless acceptable , given the core radius value of about 25 .* question by h. zinnecker * is the mass segregation observed in the 30 doradus cluster due to dynamical evolution or due to preferential birth of the more massive stars near the cluster center ?can we distinguish between these two possibilities ?* answer by g.m . *it is not known if the mass segregation is the consequence of dynamical evolution or of a flatter imf in the center .i fail to see any reliable way to distinguish between these two possibilities .* question by p. 
kroupa * the globular clusters and do not appear to have a pronounced binary sequence in color - magnitude diagrams , whereas other globulars have pronounced binary sequences .does this imply different dynamical histories ?* answer by g.m .* it would be interesting to compare the locations where these various color - magnitude diagrams have been obtained . in the case of, the excellent photometry we obtained is for stars right in the center , where encounters and collisions operate and probably decrease the fraction of binaries. it would be interesting to make a precise comparative study , for a few globular clusters , for which we would have data from the same instrument and reduced with the same software , in fields at the same relative distance from the center .* question by d. calzetti * about studies which find flatter imfs in the centers of clusters : do you think that these studies may suffer from effects of crowding towards the cluster center , and therefore , find a flatter imf because of this ?* answer by g.m .* yes , definitely ! crowding is always present in photometry of globular clusters , especially in observations from the ground .nevertheless , i think that some careful photometric studies using hst data have provided reliable results in relation to imf slope ( see , e.g. , king , 1998 , apj , 492 , l37 ) .it would be an interesting study , which , as far as i know , has not been done yet .it is partly due to the difficulties in determining precisely the center of the core and to the difficulties in determining precisely outer isophotes which suffer from very low star counts and are strongly polluted by foreground stars and background galaxies .
star clusters open and globular experience dynamical evolution on time scales shorter than their age . consequently , open and globular clusters provide us with unique dynamical laboratories for learning about two - body relaxation , mass segregation from equipartition of energy , and core collapse . we review briefly , in the framework of star clusters , some elements related to the theoretical expectation of mass segregation , the results from n - body and other computer simulations , as well as the now substantial and clear observational evidence .
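as a numerical complement to the time - scale discussion in the review above , the sketch below evaluates the spitzer ( 1987 ) half - mass relaxation time for representative open - cluster and globular - cluster parameters . the prefactor 0.138 and the coulomb - logarithm argument 0.4 n are the commonly quoted textbook values and are assumptions of this sketch , not numbers taken from the text .

```python
import math

# physical constants in si units
G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
PC = 3.086e16        # m
YR = 3.156e7         # s

def t_rh_years(n_stars, r_half_pc, mean_mass_msun=0.5):
    """half-mass relaxation time, t_rh = 0.138 N^(1/2) r_h^(3/2) / (m^(1/2) G^(1/2) ln(0.4 N)),
    returned in years; the commonly quoted form of spitzer's eq. (2-62) is assumed."""
    rh = r_half_pc * PC
    m = mean_mass_msun * MSUN
    t = 0.138 * math.sqrt(n_stars) * rh ** 1.5 / (math.sqrt(G * m) * math.log(0.4 * n_stars))
    return t / YR

print(f"open cluster (N=1e3, r_h=2 pc): {t_rh_years(1e3, 2.0):.1e} yr")   # tens of myr
print(f"globular     (N=5e5, r_h=5 pc): {t_rh_years(5e5, 5.0):.1e} yr")   # of order a gyr
```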
the importance of the stem industry to the development of our nation can not be understated .as the world becomes more technology - oriented , there is a necessity for a continued increase in the stem workforce .however , the u.s . has been experiencing the opposite . in the united states ,200,000 engineering positions go unfilled every year , largely due to the fact that only about 60,000 students are graduating with stem degrees in the united states annually [ 17 ] .another obvious indication is the relatively fast growth in wages in most stem - oriented occupations : for computer workers alone , there are around 40,000 computer science bachelor s degree earners each year , but roughly 4 million job vacancies [ 29 ] .therefore , our motivation is to solve this problem of stem workforce shortage by promoting stem education and careers to college students so as so to increase the number of people who are interested in pursuing stem majors in college or stem careers after graduation . in this paper, we present an innovative approach to promote stem education and careers using social media in the form of introducing stem role models to college students .we chose college students as our target population since they are at a life stage where role models are important and may influence their career decision - making [ 15 ] .social media is useful for our study in the following two ways : 1 ) the massive amount of personal data on social media enables us to predict users real life identities and interests so we can identify college students and role models from mainstream social networking websites such as the microblogging website twitter and professional networking website linkedin ; 2 ) social media itself also can serve as a _ natural and effective platform _ by which we can connect students with people already in stem industries .-0.1 in our approach is effective in the following three ways .first , increasing stem presence will inspire students to develop interests in stem fields [ 18 ] .second , the exposure of career stem role models that students can identify with will have positive influence on students , as strongly supported by previous studies [ 12 ] .finally , as a form of altruism , accomplished people are likely to help young people [ 11,6 ] and people who resemble them when they were young [ 21 ] . more importantly, social learning theory [ 1,2 ] , psychological studies , and empirical research have suggested that students prefer to have role models whose race and gender are the same as their own [ 12,30,15 ] as well as who share similar demographics [ 7 ] and interests [ 16 ] . motivated and supported by the findings of these related studies , we select _ gender , race , geographic location , and interests _ as the four attributes that we will use for matching the students with stem role models .in addition , similar interests and close location will further facilitate the potential _ personal connection _ between the students and role models .in particular , we first use social media as a tool to identify college students and stem role models using the data mined from twitter and linkedin . as a popular online network , on the average, twitter has over 350,000 tweets sent per minute [ 27 ] .moreover , in 2014 , 37% social media users within the age range of 18 - 29 use twitter [ 5 ] .this suggests a large population of college users on twitter . 
in contrast , as world s largest professional network , linkedin only has roughly 10% college users out of more than 400 million members [ 25 ] , but has a rich population of professional users .part of its mission is to connect the world s professionals and provide a platform to get access to people and insights that help its users [ 14 ] .our goal , to connect college students with role models , is _ organically consistent with linkedin s mission and business model_. specifically , we train a reliable classifier to identify college student users on twitter , and we build a program that finds stem role models on linkedin .we employ various methods to extract gender , race , geographic location and interests from college students and stem role models based on their respective social media public profiles and feeds .we then develop a ranking algorithm that ranks the top-5 stem role models for each college student based on the similarities of their attributes .we evaluated our ranking algorithm on 2,000 college students from the 297 most populated cities in the united states , and our results have shown that around half of the students are correctly matched with at least one stem role model from the same city .if we expand our geographic location standard to the state - level , this percentage increases by 13% ; if we look at the college students who are from the top 10 cities that our stem role models come from separately , this percentage increases by 33% . our objective is to do social good , and we expect to promote stem education and careers to real and diverse student population . in order to make a real life impact on the college students after we obtain the matches from the ranking algorithm , we design an implementation to help establish connections between the students and stem role models using social media as the platform . for each student , we generate a personalized webpage with his top-5 ranked stem role models linkedin public profile links as well as a feedback survey , and recommend the webpage to the student via twitter . 
ultimately , it is entirely up to the student and the role models if they would like to get connected via linkedin or other ways , and we believe these connections are beneficial for increasing interest in stem fields .it is noteworthy that _ linkedin has already implemented a suite of mechanisms to make connection recommendations _ , even though none of which is intended to promote stem career specifically .fig.2 illustrates how our implementation naturally fits into the work flow and business model of linkedin .-0.2 in -0.1 in our study has many advantages .leveraging existing social media ensures that we are able to retrieve a large scale of sampling users and thus our implementation is able to influence a large scale of students .also , due to available apis and existing social media infrastructures , our data collection and our implementation are low cost or virtually free .more importantly , unlike some traditional intervention methods , we recommend stem role models to college students in a _ non - intrusive _ way .we tweet at a student with the link of his personalized webpage , and it depends on himself if he wants to take actions afterwards .finally , our approach is failure - safe in delivery .if there are some twitter users that are classified incorrectly as college students , it has no harmful impact on them even if we promote stem education to them .the major contributions made in this study are fourfold .first , we take advantage of social media to do social good in solving a problem of paramount national interest .second , we take advantage of human psychology , motivation , and altruism . that people are more likely to be inspired by models who are like them , and people who are accomplished are likely to help young people who share similarities with them .third , we have developed a simple yet effective ranking algorithm to achieve our goal and verified its effectiveness using real students .lastly , we design an implementation that seamlessly mashes up with the natural work flow and business model of linkedin to establish the connections between students and role models .stem workforce is significant to our nation , and the shortage in such fields makes promoting stem education and careers indispensable .we review the existing methods of promoting stem education and build on previous research in both computer science and human psychology .previous effort has been made to promote stem education .most existing intervention methods focus on promoting through school educators [ 19 ] , external stem workshops [ 26 ] , and public events such as conferences [ 22 ] .however , very little evidence has shown that these strategies were effective .on the other hand , while none of the methods has utilized the rich database and powerful networking ability of social media , social media - driven approaches have succeeded in many applications , such as health promotion and behavior change [ 33 ] .the abundance of social media data has attracted researchers from various fields .we benefit the most from studies that related to age prediction and user interest discovery .nguyan et al .[ 20 ] studied various features for age prediction from tweets , and guided our feature selection for identifying college students .michelson and macskassy [ 31 ] proposed a concept - based user interest discovery approach by leveraging wikipedia as a knowledge base while xu , lu , and yang [ 32 ] , ramage , dumais , and liebling [ 23 ] both discovered user interest using methods that built on lda ( latent 
dirichlet allocation)[3 ] or tf - idf [ 24 ] .our study also adopts knowledge from psychological studies that demonstrate the importance of having a role model with similar demographics and interests .karunanayake [ 12 ] discussed the positive effect of having role models with the same race , and it holds across different races ; weber and lockwood [ 30 ] discovered that female students are more likely to be inspired by female role models ; ensher and murphy [ 7 ] indicated that liking , satisfaction , and contact with role models are higher when students perceive themselves to be more similar to them in demographics ; and lydon et al . [16 ] suggested that people are attracted to people who share similar interests .these studies help determine the attributes that we selected to match the students with role models .we used the rest api to retrieve twitter data . instead of directly searching for college twitter users among all the general users, we focused on the followers of 112 u.s .college twitter accounts since there is higher percentage of college students among these users . in total, we successfully retrieved more than 90,000 followers . for each user, we extracted the entities of his most recent 200 tweets ( if a user has fewer than 200 tweets , all his tweets were extracted ) and his user profile information , which includes geographic location , profile photo url , and bio .after we filtered out api failures , duplicates , and users with zero tweet or empty profile , we are left with 8,688,638 tweets from 62,445 distinct users . due to the limited information that linkedin apiallows us to retrieve , we employed web crawling techniques to obtain the desired information directly from the webpage .we built a program that does automated linkedin public people search and used it to search users based on the most common 1,000 surnames for asians , blacks , and hispanic , and more than 5,000 common american given names . despite some overlapping surnames , the large number of names we searched is still able to ensure the diversity of the potential role models , and our results confirmed that . for each search , the maximum number of users returned is 25 , and we collected the public profile urls of all the returned users .after we deleted the duplicates , we retained 182,016 distinct linkedin users .we employed machine learning techniques to identify twitter college student users ( i.e. from incoming freshmen to seniors ) .first , we labeled our training set .we used regular expression techniques to label college student users and non - college student users .specifically , we studied patterns in users tweets and bio , and constructed 45 different regular expressions for string matching . for example , expressions such as `` i m going to college '' , `` # finalsweek '' , or `` university 19 '' are used to label college students ; and expressions such as `` professor of '' , `` manager of '' , or `` father '' are used to label non - college students .if a user s tweets and bio do not contain any of the 45 expressions , the user is unlabeled .we then manually checked and only counted the correctly labeled users . 
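a minimal sketch of this weak - labelling step is given below . the two pattern lists are illustrative stand - ins for the 45 expressions used in the study ( the full list is not reproduced in the text ) , and the tie - breaking order when both lists match is an assumption ; in the study the resulting labels were additionally checked by hand .

```python
import re

# illustrative stand-ins for the paper's 45 expressions (hypothetical, not the full list)
COLLEGE_PATTERNS = [r"i\s*'?m going to college", r"#\s*finalsweek", r"university\s*'?1[5-9]\b"]
NON_COLLEGE_PATTERNS = [r"professor of", r"manager of", r"\bfather\b"]

def weak_label(bio, tweets):
    """return 'college', 'non-college', or None (unlabeled) from a user's bio and tweets."""
    text = (bio + " " + " ".join(tweets)).lower()
    if any(re.search(p, text) for p in COLLEGE_PATTERNS):
        return "college"
    if any(re.search(p, text) for p in NON_COLLEGE_PATTERNS):
        return "non-college"
    return None   # left for the classifier trained in the next step

print(weak_label("cs major , university '19", ["#finalsweek is killing me"]))   # -> 'college'
```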
in the end , we are left with 2,413 labeled users , where 1,103 are college students and 1,310 are non - college students , as well as 60,032 unlabeled users .second , we trained our labeled data set to develop a reliable classifier using the libsvm library [ 4 ] in weka [ 8 ] .we chose svm for our binary classification because it is efficient for the size of our data set .we learned from nguyan et al.s study of language and age in tweets [ 20 ] that the usage of emoji , hashtag , and capitalized expressions such as `` haha '' and `` lol '' are good age indicators .we built on their study and took a step further to use these three features for differentiating college students ( i.e. specific age group ) from general users .we were also curious about whether re - tweet would be another good age indicator , so we also extracted this feature . for each user , each feature is represented by its _ relative frequency _ among the user s tweets : since relative frequencies are continuous , we discretized them into 10 bins with an equal width of 0.1 and assigned them with ordinal integer values for classification .-0.2 in -0.1 in fig.3 demonstrates our analysis of the four features . on average ,college student users use emojis and haha / lol more frequently while non - college student users use hashtags more frequently .we note that these results are consistent with the conclusions of a previous study [ 20 ] . however , there is not much difference in re - tweet between these two groups .we experimented training the classifier with and without re - tweet , our 10-folds cross - validation results showed that including re - tweet actually slightly lowers the accuracy of the classifier .thus , we confirmed that re - tweet is a noise and does not help us to differentiate college student users .our final classifier trained from the other three features achieves a high accuracy of 84% .we then used this trained classifier to infer college student users among the unlabeled users .we further labeled 18,351 users as college students , and with our manually labeled college student users , together we have labeled 19,454 college student users in total .our goal is to find diverse stem role models from linkedin in terms of geographic locations and industries .while the definition of a role model is subjective to an individual student , we take an objective view and consider people who have received stem education and work in stem - related industries or have a career in stem industries as role models .we first filtered out users who are outside of the united states and then built a _ role model identification _ program to find stem role models .the program takes in a user s profile url , crawls the contents in `` industry '' and `` education '' fields on the user s profile and only outputs the url if the user is a stem role model . 
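before turning to the details of the role - model identification program , the student - classification step described above can be sketched as follows . the features are the relative frequencies of emoji , hashtags and haha / lol among a user s tweets , discretized into ten equal - width bins ; the emoji test and the exact frequency normalization are assumptions of this sketch , and sklearn s svc is used here as a stand - in for libsvm inside weka ( it wraps the same underlying library ) .

```python
import numpy as np
from sklearn.svm import SVC

def binned_features(tweets):
    """relative frequency (here: fraction of tweets containing the marker) of emoji,
    hashtags and haha/lol, discretized into 10 bins of width 0.1 (ordinal 0-9)."""
    n = max(len(tweets), 1)
    def frac(pred):
        return sum(1 for t in tweets if pred(t)) / n
    raw = [
        frac(lambda t: any(ord(ch) >= 0x1F300 for ch in t)),          # crude emoji test (assumption)
        frac(lambda t: "#" in t),
        frac(lambda t: "haha" in t.lower() or "lol" in t.lower()),
    ]
    return [min(int(r * 10), 9) for r in raw]

def train_student_classifier(tweet_lists, labels):
    """tweet_lists: per-user lists of tweets; labels: the weak labels from the step above."""
    X = np.array([binned_features(tw) for tw in tweet_lists])
    y = np.array(labels)
    clf = SVC()            # rbf kernel by default; the study reports ~84% cross-validated accuracy
    clf.fit(X, y)
    return clf
```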
specifically , we divided all 147 linkedin industries into three groups , `` non - stem '' , `` stem '' , and `` stem - related '' .for example , `` biotechnology '' and `` computer software '' are `` stem '' , `` music '' and `` restaurants '' are `` non - stem '' , and `` financial services '' and `` management consulting '' are `` stem - related '' .we only consider those users who are under `` stem '' or under `` stem - related '' with a degree in stem majors as role models .we used the 38 stem majors offered at our university as our standard .after we obtained the profile urls of stem role models , we crawled their entire profiles using the urls .we successfully found 25,637 stem role models from 2,022 distinct locations in the united states , including some places in hawaii and alaska .fig.4 shows a rough visualization of the diverse geographic locations the stem role models come from .the top-10 cities that role models come from are , not surprisingly , san francisco , new york city , atlanta , los angeles , dallas , chicago , washington d.c . ,boston , seattle and houston .this section presents the methods we employed to extract the gender , race , geographic location and interests from college students and stem role models as well as our ranking algorithm that matches them based on the similarities of these attributes .we reiterate that our selection of attributes are supported by a variety of previous related studies .these factors can make the most influential pairing because they ensure that a student gets a mentor with a similar background for affinity. moreover , close geographic location and similar interests are valuable for potential real life interaction between the students and role models .we extracted race and gender from both textual and visual features , namely the users names and profile photos .we recognize that there are people who identify themselves with genders other than male and female ; we also recognize that there are a variety of ways for categorizing races .to build a prototype system , we will use male , female for gender categorization , and use white , black , asian , asian pacific islander ( i.e. api ) and hispanic for race categorization .-0.1 in in particular , we used genderize.io , face++ , and demographics to extract these two attributes . genderize.io anddemographics predict gender or both gender and race based on the user s given name or full name while face++ predicts both using the user s profile photo . in total, we obtained three gender predictions and two race predictions for each user .each prediction is returned with an accuracy , and in the case the tool fails to predict , the prediction will be null .we picked the gender and race predictions with the highest accuracy . as a result , we extracted the gender of 80% college students and 97% role models , and the race of 46% college students and 92% role models .almost all role models have both attributes since we used their linkedin profiles , where the profile photos are usually high quality and the names are usually real . 
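the selection rule just described for the gender and race predictions ( keep whichever of the three services reports the highest accuracy , and fall back to nothing when all of them fail ) can be written in a few lines ; the ( value , accuracy ) tuple format is an assumption about how the service responses are stored .

```python
def pick_best(predictions):
    """predictions: iterable of (value, accuracy) pairs, one per service; a failed
    prediction is represented as (None, 0.0). returns the highest-accuracy value or None."""
    valid = [(v, a) for v, a in predictions if v is not None]
    return max(valid, key=lambda p: p[1])[0] if valid else None

# e.g. gender from genderize.io, face++ and demographics (illustrative numbers):
gender = pick_best([("male", 0.93), ("male", 0.71), (None, 0.0)])   # -> "male"
```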
in contrast , twitter profiles sometimes can contain profile photos with random objects and invented names .fig.5 shows the make - up of those college students and stem role models whose gender and race were successfully extracted .we directly extracted geographic locations from the `` location '' field in twitter and linkedin profiles .the interests extraction is less straightforward and we used other features as proxies for this attribute .we were able to extract the locations of all stem role models since linkedin requires users to have a valid geographic location on their profiles .these locations usually contain the city and the state that role models work in .however , twitter does not have this requirement , and we noticed that not every college student has filled the location field on his profile and some of the filled locations are not valid .in fact , 34% twitter users either did not fill the `` location '' field or provided fake geographic locations ; among those valid locations , roughly 65% are at city - level [ 11 ] .in addition , we observed that many students use the name of their educational institutions as locations , and some locations are not correctly spelled or formatted .for example , a student s location is `` mcallentx '' , which refers to the city mcallen in texas , but not a place called `` mcallentx '' . due to the difference in the nature of linkedin and twitter , we selected different proxies as interests for role models and college students . for role models , we directly extracted the contents in `` interests '' and `` skills '' fields as their interests because skills such as `` web development '' can also be an interest , and people usually are good at things that they are interested in . for college students , we extracted hashtags ( excluding prefix `` # '' ) as interests .a hashtag is a user - defined , specially designated word in a tweet , prefixed with a `` # '' [ 31 ] .originally , we experimented lda topic modeling to discover topics of interests from all college students tweets and intended to use these to define each student s interests .however , due to the noise and non - interest related terms in tweets ( excluding stop words and non - english words ) , most of the terms generated are too generic to be defined as topics of interests .therefore , we extracted one s unique hashtags as proxy for interests .hashtags have been used in characterizing topics in tweets [ 23 ] and have shown to be interest - related to a decent extent [ 32 ] .although high - frequency hashtags are intuitively better representations of one s interests , including all unique hashtags allows us to extract a wilder range of interests . after we extracted interests from both students and role models ,we stored everyone s interests as a set which we call _ interest set_. the size of the set varies from user to user depending on the number of interests of that user .we rank all stem role models for each student based on the similarities of their attributes .specifically , for each comparison of a student and a role model , we calculate the similarity of each attribute , and rank the role model based on the arithmetic average across similarities of all attributes .we will now explain our methods used for each comparison . for gender and race ,we simply compared if the two people have the same string for gender or race . 
in our case , there are two strings for gender , `` female '' and `` male '' , and five strings for race , `` white '' , `` black '' , `` asian '' , `` api '' , and `` hispanic '' . therefore , the _ gender similarity _ is either 1 or 0 because two people either have the same gender or not , and the same went for _ race similarity_. for geographic locations , we used string comparison method to measure the similarity of two locations .originally , we experimented two ways to calculate it : the actual distance between two locations based on their latitudes and longitudes , and the levenshtein distance between the two strings that represent the two locations . due the variety of possible expressions of the same location , traditional tool such as geocodercan only correctly convert well - formatted locations that do not contain non - letter characters .for example , a real college student has location `` buffalo state 18 psych majorr '' and it can not be successfully converted into coordinates using geocoder , but clearly that the student studies in buffalo . since our objective is to be able to compare as many locations as possible , we decided to use string comparison , which allows the flexibility of using various location representations for the same place . specifically , we employed levenshtein distance [ 13 ] to calculate the distance between two strings , and the _ levenshtein - based similarity _( a ratio between 0 and 1 ) is defined as : where in the case of location , is the string of the student s ( role model s ) location , and we then have our _ location similarity_. a minor problem of this similarity measure is that two geographically different locations might contain similar words and have a high similarity , such as `` washington d.c . ''and `` washington state '' . butthis happens relatively rare only if there are enough people from one of the location or both .we used jaccard coefficient [ 10 ] combined with _levenshtein - based similarity _ to compute the similarity of two _interest sets_. hashtags are often not real words but a combination of words without spaces . while a real student s hashtag `` computersciencelife '' and a real role model s interest `` computer science '' clearly refer to the same interest in the field of computer science , the two strings are different and have a _ levenshtein - based similarity _ of 0.86 . therefore , in order to capture the overlapping interests between two _interest sets _ , we need a threshold for _ levenshtein - based similarity _ that decides whether two strings refer to the same interest . after extensive experimenting with real data , we chose our threshold to be 0.8 .our _ interest similarity _ is then defined as : where is the student s ( role model s ) _interest set_. a potential problem is that since our measurement is string - based but not concept - based , it might not capture the synonymous of interests as overlapping interests . after we calculated the similarities of all four attributes , we combined them by taking the arithmetic average and used that to rank the role models . in the cases of missing values ,any unlabeled attributes is not taken into account .for instance , if a student does not have gender information , the arithmetic average will entirely depend on the similarities of his other three attributes . 
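putting the pieces of this section together , a compact sketch of the ranking step is given below . students and role models are represented as plain dicts , which is an assumption of the sketch ; the normalization of the levenshtein - based similarity by the longer string and the way fuzzy overlaps are counted inside the jaccard coefficient are also assumptions , since the two formulas are not reproduced in the text . the 0.8 threshold , the 0/1 gender and race similarities , and the arithmetic average over the attributes that are actually available follow the description above .

```python
def lev(a, b):
    """plain dynamic-programming levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lev_sim(a, b):
    """levenshtein-based similarity in [0, 1] (normalization by the longer string is assumed)."""
    return 1.0 - lev(a, b) / max(len(a), len(b)) if a and b else 0.0

def interest_sim(s1, s2, thr=0.8):
    """fuzzy jaccard: two interests overlap when their levenshtein-based similarity >= thr."""
    overlap = sum(1 for a in s1 if any(lev_sim(a, b) >= thr for b in s2))
    return overlap / (len(s1) + len(s2) - overlap) if (s1 and s2) else 0.0

def score(student, model):
    """arithmetic average over the attribute similarities that can actually be computed."""
    sims = []
    for attr in ("gender", "race"):
        if student.get(attr) and model.get(attr):
            sims.append(1.0 if student[attr] == model[attr] else 0.0)
    if student.get("location") and model.get("location"):
        sims.append(lev_sim(student["location"].lower(), model["location"].lower()))
    if student.get("interests") and model.get("interests"):
        sims.append(interest_sim(student["interests"], model["interests"]))
    return sum(sims) / len(sims) if sims else 0.0

def top5(student, role_models):
    return sorted(role_models, key=lambda m: score(student, m), reverse=True)[:5]
```

note that , as in the text , a missing attribute simply drops out of the average rather than being scored as zero .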
in this section, we verified our ranking algorithm on 2,000 college students from the 297 most populated cities in the united states [ 28 ] .all these students are randomly selected from our database .we manually evaluated their top-5 ranked role models , and we also recommended these role models to them via twitter .although it is desirable to evaluate the ultimate impact of our study , we recognize that this would require tracking the subjects of the study over their career of substantial length ( e.g. , over 10 years ) .therefore , it is beyond the scope of this study , and we decided to use matching accuracy as the performance measure , which is defined as : where n is the second metric , the specific number of role models out of the top-5 that are correctly matched with the student .it represents the granularity level of matching .we consider a student is _ correctly _ matched with a role model if the linkedin user is indeed a stem role model and has the same gender , race and geographic location as the student .we did not evaluate interests since they are often not explicitly stated in social media and it would be too difficult to discover every student s real interests by reading his tweets .we took a careful effort to manually evaluate the matching results of these 2,000 college students by checking their twitter profile pages and the linkedin profile pages of their top-5 ranked stem role models .we utilized all the information on their respective social media profiles to determine their gender , race , and geographic location . in order to determineif someone is indeed a stem role model , we make our best judgment , as a career counselor would , based on the entire linkedin profile , which usually includes demographic background , personal summary , industry , education , working experience and skills .if we failed to determine any of the three attributes of a student , we will have to consider that he is not correctly matched with any role model because we are unable to conduct the evaluation .consequently , for twitter public accounts and students with unlabeled gender , race , or invalid location , they all receive zero correctly matched role models .location should not have been a problem since we selected these students by their locations , but we found that a handful of students have removed or changed their locations after we collected the data .-0.15 in -0.1 in table 1 shows two randomly selected representative examples of the matching results for two students .we consider that the student in the top table was correctly matched with all five role models and the student in the bottom table was only correctly matched with # 3 and # 4 role models at state - level because # 1 role model is not in stem - related occupation and # 2 and # 5 are not asian .none of the role models was correctly matched at city - level .taking into consideration that our limited database of stem role models may have an impact on the performance of the ranking algorithm , we conducted evaluation in four levels : city - level for 297 cities , state - level for top 297 cities , city - level for top-10 cities , and state - level for top-10 cities .among the 2,000 selected students , about a quarter of the selected students are from the top 10 cities .intuitively , we expect more students to be correctly matched with role models at the state - level than city - level .also , we expect students from the top-10 cities to be correctly matched with more stem role models because there should be more diverse 
role models in these cities .-0.1 in -0.1 in -0.1 in in fig.6 we show the overall matching accuracy in the four levels .we first look at our baseline , the city - level for 297 cities .42% of the college students were correctly matched with at least one role model .we noticed that around half of them was not matched with any role model and this is partly due to those college students with unlabeled gender and race information .we then noticed that our ranking algorithm performs better in the 10 cities than in the 297 cities for both city and state levels .numerically , the difference increases as the the minimum number of correctly matched role models decreases .if we look at students who were correctly matched with at least one role model , for both city and state levels , the top-10 cities outperforms the 297 cities by 33% and 21% , respectively ; the ranking algorithm achieves a decent accuracy of 57% in both city and state levels for the 10 cities .also , our ranking algorithm performs better in the state - level than in city - level for the 297 cities . with students who were at leastcorrectly matched with one role model , the difference is 13% , which is smaller but still very significant .however , there is almost no difference in state and city levels for the top-10 cities .a possible explanation is that because there are more stem role models of various types in the top-10 cities , the student can usually get matched with stem role models who are from the exact same city . during our evaluation , we are encouraged to see that there is a good variety of stem role models in different industries even for students with the same demographic background .we think this is a positive indicator that the attribute , interests , in fact contributes to our ranking algorithm .-0.1 in -0.1 in in order to make a real - life impact , for each student , we generated a personalized webpage and delivered the link of the webpage via tweeting at him from the official twitter account of our study .fig.7 shows an example of such webpage .it contains the linkedin public profile links of his top-5 role models and a survey regarding the accuracy of our recommendations .we only received a very small number of responses and conducted preliminary analysis .all responses indicated that they are indeed currently college students , a third agree that the recommendations are good and a third indicated that they would be more interested in stem majors / careers if they had role models in stem fields .we would need more responses to validate our implementation , and a potential way to do so is to cooperate with our university , apply the ranking algorithms on students who are twitter users and ask for responses .in this paper , we present an innovative social media - based approach to promote stem education by matching college students on twitter with stem role models from linkedin .our ranking algorithm achieves a decent accuracy of 57% in the city - level for the top-10 cities that the stem role models come from .we also design a novel implementation that recommends the matched role models to the students . 
in this paper , we present an innovative social media - based approach to promote stem education by matching college students on twitter with stem role models from linkedin . our ranking algorithm achieves a decent accuracy of 57% at the city level for the top-10 cities that the stem role models come from . we also design a novel implementation that recommends the matched role models to the students . to achieve this , we identified college students from twitter and stem role models from linkedin , extracted race , gender , geographic location and interests from their social media profiles , and developed a ranking algorithm to select the top-5 stem role models for each student . we then created a personalized webpage with the student's role models and recommended the webpage to the student via twitter . our recommendation is not imposed on either side : it is the student's choice whether to initiate the connection with the role models via linkedin or other methods , and it is for the role models to decide whether to accept linkedin invitations or other forms of communication . in the case of linkedin , note that if a student decides to approach a potential role model , he can express why he would like to get connected ( e.g. , interest in stem fields ) , and the role model can make his own judgment . one may worry that our implementation of recommendations could be considered a form of spamming on students ; however , our intention is clearly to help their careers , and _ not to profit _ from them . there are several possible extensions of our study . our approach might have a reduced effect on college seniors , since it is more difficult for them to switch majors ; however , it is not uncommon for students to change their career paths after graduation , and in the future we could recommend role models with similar experiences specifically to seniors . we could also expand our target population to high school students , or focus on promoting stem education specifically to minority college students . in addition , we could classify stem role models into specific groups , such as current stem - major college students and experienced stem professionals , since students might feel more comfortable reaching out to their peers . finally , we could design an application based on our implementation to achieve real - time matching , where a college student could log into our application with their twitter account and we could collect their data , extract their attributes , and give them stem role model recommendations in real time . this application could also be generalized to other social media , since many of the methods we used are compatible with other platforms . we hope this study can serve as a starting point to make use of the rich data and powerful networking ability of social media `` by the people '' in order to promote stem education and build positive influence `` for the people '' . this work was supported in part by the xerox foundation and by new york state through the goergen institute for data science at the university of rochester . we thank all anonymous subjects for contributing to the evaluation of our system . chang , c.-c . , lin , c.-j . : libsvm : a library for support vector machines ( 2001 ) . duggan , m. , ellison , n. b. , lampe , c. , lenhart , a. , madden , m. : demographics of key social networking platforms ( january 2015 ) . emmerik , h. v. , baugh , s. g. , euwema , m. c. : who wants to be a mentor ? an examination of attitudinal , instrumental , and social motivational components . career development international 10 , 4 , 310 - 340 ( 2005 ) . hecht , b. , hong , l. , suh , b. , chi , e. h. : tweets from justin bieber's heart : the dynamics of the `` location '' field in user profiles . proceedings of the acm chi conference on human factors in computing systems ( 2011 ) . machi , e. : improving u.s . competitiveness with k-12 stem education and training ( sr 57 ) . a report on the stem education and national security conference , october 21 - 23 ( 2008 ) . merrill , c. , custer , r. l. , daugherty , j. , westrick , m. , zeng , y. : delivering core engineering concepts to secondary level students . journal of technology education 20 , 1 , 48 - 64 ( 2008 ) . nguyen , d. , gravel , r. , trieschnigg , d. , meder , t. : `` how old do you think i am ? '' : a study of language and age in twitter . proceedings of the international conference on weblogs and social media , pp . 439 - 448 ( 2013 ) . xu , z. h. , lu , r. , xiang , l. , yang , q. : discovering user interest on twitter with a modified author - topic model . proceedings of the 2011 ieee / wic / acm international conferences on web intelligence and intelligent agent technology ( 2011 ) . heldman , a. b. , schindelar , j. , weaver , j. b. : social media engagement and public health communication : implications for public health organizations being truly `` social '' . public health reviews 35 , 1 ( 2013 ) . hadgu , a. t. , jäschke , r. : identifying and analyzing researchers on twitter . proceedings of the acm conference on web science , pp . 23 - 32 ( 2014 ) . mislove , a. , lehmann , s. , ahn , y.-y . , onnela , j.-p . , rosenquist , j. n. : understanding the demographics of twitter users . proceedings of the fifth international aaai conference on weblogs and social media , aaai press , pp . 554 - 557 ( 2012 ) .
stem ( science , technology , engineering , and mathematics ) fields have become increasingly central to u.s . economic competitiveness and growth . the shortage in the stem workforce has brought the promotion of stem education to the forefront . the rapid growth of social media usage provides a unique opportunity to predict users' real - life identities and interests from online texts and photos . in this paper , we propose an innovative approach that leverages social media to promote stem education : matching college student twitter users with diverse linkedin stem professionals using a ranking algorithm based on the similarities of their demographics and interests . we share the belief that increasing stem presence , in the form of introducing career role models who share similar interests and demographics , will inspire students to develop interest in stem - related fields and to emulate their models . our evaluation on 2,000 real college students demonstrates the accuracy of our ranking algorithm . we also design a novel implementation that recommends the matched role models to the students . keywords : recommendation systems , social media , text mining
there exists an open discussion on the validity of online interactions as indicators of real social activity .most of the online social networks incorporate several types of user - user interactions that satisfy the need for different level of involvement or relation intensity between users .the cost of establishing the cheapest relation is usually very low , and it requires the acceptation or simply the notification to the targeted user .these connections can accumulate due to the asymmetric social cost of cutting and creating them , and pile up to the astronomic numbers that capture popular imagination . if the number of connections increases to the thousands or the millions , the amount of effort that a user can invest into the relation that each link represents must fall to near zero .does this mean that online networks are irrelevant for understanding social relations , or for predicting where higher quality activity ( e.g. , personal communications , information transmission events ) is taking place ? by analyzing the clusters of the network formed by the cheapest connections between users of twitter, we show that even this network bears valuable information on the localization of more personal interactions between users .furthermore , we are able to identify some users that act as brokers of information between groups . the theory known as _ the strength of weak ties _ proposed by granovetter deals with the relation between structure , intensity of social ties and diffusion of information in offline social networks .it has raised some interest in the last decades and its predictions have been checked in a mobile phone calls dataset .on one hand , a tie can be characterized by its strength , which is related to the time spend together , intimacy and emotional intensity of a relation .strong ties refer to relations with close friends or relatives , while weak ties represent links with distant acquaintances . on the other hand ,a tie can be characterized by its position in the network .social networks are usually composed of groups of close connected individuals , called communities , connected among them by long range ties known as bridges . a tie can thus be internal to a group or a bridge .grannoveter s theory predicts that weak ties act as bridges between groups and are important for the diffusion of new information across the network , while strong ties are usually located at the interior of the groups .burt s work later emphasizes the advantage of connecting different groups ( bridging structural holes ) to access novel information due to the diversity in the sources .more recent works , however , point out that information propagation may be dependent on the type of content transmitted and on a _ diversity - bandwidth tradeoff _ .the bandwidth of a tie is defined as the rate of information transmission per unit of time .aral et al . note that weak ties interact infrequently , therefore have low bandwidth , whereas strong ties interact more often and have high bandwidth .the authors claim that both diversity and bandwidth are relevant for the diffusion of novel information .since both are anticorrelated , there has to be a tradeoff to reach an optimal point in the propagation of new information .they also suggest that strong ties may be important to propagate information depending on the structural diversity , the number of topics and the dynamic of the information . 
due to the different nature of online and offline interactions ,it is not clear whether online networks organize following the previous principles .our aim in this work is to test if these theories apply also to online social networks .online networks are promising for such studies because of the wide data availability and the fact that different type of interactions are explicitly separated : e.g. , information diffusion events are distinguished from more personal communications .diffusion events are implemented as a system option in the form of _ share _ or _ repost _ buttons with which it is enough to single - click on a piece of information to rebroadcast it to all the users contacts .this is in contrast to personal communications and information creation for which more effort has to be invested to write a short message and ( for personal communication ) to select the recipient .all these features are present in twitter , which is a micro - blogging social site .the users , identified with a username , can write short messages of up to characters ( tweets ) that are then broadcasted to their followers .when a new follower relation is established , the targeted user is notified although his or her explicit permission is not required .this is the basic type of relation in the system , which generates a directed graph connecting the users : the follower network .after some time of functioning , some peculiar behaviors started to extend among twitter users leading to the emergence of particular types of interactions .these different types of interactions have been later implemented as part of twitter s system . _mentions _ ( tweets containing ) are messages which are either directed only to the corresponding user or mentioning the targeted user as relevant to the information expressed to a broader audience .a _ retweet _ ( rt ) corresponds to content forward with the specified user as the nominal source .in contrast to the normal tweets , mentions usually include personal conversations or references while retweets are highly relevant for the viral propagation of information .this particular distinction between different types of interactions qualifies twitter as a perfect system to analyze the relation between topology , strength of social relation and information diffusion in online social networks .the properties of the follower network have been extensively analyzed especially in relation to its topological structure , propagation of information , homophily , tie formation and decay , etc . finding users with thousands or even millions of followers is not exceptional , so the question is whether the structure of the follower network carries any information on where personal relations ( mentions ) or information transmission events ( retweets ) take place . 
to answer this question , we first analyze a sample of the follower network with clustering - detection algorithms and identify a set of groups .our dataset is a sample of the network containing users connected with follower relations , as well as the tweets , retweets , mentions , and was gathered through the twitter api during november and december of ( see the methods section for further detail ) .whether the clusters we identify are traces of underlying social groups ( online or offline ) is a question we can not answer with the available information .we follow an alternative path by checking the correlation between the location of the personal conversations ( mentions ) and information diffusion events ( retweets ) and the structural properties of the link bearing those activities with respect to the detected groups in the network .note that we consider mentions and retweets to happen always on follower links .this allow us to describe user activity in terms of the detected groups .our first step is to identify the groups in the follower network .clustering in large graphs is still a topic of very active research and many algorithms are available . due to the size , density , and directness of the follower network and in order to capture the possible inclusion of users in multiple groups or in none , we have used oslom ( see methods ) .the analysis has also been performed with other clustering techniques , reaching similar conclusions ( see setion in supplementary information [ figs .s6-s14 and table s1 ] for a detailed account on these results ) .we have detected groups , three of which are graphically depicted in figure 1a with each sphere corresponding to a single user . in general , the links can be classified according to their position with respect to the user groups : internal , between groups , intermediary and links involving nodes not assigned to any group as shown in figure 1b .the statistics characterizing the groups and links are displayed in figure 2 .the group size distribution decays slowly for three orders of magnitude and does not show a characteristic group size ( figure 2a ) .for instance , the largest group contains around users . also the number of groups each user belongs to shows high heterogeneity : of the users has not been allocated to any group , while there exists a user belonging to more than groups ( see figure 2b ) .the percentage of links falling in the different types regarding the groups is depicted in figure 2c . although the non - classified users are of the total , the links connected to them are less than and the percentage is even lower for those with mentions or retweets .the most common type of connections is the between - group links .one may wonder if the algorithm for clusters detection is doing a good job when there is such a large proportion of between - group links .the clustering method is trying to find groups of mutually interconnected nodes that would be extremely rare in a randomized instance of the network , rather than optimizing the ratio between number of between - group and internal links . in sections and of the supplementary information ( figs .s1-s5 ) , this argument is further developed and the capacity of oslom to detect planted communities is proved in a benchmark even in situations with a high ratio between the number of between - groups and internal links . another relevant point to highlight is the different potential of each type of links to carry mentions and retweets . 
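before turning to that point , the link categories of figure 1b can be made concrete with a small sketch that classifies a follower link with respect to a ( possibly overlapping ) group assignment ; the data structures and names below are hypothetical and only illustrate the definitions used in this work .

```python
# Hypothetical illustration of the link taxonomy used above: internal,
# between-group, intermediary, or involving a node not assigned to any group.

def classify_link(u, v, groups_of):
    """`groups_of` maps a user id to the set of group ids it belongs to
    (possibly empty, possibly more than one, as with OSLOM)."""
    gu, gv = groups_of.get(u, set()), groups_of.get(v, set())
    if not gu or not gv:
        return "no group"          # at least one node is not assigned to any group
    shared = gu & gv
    if not shared:
        return "between groups"    # no group in common
    if len(gu) > 1 or len(gv) > 1:
        return "intermediary"      # shared group, plus membership in other groups
    return "internal"              # both users belong only to the shared group

if __name__ == "__main__":
    groups_of = {"a": {1}, "b": {1}, "c": {2}, "d": {1, 2}, "e": set()}
    for edge in [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e")]:
        print(edge, "->", classify_link(*edge, groups_of))
```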
as can be seen in figure 2c , the red bars for mentions on internal links and on intermediary links almost double the abundance of follower links in those categories , whereas the links between groups attract far fewer mentions . [ figure 3 : ( a ) fraction of internal links as a function of the group size in number of users ; the curve for the follower network acts as the baseline for mentions and retweets , so that if mentions / retweets appeared at random over follower links the red / green curve would match the black curve . ( b ) distribution of the number of mentions per link . ( c ) fraction of links with mentions as a function of their intensity ; the dashed curves are the totals for the follower network ( black ) and for the links with mentions ( red ) , while the other curves correspond , from bottom to top , to fractions of links with 1 non - reciprocated mention ( diamonds ) , 3 mentions ( circles ) , 6 mentions ( triangles up ) and more than 6 reciprocated mentions ( triangles down ) . ] besides their location with respect to the groups , the links can also be characterized by their intensity . in twitter , mentions are typically used for personal communication , which establishes a parallelism between links with mentions and the strength of social ties : the more mentions that have been exchanged between two users , and even more so if they are reciprocated , the stronger we consider the tie between them . we define the intensity of a link as the number of mentions interchanged on it . different predictors have been considered to estimate social tie strength , including , for instance , time spent together or the duration of phone calls . we take the intensity as an approximation to social strength , given that writing a mention involves some effort and addresses only a single targeted user . according to granovetter's theory , one could expect the internal connections inside a group to bear closer relations ; mechanisms such as homophily , cognitive balance or triadic closure favor this kind of structural configuration . unfortunately , we have no means to measure the closeness of a user - user relation in a sociological sense in our twitter dataset .
however , we can verify whether a link has been used for mentions , whether the interchange has been reciprocated , and whether it has happened more than once . we define the fraction of links with a given interaction in a given position with respect to groups of a given size as the number of links with that type of interaction in that position and for that group size , divided by the total number of links with that interaction . these fractions reveal an interesting pattern as a function of the group size , as can be seen in figure 3a . note that the fraction of links in the follower network ( black curve ) is taken as the reference for comparison . links with mentions are more abundant as internal links than the baseline follower relations for groups up to a certain size . this particular value is reminiscent of the quantity known as the dunbar number , the cognitive limit to the number of people with whom each person can have a close relationship , which has recently been discussed in the context of twitter . although we have identified larger groups , their density of mentions is similar to the density of links in the follower network . in addition , the distribution of the number of times that a link is used for mentions ( its intensity ) is wide , which allows for a systematic study of the dependence between intensity and position ( see figure 3b ) . the more intense ( or reciprocated ) a link with mentions is , the more likely it is to be found as an internal link ( figure 3c ) . this corresponds to granovetter's expectation that the stronger a tie is , the higher the number of mutual contacts both parties share . [ figure 4 : fraction of links of the different types ( follower , with mentions and with retweets ) as a function of the size of the group at the link origin , and ( c ) at the targeted group ; ( d ) frequency of between - group links as a function of the group - group similarity for the different types of links ; in the inset , the ratio between the frequencies of links with retweets and with mentions . ]
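a minimal sketch of this fraction , computed from a list of links annotated with their position , the size of the group they belong to and the interactions they carry , could read as follows ( the record layout is our own , purely illustrative choice ) .

```python
# Hypothetical sketch of the fraction defined above: for each group size,
# the number of links in a given position that carry a given interaction,
# divided by the total number of links carrying that interaction.
from collections import Counter

def fraction_by_group_size(links, interaction, position):
    """`links` is an iterable of dicts with keys 'position' (e.g. 'internal'),
    'group_size' (size of the group the link belongs to, or None) and
    'interactions' (a set of labels such as {'mention', 'retweet'})."""
    links = list(links)
    total = sum(1 for l in links if interaction in l["interactions"])
    counts = Counter(
        l["group_size"]
        for l in links
        if interaction in l["interactions"]
        and l["position"] == position
        and l["group_size"] is not None
    )
    return {size: n / total for size, n in sorted(counts.items())}

if __name__ == "__main__":
    toy = [
        {"position": "internal", "group_size": 3, "interactions": {"mention"}},
        {"position": "internal", "group_size": 3, "interactions": {"mention", "retweet"}},
        {"position": "between groups", "group_size": None, "interactions": {"retweet"}},
    ]
    print(fraction_by_group_size(toy, "mention", "internal"))
```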
according to the strength of weak ties theory , weak links are typically connections between persons not sharing neighbors , being important to keep the network connected and for information diffusion .we investigate whether the links between groups play a similar role in the online network as information transmitters .the actions more related to information diffusion are retweets that show a slight preference for occurring on between - group links ( figures 4b and 4c ) .this preference is enhanced when the similarity between connected groups is taken into account .we define the similarity between two groups , a and b , in terms of the jaccard index of their connections : the similarity is the overlap between the groups connections and it estimates network proximity of the groups .the general pattern is that links with mentions more likely occur between close groups and retweets occur between groups with medium similarity ( figure 4d ) .mentions as personal messages are typically exchanged between users with similar environments , what is predicted by the strength of weak ties theory .links with retweets are related to information transfer and the similarity of the groups between which they take place should be small according to the granovetter s theory .the results show that the most likely to attract retweets are the links connecting groups that are neither too close nor too far .this can be explained with aral s theory about the trade - off between diversity and bandwidth : if the two groups are too close there is no enough diversity in the information , while if the groups are too far the communication is poor .these trends are not dependant on the size of the considered groups ( see section 3 [ figs s6-s14 and table s1 ] in the supplementary information ). the communication between groups can take place in two ways : the information can propagate by means of links between groups or by passing through an intermediary user belonging to more than one group .we have defined as intermediary the links connecting a pair of users sharing a common group and with at least one of the users belonging also to a different group ( see fig .these users and their links have a high potential to pass information from one group to another in an efficient way .several previous works pointed out to the existence of special users in twitter regarding the communication in the network . 
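returning to the group - group similarity defined above ( the jaccard index of the sets of links attached to each group ) , a minimal sketch with networkx could read as follows ; the function names are our own .

```python
# Sketch of the group-group similarity: Jaccard index of the sets of links
# attached to each group (links originating from or targeting its nodes).
import networkx as nx

def group_links(g, members):
    """All directed edges that start or end at a node of the group."""
    members = set(members)
    return {(u, v) for u, v in g.edges() if u in members or v in members}

def group_similarity(g, group_a, group_b):
    la, lb = group_links(g, group_a), group_links(g, group_b)
    union = la | lb
    return len(la & lb) / len(union) if union else 0.0

if __name__ == "__main__":
    g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")])
    print(group_similarity(g, {"a", "b"}, {"c", "d"}))  # 3 shared / 5 total = 0.6
```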
in order to estimate the efficiency of the different types of links as attractors of mentions and retweets , we measure a ratio for links in position and for interaction defined as where , as before, is the number of links with the interaction in position and is the total number of links in that position .the bar plot with the values of is displayed in figure 5a .the efficiency of the different type of links can thus be compared for the attraction of mentions ( red bars ) and retweets ( green bars ) .links internal to the groups attract more mentions and less retweets than links between groups in agreement with the predictions of the strength of weak ties theory .intermediary links attract mentions as likely as internal links : the fraction of intermediary links with mentions is very close to the fraction of internal links with mentions .this is expected because intermediary links are also internal to the groups .however , the aspect that differentiates more intermediary links from other type of links is the way that they attract retweets .intermediary links bear retweets with a higher likelihood than either internal or between - groups connections ( see figure 5a and section [ figs .s1-s4 ] in the supplementary information ) .this fact can be interpreted within the framework of the tradeoff between diversity and bandwidth : strong ties are expected to be internal to the groups and to have high bandwidth , while ties connecting diverse environments or groups are more likely to propagate new information .high bandwidth links in our case correspond to those with multiple mentions , while links providing large diversity are the ones between groups .intermediary links exhibit these two features : they are internal to the groups and statistically bear more mentions , and introduce diversity through the intermediary user membership in several groups .although some theoretical works suggest that ties with high bandwidth and high diversity should be scarce , we find that intermediary links are as abundant as internal links ( see fig .moreover , in line with the theories , higher diversity increases the chances for a link to bear retweets as can be seen in figure 5b , which implies a more efficient information flow . in the inset of the figureit is shown that the number of non - shared groups assigned to the users connected by the link positively correlates with a higher than expected number of retweets .in summary , we have found groups of users analyzing the follower network of twitter with clustering techniques . the activity in the network in terms of the messages called mentions and retweets clearly correlates with the landscape that the presence of the groups introduces in the network. mentions , which are supposed to be more personal messages , tend to concentrate inside the groups or on links connecting close groups .this effect is stronger the larger the number of mentions exchanged and if they are reciprocated .retweets , which are associated to information propagation events , appear with higher probability in links between groups , especially those that connect groups that do not show a high overlap , and more importantly on links connected to users who intermediate between groups .these intermediary users belong to multiple groups and play an important role in the spreading of information .they acquire information in one group and launch retweets targeting the other groups of which they are members . 
at the same time, the access to new information can transform them into attractive targets to be retweeted by their followers .the relevance of certain users for the spread of information in online social media has been discussed in previous works .our method provides a way to identify these special users as brokers of information between different groups using as only input the follower network . from the sociological point of view , the way that the activity localizes with respect to the groups allow us to establish a parallelism with the organization of offline social networks.in particular , we have shown that the theory of the strength of weak ties proposed by granovetter to characterize offline social network applies also to an online network .furthermore , some of our results can be explained within the framework of burt s brokerage and closure and aral s diversity - bandwidth tradeoff theories .the specific properties of twitter offers an opportunity to study directly the importance of the links for personal communications or for information diffusion . according to these theories ,the strong social ties tend to appear at the interior of the groups or between close groups as happens for the links with mentions in twitter .in addition , the socially weak ties are expected to be more common connecting different groups and to be important for the propagation of information in the network .this is similar to what we observe for the links with retweets that concentrate with high probability in links between dissimilar groups or in intermediary links . besides the roles assigned by these two theories to the links, we have found that intermediary users and links are also an important component to take into account for understanding information propagation .these links tend to be characterized by high bandwidth and diversity in the context of aral s study , and exhibit high information diffusion efficiency . 
based on all these findings , despite the myth of one million friends and the doubts on the social validity of online links , the simplest connections of the online network bear valuable information on where higher quality interactions take place ..overall characteristics of the follower network and of the interactions taking place on it .[ cols="<,^ , > , > , > , > " , ] [ tab_clus ] in this section , we check the reliability of the results of the main text regarding the localization of the activity when the groups are obtained with clustering techniques different from oslom .the reasons to select oslom as the main method are that ( i ) the software is publicly available , ( ii ) the method is able to analyze the full directed follower network in a reasonable amount of time , ( iii ) it detects the overlapping communities ( bridging nodes and bridges ) and nodes not belonging to any group and ( iv ) the clusters obtained are statistically significant according to a clear null model .we do not ask the same properties to other methods explored here but they should meet the following conditions : the methods should be available online in the form of software tools , they should be able to deal with relatively large samples of dense graphs , and , if possible , they should include a version to analyze directed networks .we have found several methods satisfying fully or partially these conditions and so we will show in the remainder results with groups detected in the follower network ( or in sub - sampling of it ) by infomap , moses , a message - passing algorithm proposed by raghavan et al that we will refer to as real - time community detection and two algorithms to optimize modularity : the louvain method for community detection and a slower modularity optimization algorithm adapted to deal with directed networks that was implemented in the software radatools .cluster detection in graphs is a topic of very active research .there are plenty of methods in the literature based on different techniques and with different features and capabilities .the methods selected here are representative of some of the most popular approaches used today for searching groups in networks .modularity optimization is based on a comparison between the number of internal links and the average expected number in a random graph .it has been one of the most popular community detection methods in the last few years , although it is not free problems such as resolution limits or difficulties to find the absolute maximum of the modularity due to a rough landscape of its value in the space of the possible network partitions .the louvain method is based on a contraction of the network similar to real - space renormalization and attempts to keep the modularity function constant at each contraction step .moses includes an overlapping stochastic blockmodelling approach as the basis to its community detection algorithm .it provides a procedure for the optimization of the log - likelihood introduced by the blockmodelling approach .oslom is based on a structural approach comparing the internal and external links with the best expectations in an equivalent random graph . 
in this method ,the fact that clustering algorithms follow a process of optimization is incorporated to give an assessment of the probability of finding a similar group in a random network .the method then searches for groups in the given network that have low probability of appearance in the equivalent random graphs .the last tested method , infomap , is based on a different approach , trying to optimize the information fluxes in the given graph by the compartmentalization of the network .infomap is based on the optimization of the description of random walkers paths in the network using information theory concepts the clustering methods can be distinguished by other aspects apart from the approach that they use to find groups . to start with , some as infomap, radatools as well as oslom have specific versions to analyze directed graphs .we used them on the original or sub - samples of the directed follower network .in order to use the other three methods ( moses , louvain , real - time ) , we have symmetrized the network .the symmetrization consists in ignoring the directionality of the links , and considering them as undirected .this procedure neglects information that can be important to define the groups and can affect the performance of the methods .a second difference is the ability to find overlapping communities or bridging nodes . only oslom and moses are able to detect users belonging to more than one group . and , finally , the performance of the methods varies .a way to compare clustering methods is to generate benchmark networks in which the groups are a priori known , then to increase the level of disorder in the connections and to test up to which point the methods recover the planted groups .infomap was found to be one of the best performing methods in this sense in a recent comparative work , while oslom has been thoroughly tested in a later work getting results that are comparable to infomap s .a final point to consider is that not all the methods are able to cope with the large size and the high density of our empirical follower network .the computational cost of the analysis makes difficult for some of them to deal with large networks . for this reason , andas can be seen in table [ tab_clus ] , we have run the methods against several samplings of the follower network .these samplings include , when possible , the full network and , when not , subnetworks extracted with different procedures .we have implemented a snow - balling technique similar to the one that led to the collection of the original network and obtained subgraphs using shells of and hops starting from different initial seeding nodes .we have also generated subgraphs by choosing nodes at random and extracting all the links between them .another network sample has been extracted by considering the full graph without the hubs ( nodes with total degree higher than ) .this idea comes from the fact that in twitter there can be users such as celebrities with thousands and millions of followers but whose connections bear no information on bona fide social activity in the network . and , finally , as a sanity check, we have also generated a sub - graph formed by some of the groups detected by oslom . 
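the sub - sampling procedures just described are straightforward to reproduce ; the sketch below ( using networkx , with the shell depth , sample size and degree cut chosen arbitrarily , since the exact values are not repeated here ) illustrates the three of them .

```python
# Illustrative sketch of the three sub-sampling schemes described above:
# (i) snowball sampling by shells around seed nodes, (ii) induced subgraph
# on a random node sample, (iii) removal of hubs above a total-degree cut.
# Parameter values are placeholders, not the ones used in the study.
import random
import networkx as nx

def snowball(g, seeds, hops=2):
    nodes = set(seeds)
    frontier = set(seeds)
    for _ in range(hops):
        frontier = {nbr for n in frontier
                    for nbr in set(g.successors(n)) | set(g.predecessors(n))}
        nodes |= frontier
    return g.subgraph(nodes).copy()

def random_node_sample(g, n_nodes, seed=0):
    rng = random.Random(seed)
    chosen = rng.sample(list(g.nodes()), n_nodes)
    return g.subgraph(chosen).copy()

def remove_hubs(g, max_total_degree=1000):
    keep = [n for n in g.nodes()
            if g.in_degree(n) + g.out_degree(n) <= max_total_degree]
    return g.subgraph(keep).copy()

if __name__ == "__main__":
    g = nx.gnp_random_graph(200, 0.05, directed=True, seed=1)
    print(snowball(g, seeds=[0], hops=2))
    print(random_node_sample(g, 50))
    print(remove_hubs(g, max_total_degree=15))
```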
in the results of the main text ,the links with mentions are more abundant than the baseline inside groups of size up to users .larger groups do not seem to behave in the same way and the abundance of links with mentions fall upon the baseline .we find a similar signal when the groups are extracted with other clustering algorithms , see the summary of table [ tab_clus ] .the results for the full network for out of algorithms tested are in qualitative correspondence with the results of oslom .these include infomap s results for all the network samples ( infomap is supposed to be one of the most trustable methods for community detection ) .the results , together with results of oslom for the same sample of the network , are depicted in fig .[ fig_panel_sel0 ] .the figure reveals that the fraction of links with mentions inside of groups is higher than the fraction of any links inside groups irrespectively of the algorithm used . in case of all clustering algorithms ( including oslom ) , for both the sample and the full network , the effect is not visible for groups larger than - users ( this number varies for different algorithms ) .when we take into account the number of mentions and whether they are reciprocated , the results show a remarkably consistent pattern .the more mentions , especially reciprocated , the link has , the higher the probability that it is inside of a small group independently of the community detection algorithm used or the sample of the network considered ( see figs .[ fig_panel_sel3nei2b ] to [ fig_panel_sel2oslom5k ] ) . in order to checkwhether the results on the localization patterns of the activity in the links between groups discussed in the figure 4 of the main text and in figs [ maps ] of this supplementary can be reproduced with groups obtained by other clustering methods , we have repeated the analysis of the group - group links using the groups found by infomap .the results are in figure [ fig_infomap ] . even though the shape of some of the curves is different, the main qualitative results confirm the trends observed with oslom regarding the concentrations of mentions in links between more similar groups , and retweets in those connected groups with medium or low similarity .also the between - group links concentrate a higher quantity of retweets and slightly less mentions .the role of bridging users and bridging connections can be investigated with clustering algorithms capable of detecting overlapping communities , and so capable of assigning nodes to more than one group ( moses and oslom ) .the results for oslom has been presented in the main text . 
here in fig .[ fig_moses]a we show results for the moses algorithm , run for the sample of the network with removed hubs .the shape of the curves , especially for ratios between distributions for different types of links shown in fig .[ fig_moses]b , is consistent for the two clustering algorithms .the probability of having a retweet over a bridging link , which is proportional to the ratio , steadily grows from almost zero with the number of non - shared groups of the pair of interacting users until it reaches a maximum around value 20 , and then it drops down .the pattern for links with mentions remains consistent for both algorithms , with small maximum for smallest values of number of non - shared groups and steady decay for larger values .the theory of strength of weak ties have two formulations .the macroscopic formulation predicts strong ties to be inside of communities , whereas weak ties as the connectors between these communities .the microscopic formulation states that strong ties happen between users having many friends in common . in the paper , we focus on the communities but here we also show that in fact the microscopic predictions can be confirmed in twitter as well .jaccard similarity of two users is equal to number of shared followers by the two users divided by total number of unique followers the two users have . in fig .[ fig_usim ] we plot the distribution of the similarity for pairs of users who either send a mention or a retweet to each other or are simply connected by a follower tie .the distribution of similarity for the interaction which is supposed to be the most personal ( link with mention ) is shifted to the right , showing that indeed mentions happens much more often between users who share friends .this result and the finding , that mentions are more abundant inside of the communities , are consistent with the expectations of the theory of the strength of weak ties .in this section , we discuss in more detail the imbalance between the number of internal and between - group links that is seen in figure 3c and the effect it may have on cluster detection .the objective of a clustering algorithm is to find areas of the network relatively isolated and dense in internal connections .how is then possible that the number of overall internal links is lower than that of links between groups in the clusters found by oslom ?the answer is that oslom ( as many community detection algorithms ) is not attempting to optimize the balance between internal and between - group links in a direct way .the method searches for areas denser in internal connections than a baseline established by the properties of the random graphs obtained by reshuffling the links of the original network while maintaining the nodes degrees constant . to illustrate this idea , we have generated a benchmark formed by cliques ( fully connected subgraphs ) of size each . the final graph is then obtained by the addition of links between groups connecting nodes of different cliques at random . 
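a toy version of this benchmark is easy to set up . the sketch below builds planted cliques with networkx , adds random between - group links and , since oslom itself is an external tool , uses a stand - in community detection method purely for illustration , scoring the recovered partition against the planted one with the normalized mutual information discussed next .

```python
# Toy clique benchmark: planted cliques plus random between-group links.
# OSLOM is an external tool, so a stand-in (greedy modularity) is used here
# only for illustration; parameter values are arbitrary.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

def clique_benchmark(n_cliques=20, clique_size=10, n_between=300, seed=0):
    rng = random.Random(seed)
    g = nx.Graph()
    planted = {}
    for c in range(n_cliques):
        members = [c * clique_size + i for i in range(clique_size)]
        g.add_edges_from(nx.complete_graph(members).edges())
        planted.update({m: c for m in members})
    nodes = list(g.nodes())
    while n_between > 0:
        u, v = rng.sample(nodes, 2)
        if planted[u] != planted[v] and not g.has_edge(u, v):
            g.add_edge(u, v)
            n_between -= 1
    return g, planted

if __name__ == "__main__":
    g, planted = clique_benchmark()
    detected = {}
    for label, community in enumerate(greedy_modularity_communities(g)):
        detected.update({node: label for node in community})
    nodes = sorted(g.nodes())
    nmi = normalized_mutual_info_score([planted[n] for n in nodes],
                                       [detected[n] for n in nodes])
    print(f"recovered NMI: {nmi:.3f}")
```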
to quantify the level of similarity between the original cliques and the groups detected by oslom, we use the normalized mutual information between partitions .this quantity is equal to one when the two divisions of the network in groups the original cliques and the groups detected are strictly equal and tends to zero when there is no relation between them .the results as function of the ratio between the number of links between groups and the number of internal links ( ) is shown in figure [ nmi ] .oslom is able to detect the planted cliques up to high levels of the ratio internal vs between links , higher in any terms than the values seen in the real follower network . at the same time, the performance of the method improves with larger groups and with a larger number of cliques .cliques ( fully connected subgraphs ) of size each ., width=302 ] the reason for this ability is that the connections between groups are introduced at random , without any clear statistical preference for connections between two particular groups .oslom can detect these random links and ignore them to evaluate which nodes belong to each group despite the high ratio of between - groups links over internal links .we revisit the group similarity as defined in the main text .it is important to recall that the group - group similarity is defined as the jaccard index of the connections of two groups ( the ratio between the number of links in common and total links ) .we consider as the connections of a group all links originating from or targeting the nodes of the group .the similarity of the groups is dependent on the size of the groups of origin and destination of the particular link .figure [ ggsim ] shows how the average similarity of groups connected by follower relations depends on the size .the largest similarity concentrates in links connecting groups of similar size . although one should keep in mind the figure 4a of the main text that shows where most of links are located , namely between groups of size from to , making this region the most relevant .taking the average similarity of the groups for the links with mentions and for the links with retweets and plotting the ratio of these quantities divided by the baseline average similarity of the follower network links , we obtain figures [ maps]a and [ maps]b .the signal reproduces the results of the overall histogram in the figure 4d of the main text : ( i ) the links with mentions tend to connect groups with higher similarity compared to those with retweets or the baseline given by the follower network and ( ii ) the retweets normally happen in links between groups with a medium value of the similarity .the same picture is observed almost independently of the size of the group of origin and destination except for some areas that anyway concentrate a lower density of links .finally , we show results that complement the discussion on bridges of figure 5 of the main text . the fraction of follower links that are bridges , bridges with mentions or with retweets as a function of the groups size can be seen in figure 18a. note that although a node can belong to more than one group , the links usually belong to a single group unless they connect two bridging nodes .the size of the group in figure 18a refers to each of the groups a bridging link belongs to .if a link connects two bridging nodes and so belongs to several groups , the link is counted as many times as groups it is in . 
as mentioned in the main text , bridges are very effective in attracting retweets . this behavior , also present in figure 18a , contrasts with the one observed in figure 3 of the main text for internal links . what is similar between internal links and bridges , however , is their attraction of mentions ; this attraction increases with the number of times a link has been used for mentions ( see figure 18b ) , again in parallel to the results for purely internal links . the authors would like to thank sergio gómez , andrea lancichinetti and ian leung for making network - clustering software available and for their advice in its use . p.a.g . and j.j.r . acknowledge support from the jae program of the csic . funding was provided by the spanish ministry of science through the projects modass ( fis2011 - 24785 ) and mosaico ( fis2006 - 01485 ) .
an increasing fraction of today's social interactions occurs using online social media as communication channels . recent worldwide events , such as social movements in spain or revolts in the middle east , highlight their capacity to boost people's coordination . online networks display in general a rich internal structure where users can choose among different types and intensities of interaction . despite this , there are still open questions regarding the social value of online interactions . for example , the existence of users with millions of online friends casts doubt on the relevance of these relations . in this work , we focus on twitter , one of the most popular online social networks , and find that the network formed by the basic type of connections is organized in groups . the activity of the users conforms to the landscape determined by such groups . furthermore , twitter's distinction between different types of interactions allows us to establish a parallelism between online and offline social networks : personal interactions are more likely to occur on links internal to the groups ( the weakness of strong ties ) , while events transmitting new information go preferentially through links connecting different groups ( the strength of weak ties ) , or even more so through links connecting to users belonging to several groups that act as brokers ( the strength of intermediary ties ) .
the detection of compact signals ( sources ) embedded in a background is a recurrent problem in many fields of astronomy .some common examples are the separation of individual stars in a crowded optical image , the identification of local features ( lines ) in noisy one - dimensional spectra or the detection of faint extragalactic objects in microwave frequencies .the detection , identification and removal of the extragalactic point sources ( eps ) is fundamental for the study of the cosmic microwave background radiation ( cmb ) data ( franceschini et al .1989 , toffolatti et al . 1998 ,de zotti et al . 1999 ) . in particular, the contribution of eps is expected to be very relevant at the lowest and highest frequency channels of the future esa planck mission ( mandolesi et al .1998 , puget et al . 1998 ) .the heterogeneous nature of the eps that appear in cmb maps as well as their unknown spatial distribution make difficult to separate them from the other physical components ( cmb , galactic dust , synchrotron , etc ) by means of statistical component separation methods .techniques based on the use of linear filters , however , are well - suited for the task of detecting compact spikes on a background .several techniques based on different linear filters have been proposed in the literature : the mexican hat wavelet ( mhw , cayn et al .2000 , vielva et al. 2001a , b , 2003 ) , the classic _ matched _ filter ( mf , tegmark and oliveira - costa 1998 ) , the adaptive top hat filter ( chiang et al 2002 ) and the scale - adaptive filter ( saf , sanz et al . 2001 ) . a certain deal of controversy has appeared about which one , if any , of the previous filters is _ optimal _ for the detection of point sources in cmb data . in order to answer that question it is necessary to consider first a more fundamental issue , the concept of _ detection _ itself .the detection process can be posed as follows : given an observation , the problem is to _ decide _ whether or not a certain signal was present at the input of the receiver .the decision is not obvious since the observation is corrupted by a random process that we call ` noise ' or ` background ' .formally , the _ decision _ is performed by choosing between two complementary hypotheses : that the observed data is originated by the background alone ( _ null hypothesis _ ) , and the hypothesis that the observation corresponds to a combination of the background and the signal .to decide , the detector should use all the available information in terms of the probabilities of both hypotheses given the data . the _ decision device _ separates the space of all possible observations in two disjoint subspaces , and , so that if an observation the null hypothesis is accepted , and if the null hypothesis is rejected , that is , a source is ` detected ' ( is called the region of acceptance ) .hence , we will call any generic decision device of this type a _detector_. the simplest example of detector , and one that has been extensively used in astronomy , is _ thresholding _ : if the intensity of the field is above a given value ( e.g. 
5 ) , a detection of the signal is accepted , on the contrary one assumes that only background is present .thresholding has a number of advantages , among them the facts that it is straightforward and that it has a precise meaning in the case of gaussian backgrounds in the sense of controlling the probability of spurious detections .however , it does not use all the available information contained in the data to perform decisions .for example , the inclusion of spatial information ( such as the curvature ) could help to distinguish the sources from fluctuations in the background with similar scale but a different shape .a general detector that can use more information than simple thresholding is given by the neyman - pearson ( np ) decision rule : where is called the likelihood ratio , is the probability density function ( _ pdf _ ) associated to the null hypothesis ( i.e. there is no source ) whereas is the _ pdf _ corresponding to the alternative hypothesis ( i.e. there is a source ) . are a set of variables which are measured from the data . is an arbitrary constant , which defines the region of acceptance , and must be fixed using some criterion .for instance , one can adopt a scheme for object detection based on maxima .the procedure would consist on considering the intensity maxima of the image as candidates for compact sources and apply to each of them the np rule to decide whether they are true or spurious . for a 1d image, the ratio of probabilities would then correspond to the probability of having a maximum with a given intensity and curvature ( which are the variables in this case ) in the presence of background plus signal over the probability of having a maximum when only background is present . if this ratio is larger than a given value , the candidate is accepted as a detection , if not , it is rejected .unfortunately , in many cases the sources are very faint and this makes very difficult to detect them . in order to improve the performance of the detector ,a prior processing of the image could be useful .here is where _ filtering_ enters in scene .the role of filtering is to transform the data in such a way that a detector can perform better than before filtering .once the detector is fixed , it is interesting to compare the performance of different filters , which has been rarely considered in the literature . in a recent work , barreiro et al .( 2003 ) introduce a novel technique for the detection of sources based on the study of the number density of maxima for the case of a gaussian background in the presence or absence of a source . in order to define the region of acceptancethe neyman - pearson decision rule is used with _ pdf _s associated to the previous number densities and using the information of both the intensities and the curvatures of the peaks in a data set .in addition , is fixed by maximising the _ significance _ , which is the weighted difference between the probabilities of having and not having a source .in that work the performances of several filters ( saf , mf and mhw ) is compared in terms of their _ reliability _ , defined as the ratio between the number density of true detections over the number density of spurious detections . 
they find that , on the basis of this quantity , the choice of the optimal filter depends on the statistical properties of the background .however , the criterion chosen to fix based on the significance does not necessarily leads to the optimal reliability .therefore , if we are considering the reliability as the main criterion to compare filters , a different criterion for , based on number densities must be used . in a posterior article , vio et al .( 2004 ) , following the previous work , adopt the same neyman - pearson decision rule , based on the _ pdf _ s of maxima of the background and background plus source , to define the region of acceptance .however , they propose to find by fixing the number density of spurious detections and compare the performance of the filters based on the number density of true detections . in this case, the mf outperforms the other two filters .note that in these last two works different criteria have been used to fix , thus leading to different results . in the present work , our goal will also be to find an optimal filter that gives a maximum number density of detections fixing a certain number density of spurious sources . in order to define the detector , we will use a decision rule based on the neyman - pearson test .we will consider some standard filters ( mf , saf and mh ) introduced in the literature as well as a new filter that we call the biparametric scale adaptive filter ( bsaf ) . in all the filters appears in a natural way the scale of the source .we will modify such a scale introducing an extra parameter .in fact , it has been shown by lpez - caniego et al .( 2004 ) that the standard matched filter can be improved under certain conditions by filtering at a different scale from that of the source .the performance of the bsaf will be compared with the other filters .the overview of this paper is as follows . in section 2 ,we introduce two useful quantities : number of maxima in a gaussian background in the absence and presence of a local source . in section 3, we introduce the detection problem and define the region of acceptance . in section 4 , we introduce an estimator of the amplitude of the source that is proven to be unbiased and maximum efficient . in section 5 and 6 ,we obtain different analytical and numerical results regarding weak point sources and scale - free background spectra and compare the performance of the new filter with others used in the literature . in section 7 , we describe the simulations performed to test some theoretical aspects and give the main results and finally , in section 8 , we summarize the conclusions and applications of this paper .appendix a is a sketch to obtain a sufficient linear detector whereas we obtain the linear unbiased and maximum efficient estimator in appendix b.let us assume a 1d background ( e. g. one - dimensional scan on the celestial sphere or time ordered data set ) represented by a gaussian random field with average value and power spectrum : , where is the fourier transform of and is the 1d dirac distribution .the distribution of maxima was studied by rice ( 1954 ) in a pioneering article . the expected number density of maxima per intervals , and is given by being the expected total number density of maxima ( i.e. number of maxima per unit interval ) where and represent the normalized field and curvature , respectively . is the moment of order associated to the field . are the coherence scale of the field and maxima , respectively . 
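as a numerical illustration of these definitions , the spectral moments of a 1d gaussian field and rice's expected number of maxima per unit length can be evaluated for a given power spectrum . the sketch below assumes the convention that the moment of order n is the integral of q^{2n} times the power spectrum , and uses a power law with a gaussian cutoff purely as an example spectrum , so the printed number should not be read as one of the paper's results .

```python
# Numerical sketch: spectral moments of a 1D Gaussian random field and
# Rice's expected number of maxima per unit length, (1/2pi) * sigma_2/sigma_1.
# The power spectrum below (power law with a Gaussian cutoff mimicking a beam)
# is only an example; conventions and prefactors may differ from the paper's.
import numpy as np
from scipy.integrate import quad

def sigma_n(power_spectrum, n):
    """sigma_n^2 = integral over q of q^(2n) P(q); returns sigma_n."""
    val, _ = quad(lambda q: q ** (2 * n) * power_spectrum(q), 0.0, np.inf, limit=200)
    return np.sqrt(val)

if __name__ == "__main__":
    gamma, beam = 1.0, 1.0   # placeholder spectral index and cutoff scale
    P = lambda q: q ** (-gamma) * np.exp(-(q * beam) ** 2) if q > 0 else 0.0
    s1, s2 = sigma_n(P, 1), sigma_n(P, 2)
    print("expected maxima per unit length:", s2 / (2 * np.pi * s1))
```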
as an example, figure [ fig : fig1 ] shows the values of the ratio for the case ( a typical value for the backgrounds we are considering ) . in this case, the expected density of maxima has a peak around and , that is , most of the peaks appear at a relatively low threshold and curvature , and the density of peaks decreases quickly for extreme values of and . if the original field is linear - filtered with a circularly - symmetric filter , dependent on parameters ( defines a scaling whereas defines a translation ) we define the filtered field as then , the moment of order of the linearly - filtered field is being the power spectrum of the unfiltered field and the fourier transform of the circularly - symmetric linear filter .now , let us consider a position in the image where a gaussian source ( i.e. profile given by , where is the beam width ) is superimposed to the previous background . then , the expected number density of maxima per intervals , and , given a source of amplitude in such spatial interval , is given by ( barreiro et al .2003 ) where and , is the normalized amplitude of the source and is the normalized curvature of the filtered source .the last expression can be obtained as note that due to the statistical homogeneity and isotropy of the background , the previous equations are independent of the position of the source .we consider that the filter is normalized such that the amplitude of the source is the same after linear filtering : .we want to choose between different filters based on _ detection_. to make such a decision , we will focus on the following two fundamental quantities : a ) the number of spurious sources which emerge after the filtering and detection processes and b ) the number of real sources detected .as we will see in this section , these quantities are properties of the gaussian field and source that can be calculated from equations ( [ nbackground ] ) and ( [ nsource ] ) .as we will see , the previous properties are not only related to the snr gained in the filtering process but depend on the filtered momenta up to 4th - order ( in the 1d case ) , i.e. the amplification and the normalized curvature .let us consider a local peak in the 1d data set characterised by the normalized amplitude and curvature .let : n.d.f . represents the _ null _ hypothesis , i.e. the local number density of background maxima , and : n.d.f . represents the _ alternative _ hypothesis , i.e. the local number density of maxima when there is a compact source : in the previous equation , we have introduced a priori information about the probability distribution of the sources : we get the number density of source detections weighting with the a priori probability . to construct our detector , we will assume a neyman - pearson ( np ) decision rule using number densities instead of probabilities : where is a constant .the previous equation defines a region in , the so - called region of acceptance .therefore , the decision rule is expressed such that if the values of of the candidate maximum is inside ( i.e. ) we decide that the signal is present . on the contrary , if we decide that the signal is absent . 
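the decision rule just introduced can be sketched numerically without committing to the explicit form of the number densities : given the two densities as functions ( equations ( [ nbackground ] ) and ( [ nsource ] ) , or empirical estimates ) , one accepts a maximum when their ratio exceeds the constant , and fixes that constant by requiring a target number density of spurious detections . the densities used below are made - up placeholders , not the paper's formulas .

```python
# Schematic Neyman-Pearson decision on the (normalized intensity nu, curvature
# kappa) of a maximum: accept if n_source(nu, kappa) / n_background(nu, kappa)
# exceeds L*. L* is tuned so that the background density integrated over the
# acceptance region matches a target density of spurious detections.
import numpy as np

def spurious_density(n_b, l_star, ratio, nu, kappa):
    """Integrate n_b over the acceptance region {ratio >= L*} on a grid."""
    nn, kk = np.meshgrid(nu, kappa, indexing="ij")
    accept = ratio(nn, kk) >= l_star
    dnu, dkap = nu[1] - nu[0], kappa[1] - kappa[0]
    return np.sum(n_b(nn, kk) * accept) * dnu * dkap

def fix_l_star(n_b, ratio, target, nu, kappa, lo=1e-6, hi=1e6, iters=60):
    """Log-space bisection on L*: the spurious density decreases as L* grows."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if spurious_density(n_b, mid, ratio, nu, kappa) > target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

if __name__ == "__main__":
    # Placeholder densities: background maxima at low nu, source-shifted maxima.
    n_b = lambda v, k: 0.1 * k * np.exp(-0.5 * (v ** 2 + (k - 0.7 * v) ** 2))
    n_s = lambda v, k: 0.1 * k * np.exp(-0.5 * ((v - 3) ** 2 + (k - 0.7 * v) ** 2))
    ratio = lambda v, k: n_s(v, k) / n_b(v, k)
    nu = np.linspace(0.0, 8.0, 400)
    kappa = np.linspace(0.01, 8.0, 400)
    l_star = fix_l_star(n_b, ratio, target=1e-3, nu=nu, kappa=kappa)
    print(f"L* for the target spurious density: {l_star:.3g}")
```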
)is equivalent to the one defined by the usual neyman - pearson test in terms of probabilities where , are the _ pdf _ s associated to the number densities given by equations ( [ nbackground ] ) and ( [ eq : number_b+s ] ) and , in order to compare different filters , the constant must be found by fixing the number density of spurious sources in the region of acceptance instead of the _ false alarm _ probability . ]it can be proved that the previous region of acceptance is equivalent to the sufficient linear detector ( see appendix a ) where is a constant and is given by we remark that the assumed criterion for detection leads to a _ linear _ detector ( i.e. linear dependence on the threshold and curvature ) . moreover ,this detector is independent of the _ pdf _ of the source amplitudes . using this detector ,the expected number density of spurious sources and of true detections are given by we remark that in order to get the true number of real source detections such a number must be multiplied by the probability to have a source in a pixel in the data set .note that for a fixed number density of spurious sources , the np detector leads to the maximum number density of true detections .taking into account equations ( [ eq : r _ * ] ) to ( [ eq : ndet ] ) , one can find and for a gaussian background .after a straightforward calculation , the number density of spurious sources found using the np rule is given by : , \label{eq : nb*}\end{aligned}\ ] ] similarly , the number density of detections is obtained as : ^{- \frac{(1 - \rho^2)\varphi^2}{2{(1 - \rho y_s)}^2 } } , \label{eq : nb}\end{aligned}\ ] ] where signal has an unknown parameter , the amplitude , that has to be estimated from the data .we shall assume that the most probable value of the distribution gives an estimation of the amplitude of the source ( criterion for amplitude estimation ) .the result is given by the equation where the function is given by equation ( [ eq : phi ] ) .one can prove that the previous expression corresponds to a linear estimator that is unbiased and maximum efficient ( minimum variance ) , i.e. where denotes average value over realizations ( see appendix b ) .we will consider as an application the detection of compact sources characterised by a gaussian profile , and fourier transform , though the extension to other profiles will be considered in the future .such a profile is physically and astronomically interesting because it represents the convolution of a point source ( dirac distribution ) with a gaussian beam .the source profile above includes a `` natural scale '' that characterises the source .this is a fundamental scale that will appear in all the filters we will consider here . by construction ,the standard mf and saf operate on this scale , as well as the canonical mhw at the scale of the source .however , it has been shown that changing the scale at which the mhw and the mf filter the image can improve its performance in terms of detection ( vielva et al .2001a , lpez - caniego et al .following this idea , we will introduce another degree of freedom in all the filters that allows us to change their scale in a continuous way ( similarly to the scaling of a continuous wavelet ) .this degree of freedom is obtained by multiplying the scale by a new parameter .we will show that with this new parameter the improvement in the results is significant .the idea of a scale - adaptive filter ( or optimal pseudo - filter ) has been recently introduced by the authors ( sanz et al .2001 ) . 
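Whatever the filter, the decision rule described above reduces to a one-dimensional search for the threshold that fixes the density of spurious detections. The sketch below implements the Neyman-Pearson region directly as a likelihood-ratio cut on the two number densities of maxima; the Gaussian-shaped toy densities at the end are placeholders for the expressions given in the text, and the grid limits and bracketing interval are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

nu = np.linspace(-5.0, 10.0, 600)       # normalised threshold of the maxima
kappa = np.linspace(0.0, 10.0, 400)     # normalised curvature (maxima have kappa >= 0)
NU, KAP = np.meshgrid(nu, kappa, indexing="ij")
cell = (nu[1] - nu[0]) * (kappa[1] - kappa[0])

def neyman_pearson(n_b, n_s, target_spurious):
    """Acceptance region {n_s/n_b >= L*} with L* fixed by the spurious-source density."""
    b, s = n_b(NU, KAP), n_s(NU, KAP)
    ratio = s / np.maximum(b, 1e-300)

    def excess_spurious(L):
        return np.sum(b[ratio >= L]) * cell - target_spurious

    L_star = brentq(excess_spurious, 1e-6, 1e6)      # assumes the target is bracketed
    n_det = np.sum(s[ratio >= L_star]) * cell        # density of true detections
    return L_star, n_det

# toy, Gaussian-shaped stand-ins for the background and background-plus-source densities
n_b_toy = lambda v, k: 0.1 * np.exp(-0.5 * (v**2 + (k - 1.0)**2))
n_s_toy = lambda v, k: 0.1 * np.exp(-0.5 * ((v - 3.0)**2 + (k - 2.5)**2))
print(neyman_pearson(n_b_toy, n_s_toy, target_spurious=0.01))
```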
by introducing a circularly - symmetric filter , , we are going to express the conditions in order to obtain a scale - adaptive filter for the detection of the source at the origin taking into account the fact that the source is characterised by a single scale .the following conditions are assumed : , i.e. is an _ unbiased _ estimator of the amplitude of the source ; the variance of has a minimum at the scale , i.e. it is an _efficient _ estimator ; has a maximum with respect to the scale at .then , the filter satisfying these conditions is given by ( sanz et al .2001 ) , \nonumber\end{aligned}\ ] ] assuming a scale - free power spectrum , , and a gaussian profile for the source , the previous set of equations lead to the filter , \ \q\alpha r , \nonumber\end{aligned}\ ] ] where we have modified the scale as . in this casethe filter parameters and the curvature of the source are given by figure [ fig : fig3 ] shows the saf for two values of the spectral index .if one removes condition ( 3 ) defining the saf in the previous subsection , it is not difficult to find another type of filter after minimization of the variance ( condition ( 2 ) ) with the constraint ( 1 ) this will be called _ matched _ filter as is usual in the literature .note that in general the matched and adaptive filters are different . for the case of a gaussian profile for the source and a scale - free power spectrum given by , the previous formula leads to the following modified matched filter where and is given by equation ( [ eq : saf_m ] ) and we have included the scale parameter .figure [ fig : fig3 ] shows the mf for the case ( standard mf ) and values of the spectral index .we remark that for the scale - adaptive filter and the matched filter coincide , and for ( not shown in the figure ) , the matched filter and the mexican hat wavelet are equal . for the mfthe parameters and the curvature of the source are given by we remark that the linear detector is reduced to for the standard matched filter ( ) .i.e. curvature does not affect the region of acceptance for such a filter .the mh is defined to be proportional to the laplacian of the gaussian function : thus , in fourier space in this case the filter parameters and the curvature of the source are given by the generalization of this type of wavelet for two dimensions has been extensively used for point source detection in 2d images ( cayn et al .2000 , vielva et al .2001 , 2003 ) . as for the previous filter ,the mh is modified by including the scale parameter in the form for the mh the parameters and the curvature of the source are given by figure [ fig : fig3 ] shows the mh for different values of the spectral index . if one removes condition ( 3 ) defining the saf in subsection 5.1.1 and introduces the condition that has a spatial maximum in the filtered image at , i.e. , it is not difficult to find another type of filter where is an arbitrary constant that can be related to the curvature of the maximum .we remark that the constraint is automatically satisfied for any circularly - symmetric filter if the source profile has a maximum at the origin . 
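For the scale-free spectra and Gaussian profiles considered here, the matched filter and the Mexican hat take simple closed forms in Fourier space once the extra scaling parameter is inserted; a minimal sketch follows. The overall normalisation is left to the amplitude-preserving constraint applied elsewhere, so only the shapes matter; writing the SAF the same way requires the bracketed expression from the text, which is not reproduced here.

```python
import numpy as np

def matched_filter(q, gamma, R, alpha=1.0):
    """Matched filter psi ~ tau/P for P(q) ~ q^-gamma and a Gaussian profile,
    evaluated at the modified scale alpha*R (shape only; normalisation applied later)."""
    return q**gamma * np.exp(-0.5 * (q * alpha * R)**2)

def mexican_hat(q, R, alpha=1.0):
    """Mexican hat (Laplacian of a Gaussian) at scale alpha*R in Fourier space."""
    return (q * alpha * R)**2 * np.exp(-0.5 * (q * alpha * R)**2)

# for gamma = 2 the two shapes are proportional, consistent with the remark in the text
q = np.linspace(0.01, 10.0, 1000)
print(np.allclose(matched_filter(q, 2.0, 1.0, 0.7) * (0.7 * 1.0)**2,
                  mexican_hat(q, 1.0, 0.7)))
```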
for the case of a scale - free power spectrum ,the filter is given by the parametrized equation where we have modified the scale as .hereinafter , we will call this new filter containing two arbitrary parameters , and , the biparametric scale - adaptive filter ( bsaf ) .a calculation of the different moments leads to where m and t are defined in equation ( [ eq : saf_m ] ) and and are given by \gamma ( m),\ ] ] \delta^m \gamma ( m).\ ] ] note that the bsaf contains all the other considered filters as particular cases : the mf is recovered for , when the bsaf defaults to the saf and , finally , the mh wavelet is obtained in the two cases : , and , .we will test two different _ pdf _ : a uniform distribution in the interval and a scale - free distribution with a lower and upper cut - off .in particular , we will especially focus on values for the cut - off s that lead to distributions dominated by weak sources .it is in this regime where sophisticated detection methods are needed , since bright sources can be easily detected with simple techniques . in this case ,.\ ] ] this allows us to obtain ,\ ] ] in general , we will consider a cut - off in the amplitude of the sources such that after filtering with he standard mf . note that this correspond to different thresholds for the rest of the filters . in this case , ,\ \\beta \neq 1,\ ] ] where the normalization constant n and are }.\\\ ] ] in general , we will consider and , after filtering with the standard mf and the corresponding thresholds for the other filters .for a fixed number density of spurious sources , we want to find the optimal filter that produces the maximum number density of true detections for different spectral indices ( ) , values of and point source distributions .in order to do this , we first obtain implicitly the value of from equation ( [ eq : nb * ] ) ( for a fixed value of ) and then substitute it in equation ( [ eq : nb ] ) to calculate .we consider two different distributions of sources to test the robustness of the method : a uniform distribution and a scale - free distribution .given that bright point sources are relatively easy to detect , we mainly concentrate on the more interesting case of weak sources . in any case, we also mention some results for distributions containing bright sources .we remark that the bsaf has an additional degree of freedom , the parameter , as it appears in equation ( [ eq : eqnndf ] ) .note that the bsaf and the saf are not the same filter .the parameter in the bsaf can take any positive or negative value , while the coefficient , for the saf , is a known function of . by construction , the bsaf always outperforms the mf and saf or , in the worst case , defaults to the best of them . as a first case, we consider a uniform distribution of sources with amplitudes in the interval \sigma_0 ] .thus , the corresponding upper limit for in the original ( unfiltered ) map is below 2 , what means that we are considering the detection of weak sources . as a reference example , in figure [ fig : numdet_alfa_unif_g0 ] , we plot , the number density of detections , as a function of for the case , and , where is given in pixel units . 
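The last ingredient is the two-parameter search itself. The BSAF parametrisation written below is an assumption chosen only to reproduce the stated limits (c = 0 recovers the matched filter, and suitable limits give the MH); the published expression may differ in detail. The optimisation is a plain grid search over (alpha, c), where n_det_fn is assumed to chain together the filtered moments and the Neyman-Pearson threshold sketched earlier.

```python
import numpy as np

def bsaf(q, gamma, R, c=0.0, alpha=1.0):
    """Biparametric scale-adaptive filter (assumed parametrisation, shape only):
    psi ~ q^gamma (1 + c u^2) exp(-u^2/2) with u = q*alpha*R, so c = 0 is the MF."""
    u = q * alpha * R
    return q**gamma * (1.0 + c * u**2) * np.exp(-0.5 * u**2)

def optimise_bsaf(n_det_fn, alphas, cs):
    """Grid search of the (alpha, c) plane for the maximum density of true detections
    at fixed spurious-source density; by construction the optimum cannot be worse
    than the MF (c = 0) or any other filter contained in the family."""
    best = (-np.inf, None, None)
    for a in alphas:
        for c in cs:
            nd = n_det_fn(a, c)
            if nd > best[0]:
                best = (nd, a, c)
    return best

# usage sketch; alpha is kept above the pixel-scale limit discussed in the text
# best = optimise_bsaf(n_det_fn, alphas=np.linspace(0.4, 1.5, 23), cs=np.linspace(-2.0, 2.0, 41))
```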
for completeness, the theoretical values of are given , in this figure , for values of down to zero ( note that when ) .however , from a practical point of view , we do not expect the theoretical results to reproduce the values obtained for a pixelized image when filtering at small scales ( since the effect of the pixel is not taken into account ) .therefore , hereinafter , we will only consider those results obtained when filtering at scales larger ( or of the order ) of the pixel size , which corresponds to . taking into account this constraint , the best results are obtained for for the bsaf , that clearly outperforms the standard mf ( i.e. , ) with an improvement of the in .if we compare with the mf at , the improvement is of . in figure[ fig : numdet_alfa_unif_g05 ] , we give the same results for the case . in this case , the bsaf at improves again significantly the standard mf , with an increase in the number density of detections of . as increases , the improvement of the bsaf with respect to the standard mf decreases .in fact , for values of they produce very similar results . as an example , we give the number of detections achieved for each filter for the case , and in fig . [fig : unif_g1p5 ] .it can be seen that the maximum number of detections is approximately found for the standard mf .however , we would like to point out that the saf and mh wavelet at the optimal scale give approximately the same number of detections as the standard mf .these results show the importance of filtering at scales instead of the usual scale of the source .this can also be seen in fig .[ fig : unif_gamma ] , that summarizes how the relative performance of the considered filters with respect to the standard mf changes with the spectral index ( again for and ) . for each filter ,the results are given for the optimal scale ( and parameter in the case of bsaf ) .the improvement of the bsaf with respect to the mf ranges from ( for white noise ) to zero ( for the largest values of .we would also like to point out that the mh at the optimal scale performs similarly to the standard mf .in addition , the mh has an analytical expression which makes it very robust and easy to implement .therefore , it can be a useful filter in some practical cases .we have also explored how the previous results change when varying and . in particular , we have considered vaules of in the interval 0.01 - 0.05 , and and values of .the results are summarized in table [ tab : tabla1_n ] for the bsaf and the standard mf ( we present only those cases where the bsaf improves at least a few per cent the standard mf ) .the values of and for the bsaf are found as the ones that maximise in each case .. number density of detections for the standard mf ( ) and the bsaf with optimal values of c and .rd means relative difference of number densities in percentage : .[tab : tabla1_n ] [ cols="^,^,^,^,^,^,^,^",options="header " , ] we have also explored how the results depend on the values of and . in table[ tab : tabla2_n ] , we show the number density of detections for the bsaf and for the standard mf ( ) for and , with ranging from 0.01 to 0.05 , and for values of ( we only include the results for those cases where the relative difference between the bsaf and standard mf is at least a few per cent ) .we also give the optimal values of and where the bsaf performs better ( taking into account the constraint ) . 
as for the previous case of the uniform distribution , the relative performance of the bsaf improves when increasing and .it is also interesting to consider other values of the parameter .for instance ] ( i.e. , a mixture of weak and bright sources ) for .we find , for the reference case ( , , ) , that the bsaf improves the standard mf around a , with optimal parameters and .we would like to point out that for for a given set of , and , this distribution of weak and bright sources leads to very similar optimal parameters for the bsaf as the scale - free distribution of weak sources .in addition , we have also tested the performance of the filters for a scale - free distribution of bright sources with \sigma_0 ] in the same arbitrary units of the background .the images filtered with the standard mf ( ) for this scale ( ) have dispersion .thus , the sources are distributed in the interval ] .the parameters used for these simulations are , and .the optimal filter parameters have been chosen at each case .the points and the error bars are calculated as the average and the dispersion of the detected sources that fall in each of the amplitude bins from a total of 10000 detected sources .we find a similar positive bias in the determination of the amplitude for the bsaf ( , ) and mf ( ) .however , the error bars corresponding to the bsaf are slightly smaller than those of the mf . in the right panels ,we give the results for a uniform distribution of sources with \sigma_{0} ] and a scale - free power law distribution in the interval ] and for a scale - free distribution with ] , i.e. , dominated by bright sources , we find that the optimal bsaf defaults to the standard mf , which gives the maximum number of detections in this case .we find that the bsaf gives in any case the best performance among the considered filters . indeed ,the saf and the mf are particular cases of the bsaf and the strategy we follow , i.e. maximization of the detections , guarantees that the parameters of the bsaf will default to the best possible of these filters in each case .in addition , we also find that the bsaf performs at least as well as the mh in all the considered cases .therefore , the number density of detections obtained with the bsaf will be at least equal to the best of the other three filters , and in certain cases superior .however , in some other cases , the gain is small and it is justified to use an analitically simpler filter .our results suggest that for power law spectra , from the practical point of view , one could use the bsaf when since , in this range , clearly improves the number of detections with respect to the other filters .however , for the usage of the mh is justified due to its robustness ( since it has an analytical form ) and it gives approximately the same number of detections obtained either with the bsaf or mf .for all the studied cases of source distributions ( except for the one dominated by bright sources ) and fixing the values of , and , we find that the optimal parameters of the bsaf are only weakly dependent on the distribution of the sources .we have done some simple tests in order to study the robustness of the method when the knowledge about the source _ pdf _ or the background spectral index is not perfect . 
we find that the values of the optimal filter parameters vary slightly when we assume that the source distribution is uniform when , in reality , it is scale - free and vice versa .the uncertainties in the cut - off values of the source _ pdf _ affect the number of detections , but in a similar way for all the filters , and therefore the relative behaviour of the filters do not change .errors in the estimation of the spectral index reduces the efectiveness of the bsaf , but it still outperforms the other filters .all of this indicates that our detection scheme is robust against uncertainties in the knowledge of the distribution of the sources and spectral index . to test the validity of our results in a practical example , we have tested our ideas with simulations for the uniform distribution ( using our reference case , , ) and find that the results follow approximately the expected theoretical valuesregarding source estimation , we propose a linear estimator which is unbiased and of maximum efficiency , that we have also tested with simulations .the ideas presented in this paper can be generalized : application to other profiles ( e.g. multiquadrics , exponential ) and non - gaussian backgrounds is physically and astronomically interesting .the extension to include several images ( multi - frequency ) is relevant .the generalization to two - dimensional data sets ( flat maps and the sphere ) and nd images is also very interesting .finally the application of our method to other fields is without any doubt .we are currently doing research in some of these topics .the authors thank enrique martnez - gonzlez and patricio vielva for useful discussions .mlc thanks the ministerio de ciencia y tecnologa ( mcyt ) for a predoctoral fpi fellowship .rbb thanks the mcyt and the universidad de cantabria for a ramn y cajal contract .dh acknowledges support from the european community s human potential programme under contract hprn - ct-2000 - 00124 , cmbnet .we acknowledge partial support from the spanish mcyt project esp2002 - 04141-c03 - 01 and from the eu research training network ` cosmic microwave background in europe for theory and data analysis ' .de zotti , g. , toffolatti , l. , argeso , f. , davies , r.d . , mazzotta , p. , partridge , r.b ., smoot g.f . &vittorio , n. , 1999 , 3k cosmology , proceedings of the ec - tmr conference held in rome , italy , october , 1998 .woodbury , n.y .: american institute of physics , vol .476 , 204 . the ratio can be explicitly written as and taking into account the np criterion for detection , we find where is a constant . by differentiating the previous equation with respect to therefore , is equivalent to , where is a constant , i.e. given by equation ( [ eq : phi ] ) is a sufficient linear detector .let us assume a linear estimator combination of the normalized amplitude and normalized curvature with the constraint if the estimator is unbiased , i.e. , taking into account that and , we obtain the constraint on the other hand , the variance is given by where we have taken into account that . by minimizing the previous expression with the constraint ( [ eq : constraint_appc ] ) , one obtains therefore , one obtains :
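The appendix-B construction is the standard best-linear-unbiased-estimator argument, which can be written compactly once the response of the maximum to the source amplitude and the covariance of the (nu, kappa) fluctuations are known. The sketch below is generic: the particular response vector and covariance appropriate to the filtered field (built from rho, the amplification and the source curvature) must be taken from the equations in the text, and the toy numbers used in the check are arbitrary.

```python
import numpy as np

def blue_amplitude(d, f, C, m0=None):
    """Unbiased, minimum-variance linear estimate of the amplitude from d = (nu, kappa),
    assuming E[d | A] = m0 + A*f with fluctuation covariance C (generalised least squares)."""
    d = np.asarray(d, float)
    f = np.asarray(f, float)
    if m0 is not None:
        d = d - np.asarray(m0, float)
    Cinv = np.linalg.inv(np.asarray(C, float))
    var = 1.0 / (f @ Cinv @ f)          # variance of the estimator
    return var * (f @ Cinv @ d), var

# toy check of unbiasedness and efficiency with arbitrary response and covariance
rng = np.random.default_rng(0)
f_true = np.array([1.0, 0.7])
C = np.array([[1.0, 0.6], [0.6, 1.0]])
A = 3.0
d = rng.multivariate_normal(A * f_true, C, size=20_000)
est = np.array([blue_amplitude(x, f_true, C)[0] for x in d])
print(est.mean(), est.std(), np.sqrt(blue_amplitude(d[0], f_true, C)[1]))
```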
this paper considers the problem of compact source detection on a gaussian background . we make a one - dimensional treatment ( though a generalization to two or more dimensions is possible ) . two relevant aspects of this problem are considered : the design of the detector and the filtering of the data . our detection scheme is based on local maxima and takes into account not only the amplitude but also the curvature of the maxima . a neyman - pearson test is used to define the region of acceptance , which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources . we study how detection can be enhanced by means of linear filters with a scaling parameter and compare some of those that have been proposed in the literature ( the mexican hat wavelet , the matched and the scale - adaptive filters ) . we introduce a new filter that depends on two free parameters ( the biparametric scale - adaptive filter ) . the values of these two parameters can be determined , given the a priori _ pdf _ of the amplitudes of the sources , such that the filter optimizes the performance of the detector in the sense that it gives the maximum number of real detections once the number density of spurious sources is fixed . the new filter includes the standard matched filter and the scale - adaptive filter as particular cases ; by construction , the biparametric scale - adaptive filter therefore outperforms these filters . the combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters ( one of them a scaling ) significantly improves the number of detections in some interesting cases . in particular , for the case of weak sources embedded in white noise the improvement with respect to the standard matched filter is of the order of . finally , an estimator of the amplitude of the source ( its most probable value ) is introduced and it is proven that such an estimator is unbiased and has maximum efficiency . we perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical ones . methods : analytical - methods : data analysis - techniques : image processing
searching and staying up - to - date with scholarly literature is an essential part of scientific research . with the advent of the world - wide web ( www ) and the evolution of electronic publishing , a powerful environmentwas created to open the vast universe of scientific literature on a world - wide scale . in early 1994the www had become sophisticated enough to allow the search of electronic resources via `` web forms '' . over the past decade , this environment matured into an unavoidable and indispensable fact of life and tools have emerged in it that have become a crucial ingredient in scientific research .being able to search vast amounts of data electronically obviously facilitates the review process , but applying advanced technologies such as pattern recognition to the electronic data or by allowing nested searches , one is able to produce results that are unattainable in conventional , non - electronic ways . in that senseone can argue that the available electronic search tools on the www even further scientific research . using a straight - forward search engine ( google , yahoo , msn , altavista , ... ) results in thousands of documents , ranked by some sophisticated algorithm .even with the advanced versions of these tools , we still find ourselves awash in information . to search the electronic , scholarly literature , scientists need to be able to zoom in on bibliographic data using additional descriptors and search logic .what scholarly tools are available for specialists in astronomy and ( astro)physics ? the principal bibliographic services are the nasa astrophysics data system ( nasa ads ) , google scholar , inspec and the astronomy and astrophysics abstracts .important additional resources are the science citation index ( isi web of science ) , scopus and zetoc .although these tools allow researchers to zoom in on the scholarly literature , they do not offer additional tools to determine the most popular or most cited papers in a given subject . especially for staying up - to - date ,it is essential to be notified of the most popular and most cited papers .late 2003 , the ads introduced the _ myads _service , a fully customizable newspaper covering ( journal ) research for astronomy , physics and/or the arxiv e - prints .this service will give the user an overview of the most recent papers by his / her favorite authors , and the most recent , most cited and most popular papers in a particular subject area .additionally , the user will see an overview of citations to his / her papers . between a fifth and a quarter of allworking astronomers already subscribe to myads ._ myads - arxiv _ is a fully customizable , open access , virtual journal , covering the most important papers of the past week in physics and astronomy . in other words , for the specialist, the myads - arxiv service provides a one stop shop for staying up - to - date in his / her field of interest .myads - arxiv is based on the existing services of the nasa astrophysics data system ( see and wikipedia ) and the arxiv e - print repository ( see and ) .the ads repository is completely synchronized with the arxiv e - prints system .each night ads bibliographic records are created for all the e - prints that were newly added . the references are also extracted from the e - prints and matched against existing records in the ads . thus we add lists of references to the bibliographic records and we use these references to maintain citation statistics . 
both of these elements are used in the myads - arxiv service .the service also uses readership information from both the ads and the arxiv to compute the most popular papers in a subject area . by this continuous influx of information ,myads - arxiv provides the subscriber with a service that is as up - to - date as your morning newspaper .the service provides a weekly overview , which offers the unique view on what is happening in a field , and a daily notification . why is this view unique ? it is * fully customizable * because you specify the queries that determine the results .myads - arxiv is an * open access * service , because it is _ totally free _( no subscription costs ) .furthermore , it is a * virtual journal * , because the overview is a regularly appearing collection of scholarly papers , that are only available in electronic format ( the vast majority of e - prints have not yet been published as journal papers ) .last but not least , myads - arxiv covers the * most important * papers of the past week .this follows from concordance between e - printed and published papers , and citation statistics : in astronomy and physics , the most important papers are submitted as e - prints first .the url for the service is * http://myads.harvard.edu*. from here , you can set up your account and specify the queries for the myads - arxiv service .a maximum of 2 subject queries and 1 author query can be specified .the results page is like an an automatically generated newsletter . for each of your subject queries, you will get an overview of the `` recent '' , `` most popular '' and `` most cited '' papers .the `` recent '' papers are the newly ( i.e. since the previous query ) added entries in the e - print database that match your query .the `` most popular '' papers are found by looking at the also - read statistics for the top 100 of all papers that match your query and the `` most cited '' papers are obtained from the reference lists of the papers that match your query , published in the previous three months .the link to this newsletter is public , so it can be shared with colleagues .there is also a daily alerting service , showing you the latest e - prints in the categories of your choice , sorted according to a query specified by you ( or e - print number if you did not specify one ) .entries that match you query are preceded by an asterisk .why is myads - arxiv so unique and powerful ? it is because of the combination of the following two factors : the search capabilities of the myads - arxiv service guarantees the proper selection of bibliographic records for your queries , and the quality of the e - prints guarantees the relevance and importance of the bibliographic records .the unique machinery that powers the queries consists of _ reference resolving _ ( associating reference strings with existing bibliographic records , see ) , _ bibcode matching _ ( setting up the e - print / paper concordance ) , _ second order operators _ ( operations on results lists ) and _ also - read statistics _ ( `` people who read paper x , also read paper y '' , see ) . by associating reference lists in newly added papers with already existing records , we keep citation statistics up - to - date , which allows us to generate lists of `` most cited papers '' . 
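The "also-read" machinery can be illustrated in a few lines. The data model below (sets of papers read in the same session) and the scoring rule are hypothetical simplifications used only to show the idea behind the "most popular papers" list; the actual ADS implementation aggregates readership logs in a more elaborate way, and the identifiers are made up.

```python
from collections import Counter

def also_read_ranking(sessions, query_matches, top_n=10):
    """Rank papers by how often they are co-read with papers matching a subject query.

    sessions      : iterable of sets of paper identifiers read together by one user
    query_matches : papers returned by the subject query (e.g. its top 100 hits)
    Toy sketch only; the data model and scoring rule are hypothetical.
    """
    matches = set(query_matches)
    score = Counter()
    for read in sessions:
        read = set(read)
        if read & matches:              # the session touched the query topic
            score.update(read - matches)
    return score.most_common(top_n)

sessions = [{"astro-ph/0501001", "astro-ph/0502002"},
            {"astro-ph/0502002", "astro-ph/0503003", "astro-ph/0501001"},
            {"hep-ph/0504004"}]
print(also_read_ranking(sessions, {"astro-ph/0501001"}))
```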
with the reads statistics, we can construct a list of `` most popular papers '' , using the second order operators .the bibcode matching procedure is only relevant in cases where the preprint already appeared in a journal .titles , abstracts and author lists are indexed , so that they are available for searching .the quality of the e - prints is reflected in the observation : the most important papers in astronomy and physics appear as e - prints on the arxiv .this fact is illustrated by figure 1 ( left ) , which shows for a number of important astronomy and physics journals , the fraction of e - printed papers for the top 100 most cited papers , during the period of 1992 through 2005 . over 90%appears as e - print first ( for month .notices ras and nucl .b , it is even 100% ) .just the effect of primacy through early access is not enough to explain the fraction of e - printed papers in the top 100 most cited papers ( at a given moment ) . according to is an effect called `` self - selection bias '' that results in a further increase of citation rates for e - printed papers .`` because papers in the arxiv are not refereed ... this suggests that authors self - censor or self - promote , or that for some reason the most citable authors are also those who first use the new publication venue '' .the papers are not cited more because they are read more , they are cited more and read more because of their quality .figure 1 ( right ) illustrates the fact that e - printed papers are cited and read more than papers that did not appear as e - print .the rule `` better searches give better results '' most definitely applies to the myads - arxiv service .based on the sophisticated search capabilities of the ads , myads - arxiv will provide the most powerful results for those who are able to characterize a field in a couple of keywords and key phrases .additionally , the use of `` simple logic '' allows a user to really zoom in on a specific research field .the myads - arxiv service is unique in the world of electronic libraries and publishing .no other electronic newsletter or alerting service produces a view on the scholarly physics and astronomy literaturei as comprehensive as this service . from the harvard - smithsonian center for astrophysics press release in spaceref.ca ( article number 16658 ) on the myads - arxiv service : `` it s the best thing since two pieces of sliced bread were assembled to make a sandwich , '' said paul ginsparg , professor of physics and information science at cornell university ( april 17 , 2005 ) .ginsparg , p. ( 2001 ) .`` creating a global knowledge network '' , in electronic publishing in science ii .proceedings of joint icsu press / unesco conference , paris , france .available at : http://arxiv.org/blurb/pg01unesco.html kurtz , michael j. , eichhorn , guenther , accomazzi , alberto , grant , carolyn , demleitner , markus , henneken , edwin , murray , stephen s. ( 2005b ) .`` the effect of use and access on citations '' , information processing and management , vol .41 , issue 6 , p. 1395 - 1402
the myads - arxiv service provides the scientific community with a one - stop shop for staying up - to - date with a researcher 's field of interest . the service provides a powerful and unique filter on the enormous amount of bibliographic information added to the ads on a daily basis . it also provides a complete view of the most relevant papers available in the subscriber 's field of interest . with this service , the subscriber will get to know the latest developments , popular trends and the most important papers . this makes the service unique not only from a technical point of view , but also from a content point of view . on this poster we argue why myads - arxiv is a tailor - made , open access , virtual journal and illustrate its unique character . the ads is funded by nasa grant nng06gg68 g .
structural multi - factor _ economic capital _ ( ec ) models derived from the creditmetrics framework have become the most widely adopted tools for risk quantification in credit portfolios .an outcome of these models , a portfolio ec and its allocation down to individual facilities , is used by financial institutions for any or all of the following : internal capital adequacy assessment , external reporting , risk - based pricing , performance management , acquisition / divestiture analyses , stress - testing and scenario analysis , etc . while in most cases monte carlo simulations are used due to limited analytical tractability of the multi - factor models , the recently reported advanced analytical techniques may be viewed as an alternative . unfortunately , neither the industry standard simulation - based approach nor the existing analytical techniques can fully address the needs of the financial institutions .in particular , the risk - based real - time pricing remains the ultimate challenge : none of the existing models is capable of providing sufficiently accurate , stable and time - efficient input . yet another practical aspect which has not received enough attention in the literatureis the sometimes overly complex structure of the models as perceived by end users .very often the complexity of the models makes them hard to be understood and , hence , affects their acceptance within an organization .the approach presented here aims to overcome the above mentioned difficulties and is in its spirit similar to the one reported by .the content , however , is quite different since the presented model has more solid theoretical background , is easier to implement and use and is capable of covering fully featured multi - factor setup .the model described here was developed with kiss principle in mind . while based on the previous author s research on the analytical tractability of multi - factor models , the proposed model has very simple and intuitive structure . despite the simple structure , the model produces meaningful and reasonably accurate results andcan be used by financial institutions for any of the purposes described above . in particular, the problem of capital allocation has a simple and time - efficient solution allowing real - time risk - based pricing . from conceptual point of view ,one of the most attractive features of the model is its ability to quantify risk concentrations on both sector and obligor levels in a similar fashion .this article is organized as follows .a short description of structural multi - factor model and the necessary theoretical background are given in section [ sec : background ] .mathematical foundations of the proposed model are presented in section [ sec : kiss ] and are substantiated by benchmarking with monte carlo simulations in section [ sec : benchmarking ] .section [ sec : summary ] contains some concluding remarks and summarizes the presentation .let us consider a portfolio of credit risky facilities with loss functions at horizon ( one year ) being a function of random variables ( normalized asset returns ) .dependencies within the portfolio are modeled by means of a set of common factors : the random variables are independent and standard normally distributed . 
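Before turning to the analytical machinery, it is useful to fix a concrete baseline. The sketch below is the standard Monte Carlo benchmark for this class of models: a Gaussian-copula, default-only specification with the usual unit-variance normalisation of asset returns. All portfolio parameters (weights, default probabilities, losses given default and factor loadings) are made up for illustration; economic capital is then read off as the loss quantile in excess of the expected loss.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# hypothetical portfolio of M facilities driven by K common factors
M, K = 200, 3
w = rng.uniform(0.1, 2.0, M); w /= w.sum()          # exposure weights
pd_ = rng.uniform(0.002, 0.05, M)                   # default probabilities
lgd = rng.uniform(0.3, 0.7, M)                      # losses given default
beta = rng.uniform(0.1, 0.5, (M, K))                # systematic sensitivities
idio = np.sqrt(1.0 - np.sum(beta**2, axis=1))       # idiosyncratic weights (normalisation)
threshold = norm.ppf(pd_)                           # default barrier on the asset return

def simulate_losses(n_sims=200_000, batch=20_000):
    losses = []
    for start in range(0, n_sims, batch):
        n = min(batch, n_sims - start)
        Z = rng.standard_normal((n, K))              # common factors
        eps = rng.standard_normal((n, M))            # idiosyncratic components
        X = Z @ beta.T + eps * idio                  # normalised asset returns
        losses.append((X < threshold) @ (w * lgd))   # default-only portfolio loss
    return np.concatenate(losses)

L = simulate_losses()
conf = 0.999
print("EC =", np.quantile(L, conf) - L.mean())       # economic capital = quantile - expected loss
```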
the instrument specific and define the systematic sensitivities of the instruments .the latter are subject to normalization condition .the idiosyncratic risk components are represented by .the economic capital of the portfolio is defined as na - quantile ( usually set to 99.9% or higher ) of the portfolio loss distribution relative to the expected loss of the portfolio : - \text{e}[l]\end{aligned}\ ] ] the above quantifies the overall portfolio risk which can be consistently distributed between the underlying facilities using the euler principle as ( * ? ? ?* see e.g. ) : where is a weight of the asset in the portfolio . to simplify the notations, these weights will not be written explicitly in what follows .no closed form analytical solution exists for either portfolio ec or its allocation in general case .however , in a single - factor case , i.e. one common factor and for any , the portfolio loss distribution quantile can be trivially found to be is an invertible function . ]=l_{1f}(\eta_{1f}=n^{-1}(\alpha)) ] which can be expressed as : where the inner vector product stands for and are hermite polynomials .the series converge very well provided values of are not too close to 1 ( which is the case in practice ) and allow for very fast ( re)calculations of the conditional expectations once the constants have been computed .the above technique is particularly useful when considering arbitrary loss functions .as long as credit portfolio modeling is concerned , it became a common practice to distinguish the unsystematic and the idiosyncratic risk components .the usual assumption is that the former drive the portfolio risk dynamics while the latter only give minor contributions .however , the two sets of random variables do not differ from mathematical point of view .in fact , one can not draw a clear line between the systematic and idiosyncratic components using practical considerations either .indeed , imagine that the portfolio contains a single relatively big exposure or a set of exposures corresponding to the same borrower and , hence , sharing the same idiosyncratic random variable .depending on the size of this exposure(s ) , the risk brought to the portfolio by may be higher than the one originating from some or even all of the systematic factors .big enough exposure will eventually dominate the portfolio dynamics even if its sensitivity to the systematic factors is zero . introducing credit contagion effects by assigning more than one overlapping idiosyncratic factors to a group of dependent borrowers makes it even harder to make a distinction between the systematic and idiosyncratic factors .treating the systematic and the idiosyncratic risk equally not only simplifies the model structure , but also allows straightforward incorporation of the borrower concentration effects into the portfolio risk metrics .the notations used so far can be generalized as follows . for common factors and the portfolio consisting of borrowers let us introduce the single factor approximation of the portfolio losscan be written as unification of the systematic and idiosyncratic risk factors results in idiosyncratic factors being incorporated in which , as will be shown , defines the portfolio risk dynamics .thus , the idiosyncratic risk is accounted for in the same fashion as the systematic one . as was mentioned before, analytical tractability of the single factor case is the starting point for approaching the more general multi - factor setup . 
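The Hermite-series device for the conditional expectations is easy to reproduce. The sketch below uses the probabilists' Hermite polynomials and the identity E[He_k(x) | Y] = r^k He_k(Y) for x = r Y + sqrt(1 - r^2) xi; the coefficients are computed once by Gauss-Hermite quadrature and can then be reused for any factor value and loading, which is what makes the repeated recalculations cheap. The smoothed default-indicator loss used in the check is an illustrative choice with a known closed form.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.stats import norm

def hermite_coefficients(loss_fn, n_terms=15, n_quad=200):
    """c_k such that loss(x) ~ sum_k c_k He_k(x) for x ~ N(0,1) (probabilists' convention)."""
    x, wq = hermegauss(n_quad)                 # weight exp(-x^2/2); weights sum to sqrt(2*pi)
    wq = wq / np.sqrt(2.0 * np.pi)             # convert to a standard-normal expectation
    lx = loss_fn(x)
    coeffs, fact = [], 1.0
    for k in range(n_terms):
        if k > 0:
            fact *= k                          # running k!
        He_k = hermeval(x, np.eye(n_terms)[k]) # He_k evaluated at the quadrature nodes
        coeffs.append(np.sum(wq * lx * He_k) / fact)
    return np.array(coeffs)

def conditional_loss(coeffs, r, y):
    """E[loss(x) | Y = y] for x = r*Y + sqrt(1-r^2)*xi, via E[He_k(x) | Y] = r^k He_k(Y)."""
    return hermeval(y, coeffs * r ** np.arange(len(coeffs)))

# check against a closed form: smoothed default indicator loss(x) = LGD * Phi((t - x)/s)
LGD, t, s, r = 0.45, norm.ppf(0.01), 0.3, 0.4
coeffs = hermite_coefficients(lambda x: LGD * norm.cdf((t - x) / s))
y = norm.ppf(0.001)                            # adverse factor realisation
print(conditional_loss(coeffs, r, y))
print(LGD * norm.cdf((t - r * y) / np.sqrt(s**2 + 1.0 - r**2)))   # exact value, should agree closely
```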
starting with a single factor approximation and calculating multi - factor ( including idiosyncratic ) adjustments as in, one can in principle calculate the portfolio economic capital .this approach , however , suffers from some difficulties .first , the choice of is not obvious , yet a very important first step .next , calculations of the multi - factor corrections may be quite laborious and hardware demanding . finally ,as will be demonstrated later , the multi - factor corrections are not guaranteed to be convergent . instead of trying to overcome the difficulties associated with the multi - factor adjustments calculations ,let us put all the efforts into constructing the single - factor approximation .some factor should exist which maximizes the relative contribution of the single - factor approximation in and , thus , diminishing the relative importance of the multi - factor corrections .assuming that the multi - factor corrections give positive contribution to the -quantile of the portfolio loss distribution , the optimization problem reduces to maximization of the contribution from the single - factor approximation : \approx \max_{1f } q_\alpha [ l_{1f}]\end{aligned}\ ] ] the validity of this crucial assumption will be substantiated later in section [ sec : benchmarking ] by benchmarking with monte carlo simulations . from now on let us define the economic capital ec of the credit portfolio as an -quantile of the optimal single factor distribution . using the notations introduced in this section the economic capital can be written as , \qquad r_i=\vec{\alpha}\cdot\vec{\rho_i}\end{aligned}\ ] ] the optimal single factor is defined by which maximizes the above expression and has the following solution this equation , however , contains on both sides ( on the right contains ) and does not allow a straightforward analytical solution .the problem can still be solved numerically by applying , for example , the method of steepest descent .based on , the following starting point can be suggested the calculations can be significantly facilitated by the series expansion . 
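A minimal numerical sketch of the optimisation step is given below for default-only losses, where the conditional expected loss has the familiar Vasicek form. The exact recursion and starting point referred to in the text are not reproduced; instead a plain projected gradient ascent on the unit sphere is used, with a heuristic exposure-weighted starting direction. All portfolio inputs in the usage example are made up.

```python
import numpy as np
from scipy.stats import norm

def optimal_factor_direction(w, lgd, pd_, rho, conf=0.999, n_iter=200, step=2.0):
    """Unit vector alpha maximising the single-factor loss quantile (sketch).

    r_i = alpha . rho_i; the objective is sum_i w_i lgd_i Phi((t_i - r_i y*)/sqrt(1 - r_i^2))
    with y* = Phi^-1(1 - conf), i.e. the conditional expected loss of a default-only
    portfolio at the adverse factor realisation.
    """
    y_star = norm.ppf(1.0 - conf)
    t = norm.ppf(pd_)

    def value_and_grad(alpha):
        r = np.clip(rho @ alpha, 1e-6, 1.0 - 1e-6)
        z = (t - r * y_star) / np.sqrt(1.0 - r**2)
        value = np.sum(w * lgd * norm.cdf(z))
        dz_dr = (-y_star * np.sqrt(1.0 - r**2)
                 + (t - r * y_star) * r / np.sqrt(1.0 - r**2)) / (1.0 - r**2)
        grad = (w * lgd * norm.pdf(z) * dz_dr) @ rho   # chain rule through r_i = alpha . rho_i
        return value, grad

    alpha = rho.T @ w                                   # heuristic starting direction
    alpha /= np.linalg.norm(alpha)
    for _ in range(n_iter):
        _, g = value_and_grad(alpha)
        alpha = alpha + step * g
        alpha /= np.linalg.norm(alpha)                  # stay on the unit sphere
    return alpha, value_and_grad(alpha)[0]

rng = np.random.default_rng(2)
M, K = 50, 3
rho_mat = rng.uniform(0.05, 0.45, (M, K))
w = np.full(M, 1.0 / M); lgd = np.full(M, 0.45); pd_ = rng.uniform(0.005, 0.03, M)
alpha_opt, q_alpha = optimal_factor_direction(w, lgd, pd_, rho_mat)
print(alpha_opt, q_alpha)
```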
in practice ,only few iterations are needed to have an accurate solution to the optimization problem .the calculations are not hardware demanding and very fast .the optimal single factor defined by leads to another simplification for the portfolio capital allocation problem .the individual capital contributions \right)\end{aligned}\ ] ] can be written as + \sum_j\frac{\partial\overline{l}_i}{\partial r_i}\vec{\rho}_j\cdot w_i\frac{\partial}{\partial w_i}\frac{\vec{p}}{\|\vec{p}\| } = \overline{l}_i(\vec{\alpha}\vec{\rho}_i ) - \text{e}[l_i]\end{aligned}\ ] ] where the third term can be shown to be zero : in other words , the choice of according to leads to particularly simple expressions for capital contributions .the overall portfolio ec is a sum of conditional expectations of the excess losses ] {pd1conc } \includegraphics[width=0.4\textwidth , viewport=0 595 265 785,clip]{pd1highconc } \\\includegraphics[width=0.4\textwidth , viewport=0 600 265 790,clip]{pd01conc } \includegraphics[width=0.4\textwidth , viewport=0 595 265 785,clip]{pd01highconc } \\ \includegraphics[width=0.8\textwidth , viewport=0 0 530 190,clip]{cap_vs_conf2 } \end{array} ] {single } & \includegraphics[width=0.45\textwidth , viewport=0 5 380 205,clip]{single_kiss } \\\includegraphics[width=0.45\textwidth , viewport=0 5 380 205,clip]{second } & \includegraphics[width=0.45\textwidth , viewport=0 5 380 205,clip]{third } \end{array}$ ]despite its simplicity , the analytical approximation presented here is capable of quantifying credit portfolio risks in a general multi - factor setup .the _ var _ risk measure used here can easily be replaced with the _ expected shortfall_. the arbitrary loss functions used allow for covering not only default - only regime , but also mtm valuation or even the dependency of in - default loss severities on the systematic factors .the default - only case has a particularly simple solution mimicing the well - known irb capital rules : the less than perfect accuracy of the approximation is not crucial for day - to - day practical needs of credit portfolio managers .the advantages of the proposed technique are significant .the model allows very fast and straightforward calculations including real - time risk - based pricing .the simple , robust and transparent structure can facilitate user acceptance and integration on all levels of financial institutions .99 j. cespedes , j. herrero , a. kreinin and d. rosen ( 2006 ) a simple multifactor `` factor adjustment '' for the treatment of credit capital diversification . _ journal of credit risk _ , vol.2 , no.3 ,fall 2006 .
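For the default-only case the allocation step reduces to a few vectorised lines, which is what makes deal-level, real-time use feasible. The sketch below assumes the single-factor representation with loadings r_i obtained from the optimisation sketched above and default-indicator losses; by construction the facility contributions sum exactly to the portfolio figure.

```python
import numpy as np
from scipy.stats import norm

def allocate_capital(w, lgd, pd_, r, conf=0.999):
    """Facility-level capital EC_i = E[L_i | Y = y*] - E[L_i] for default-only losses,
    with r_i the loading on the optimal single factor (assumed given)."""
    y_star = norm.ppf(1.0 - conf)                                 # adverse factor realisation
    cond_pd = norm.cdf((norm.ppf(pd_) - r * y_star) / np.sqrt(1.0 - r**2))
    ec_i = w * lgd * (cond_pd - pd_)                              # conditional expected excess loss
    return ec_i, ec_i.sum()                                       # contributions and portfolio EC

# usage sketch: r would come from the optimisation step, e.g. r = rho_mat @ alpha_opt
```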
a simple , yet reasonably accurate , analytical technique is proposed for multi - factor structural credit portfolio models . the accuracy of the technique is demonstrated by benchmarking against monte carlo simulations . the approach presented here may be of high interest to practitioners looking for a transparent , intuitive , easy - to - implement and high - performance credit portfolio model .
the series of states of accretion disks called the radiatively inefficient accretion flows ( riaf ) forms an optically - thin , under - luminous ( usually radiating at a small fraction of the eddigton luminosity of each central object ) branch in the accretion - rate vs. surface - density diagram .another separate branch exists in a more optically - thick ( i.e. , large surface - density ) domain and continues from the standard - disk series to the slim - disk series , via a thermally unstable part , as the accretion rate increases ( e.g. , ) . specifically for the riaf theories , a more detailed description can be found , e.g. , in .the main efforts to take the effects of ordered magnetic fields into account in the accretion disk theories may be divided into two classes . in one class ,the presence in the disk of only a toroidal field with even polarity ( i.e. , the polarity is the same on both upper and lower sides of the equatorial plane ) is taken seriously .the resulting magnetic pressure is added to gas pressure to support the disk against the vertical component of gravity .further , if the -prescription ( ) with respect to the total pressure is adopted for a viscosity , an additional viscous extraction of angular momentum passing through the disk plane becomes possible .for example , the modifications of the standard - disk theory ( e.g. , ) and of riafs ( e.g. , ) have been discussed , respectively , in relation to some controversial spectral features seen in cataclysmic variables and to the state transitions seen in galactic black - hole x - ray binaries . in the other class , on the contrary, the presence of both poloidal and toroidal components of an ordered field are taken seriously . since the toroidal component is considered to appear as a result of dragging of the vertical field lines by the rotational motion of the disk , its polarity reverses on both sides of the equatorial plane ( i.e. , an odd polarity ) .thus , the toroidal component develops mainly outside the disk and vertically compresses the disk against gas pressure .moreover , such a configuration makes it possible to vertically extract the angular momentum by the maxwell stress .this point is essential in relation to the production of astrophysical jets ( e.g. , ; ; ) often observed to emanate from the vicinity of the disk inner edge . in most of the analytic models that is addressed to the formation of jets ,however , the magnetic field is not treated self - consistently with the fluid disk .self - consistent inclusion of an ordered magnetic field into riaf states has been performed in a series of works by the present author ( for a review , see ; hereafter referred to as paper i ) . in this model ,a twisted magnetic field works to extract angular momentum from the disk plasma , and the resistive dissipation converts the available gravitational energy into heat .this makes a good contrast with the usual riaf models , in which only turbulent magnetic fields are included , and the fluid viscosity plays an essential role in converting energy and extracting angular momentum .therefore , we call the former the resistive - riaf model , distinguished from the latter , the viscous - riaf model. it should be mentioned also that there is another series of studies in which the presence of an ordered magnetic field is treated self - consistently ( for a review , see ) . 
although its relation to riafs is not so clear , ferreira and his coworkers discuss an inner region of the accretion disk threaded by a global magnetic field .their main interest is in the local ( i.e. , at a given radius ) mechanisms to launch magnetohydrodynamic ( mhd ) jets , and the details of vertical transport of energy and angular momentum are investigated . on the other hand ,the present concern of the resistive - raif model is to show how the energy can be supplied to the jet launching site from wide aria of an accretion disk .this paper is a direct follow - up of paper i that has been devoted to discuss the appearance of the poynting flux near the inner edge of a resistive - riaf , which may lead to the jet launching .however , the discussion was based on the inward extrapolation of an outer asymptotic solution whose accuracy is not necessarily guaranteed in the inner region .moreover , the outer solution has been derived by assuming a specific condition , which we call hereafter the extended riaf condition ( equation [ 9 ] in paper i or [ [ eqn : exriaf ] ] below ) .this condition may seem rather arbitrary or artificial .therefore , we give it up in the present paper . instead ,according to the spirit of this condition , we first obtain several asymptotic solutions in the outer region of an accretion disk , which are equivalent to each other within the accuracy to the first order in the smallness parameter ( the definition will be given in the next section ) . under the above situation , the criterion to sift a specific solution from others would be the wideness of its applicability range .thus , we are naturally led to examine the behavior of these outer solutions in the opposite limit of small radius , and find that only one case among them becomes exact also in this limit .namely , the selected one becomes accurate not only in the limit of large radius but also in that of small radius .therefore , it may be called a global solution , although it is still an approximate one at middle radii .this finding is indeed a great improvement since we can discuss global operation of such accretion flows based on this much secure ground than before .another advantage of this improved solution is that the expressions for all relevant physical quantities are written analytically in closed forms .the organization of this paper is as follows . in section 2 , the variable - separated version of the governing equationsare summarized , extracted from paper i for convenience . as plausible examples of asymptotic solutions at large radii ,four possibilities are derived in section 3 without employing the extended riaf condition . by examining the behavior of these outer asymptotic solutions in the opposite limit of small radius , we find in section 4 that there is one and only one case in which the same expressions become asymptotically exact also in this limit .full expressions for the relevant quantities within this global solution are derived also in this section . using these expressions , we calculate and discuss in section 5 the local energy budgets of a few types . as summarized in the final section, the obtained results clearly show the appearance of preferable circumstances for jet launching .as for notation , we completely follow paper i , and hence adopt spherical polar coordinates , ( , , ) . the normalized versions of the radius and co - latitude are , , and , respectively . 
here , denotes the inner - edge radius of the accretion disk .all physical quantities have been expressed in the variable - separated forms , equations ( 16)-(30 ) in paper i , should read . ] within the geometrically - thin disk approximation where the half opening - angle of the accretion disk is very small , i.e. , .the fundamental equations for the radial - part functions in quasi - stationary problems ( for the definition , see paper i ) are summarized below .* leading equations * * magnetic flux conservation : * ampre s law : * ohm s law : where * mass continuity : * equation of motion : \tilde{v}_r^2 = \tilde{v}_{\varphi}^2 - v_{\rm k}^2 , \label{eqn : eomr } \end{aligned}\ ] ] where * equation of state : * subsidiary equations * * faraday s law : * charge density : one of the characteristic aspects of our treatment is that ohm s law is used directly , without substituted into faraday s law . the poloidal and toroidal magnetic - reynolds numbers , equations ( [ eqn : rep ] ) and ( [ eqn : ret ] ) , have been introduced in rewriting ohm s law . the final expression for be derived by using equation ( 73 ) of paper i. the expression ( [ eqn : eomr ] ) for the radial component of equation of motion ( eom ) follows from equation ( 93 ) of paper i , but the second term on the right - hand side ( rhs ) of the latter has been shifted to the left - hand side ( lhs ) . the derivation may become evident , when one is referred to the identity in equation ( [ eqn : eomr ] ) , denotes the kepler velocity with and being the gravitational constant and the mass of a central object , respectively .the expression ( [ eqn : eomph ] ) for the azimuthal component of eom has been quoted from equation ( 102 ) of paper i. we often refer to the expansion parameter that becomes very small at large radii , where and are the radial and azimuthal components of the velocity , respectively .the suffix vpf means that the quantity is evaluated by the lowest - order solution , the vanishing poynting - flux ( vpf ) solution ( see paper i for details ) .the original riaf condition , has been assumed in the asymptotic region at large radii .this condition means that the ratio of the pressure gradient force to the gravity should be a constant in this region , which guarantees a characteristic nature of optically thin riafs , i.e. , a virial - like temperature .combined with the equation ( 68 ) of paper i , which is the lowest - order version of the -component of eom in the power series of , we have derived a lowest - order solution ( ) in the asymptotic region . since the poynting flux vanishes identically in this solution ( i.e. , the vpf solution ) , it can not explain the jet launching which is commonly expected for the accretion disks of the resistive - riaf type . this is because the electrodynamic launching surely requires the supply of jet - driving power through the poynting flux . in order to overcome this difficulty, we tried in paper i to improve the accuracy of the solution by taking the first - order corrections in into account . 
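Since the expansion parameter only enters through its smallness at large radii, a short numerical illustration suffices. The profiles below (a near-Keplerian rotation and an assumed power-law infall) and the numerical values of G, M and the inner-edge radius are purely illustrative assumptions, not the exponents fixed by the equations above; the point is only that a parameter built from the squared velocity ratio decays with the normalised radius.

```python
import numpy as np

G = 6.674e-8                      # gravitational constant, cgs
M = 2.0e33 * 1.0e7                # a 10^7 solar-mass central object (assumed)
r_in = 1.0e14                     # inner-edge radius in cm (assumed)

xi = np.logspace(0.0, 3.0, 200)              # normalised radius xi = r / r_in
v_K = np.sqrt(G * M / (r_in * xi))           # Kepler velocity
v_phi = 0.9 * v_K                            # near-Keplerian rotation (assumed factor)
v_r = -0.3 * v_K[0] * xi**-1.0               # infall with an assumed power-law profile
epsilon = (v_r / v_phi) ** 2                 # smallness parameter of the expansion
print(epsilon[0], epsilon[-1])               # decays roughly as xi^-1 for these profiles
```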
in this connection , we have replaced the original riaf condition by the extended riaf condition , here , is a constant which controls the radial profiles of relevant physical quantities in the vpf solution , and in this sense plays a similar role to the polytropic index that replaces the energy transfer equation ( see paper i ) .in contrast to the original riaf condition , the extended one does not have firm grounds to stand on , except that it becomes identical with the original one in the vpf limit .therefore , we can not reject the criticism that it is only a makeshift policy , and if possible , such an obscure postulate should be avoided in deriving higher - order solutions .when the above postulate has been removed , the only remaining requirement is that any improved solution should coincide with the vpf solution in the limit of vanishing corrections . under such circumstances , the new policy we adopt here is to portion out and from equation ( [ eqn : eomph ] ) so as to reproduce a term that is proportional to or , as the leading contribution from the partial - pressure gradient force appearing in the identity ( [ eqn : id_1 ] ) .this requirement is equivalent to the condition ( [ eqn : orriaf ] ) or ( [ eqn : exriaf ] ) , respectively , as far as the leading - order terms are concerned .even if we follow this policy , the solution is not determined uniquely , and anyway the process of finding solutions becomes necessarily a kind of trial and error . in the following subsections ,we show four examples of successful trials .they are all different from the outer asymptotic solution obtained in paper i , which has been derived under the restriction of extended riaf condition .although only one of them leads to a final global solution , we dare to mention all of the new results .we believe that such a description will be helpful for the reader to become familiar with subtle insight into the strategies in finding solutions and to experience how the type of an accretion flow ( i.e. , sub- or trans - critical infall ) is actually determined .if that had been omitted , the description of this paper would become very abrupt and less understandable . in specifying the radial profiles of the velocity and magnetic fields ,it is natural to first follow the results obtained in paper i. the velocity components have been written as where is a positive constant whose value is specified in the course of discussion , and the profile of the -component for the magnetic field has been fixed as at the disk outer edge , the magnitude of this component becomes comparable to the externally imposed uniform field , , as guaranteed by the relation .the above specifications result in the derivatives and further from equations ( [ eqn : fcont ] ) , ( [ eqn : rep ] ) and ( [ eqn : eomph ] ) , respectively , and it is evident that the above set of selections for and is effective in keeping the expression ( [ eqn : ret_1 ] ) rather simple .* case 1*. when we portion out and from equation ( [ eqn : ret_1 ] ) as we obtain the relation reproducing one of our aimed form . since the above determined is a constant smaller than unity for any value of in the allowed range , ( see ) , the accretion flow in this case is said to be a ` sub - critical ' ( i.e. , ) .actually in our treatment , however , no such criticality exists at . ] infall .although as confirmed from , the solution obtained in case 1 does not belong to a vpf solution since ( i.e. 
, ) .the form of follows from the definition of : which yields at large radii , the first term on the rhs of equation ( [ eqn : lnbp_1 ] ) is the quantity of while the second and third terms are of .they are referred to , respectively , as and .then , shifting only the leading - order term resulting from ( [ eqn : lnbp_1 ] ) to the rhs in the -component of eom ( [ eqn : eomr ] ) , we have on the other hand , the lhs of eom ( [ eqn : eomr ] ) becomes \ \tilde{v}_r^2 \nonumber \\ & \simeq & -\left\{\frac{2n+9}{2(2n+1 ) } - 2a\right\ } \tilde{v}_r^2 .% \label{eqn : lhs_1}\end{aligned}\ ] ] in the final line of the this equation , only the lowest - order terms in have been kept in the curly brackets in order to match the form of the rhs .finally , equating both sides , we can fix the value of as * case 2*. the other option for portioning out of and is in which we have of the form and the relations differently from case1 , the accretion flow in this case is a ` trans - critical ' ( see the footnote 2 ) infall because it starts at a subcritical velocity ( i.e. , ) at the disk outer edge ( ) and reaches a super - critical value ( i.e. , ) at small radii ( ) .after shifting only the term including to the rhs of equation ( [ eqn : eomr ] ) , we have the remaining lhs becomes according to the same procedure as in case 1 . equating both sides of this equation , we obtain which requires since should be positive definite . in this subsection , we discuss fairly different types of expressions for the radial profiles of the velocity and magnetic fields .the new guideline in specifying them is to pay special attention to the identity where these are monotonic functions as shown in figure 1 , and their behaviour at large radii is within the accuracy to the first order in .first , the radial profile of is specified as it is easy to confirm that equations ( [ eqn : br_2 ] ) and ( [ eqn : br_1 ] ) are equivalent within the accuracy to the first order in . then , we have from the relation in order to keep equation ( [ eqn : eomph ] ) simple , we are naturally led to select which gives for the radial velocity component , it turns out after some trials and errors that the specification and hence is very interesting to examine .then , the velocity ratio remains the same as in subsection 3.2 , and we find very simple expressions for the magnetic reynolds numbers : similarly to the discussion in the previous subsection , there are again two possibilities in portioning out and from the latter of equation ( [ eqn : ret_2 ] ) .* case 3*. the first possibility is the specifications , which describe a subcritical accretion flow analogously to case 1 . in this case, we have the results and in the last equation derived above , the two terms on the rhs of the logarithmic derivative of are both of order unity at large radii , since as . 
therefore , we shift the whole term containing on the lhs of eom ( [ eqn : eomr ] ) to the rhs : \\ & & \qquad \simeq -\frac{6a}{2n+1}\ \tilde{v}_r^2 , \nonumber % \label{eqn : rhs_3}\end{aligned}\ ] ] where the last expression is the limiting form at large radii calculated with the aid of equation ( [ eqn : lgfs ] ) .note that all the leading - order terms in the square brackets cancel out in this limit and we have only a first order term ( ) .the terms remaining on the left is \ \tilde{v}_r^2 \nonumber\\ & = & -\frac{1}{2(2n+1)}\left\{4n+10-(2n+1)s\right\}\ \tilde{v}_r^2 \\ & \simeq & -\frac{2n+9}{2(2n+1)}\ \tilde{v}_r^2 , \nonumber % \label{eqn : lhs_3}\end{aligned}\ ] ] where the last expression is also the approximate form at large radii .therefore , this equation holds asymptotically at large radii , as far as * case 4*. the last option in our consideration is the profiles which describe a trans - critical accretion flow analogously to case 2 . in this case, we have the results and as in case 3 , both terms resulting from the above logarithmic derivative of are of order unity , and hence the term containing this factor in the -component of eom should be shifted altogether to the right .then , the rhs becomes where the last expression is the limiting form at large radii .again , note that the leading - order terms in the curly brackets have been completely cancelled out in this limit . on the other hand ,the lhs becomes \ \tilde{v}_r^2 } % \nonumber \\ & = & -\frac{1}{6}\ v_{\rm k ,in}^2\xi^{-2}\ \left[\ 2(2n+1 ) \right .\nonumber \\ & & \qquad\qquad\qquad \left .+ \left\{8-(2n+1)s\right\}\xi^{-1}f\right ] \label{eqn : lhs_4 } \\ & \simeq & -\frac{2n+9}{2(2n+1)}\ \tilde{v}_r^2 .\nonumber \end{aligned}\ ] ] therefore , the equation holds in the outer asymptotic region , when the previous section , we have obtained four different sets of asymptotic solutions at large radii .although these sets have different expressions for any one of the relevant physical quantities ( and their components ) , they are equivalent within the accuracy to the first order in ( ) . as far as we remain only in this outer asymptotic region, we can not therefore judge which type of the accretion flows ( e.g. , the sub - critical or trans - critical type ) , is more likely to fit for the reality .for the resolution of this problem , considerations from a global point of view are needed .thus , we are led to examine the behavior of the above sets in the opposite limit of small radius . fortunately , as shown below , there is one and only one case ( case 4 ) in which the same set also serves as the asymptotic solution at small radii .this means that this set can be regarded as a global solution , though the accuracy may be somewhat poor in the middle region . in order to discuss the small radius limit ,we note here that in the first three cases discussed in the pervious section , the asymptotic behavior is different on both sides of the -component of eom .indeed we obtain , lhs and rhs in case 1 ; lhs and rhs in case 2 ; lhs and rhs in case 3 . however , in case 4 , equations ( [ eqn : lhs_4 ] ) and ( [ eqn : rhs_4 ] ) yield , respectively , then equating both sides , we obtain which specifies the value of in the asymptotic solution at small radii . if the values of in the two asymptotic regions at large and small radii ( i.e. 
, equations [ [ eqn : ala ] ] and [ [ eqn : asm ] ] ) coincide , the asymptotic solutions match smoothly and become a global solution .this actually happens when the presence of a select value of may suggest that a preferable thermodynamic circumstance is required for realization of the state described by our global solution .the global solution indicates that the infall has a trans - critical nature as seen from equation ( [ eqn : d_4 ] ) .however , this should not be interpreted as a restriction on the radial profiles of infall velocity , since it is fixed always by equation ( [ eqn : vr_2 ] ) for both cases 3 and 4 .rather , it should be interpreted as a restriction on the profile of the characteristic velocity , and hence , on those of and .other physical quantities ( than , , , , and ) are derived straightforwardly as follows .it should be noted that all quantities are written in closed forms , i.e. , not in the forms of truncated power series in .we obtain from equation ( [ eqn : defd ] ) and further substituting it in equation ( [ eqn : mcont ] ) , equation ( [ eqn : vthf ] ) indicates that the generation of a wind from an accretion disk is directly controlled by the parameter ( differently from the result in paper i ) .the direction of the wind is upward ( i.e. , vertically outgoing ) when , and downward ( i.e. , vertically ingoing ) when .the above selected value , , means the presence of a medium - strength upward wind .it can be seen in figure 1 that , as far as the profiles of the velocity components are concerned , they are essentially the same as in paper i. ( 85 mm , 95mm)figure1.eps the pressure and temperature are calculated respectively from equations ( [ eqn : eomth ] ) and ( [ eqn : eos ] ) as the deviation of temperature from the virial form is expected only in a middle region and remains to be rather small .the results for every component of the current density and the electric field follow from ampre s law and ohm s law : similarly to the case of , the coefficient of is proportional to .it seems rather natural from the viewpoint of current closure that the value of determined in equation ( [ eqn : na ] ) is no - zero .finally , we cite the component expressions for the poynting flux , including their -dependences since they have been dropped in paper i by accident . ( 85 mm , 95mm)figure2.eps as it turns out from figure 2 , and are negative everywhere , but changes sign from slightly positive ( outgoing ) to negative ( incoming ) as the radius decreases across . in order to see this fact more clearly, the formula in the curly brackets in equation ( [ eqn : pfth ] ) is shown as in the figure .taking the -dependece of also into account , we can say that the poynting flux in the poloidal plane flows from the wide outer region ( ) into the narrow inner region ( ) , almost along the surface of an accretion disk .it should be noted that here has the opposite sign to that obtained in paper i , in the inner region where becomes very large .this means that the extrapolation of an outer solution that is not the global solution can actually lead to an erroneous result .before proceeding to energy budgets , it is convenient to introduce the mass accretion rate through a vertical cross - section of arbitrary radius , where the approximation has been used , extrapolating the functional form near the equator even to large regions ( i.e. 
, neglecting the term ) .this is because the -dependences are reliable only near the disk midplane owing to the adopted method of approximation ( see paper i ) .since we need a co - latitudinal integration in the above calculation , this may cause some worry about the accuracy of the result .however , the most important thing is a finiteness of the integral and its precise value does not matter on the essence of the following discussions . reflecting the presence of the vertical flows , varies with like .it is a constant only when , i.e. , there is no wind . the accretion rate at the outer edge of an accretion diskis given explicitly as because , at this radius , can be approximated by its asymptotic form , .first substituting , and in the above definition , and then solving for , we obtain this equation will be used to eliminate in the following subsections .the heating rate due to the joule dissipation of the electric current per unit volume is calculated as in obtaining the approximate expression in the first line of the above equations , we have neglected since , , and also neglected a term containing dependence . in obtaining the expression on the third line , and have been eliminated with the aids of equation ( 73 ) in paper i and equation ( [ eqn : b0 ] ) above , respectively . on the other hand ,the advection cooling rate per unit volume is written as where is the enthalpy for an ideal gas . similarly to the case of , the approximate expression on the first line of the above equations is obtained by neglecting the terms . however , the heating and cooling terms given above does not balance generally .this is because i ) there may exist the exchange of thermal energy between neighboring fluid elements through conduction and/or convection ( these are expressed as non - adiabaticities ) , even if the radiation loss may be negligibly small , or ii ) the balance can not be achieved in principle , implying the break - down of the model .the amount of this discrepancy is when it is expressed in terms of an additional heating ( or a cooling if it is negative ) .there are two distinct terms in the above result ; the term which contains and which does not .the former dominates over the latter in the outer asymptotic region ( i.e. , in the vpf limit ) . in this asymptotic region , this term has been interpreted as due to the non - adiabaticities ( ) .our global solution indicates that both terms enhance in a narrow region at around .even in such inner regions , the former drives an upward wind ( i.e. , ) when there is additive heating ( i.e. , ) , and vice versa , as seen in equation ( [ eqn : vthf ] ) . on the other hand, the latter term always contributes to a cooling .although it is not so certain , this fact may suggests that the latter corresponds to a radiative cooling that may enhance at such an inner region .as shown in figure 3 , the advection cooling becomes largely exceeds the joule heating where .the cause of this enhancement is in the monotonic increase of and toward the center . in principle, it is impossible to balance the advective cooling even by supplying heat through conduction or convection , as far as the cooling there exceeds the peak value of the joule heating . 
in this sense, the present model does not seem to hold in the region where .thus , we are consistently led to the conclusion that there is an inner edge of an accretion disk at .the energy equation for the electromagnetic field is known as the poynting theorem : where ( see paper i ) is the electromagnetic energy density , and is the poynting flux .the two terms on the right - most side of equation ( [ eqn : pyt ] ) represent losses of the electromagnetic energy through the joule dissipation and through work done by the magnetic force on the fluid , respectively .when we concentrate our attention mainly to the region near the disk midplane , we can neglect terms as before .the contribution from vanishes in this process , and the poynting equation finally reduces to the form as the above definition of includes a minus sign , it means the work done by fluid on the electromagnetic force ( hence , on the field ) . the explicit expression of can be reached either by calculating the right - hand side of the expression on the middle line or by calculating its left - hand side directly according to its definition , confirming that the poynting equation actually holds in our case . since at any finite radius , the fluid motion is doing work on the electromagnetic field everywhere in the disk . as seen from figure 2 , the vertical poynting flux , , is positive in the region while negative in .this implies that where and where .namely , in the wide outer region ( ) the fluid in the disk is doing work that exceeds the local dissipation and the excess is gathered into the small inner region ( ) through the poynting flux ( see also the discussion at the end of the previous section ) .thus , in the inner region , energy input through the poynting flux can largely exceeds local supply through the work , and both are thermalized as the joule dissipation ( i.e. , ) .in other words , the outer main disk is driving the innermost region electrodynamically , suggesting that , if a more careful treatment of the vertical structure and flow in this region is introduced in the model , the launching of an mhd jet could be obtained . within the present status of our accretion disk model ,however , this ability is wasted only on the joule dissipation there . in a sense ,the new global picture of our model is reasonable because it closes within itself . on the other hand , the previous picture stated in paper i is less persuasive , because it does not close unless assuming a presence of some external component ( e.g. , an mhd jet ) that is driven by the large positive ( i.e. , outgoing ) poynting flux mainly emanating from the innermost disk .( 85 mm , 95mm)figure3.eps the binding energy per unit volume of a fluid element is described by the bernoulli sum ( , ) , where the approximations of geometrically - thin disk and that of respecting midplane have been used again in obtaining the expression on the second line . 
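for reference, the two relations invoked above can be written in their standard textbook forms (gaussian units and conventional symbols are assumed here; the normalization of paper i may differ, and the displayed expressions were lost in extraction): the poynting theorem with its resistive-mhd source term split into joule dissipation and the work done by the magnetic force on the fluid,

    \frac{\partial u_{\rm EM}}{\partial t} + \nabla\cdot\boldsymbol{S}
      = -\,\boldsymbol{j}\cdot\boldsymbol{E}
      = -\,\frac{j^{2}}{\sigma} - \boldsymbol{v}\cdot\frac{\boldsymbol{j}\times\boldsymbol{B}}{c},
    \qquad
    \boldsymbol{S} = \frac{c}{4\pi}\,\boldsymbol{E}\times\boldsymbol{B},

and the bernoulli sum built from the kinetic, enthalpy and gravitational contributions (written here per unit mass; the text works per unit volume, i.e. multiplied by the mass density),

    {\rm Be} = \frac{1}{2}\,v^{2} + w - \frac{GM}{r},
    \qquad w = \text{specific enthalpy}.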
calculating this quantity in terms of our global solution, we obtain the asymptotic values for large and small are the upper line of the above equation reproduces the vpf result ( ) .as seen in figure 3 , the global solution predicts that remains positive everywhere ( cf ., ; however see also , ; ; ; ; ) and becomes divergently large in the limit of , even when .this fact suggests that the accretion flows described by our global solution should be ejected , at least in its fraction , before it reaches the center .summarizing the discussions in the previous section , we have reached the conclusions that i ) the accretion state characterized under the name of resistive - riaf does not seem to extend into the region , ii ) the main disk ( ) is driving the innermost region ( ) electrodynamically , and iii ) infalling matter always stay unbound and can not reach the gravitational center , as a whole . judging from these evidences ,the most probable scenario is an ejection of the infalling matter , at least in its fraction , at around which may be regarded as the inner edge of an accretion disk .thus , it has been shown clearly that one of the most preferable circumstances necessary for the mhd jet launching is actually prepared within our resistive - riaf model .abramowicz , m. , lasota , j .-, & igumenshchev , i.v . , 2000 , , 314 , 775 beckert , t. , 2000 , apj , 539 , 223 begelman , m. c. , & pringle , j. e. , 2007 , , 375 , 1070 blandford , r. d. , & begelman , m. c. , 1999 , , 303 , l1 blandford , r. d. , & payne , d. g. , 1982 , , 199 , 883 bondi , h. , 1952 , mnras , 112 , 195 ferreira , j. , 2008 , new astron ., 52 , 42 kaburaki , o. , 2000 , apj , 531 , 210 kaburaki , o. , 2001 , apj , 563 , 505 kaburaki , o. , 2012 , pasj , 64 , 39 , paper i kato , s. , fukue , j. , & mineshige , s. , 2008 , black - hole accretion disks : towards a new paradigm ( kyoto university press , kyoto ) lubow , s.h . , papaloizou , j. c. b. & pringle , j. e. 1994 , , 268 , 1010 nakamura , k. e. , 1998 , , 50 , l11 narayan , r. & mcclintock , j. e. 2008 , new astron .51 , 733 narayan , r. & yi , i. 1994 , apj , 428 , l13 narayan , r. & yi , i. 1995 , apj , 444 , 231 oda , h. , machida , m. , nakamura , k. e. , matsumoto , r. & narayan , r. 2012 , , 64 , 15 parker , e. n. , 1960 , apj , 132 , 175 pudritz , r. e. & norman , c. a. 1983 , apj , 274 , 677 turolla , r. & dullemond , c. p. 2000 , , 531 , l49
in our recent paper , we obtained a model solution for radiatively inefficient accretion flows ( riafs ) in a global magnetic field ( the so - called resistive riaf model ) , which is asymptotically exact in the outer regions of such disk - forming flows . when extrapolated inward , the model predicts a local enhancement of the vertical poynting flux within a small radius that may be regarded as the inner edge of the disk . this has been interpreted as the power source for the astrophysical jets that are observationally well known to be ejected from accretion disks of this type . since the accuracy of the solution may become rather poor in those inner regions , however , the grounds for this assertion were not firm . in the present paper , we place the argument for the appearance of jet - driving circumstances on a much firmer footing by deriving a global solution for the same situation . although the new solution is still approximate , it becomes exact in the limits of both large and small radius . the analytic results clarify that electrodynamic power is gathered by the poynting flux from the outer main - disk region to feed the innermost part of the accretion disk , and that the injected power largely exceeds the local supply of work by the fluid motion .
the achievable rate for a time - selective fading channel depends on what channel state information ( csi ) is available at the receiver and transmitter .namely , csi at the receiver can increase the rate by allowing coherent detection , and csi at the transmitter allows adaptive rate and power control ( e.g. , see ( * ? ? ?6 ) ) . obtaining csi at the receiver and/or transmitter requires overhead in the form of a pilot signal and feedback .we consider a correlated time - selective flat rayleigh fading channel , which is unknown at both the receiver and transmitter .the transmitter divides its power between a pilot , used to estimate the channel at the receiver , and the data .given an average transmitted power constraint , our problem is to optimize the instantaneous pilot and data powers as functions of the time - varying channel realization .our performance objective is a lower bound on the achievable rate , which accounts for the channel estimation error . power control with channel state feedback , assuming the channel is perfectly known at the receiver , has been considered in .there the focus is on optimizing the input distribution for different channel models using criteria such as rate maximization and outage minimization .optimal power allocation in the presence of channel estimation error has been considered in .the problem of optimal pilot design for a variety of fading channel models has been considered in . therethe pilot power and placement , once optimized , is fixed and is not adapted with the channel conditions . a key difference hereis that the transmitter uses the csi to adapt _ jointly _ the instantaneous data and pilot powers .because the channel is correlated in time , adapting the pilot power with the estimated channel state can increase the achievable rate .we also remark that although we analyze a single narrowband fading channel , our results apply to a set of parallel fading gaussian channels , where the average power is split over all channels .we start with a correlated block fading model in which the sequence of channel gains is gauss - markov with known statistics at the receiver . the channel estimate is updated at the beginning of each block using a kalman filter , and determines the power for the data , and the power for the pilot symbols in the succeeding coherence block .optimal power control policies are specified implicitly through a bellman equation .other dynamic programming formulations of power control problems have been presented in , although in that work the channel is either known perfectly ( perhaps with a delay ) , or is unknown and not estimated . because an analytical solution to the bellman equation appears to be difficult to obtain , we study a diffusion limit in which the correlation between successive coherence blocks tends to one and the average power goes to zero .( this corresponds to a wideband channel model in which the available power is divided uniformly over a large number of parallel flat rayleigh fading sub - channels . ) in this limit , the gauss - markov channel becomes a continuous - time ornstein - uhlenbeck process , and the bellman equation becomes a partial differential equation ( pde ) .a diffusion equation is also derived , which describes the evolution of the state ( channel estimate and the associated error variance ) , given a power allocation policy . 
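before passing to the continuous-time limit discussed next, a small discrete-time sketch may help fix ideas: a first-order gauss-markov rayleigh channel tracked by a per-block kalman filter, which produces exactly the state pair (channel estimate, error variance) referred to above. the parameterization h[i+1] = r*h[i] + w[i] with innovation variance (1 - r^2)*sigma_h2, and the scalar kalman recursion below, are standard forms assumed here because the paper's own symbols were lost in extraction; they are meant as an illustration, not as the authors' exact equations.

    import numpy as np

    def track_channel(n_blocks, r=0.99, sigma_h2=1.0, sigma_z2=1.0, eps=0.5, seed=0):
        """Gauss-Markov Rayleigh channel tracked by a scalar Kalman filter.

        Assumed model (standard forms, not the paper's exact equations):
          channel:      h[i+1] = r*h[i] + w[i],  w[i] ~ CN(0, (1 - r**2)*sigma_h2)
          pilot output: y[i]   = sqrt(eps)*h[i] + z[i],  z[i] ~ CN(0, sigma_z2)
        Returns the sequence of (|h_hat|^2, theta) state pairs.
        """
        rng = np.random.default_rng(seed)
        cn = lambda var: np.sqrt(var / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
        h = cn(sigma_h2)                       # draw from the stationary distribution
        h_hat, theta = 0.0 + 0.0j, sigma_h2    # prior estimate and error variance
        state = np.empty((n_blocks, 2))
        for i in range(n_blocks):
            h = r * h + cn((1 - r**2) * sigma_h2)           # channel evolution
            h_pred = r * h_hat                               # Kalman prediction
            th_pred = r**2 * theta + (1 - r**2) * sigma_h2
            y = np.sqrt(eps) * h + cn(sigma_z2)              # pilot observation
            gain = np.sqrt(eps) * th_pred / (eps * th_pred + sigma_z2)
            h_hat = h_pred + gain * (y - np.sqrt(eps) * h_pred)
            theta = th_pred * sigma_z2 / (eps * th_pred + sigma_z2)
            state[i] = abs(h_hat) ** 2, theta
        return state

    state = track_channel(20_000)
    print("steady-state error variance:", state[-1, 1])
    print("mean of |h_hat|^2 (~ sigma_h2 - theta):", state[5000:, 0].mean())
    print("std/mean of |h_hat|^2 (~1 if exponential):",
          state[5000:, 0].std() / state[5000:, 0].mean())

the last two printed quantities anticipate the steady-state results derived below for constant pilot power: the estimate's mean square equals the channel variance minus the error variance, and its magnitude squared is exponentially distributed.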
in this limit, we show that given a peak power constraint for the pilot power , the optimal pilot power control policy is a switching policy ( `` bang - bang '' control ) : the pilot power is either the maximum allowable or zero , depending upon the current state .hence the optimal pilot power control policy requires at most one feedback bit per coherence block .also , the optimal data power control policy is found to be a variation of waterfilling . other work in which the wireless channel is modeled as a diffusion process is presented in .the switching points for the optimal policy form a contour in the state space , which is referred to as the _ free boundary _ for the corresponding pde . solving this pde then falls in the class of _ free boundary problems _ .we show that in the diffusion limit the system state becomes confined to a narrow region along the boundary .furthermore , the associated probability distribution over the boundary is exponential .that enables a numerical characterization of the boundary shape .our results show that the average pilot power should decrease as the channel becomes more severely faded .we observe that the optimal switching policy is equivalent to adapting the pilot symbol insertion rate with _ fixed _pilot symbol energy .the optimal pilot insertion rate as a function of the channel estimate is then determined by the shape of the free boundary .we show that the boundary shape essentially shifts pilot power from more probable ( faded ) states to less probable ( good ) states .furthermore , the boundary shape guarantees that the channel estimate is sufficiently accurate to guide the power adaptation .numerical results show that pilot power adaptation can provide substantial gains in achievable rates ( up to a factor of two ) .the gains are more pronounced at low snrs and with fast fading channels .although these results are derived in the limit of large bandwidth ( low snr ) , monte carlo simulations show that they provide an accurate estimate of the performance when the bandwidth is large but finite ( a few hundred coherence bands ) .moreover , the optimal switching policy in the diffusion limit accurately approximates the optimal pilot power control policy for the discrete - time model , and provides essentially the same performance gains relative to constant pilot power . to limit the overall feedback rate, we also consider combining the adaptive pilot power with `` on - off '' data power control , which also switches between a fixed positive value and zero .( hence that also requires at most one bit feedback per coherence block . 
)the corresponding optimal free boundaries are computed , and results show that this scheme gives negligible loss in the achievable rate .the next section presents the system model and section [ discrete ] formulates the pilot optimization problem as a dynamic program .section [ cont ] presents the associated diffusion limit and the corresponding bellman equation .the optimal policy is then characterized in sections [ solanal]-[water ] with optimal data power control , and in section [ on - off ] with optimal on - off data power control .numerical results showing free boundaries and the corresponding performance are also presented in sections [ water ] and [ on - off ] .training overhead is discussed in section [ tl ] , and conclusions and remaining issues are discussed in section [ conc ] .we start with a block fading channel model in which each coherence block contains symbols , consisting of pilot symbols and data symbols .the vector of channel outputs for coherence block is given by where and are , respectively , vectors containing the pilot and data symbols , each with unit variance , and and are the associated pilot and data powers .the noise contains circularly symmetric complex gaussian ( cscg ) random variables , and is white with covariance .the channel gain is also cscg , is constant within the block , and evolves from block to block according to a gauss - markov process , i.e. , where is an independent cscg random variable with mean zero and variance , and ] where and are , respectively , and vectors and ^\dag ] is over the conditional probability of given and action . using the channel state evolution equations derived in section [ model ], we have = \int_0^{\infty}\ , v(u , \theta_{i+1})\ , f_{{\hat{\mu}}_{i+1 } | s_i } ( u ) du\ ] ] where is the conditional density of given , and is given by . from itfollows that is ricean with noncentrality parameter and variance \theta_{i+1 } + r^2 \hat{\mu} ] , the state process , which is the solution to the stochastic differential equations and , has continuous sample paths . the proof is given in appendix [ ap : cont ] .note that this lemma does not require the control input to be continuous in time .this observation will be useful in the subsequent discussion .we now consider the continuous - time limit of the optimization problem .if the data power for the discrete coherence block is , then for large ( but finite ) , the objective becomes close to the continuous objective ] ( corresponding to the sum rate over parallel sub - channels ) .a difficulty is that this objective is unbounded as . to simplify the analysis , we first take , which lower bounds the objective for all ( and corresponds to scaling up the power in the diffusion limit ) . after characterizing the optimal policywe then replace the objective with the preceding scaled objective with fixed to generate numerical results .we discuss the rate of growth of the scaled rate objective as .] we therefore rewrite the discrete - time optimization as the continous - time control problem \\ \text{subject to : } & \,\ , \label{eq : pconst1 } \limsup_{t \rightarrow \infty}e\left [ \frac{1}{t } \int_{0}^{t } \,{\epsilon}(t)\,dt + \frac{1}{t}\int_{0}^{t } \,p(t)\,dt \right ] \le \,p_{av},\\ \text{and } & \qquad { \epsilon}(t ) \le { \epsilon_{max}}. \end{aligned}\end{aligned}\ ] ] analogous to , the bellman equation can be written as ( see ) \right\}\ ] ] where is the generator of the state process with pilot power ( * ? ? 
?7 ) , and is given by = \frac{e[dv]}{dt } = a + \epsilon b\ ] ] where \end{aligned}\ ] ] and the dependence on is omitted for notational convenience . herewe ignore existence issues , and simply assume that there exists a bounded , continuous , and twice differentiable function satisfying .note that is unique only up to a constant ( * ? ? ?* ch . 4), .[ thm ] given the pilot power constraint ] .substituting into gives the final version of the bellman equation where .an alternative way to arrive at and is to take the diffusion limit of the discrete - time bellman equation .this alternative derivation is given in appendix [ ap : alt ] .it is easily shown that the optimal data power allocation is \\ & = & \biggr ( \frac{- \lambda { \sigma_z}^2 ( 2 \theta + \hat{\mu } ) + \sqrt{\delta } } { 2 \lambda \theta ( \hat{\mu } + \theta ) } \biggr ) ^ { + } \label{eq : palloc}\end{aligned}\ ] ] where and determines in .note that for .this power allocation is the same as that obtained in , which considers a fading channel with constant estimation error , as opposed to the time - varying estimation error in our model .from theorem [ thm ] the optimal pilot power control policy is determined by the switching boundary in the state space , which is defined by the condition .this is referred to as a `` free boundary '' condition for the bellman pde .the dynamical behavior of the optimal pilot power control policy is illustrated in fig [ fig : sysdyn1 ] .the vertical and horizontal axes correspond to the state variables and , respectively .the shaded region , , is the region of the state space in which , and in the complementary region .these two regions are separated by the free boundary , .the penalty factor determines the position of this boundary , and the associated value of .the vertical line in the figure corresponds to the estimation error variance , which results from taking for all .clearly , in steady state the estimation error variance can not be lower than this value , hence the steady - state probability density function ( pdf ) of the state is zero for . substituting in and setting gives suppose that the initial state is in .with the state evolution equations and become and .this implies that the state trajectory is a straight line towards the point until it hits the free boundary , as illustrated in fig .[ fig : sysdyn1 ] .therefore , for , must be selected so that the point lies in .otherwise , the state trajectory eventually drifts to and stays there , corresponding to for all , ( because ) , and .if the trajectory hits the free boundary below the point , then it is pushed back into .this is because at the boundary and for , the drift term in is negative , namely , .otherwise , if the trajectory hits the boundary above point , it continues into and settles along the line where the drift . for the discrete - time model with small , but positive , the state trajectory zig - zags around the boundary , as shown in fig .[ fig : sysdyn1 ] . 
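the closed form of the optimal data-power rule quoted above did not survive extraction; as a stand-in, the sketch below implements the familiar waterfilling special case p(mu_hat) = (1/lambda - sigma_z^2/mu_hat)^+, which is what a modified rule of this kind reduces to when the estimation-error variance theta is negligible. treating theta as zero and finding the multiplier by bisection are simplifying assumptions made only for illustration.

    import numpy as np

    def waterfilling(mu_hat_samples, power_budget, sigma_z2=1.0):
        """Waterfilling-style data power allocation over channel-estimate samples.

        Illustrative stand-in for the paper's modified rule: the estimation-error
        variance is ignored, so p(mu) = (1/lam - sigma_z2/mu)^+, with the
        multiplier lam found by bisection so the average power meets the budget.
        """
        mu = np.maximum(np.asarray(mu_hat_samples, dtype=float), 1e-12)

        def avg_power(lam):
            return np.maximum(1.0 / lam - sigma_z2 / mu, 0.0).mean()

        lo, hi = 1e-9, 1e9
        for _ in range(200):                      # geometric bisection on the multiplier
            mid = np.sqrt(lo * hi)
            lo, hi = (mid, hi) if avg_power(mid) > power_budget else (lo, mid)
        lam = np.sqrt(lo * hi)
        return np.maximum(1.0 / lam - sigma_z2 / mu, 0.0), lam

    rng = np.random.default_rng(0)
    p, lam = waterfilling(rng.exponential(1.0, 100_000), power_budget=0.5)
    print("average data power:", p.mean(), "multiplier:", lam)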
hence if the free boundary intersects at point , then in steady state the probability mass must be concentrated along the curve .this is verified through monte carlo simulations and illustrated in fig .[ fig : trace ] .points in the state space are shown corresponding to a realization generated from and with ( so that ) .the preceding discussion suggests that the steady - state probability associated with states not on the curve defined by the free boundary and tends to zero in the continuous - time limit .this is stated formally in the next section .we also remark that in region the pde is a `` transport equation '' , which has an analytical solution containing an arbitrary function of a single variable .determining this function and the constant appears to be difficult , so that we will take an alternative ( more direct ) approach to determining the free boundary .in this section we characterize the steady - state behavior of the state trajectory with the optimal switching ( bang - bang ) training policy , and compare with some simpler policies . in particular , we give the first - order pdf over the free boundary , which we subsequently use to compute the optimized boundary explicitly .we will denote the free boundary as for . to simplify the analysis we make the following assumptions : * ( p1 ) the free boundary is a continuously differentiable curve such that * ( p2 )the function is one - to - one , i.e. , for any such that , .note that ( p1 ) requires to be large enough so that the entire free boundary ( in fig .[ fig : sysdyn1 ] ) lies to the right of .( that is , they do not intersect . )the condition on the derivative of the free boundary curve is mild .geometrically , it implies that the region enclosed by the free boundary and point is convex .this condition is indeed satisfied by the optimized free boundaries computed in later sections .[ lm : strip ] let the pilot power as a function of the state be given by then for any , the solution to and satisfies the proof is given in appendix [ ap : strip ] .the theorem implies that for large the state moves along the free boundary .hence for the discrete - time system with large , the state is typically confined to a narrow strip around the free boundary .[ ssp ] given the pilot power control , the steady - state probability of training conditioned on the channel estimate is }{[\theta_{\epsilon}(u)]^2{\epsilon_{max}}},\ ] ] and the steady - state pdf of the channel estimate is the proof is given in appendix [ ap : ssp ] . from the average training power for the pilot power control scheme can be computed as ^ 2 } f_{\hat{\mu}}(u ) \ , du.\ ] ] therefore if is large enough so that for all , then neither the pdf nor the average training power depends on .this is because as increases , the probability of training , given by , decreases so that the average training power given , namely , remains unchanged , this overhead is reduced by increasing .however , for the diffusion approximation to be accurate , must be small , hence can not be too large . ] .in addition , we observe that is independent of the correlation parameter .now consider the case in which the free boundary is constrained to be vertical , that is , .this still corresponds to a switching policy , but where the variance of the channel estimation error is constrained to be a constant , independent of the channel estimate . from the steady - state pdf of exponential , i.e. , , and the average training power is . 
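the zig-zag behaviour described above is easy to reproduce numerically. the sketch below integrates the (mu_hat, theta) state with a simple euler-maruyama scheme under a bang-bang pilot policy that trains at eps_max whenever the state lies on the training side of a given boundary. the drift and variance used for mu_hat are taken from the moments quoted in the appendix; the equation used for theta is the standard kalman-bucy error-variance ode, d theta/dt = 2*rho*(sigma_h^2 - theta) - eps*theta^2/sigma_z^2, which is an assumed reconstruction consistent with the steady-state balances stated in the text rather than a formula copied from the paper.

    import numpy as np

    def simulate_state(T, dt, rho, sigma_h2, sigma_z2, eps_max, boundary, seed=0):
        """Euler-Maruyama simulation of the (mu_hat, theta) state under a
        bang-bang pilot policy: train at eps_max when theta >= boundary(mu_hat).

        Assumed dynamics (see the hedging in the text above):
          d mu_hat = (-2*rho*mu_hat + eps*theta**2/sigma_z2) dt
                     + theta*sqrt(2*mu_hat*eps/sigma_z2) dB
          d theta  = (2*rho*(sigma_h2 - theta) - eps*theta**2/sigma_z2) dt
        """
        rng = np.random.default_rng(seed)
        mu, th = sigma_h2, sigma_h2
        path = np.empty((int(T / dt), 2))
        for k in range(path.shape[0]):
            eps = eps_max if th >= boundary(mu) else 0.0
            drift = -2 * rho * mu + eps * th**2 / sigma_z2
            diffusion = th * np.sqrt(2 * mu * eps / sigma_z2)
            mu = max(mu + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(), 0.0)
            th += (2 * rho * (sigma_h2 - th) - eps * th**2 / sigma_z2) * dt
            path[k] = mu, th
        return path

    # example with a vertical boundary theta_eps(mu) = 0.6: the trajectory should
    # hug the boundary, and the histogram of mu_hat can be compared with the
    # steady-state pdf of the theorem above
    path = simulate_state(T=200.0, dt=1e-3, rho=1.0, sigma_h2=1.0, sigma_z2=1.0,
                          eps_max=5.0, boundary=lambda mu: 0.6)
    print("fraction of time spent training:", np.mean(path[:, 1] >= 0.6))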
now consider the _ constant _ pilot power control policy , where ( constant ) for all .substituting into and setting implies that the steady - state estimation error variance for all .in addition , the steady - state pdf of is exponential with mean .hence for a given average training power , constant pilot power can give _exactly _ the same estimation error and steady - state pdf as the switching policy with a vertical boundary .both schemes therefore achieve the same rate with a total power constraint ( ignoring overhead due to pilot insertion ) .we will see that the optimized boundary is _ not _ vertical , which implies that adaptive pilot power control can perform better than constant pilot power .we also observe that the same performance as the optimal switching policy can be achieved by _ continuously _ varying the training power as a function of .namely , taking gives the same steady - state pdf and training power as in and , respectively .however , this scheme corresponds to feeding back the pilot power as a sequence of real numbers , which in principle require infinite precision . in contrast, the switching policy can be implemented by fixing the training power and varying the rate at which pilot symbols are inserted .the transmitter therefore does not need to know the exact value of the channel estimate .more specifically , the optimal switching policy inserts pilots of power with probability ( or equivalently , once every coherence blocks ) when the channel estimate .this requires at most one bit per coherence block to inform the transmitter whether or not to train in the next block .( of course , the feedback can be substantially reduced by exploiting channel correlations . )the switching policy therefore requires fewer training symbols than continuous pilot power control , which requires a pilot symbol every coherence block .from the preceding discussion the optimal pilot policy is determined by the free boundary . herewe compute the free boundary by observing that this boundary must maximize the rate objective , assuming a switching policy for the pilot power .a difficulty is that the rate objective of interest is , whereas the steady - state probabilities in theorem [ ssp ] were derived in the limit as . in what follows we use the asymptotic probabilities in theorem [ ssp ] to approximate the steady - state probabilities corresponding to large but finite .simulation results have shown that the resulting free boundary is insensitive to the choice of in the objective .also , subsequent simulation results in section [ wf : num ] show that the analytical performance results accurately predict the performance of the corresponding discrete - time model with the optimal switching policy when is a few hundred . with the preceding approximation for large but finite optimal free boundary can be computed as the solution to the following functional optimization problem , f_{{\hat{\mu}}}(u ) du \\\text{subject to : } & \label{eq : pconst_ss } \int_{0}^{\infty } p(u ) f_{{\hat{\mu}}}(u)du + \int_{0}^{\infty } \epsilon(u ) \,f_{{\hat{\mu}}}(u)\,du \le \,p_{av},\\ \text{and } & \qquad \qquad \theta_{\epsilon}(u ) \ge\theta^\star \quad \text{for } \,\ , u \ge 0 , \end{aligned}\end{aligned}\ ] ] where }{[\theta_{\epsilon}(u)]^2} ] , that is , along the free boundary truncated at the threshold value . 
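a concrete, heavily simplified instance of this optimization is the vertical-boundary case, which the text shows is equivalent (before symbol overhead is counted) to constant pilot power. the sketch below searches over the constant error-variance level theta_bar, converts it to an average pilot power through the balance eps_bar = 2*rho*sigma_z^2*(sigma_h^2 - theta_bar)/theta_bar^2 (an assumed kalman-bucy steady state, consistent with the constant-power discussion above but not copied from the paper), and evaluates a standard rate lower bound that treats the estimation error as extra gaussian noise, with waterfilling over the exponential channel-estimate distribution stated above. it is meant only to show the mechanics of trading pilot power against data power, not to reproduce the paper's numbers.

    import numpy as np
    from scipy.optimize import brentq

    def rate_constant_pilot(theta_bar, p_av, rho=1.0, sigma_h2=1.0, sigma_z2=1.0,
                            n_mc=200_000, seed=0):
        """Average rate of the constant-pilot / vertical-boundary baseline.

        Assumptions (illustrative only): pilot power
        eps_bar = 2*rho*sigma_z2*(sigma_h2 - theta_bar)/theta_bar**2,
        exponential mu_hat with mean sigma_h2 - theta_bar, rate lower bound
        log2(1 + p*mu/(sigma_z2 + p*theta_bar)), and waterfilling data power
        spending the remaining budget p_av - eps_bar.
        """
        eps_bar = 2 * rho * sigma_z2 * (sigma_h2 - theta_bar) / theta_bar**2
        if eps_bar >= p_av:
            return 0.0                          # the pilot alone exhausts the budget
        mu = np.random.default_rng(seed).exponential(sigma_h2 - theta_bar, n_mc)
        data_power = lambda lam: np.maximum(1.0 / lam - sigma_z2 / mu, 0.0)
        lam = brentq(lambda l: data_power(l).mean() - (p_av - eps_bar), 1e-8, 1e8)
        p = data_power(lam)
        return np.mean(np.log2(1.0 + p * mu / (sigma_z2 + p * theta_bar)))

    grid = np.linspace(0.05, 0.95, 19)
    rates = [rate_constant_pilot(t, p_av=1.0) for t in grid]
    print("best theta_bar:", grid[int(np.argmax(rates))], "rate:", max(rates))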
since , implies that grows as .substituting into , we observe that the upper and lower bounds have the same asymptotic growth rate , so that the rate also has this growth rate , given by implies that . ] since maximizes the upper bound in , this is the growth rate of the achievable rate .we observe that this growth in achievable rate is the same as the growth in achievable rate for parallel rayleigh fading channels ( in frequency or time ) with a sum power constraint and perfect channel knowledge at the transmitter ( e.g. , see ) .this is because the coherence blocks correspond to separate degrees of freedom ( i.e. , the transmitter can choose whether or not to transmit over each block ) , and the number of coherence blocks increases linearly with . for our modelthe associated constant is , which accounts for channel estimation error , and depends on the channel correlation .this product therefore determines the shape of the free boundary .( note also that depends on the free boundary . )namely , choosing boundary points closer to reduces , but also reduces the harmonic mean , and vice versa .the optimal boundary balances and by shifting training power from small values of to larger values , as discussed previously in sec .[ wf : num ] .[ fig : fbdonoff ] shows free boundaries at different snrs obtained by solving the optimization problem numerically for and .also shown are the optimized vertical boundaries with on - off data power control .as with water - filling , the free boundary is shaped to save training power when is small ( high probability region ) and re - distribute it to the instances when is large ( low probability region ) .the boundaries shown here are more irregular , due to the discontinuous data power allocation .the shape of the boundary for is a straight line , but does not affect the objective since the rate depends on the harmonic mean for .[ fig : n200_cap ] shows plots of achievable rates versus snr with the optimized free and vertical boundaries and on - off data power control .plots corresponding to the optimal waterfilling data power allocation are also shown for comparison .these results show that the performance with the optimized on - off power allocation are nearly the same as with water - filling . also shownare the rates obtained via monte carlo simulations of the discrete - time system with the optimized boundary .those are again higher than the rates calculated from the diffusion model , whereas the simulated rates with the vertical boundary closely match the analytical results .so far we have ignored the time overhead due to the channel uses that are occupied by the training symbols . herewe restate the pilot power control problem taking this overhead into account . a switching policy for the pilot powerrequires that one of the channel uses in a coherence block is a training symbol whenever the transmitter is directed to train . 
if the channel estimate for the coherence block is , then the probability of training ( as discussed in sec .[ steadystate ] ) is given by , where .therefore the original optimization problem can be reformulated , taking the training overhead in to account , by replacing the rate objective with f_{{\hat{\mu}}}(u)du .\label{eq : thpt}\ ] ] of course , if either or is large , then the training symbol overhead is negligible and the problem reduces to .otherwise , the overhead term will influence the free boundary and ergodic rate .specifically , it will reduce the optimal training power ( so that the boundary shifts towards ) , since the overhead penalty is proportional to the training power .[ fig : n200_cap_tl ] shows plots of the rate objective in versus snr with optimized free and vertical boundaries . for this figure and , corresponding to a worst - case loss in throughput due to training overhead . also , and ( 11.76 db ) .the data power control is assumed to be on - off and only the analytical results ( obtained by maximizing ) are shown .( note that the channel state pdf is still given by theorem [ thm ] . ) at low snrs the average training power and associated overhead are small , so that taking the overhead into account does not significantly affect the rate . at high snrs ( around 10 db )the training overhead reduces the achievable rate by about with both the free and vertical boundaries .the percentage improvement provided by the free boundary relative to the vertical boundary remains approximately the same .a final remark is that when symbol overhead is taken into account , the throughput associated with the vertical switching policy is no longer the same as that associated with constant power control .that is because the switching policy requires on average channel uses for training every coherence block , whereas constant power control requires one channel use for training every coherence block .of course , this savings in overhead for the switching policy comes at the cost of feedback .we have studied achievable rates for a correlated rayleigh fading channel , where both the data and pilot power are adapted based on estimated channel gain . in low snr and fast fading scenarios the pilot power constitutes a substantial fraction of the total power budget , so that pilot power adaptation can provide a substantial gain in achievable rates . by taking a diffusion limit ,corresponding to low snrs ( or wideband channel ) and high correlation between consecutive channel realizations , several insights were obtained about the optimal pilot power control policy .namely , it was shown that a policy that switches between zero and peak training power is optimal , and that the training power should be reduced when the channel is bad and increased when the channel becomes good .the optimal policy in the diffusion limit was also explicitly characterized , and shown to provide a significant increase in achievable rate for low snrs and fast fading . 
for the discrete - time system of interestthe switching policy is equivalent to maintaining constant pilot symbol power , but inserting pilot symbols less frequently when the channel estimate is weak ( and vice versa ) .when combined with on - off data power control , this requires finite feedback , and achieves essentially the same performance with the optimal ( water - filling ) data power control .of course , the csi feedback required for optimal data and pilot power control can be substantially reduced by exploiting the correlation between successive coherence blocks .several modeling assumptions have been made , which could be relaxed in future work .for example , we have assumed that the receiver knows the statistical model of the channel . in practice ,the receiver may assume ( or estimate ) a model , such as , which is mismatched to the actual channel statistics .an issue then is how sensitive this overall performance is to this mismatch .also , the first - order rayleigh fading model might be replaced with other fading models ( e.g. , ricean , nakagami , and higher - order autoregressive models ) .additional issues may arise when considering other channel models .for example , here we have imposed a power constraint , which is averaged over many coherence blocks .the results can therefore be directly applied to parallel fading channels where the total power constraint is split among the channels . however , for a frequency - selective channel the total power summed over parallel channels might instead be constrained per coherence block .other extensions and applications of diffusion models to multi - input multi - output ( mimo ) and multiuser channels remain to be explored .substituting into and ignoring terms with higher power of , we obtain in the diffusion limit the noise can be modeled as , where is a standard complex brownian motion .hence as , the preceding equation becomes .substituting in gives replacing by in and substituting for gives combining and , ignoring the term , gives which becomes as .substituting for and replacing by , can be re - written as , \ ] ] where , is a zero mean unit variance cscg random variable independent of .the term is a cscg random variable with mean zero and variance ( \delta t)^2 ] , and can be re - written as , \ , dt + \mathbf{v}[\hat{h}_r(t ) , \hat{h}_j(t ) , \theta(t ) ] \mathbf{\bar{b}}(t)\ ] ] where the drift and variance are given by ^\dag \\\mathbf{v}(\hat{h}_r , \hat{h}_j , \theta ) & = & \text{diag } \left[\theta \sqrt{\frac{{\epsilon}}{2{\sigma_z^2 } } } , \,\ , \theta \sqrt{\frac{{\epsilon}}{2{\sigma_z^2 } } } , \,\ , 0\right]\end{aligned}\ ] ] respectively , the dependence on time is dropped for notational convenience , and the three entries of the vector are independent , real - valued , standard brownian motions . from ( * ? ?* theorem 5.2.1 ) , given and , the solution to exists and is continuous in provided that the following two conditions are satisfied : where for any matrix with entry , , and are constants .condition is called the linear dominance property and is called the lipschitz property . given , we have + 4\rho^2 \sigma_h^4 } + \sqrt{\frac{{\epsilon_{max}}\theta^2}{{\sigma_z^2}}}\\ & \le & \left(2\rho+\frac{{\epsilon_{max}}{\sigma_h^2}}{{\sigma_z^2}}\right ) \sqrt{\hat{h}_r^2 + \hat{h}_j^2+\theta^2 } + \sqrt{4\rho^2 \sigma_h^4}+ \sqrt{\frac{{\epsilon_{max}}\theta^2}{{\sigma_z^2}}}.\end{aligned}\ ] ] so that is satisfied . 
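for the reader's convenience, the two conditions invoked in this verification (whose displayed forms were lost in extraction) read, in the standard statement of the cited existence-and-uniqueness theorem for a stochastic differential equation dX = b(X) dt + V(X) dB_t, as the linear-growth ("linear dominance") and lipschitz bounds

    |\mathbf{b}(x)| + |\mathbf{V}(x)| \le C\,(1 + |x|),
    \qquad
    |\mathbf{b}(x)-\mathbf{b}(y)| + |\mathbf{V}(x)-\mathbf{V}(y)| \le D\,|x - y|,

with the euclidean / frobenius norms and constants C, D; the computations in this appendix check exactly these two bounds for the drift and diffusion coefficients written above.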
similarly for have ^ 2(\theta_1-\theta_2)^2 } + \sqrt{\frac{{\epsilon_{max}}}{{\sigma_z^2}}(\theta_1-\theta_2)^2}.\end{gathered}\ ] ] so that is satisfied . since the solution to and , , is a continuous function of , it must also be continuous in .we first rewrite the discrete - time bellman equation as - v(\hat{\mu } , \theta ) \right\},\ ] ] where - v(\hat{\mu } , \theta ) = \int_0^{\infty}\ , \left[v(u , \theta_{i+1 } ) - v(\hat{\mu } , \theta ) \right]\ , f_{\hat{\mu}_{i+1 } | s_i}(u ) du .\ ] ] assuming that is a continuous and smooth function , we can expand around via the taylor series \\ + \textrm{higher - order terms}\end{gathered}\ ] ] where all the derivatives are computed at . as stated in sec .[ discrete ] , conditioned on is ricean , so that where is the zeroth - order modified bessel function of the first kind and where and are given by and , respectively , with replaced by .the first two moments are ( * ? ? ?2 ) , & = & r^2\,\hat{\mu } + { \sigma_o}^2\\ e[\hat{\mu}_{i+1}^2 | ( \hat{\mu},\theta ) ] & = & 2 { \sigma_o}^4\,\left[1 + 2 \bigg ( \frac{r^2 \hat{\mu}}{{\sigma_o}^2 } \bigg ) + \frac{1}{2 } \bigg ( \frac{r^2 \hat{\mu}}{{\sigma_o}^2 } \bigg)^2 \right ] \label{eq : mom2 } \ ] ] next we take the diffusion limit . substituting in and replacing by gives making these substitutions in - gives & = & \left [ - 2 \rho \hat{\mu } + { \theta}^2 \frac{\epsilon}{{\sigma_z}^2}\right ] \delta t + o(\delta t^2)\\ e[({\hat{\mu}}_{i+1 } - \hat{\mu})^2 | ( \hat{\mu},\theta ) ] & = & \left [ 2 \hat{\mu } { \theta}^2 \frac{\epsilon}{{\sigma_z}^2}\right ] \delta t + o(\deltat^2 ) \ ] ] it is easily shown that the higher - order moments \leq o(\delta t^2) ] . thus given condition , we have whenever . however , this contradicts the fact that .therefore we can not have a , which implies that for any .next we show that . for any continuous and twice differentiable function we must have ( * ? ? ?7 ) \right\ } = 0,\ ] ] where the expectation is over the steady - state distribution of the state .the generator ] denotes the expectation over given that the estimate is and .choosing such that ] is given by with replaced by .choosing in to be a function of only and applying proposition [ lm : strip ] gives = 0\ ] ] where .\ ] ] next we observe that a.e .in the set . if this were not the case , then since is a one - to - one function , we could choose such that , which would make the left - hand side of strictly positive .therefore setting gives the steady - state probability of training given shown in .we now solve for the steady - state pdf .choosing , a continuous and twice differentiable function of only , and applying the generator gives = - 2\rho { \hat{\mu}}w_2'({\hat{\mu } } ) + \left[w_2'({\hat{\mu } } ) + { \hat{\mu}}w_2''({\hat{\mu}})\right ] \,\frac{\epsilon \,\theta^2}{{\sigma_z^2}}.\ ] ] the necessary condition can now be written as {{\hat{\mu}}}(u ) du = 0,\ ] ] where ^ 2{\epsilon_{max}}}{{\sigma_z^2 } } p(u ) \quad \textrm{and } \quad d(u ) = u \frac{[\theta_{\epsilon}(u)]^2{\epsilon_{max}}}{{\sigma_z^2 } } p(u).\ ] ] we can further choose to satisfy the following properties : }{du } w_2(u ) = 0\end{aligned}\ ] ] and using integration by parts we can re - write as - \frac{d}{du}[c(u)f_{{\hat{\mu}}}(u)]\right ) du = 0.\ ] ] since this condition must be satisfied for any such , we have - \frac{d}{du}[c(u)f_{{\hat{\mu}}}(u ) ] = 0 , \qquad \textrm{a.e . }\,\,\,u \ge 0.\ ] ] substituting into gives the differential equation {{\hat{\mu}}}(u)\bigg\ } = 0 , \quad \textrm{a.e . 
}\,\,\,u \ge 0\ ] ] which can be further simplified as \frac{df_{{\hat{\mu}}}(u)}{du } + 2\rho u \left(1 - \frac{d \theta_{\epsilon}(u)}{du}\right)f_{{\hat{\mu}}}(u ) + k = 0,\ ] ] where is a constant .this is a first - order ordinary differential equation with solution \int_{0}^{u}\frac{\exp[i(t)]}{2\rho\left[{\sigma_h^2}-\theta_{\epsilon}(t)\right ] t } dt + k_1 \exp[-i(u)],\ ] ] where - \log[{\sigma_h^2}-\theta_{\epsilon}(0)]\end{aligned}\ ] ] and is another constant , which needs to be determined .since is a pdf , we must have , which implies .this is because the first integral in is unbounded , that is , }{2\rho({\sigma_h^2}-\theta_{\epsilon})t } dt \ge \int_0 ^ 1\frac{1}{2\rho{\sigma_h^2}t } dt = \infty.\ ] ] in addition we must have , which implies . substituting these values into gives .first we fix the free boundary and optimize the data power allocation . for any setting the derivative of the objective function with respect to to zerogives the optimal power allocation . substituting this into and taking the derivative with respect to gives the optimality condition \cdot \frac{\partial p^\star({\hat{\mu}})}{\partial \theta_{\epsilon}({\hat{\mu } } ) } + \frac{\partial l_{\lambda}}{\partial \theta } [ p^\star({\hat{\mu } } ) , { \hat{\mu } } , \theta_{\epsilon}({\hat{\mu } } ) ] \right]f_{{\hat{\mu}}}({\hat{\mu } } )= \\ - l_{\lambda}[p^\star({\hat{\mu } } ) , { \hat{\mu } } , \theta_{\epsilon}({\hat{\mu } } ) ] \frac{f_{{\hat{\mu}}}({\hat{\mu}})}{{\sigma_h^2}- \theta_{\epsilon}({\hat{\mu } } ) } + \frac{1}{[{\sigma_h^2}-\theta_{\epsilon}({\hat{\mu}})]^2}\int_{{\hat{\mu}}}^{\infty } l_{\lambda}[p^\star(v ) , v , \theta_{\epsilon}(v ) ] f_{{\hat{\mu}}}(v ) dv.\end{gathered}\ ] ] note that for so that .for we have = 0 $ ] . therefore reduces to with replaced by .the additional constraint implies .we first observe that can be written as the _ variational inequality _ a solution to is a solution to and vice versa .now consider the following optimization problem , d\theta du \nonumber\\\ ] ] where for and . if , , and , then the solution to is a solution to . also , a solution to with zero objective value is a solution to .the second term in the objective function is included to regularize the numerical solution .the effect of this term can be controlled by changing the weights and .these weights affect both the accuracy of the results and also the rate at which the non - linear optimization algorithm converges .the training region is where .therefore the free boundary can be obtained by solving numerically given values for and .
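as an illustration of the kind of numerical treatment such free-boundary (variational-inequality) problems admit, the sketch below applies projected gauss-seidel relaxation to the simplest one-dimensional obstacle problem and reads off the contact set, whose edges are the discrete analogue of a free boundary. this is a generic textbook example, not the paper's two-dimensional problem, which the authors instead attack through the regularized optimization described above.

    import numpy as np

    def projected_gauss_seidel(psi, n_iter=5000):
        """Projected Gauss-Seidel for a 1-d obstacle problem: the smallest
        concave v with v(0) = v(1) = 0 and v >= psi, i.e.
            -v'' >= 0,   v >= psi,   (-v'') * (v - psi) = 0.
        Generic illustration of projected relaxation for variational
        inequalities; not the paper's own (two-dimensional) problem.
        """
        v = np.maximum(psi, 0.0)
        v[0] = v[-1] = 0.0
        for _ in range(n_iter):
            for i in range(1, len(v) - 1):
                v[i] = max(0.5 * (v[i - 1] + v[i + 1]), psi[i])   # relax, then project
        return v

    x = np.linspace(0.0, 1.0, 101)
    psi = 0.2 - 4.0 * (x - 0.5) ** 2            # obstacle: a parabolic bump
    v = projected_gauss_seidel(psi)
    contact = np.isclose(v, psi)
    contact[[0, -1]] = False
    print("contact set (between the free-boundary points):",
          x[contact].min(), "-", x[contact].max())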
we consider data transmission through a time - selective , correlated ( first - order markov ) rayleigh fading channel subject to an average power constraint . the channel is estimated at the receiver with a pilot signal , and the estimate is fed back to the transmitter . the estimate is used for coherent demodulation , and to adapt the data and pilot powers . we explicitly determine the optimal pilot and data power control policies in a continuous - time limit where the channel state evolves as an ornstein - uhlenbeck diffusion process , and is estimated by a kalman filter at the receiver . the optimal pilot policy switches between zero and the maximum ( peak - constrained ) value ( `` bang - bang '' control ) , and approximates the optimal discrete - time policy at low signal - to - noise ratios ( equivalently , large bandwidths ) . the switching boundary is defined in terms of the system state ( estimated channel mean and associated error variance ) , and can be explicitly computed . under the optimal policy , the transmitter conserves power by decreasing the training power when the channel is faded , thereby increasing the data rate . numerical results show a significant increase in achievable rate due to the adaptive training scheme with feedback , relative to constant ( non - adaptive ) training , which does not require feedback . the gain is more pronounced at relatively low snrs and with fast fading . results are further verified through monte carlo simulations . limited - rate feedback , gauss - markov channel , channel estimation , adaptive training , wideband channel , diffusion approximation , free boundary problems , bang - bang control , variational inequalities .
peeling is a kind of fracture that has been studied experimentally in the context of adhesion and is a technologically important subject .experimental studies on peeling of an adhesive tape mounted on a cylindrical roll are usually in constant pull speed condition .more recently , constant load experiments have also been reported .early studies by bikermann , kaeble have attempted to explain the results by considering the system as a fully elastic object .this is clearly inadequate as it ignores the viscoelastic nature of the glue at the contact surface and therefore can not capture many important features of the dynamics .the first detailed experimental study of maugis and barquins show stick - slip oscillations within a window of pull velocity with decreasing amplitude of the pull force as a function of the pull velocity .further , these authors report that the pull force shows sinusoidal , sawtooth and highly irregular ( chaotic as these authors refer to ) wave patterns with increasing velocities . more recently ,gandur _ et al ._ have carried out a dynamical time series analysis of the force waveforms , as well as those of acoustic emission signals and report chaotic force waveforms at the upper end of the pull velocities .one characteristic feature of the peeling process is that the experimental strain energy release rate shows two stable branches separated by an unstable branch .stick - slip behavior is commonly observed in a number of systems such as jerky flow or the portevin - le chatelier ( plc ) effect , frictional sliding , and even earthquake dynamics is thought to result from stick - slip of tectonic plates .stick - slip is characterized by the system spending most part of the time in the stuck state and a short time in the slip state , and is usually seen in systems subjected to a constant response where the force developed in the system is measured by dynamically coupling the system to a measuring device .one common feature of such systems is that the force exhibits negative flow rate characteristic " ( nfrc ) .models which attempt to explain the dynamics of such systems use the macroscopic phenomenological nfrc feature as an input , although the unstable region is not accessible .this is true for models dealing with the dynamics of the adhesive tape as well .to the best of our knowledge , there is no microscopic theory which predicts the origin of the nfrc macroscopic law except in the case of the plc effect ( see below ) . as there is a considerable similarity between the peeling of an adhesive tape and the plc effect , it is useful to consider the similarities in some detail .the plc effect refers to a type of plastic instability observed when samples of dilute alloys are deformed under constant cross head speeds .the effect manifests itself in the form of a series of serrations in a range of applied strain rates and temperatures .this feature is much like the peeling of an adhesive tape .other features common to these two situations are : abrupt onset of the large amplitude oscillations at low applied velocities with a gradually decreasing trend and nfrc , which in the plc effect refers to the existence of negative strain rate sensitivity of the flow stress . 
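the "negative flow rate characteristic" invoked here is easiest to picture with a toy force law: a non-monotonic ("n"-shaped) dependence of the force on the flow rate, with two rising branches separated by a falling branch on which steady flow is unstable. the cubic below is a hypothetical parameterization chosen only to display that shape; it is not the measured peel-force law nor the function used in any of the models cited in this paper.

    import numpy as np

    def n_shaped_force(v):
        """Toy 'N'-shaped force-versus-rate law (arbitrary units).

        Hypothetical cubic with two rising (stable) branches separated by a
        falling (unstable) branch for roughly 0.6 < v < 1.4; used only to
        illustrate the negative-flow-rate-characteristic shape.
        """
        return v**3 - 3.0 * v**2 + 2.5 * v + 0.5

    v = np.linspace(0.0, 2.5, 500)
    slope = np.gradient(n_shaped_force(v), v)
    unstable = v[slope < 0]
    print("falling (unstable) branch spans v in [%.2f, %.2f]"
          % (unstable.min(), unstable.max()))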
in the case of the plc effect , the physical origin of the negative strain rate sensitivity is attributed to the ageing of dislocations and their tearing away from the cloud of solute atoms .recently , the origin of the negative srs has been explicitly demonstrated as arising from competing time scales of pinning and unpinning in the ananthakrishna s model . in the case of adhesive tape, the origin of nfrc can be attributed to the viscoelastic behavior of the fluid .( constant load and constant load rate experiments are possible in the plc also . )while simple phenomenological models based on nfrc explain the generic features of the plc effect , there appears to be some doubts if the equations of motion conventionally used in the present case of peeling are adequate to describe the velocity jumps .indeed , these equations of motion are singular and pose problems in the numerical solutions . apart from detailed experimental investigation of the peeling process , maugis and barquins , have also contributed substantially to the understanding of the dynamics of the peeling process . however , the first dynamical analysis is due to hong and yue who use an n " shaped function to mimic the dependence of the peel force on the rupture speed .they showed that the system of equations exhibits periodic and chaotic stick - slip oscillations .however , the jumps in the rupture speed are introduced _ externally _ once the rupture velocity exceeds the limit of stability .thus , the stick - slip oscillations are _ not _ obtained as a natural consequence of the equations of motion .therefore , in our opinion the results presented in ref . are the artifacts of the numerical procedure followed .et al . _ interpret the stick - slip jumps as catastrophes .again , the belief that the jumps in the rupture velocity can not be obtained from the equations of motion appears to be the motivation for introducing the action of discrete operators on the state of the system to interpret the stick - slip jumps , though they do not demonstrate the correctness of such a framework for the set of equations .lastly , there are no reports that explain the decrease in the amplitude of the peel force with increasing pull speed as observed in experiments ._ as there is a general consensus that these equations of motion correctly describe the experimental system , a proper resolution of this question ( on the absence of dynamical jumps in these equations ) assumes importance_. the purpose of this paper is to show that the dynamics of stick - slip during peeling can be explained using a differential - algebraic scheme meant for such singular situations and demonstrate the rich dynamics inherent to these equations . in what followswe first derive the equations of motion ( used earlier ) by introducing an appropriate lagrangian for the system .then , we use an algorithm meant to solve differential - algebraic equations and present the results of our simulations for various parameter values .one of our major findings is that inertia has a strong influence on the dynamics .in addition , following the dynamization scheme similar to the one used in the context of the plc effect , we suggest that the peel force depends on the applied velocity . using this form of peel force leads to the decreasing nature of the magnitude of the pull force as a function of applied velocity . 
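as an aside, the dynamization idea described above is easy to illustrate numerically. the short sketch below (in python) defines a generic n-shaped peel-force law whose gap between the extrema shrinks as the pull velocity grows; the functional form and the numbers are stand-ins chosen only for illustration and are not eq. ([fvv]) or the measured curve.

```python
# illustrative "dynamized" peel-force law: an n-shaped curve in the peel
# velocity v whose unstable gap shrinks as the pull velocity V grows.
# this is a generic stand-in, not the form used in the paper.
def f_peel(v, V, gap0=1.0, V0=1.0):
    amp = gap0 / (1.0 + V / V0)          # gap between the extrema decreases with V
    return amp * ((v - 2.0)**3 - 3.0 * (v - 2.0)) + 4.0

for V in (0.1, 1.0, 4.0):
    gap = f_peel(1.0, V) - f_peel(3.0, V)   # local maximum minus local minimum
    print(f"V = {V}: gap between the extrema = {gap:.3f}")
```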
for certain values of the inertia, we find canard-type solutions. these numerical results are captured to a reasonable accuracy using a set of approximations valid in different regimes of the parameter space. even though our emphasis is on demonstrating the correctness of these equations of motion and the richness of the inherent dynamics that captures the qualitative features of the peeling process, we also attempt, to the extent possible, a comparison with the experimental results mentioned above.

for the sake of completeness, we start by considering the geometry of the experimental setup, shown schematically in fig. [fig1]. an adhesive roll is mounted on an axis normal to the plane of the paper and is pulled at a constant velocity by a motor with a force acting along the pulled segment of the tape. the relevant lengths are the distance from the motor to the axis and that from the motor to the contact point. the contact point moves with a local velocity which can undergo rapid bursts during rupture. the force required to peel the tape is usually called the force of adhesion. the two measured branches referred to earlier are those of this force as a function of the peel velocity in a steady-state situation of constant pulling velocity (i.e., there are no accelerations). the pulled segment of the tape makes an angle with the tangent at the contact point, and the contact point subtends an angle with the horizontal at the axis. the remaining quantities are the elastic constant of the adhesive tape, the elastic displacement of the tape, the angular velocity of the roll and its moment of inertia; the angular velocity itself is fixed by the kinematics of the unwinding roll. the geometry of the setup relates these angles and lengths, and the total velocity at the contact point is then made up of three contributions: the pull velocity, the rate of change of the elastic displacement and the contribution from the rotation of the roll.

following standard methods in mechanics, it is straightforward to derive the equations of motion using an appropriate set of generalized coordinates. the corresponding lagrangian of the system can be written as a quadratic kinetic term minus the stored elastic energy $\frac{k}{2}u^{2}$ (eq. [lagran]). we write the dissipation function in terms of a quantity that physically represents the peel force, which we assume to depend on the rupture speed as well as the pull speed and to be derivable from a potential function. the physical origin of this dependence is the competition between the internal relaxation time scale of the viscoelastic fluid and the time scale determined by the applied velocity. when the applied velocity is low, there is sufficient time for the viscoelastic fluid to relax; as we increase the applied velocity, relaxation of the fluid becomes increasingly difficult and it behaves much like an elastic substance. the effect of competing time scales is well represented by the deborah number, the ratio of the time scale for structural relaxation to the characteristic time scale for deformation. indeed, in studies on a hele-shaw cell with mud as the viscous fluid, one observes a transition from viscous fingering to viscoelastic fracturing with increasing rate of invasion of the displacing fluid. as stated in the introduction, the existing models do not explain the decreasing amplitude of the pull force. a similar feature observed in the plc serrations has been modeled using a scheme referred to as dynamization of the negative strain rate sensitivity (srs) of the flow stress, regarded as a function of the plastic strain rate. based on arguments similar to those in the preceding paragraph, this function is modified to depend on the applied strain rate as well; i.e.
, the negative srs of the flow stress is taken to be such that the gap between its maximum and its minimum decreases with increasing applied strain rate. following this, we consider the peel force to depend on the pull speed as well, in such a way that the gap between the extrema of the peel-force curve decreases as a function of the pull speed (fig. [fig2]). using the lagrange equations of motion, we obtain the same set of ordinary differential equations as in ref. , eqs. ([flow1])-([flow3]), together with an algebraic constraint, eq. ([constr]). (the last equation results from the elimination of two second-order equations.) in eqs. ([flow2]), ([flow3]) and ([constr]) we have used the definitions introduced above. while eqs. ([flow1])-([flow3]) are differential equations, eq. ([constr]) is an algebraic constraint, necessitating the use of a differential-algebraic scheme to obtain the numerical solution.

the fixed point of eqs. ([flow1]), ([flow2]), ([flow3]) and ([constr]) corresponds to steady peeling at the imposed pull velocity. (for the numerical solution, we have actually used a substitution for one of the variables in the above equations.) this point is stable when the slope of the peel-force curve at the operating point is positive and unstable when it is negative. as the operating point is moved so that this sign changes from negative to positive, the system undergoes a hopf bifurcation and a limit cycle appears. the limit cycles reflect the abrupt jumps between the two positive-slope branches of the peel-force function. the singular nature of these equations becomes clear if one considers the differential form of eq. ([constr]), $\dot{v} \simeq [\dot{f}(1+\alpha)+f\dot{\alpha}]/f^{\prime}$ (eqs. [vdot] and [vdotapprox]), where the prime denotes the derivative with respect to the peel velocity. equation ([vdot]) together with eqs. ([flow1])-([flow3]) constitutes the full set of evolution equations for the state vector; however, it is clearly singular at the points of extremum of the peel-force curve, requiring an appropriate numerical algorithm.

we note that eqs. ([flow1]), ([flow2]), ([flow3]) and ([constr]) can be written compactly as eq. ([xdae]), in which a vector function governs the evolution of the state vector and the coefficient matrix multiplying the time derivatives is a singular ``mass matrix''. equation ([xdae]) is a differential-algebraic equation (dae) and can be solved using the so-called singular perturbation technique, in which the singular matrix is perturbed by adding a small constant such that the singularity is removed. the resulting equations can then be solved numerically and the limit solution obtained as the perturbation goes to zero. we have checked the numerical solutions for values of the perturbation spanning several orders of magnitude in some cases, and the results do not depend on its value as long as it is small; the results presented below, however, are for a single small value. we have solved eq. ([xdae]) using a standard variable-order stiff solver, the matlab ode15s program. we have parametrized the form of the peel force, eq. ([fvv]), so as to give values of the extrema of the peel velocity that mimic the general form of the experimental curves. the measured strain energy release rate from stationary-state measurements, and the decreasing gap between the maximum and minimum of the peel-force curve with increasing pull velocity, are shown in fig. [fig2]. [the parameter values could not be determined exactly, as the measured quantity is in j/m and more details would be required; however, the value used is close to that of ref. , and the jumps are similar to those in experiments.]
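before describing the results, we note that the singular-perturbation treatment of eq. ([xdae]) sketched above is straightforward to reproduce. the example below integrates a two-variable caricature of the model (a pull force built up at the imposed velocity, with the peel velocity tied to an n-shaped force law) after regularizing the singular mass matrix by a small parameter; the equations and numbers are illustrative stand-ins rather than eqs. ([flow1])-([constr]), and scipy's stiff radau integrator plays the role of the matlab ode15s solver used here.

```python
from scipy.integrate import solve_ivp

# two-variable caricature of the singularly perturbed dae: the actual model has
# three odes plus the algebraic constraint; only that structure is kept here.
# the mass matrix diag(1, eps) regularizes the singular case, and the
# constraint F = f_peel(v) is recovered in the limit eps -> 0.
def f_peel(v):
    return (v - 2.0)**3 - 3.0 * (v - 2.0) + 4.0      # n-shaped toy peel force

def rhs(t, x, eps=1e-4, k=1.0, V=2.0):
    F, v = x
    dF = k * (V - v)                  # force builds up at the imposed pull speed
    dv = (F - f_peel(v)) / eps        # eps * dv/dt = F - f_peel(v)
    return [dF, dv]

sol = solve_ivp(rhs, (0.0, 60.0), [f_peel(1.0), 1.0],
                method="Radau", rtol=1e-8, atol=1e-10)
# F(t) traces a sawtooth while v(t) jumps between the two stable branches of
# f_peel, the stick-slip behaviour discussed in the text.  shrinking eps
# further does not change the solution appreciably, as reported for the model.
```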
the reason for using the form given by eq. ([fvv]) is that the effects of dynamization are easily included through its dependence on the pulling velocity, while more complicated terms would be required to mimic the experimental curve completely (particularly its flat portion). however, we stress that the trend of the results remains unaffected when the actual experimental curve is used, except for the magnitude of the velocity jumps and the force values.

[fig. 3: (a) a phase-space trajectory, with the corresponding stationary peel-force curve shown by a solid line; (b) a phase-space trajectory at a slightly higher pull velocity; (c) a plot of the pull force; (d) a plot of the pull force (period 4). units: velocities in m/s, force in n, inertia in kg m^2 and time in s.]

we have studied the dynamics of this system of equations for a wide range of parameter values. we have found that the transients in some regions of parameter space take a considerable time to die out; the results reported here are obtained after these long transients are discarded. the equations exhibit rich dynamics, some of it unanticipated. here we report typical results for two important parameters, namely the pull velocity (in m/s) and the inertia (in kg m^2), keeping the elastic constant of the tape (in n/m) and the geometric lengths (in m) fixed; the influence of the elastic constant will also be mentioned briefly. (henceforth, we drop the units for the sake of brevity.) we find that the observed jumps of the orbit in the phase plane occur in a fully dynamical way. more importantly, we find all three possibilities: the orbit can jump when it approaches the limit of stability, before reaching it, or beyond the range permitted by the stationary peel-force curve. the dynamics can be broadly classified into low, intermediate and high regimes of inertia.

(i) low inertia. here also, there are three regimes: low, intermediate and high pull velocity.

(a) consider keeping the inertia at a low value and the pull velocity also at a low value, near the top of the low-velocity branch. here we observe a regular sawtooth form for the pull force. the phase plot is as shown in fig. [fig3](a); the corresponding stationary curve is also shown by the continuous line. we see that the trajectory jumps almost instantaneously from the low-velocity branch to the high-velocity branch on reaching the maximum of the stationary curve (and back again when it reaches the minimum). the system spends considerably more time on the low-velocity branch than on the high-velocity one. however, this feature of the trajectory jumping at the limit of stability holds only for small values of the inertia and when the pull velocity is near the limit of stability. at slightly higher pull velocities, even for small inertia, the jumps occur before the top or bottom is reached (the points marked on the curve), as can be seen from fig. [fig3](b).
the small-amplitude, high-frequency oscillations seen in the phase plots [figs. [fig3](a) and [fig3](b)] on the low-velocity branch are due to the inertial effect, i.e., the finite value of the inertia. these oscillations are better seen in the pull-force plot shown in fig. [fig3](c). for these values of the parameters, the system is aperiodic.

(b) as we increase the pull velocity, even as the sawtooth form of the pull force is retained, various types of periodic orbits [a period-4 orbit is shown in fig. [fig3](d)] as well as irregular orbits are seen. in both cases (periodic as well as chaotic) the trajectory jumps from the high-velocity branch to the low-velocity branch before traversing the entire branch, or sometimes goes beyond the values permitted by the stationary curve. the value of the force at which the orbit jumps is different for different cycles. for small inertia at high pull velocity, the phase plot is periodic.

[fig. 4: (a) a phase-space trajectory for a single cycle, with the corresponding stationary curve shown by a thick solid line; (b) the corresponding time trace; (c) the pull force (period 8); (d) the peel velocity. units: velocities in m/s, force in n, inertia in kg m^2 and time in s.]

(ii) intermediate and high inertia.

(a) as the results for small pull velocity are similar for intermediate and high inertia, we illustrate them for one representative pair of values. the phase plot, the pull force and the peel velocity are shown in figs. [fig4](a)-[fig4](d). consider fig. [fig4](a), showing a typical phase-space trajectory for a single cycle; the corresponding stationary function is also shown by the thick continuous curve. we see that the maximum (and minimum) value of the force is larger (smaller) than that allowed by the stationary curve. [this feature holds when the inertia is in the intermediate regime also, though the maxima (minima) of the force are then not significantly larger (smaller) than the extrema of the stationary curve.] when the trajectory jumps from the low-velocity branch to the high-velocity branch at the highest value of the force for the cycle, it stays on the high-velocity branch for a significantly shorter time than in the small-inertia case and jumps back well before the force has reached the minimum of the stationary curve. the pull force then cascades down through a series of back-and-forth jumps between the two branches till the lowest value of the force for the cycle is reached; note that at that point the force is less than the minimum of the stationary curve. for the sake of clarity, two different portions of the trajectory, corresponding to the top and bottom regions of the plot, are marked; the corresponding points are also identified on the pull-force plot. after reaching the lowest point, the orbit jumps to the low-velocity branch and then moves up all the way till the force reaches a maximum value (larger than the maximum of the stationary curve) without jumping to the high-velocity branch. this part of the pull force as a function of time, which is nearly linear on the low-velocity branch (the segment between the two marked points), displays a noticeable sinusoidal modulation. the sinusoidal form is better seen in fig. [fig4](b).
note that the successive drops in the pull force are of increasing magnitude. the jumps between the two branches in the phase plane are seen as bursts of the peel velocity [fig. [fig4](d)]. for these values of the parameters, the system is periodic.

(b) as we increase the pull velocity, the sinusoidal nature of the pull force (and of the other dynamical variables) becomes clearer, with its range becoming larger and reaching a nearly sinusoidal form at high pull velocity for large inertia. [the range seen in fig. [fig4](c) expands; compare fig. [fig5](a).] the magnitude of the force drops on the low-velocity branch, for small pull velocity and moderately large or large inertia, gradually decreases with increasing pull velocity; the magnitude of the drops itself decreases as the inertia is increased. in the limit of large inertia and large pull velocity, the drops in the pull force and in the other variables become quite small and are now located near the maxima and minima of these curves. this is shown in figs. [fig5](a) and [fig5](b). the sinusoidal nature is now obvious even in the other variables, unlike for smaller inertia and pull velocity, where it is clear only in the pull force on the low-velocity branch. note that for these parameter values the dynamized peel-force curve is nearly flat. this induces certain changes in the phase plot that are not apparent in the pull force or the peel velocity: the jumps between the two branches are now concentrated in a dense band at low and high values of the force. in this case, the maximum (minimum) value of the force is significantly larger (smaller) than that allowed by the stationary curve. these rapid jumps between the branches manifest themselves as jitter at the top and bottom of the pull-force and peel-velocity curves.

[fig. 5: (a) the pull force; (b) the corresponding plot of a second dynamical variable, with an inset showing an expanded view of its decreasing trend; (c) the corresponding phase-space trajectory, which reflects the chaotic nature of the dynamics; (d) the peel velocity. units: velocities in m/s, force in n, inertia in kg m^2 and time in s.]

unlike for small pull velocity [fig. [fig4](a)], the nature of the trajectory in fig. [fig5](c) is different. after reaching a critical value of the force near its maximum (the first marked point), the orbit spirals upwards and then descends till another critical value (the second marked point) is reached. having reached it, the orbit comes down monotonically till the point where it jumps to the ab branch. beyond this point, it again spirals upwards till the next marked point is reached; thereafter, the force increases monotonically till the starting point is reached again. two of the marked regions are those where the pull force shows a nearly sinusoidal form; the other two are those where the orbit jumps rapidly between the branches. these jumps manifest themselves as bursts of the peel velocity, which tend to bunch together almost into a band. [compare fig. [fig4](d) with fig. [fig5](d).]
it is interesting to note that the jumps between the two branches occur exactly at points where , even when the maximum ( minimum ) of are higher ( lower ) than that allowed by the stationary curve .the variables are aperiodic for the set of parameters .the phase plots appear to be generated by an effective that is being cycled .[ this visual feeling is mainly due to the fact that jumps between the branches still occur at the maximum and minimum of . ]the influence of is generally to increase the range of the pull force as can be easily anticipated and to decrease the associated time scale. it may be desirable to comment on the similarity of the nature of the force waveforms displayed by the model equations with those seen in experiments .as mentioned in the introduction , apart from qualitative statements on the waveforms in ref . ( such as periodic , sawtooth etc ., which are seen in the model as well ) , it should be stressed that there is a paucity of quantitative characterization of the waveforms . in this respect , the study by gandur _ et al . _ fills the gap to some extent .these authors have carried out a dynamical analysis of the time series for various values of the pull velocities ( for a fixed value of the inertia corresponding to their experimental roller tape geometry ) . in order to compare this result ,we have calculated the largest lyapunov exponent for a range of values of and .the region of chaos is in the domain of small pull velocities when is small .the maximum lyapunov exponent turns out to be rather high , typically around 7.5 bits / s in contrast to the small values reported in ref .the large magnitude of the positive exponent in our case can be traced to the large changes in the jacobian , as varies over several order of magnitude( ) as a function of the peeling velocity and hence as a function of time .in contrast , hong _ et al . _ use an shaped curve where is constant ( and small ) on both low and high branches .however , these large values of lyapunov exponents are consistent with rather high values obtained by gandur _ from time series analysis of the pull force .we also find chaos for intermediate and high inertia in the region of high velocities where the value of the lyapunov exponent is small , typically 0.5 .the small value here again can be traced to the small changes in at high velocities .it must be mentioned that comparison with experiments is further complicated due to the presence of a two parameter family of solutions strongly dependent on both and .thus , the phase diagram is complicated , i.e. , the sequence of solutions encountered in the - plane as we change or or both does not in general display any specific ordering of periodic and chaotic trajectories ( see fig . 1 of ref . ) usually found in the well known routes to chaos .( for instance periods should be observed before the odd periods . ) indeed , in our model , we find the odd periods 3,5,7 etc , on increasing ( or ) , without seeing all the periods .( these odd periods also imply chaos at parameter values prior to that corresponding to these periods . ) in view of this , a correct comparison with experiments requires an appropriate cut in the plane consistent with the experimental values of and even where they are given .however , as the values of are not provided , full mapping of chaotic solutions is not possible .( we also note that gandur _ use a different tape from that used in ref . , as is clear from the instability range , leading additional difficulties in comparison . 
)one quantitative result that can be compared with experiment is the decreasing trend of the force drop magnitude .we have calculated the magnitude of the force drops during stick - slip phase as a function the pull velocity for both low ( ) and high ( ) inertia cases .figure [ fig6 ] shows the monotonically decreasing trend of average as is increased , for both small and large , a feature observed in experiments .these two distinct behaviors are a result of the dynamization of as in eq .( [ fvv ] ) . as a function of the pull speed , for two distinct values of .the dashed line corresponds to while the dotted line corresponds to .( , are in m / s , in n , in kg m and in s. ) , width=302,height=188 ] finally , as another illustration of the richness of the dynamics seen in our numerical simulations , we show in fig .[ fig7 ] , a plot of an orbit that sticks to unstable part of the manifold before jumping back to the branch .such solutions are known as canards .though canard type of solutions are rare , we have observed them for high values of and low values of . in our case ,such solutions are due to the competition of time scale due to inertia and that due to .this again illustrates the influence of inertia of the roll on the dynamics of peeling .it is clear that these equations exhibit rich and complex dynamics .a few of these features are easily understandable , but others are not .for instance , the saw - tooth form of for low inertia and low pull velocity can be explained as resulting from the trajectory sticking to stable part of and jumping only when it reaches the limit of stability . for these parameter values , as the time spent by the system is negligible during the jumps between the branches and ( and vice versa ) , the system spends most of the time on the branch and much less on due to its steep nature .then , from eq .( [ flow3 ] ) , it is clear that we should find a sawtooth form whenever the peel velocity jumps across the branch to a value of larger than the pull velocity .however , several features exhibited by these system of equations are much too complicated to understand .we first list the issues that need to be explained .+ ( i ) small .+ ( a ) we find high frequency tiny oscillations superposed on the linearly increasing [ on the branch or better seen in the plot fig . 
[ fig3](c ) ] .this needs to be understood .+ ( b ) the numerical solutions show that the influence of inertia can be important _ even for small _ and small .for instance , the jumps between and branches occur even before reaches the extremum values of .\(ii ) for intermediate and high values of inertia , for low case .+ ( a ) we observe several relatively small amplitude saw tooth form of on the descending part of the pull force .these appear as a sequence of jumps between the two branches in the plane which we shall refer to as the jumping mode " .a proper estimate of the magnitude of is desirable .\(b ) in addition , there appears to be a critical value of for a given cycle below which the return jumps from ab to cd stop and one observes a monotonically increasing trend in [ in fig .[ fig4](c ) ] .+ ( iii ) high i and high .\(a ) the jumps between the branches occur at a very high frequency [ fig .[ fig5](c ) ] and now are located near the extremum values of and .but these regions are separated by a stretch where the orbit monotonically increases on the branch and monotonically decreases on the branch .we need to elucidate the underlying causes leading to the switching between the jumping mode and monotonically increasing or decreasing mode .\(b ) for large , say and large ( fig . [ fig5 ] ) , the extent of values of range between and much beyond whose range is around 300 .this feature is less dominant for small and small case .as the dynamics is described by a coupled set of differential equations with an algebraic constraint , the results are not transparent .we first attempt to get insight into the complex dynamics through some simple approximations valid in each of the regimes of the parameters .solution of these approximate equations will require appropriate initial values for the relevant variables which will be provided from the exact numerical solutions . due to the nature of approximations ,the results are expected to capture only the trend and order of magnitudes of the effects that are being calculated .but as we will show , even the numbers obtained match quite closely with the exact numerical results .our idea is to capture the dynamics through a single equation ( as far as possible or at most two as in the high and case ) by including all the relevant time scales and solve the relevant equation _ on each branch ._ for this we note that the equation for and play a crucial role as the inertial contribution appears only through eqs .( [ flow1 ] ) and ( [ flow2 ] ) and the time spent by the system is controlled by the equation for , eq . ( [ vdotapprox ] ) .using eqs .( [ flow1 ] ) and ( [ flow2 ] ) , we get the general equation for can be written down by using eq .( [ vdotapprox ] ) , in eq .( [ genalpha ] ) , we get } { rf^{\prime } } , \label{genalpha1}\\ & \simeq & -\frac{fr\alpha } { i } -\frac { [ \dot f + f \dot\alpha]}{rf^{\prime}}. \label{airy1 } % & \simeq & -\frac{f\alpha } { i } -\frac{f \dot\alpha}{rf^{\prime}}.\end{aligned}\ ] ] in obtaining eq .( [ airy1 ] ) , we have used which is valid except for high and high .further , in most cases , we can drop as the magnitude of this term is small and use . to be consistent we use .we note however that even for high and high where is not small , dropping and causes only 10% error .+ * case i , small * on the low velocity branch , as is small in eq .( [ flow1 ] ) , we can drop term in eq .[ genalpha ] .thus , note that for the low inertia case , approximation is clearly justified [ see eq .( [ flow2 ] ) ] . 
using this equation, we first get an idea of the relevant time scales as is increased . + *case a * consider the low velocity branch where the small amplitude high frequency oscillations are seen on the nearly linearly increasing part of [ given by , see for instance fig .[ fig3](b ) ] .a rough estimate of this time spent on this branch is obtained by . using and , [ from fig . [ fig3](b ) ] , we get ( compared to the correct value of which we shall obtain soon ) which is much larger than the period of the high frequency oscillation. thus , we could take the local value for the purpose of calculating the period of the high frequency oscillation .consider the orbit at the lowest value of for which we can use .then using eq .( [ approxalpha ] ) , the frequency for which gives the period of oscillation .this agrees very well with the exact numerical value .this frequency decreases when the force reaches the maximum value to giving which is again surprisingly close to the numerical value . in the numerical solutions, we find that the period gradually decreases [ see fig .[ fig3](c ) ] .this feature is also easily recovered by using .this leads to an additional term in the equation of motion for in eq .( [ approxalpha ] ) , r\alpha / i , \label{airy}\ ] ] where is the time required for to reach starting from . hereagain the term can be dropped .if was absent , the equation has the airy s form .( note that for this case also we could assume and . )though this equation does not have an exact solution , we note that we could take to have a sinusoidal form with where is treated as a slowly increasing parameter .( this assumption works quite well . )the above equation captures the essential features of the numerical solution .the numerical solution of eq .( [ airy ] ) ( as also this representation ) gives the decreasing trend of the small amplitude high frequency oscillations .( note that the airy equation itself gives a decreasing amplitude . )we note that eq .( [ approxalpha ] ) is valid on the branch where is small even for high inertia and small case .thus , we may be able to recover the gross time scales using this equation .our numerical results show that as we increase the inertia , exhibits a sinusoidal form on the branch [ see fig . [ fig4](b ) ] , although one full cycle is not seen .we note that though the value of is much larger than that for small , we can still use the above equations [ eq .( [ approxalpha ] ) and ( [ airy ] ) ] . on this branch increases from a value to a maximum . for large ( and ) , we get a rough estimate of the period by using the mean value of in eq .[ approxalpha ] .this gives a period which already agrees satisfactorily with the numerically exact value = 0.11 considering the approximation used ( i.e. , using the mean ) .a better estimate can be obtained by using eq .( [ airy ] ) .for the high and case , fig .5 for shows that the wave forms are nearly sinusoidal except for a jitter at the top and bottom . for this case , is nearly flat over the entire range of values of , with a value 300 . here , even on the ab branch , we can not ignore the term in eq .( [ genalpha ] ) .however , one sees that as and which suggest that to the leading order , we could ignore the term .this gives the period . from fig .[ fig5](b ) , considering only the monotonically decreasing part ( ) , the value of read off from the figure compares reasonably well with this value . 
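the airy-type behaviour invoked above is simple to check numerically; the following sketch integrates an oscillator of the form of eq. ([airy]), with the restoring term growing roughly linearly in time. the parameter values are illustrative placeholders, not those of the paper.

```python
from scipy.integrate import solve_ivp

# airy-type form: alpha'' = -(R/I) * f(t) * alpha, with f increasing roughly
# linearly in time on the low-velocity branch; all numbers are placeholders.
def rhs(t, y, R=0.1, I=1e-5, f0=150.0, fdot=400.0):
    a, adot = y
    return [adot, -(R / I) * (f0 + fdot * t) * a]

sol = solve_ivp(rhs, (0.0, 0.15), [1e-3, 0.0], max_step=2e-5, rtol=1e-8)
# the local period ~ 2*pi*sqrt(I/(R*f(t))) shrinks slowly as f grows, and the
# amplitude of the superposed ripple decreases, as for the airy equation.
```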
for the branch ,as is not small , the term appears to be important in eq .( [ genalpha ] ) .some idea of when this term is important can be had by looking at the time scales arising from inertia , namely , and the coefficient of the damping term , in eq .( [ airy1 ] ) .consider for and .the period obtained by assuming the mean value of in gives for compared to for .these numbers can be compared with the time scale which is 0.01 ( where we have used from numerical simulations for ) .this shows that for high inertia the damping coefficient in eq .( [ airy1 ] ) is important .we will discuss this issue in more detail later .+ * case b * now we focus on the origin of jumps between the branches .we note that the jumps from to ( or vice versa ) occur only when the peel velocity reaches a value where .this also means that the time scale on each branch , whether it spends only a short time or not , is controlled by the equation for .however , clearly the influence of inertia needs to be included . herewe present an approximate equation for which is valid in the various limits of the parameters : /f^{\prime } , \label{fullvdot}\\ & \simeq & \frac { [ k ( v - v ) + ( f_{in } + k ( v - v)t ) \dot \alpha)]}{f^{\prime } } , \label{approxvdot}\end{aligned}\ ] ] where the time is time spent on the branch considered ( low or high ) . in eq .( [ approxvdot ] ) , we have again used and with the same approximation used in eq .( [ airy1 ] ) .we now attempt to obtain correct estimates of the time spent by the orbit on each branch starting with the least complicated situation of the low inertia and small . for this case , on the low velocity branch , one can use the sinusoidal solution for , namely , where is a phase factor which also includes the contribution arising from the jump as well and with .both and needs to be supplied .alternately , one can use eq .( [ approxalpha ] ) with eq .( [ approxvdot ] ) for which we provide and at the point from the exact numerical solutions .we stress that this procedure is _ not _ equivalent to solving all the equations , as the only equation we use is eq .( [ approxvdot ] ) with the form of already determined from the equation for .[ we note here that though we have used the sinusoidal form of along with the initial conditions on , it is simpler to supply the initial conditions and use eq .( [ approxalpha ] ) . ]we note here that is a crucial factor that determines the time at which the orbit jumps from one branch to the other .equation ( [ approxvdot ] ) needs to be integrated from to that are determined by the pulling velocity , i.e. , the form of . for the low branch term makes a significant contribution for the time spent by the trajectory on .indeed , one can obtain the order of magnitude of the time spent by the orbit on ab by using a crude approximation for .this can be easily integrated from , to which already gives .this number is comparable to the numerically exact value .a correct estimate can be obtained by using from eq .( [ fvv ] ) with the sinusoidal form or eq .( [ approxalpha ] ) .( we have used from the numerical simulations for and }$ ] with . )this gives nearly the exact numerical value of .in fact , this solution also captures the oscillatory growth nature of quite accurately .the approximate form of ( continuous line ) along with the numerically exact solution ( dotted line ) are shown in fig .[ fig8 ] . 
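the crude residence-time estimate mentioned above amounts to a single quadrature once the alpha-dot term in eq. ([approxvdot]) is dropped. the sketch below evaluates it for the same illustrative n-shaped stand-in used earlier, so the number it returns only demonstrates the procedure and is not a prediction for the experimental tape.

```python
from scipy.integrate import quad

# with the alpha-dot term dropped, eq. (approxvdot) gives dv/dt ~ k(V - v)/f'(v),
# so the time spent traversing a branch from v_a to v_b is
#   t_branch = integral of f'(v) / (k * (V - v)) dv.
# the n-shaped f below is an illustrative stand-in, not the measured curve.
def df_peel(v):
    return 3.0 * (v - 2.0)**2 - 3.0     # derivative of (v-2)^3 - 3(v-2) + 4

k, V = 1.0, 2.0
v_a, v_b = 0.2, 1.0                     # low-velocity branch up to its maximum
t_branch, _ = quad(lambda v: df_peel(v) / (k * (V - v)), v_a, v_b)
print(f"estimated time on the branch: {t_branch:.3f}")
```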
using in gives and which is in good agreement with the exact numerical value of .it is interesting to note that this value is much less than [ see fig . [ fig3](b ) ] or equivalently is less than , what is also observed in our exact numerical simulation .the underlying mechanism of jumping of the orbit before reaches also becomes clear from the analysis ( fig .[ fig8 ] ) .we note that the magnitude of the oscillatory component in grows till it reaches permitted by .then , the orbit has to jump to .thus , the approximate solution gives an insight into the cause of the orbit jumping even before reaches ( for small i ) . for the branch also , the dominant term is .indeed , any reasonable function which has the same geometrical form of shown in fig . [ fig2 ] will give good results for .using the correct form of , we get which is close to the exact result .this again gives correct magnitude of .in addition the nature of the obtained by this approximation is close to the exact numerical solution shown in the inset of fig .8 . for and for the ab branch .the inset shows a similar comparison of on the cd branch .( , are in m / s , in kg m and in s.),width=302,height=226 ] * case ii , intermediate and high and low * the most difficult feature of our numerical solutions to understand is the dynamical mechanism leading to a series of drops in the pull force seen on the descending branch of for intermediate and high values of inertia and for a range of values .consider the high inertia and low case ( say and ) shown in fig .as stated earlier , there are two different issues that need to be understood here .first , the series of small force drops and second the monotonic increasing nature of on the branch . in this case , as already discussed , the coefficient of , namely , term in eq .( [ airy1 ] ) determines the time scale on , while on , the term dominates .thus , the general equation valid for this case is where we use .[ note that we have dropped term from eq .( [ airy1 ] ) as this term does not have any dependence on or . ]we start with the cascading effect .consider the orbit when it is at the highest value of on the branch for which we can drop term . as is a function of , and depends on time , it appears that we need to use coupled equations with eq .( [ approxvdot ] ) .however , the numerical solution of these equations show that one can make further approximation by taking to be constant taken at and , as the time spent on this branch is very small .the error in using this approximation is within . indeed , using and numerically integrating eq .( [ approxvdot ] ) , along with eq .( [ airy2 ] ) from to gives .this compares reasonably with the numerical value of . using thiswe get which compares well with the numerically exact value 19.8 . at this pointthe orbit jumps to the low velocity branch ( to the point ) .thus , as is small , for all practical purposes , we can ignore the dependence of on and on and use to be an exponentially decreasing function for analytical estimates .these analytical estimates already give reasonably accurate numbers . on the branch ,the dominant time scale is determined by , and we can use the approximate sinusoidal form in eq .( [ approxvdot ] ) , or eq . ( [ approxalpha ] ) along with eq .( [ approxvdot ] ) for the time evolution from the point . 
integrating from to with the appropriate initial values ( or ) and , gives which again compares very well with exact numerical value .this gives .the procedure for calculating the time spent by the orbit on and is the same and we find that successive values of increases which is again consistent with what is seen in figs .[ fig4](a ) and [ fig4](c ) .continuing this procedure , we find that a minimum value of for the cycle is reached .now consider the time evolution of on that should lead to a monotonically increasing nature as seen in the numerically exact solution .as this point corresponds to the point at which the dynamics switches from the jumping mode to the monotonically increasing nature of ( i.e. , the stretch ) , we discuss this in some detail .for the point , we have used the initial condition and integrating eq .( [ approxvdot ] ) and eq .( [ approxalpha ] ) ( or the sinusoidal form of ) from to gives .this is nearly the value 0.114 obtained from the exact numerical integration .this gives and which compares very well with the exact numerical value .in addition , the growth form of obtained from this approximation ( continuous line ) agrees very well with that of the exact numerical solution ( dotted line ) as shown in fig .the discrepancy seen in the figure can be reduced for instance if we include the terms neglected in eq .( [ genalpha1 ] ) such as and using in eq .( [ fullvdot ] ) .now we come to the crucial question. how does the system know that it has to go from to , while just during the previous visit to the point on branch lead only to a small increase in [ fig . [fig4](a ) and [ fig4](c ) ] before jumping to ? to understand this , we recall that on , a sinusoidal solution is allowed .first , one can notice a few differences in the initial conditions between the point and .for the point , the initial conditions taken from the exact numerical solution are and .( ) , while for the point , .however , for to begin a sinusoidal form , the initial value of is much higher than the natural slope .the local slope for any sinusoidal form is maximum when the variable is close to zero . in fig .[ fig4](b ) , the sinusoidal form starts when is close to zero ( at ) where the local slope should be close to the maximum value . near , the local slope is the product of the maximum amplitude of , say , ( in the sinusoidal stretch ) and .( we have assumed by dropping the phase factor . )thus , one should have when . using the value from exact numerical solution and at , we find that near . indeed, this is satisfied only at where .[ note that is not symmetric around zero due to the presence of in eq .( [ flow1 ] ) which has been ignored for the purpose of present discussion .] however , at is significantly higher than the slope permitted for to start a sinusoidal sojourn .this forces the orbit to make one more small loop ( to and back ) so that the initial value of is commensurate for to start a sinusoidal form .indeed , the initial values of at all the earlier visits to branch keep decreasing until it reaches a value that is consistent to begin the sinusoidal growth . once this is satisfied ,the monotonic increasing behavior from to is seen . 
as we will show this is the mechanism operating for high and case .+ * case iii , high and * for this case , even on the branch , can not be ignored in eq .( [ genalpha ] ) and thus one needs to use coupled eqs .( [ approxvdot ] ) and ( [ genalpha ] ) .calculations follow much the same lines and give correct values for and on both the branches during the rapid jumps . again, we need to answer when exactly does the system know to switch from a rapid jumping mode to monotonically increasing on or decreasing mode on ? consider the last of the rapid jumps from to ( just prior to the point ) in fig . [fig5](c ) .the corresponding point in the plot [ fig .[ fig5](b ) ] is shown on an expanded scale in the inset . from this figure , it is clear that has a positive slope at , though of small magnitude while at , it has a value -9.7 .the latter is close to the natural ( negative ) slope of when it begins the descending branch of the sinusoidal form . on the other hand ,the slope of is positive at and hence will not allow the growth to change over from a jumping mode to the sinusoidal growth form for .one can note that the slopes at points of all the earlier visits to ab [ see fig .[ fig5](b ) inset ] keep decreasing till the slope becomes negative required for the monotonically decreasing trend of .this is exactly the same mechanism for and also , for the low branch , except that in this case , even the sign of the slope is incompatible for all the points prior to in fig .[ fig5 ] c. the mechanism operating on ( i.e. , at the switching from jumping mode to monotonically decreasing nature of ) is essentially the same but arguments are a little more involved and hence they are not presented . with numerically exact solution ( dotted line ) for .( , are in m / s , in kg m and in s. ) , width=302,height=188 ] now , we consider the causes leading to the maximum and minimum values taken by being much more than permitted by . asthis is dominant for , we illustrate this using fig . [ fig5](a ) and [ fig5](c ) .we first note that eq .( [ constr ] ) constrains the dynamically changing values of and to the stationary values of .clearly , this implies that . a rough estimate of can be obtained by with determined by .this relation can be easily verified by using the numerical values of .for instance , for and , and .this gives while the numerical value from the phase plot for this case gives which is very close .similarly , using and , we get which compares well with the numerical value of 180 .we have verified this relation is respected for various values of and . for small , is small , we should not find much difference between ( ) and that of .we first summarize the results before making some relevant remarks .we have carried out a study of the dynamics of an adhesive roller tape using a differential - algebraic scheme used for singular set of differential equations .the algorithm produces stick - slip jumps across the two dissipative branches as a consequence of the inherent dynamics .our extensive simulations show that the dynamics is much richer than anticipated earlier .in particular the influence of inertia is shown to be dramatic .for instance , even at low inertia , for small values of , the influence of inertia manifests with jumps of the orbit occurring even before reaches ( or ) which is quite unexpected .more dominant is its influence for high both for low and high , though it is striking for the latter case . 
following the reasoning used in the plc effect, we introduce a dynamized curve as resulting from competing time scales of internal relaxation and imposed pull speed .the modified peel force function leads to the decreasing trend in the magnitude of with increasing pull velocity , a feature observed in experiments .we have also recaptured the essential features of the dynamics by a set of approximations valid in different regimes of the parameter space .these approximate solutions illustrate the influence of various time scales such as that due to inertia , the elasticity of the tape and that determined by the stationary peel force .we also find the unusual canard type of solutions . here , it is worthwhile to comment on the dynamical features of the model .the numerical results themselves are too complex to understand .a striking example of this is the series of force drops seen on the descending branch of the pull force [ fig .[ fig4](c ) ] .this result is hard to understand as it would amount to a partial relaxation of the pull force .however , a partial relaxation is only possible in the presence of another competing time scale ( other than the imposed time scale ) .another example is the jumping of the orbit for low case , from to and vice versa even before the pull force reaches the extremum values of .for this reason , we have undertaken to make this complex dynamics transparent using a set of approximations .the basic idea here is to solve a single equation ( or at most two equations as in the high and case ) which incorporates all the relevant time scales .this method not only captures all the results to within 10% error but it also clearly brings out the regimes of parameter space where these time scales become important .this analysis also shows that the time scale due to inertia of the roller tape shows up even for low which comes as a surprise as one expects that for low inertia , the orbit should stick to the stationary peel function .( recall that for low inertia , equations have been approximated by lienard type of equations by maugis and barquins . )our approximate equations demonstrate that a crucial role in inducing the jumps _ even at low inertia _ is played by the high frequency oscillations resulting from the inertia of the roller tape . as for high inertia ( both for low and high pull velocities ) , the time scale due to inertia is responsible for the partial relaxation of as shown. a few comments may be in order on the bursting type of oscillations in the peel velocity .bursting type of oscillatory behavior are commonly seen in neuro - biological systems .conventionally , bursting type oscillations arise in the presence of homoclinic orbit .such bursting type of oscillations have also been modeled using one dimensional map .however , it is clear that the mechanism for bursting type of oscillations in our case is different . in our case, this arises due to the fact that the orbit is forced to jump between the stable manifolds as a result of competing time scale of inertia and the time scale for the evolution of .( we note that the latter itself includes more than one time scale [ see eq . ( [ approxvdot ] ) ] , namely the contribution from the slopes of the stable parts of the stationary curve and that due to elasticity of the tape . 
)the bunching of the spikes in is the result of becoming flat for large and .one other comment relates to canard type solutions .figure [ fig7 ] shows one such solution .as mentioned , these type solutions arise from sticking to the unstable manifold .in fact , a similar type of solution is seen in fig . [fig5](c ) .as noted earlier , all the jumps from to or vice versa always occur when the peel velocity reaches the limiting value where .however , it can be seen from this figure , the orbit starting from monotonically decreases well into the unstable part of .thus , this solution also has the features of canards .it must be stated that our approximate solutions can not capture the behavior of canards . finally , the results presented in this paper are on the nature of dynamics of the model equations which so far had defied solution . however , comparison with experiments has been minimal largely due to the paucity of quantitative experimental findings as stated earlier .our analysis shows that the model predicts periodic , saw - tooth , as well as chaotic solutions as reported in .the high magnitude of the lyapunov exponents for the chaotic solutions in the low pull velocities is consistent with that reported earlier .we note that the other quantitative experimental feature reported by refs. is the decreasing trend of the average force drop magnitudes as a function of the pull velocity is also captured by our model ( fig .[ fig6 ] ) , a result that holds for both low and high inertia .this result is a direct consequence of the dynamization of the peel force function , i.e. , dependence of the peel force on the pull velocity .we note here that the complex dynamics at high velocities ( see fig .[ fig5 ] ) is a direct result of the unstable part of dynamized curve , , shrinking to zero . to the best of our knowledge ,this is first time the result in fig .[ fig6 ] has been explained .as the hypothesis of dynamization captures the decreasing trend of the force drops , it also suggests that the underlying mechanism of competing time scales responsible for the peel force depending on the pull velocity is likely to be correct as in the plc effect . clearly , a rigorous derivation of the peel force function from microscopic considerations that includes the effect of the viscoelastic glue at the contact point is needed to understand the dynamics appropriately .the authors wish to thank a. s. vasudeva murthy of tifr , bangalore for useful discussions on dae algorithm .rd and am wish to thank m. bekele of addis ababa university , ethiopia and m. s. bharathi of brown univ . ,usa for stimulating and friendly discussions .this work is financially supported by the department of science and technology , new delhi , india under the grant sp / s2k-26/98 .
we investigate the dynamics of peeling of an adhesive tape subjected to a constant pull speed . we derive the equations of motion for the angular speed of the roller tape , the peel angle and the pull force used in earlier investigations using a lagrangian . due to the constraint between the pull force , peel angle and the peel force , it falls into the category of differential - algebraic equations requiring an appropriate algorithm for its numerical solution . using such a scheme , we show that stick - slip jumps emerge in a purely dynamical manner . our detailed numerical study shows that these set of equations exhibit rich dynamics hitherto not reported . in particular , our analysis shows that inertia has considerable influence on the nature of the dynamics . following studies in the portevin - le chatelier effect , we suggest a phenomenological peel force function which includes the influence of the pull speed . this reproduces the decreasing nature of the rupture force with the pull speed observed in experiments . this rich dynamics is made transparent by using a set of approximations valid in different regimes of the parameter space . the approximate solutions capture major features of the exact numerical solutions and also produce reasonably accurate values for the various quantities of interest .
physical layer security has been a very active area of research in information theory .see and for overviews of recent progress in this field .a basic model of physical layer security is a wiretap / broadcast channel with two receivers , a legitimate receiver and an eavesdropper .both the legitimate receiver and the eavesdropper channels are assumed to be _ known _ at the transmitter . by exploring the ( statistical ) difference between the legitimate receiver channel and the eavesdropper channel , one may design coding schemes that can deliver a message reliably to the legitimate receiver while keeping it asymptotically perfectly secret from the eavesdropper . while assuming the transmitter s knowledge of the legitimate receiver channel might be reasonable ( particularly when a feedback link is available ) , assuming that the transmitter knows the eavesdropper channel is _ unrealistic _ in most scenarios .this is mainly because the eavesdropper is an _adversary _ , who usually has no incentive to help the transmitter to acquire its channel state information .hence , it is critical that physical layer security techniques are designed to withstand the _ uncertainty _ of the eavesdropper channel . in this paper, we consider a communication scenario where there are _ multiple _ possible realizations for the eavesdropper channel . which realization will actually occur is _ unknown _ to the transmitter .our goal is to design coding schemes such that the number of _ secure _ bits delivered to the legitimate receiver depends on the _ actual _ realization of the eavesdropper channel .more specifically , when the eavesdropper channel realization is weak , _ all _ bits delivered to the legitimate receiver need to be secure .in addition , when the eavesdropper channel realization is strong , a prescribed _ part _ of the bits needs to _ remain _ secure .we call such codes _ security embedding codes _ , referring to the fact that high - security bits are now embedded into the low - security ones .we envision that such codes are naturally useful for the secrecy communication scenarios where the information bits are _ not _ created equal : some of them have more security priorities than the others and hence require stronger security protections during communication .for example , in real wireless communication systems , control plane signals have higher secrecy requirement than data plane transmissions , and signals that carry users identities and cryptographic keys require stronger security protections than the other signals .a key question that we consider is at what expense one may allow part of the bits to enjoy stronger security protections .note that a naive " security embedding scheme is to design two separate secrecy codes to provide two different levels of security protections , and apply them to two separate parts of the information bits . 
in this scheme , the high - security bits are protected using a stronger secrecy code and hence are communicated at a lower rate . the overall communication rate is a _ convex _ combination of the low - security bit rate and the high - security bit rate and hence is lower than the low - security bit rate . moreover , this rate loss becomes larger as the portion of the high - security bits becomes larger and the additional security requirement ( for the high - security bits ) becomes higher . the main result of this paper is to show that it is possible to have a significant portion of the information bits enjoying additional security protections _ without _ sacrificing the overall communication rate . this further justifies the name " security embedding , " as having part of the information bits enjoying additional security protections is now only an added bonus . more specifically , in this paper , we call a secrecy communication scenario _ embeddable _ if a fraction of the information bits can enjoy additional security protections without sacrificing the overall communication rate , and we call it _ perfectly embeddable _ if the high - security bits can be communicated at _ full _ rate ( as if the low - security bits do not exist ) without sacrificing the overall communication rate . the key to achieving optimal security embedding is to _ jointly _ encode the low - security and high - security bits ( as opposed to encoding them separately as in the naive scheme ) . in particular , the low - security bits can be used as ( part of ) the _ transmitter randomness _ to protect the high - security bits ( when the eavesdropper channel realization is strong ) ; this is a key feature of our proposed security embedding codes . the rest of the paper is organized as follows . in sec . [ sec : wtc ] , we briefly review some basic results on the secrecy capacity and optimal encoding schemes for several classical wiretap channel settings . these results provide performance and structural benchmarks for the proposed security embedding codes . in sec . [ sec : mswtc ] , an information - theoretic formulation of the security embedding problem is presented , which we term the _ two - level security wiretap channel _ . a coding scheme that combines rate splitting , superposition coding , nested binning and channel prefixing is proposed and is shown to achieve the secrecy capacity region of the channel in several scenarios . based on the results of sec . [ sec : mswtc ] , in sec . [ sec : gmswtc ] we study the engineering communication models with real channel input and additive white gaussian noise , and show that both scalar and independent parallel gaussian ( under an individual per - subchannel average power constraint ) two - level security wiretap channels are _ perfectly embeddable _ . in sec . [ sec : mswtc2 ] , we extend the results of sec . [ sec : mswtc ] to the _ wiretap channel ii _ setting of ozarow and wyner , and show that two - level security wiretap channels ii are also _ perfectly embeddable _ .
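as a rough numerical illustration of the rate loss just described , the short python sketch below time - shares two hypothetical wiretap codes so that a target fraction of the delivered bits is high - security ; the capacities are made - up numbers rather than values from this paper , and the convex - combination formula is simply the time - sharing rate .

```python
# Naive "separate coding" embedding: time-share a low-security wiretap code of
# rate C_low with a high-security code of rate C_high (illustrative values only).
C_low, C_high = 1.0, 0.3   # bits per channel use against the weak / strong eavesdropper

for frac_high in (0.1, 0.3, 0.5):
    # Fraction lam of channel uses goes to the high-security code so that
    # high-security bits make up frac_high of all delivered bits.
    lam = frac_high * C_low / (frac_high * C_low + (1 - frac_high) * C_high)
    overall = lam * C_high + (1 - lam) * C_low   # convex combination of the two rates
    print(f"high-security fraction {frac_high:.1f}: overall rate {overall:.3f} bits/use "
          f"(vs. {C_low:.3f} with no high-security bits)")
```

the printed rate falls from 1.0 toward 0.3 as the high - security fraction grows , which is exactly the loss that the joint encoding developed below avoids .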
finally , in sec .[ sec : con ] , we conclude the paper with some remarks .consider a discrete memoryless wiretap channel with transition probability , where is the channel input , and and are the channel outputs at the legitimate receiver and the eavesdropper , respectively ( see fig .[ fig : wtc ] ) .the transmitter has a message , uniformly drawn from where is the block length and is the rate of communication .the message is intended for the legitimate receiver , but needs to be kept asymptotically perfectly secret from the eavesdropper .mathematically , this secrecy constraint can be written as in the limit as , where ,\ldots , z[n]) ] .note from that if and only if .that is , for the gaussian wiretap channel , asymptotic perfect secrecy communication is possible if and only if the legitimate receiver has a larger channel gain than the eavesdropper . in this case , we can equivalently write the channel output at the eavesdropper as a degraded version of the channel output at the legitimate receiver , and the random binning scheme of with _ gaussian _ codebooks and _ full _ transmit power achieves the secrecy capacity of the channel .a closely related engineering scenario consists of a bank of independent parallel scalar gaussian wiretap channels . in this scenario ,the channel outputs at the legitimate receiver and the eavesdropper are given by and where here , is the channel input for the subchannel , and are the channel gains for the legitimate receiver and the eavesdropper channel respectively in the subchannel , and and are additive white gaussian noise with zero means and _ unit _ variances .furthermore , are independent for so all subchannels are independent of each other .two different types of power constraints have been considered : the average individual per - subchannel power constraint )^2 \leq p_l , \quad l=1,\ldots , l \label{eq : pcons - s}\ ] ] and the average total power constraint )^2\right ] \leq p. \label{eq : pcons - t}\ ] ] under the average individual per - subchannel power constraint , the secrecy capacity of the independent parallel gaussian wiretap channel is given by where is defined as in .clearly , any communication rate less than the secrecy capacity can be achieved by using separate scalar gaussian wiretap codes , each for one of the subchannels .the secrecy capacity , , under the average total power constraint is given by where the maximization is over all possible power allocations such that .a waterfilling - like solution for the optimal power allocation was derived in ( * ? ? ?* th . 1 ), which provides an efficient way to numerically calculate the secrecy capacity .consider a discrete memoryless broadcast channel with three receivers and transition probability .the receiver that receives the channel output is a legitimate receiver .the receivers that receive the channel outputs and are two possible realizations of an eavesdropper .assume that the channel output is _ degraded _ with respect to the channel output , i.e. , forms a markov chain in that order .therefore , the receiver that receives the channel output represents a stronger realization of the eavesdropper channel than the receiver that receives the channel output .the transmitter has two independent messages : a high - security message uniformly drawn from and a low - security message uniformly drawn from . here, is the block length , and and are the corresponding rates of communication . 
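to make the secrecy capacity expressions referred to above concrete , the sketch below evaluates the standard gaussian wiretap formula , the positive part of the difference between the legitimate and eavesdropper channel capacities , and sums it over independent subchannels under per - subchannel power constraints . the channel gains are treated as power gains with unit noise variance , and the numerical values are illustrative only .

```python
import math

def gaussian_secrecy_capacity(h_m, h_e, P):
    """Standard Gaussian wiretap secrecy capacity in bits per channel use,
    with h_m, h_e treated as power gains and unit-variance noise."""
    return max(0.0, 0.5 * math.log2(1 + h_m * P) - 0.5 * math.log2(1 + h_e * P))

def parallel_secrecy_capacity(h_m, h_e, powers):
    """Independent parallel Gaussian wiretap channel under average individual
    per-subchannel power constraints: separate scalar wiretap codes achieve
    the sum of the per-subchannel secrecy capacities."""
    return sum(gaussian_secrecy_capacity(a, b, p) for a, b, p in zip(h_m, h_e, powers))

# Illustrative gains and powers (three subchannels).
h_legit, h_eave, powers = [2.0, 1.0, 0.5], [0.5, 1.5, 0.1], [1.0, 1.0, 1.0]
print(parallel_secrecy_capacity(h_legit, h_eave, powers))  # subchannel 2 contributes zero
```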
both messages and are intended for the legitimate receiver , and need to be kept asymptotically perfectly secure when the eavesdropper realization is weak , i.e. , in the limit as . in addition , when the eavesdropper realization is strong , the high - security message needs to remain asymptotically perfectly secure , i.e. , in the limit as .a rate pair is said to be _ achievable _ if there is a sequence of codes of rate pair such that both messages and can be reliably delivered to the legitimate receiver while satisfying the asymptotic perfect secrecy constraints and .the collection of all possible achievable rate pairs is termed as the _ secrecy capacity region _ of the channel .[ fig : mswtc ] illustrates this communication scenario , which we term as _ two - level security wiretap channel_. the above setting of two - level security wiretap channel is closely related to the traditional wiretap channel setting of .more specifically , without the additional secrecy constraint on the high - security message , we can simply view the messages and as a single ( low - security ) message with rate . andthe problem reduces to communicating the message over the traditional wiretap channel with transition probability . by the secrecy capacity expression ,the maximum achievable is given by \label{eq : cs_ck2}\ ] ] where is an auxiliary random variable satisfying the markov chain .similarly , without needing to communicate the low - security message ( i.e. , ) , the secrecy constraint reduces to which is implied by the secrecy constraint since due to the markov chain . in this case, the problem reduces to communicating the high - security message over the traditional wiretap channel with transition probability . again , by the secrecy capacity expression , the maximum achievable is given by \label{eq : cs_ck3}\ ] ] where is an auxiliary random variable satisfying the markov chain .based on the above connections , we may conclude that a two - level security wiretap channel is _ embeddable _ if there exists a sequence of coding schemes with a rate pair such that is equal to and , and it is _ perfectly embeddable _ if there exists a sequence of coding schemes with a rate pair such that is equal to and is equal to . an important special case of the two - level security wiretap channel problem considered here is when the channel output is a constant signal . 
in this case , the secrecy constraint becomes _ obsolete _ , and the low - security message becomes a _regular _ message without any secrecy constraint .the problem of simultaneously communicating a regular message and a confidential message over a discrete memoryless wiretap channel was first considered in , where a single - letter characterization of the capacity region was established .for the _ general _ two - level security wiretap channel problem that we consider here , both high - security message and low - security message are subject to asymptotic perfect secrecy constraints , which makes the problem potentially much more involved .the following theorem provides two _ sufficient _ conditions for establishing the achievability of a rate pair for a given discrete memoryless two - level security wiretap channel .[ thm : dm1 ] consider a discrete memoryless two - level security wiretap channel with transition probability that satisfies the markov chain .a nonnegative pair is an achievable rate pair of the channel if it satisfies for some input distribution .more generally , a nonnegative pair is an achievable rate pair of the channel if it satisfies for some joint distribution , where and are auxiliary random variables satisfying the markov chain and such that .clearly , the sufficient condition can be obtained from by choosing and to be a constant .hence , is a more general sufficient condition than .the sufficient condition can be proved by considering a _ nested _ binning scheme that uses the low - security message as part of the transmitter randomness to protect the high - security message ( when the eavesdropper channel realization is strong ) .the more general sufficient condition can be proved by considering a more complex coding scheme that combines rate splitting , superposition coding , nested binning and channel prefixing . 
a detailed proof of the theorem is provided in sec .[ pf : thm1 ] .the following corollary provides sufficient conditions for establishing that a two - level security wiretap channel is ( perfectly ) embeddable .the conditions are given in terms of the existence of a joint auxiliary - input random triple and are immediate consequences of theorem [ thm : dm1 ] .a two - level security wiretap channel is _ embeddable _ if there exists a pair of auxiliary random variables and satisfying the markov chain and such that , is an _optimal _ solution to the maximization program , and , and it is _ perfectly embeddable _ if there exists a pair of auxiliary random variables and satisfying the markov chain and such that , is an _optimal _ solution to the maximization program , and is equal to .if , in addition to the markov chain , we also have the markov chain in that order , the sufficient condition is also necessary , leading to a precise characterization of the secrecy capacity region .the results are summarized in the following theorem ; a proof of the theorem can be found in appendix [ app:1 ] .[ thm : dm2 ] consider a discrete memoryless two - level security wiretap channel with transition probability that satisfies the markov chains and .the secrecy capacity region of the channel is given by the set of all nonnegative pairs that satisfy for some joint distribution , where and are auxiliary random variables satisfying the markov chain .if , in addition to the markov chains and , we also have the markov chain in that order , the ( weaker ) sufficient condition also becomes necessary , leading to a simpler characterization of the secrecy capacity region ( which does _ not _ involve any auxiliary random variables ) .the results are summarized in the following theorem ; a proof of the theorem can be found in appendix [ app:2 ] .[ thm : dm3 ] consider a discrete memoryless two - level security wiretap channel with transition probability that satisfies the markov chains , and .the secrecy capacity region of the channel is given by the set of all nonnegative pairs that satisfy for some input distribution .we first prove the weaker sufficient condition by considering a nested binning scheme that uses the low - security message as part of the transmitter randomness to protect the high - security message ( when the eavesdropper channel realization is strong ) .we shall consider a random - coding argument , which can be described as follows .fix an input distribution ._ codebook generation ._ randomly and independently generate codewords of length according to an -product of .randomly partition the codewords into bins so each bin contains codewords .further partition each bin into subbins so each subbin contains codewords . label the codewords as where denotes the bin number , denotes the subbin number within each bin , and denotes the codeword number within each subbin . see fig . [fig : nb ] for an illustration of the codebook structure . _ encoding ._ to send a message pair , the transmitter _ randomly _( according to a uniform distribution ) chooses a codeword from the subbin identified by and sends it through the channel . _decoding at the legitimate receiver ._ given the channel outputs , the legitimate receiver looks into the codebook and searches for a codeword that is jointly typical with . in the casewhen with high probability the transmitted codeword is the _ only _ one that is jointly typical with ( and hence can be correctly decoded ) . 
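the bin and subbin bookkeeping in this random - coding argument is just mixed - radix indexing , which the small sketch below makes explicit for made - up values of the bit budgets ; it illustrates the partition structure only , not an actual code construction .

```python
import random

# Illustrative bit budgets: n*R1 bins, n*R2 subbins per bin, n*R' codewords per subbin.
nR1, nR2, nRp = 3, 2, 4
bins, subbins, per_subbin = 2 ** nR1, 2 ** nR2, 2 ** nRp
total = bins * subbins * per_subbin          # size of the whole codebook

def encode(m1, m2, rng=random):
    """Pick a codeword uniformly at random from subbin (m1, m2)."""
    k = rng.randrange(per_subbin)            # transmitter's local randomness
    return (m1 * subbins + m2) * per_subbin + k

def bin_and_subbin(index):
    """Recover (m1, m2) from a flat codeword index."""
    return index // (subbins * per_subbin), (index // per_subbin) % subbins

idx = encode(m1=5, m2=1)
assert 0 <= idx < total and bin_and_subbin(idx) == (5, 1)
print(idx, bin_and_subbin(idx))
```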
_ security at the eavesdropper ._ note that each bin corresponds to a message and contains codewords , each randomly and independently generated according to an -product of .for a given message , the transmitted codeword is randomly and uniformly chosen from the corresponding bin ( where the randomness is from both the low - security message and the transmitter s choice of ) . following , in the case when we have tends to zero in the limit as .furthermore , each subbin corresponds to a message pair and contains codewords , each randomly and independently generated according to an -product of . for a given message pair ,the transmitted codeword is randomly and uniformly chosen from the corresponding subbin ( where the randomness is from the transmitter s choice of ) . again , following , in the case when we have tends to zero in the limit as .eliminating from using fourier - motzkin elimination , we can conclude that any rate pair that satisfies is achievable .next we prove the more general sufficient condition by considering a coding scheme that combines rate splitting , superposition coding , nested binning and channel prefixing .we shall once again resort to a random - coding argument , which can be described as follows .fix a joint auxiliary - input distribution with and .split the low - security message into two independent submessages and with rates and , respectively . _ codebook generation . _randomly and independently generate codewords of length according to an -product of .randomly partition the codewords into bins so each bin contains codewords .label the codewords as where denotes the bin number , and denotes the codeword number within each bin .we shall refer to the codeword collection as the -codebook . for each codeword in the -codebook , randomly and independently generate codewords of length according to an -product of .randomly partition the codewords into bins so each bin contains codewords .further partition each bin into subbins so each subbin contains codewords . label the codewords as where indicates the base codeword from which was generated , denotes the bin number , denotes the subbin number within each bin , and denotes the codeword number within each subbin. we shall refer to the codeword collection as the -subcodebook corresponding to base codeword .see fig .[ fig : nb2 ] for an illustration of the codebook structure . _ encoding ._ to send a message triple , the transmitter _ randomly _( according a uniform distribution ) chooses a codeword from the bin in the -codebook .once a is chosen , the transmitter looks into the -subcodebook corresponding to and _ randomly _ chooses a codeword from the subbin identified by .once a is chosen , an input sequence is generated according to an -product of and is then sent through the channel . _decoding at the legitimate receiver . _given the channel outputs , the legitimate receiver looks into the -codebook and its -codebooks and searches for a pair of codewords that are jointly typical with .in the case when with high probability the codeword pair selection is the only one that is jointly typical with . _security at the eavesdropper ._ to analyze the security of the high - security message and the submessage at the eavesdropper , we shall assume ( for now ) that both the submessage and the codeword selection are known at the eavesdropper .note that such an assumption can only _ strengthen _ our security analysis . 
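the layered scheme just described can be pictured with the same kind of index arithmetic , now with an outer codebook binned by one submessage of the low - security message and , for each outer codeword , an inner subcodebook binned by the high - security message and subbinned by the other submessage . the assignment of submessages to layers below is our own reading of the construction and all rates are made - up ; in the actual scheme the channel input would then be generated from the inner codeword through the prefix channel .

```python
import random

# Illustrative bit budgets for the two layers.
nR2b, nRu = 2, 3            # outer codebook: bins (one low-security submessage), codewords per bin
nR1, nR2a, nRv = 3, 2, 4    # inner subcodebooks: bins (high-security), subbins (other submessage)

def encode(m1, m2a, m2b, rng=random):
    """Return (outer_index, inner_index) for the message triple (m1, m2a, m2b)."""
    outer = m2b * 2 ** nRu + rng.randrange(2 ** nRu)                    # random codeword in bin m2b
    inner = (m1 * 2 ** nR2a + m2a) * 2 ** nRv + rng.randrange(2 ** nRv)
    return outer, inner

def recover_bins(outer, inner):
    """Recover (m1, m2a, m2b) from the pair of indices."""
    m2b = outer // 2 ** nRu
    m1 = inner // (2 ** nR2a * 2 ** nRv)
    m2a = (inner // 2 ** nRv) % 2 ** nR2a
    return m1, m2a, m2b

pair = encode(m1=4, m2a=3, m2b=1)
assert recover_bins(*pair) == (4, 3, 1)
print(pair)
```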
given the base codeword , the encoding of and usingthe corresponding -subcodebook is identical to the nested binning scheme considered previously ( with additional channel prefixing ) .thus in the case when we have in the limit as .the equalities in and are due to the fact that and are independent . from wemay conclude that in the limit as . to analyze the security of the submessage ,note that each bin in the -codebook corresponds to a message and contains codewords , each randomly and independently generated according to an -product of . for a given submessage ,the codeword is randomly and uniformly chosen from the corresponding bin ( where the randomness is from the transmitter s choice of ) .note from that the rate of each -subcodebook is greater than . following ( * ? ? ?* lemma 1 ) , we have in the limit as .putting together and , we have which tends to zero in the limit as . finally , note that the overall communicate rate of the low - security message given by eliminating , and from , , and using fourier - motzkin elimination , simplifying the results using the facts that 1 ) , 2 ) which is due to the markov chain , and 3 ) and which are due to the markov chain , and letting , we may conclude that any rate pair satisfying is achievable .this completes the proof of theorem [ thm : dm1 ] .consider a discrete - time two - level security wiretap channel with real input and outputs , and given by where , and are the corresponding channel gains , and , and are additive white gaussian noise with zero means and unit variances .assume that so the receiver that receives the channel output represents a stronger realization of the eavesdropper channel than the receiver that receives the channel output .the channel input is subject to the average power constraint .we term the above communication scenario as _( scalar ) gaussian two - level security wiretap channel_. the following theorem provides an explicit characterization of the secrecy capacity region .[ thm : mgwtc ] consider the ( scalar ) gaussian two - level security wiretap channel .the secrecy capacity region of the channel is given by the collection of all nonnegative pairs that satisfy where is defined as in ._ proof : _ we first prove the converse part of the theorem . recall from sec .[ sec : mswtc - mod ] that without transmitting the low - security message ( which can only increase the achievable rate ) , the problem reduces to communicating the high - security message over the traditional wiretap channel . for the gaussian two - level security wiretap channel, the problem reduces to communicating the high - security message over the gaussian wiretap channel with channel outputs and given by we thus conclude that for any achievable rate .similarly , ignoring the additional secrecy constraint for the high - security message ( which can only enlarge the achievable rate region ) , we can simply view the messages and as a single message with rate . in this case, the problem reduces to communicating the message over the traditional wiretap channel . for the gaussian two - level security wiretap channel, the problem reduces to communicating the message over the gaussian wiretap channel with channel outputs and given by we thus conclude that for any achievable rate pair . to show that any nonnegative pair that satisfies is achievable , let us first consider two simple cases .first , when , both and are equal to zero ( c.f . definition ) . 
so does not include any positive rate pairs and hence there is nothing to prove .next , when , and reduces to since the high - security message does not need to be transmitted , any rate pair in this region can be achieved by using a scalar gaussian wiretap code to encode the low - security message .this has left us with the only case with .for the case where , the achievability of any rate pair in follows from that of by choosing to be gaussian with zero mean and variance .this completes the proof of the theorem . following corollary follows directly from the achievability of the corner point of .scalar gaussian two - level security wiretap channels under an average power constraint are perfectly embeddable .[ fig : cssg ] illustrates the secrecy capacity region for the case where .also plotted in the figure is the rate region that can be achieved by the naive scheme that uses two gaussian wiretap codes to encode the messages and separately .note that the corner point is strictly outside the naive " rate region , which illustrates the superiority of nested binning over the separate coding scheme .consider a discrete - time two - level security wiretap channel which consists of a bank of independent parallel scalar gaussian two - level security wiretap channels . in this model , the channel outputs are given by , and where here, is the channel input for the subchannel , , and are the corresponding channel gains in the subchannel , and , and are additive white gaussian noise with zero means and unit variances .we assume that for all , so the receiver that receives the channel output represents a stronger realization of the eavesdropper channel in _ each _ of the subchannels than the receiver that receives the channel output . furthermore , , , are independent so all subchannels are independent of each other .we term the above communication scenario as _ independent parallel gaussian two - level security wiretap channel_. the following theorem provides an explicit characterization of the secrecy capacity region under an average individual per - subchannel power constraint . [ thm : mpgwtc ] consider the independent parallel gaussian two - level security wiretap channel where the channel input is subject to the average individual per - subchannel power constraint .the secrecy capacity region of the channel is given by the collection of all nonnegative pairs that satisfy where is defined as in ._ proof : _ we first prove the converse part of the theorem . following the same argument as that for theorem [ thm : mgwtc ] , we can show that for any achievable secrecy rate pair . by the secrecy capacity expression for the independent parallel gaussian wiretap channel under an average individual per - subchannel power constraint, we have substituting into proves the converse part of the theorem . 
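a small numerical sketch of the comparison in [ fig : cssg ] : assuming the scalar region has the form of the high - security rate bounded by the secrecy capacity against the strong eavesdropper and the sum rate bounded by the secrecy capacity against the weak eavesdropper , as the converse argument above indicates , the corner point achieved by nested binning dominates the time - sharing line of the naive scheme . the gains and power below are illustrative only .

```python
import math

def cs(h_m, h_e, P):
    """Gaussian wiretap secrecy capacity (bits/use), gains treated as power gains."""
    return max(0.0, 0.5 * math.log2(1 + h_m * P) - 0.5 * math.log2(1 + h_e * P))

# Illustrative gains: legitimate receiver, weak eavesdropper, strong eavesdropper.
h0, h_weak, h_strong, P = 4.0, 1.0, 2.0, 1.0

R1_max = cs(h0, h_strong, P)         # high-security message alone
Rsum_max = cs(h0, h_weak, P)         # total rate against the weak eavesdropper
print("corner point of the capacity region:", (round(R1_max, 3), round(Rsum_max - R1_max, 3)))

# Naive scheme: time-share two separate wiretap codes, tracing the segment
# between (R1_max, 0) and (0, Rsum_max).
R1 = 0.5 * R1_max
print("at R1 = %.3f: capacity region allows R2 = %.3f, naive scheme only R2 = %.3f"
      % (R1, Rsum_max - R1, (1 - R1 / R1_max) * Rsum_max))
```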
to show that any nonnegative pair that satisfies is achievable , let us consider _ independent _ coding over each of the subchannels .note that each subchannel is a scalar gaussian two - level security wiretap channel with average power constraint and channel gains .thus , by theorem [ thm : mgwtc ] , any nonnegative pair that satisfies is achievable for the subchannel .the overall communication rates are given by substituting into proves that any nonnegative pair that satisfies is achievable .this completes the proof of the theorem .similar to the scalar case , the following corollary is an immediate consequence of theorem [ thm : mpgwtc ] .independent parallel gaussian two - level security wiretap channels under an average individual per - subchannel power constraint are perfectly embeddable .the secrecy capacity region of the channel under an average total power constraint is summarized in the following corollary .the results follow from the well - known fact that an average total power constraint can be written as the _ union _ of average individual per - subchannel power constraints , where the union is over all possible power allocations among the subchannels .consider the independent parallel gaussian two - level security wiretap channel where the channel input is subject to the average total power constraint .the secrecy capacity region of the channel is given by the collection of all nonnegative pair that satisfies for some power allocation such that .[ fig : cspg ] illustrates the secrecy capacity with subchannels where as we can see , under the average total power constraint , the independent parallel gaussian two - level security wiretap channel is embeddable but _ not _ perfectly embeddable .the reason is that the optimal power allocation that maximizes is _ suboptimal _ in maximizing . by comparison , under the average individual per - subchannel power constraint , the power allocated to each of the subchannelsis fixed so the channel is always perfectly embeddable .in sec . [ sec : wtc ] we briefly summarized the known results on a classical secrecy communication setting known as wiretap channel . a closely related classical secrecy communication scenario is _ wiretap channel ii _ , which was first studied by ozarow and wyner . in the wiretap channelii setting , the transmitter sends a binary sequence of length _ noiselessly _ to an legitimate receiver .the signal received at the eavesdropper is given by where represents an erasure output , and is a subset of of size representing the locations of the transmitted bits that can be accessed by the eavesdropper .if the subset is _ known _ at the transmitter , a message of bits can be noiselessly communicated to the legitimate receiver through .since the eavesdropper has no information regarding to , _ perfectly _ secure communication is achieved _ without _ any coding .it is easy to see that in this scenario , is also the _ maximum _ number of bits that can be reliably and perfectly securely communicated through transmitted bits . an interesting result of is that for any , a total of bits can be reliably and _ asymptotically perfectly _ securely communicated to the legitimate receiver even when the subset is _ unknown _ ( but with a fixed size ) a priori at the transmitter . here , by asymptotically perfectly securely " we mean in the limit as . unlike the case where the subset is known ,coding is _ necessary _ when is unknown a priori at the transmitter . 
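under the total power constraint the region is a union over power allocations , and the lack of perfect embeddability noted above stems from the fact that the allocation maximizing the high - security rate generally differs from the one maximizing the sum rate . the two - subchannel grid search below , with made - up gains , illustrates this ; for these numbers the two maximizing allocations are far apart .

```python
import math

def cs(h_m, h_e, p):
    return max(0.0, 0.5 * math.log2(1 + h_m * p) - 0.5 * math.log2(1 + h_e * p))

# Illustrative gains for two subchannels: legitimate / weak eavesdropper / strong eavesdropper.
h0, h_weak, h_strong = [10.0, 2.0], [0.1, 1.0], [9.0, 1.0]
P_total, steps = 2.0, 400

best_R1, best_sum = (-1.0, None), (-1.0, None)
for i in range(steps + 1):
    alloc = (P_total * i / steps, P_total * (1 - i / steps))
    R1 = sum(cs(a, b, p) for a, b, p in zip(h0, h_strong, alloc))    # high-security rate
    Rsum = sum(cs(a, b, p) for a, b, p in zip(h0, h_weak, alloc))    # total secrecy rate
    best_R1 = max(best_R1, (R1, alloc))
    best_sum = max(best_sum, (Rsum, alloc))

print("allocation maximizing the high-security rate:", best_R1)
print("allocation maximizing the sum secrecy rate  :", best_sum)
```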
in particular, considered a random binning scheme that partitions the collection of all length- binary sequences into an appropriately chosen _ group code _ and its cosets .for the wiretap channel setting , as shown in sec .[ sec : mswtc ] , a random binning scheme can be easily modified into a nested binning scheme to efficiently embed high - security bits into low - security ones . the main goal of this sectionis to extend this result from the classical setting of wiretap channel to wiretap channel ii .more specifically , assume that a realization of the subset has two possible sizes , and , where .the transmitter has two independent messages , the high - security message and the low - security message , uniformly drawn from and respectively .when the size of the realization is , both messages and need to be secure , i.e. , in the limit as .in addition , when the size of the realization of is , the high - security message needs to remain secure , i.e. , in the limit as .we term this communication scenario as _ two - level security wiretap channel ii _, in line with our previous terminology in sec .[ sec : mswtc ] . by the results of , without needing to communicate the low - security message , the maximum achievable is . without the additional secrecy constraint on the high - security message ,the messages can be viewed as a single message with rate , and the maximum achievable is .the main result of this section is to show that the rate pair is indeed achievable , from which we may conclude that two - level security wiretap channels ii are _ perfectly _ embeddable .moreover , perfect embedding can be achieved by a nested binning scheme that uses a _ two - level _ coset code .the results are summarized in the following theorem .two - level security wiretap channels ii are perfectly embeddable .moreover , perfect embedding can be achieved by a nested binning scheme that uses a two - level coset code ._ proof : _ fix . consider a binary parity - check matrix \ ] ] where the size of is and the size of is .let be a one - on - one mapping between and the binary vectors of length , and let be a one - on - one mapping between and the binary vectors of length . for a given message pair , the transmitter randomly ( according to a uniform distribution ) chooses a solution to the linear equations = \left [ \begin{array}{c } s_1(m_1 ) \\s_2(m_2 ) \end{array } \right ] \label{eq : gc}\ ] ] and sends it to the legitimate receiver . when the parity - check matrix has _ full _( row ) rank , the above encoding procedure is equivalent of a nested binning scheme that partitions the collection of all length- binary sequences into bins and subbins using a two - level coset code with parity - check matrices .moreover , let be the columns of and let .define as the dimension of the subspace spanned by and when the size of the realization of is , by ( * ? ? ?* lemma 4 ) we have note that the low - security message is uniformly drawn from .so by , for a given high - security message , the transmitted sequence is randomly chosen ( according to a uniform distribution ) as a solution to the linear equations .if we let be the columns of and define where is the dimension of the subspace spanned by , we have again from ( * ? ? ?* lemma 4 ) when the size of the realization of is .let when we have either does _ not _ have full rank , or , or , and let otherwise . 
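before completing the proof , it may help to see the encoding step in miniature : the sketch below draws a random binary parity - check matrix stacked from the two blocks , and for a given message pair picks a sequence uniformly at random among the solutions of the corresponding linear equations over gf(2) . the block length and message sizes are toy values and the brute - force solver is only for illustration ; if the randomly drawn matrix happens to be rank deficient , some syndromes have no solution and the sketch reports that .

```python
import itertools
import random

import numpy as np

rng = np.random.default_rng(0)
n, k1, k2 = 8, 2, 3        # toy sizes: k1 high-security bits, k2 low-security bits
H = rng.integers(0, 2, size=(k1 + k2, n))   # stacked parity-check matrix [H1; H2]

def coset_encode(s1, s2):
    """Return a uniformly random x in {0,1}^n with H x = [s1; s2] (mod 2), if one exists."""
    syndrome = np.concatenate([s1, s2])
    solutions = [np.array(x) for x in itertools.product((0, 1), repeat=n)
                 if np.array_equal(H @ np.array(x) % 2, syndrome)]
    return random.choice(solutions) if solutions else None

x = coset_encode(np.array([1, 0]), np.array([0, 1, 1]))   # (high-security, low-security) bits
if x is None:
    print("this H is rank deficient for the chosen syndrome; redraw H")
else:
    print("codeword:", x, " H x mod 2:", H @ x % 2)
```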
by using a randomized argument that generates the entries of independently according to a uniform distribution in , we can show that there exists an with for sufficiently large ( see appendix [ app:3 ] for details ) .for such an , we have from and that when the size of the realization of is , and when the size of the realization of is . letting and ( in that order )proves the achievability of the rate pair and hence completes the proof of the theorem .in this paper we considered the problem of simultaneously communicating two messages , a high - security message and a low - security message , to a legitimate receiver , referred to as the security embedding problem .an information - theoretic formulation of the problem was presented . with appropriate coding architectures, it was shown that a significant portion of the information bits can receive additional security protections without sacrificing the overall rate of communication .key to achieve efficient embedding was to use the low - security message as part of the transmitter randomness to protect the high - security message when the eavesdropper channel realization is strong .for the engineering communication scenarios with real channel input and additive white gaussian noise , it was shown that the high - security message can be embedded into the low - security message at full rate without incurring any loss on the overall rate of communication for both scalar and independent parallel gaussian channels ( under an average individual per - subchannel power constraint ) .the scenarios with multiple transmit and receive antennas are considerably more complex and hence require further investigations . finally , note that even though in this paper we have only considered providing two levels of security protections to the information bits , most of the results extend to multiple - level security in the most straightforward fashion . in the limit when the security levels change continuously , the number of secure bits delivered to the legitimate receiver would depend on the realization of the eavesdropper channel even though such realizations are unknown a priori at the transmitter .first note that when forms a markov chain in that order , we have for any jointly distributed that satisfies the markov chain . to show that the sufficient condition is also necessary , let be an achievable rate pair .following fano s inequality and the asymptotic perfect secrecy constraints and , there exists a sequence of codes ( indexed by the block length ) of rate pair such that where in the limit as .following and , we have \\ & = & h(m_1|z_1^n)-h(m_1,m_2|y^n ) \\ & \leq & h(m_1,m_2|z_1^n)-h(m_1,m_2|y^n ) \\ & = & i(m_1,m_2;y^n)-i(m_1,m_2;z_2^n ) . \end{aligned}\ ] ] let , ,\ldots , y[i-1]) ] and : = ( y^{i-1},z_{1,i+1}^n) ] . 
following and , we have \\ & = & i(m;y^n)-i(m;z_2^n)\\ & = & \sum_{i=1}^n \left[i(m;y[i]|y^{i-1 } ) - i(m;z_2[i]|z_{2,i+1}^n)\right]\\ & \stackrel{(b)}= & \sum_{i=1}^n \left[i(m;y[i]|y^{i-1},z_{2,i+1}^n ) - i(m;z_2[i]|y^{i-1},z_{2,i+1}^n)\right]\\ & = & \sum_{i=1}^n \left[i(m , y^{i-1},z_{1,i+1}^n , z_{2,i+1}^n;y[i ] ) - i(m , y^{i-1},z_{1,i+1}^n , z_{2,i+1}^n;z_2[i])\right]\\ & & \quad\quad -\sum_{i=1}^n \left[i(y^{i-1},z_{2,i+1}^n;y[i ] ) - i(y^{i-1},z_{2,i+1}^n;z_2[i ] ) \right]\\ & & \quad\quad -\sum_{i=1}^n \left[i(z_{1,i+1}^n;y[i]|m , y^{i-1},z_{2,i+1}^n ) - i(z_{1,i+1}^n;z_2[i]|m , y^{i-1},z_{2,i+1}^n)\right]\\ & \stackrel{(c)}\leq & \sum_{i=1}^n \left[i(m , y^{i-1},z_{1,i+1}^n , z_{2,i+1}^n;y[i ] ) - i(m , y^{i-1},z_{1,i+1}^n , z_{2,i+1}^n;z_2[i])\right]\\ & \stackrel{(d)}= & \sum_{i=1}^n \left[i(m , y^{i-1},z_{1,i+1}^n;y[i ] ) - i(m , y^{i-1},z_{1,i+1}^n;z_2[i])\right]\\ & = & \sum_{i=1}^n \left[i(m , u[i];y[i ] ) - i(m , u[i];z_2[i])\right]\\ & = & n\left[i(m , u[q];y[q]|q ) -i(m , u[q];z_{2}[q]|q)\right]\\ & = & n\left[i(m , u[q],q;y[q ] ) -i(m , u[q],q;z_{2}[q])-\left(i(y[q];q)-i(z_2[q];q)\right)\right]\\ & = & n\left[i(v[q];y[q ] ) -i(v[q];z_{2}[q])-\left(i(y[q];q)-i(z_2[q];q)\right)\right]\\ & \stackrel{(e)}\leq & n\left[i(v[q];y[q ] ) -i(v[q];z_{2}[q])\right]\end{aligned}\ ] ] where ( b ) follows from the csiszr - krner sum equality ( * ? ? ?* lemma 7 ) , ( c ) is due to the markov chain , ( d ) is due to the markov chain , and ( e ) follows again from the markov chain and the fact that the channel is memoryless . finally , we complete the proof of the theorem by letting ,q) ] , ] , ] and .as shown in theorem [ thm : dm2 ] , when we have the markov chains and , there exists a random triple satisfying the markov chain and such that and .in fact , the sum rate can be further bounded from above as \\ & \stackrel{(a)}= & i(x;y)-i(x;z_2)-\left[i(x;y|v)-i(x;z_2|v)\right]\\ & \stackrel{(b ) } \leq & i(x;y)-i(x;z_2)\end{aligned}\ ] ] where ( a ) follows from the markov chain , and ( b ) follows from the markov chain so .when we further have the markov chain , can be further bounded from above as \\ & \stackrel{(c)}= & i(v;y)-i(v;z_1 ) - \left[i(u;y)-i(u;z_1)\right]\\ & \stackrel{(d)}\leq & i(v;y)-i(v;z_1)\\ & = & i(v , x;y)-i(v , x;z_1)-\left[i(x;y|v)-i(x;z_1|v)\right]\\ & \stackrel{(e)}\leq & i(v , x;y)-i(v , x;z_1)\\ & \stackrel{(f)}= & i(x;y)-i(x;z_1)\end{aligned}\ ] ] where ( c ) follows from the markov chain , ( d ) and ( e ) follow from the markov chain so and , and ( f ) follows from the markov chain .this completes the proof of the theorem .to show that there exists a parity - check matrix such that , it is sufficient to show that where denotes the expectation of a random variable .let and for . by the union bound, we have by ( * ? ? ?* lemma 6 ) , for sufficiently large . by (* lemma 5 ) , for any such that since the total number of different subsets of is , we have for . substituting and intoproves that for sufficiently large and hence completes the proof .r. liu , t. liu , h. v. poor , and s. shamai ( shitz ) , new results on multiple - input multiple - output gaussian broadcast channels with confidential messages,"_ieee trans .inf . theory _ , submitted for publication .available online at http://arxiv.org/abs/1101.2007 y. k. chia and a. el gamal , 3-receiver broadcast channels with common and confidential messages , " _ ieee trans .inf . theory _ , submitted for publication .available online at http://arxiv.org/abs/0910.1407
this paper considers the problem of simultaneously communicating two messages , a high - security message and a low - security message , to a legitimate receiver , referred to as the security embedding problem . an information - theoretic formulation of the problem is presented . a coding scheme that combines rate splitting , superposition coding , nested binning and channel prefixing is considered and is shown to achieve the secrecy capacity region of the channel in several scenarios . specializing these results to both scalar and independent parallel gaussian channels ( under an average individual per - subchannel power constraint ) , it is shown that the high - security message can be embedded into the low - security message at full rate ( as if the low - security message does not exist ) without incurring any loss on the overall rate of communication ( as if both messages are low - security messages ) . extensions to the wiretap channel ii setting of ozarow and wyner are also considered , where it is shown that " perfect " security embedding can be achieved by an encoder that uses a two - level coset code .
while the average treatment effect can be easily estimated without bias in randomized experiments , treatment effect heterogeneity plays an essential role in evaluating the efficacy of social programs and medical treatments .we define treatment effect heterogeneity as the degree to which different treatments have differential causal effects on each unit . for example , ascertaining subpopulations for which a treatment is most beneficial ( or harmful ) is an important goal of many clinical trials .however , the most commonly used method , subgroup analysis , is often inappropriate and remains one of the most debated practices in the medical research community [ e.g. , ] .estimation of treatment effect heterogeneity is also important when ( 1 ) selecting the most effective treatment among a large number of available treatments , ( 2 ) designing optimal treatment regimes for each individual or a group of individuals [ e.g. , ] , ( 3 ) testing the existence or lack of heterogeneous treatment effects [ e.g. , ] , and ( 4 ) generalizing causal effect estimates obtained from an experimental sample to a target population [ e.g. , ] . in all of these cases , the researchers must infer how treatment effects vary across individual units and/or how causal effects differ across various treatments .two well - known randomized evaluation studies in the social sciences serve as the motivating applications of this paper .earlier analyses of these data sets focused upon the estimation of the overall average treatment effects and did not systematically explore treatment effect heterogeneity .first , we analyze the get - out - the - vote ( gotv ) field experiment where many different mobilization techniques were randomly administered to registered new haven voters in the 1998 election [ ] .the original experiment used an incomplete , unbalanced factorial design , with the following four factors : a personal visit , 7 possible phone messages , 0 to 3 mailings , and one of three appeals applied to visit and mailings ( civic duty , neighborhood solidarity , or a close election ) .the voters in the control group did not receive any of these gotv messages .additional information on each voter includes age , residence ward , whether registered for a majority party , and whether the voter abstained or did not vote in the 1996 election . here, our goal is to identify a set of gotv mobilization strategies that can best increase turnout . 
given the design , there exist 193 unique treatment combinations , and the number of observations assigned to each treatment combination ranges dramatically , from the minimum of 4 observations ( visited in person , neighbor / civic - neighbor phone appeal , two mailings , with a civic appeal ) to the maximum of ( being visited in person , with any appeal ) .the methodological challenge is to extract useful information from such sparse data .the second application is the evaluation of the national supported work ( nsw ) program , which was conducted from 1975 to 1978 over 15 sites in the united states .disadvantaged workers who qualified for this job training program consisted of welfare recipients , ex - addicts , young school dropouts , and ex - offenders .we consider the binary outcome indicating whether the earnings increased after the job training program ( measured in 1978 ) compared to the earnings before the program ( measured in 1975 ) .the pre - treatment covariates include the 1975 earnings , age , years of education , race , marriage status , whether a worker has a college degree , and whether the worker was unemployed before the program ( measured in 1975 ) .our analysis considers two aspects of treatment effect heterogeneity .first , we seek to identify the groups of workers for whom the training program is beneficial .the program was administered to the heterogeneous group of workers and , hence , it is of interest to investigate whether the treatment effect varies as a function of individual characteristics .second , we show how to generalize the results based on this experiment to a target population . such an analysis is important for policy makers who wish to use experimental results to decide whether and how to implement this program in a target population . to address these methodological challenges ,we formulate the estimation of heterogeneous treatment effects as a variable selection problem [ see also ] .we propose the squared loss support vector machine ( l2-svm ) with separate lasso constraints over the pre - treatment and causal heterogeneity parameters ( section [ secmodel ] ) .the use of two separate constraints ensures that variable selection is performed separately for variables representing alternative treatments ( in the case of the gotv experiment ) and/or treatment - covariate interactions ( in the case of the job training experiment ) . not onlydo these variables differ qualitatively from others , they often have relatively weak predictive power .the proposed model avoids the ad - hoc variable selection of existing procedures by achieving optimal classification and variable selection in a single step [ e.g. , ] .the model also directly incorporates sampling weights into the estimation procedure , which are useful when generalizing the causal effects estimates obtained from an experimental sample to a target population . to fit the proposed model with multiple regularization constraints , we develop an estimation algorithm based on a generalized cross - validation ( gcv ) statistic .when the derivation of an optimal treatment regime rather than the description of treatment effect heterogeneity is of interest , we can replace the gcv statistic with the average effect size of the optimal treatment rule [ ] . 
the proposed methodology with the gcv statistic does not require cross - validation and hence is more computationally efficient than the commonly used methods for estimation of treatment effect heterogeneity such as boosting [ ] , bayesian additive regression trees ( bart ) [ ] , and other tree - based approaches [ e.g. , , ] . while most similar to a bayesian logistic regression with noninformative prior [ ] , the proposed method uses lasso constraints to produce a parsimonious model .to evaluate the empirical performance of the proposed method , we analyze the aforementioned two randomized evaluation studies ( section [ secapplications ] ) .we find that personal visits are uniformly more effective than any other treatment method , while sending three mailings with a civic duty message is the most effective treatment without a visit . in addition , every mobilization strategy with a phone call , but no personal visit , is estimated to have either a negative or negligible positive effect . for the job training study , we find that the program is most effective for low - education , high income non - hispanics , unemployed blacks with some college , and unemployed hispanics with some high school .in contrast , the program would be least effective when administered to old , unemployed recipients , unmarried whites with a high school degree but no college , and high earning hispanics with no college . finally , we conduct simulation studies to compare the performance of the proposed methodology with that of various alternative methods ( section [ secsimulations ] ) .the proposed method admits the possibility of no treatment effect and yields a low false discovery rate , when compared to the nonsparse alternative methods that _ always _ estimate some effects . despite reductions in false discovery ,the method remains statistically powerful .we find that the proposed method has a comparable discovery rate and competitive predictive properties to these commonly used alternatives .in this section we describe the proposed methodology by presenting the model and developing a computationally efficient estimation algorithm to fit the model . we describe our method within the potential outcomes framework of causal inference .consider a simple random sample of units from population , with a possibly different target population of inference .for example , the researchers and policy makers may wish to apply the gotv mobilization strategies and the job training program to a population , of which the study sample is not representative .we consider a multi - valued treatment variable , which takes one of values from where means that unit is assigned to the control condition . in the gotv study, we have a total of 193 treatment combinations ( ) , whereas the job training program corresponds to a binary treatment variable ( ) .the potential outcome under treatment is denoted by , which has support .thus , the observed outcome is given by and we define the causal effect of treatment for unit as . throughout, we assume that there is no interference among units , there is a unique version of each treatment , each unit has nonzero probability of assignment to each treatment level , and the treatment level is independent of the potential outcomes , possibly conditional on observed covariates [ ] .such assumptions are met in randomized experiments , which are the focus of this paper . 
under these assumptions, we can identify the average treatment effect ( ate ) for each treatment , .in observational studies , additional difficulty arises due to the possible existence of unmeasured confounders .one commonly encountered problem related to treatment effect heterogeneity requires selecting the most effective treatment from a large number of alternatives using the causal effect estimates from a finite sample .that is , we wish to identify the treatment condition such that is the largest , that is , . we may also be interested in identifying a subset of the treatments whose ates are positive .when the number of treatments is large as in the gotv study , a simple strategy of subsetting the data and conducting a separate analysis for each treatment suffers from the lack of power and multiple testing problems .another common challenge addressed in this paper is identifying groups of units for which a treatment is most beneficial ( or most harmful ) , as in the job training program study .often , the number of available pre - treatment covariates , , is large , but the heterogeneous treatment effects can be characterized parsimoniously using a subset of these covariates , . this problem can be understood as identifying a sparse representation of the conditional average treatment effect ( cate ) , using only a subset of the covariates .we denote the cate for a unit with covariate profile as , which can be estimated as the difference in predicted values under and with .the sparsity in covariates greatly eases interpretation of this model .we next turn to the description of the proposed model that combines optimal classification and variable selection to estimate treatment effect heterogeneity . for the remainder of the paper, we focus on the case of binary outcomes , that is , .however , the proposed model and algorithm can be extended easily to nonbinary outcomes by modifying the loss function .we choose to model binary outcomes with the l2-svm to illustrate our proposed methodology because it presents one of the most difficult cases for implementing two separate lasso constraints .as we discuss below , our method can be simplified when the outcome is nonbinary ( e.g. , continuous , counts , multinomial , hazard ) or the causal estimand of interest is characterized on a log - odds scale ( with a logistic loss ) . in particular , readily available software can be adapted to handle these cases [ ] . in modeling treatment effect heterogeneity, we transform the observed binary outcome to .we then relate the estimated outcome and the estimated latent variable , as is an dimensional vector of treatment effect heterogeneity variables , and is an dimensional vector containing the remaining covariates .for example , when identifying the most efficacious treatment condition among many alternative treatments , would consist of indicator variables ( e.g. , different combinations of mobilization strategies ) , each of which is representing a different treatment condition .in contrast , would include pre - treatment variables to be adjusted ( e.g. , age , party registration , turnout history ) .similarly , when identifying groups of units most helped ( or harmed ) by a treatment , would include variables representing interactions between the treatment variable ( e.g. , the job training program ) and the pre - treatment covariates of interest ( e.g. , age , education , race , prior employment status and earnings ) . 
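a minimal sketch of how the two blocks of covariates just described might be laid out for the binary - treatment case : one block holding the treatment indicator and its interactions with pre - treatment covariates ( the causal heterogeneity variables ) , the other holding the covariate main effects . the variable names and toy data are ours , not the authors' .

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
treat = rng.integers(0, 2, size=n)              # binary treatment indicator
covs = rng.normal(size=(n, 3))                  # e.g., age, education, prior earnings

# Causal-heterogeneity block: treatment indicator plus treatment x covariate interactions.
X_het = np.column_stack([treat, treat[:, None] * covs])
# Adjustment block: main effects of the pre-treatment covariates.
X_cov = covs

print(X_het.shape, X_cov.shape)                 # (200, 4) (200, 3)
```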
in this case, would include all the main effects of the pre - treatment covariates .thus , we separate the causal heterogeneity variables of interest from the rest of the variables .we do not impose any restriction between main and interaction effects because some covariates may not predict the baseline outcome but do predict treatment effect heterogeneity .finally , we choose the linear model because it allows for easy interpretation of interaction terms .however , the researchers may also use the logistic or other link function within our framework . in estimating , we adapt the support vector machine ( svm ) classifier and place separate lasso constraints over each set of coefficients [ ] .our model differs from the standard model by allowing and to have separate lasso constraints .the model is motivated by the qualitative difference between the two parameters , and also by the fact that often causal heterogeneity variables have weaker predictive power than other variables .specifically , we formulate the svm as a penalized squared hinge - loss objective function ( hereafter l2-svm ) where the hinge - loss is defined as [ ] .we focus on the l2-svm , rather than the l1-svm , because it returns the standard difference - in - means estimate for the treatment effect in the absence of pre - treatment covariates . with two separate constraints to generate sparsity in the covariates , our estimatesare given by are pre - determined separate lasso penalty parameters for and , respectively , and is an optional sampling weight , which may be used when generalizing the results obtained from one sample to a target population .our objective function is similar to several existing lasso variants but there exist important differences .for example , the elastic net introduced by places the same set of covariates under both a lasso and ridge constraint to help reduce mis - selections among correlated covariates .in addition , the group lasso introduced by groups different levels of the same factor together so that all levels of a factor are selected without sacrificing rotational invariance . in contrast , the proposed method places separate lasso constraints over the qualitatively distinct groups of variables .the l2-svm offers two different means to estimate heterogeneous treatment effects .first , we can predict the potential outcomes directly from the fitted model and estimate the conditional treatment effect ( cte ) as the difference between the predicted outcome under the treatment status and that under the control condition , that is , .this quantity utilizes the fact that the l2-svm is an optimal classifier [ ] .second , we can also estimate the cate . to do this, we interpret the l2-svm as a truncated linear probability model over a subinterval of 7600 ) than nsw participants ( ] , was generated with and the covariance matrix is given by .the design matrix for the 49 treatment variables is orthogonal and balanced .the true values of the coefficients are set as and , where denotes 47 remaining coefficients drawn from a uniform distribution on ] .figure [ figusvsbayes ] compares the fdr and dr for our proposed method ( ` svm ` ; solid lines ) with those for the logistic lasso ( ` lasso ` ) and bayesian logistic regression ( ` glm ` ; dotted and dashed lines ) .for the bayesian glm , we consider two rules : one based on posterior means of coefficients ( dashed lines ) and the other selecting coefficients with -values below ( dotted lines ) . 
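the objective just described , a weighted squared hinge loss plus two separate lasso penalties , is convex , so a small illustration can be fit with a generic convex solver . the sketch below uses cvxpy on simulated data with the design - matrix layout from the previous sketch ; it is our illustration rather than the authors' implementation , the penalty levels are arbitrary rather than gcv - tuned , and the fitted conditional treatment effect is reported on the latent scale only .

```python
import cvxpy as cp
import numpy as np

# Simulated data: binary treatment, three covariates, binary outcome recoded to {-1, +1}.
rng = np.random.default_rng(2)
n = 200
treat = rng.integers(0, 2, size=n)
covs = rng.normal(size=(n, 3))
X_het = np.column_stack([treat, treat[:, None] * covs])   # treatment + interactions
X_cov = covs                                              # covariate main effects
latent = 1.0 * treat * covs[:, 0] - covs[:, 1] + rng.normal(scale=0.5, size=n)
y = np.where(latent > 0, 1.0, -1.0)
w = np.ones(n)                                            # optional sampling weights

beta0 = cp.Variable()
beta = cp.Variable(X_het.shape[1])    # causal heterogeneity coefficients (lasso penalty 1)
gamma = cp.Variable(X_cov.shape[1])   # pre-treatment coefficients (lasso penalty 2)
lam_beta, lam_gamma = 5.0, 1.0        # illustrative penalty levels; the paper tunes these via GCV

margin = cp.multiply(y, beta0 + X_het @ beta + X_cov @ gamma)
sq_hinge = cp.sum(cp.multiply(w, cp.square(cp.pos(1 - margin))))
cp.Problem(cp.Minimize(sq_hinge + lam_beta * cp.norm1(beta)
                       + lam_gamma * cp.norm1(gamma))).solve()

# Conditional treatment effect on the latent scale: covariate main effects cancel,
# leaving the treatment coefficient plus the fitted interaction terms.
b = beta.value
cte = b[0] + covs @ b[1:]
print("selected heterogeneity terms:", np.flatnonzero(np.abs(b) > 1e-6))
print("first five fitted effects   :", np.round(cte[:5], 3))
```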
unlike the simulations given in section [ subsecbest ] , neither bart , boosting , nor conditional inference trees provide a simple rule for variable selection in this setting and hence no results are reported .the interpretation of these plots is identical to that of the plots in figure [ figfirstsims ] . in the left column , the top ( bottom ) plot presents fdr ( dr ) for the largest effect , whereas that of the right column presents fdr ( dr ) for the four largest effects .when compared with the bayesian glm , the proposed method has a lower fdr for both largest and four largest estimated effects .the -value thresholding improves the bayesian glm , and yet the proposed method maintains a much lower fdr and comparable dr .relative to the lasso , the proposed method is not as effective in considering the largest estimated effect except that it has a lower fdr when the sample size is small .however , when considering the four largest estimated effects , the proposed method maintains a lower fdr than the lasso , and a comparable dr .this result is consistent with the fact that the value statistic targets the largest treatment effect while the gcv statistic corresponds to the overall fit.=-1 to further evaluate our method , we consider a situation where each method is applied to a sample and then used to generate a treatment rule for each individual in another sample . for each method ,a payoff , characterized by the net number of people in the new sample who are assigned to treatment and are in fact helped by the treatment , is calculated .to represent a budget constraint faced by most researchers , we specify the total number of individuals who can receive the treatment and vary this number within the simulation study.=-1 specifically , after fitting each model to an initial sample , we draw another simple random sample of 2000 observations from the same data generating process . using the result from each method , we calculate the predicted cate for each observation of the new sample , , and give the treatment to those with highest predicted cates until the number of treated observations reaches the pre - specified limit . finally , a payoff of the form is calculated for all treated observations of the new sample where is the true cate .this produces a payoff of if a treated observation is actually helped by the treatment , if the observation is harmed , and for untreated observations . as a baseline , we compare each method to the `` oracle '' treatment rule , , which administers the treatment only when helpful . we have also considered an alternative payoff of the form , representing how much ( rather than whether ) the treatment helps or harms .the results were qualitatively similar to those presented here.=-1 .0d3.0d3.0d3.0@ & + & + & & & & + & -2 & 11 & 22 & 42 + & -19 & -4 & 8 & 21 + & -18 & 2 & 15 & 28 + & -20 & -7 & 7 & 34 + & -1 & 10 & 18 & 40 + & 2 & 2 & 2 & 5 + & -123 & -121 & -121 & -116 +the results from the simulation are presented in table [ tabpayoff ] .the table presents a comparison of payoffs , by method , as a percentage of the optimal oracle rule , which is considered as .the bottom row presents the outcome if every observation were treated , indicating that in this simulation the average treatment effect is negative but there exists a subgroup for which treatment is beneficial . 
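the budget - constrained evaluation just described reduces to ranking units by predicted cate , treating the top of the ranking until the budget is exhausted , and scoring the result . the sketch below uses simulated effects , and the +1 / -1 / 0 scoring is our reading of the payoff described above .

```python
import numpy as np

rng = np.random.default_rng(3)
m = 2000                                                   # evaluation sample size
true_cate = rng.normal(loc=-0.05, scale=0.2, size=m)       # negative on average, positive for some
pred_cate = true_cate + rng.normal(scale=0.2, size=m)      # noisy estimates from a fitted model

def payoff(pred, true, budget):
    """Treat the `budget` units with the highest predicted effects; score +1 for each
    treated unit whose true effect is positive, -1 if negative, 0 otherwise."""
    treated = np.argsort(pred)[::-1][:budget]
    return int(np.sum(np.sign(true[treated])))

n_helped = int(np.sum(true_cate > 0))
for budget in (100, 500, 1000, 2000):
    oracle = min(budget, n_helped)          # oracle treats only units it knows are helped
    print(f"budget {budget:4d}: payoff {payoff(pred_cate, true_cate, budget):5d} (oracle {oracle})")
```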
the proposed method ( ` svm ` )narrowly dominates boosting ( ` boost ` ) , and both the proposed method and boosting noticeably outperform all other competitors , except conditional inference trees ( ` tree ` ) at sample size 250 . at larger sample sizes ,however , the tree severely underfits .while the proposed method and boosting perform similarly by a predictive criterion , boosting does not return an interpretable model .we also find that ` svm ` outperforms ` lasso ` , which is consistent with the fact that the gcv statistic targets the overall performance while the value statistic focuses on the largest treatment effect . if administering the treatment is costless , the proposed method generates the most beneficial treatment rule among its competitors .figure [ figposneg ] presents the results across methods and sample sizes in the presence of a budget constraint .the left column shows the proportion of treated units that actually benefit from the treatment for each observation considered for the treatment in the order of predicted cate ( the horizontal axis ) .the oracle identifies those who certainly benefit from the treatment and treats them first .the middle column shows the proportion of treated units that are hurt by the treatment . here , the oracle never hurts observations and hence is represented by the horizontal line at zero .the right column presents the net benefit by treatment rule , which can be calculated as the difference between the positive ( left column ) and negative ( middle column ) effects .each row presents a different sample size to which each method is applied .the figure shows that when the sample size is small , the proposed method assigns fewer observations a harmful treatment , relative to its competitors . for moderate and large sample sizes, the proposed method dominates its competitors in both identifying a group that would benefit from the treatment and avoiding treating those who would be hurt .this can be seen from the plots in the middle column where the result based on the proposed method ( ` svm ` ; solid thick lines ) stays close to the horizontal zero line when compared to other methods .similarly , in the right column , the results based on the proposed method stay above other methods .when these lines go below zero , it implies that a majority of treated observations would be harmed by the treatment .the disadvantage of the proposed method is its conservativeness .this can be seen in the left column where at the beginning of the percentile the solid thick line is below its competitors for small sample sizes .this difference vanishes as the sample size increases , with the proposed method outperforming its competitors . in sum , when used to predict a treatment rule for out - of - sample observations , the proposed method makes fewer harmful prescriptions and often yields a larger net benefit than its competitors .estimation of heterogeneous treatment effects plays an essential role in scientific research and policy making . in particular , researchers often wish to select the most efficacious treatments from a large number of possible treatments and to identify individuals who benefit most ( or are harmed ) by treatments .estimation of treatment effect heterogeneity is also important when generalizing experimental results to a target population of interest .the key insight of this paper is to formulate the identification of heterogeneous treatment effects as a variable selection problem . 
within this framework ,we develop a support vector machine with two separate sparsity constraints , one for a set of treatment effect heterogeneity parameters of interest and the other for observed pre - treatment effect parameters .this setup addresses the fact that in many applications , pre - treatment covariates are much more powerful predictors than treatment variables of interest or their interactions with covariates . in addition , unlike the existing techniques such as boosting and bart , the proposed method yields a parsimonious model that is easy to interpret .our simulation studies show that the proposed method has low false discovery rates while maintaining competitive discovery rates .the simulation study also shows that the use of our gcv statistic is appropriate when exploring the treatment effect heterogeneity rather than identifying the single optimal treatment rule ..2d3.2@ & & + treatment intercept & 6.92 & 6.67 + _ main effects _ + age & 0.00 & -0.83 + married & 1.32 & 3.39 + white & 0.00 & 0.10 + _ squared terms _+ age & -0.03 & -0.09 + education & 0.89 & 0.86 + _ interaction terms _ + no hs degree , unemployed in 1975 & -1.06 & 0.00 + white , married & 0.00 & 26.16 + white , no hs degree & 25.35 & 30.65 + hispanic , logged 1975 earnings & -49.36 & -62.15 + black , logged 1975 earnings & 8.29 & 0.00 + white , education & 0.00 & -1.41 + married , education & 4.90 & 12.11 + married , logged 1975 earnings & 0.00 & 5.72 + education , unemployed in 1975 & 7.52 & 9.59 + age , education & 0.00 & -0.47 + age , black & -0.56 & 0.00 + age , hispanic & 0.00 & 0.34 + age , unemployed in 1975 & 3.30 & 4.79 + a number of extensions of the method developed in this paper are possible .for example , we can accommodate other types of outcome variables by considering different loss functions . instead of the gcv statistic we use ,alternative criteria such as aic or bic statistics as well as more targeted quantities such as the average treatment effect for the target population can be employed .while we use lasso constraints , researchers may prefer alternative penalty functions such as the scad or adaptive lasso penalty .furthermore , although not directly examined in this paper , the proposed method can be extended to the situation where the goal is to choose the best treatment for each individual from multiple alternative treatments . finally , it is of interest to consider how the proposed method can be applied to observational data [ e.g. , see who develop a doubly robust estimator for optimal treatment regimes ] and longitudinal data settings where the derivation of optimal dynamic treatment regimes is a frequent goal [ e.g. , ] .the development of such methods helps applied researchers avoid the use of ad hoc subgroup analysis and identify treatment effect heterogeneity in a statistically principled manner .an earlier version of this paper was circulated under the title of `` identifying treatment effect heterogeneity through optimal classification and variable selection '' and received the tom ten have memorial award at the 2011 atlantic causal inference conference .we thank charles elkan , jake bowers , kentaro fukumoto , holger kern , michael rosenbaum , and sherry zaks for useful comments . the editor , associate editor , and two anonymous reviewers provided useful advice .
when evaluating the efficacy of social programs and medical treatments using randomized experiments , the estimated overall average causal effect alone is often of limited value and the researchers must investigate when the treatments do and do not work . indeed , the estimation of treatment effect heterogeneity plays an essential role in ( 1 ) selecting the most effective treatment from a large number of available treatments , ( 2 ) ascertaining subpopulations for which a treatment is effective or harmful , ( 3 ) designing individualized optimal treatment regimes , ( 4 ) testing for the existence or lack of heterogeneous treatment effects , and ( 5 ) generalizing causal effect estimates obtained from an experimental sample to a target population . in this paper , we formulate the estimation of heterogeneous treatment effects as a variable selection problem . we propose a method that adapts the support vector machine classifier by placing separate sparsity constraints over the pre - treatment parameters and causal heterogeneity parameters of interest . the proposed method is motivated by and applied to two well - known randomized evaluation studies in the social sciences . our method selects the most effective voter mobilization strategies from a large number of alternative strategies , and it also identifies the characteristics of workers who greatly benefit from ( or are negatively affected by ) a job training program . in our simulation studies , we find that the proposed method often outperforms some commonly used alternatives .
a quantum computer is a quantum system whose time evolution can be thought of as a computation , much in the same way as we think of the time evolution of a pocket calculator to be a computation . for our pourposes it will suffice to model the quantum system as a `` black box '' and focus our attention on two discrete observables out of a complete set , which we shall call the input and output register .following the standard notation , we shall indicate the computation of a function as the first ket describing the state of the input register and the second the state of the output register .kets are labelled according to the elements of and they represent .one of the most powerful features of quantum computation is _ quantum parallelism_. the superposition principle of quantum mechanics allows us to prepare the computer in a coherent superposition of a set of input states . after a single run ,all of the corresponding outputs appear in the final state , according to the time evolution unfortunately , this is no `` pay one , take n '' .in fact , the result is an entangled state of the input and output registers and there is no single measurement allowing us to extract from it all the computed values of .however , it may well be possible to distil from this final state some _ global property _ of the function , thus exploiting quantum parallelism .one of the most famous examples was presented by d. deutsch , who showed that a single quantum computation may suffice to state whether a two valued function of a two valued variable is constant or not .d. deutsch and r. jozsa later generalized this result showing that the problem of classifying a given function as `` not constant '' or `` not balanced '' can be solved in polynomial time by means of a quantum computer ( the time required by a classical solution is exponential ) .also d. r. simon showed that the problem of determining if a function is invariant under a so called xor mask , while it is classically intractable , admits an efficient quantum solution .all of the algorithms cited above ( apart from the last , for which simon also considered a fully probabilistic generalization ) are characterized by a variable running time and zero error probability .they consist of a non classical computation like ( [ thestandardcomputation ] ) followed by a measurement of the final state of the computer , as a result of which either the correct answer is obtained or the relevant information is destroyed and an explicitly inconclusive result is returned . in the latter case one has to go through the whole procedure again , so that only an average running time for the algorithm can be estimated .global properties of functions that can be determined by such an algorithm are said to be computable by quantum parallelism ( qpc).this definition was put forward by jozsa who also demonstrated that , at least in the case of two valued functions , the qpc properties that can be determined by means of a single computation are an exponentially small fraction of all the possible global properties . in this paper we tackle the general problem of stating whether a function is constant or not . we show that for or this property is not qpc , meaning that any measurement following a computation like ( [ thestandardcomputation ] ) has a finite probability of yielding a wrong result .we therefore investigate the power of quantum parallelism in a fully probabilistic setting . 
assuming that the ( classical ) computation of on randomly sampled points yields a constant value, we calculate the posterior probability that is actually constant .we then compute the analogous probability for a quantum algorithm requiring the same number of computations of .comparison of the two results shows that our quantum strategy allows making a better guess at the solution , its indications being more likely to be correct .we shall now briefly recall the classical example put forward by d. deutsch , before confronting the problem of its generalization .suppose we are given a function and we are interested to know whether is constant or not .of course there are only four such functions ( i.e. four instances of the problem ) , namely if all we can use is a classical computer , there is only one way to do the job : we must compute _ both _ and and compare them to check if they are equal . on the contrary , since in this simple case the property `` is constant '' is qpc , a quantum computer gives us a fair chance of finding the solution at the cost of the single computation after the computation , the calculator halts with its input and output registers in one of four possible states , corresponding to the four possible functions : since the above states are linearly dependent , they can not be distinguished with certainty .this means that no measurement can establish which function was actually computed , or , which is the same , it s impossible to extract from the final state _ both _ the values of and .however , we need only discriminate and , the final states yielded by the constant functions , from and . this can actually be done by measuring on the final state of the two registers an observable with the following non degenerate eigenstates : these four states can be thought of as `` flags '' indicating the result of the computation and have been named according to their meaning .this becomes clearer as soon as we rewrite the final states ( [ finalstates ] ) on the basis of the above eigenvectors : it is now evident that 1 .projection along the eigenvector can only take place if the state of the computer is either or , i.e. when is constant ; 2 .likewise , projection along the eigenvector can only occur if is not a constant function ; 3 . 
regardless of the final state of the two registers after the computation ,the measurement can yield a state with probability ; 4 .state is orthogonal to the four final states listed above , and therefore it should show up only as a consequence of noise induced errors .note that , according to the definition of the qpc class , the quantum algorithm can either give us the correct answer or no answer at all : as long as everything works properly , we ll never get a wrong result .this comes in handy when we are asked to solve a decision theoretic problem in which simply waiting has a much higher utility than taking a wrong action .in this case we can discard the fail results and base our decisions upon the meaningful answers , which we know to be correct .the straightforward generalization of deutsch s example would go as follows .given a function and assuming we can perform the non classical computation we are asked to devise an observable on the joint state of the two registers such that , after a single measurement of , we can either 1 .obtain a reliable indication that function is constant ; 2 .obtain an equally reliable indication that is not constant , or finally 3 .get an explicitly inconclusive result .let be the hilbert space of the joint states of the input and output registers .if is the basis of formed by the ( non degenerate ) eigenstates of , all that is needed would be the existence of two disjoint subsets such that 1 .all the final states obtained from the computation of constant ( non constant ) functions have a non zero projection along ( along ) ; 2 .the final states corresponding to non constant ( constant ) functions are orthogonal to ( to ) .these two requirements are evidently fulfilled in the case of deutsch s example , as can be easily seen by taking and ( for further details see ) .however , as soon as the domain and range of the function grow larger , requirements _ i. _ and _ ii . _ become incompatible .what happens is that whenever or the computation of constant functions yields final states that are linearly dependent upon those obtainable from non constant functions .this clearly forbids the existence of , since the final states coming from non constant functions can not be orthogonal to .in other words , the global property `` is constant '' is no longer qpc in the general case . note that , as demonstrated by jozsa , this result is independent of the particular superposition used as the input state for the non classical computation ( [ thecomputation ] ) .the fact that the investigated property of is not qpc compels us to work in a fully probabilistic setting in order to cope with the possibility of wrong results . 
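Before moving to the general case, the n = m = 2 example described above can be checked numerically. The final states are written in the computational basis |x>|f(x)>; since the measurement eigenvectors are not recoverable from the extracted text, the snippet uses one valid choice of "flag" basis with the advertised properties: constant functions never trigger the "not constant" flag and vice versa, every function yields "fail" with probability 1/2, and the "error" state is orthogonal to all four final states.

```python
import numpy as np

# Basis ordering of the two registers: |0,0>, |0,1>, |1,0>, |1,1>.
final_states = {
    "f = 0 (constant)": np.array([1, 0, 1, 0]) / np.sqrt(2),
    "f = 1 (constant)": np.array([0, 1, 0, 1]) / np.sqrt(2),
    "f = identity":     np.array([1, 0, 0, 1]) / np.sqrt(2),
    "f = negation":     np.array([0, 1, 1, 0]) / np.sqrt(2),
}

# One flag basis with the required properties (an assumption: the paper's
# explicit eigenvectors are missing from the extracted text).
flags = {
    "fail":         np.array([1,  1,  1,  1]) / 2,
    "constant":     np.array([1, -1,  1, -1]) / 2,
    "not constant": np.array([1, -1, -1,  1]) / 2,
    "error":        np.array([1,  1, -1, -1]) / 2,
}

for name, psi in final_states.items():
    probs = {k: round(float(np.abs(v @ psi) ** 2), 3) for k, v in flags.items()}
    print(name, probs)
# Constant functions split 1/2-1/2 between "fail" and "constant"; the two
# non-constant functions split 1/2-1/2 between "fail" and "not constant".
```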
preserving the general structure of the algorithmas outlined at the beginning of the preceding section , we note that we can still devise an observable such that any `` '' ket has a large projection along a subset of the eigenstates of ; for example , we can arrange for to be the very space spanned by the `` constant '' vectors .the problem is now that since `` non constant '' kets generally have a non zero projection along , measuring no longer ensures a clear cut distinction between constant and non constant functions .however , since `` non constant '' final states do have some component along the orthocomplement of , measuring still gives some ( probabilistic ) information about the computed function .we are left with two asymmetrical possibilities ( actually , as we shall see , a more convienient choice for also makes an explicitly inconclusive result possible ) : 1 . measuring yields an eigenvalue associated to the orthocomplement of in . since this can only happenif the computed function is _ not _ constant , this is an exact solution to the problem .2 . measuring projects the final state of the two registers onto a state in .if the computed function were constant , this would be the only possibility ; unfortunately , as seen above , other functions may also yield the same result .we have therefore obtained only a probabilistic indication about being constant .it is now clear that the generalized algorithm is essentially similar to a classical probabilistic algorithm , in that its results are not necessarily correct .nevertheless , as we shall see in the following sections , the posterior probability of the function actually being constant after a result of type _ b. _ is obtained turns out to be much larger for our quantum algorithm than for the classical `` sampling '' strategy ( see section [ evaluation ] ) . in the rest of this sectionwe shall deal with the choice of the observable , which constitutes the core of the algorithm .we would now like to introduce a correspondence between the hilbert space of the two registers of the computer and the space of complex matrices with rows and columns .let be the computational basis of , the first ket referring to the state of the input register and the second to that of the output register : we define the isomorphism by identifying with the matrix whose elements are {i , j}= \delta_{m , i}\delta_{n , j}.\ ] ] the isomorphism , which maps the elements of onto the canonical basis of , is then extended by linearity to the whole .since the final state of the computer after the computation of function is the entries of the corresponding matrix turn out to be , so that somehow resembles the graph of drawn with the `` '' axis along the rows and the `` '' axis pointing down . instead of for the upper left element of matrix . ]it is easy to check that the scalar product in is preserved by , i.e. for any two vectors , of ( we write for the scalar product in . )we shall now construct the observable as specified at the beginning of this section .an observable in is identified by its eigenstates that form an orthogonal basis , or , using the isomorphism , by orthogonal matrices in .we propose to take the two parameter matrices with and whose entries are defined by we shall call the above matrices _ fourier transform matrices _ ( ftm ) .we recall that given a matrix , its two dimensional discrete fourier transform is defined as therefore the components of on the ftm basis are the entries of its discrete fourier transform . 
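A sketch of this construction, under one assumed normalization of the Fourier-transform-matrix basis (the explicit entries are lost in the extraction): it builds the matrices F_{k,l}, forms the matrix image of the final state for a given f, and computes the projection probabilities onto the k = 0 row of the basis, which the following paragraphs single out as the "constant" flags, with F_{0,0} set aside as the "fail" outcome.

```python
import numpy as np

def flag_probabilities(f_vals, m):
    """Probabilities of the 'fail', 'constant' and 'not constant' outcomes of
    one run, for f: {0..n-1} -> {0..m-1}, with an assumed FTM normalization."""
    n = len(f_vals)
    # Matrix image of the final state (1/sqrt(n)) * sum_x |x>|f(x)>.
    A = np.zeros((n, m), dtype=complex)
    A[np.arange(n), f_vals] = 1.0 / np.sqrt(n)

    i, j = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    def ftm(k, l):                        # normalized basis matrix F_{k,l}
        return np.exp(2j * np.pi * (i * k / n + j * l / m)) / np.sqrt(n * m)

    proj = lambda F: float(np.abs(np.sum(np.conj(F) * A)) ** 2)
    p_fail = proj(ftm(0, 0))
    p_const = sum(proj(ftm(0, l)) for l in range(1, m))
    return p_fail, p_const, 1.0 - p_fail - p_const

n, m = 8, 4
print(flag_probabilities([2] * n, m))              # constant function
print(flag_probabilities([2] * (n - 1) + [3], m))  # constant except one point
print(flag_probabilities(list(range(4)) * 2, m))   # balanced function
```

With this normalization a constant function gives "fail" with probability 1/m and the "constant" flag with probability 1 - 1/m; the nearly constant function still gets a large "constant" probability, while the balanced function gets zero.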
we still have to decide which eigenvectors are to be taken as an indication of the function being constant .in other words we have fixed the basis but have yet to choose the subset .we take as composed by the matrices , with .it is easy to check that spans the subspace of generated by the set of the matrices corresponding to the constant functions .we have not included in because the projection probability of the computer s final state on is the same for all functions : therefore has the same role that state had in deutsch s example . in the followingwe shall put and we shall speak equivalently of the matrix in or of the state in . likewise , since for all and every matrix , subset plays the role of the state in deutsch s example ( section [ theexample ] ) . the remaining ftm matrices constitute set : note that we did not put a prime on , since it does satisfy both conditions _ i. _ and _ ii . _listed in section [ impossibility ] .this accounts for the lack of symmetry we pointed out at the beginning of section [ generalization ] .suppose we run the quantum algorithm times on the same function and we always get an indication that is constant ( a projection onto ) .we need to gauge the reliability of this result , which we can do by computing the posterior probability ( for short ) that the function really is constant .this quantity can also be used to compare the efficiency of the quantum algorithm against a conventional classical solution , since what we are looking for is a procedure giving the lowest probability of error in change for the same computational effort . to evaluate ( [ prob ] ) we use bayes theorem , that is where by we mean the joint probability that is constant _ and _ that runs of the algorithm yield a `` constant '' outcome , corresponding to the final state being projected along . by the product rule ,this can be expressed as assuming a uniform probability distribution on all the possible functions , we have .regardless of the computed function , fail results have a probability of showing up ( see equation [ prfail ] ) .this leads to . as a consequence ( [ qnum ] ) becomes the denominator of ( [ bayes ] )can be expanded over all the possible functions of type ( [ f ] ) : since we assumed the input functions to be uniformly distributed , we have the runs of the quantum algorithm are stochastically independent and that implies that the likelihoods appearing in ( [ total ] ) are simply given by ^k,\ ] ] where with we mean the likelihood of a single run , the probability of a projection onto when the function is .so we can concentrate only on , which , with the help of the sum rule , can be expressed as here stays for event `` after the measure the computer s final state projects itself onto the subspace of all constant functions '' , for the projection onto the matrix , which represents the -th constant function and is defined as , and fail for the projection onto .we have used the fact that , thanks to the orthogonality relations , events are mutually exclusive and so are and fail and that and . as we have seen in ( [ prfail ] ) , . on the other hand where is the matrix related through isomorphism to the computer s final state when the function is ( note that most of the elements of are zero , since ) .let us now compute explicitly the trace that appears in the r.h.s . of ( [ ter ] ) : equation ( [ ter ] ) then becomes equation ( [ trace ] ) contains the sum of the elements appearing in the -th row of matrix ( [ af ] ) . 
since matrix has a sole one in any column , this sum is equivalent to the number of ones in the -th row of .this gives us an idea for a smart classification of all the possible functions appearing in ( [ total ] ) : we associate with every function an , where is the number of rows of its corresponding matrix with ones and zeroes .doing so we can replace the sum over appearing in ( [ total ] ) by a sum over the , with conditions condition ( [ one ] ) expresses the requirement that the total number of ones in matrix is ( or , since each column contains a sole one , that has columns ) , while ( [ two ] ) is equivalent to the condition that has rows . in the following, we shall indicate with the set of the that satisfy equations ( [ zero])([two ] ) .note that , since every corresponds to more than one function , when summing over the we must use the right combinatorial factors .these , for a fixed , are given by : the first term corresponding to column permutations and the second to row permutations .we can now use equation ( [ finalpr ] ) together with this way of classifying the functions to evaluate the total likelihood that appears as the first term in equation ( [ key ] ) .if stands for a function corresponding to the , consequently ( [ totlik ] ) becomes and ( [ indip ] ) becomes in turn ^k\text{.}\ ] ] now we can sum over all the possible with conditions ( [ zero])([two ] ) and with the combinatorial factors ( [ comb ] ) , obtaining the expression of equation ( [ total ] ) in the quantum case: ^kc_{j_0,j_1,\ldots , j_n}\ ] ] ( n.b .we have used the fact that , since we suppose a prior uniform probability distribution on all the functions ) .finally , using also equation ( [ num ] ) we can express the posterior probability ( [ bayes ] ) in the quantum case as ^k c_{j_0,j_1,\ldots , j_n}}\text{.}\ ] ] in the following section we will derive the corresponding expression for the classical case .there exists at least one obvious classical probabilistic algorithm that can be used to spot constant functions .we can simply compute the value of on randomly chosen points of its domain and decide that is constant if its restriction to the sampled points is .this procedure , which we shall call the `` sampling algorithm '' , evidently constitutes the best possible classical strategy to solve the problem , since it uses up all the information we can gain on by classical computations . in order to allow a direct comparison with the quantum algorithm, we have to find out what the posterior probabilities are in this case .starting again from bayes theorem , we can express the numerator of equation ( [ bayes ] ) as that by ( [ unif ] ) is equal to ( note that in the classical case , since no fail results exist ) .we must now evaluate the denominator of bayes formula , namely equation ( [ total ] ) .choosing the inputs at random actually turns out to be inessential as long as the functions are uniformly distributed : sampling the first points is just as good .let us therefore divide all the possible functions into two classes .the first is made up by those for which at least the first values are constant ; they are .all the other functions belong to the second class . as a consequence , the likelihoods that appear in the r.h.s . 
of ( [ total ] )are simply given by putting this expression in ( [ total ] ) , and recalling ( [ unif ] ) , we can rewrite ( [ bayes ] ) as this result is to be compared with equation ( [ quantum ] ) , which gives the corresponding posterior probability after runs of the quantum algorithm . in order to do so , formula ( [ quantum ] )must evidently be evaluated by means of a ( classical ! ) computer . before listing the numerical results , however , we are going to discuss two special cases that can be solved analytically in the limit of large .we shall now analyse the behaviour of our generalized quantum algorithm in the worst possible case , that is when the computed function has maximum probability of being mistaken for a constant function , even if it is not .this occurs quite naturally for a matrix of the following kind : { \overbrace { \begin{array}[c]{cccccc } 1 & 1 & 1 & \ldots & 1 & 0 \\ 0 & 0 & 0 & \ldots & 0 & 1 \\ 0 & 0 & 0 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 0 & 0 \end{array } } ^{n } } \hspace{1 mm } \right ) \right\}\scriptstyle{m}\ ] ] representing a function that is constant on its whole domain but for one point .the resulting probability of error is given by the squared modulus of the projection of on the space spanned by the set of the matrices associated to constant functions is so large that the probability of a fail result is negligible .] , that is therefore tends to one in the limit of large . here again , in order to compensate for this we have to run the quantum algorithm several times , say ( classically , we would have to sample more and more points ) .if we want to keep the probability of being `` cheated '' by an almost constant function as low as a given value , we evidently have to choose so that , that is as we would expect , does tend to infinity in the limit of large , meaning that exploring an even larger domain requires an infinite number of computations .it is nevertheless interesting to study the ratio of the number of runs to the number of elements in the domain . in the limit of large , this becomes which is a constant independent on .therefore , if we are required to perform the computation with a worst case error probability , we have to run our quantum computer a number of times which , in the limit of large , is a definite fraction of .equation ( [ worstcaseerrorprobability ] ) can in this case be inverted to obtain as a function of , yielding .coming now to the classical case , sampling a fraction of the points in the domain ( which requires computations ) entails having a probability of mistaking for .in figure [ worstcasefigure ] we plotted the worst case probability of error against for both the quantum and the classical algorithm in the limit of large . in the quantum case decreases more rapidly and stays well below the classical probability of error as long as is not too close to ( remember that the `` sampling '' algorithm is no longer probabilistic if we compute our function over its entire domain ! ) .looking now at the best case , we find that there is again a single class of functions which is easily dealt with by both algorithms , that is one to one functions or permutations of the points in the domain ( this obviously requires to be equal to ) . using the classical sampling algorithm, one can evidently be sure to distinguish an invertible function from a constant one with only two computations , since the former does not assume any value in its range more than once . 
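The classical posterior just described can be checked by brute force for small n and m: enumerate all m^n functions, keep those whose first k sampled values agree, and count how many are truly constant. Under the uniform prior the counting argument in the text gives the closed form m^(k-n), which the enumeration reproduces.

```python
from itertools import product

def classical_posterior(n, m, k):
    """Exact P(f constant | first k sampled values are equal) under a
    uniform prior over all m**n functions f: {1..n} -> {1..m}."""
    consistent = constant = 0
    for f in product(range(m), repeat=n):
        if len(set(f[:k])) == 1:          # first k values are constant
            consistent += 1
            constant += len(set(f)) == 1  # f is constant everywhere
    return constant / consistent

n, m = 6, 3
for k in range(1, n + 1):
    print(k, classical_posterior(n, m, k), m ** (k - n))
# The brute-force value matches the closed form m**(k - n) for every k.
```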
in the quantum case ,permutations are associated with matrices having exactly one `` 1 '' in each row and in each column .such matrices turn out to be orthogonal to .therefore , a measurement of the final state yielded by a permutation can either result in a fail or in projection along , which indicates that the function is not constant .now fails can only be obtained with probability , which luckily vanishes as grows larger .we conclude that , in the limit of large , the quantum algorithm is practically guaranteed to spot a one to one function at first sight , after a single computation , thus doubling the efficiency of the classical algorithm . by the way, we note that if we only had to tell constant functions from permutations if our practical problem did nt require us to deal with non invertible , non constant functions we would be back to the original situation of deutsch s example .we can now see what was so special about the four functions considered by deutsch in his original example ( see equation [ thefourfunctions ] ) .when both the domain and the range consist of two points only all non constant functions turn out to be one to one , so that all ambiguity is removed .we are including , in figures [ fig8x2lin ] through [ fig24x24log ] , some comparative graphics of the posterior probabilities expressed by equations ( [ bayes2 ] ) and ( [ quantum ] ) versus the number of successful computations effected ( by successful computation we mean all computations barring fail results ) .as our previous analysis suggested , the quantum algorithm turns out to be far more efficient than the classical `` sampling '' algorithm for small values of .we emphasize that this result is entirely dependent upon the use of quantum parallelism .this highly non classical feature of quantum computation apparently allows a quicker exploration of the domain of function , even in the case that the investigated property is _ not _ qpc .the posterior probability we used for our numerical calculations is conditoned to a sequence of `` constant results '' of the quantum algorithm .we have overlooked the possibility of obtaining one or more fail outcomes .this is particularly significant when ( figures [ fig8x2lin ] and [ fig16x2log ] ) , because in such cases a fail result has a probability to show up .this means that in order to obtain projections of the final state of the computer along one must expect to run the quantum computer times .nervertheless , as the graphics show , the quantum strategy always turns out to be convenient , at least for small values of .we finally note that as and grow larger ( see for instance figure [ fig24x24log ] ) the resulting posterior probabilities turn out to be so low that both the quantum and the classical algorithm are virtually useless .this entirely depends on our assumption of an uniform distribution over functions , which is probably eccessively penalizing . in real world situations , we can expect the quantum algorithm to be useful in any situation in which the `` sampling '' algorithm is successfully employed at the present day .we are grateful to c. m. becchi for posing the question which led to this work .we acknowledge the interest of g. castagnoli and the collaboration with elsag bailey ; we also thank a. ekert and c. macchiavello for interesting discussion .special thanks to e. beltrametti for continuous help and advice .2 a. ekert , r. jozsa : quantum computation and shor s factoring algorithm , _ reviews of modern physics _ * 68 * , 733754 ( 1996 ) r. 
jozsa : characterizing classes of functions computable by quantum parallelism , _ proc . r. soc . lond . _ a * 435 * , 563 - 574 ( 1991 ) . d. deutsch : quantum theory , the church - turing principle and the universal quantum computer , _ proc . r. soc . lond . _ a * 400 * , 97 - 117 ( 1985 ) . d. deutsch , r. jozsa : rapid solution of problems by quantum computation , _ proc . r. soc . lond . _ a * 439 * , 553 - 558 ( 1992 ) . d. r. simon : on the power of quantum computation , _ proceedings of the 35th annual ieee symposium on the foundations of computer science _ , 1994 , 116 - 123 .
quantum parallelism is the main feature of quantum computation . in 1985 d. deutsch showed that a single quantum computation may be sufficient to state whether a two valued function of a two valued variable is constant or not . though the generalized problem with unconstrained domain and range size admits no deterministic quantum solution , a fully probabilistic quantum algorithm is presented in which quantum parallelism is harnessed to achieve a quicker exploration of the domain with respect to the classical `` sampling '' strategy .
we collected and analyzed more than 4.5 million time - stamped emails from students at a globally top - ranked mba program , focusing specifically on the relationship between students evolving communication networks and their subsequent career outcomes .this data is available in the form of email logs recorded and stored by the university , along with registrar data on each student before and during their matriculation in the program .included in the dataset is a record of each email sent by an mba student between fall 2006 and spring 2008 .the record includes the date and time at which the email was sent and received and the ( anonymous ) numeric ids of the sender and receiver of the message .academic records ( gmat scores , grades , extra - curricular activities , prior work experience and job titles ) , and demographic data ( age , race and nationality ) , were merged with the network data to connect email transmissions with personal characteristics .there are approximately 11.5 million e - mails and 4.5 million student - student e - mails in the data .an important characteristics of the data is its randomized design .students are randomly assigned to sections within the school , minimizing selection effects .since observations began when students first met each other , we eliminated the left censoring that typically occurs when network data are captured after ties have already been formed .in order to understand the causal relationship between network effects and job rank , we used coarsened exact matching ( cem ) ( see ) to construct a reduced , matched sample ( matching students on all characteristics possible , i.e. , age , gpa , industry experience ) . then on the matched data , we further examine the significance of network effects on students job rank . additionally , in order to demonstrate that our observations can not be explained by a random network process, we compare real observations to the null model where the degree sequence of network is preserved and links are placed completely randomly .three important findings emerge from the data .first , a student s likelihood of securing a coveted , high paying job after graduation is strongly associated with the type of network they develop during their time in the program .students with a higher network degree , greater network centrality , and more balanced communication among alters tend to have the highest post - graduation salaries , even when matching students along several covariates ( figure [ fig_1 ] ) .second , and somewhat surprisingly , we find that the structural characteristics of a student s ego network emerge as early as one month into the program and remain remarkably stable thereafter ( figures [ fig_1 ] [ fig_2 ] ) .the above findings suggest that , when exposed to a new social system , actors initial network configurations may provide `` early warning signals of success '' a finding that has important practical and policy - level implications .lastly , we observe robust patterns of `` rich - club '' behavior at the level of the global network , where highly central students are more densely connected among themselves than students of a lower degree . 
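A sketch of the kind of network computation involved, using a synthetic graph in place of the non-public e-mail edge list: degree, one centrality measure, the unnormalized rich-club coefficient, and a degree-preserving configuration-model null graph for comparison. The graph generator, the choice of eigenvector centrality, and the threshold k are placeholders, not the paper's exact specification.

```python
import networkx as nx

# Placeholder for the student-to-student e-mail graph; in the real data the
# nodes are anonymized student ids and an edge means at least one message.
G = nx.barabasi_albert_graph(200, 3, seed=1)

degree = dict(G.degree())                              # network degree
centrality = nx.eigenvector_centrality(G, max_iter=500)

# "Rich-club" behaviour: density of links among nodes of degree > k.
rich = nx.rich_club_coefficient(G, normalized=False)

# Degree-preserving null model: random graph with the same degree sequence,
# links otherwise placed at random.
null = nx.configuration_model([d for _, d in G.degree()], seed=0)
null = nx.Graph(null)                                  # collapse parallel edges
null.remove_edges_from(nx.selfloop_edges(null))
rich_null = nx.rich_club_coefficient(null, normalized=False)

k = 10
print(rich.get(k), rich_null.get(k))
```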
while the students who comprise the rich - club networks were more likely to have a higher salary post - graduation , they were also less likely to have the highest rank on other objective indicators of ability , such as gmat scores and gpa ( figure [ fig_3 ] ) .we interpret these findings as evidence of a trade - off between the development of human capital and social capital during an mba program : on the one hand , students can choose to invest in building technical skills and domain - specific knowledge to enhance their career prospects ; on the other hand , they can choose to invest in building their social capital by developing new ties and fostering a robust community of peers . though these choices are certainly not mutually exclusive , the economic benefits of the former appear to significantly outweigh the benefits of the latter .despite the fact that business schools advertise their role in fostering a valuable , life - long network ( haas mba 2013 ) , we find considerable differences in the types of networks that students actually develop - differences that are strongly linked to their future job placement and , ultimately , their access to the inner circles of the managerial elite .our findings have important implications for both mba students and the firms that hire them . from the standpoint of a student, the data suggest that important resources exist within an emerging mba network that ultimately improve one s attractiveness to employers . while we can only speculate about the mechanism underling this pattern of results i.e. specific networks may foster the development of particular social skills or , instead, they may provide access to employer networks outside of the program itself we nevertheless find that it quite literally `` pays '' to develop one s social network in the early stages of a program . from the standpoint of prospective recruiters , the subtle signals that allow one to reliably infer a student s network structure may provide valuable insight into the qualities that the applicant will bring with them to their new job .michael useem .classwide rationality in the politics of managers and directors of large corporations in the united states and great britain .administrative science quarterly vol .27 , no . 2 ( jun . , 1982 ) , pp .
the `` business elite '' constitutes a small but strikingly influential subset of the population , oftentimes affecting important societal outcomes such as the consolidation of political power , the adoption of corporate governance practices , and the stability of national economies more broadly . research has shown that this exclusive community often resembles a densely structured network , where elites exchange privileged access to capital , market information , and political clout in an attempt to preserve their economic interests and maintain the status quo . while there is general awareness that connections among the business elite arise because `` elites attend the same schools , belong to the same clubs , and in general are in the same place at the same time '' , surprisingly little is known about the network dynamics that emerge within these formative settings . here we analyze a unique dataset of all mba students at a top 5 mba program . students were randomly assigned to their first classes ; friendship among students prior to coming into the program was rare ; and the network data email transmissions among students were collected for the year 2006 when students almost entirely used the school s email server to communicate , thereby providing an excellent proxy for their networks . after matching students on all available characteristics ( e.g. , age , grade scores , industry experience , etc . ) i.e. creating `` twin pairs '' we find that the distinguishing characteristics between students who do well in job placement and those who do not is their network . further , we find that the network differences between the successful and unsuccessful students develops within the first month of class and persists thereafter , suggesting a network imprinting that is persistent . finally , we find that these effects are pronounced for students who are at the extreme ends of the distribution on other measures of success students with the best expected job placement do particularly poorly without the right network ( `` descenders '' ) , whereas students with worst expected job placement pull themselves to the top of the placement hierarchy ( `` ascenders '' ) with the right network .
the question of whether and how a given function can be expressed approximately by polynomials is of great importance in theory as well as in practice . for example , by definition , an explicit , finite formula is unavailable for transcendental functions , and instead , an appropriate polynomial approximation is chosen to replace the function . because polynomials , particularly the ones of low order , are easy to manipulate , this approach provides computational speed with minimal penalty in accuracy . a natural candidate for polynomial approximation is a truncated taylor expansion , typically at the midpoint of the interval where the approximation is most accurate . taylor s theorem and the weierstrass approximation theorem assert the possibility of local approximation of an arbitrary function . moreover , the approximation accuracy improves as the degree of the polynomial increases . however , this improvement comes at the expense of complexity and computational speed . this expense can be substantially reduced if the function can be approximated to the same accuracy with a lower degree polynomial . here , we show analytically that an arbitrary function can be approximated via legendre polynomials using _ non - uniformly _ spaced points on an interval as the input , and that at least for some functions , approximation with legendre polynomials yields a substantially higher accuracy and faster convergence compared to taylor expansion of the same order ( i.e. , with the same number of non - zero coefficients ) . we further demonstrate the improvement in accuracy over taylor expansion numerically , using the sine , exponential , and entropy functions . consider the problem of estimating the instantaneous slope of the curve mapping the output of the function to ] is given by where denotes the expectation of . because is uniform in ] is the variance in the interval ] and zero at the ends . equation [ eq : slope-2 ] allows estimation of the instantaneous slope over not just the points that are uniformly spaced , but all points in the interval ] .
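Because the equation itself is garbled in the extraction, the following is only a guess at the quantity being discussed: the slope of the least-squares line fitted to f over uniformly spaced points in [a, b], which equals the covariance of x and f(x) divided by the variance of x and, for small intervals, approximates the derivative at the midpoint.

```python
import numpy as np

def interval_slope(f, a, b, n=1001):
    """Slope of the least-squares line fitted to f over uniformly spaced
    points in [a, b]: cov(x, y) / var(x)."""
    x = np.linspace(a, b, n)
    y = f(x)
    return float(np.cov(x, y, bias=True)[0, 1] / np.var(x))

print(interval_slope(np.sin, 0.0, 0.5))   # close to the derivative cos(0.25)
```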
to see this , consider the shifted legendre polynomials of order , defined by rodrigues formula : }(x ) & = & \dfrac{1}{2^{n}n!}\dfrac{d^{n}}{dx^{n}}\left(1-\left[\dfrac{2x - a - b}{b - a}\right]^2\right)^{n}\nonumber \\ & = & \dfrac{1}{n!(b - a)^{n}}\dfrac{d^{n}}{dx^{n}}\left[\left(b - x\right)(x - a)\right]^{n } \label{eq : rodrigues}\end{aligned}\ ] ] which are orthogonal functions with respect to the inner product },p_{m,[b , a]}>=\dfrac{b - a}{2n+1}\delta_{nm}\label{eq : innerprod}\ ] ] where denotes the kronecker delta , equal to if and to otherwise .furthermore , legendre polynomials of order to are the same as the orthogonal polynomials obtained by the gram - schmidt process on the polynomials with respect to the inner product given by equation [ eq : innerprod ] up to a constant multiplication factor .therefore , by adding the basis functions , we obtain the polynomial fit to an arbitrary function on the interval ] in equation [ eq : polyfit ] has a simple and telling interpretation .note that } > & = & \dfrac{1}{n!(b - a)^{n}}\intop_{a}^{b}y\dfrac{d^{n}}{dx^{n}}\left[\left(b - x\right)(x - a)\right]^{n}dx\label{eq : legendre_approx } \\ & = & \dfrac{(-1)^{n}}{n!(b - a)^{n}}\intop_{a}^{b}\dfrac{d^{n}y}{dx^{n}}\left[\left(b - x\right)(x - a)\right]^{n}dx\label{eq : legendre}\end{aligned}\ ] ] which follows from equation [ eq : rodrigues ] .the integral in equation [ eq : legendre ] can be solved using integration by parts , and none of the boundary terms appear in the solution because for and . moreover , } > = \dfrac{(b - a)^{n}(-1)^{n}n!}{(2n+1)!}e_{\beta(n+1,n+1;a , b)}\left[\dfrac{d^{n}y}{dx^{n}}\right]\ ] ] where is a beta probability distribution with degrees of freedom , shifted and scaled so that it lies on the interval ] .note that all three functions satisfy the boundedness condition above .for comparison , we approximate the functions with taylor and legendre polynomials using at most six non - zero coefficients ( figure [ fig : tayvslagfit ] ) .because the sine function has odd symmetry about and the entropy function is odd , the expansions of these function involve , respectively , polynomial degrees of 11 and 10 of even and odd order .we estimate the signal - to - noise ratio ( snr , in decibels ) for approximations as to quantify the accuracy in approximation ( table [ tab : impr ] ) ..signal - to - noise ratios ( in decibels ) for the taylor ( t ) and legendre ( l ) polynomial approximations .note the higher snr for the legendre polynomial approximation .the accuracy of legendre polynomial approximation is significantly greater than that of taylor approximation of equal order . 
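A numerical sketch of the comparison reported in the table: shifted-Legendre coefficients computed from the inner-product formula above versus the Taylor polynomial about the midpoint, scored by a standard SNR in decibels. The interval [0, 2*pi] for the sine function and the SNR definition used here are assumptions, since the paper's exact setup is not recoverable from the extracted text.

```python
import numpy as np
from math import factorial
from numpy.polynomial import legendre as L

def legendre_fit(f, a, b, deg, n_grid=4000):
    """Least-squares fit by shifted Legendre polynomials on [a, b], using
    c_k = (2k+1)/(b-a) * integral_a^b f(x) P_k(t(x)) dx, t = (2x-a-b)/(b-a)."""
    x = a + (np.arange(n_grid) + 0.5) * (b - a) / n_grid   # midpoint rule
    t = (2 * x - a - b) / (b - a)
    coeffs = [(2 * k + 1) / (b - a)
              * np.sum(f(x) * L.legval(t, [0] * k + [1])) * (b - a) / n_grid
              for k in range(deg + 1)]
    return lambda xs: L.legval((2 * xs - a - b) / (b - a), coeffs)

def taylor_sin(x, c, deg):
    """Taylor polynomial of sin about the midpoint c."""
    derivs = [np.sin, np.cos, lambda z: -np.sin(z), lambda z: -np.cos(z)]
    return sum(derivs[k % 4](c) * (x - c) ** k / factorial(k)
               for k in range(deg + 1))

a, b, deg = 0.0, 2 * np.pi, 11      # odd symmetry about the midpoint, so only
x = np.linspace(a, b, 2001)         # about six Legendre coefficients matter
y = np.sin(x)
snr = lambda yhat: 10 * np.log10(np.sum(y ** 2) / np.sum((y - yhat) ** 2))
print("Legendre SNR (dB):", snr(legendre_fit(np.sin, a, b, deg)(x)))
print("Taylor   SNR (dB):", snr(taylor_sin(x, (a + b) / 2, deg)))
```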
[tab : impr ] consistent with the theoretical considerations above , these results show a rapid convergence of legendre polynomials for an increasing number of polynomial coefficients , and an improved accuracy compared to taylor polynomials of the same order . note that using six coefficients , even the least improvement ( in the case of the exponential function ) is 21.59 decibels ( which amounts to an average error for the legendre approximation 0.087 times the error for the taylor approximation ) . the approximation for the sine function leads to an improvement of 63.18 decibels ( that is , the error of the legendre approximation is on average 0.0007 times that of the taylor approximation ) . note in figure [ fig : tayvslagfit ] that while the taylor polynomial approximation has maximum accuracy at the midpoint , the legendre polynomial approximation distributes the error more uniformly throughout the entire interval . as a result , the squared error of is smaller and the snr is larger in the case of legendre polynomials . our analytical and numerical results show that legendre polynomials can substantially improve the speed and accuracy of function approximation compared to taylor polynomials of the same order . the fast convergence of legendre polynomials was noted in a prior study , but the geometric convergence in norm has not been shown analytically before . the geometric convergence rate is consistent with the general result of srivastava on the relation of the generalized rate of growth of an entire function to the rate of uniform convergence of a polynomial approximation on an arbitrary infinite compact set . and , it should be noted that the geometric convergence is not possible in general for approximants if . thus , approximation using legendre polynomials can provide significant performance improvements in practical applications . we also showed that legendre polynomials have the additional advantage that an arbitrary function can be approximated using _ non - uniformly _ spaced points on a given interval . importantly , their accuracy of approximation is substantially higher than that of the taylor expansion with the same order of polynomials , with a uniform error distribution across the entire interval . therefore , legendre expansion , instead of taylor expansion , should be used when global accuracy is important . k. weierstrass , über die analytische darstellbarkeit sogenannter willkürlicher functionen reeller argumente , sitzungsberichte der königlich preußischen akademie der wissenschaften zu berlin 11 ( 1885 ) 633 .
we describe an expansion in legendre polynomials , analogous to the taylor expansion , to approximate arbitrary functions . we show that the polynomial coefficients in the legendre expansion , and thus the whole series , converge to zero much more rapidly than in the taylor expansion of the same order . furthermore , using numerical analysis with a sixth - order polynomial expansion , we demonstrate that the legendre polynomial approximation yields an error at least an order of magnitude smaller than the analogous taylor series approximation . this strongly suggests that legendre expansions , instead of taylor expansions , should be used when global accuracy is important . keywords : numerical approximation , least squares , legendre polynomial . msc : 41a10 , 65d15
in the field of hydrogeology , many interesting concepts are related to finding the lag between two time series .for example , it is often hypothesized that for a seepage lake there is a significant time lag between net precipitation ( precipitation minus water loss through evaporation and runoff ) and the water levels over time , while such a lag for a drainage lake is often nonexistent or insignificant .seepage lakes are hydraulically isolated from surface water features and primarily fed by groundwater and direct precipitation .drainage lakes are typically connected to a network of streams and rivers ( wisconsin department of natural resources , 2009 ) .another example , which is our motivating example , is the relationship between precipitation and water levels of a shallow well in an unconfined aquifer versus water levels in a relatively deeper well in a semi - confined aquifer .this relationship is particularly important to water resource managers and groundwater modelers who need to accurately quantify groundwater recharge into aquifers , for developing water - supply - plans for sustainable use of aquifers .groundwater recharge , defined as entry of water into the saturated zone , is influenced by a wide variety of factors including vegetation , topography , geology , climate , and soils ( dripps , 2003 , dripps , hunt and anderson 2006 ) .groundwater recharge , which is a small percentage of the precipitation that eventually reaches the water table , is one of the most difficult parameters to quantify .this is because processes such as evaporation , transpiration and infiltration through unsaturated subsurface must first be estimated to determine the amount of water lost after a rainfall event .often times , groundwater models are developed by estimating the groundwater recharge using empirical relationships or as a percentage of precipitation .it is a common practice to use groundwater recharge as a calibration parameter , meaning the recharge value that provides the best calibration to the model is selected as representative for the watershed simulated . for temporal simulations , the lag time between a rainfall event and groundwater recharge into deeper aquifers are often ignored .although the underlying hydrogeological theory supports the existence of above time lags between time series , evidence based on empirical data for such lags have been typically assessed using visual inspection ( e.g. westoff _ et al _ , 2010 in a different hydrogeological context ) or cross - correlations ( levanon _ et al _ , 2016 ) in hydrogeological literature .cross - correlation method is essentially a parametric method , where certain parameters has to be estimated under the transfer - function - model framework and certain assumptions ( such as joint bivariate stationarity of the two time series ) has to be met ( see chapter 14 , wei 2006 ) .also diagnostic checking for model adequacy ( such as whether the noise series and the input series are independent - see again chapter 14 , wei 2006 for the definition of the noise series and input series referred to ) has to be done before cross - correlograms are plotted , although such checking are rarely done in practice . 
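For contrast with the proposal developed below, the conventional lag estimate that a cross-correlation analysis produces can be sketched as follows; prewhitening and the transfer-function diagnostics mentioned above are omitted, which is precisely the shortcut the text cautions against, and the sign convention for the lag is a user choice.

```python
import numpy as np

def ccf_lag(x, y, max_lag):
    """Cross-correlation lag estimate: the non-negative shift of x (e.g.
    precipitation) that maximizes its correlation with y (water levels)."""
    best_lag, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        r = np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```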
in this paper , we propose a non - parametric method to quantify the time lag using a simple adaptation of the visibility graph algorithm ( vga ) , which is an algorithm that converts a time series into a graph and was developed by physicists and seen mainly only within the physics literature so far ( lacasa , 2008 , lacasa and luque , 2010 , nunez _ et al _ 2012 ) . the method that we propose may be summarized as follows . in the proposed method, we consider one of the time series ( e.g. water levels observed in a well ) as a reference time series and create time shifted copies of the other time series of interest ( e.g. precipitation ) .we then use vga to convert all the time series ( original , copies and the reference ) to graphs and their corresponding adjacency matrices , and compare the copies of the latter time series with that of the reference .the ` distance measure ' that is used for the comparisons is the usual metric distance ( based on the frobenius norm ) between two matrices .we identify the copy of the latter time series for which this distance is minimized compared to the reference , and we define the time shift corresponding to this copy as the time lag between the orginal two time series .more details about vga and our adaptation to the time lag problem is provided in the next section using mathematical notation . in section 3we present results from simulations conducted to essentially identify an appropriate sample size and also to assess the performance of the method when values are missing .section 4 illustrates the application of the proposed method to real hydrogeologic datasets , where we also present a strategy to assess the uncertainty related to the lag estimated . finally in the last section, we make our concluding remarks .let us denote the two hydrogeological time series that we are interested in , namely precipitation and water levels , by and ( or simply and ) , respectively .in order to find the time lag between the two time series , as a first step we fix one of the series , say , and obtain time - shifted copies of the other series , the key step in our methodology is the conversion of all the above time series into graphs based on the visibility graph algorithm .graphs are mathematical constructs that are used to study relationships among various objects . in graph modelsthe objects of interest are modeled as nodes or vertices and the relationships among the objects are modeled using edges or lines connecting the vertices .+ etc .denote the time points as well as the corresponding nodes in the visibility graph.,width=624,height=360 ] visibility graph algorithm ( lacasa , 2008 , lacasa and luque , 2010 , nunez _ et al _ 2012 ) is a relatively novel method that extends usefulness of the techniques and focus of mathematical graph theory to characterize time series .it has been shown that the visibility graph inherits several properties of the time series , and its study reveals nontrivial information about the time series itself .figure 1 top panel illustrates how the visibility algorithm works .the time series plotted in the upper panel is an approximate sine series ; specifically , a sine series with gaussian white noise added .the values at 24 time points are plotted as vertical bars .one may imagine these vertical bars as , for example , buildings along a straight line in a city landscape ( i.e. 
a city block ) .each node in the associated visibility graph ( shown in the bottom panel ) corresponds to each time point in the series .so , the graph in figure 1 has 24 nodes .we draw a link or an edge between a pair of nodes , say and , if the visual line of sight from the top of the building ( vertical bar ) situated at towards the top of the building / bar at is not blocked by any intermediate buildings - that is , if we were to draw a line from the top of the vertical bar at to the top of the vertical bar at , it should not intersect any intermediate vertical bars . visibility lines corresponding to the edges in the graph are plotted as dotted lines in the figure in the upper panel . for example , there is no edge between and since the line of sight ( not shown ) between the top points of the vertical bars at these two time points is blocked by the vertical bar at . on the other hand , there is an edge between and since the corresponding visibility line ( shown as a dotted line ) does not intersect the vertical bar at .+ more formally , the following visibility criteria can be established : two arbitrary data values ( , ) and ( ) will have visibility , and consequently will become two connected nodes of the associated graph , if any other data ( ) placed between them fulfills : this simple intuitive idea has been proven useful practically because of certain nice features exhibited by the graphs generated by this algorithm .first of all they are connected , since each node is connected to at least its neighbors .secondly , there is no directionality between the edges , so that the graph obtained is undirected . in addition , the visibility graph is invariant under rescaling of the horizontal and vertical axes and under horizontal and vertical translations . in other words ,the graph is invariant under affine transformations of the original time series data .+ in mathematical notation any graph with nodes could be represented by its adjacency matrix which consists of s and s .the element of is if there is an edge connecting the and the node , otherwise .two graphs , and , can be be compared by the metric `` distance '' , between their corresponding adjacency matrices , and . here , , called the frobenius norm of a matrix , is the square root of the sum of the squares of the elements of the matrix ; that is , the square root of the trace of the product of the matrix with itself . + our proposed method to assess the time lag between the two hydrogeological time series and using the visibility graph approach is as follows : convert the time series into a visibility graph and obtain its corresponding adjacency matrix , .consider time - shifted copies of the time series , , each shifted in time by a lag from the set .convert these time - shifted copies of into their visibility graphs and obtain the corresponding adjacency matrices .we determine the copy for which the frobenius norm is minimized .the time lag between the two original hydrogeological series is then taken as .+ we further illustrate our method using the plots in figure 2 . 
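The visibility criterion and the matrix comparison described above translate almost directly into code. The sketch below is a minimal base-R implementation, not the authors' code; the inline formula for the criterion did not survive extraction, so the standard condition of Lacasa et al. (2008) is assumed, and the direction of the time shift applied to the copies (delay versus advance) is likewise an assumption.

```r
# Visibility graph of a series: points (t_a, y_a) and (t_b, y_b) are connected iff every
# intermediate point (t_c, y_c) satisfies  y_c < y_b + (y_a - y_b)*(t_b - t_c)/(t_b - t_a)
# (standard criterion of Lacasa et al. 2008, assumed here).
visibility_adjacency <- function(y, t = seq_along(y)) {
  n <- length(y)
  A <- matrix(0L, n, n)
  for (a in 1:(n - 1)) {
    for (b in (a + 1):n) {
      visible <- TRUE
      if (b > a + 1) {
        for (cc in (a + 1):(b - 1)) {
          if (y[cc] >= y[b] + (y[a] - y[b]) * (t[b] - t[cc]) / (t[b] - t[a])) {
            visible <- FALSE
            break
          }
        }
      }
      A[a, b] <- A[b, a] <- as.integer(visible)
    }
  }
  A
}

frobenius <- function(M) sqrt(sum(M^2))   # Frobenius norm of a matrix

# Lag of the response x behind the candidate driver y: build the visibility graph of x once,
# compare it with the graphs of copies of y delayed by each trial lag (wrap-around shift),
# and return the minimiser of the Frobenius distance.
vga_lag <- function(x, y, lags = 0:10) {
  Ax <- visibility_adjacency(x)
  n  <- length(y)
  d  <- sapply(lags, function(k) {
    yk <- if (k == 0) y else c(y[(n - k + 1):n], y[1:(n - k)])  # copy of y delayed by k
    frobenius(Ax - visibility_adjacency(yk))
  })
  lags[which.min(d)]
}
```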
the time series in the top panel, is an approximately a series of values based on the sine function obtained using the following r codes : + `n ` ` 50 ` + ` ts.a ` ` 100*sin(2*pi*(80/1000)*n ) + rnorm(n , 0 , 25 ) ` + the time series , , plotted in the middle panel of figure 2 is derived from as follows : + ` ts.b ` ` ( 1/3)*c(ts.a[3:n ] , ts.a[1:2 ] ) + rnorm(n , 0 , 5 ) ` + that is , is derived by shifting to the left by two units , by reducing the amplitude to one - third that of , and adding some gaussian noise . in other words , and have roughly the same shape although their amplitudes are different and one is shifted by two time units relative to the other as seen in the figure .one may think of and as two time series one affecting the other ( since , is shifted to the left , physically we would think of affecting ) ; e.g. as precipitation and as water levels .physically , water levels and precipitation never take negative values ; so , if one really wants to think of and as water levels and precipitation , one could think of them as mean - subtracted and scaled appropriately .we considered time - shifted copies of with time - shifts from the following set : vga was applied and adjacency matrices for the corresponding graphs were obtained .distance - measure based on the frobenius norm for the time - shifted copies of compared to the reference , are plotted in the bottom panel of figure 2 .the distance - measure is minimized at 2 , which was the lag that we set _ a priori_. thus , in this illustrative example , the lag was correctly identified by the method that we proposed .we conducted monte carlo simulations to assess the performance of the vga - based method as we varied some of the parameters of the two time series and considered in the previous section .the parameters that we considered were _ a _ ) the ratio of the amplitudes between the two simulated series and , _ b _ ) the variance for the noise term `` rnorm(n , 0 , * ) ` ' in the series ( indicated by ) and _ c _ ) the variance for the noise term `` rnorm(n , 0 , * ) ` ' in the series . for each simulation scenario considered in this section ( that is , for each set of the above parameters ) , 1000 pairs of and were generated , and for each pair time lag was assessed based on the proposed method and compared with the lag that was set _ a priori_. the performance of the method was assessed based on the percentage of times that the _ a priori _ lag was correctly identified .the _ a priori _ lags that we considered for each scenario were and ; we assumed that in typical examples from physical sciences , will be a small lag and will be a very large lag .the reason for considering the ratio of amplitudes was that even if two physical time series ( especially , hydrogeological time series ) are roughly of the same shape with only a lag between them , their amplitudes ( i.e. roughly their sizes ) are often vastly different . for and used in the introductory illustrative example , the ratio of their amplitudes was .one of the questions that was addressed in our simulations was whether our method was still good if we changed this ratio drastically , e.g. to .another question that we thought should be addressed is that whether the proposed method works only for nice periodic time series such as the ` sine series ' .increasing the variance for the noise term in makes it less like a sine series. 
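The R snippet quoted above lost its assignment operators (and possibly an index vector) in extraction. A hedged reconstruction that reproduces the described behaviour, a noisy sine series and a copy advanced by two steps with one third of the amplitude, and feeds it to the `vga_lag()` sketch above might look as follows; the seed and the use of `1:n` as the time index are assumptions.

```r
# Hedged reconstruction of the illustrative example; the exact original code is not
# recoverable, so treat every detail below as an assumption.
set.seed(123)
n    <- 50
ts.a <- 100 * sin(2 * pi * (80 / 1000) * (1:n)) + rnorm(n, 0, 25)
ts.b <- (1 / 3) * c(ts.a[3:n], ts.a[1:2]) + rnorm(n, 0, 5)   # ts.a advanced by 2 and rescaled

# ts.b plays the role of the driver (e.g. precipitation) and ts.a the response
# (e.g. water level); delaying ts.b by 2 re-aligns it with ts.a.
vga_lag(ts.a, ts.b, lags = 0:15)   # returns 2 for most random seeds
```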
finally , increasing the variance of the noise term in makes the shape of quite different from that of , and by doing so in our simulations we also addressed the performance of the method in such scenarios ..performance of the method when the ratio of amplitudes between and was , the noise term in was and the noise term for was .[ cols="<,^,^,^,^",options="header " , ] the performance of the proposed method with both locf and mean imputation was near perfect when only of the values( that is , 9 out of 180 ) were missing .this was true regardless of whether the values were missing for only one time series or for both , and also true across all _ a priori _ set lags , 2 , 5 , 10 and 15 .when of the values ( that is , 18 out of 180 ) were missing for only one time series , the method did very well under both locf and mean imputation for all lags .when of the values were missing for both time series , the performance was still very good when the lags were large ( 10 or 15 ) ; when the lags were small ( 2 or 5 ) , the performance with both imputation methods was still good but not as good as when the lags were large .for example , when values were missing and when the lag was 2 , the performance with locf was and , respectively , depending on whether the values were missing for only one time series or both ; the corresponding values for lag 10 , on the other hand , were even better : and . with missing values ( 27 out of 180 ) , the performance was still good ( that is , in the range ) with locf and mean imputation , for lags 2 , 5 , and 10 , irrespective of whether it was missing for only one or for both time series ( although , of course , if it was missing only for one time series , it was better ) . however , when the _ a priori _ set lag was 15 , the performance with locf was weak ( ) , when values were missing for both time series ; it was still good ( ) with locf when only one time series had missing values , and with mean imputation also ( and ) .with missing values the method worked well under both types of imputations and for all lags , only when one time series had missing values .when both time series had missing values , the performance of locf was not good with small lags ( for lag 2 and for lag 5 ) and got worse for larger lags ( for lag 10 and for lag 15 ) .the performance with mean imputation was slightly better ( and , for lags 2 , 5 , 10 and 15 , respectively ) but still not quite up to the mark . in summary , based on the above simulation results , we consider it quite safe to use the proposed method in conjunction with either of the imputation methods if it is only values missing for only one time series or for both .with - values missing , the imputation methods give good results only if it is missing for one time series ; if it is missing for both , then it is not very safe to say that imputations will work , but still reasonably safe . 
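For completeness, simple versions of the two imputation schemes referred to above (LOCF and mean imputation) are sketched below. They are generic helpers, not the authors' implementation, and they reuse `ts.a`, `ts.b` and `vga_lag()` from the earlier sketches.

```r
# Last-observation-carried-forward; a leading NA is filled with the first observed value.
impute_locf <- function(y) {
  for (i in seq_along(y)) {
    if (is.na(y[i])) y[i] <- if (i == 1) y[which(!is.na(y))[1]] else y[i - 1]
  }
  y
}
# Mean imputation: replace every NA by the mean of the observed values.
impute_mean <- function(y) { y[is.na(y)] <- mean(y, na.rm = TRUE); y }

# Example: knock out 10% of one series at random, impute, then re-estimate the lag.
y_missing <- ts.b
y_missing[sample(length(y_missing), size = round(0.10 * length(y_missing)))] <- NA
vga_lag(ts.a, impute_locf(y_missing), lags = 0:15)
vga_lag(ts.a, impute_mean(y_missing), lags = 0:15)
```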
with about of the values missing for both time series ,it is definitely not recommended to use the proposed method with either of the imputations although it may be somewhat acceptable if it is missing for only one time series .also , in general , we observed that the performance with mean imputation was slightly better except for one or two scenarios .if the statistical practitioner has a preference of one method over the other , it may still be recommended to use both for the proposed method , at least as a sensitivity analysis .finally , we emphasize again the point made in the beginning of the section , that if large chunks of data are missing at a stretch then the imputation methods wo nt work ; in such cases , it is better to focus the analysis on other chunks of data with no or very sparse missing values .in this section , we present the results from an application of the proposed method on real hydrogeological time series. in southwest florida , two of the shallow aquifers that are tapped for water supply are the unconfined water table aquifer and the semi - confined lower tamiami aquifer .these aquifers are considered as sources of limited supply and regulated by the south florida water management district ( sfwmd , 2015a ) .water table aquifer is generally less than 50 feet thick and consists of less permeable unconsolidated sediments at the upper portion and relatively permeable limestone at the basal portion .the water table aquifer is hydraulically separated from the lower tamiami aquifer by about 15 to 30 feet of confining beds that consists of low permeable strata ( bonita springs marl member and caloosahatchee clay member ) .the top of lower tamiami aquifer is generally between 60 and 80 feet below land surface .this aquifer extends to 100 to 150 feet below grade and generally consists of sandy , biogenic limestone and calcareous sandstone ( sfwmd , 2015b ) .this aquifer is one of the primary sources of public water supply in southwest florida . to understand the lag responses of rainfall in these shallow aquifers are important for water management . for this study , in order to determine the lag responses within these two aquifers due to rainfall events , we utilized daily water level data recorded in the water table aquifer and lower tamiami aquifer .it is relevant to note that data collected in shorter frequencies ( e.g. hourly ) are ideal for `` lag '' related studies .hourly water level data was available from the water table aquifer well as well as the lower tamiami aquifer in the study area ; however , precipitation data was available only on a daily basis . 
in order to have both water level data and precipitation in the same time interval , we averaged the water levels to a daily average .the daily - averaged data was used solely for illustration of the statistical technique presented in this paper .data was available from july 1st , 2010 till june 29th 2016 , but water level data were missing for the following time intervals : january 5th , 2011 to april 8th , 2012 , july 10th , 2012 to october 1st , 2012 , april 2nd , 2013 to september 30th , 2013 , and finally between april 2nd , 2014 to june 29th , 2014 .complete data was available between june 30th , 2014 and june 29th , 2016 ( 731 days ) ; we analyzed this data for our illustration since this was the largest available time period with no missing data .the water level and precipitation data that were analyzed are graphically presented on figure 3 .vga method was applied to all the times series plotted in figure 3 , and frobenius distance between the corresponding pairs of adjacency matrices are plotted in figure 4 . for both unconfined water table aquifer andsemi - confined lower tamiami aquifer , the frobenius distance is minimized at lag 2 .this makes hydrogeological sense since , although one aquifer is a bit deeper than the other , considering the difference in total depths between the wells is roughly only about 40 feet , the water level response in the relatively deeper well may be observed in a matter of hours . in figure 4 , for both plots , we note that although the frobenius distnace for lag 1 is not the minimum it is close to the minimum compared to that of the other lags .thus , in order to check whether the minimum was attained at lag 2 just due to chance , we need to quantify the uncertainty regarding the estimate . since naive resampling strategies like bootstrap would create independent samples ( that is , with autocorrelations near zero ) , we used the following subsampling strategy that would preserve the original autocorrelations .we set a time - window - size , say 100 , and use the consecutive data points for water levels and precipitation in that window , and apply the proposed method to this sub - sampled data , as we did for the original data .first we place the left end of the window on the first time point of the original data , conduct the analysis , find the lag , and then move the window one unit to the right and repeat the analysis .we continue this process iteratively until the right end of the window touches the final time point for the original data . 
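The moving-window subsampling scheme just described is easy to express with the earlier `vga_lag()` sketch. In the fragment below, `level` and `precip` are hypothetical placeholders for the daily series of figure 3; note that the pure-R visibility loops are slow, so for the full record a compiled or package implementation would be preferable.

```r
# Sliding-window re-estimation of the lag (a sketch; 'level' and 'precip' are placeholders).
window_lags <- function(level, precip, window = 100, lags = 0:10) {
  n <- length(level)
  starts <- 1:(n - window + 1)   # this gives n - window + 1 windows; the text quotes n - window, a harmless off-by-one
  sapply(starts, function(s) {
    idx <- s:(s + window - 1)
    vga_lag(level[idx], precip[idx], lags = lags)
  })
}
# lagged <- window_lags(level, precip, window = 100)
# hist(lagged, breaks = seq(-0.5, 10.5, by = 1))   # histogram of per-window lags
```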
thus, with a window - size of 100 , and with a time series of length 731 , we will have 631 iterations , and for each iteration , we will have a lag between the pair of time series under consideration , so that we will have 631 lags at the end .the histograms for the 631 lags obtained using this iterative process are plotted in the top panel of figure 5 .the highest frequency for the lower tamiami aquifer is at lag 2 consistent with our finding for the original time series with 731 points .however , interestingly , the highest frequency for the water table aquifer was at lag 1 , although the frequency for lag 2 is almost as high .now the question arises whether this reversal was due to the size of the window ( 100 ) , which is quite smaller than the length of the original series ( 731 ) .so , we repeated the analysis with a window - size of 365 ( that is , 366 iterations ) .the results of the second iterative analysis are shown in the bottom panel of figure 5 ; in this case , the highest frequency for both aquifers is at lag 2 .we repeated the analysis using a window size of 50 ( 681 iterations ) and 25 ( 706 iterations ) ; in these analyses , only lag 2 appeared for all windows , so that the histogram will look like a single bar at 2 ( of height 50 or 25 , respectively ) , and no bars at any other lags .since this is simple enough to convey without a histogram , we did nt plot the histograms for window - sizes 50 and 25 . based on these results , and on hydrogeological sense, we would conclude that on the average , the water levels rise and fall 2 days after a corresponding fluctuation in precipitation , for both the aquifers .the analysis presented in this section suggests that it is critical to quantify the uncertainty prior to making conclusions and that the selected window size can influence the conclusions .quantifying time lags between two time series data , where one affects the other , is important for modelers of many physical phenomena , especially in hydrogeology .we propose an approach based on a simple extension of the visibility graph algorithm .we conducted simulations to assess the performance of the proposed method under different scenarios , and determined that the method worked well under reasonable settings .based on simulations we were also able to recommend sample size necessary to conduct the proposed analysis , and the maximum percentage of missing values under which the method will still work reasonably well with imputations .we also illustrated the method by applying it real data of water levels from aquifers and precipitation levels , and emphasized the importance of quantifying the uncertainty related to estimate of the lag .25 dripps , w.r . ( 2003 ) , the spatial and temporal variability of ground water recharge . ph.d ., department of geology and geophysics , university of wisconsin madison .dripps , w.r . , hunt , r.j . and anderson , m.p .( 2006 ) , estimating recharge rates with analytic element models and parameter estimation .ground water , 44 : 4755 .lacasa , l. , luque , b. , ballesteros , f. , luque , j. , and nuno , j.c .( 2008 ) from time series to complex networks : the visibilty graph .usa 105 , 13 , 4972 - 4975 .lacasa , l. , luque .b. ( 2010 ) mapping time series to networks : a brief overview of visibility algorithms .computer science research and technology vol 3 ( edited by nova publisher ) , isbn : 978 - 1 - 61122 - 074 - 2 .levanon , e. , shalev , e. , yechieli y , gvirtzman h. 
( 2016 ) fluctuations of fresh - saline water interface and of water table induced by sea tides in unconfined aquifers . advances in water resources , volume 96 , 34 - 42 .
nunez , a. , lacasa , l. , luque , b. ( 2012 ) visibility algorithms : a short review . graph theory ( edited by intech ) , isbn 979 - 953 - 307 - 303 - 2 .
south florida water management district ( sfwmd ) ( 2015a ) , water use permit applicant's handbook , part b .
south florida water management district ( sfwmd ) ( 2015b ) , tech pub ws-35 , hydrogeologic unit mapping update for the lower west coast water supply planning area .
wei , w.w.s . ( 2006 ) time series analysis : univariate and multivariate methods ( 2nd edition ) .
westoff , m.c . , bogaard , t.a . , savenije , h.h.g . ( 2010 ) quantifying the effect of in - stream rock clasts on the retardation of heat along a stream . advances in water resources , volume 33 , issue 11 , 1417 - 1425 .
wisconsin department of natural resources , pub - fh-800 , 2009 , wisconsin lakes .
estimating the time lag between two hydrogeologic time series ( e.g. precipitation and water levels in an aquifer ) is of significance for a hydrogeologist - modeler . in this paper , we present a method to quantify such lags by adapting the visibility graph algorithm , which converts time series into a mathematical graph . we present simulation results to assess the performance of the method . we also illustrate the utility of our approach using a real world hydrogeologic dataset . time series , visibility graph algorithm , hydrogeology , aquifer water level , precipitation
estimating a density function using a set of initial data points in order to find probability information is a very significant tool in statistics .the method of kernel density estimation ( kde) is now standard in many analysis and applications . furthermore , this idea has been applied in multiple fields ( archaeology , economy , etc ) .the author of this article is particularly interested in constructing perception of security ( pos ) hotspots using ( kde ) methods to analyze real data registered by security experts in bogot .nowadays a wide variety of methods are available to find density functions ( kde ) , .the method of kde via difussion is of particular interest for this document ; a recent article develops a systematic method for ( kde ) using the diffusion equation , also they propose a more general equation to solve some biases for data estimation .however in their analysis , it is only considered the normalization ( conservation of mass ) of the density function via neumann boundary conditions , the mean of the sample data is not considered , thus inducing a change of an important initial parameter from the discrete data sample . in this article, we propose a new set of boundary conditions for the diffusion equation that maintain the initial mean and mass of the the discrete data sample in the density estimation process .a complete study of this framework is performed using the finite element method ( fem ) to solve the one - dimensional diffusion equation for different boundary conditions .we show the induced error on the final density when the mean is not conserved .we also show how this one - dimensional model can be used to simulate a ( pos ) in a busy avenue of a city .lastly the new boundary conditions are presented for the two - dimensional diffusion equation for future applications in two dimensional domains .as it was first noted in and expanded in , solving the diffusion equation with a discrete data sample as initial condition ( [ eq2 ] ) give an estimate of a continuous probability density function .then by solving the diffusion equation , - = 0 a < x < b , t>0 , [ eq1 ] + u(x,0)=_i=1^n(x - b_i ) , x , b_i , [ eq2 ] with appropriate boundary conditions and then finding the best ( bandwidth ) for the initial data sample one obtains a continuous estimation of the experimental density . in this article we do not consider algorithms for bandwidth selection , we consider only the conservation of the mean .for more information on the bandwidth selection see .this one - dimensional toy problem is nevertheless of interest in applications for constructing ( pos ) .for instance we can model an avenue as a one dimensional domain where predictions of the most dangerous places in a selected zone can be accomplished . in the following sections we present the non - conservation of the mean for the neumann boundary conditions for problem ( [ eq1 ] ) .we also propose new boundary conditions .for the derivations we assume that the functions are sufficiently smooth in order for the theorems of vector analysis to hold. 
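The connection stated above between diffusing a sum of Dirac masses and Gaussian kernel density estimation can be checked numerically. The fragment below is a base-R sketch on an unbounded domain; since the displayed equation is garbled, the normalisation u_t = u_xx (which gives bandwidth sqrt(2t); it would be sqrt(t) for u_t = u_xx/2) and the 1/N factor in the empirical measure are assumptions.

```r
# Diffusing u(x,0) = (1/N) * sum_i delta(x - b_i) under u_t = u_xx for a time t gives the
# Gaussian KDE with bandwidth h = sqrt(2t) on an unbounded domain (normalisation assumed).
set.seed(42)
b <- rnorm(200, mean = 3, sd = 1.2)      # sample points b_i
t_diff <- 0.05                           # diffusion "time"
h <- sqrt(2 * t_diff)                    # equivalent Gaussian bandwidth
x <- seq(0, 10, length.out = 401)
u <- sapply(x, function(xx) mean(dnorm(xx, mean = b, sd = h)))
# compare with R's own fixed-bandwidth Gaussian KDE
d <- density(b, bw = h, kernel = "gaussian", from = 0, to = 10, n = 401)
max(abs(u - d$y))                        # small, up to density()'s internal binning approximation
```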
moreover the following derivations can be done for a more general diffusion equation with a variable diffusion coefficient .if we consider the neumann or natural boundary conditions on the problem ( [ eq1 ] ) , we have as is widely known , the total mass is conserved over time , see section [ mass - conv ] , however the mean of the initial condition is , in general , not conserved .indeed , we have {a}^{b}- \left[u(x , t)\right]_{a}^{b}\\ & = u(a , t ) - u(b , t).\end{aligned}\ ] ] where we used ( [ eq1 ] ) , ( [ eq3 ] ) and integration by parts .hence the mean is generally not conserved , it depends on the values of at the boundary in a time .we propose the following boundary conditions for ( [ eq1 ] ) , note that this boundary conditions are non - local , we need to evaluate in both boundary points at the same time . now we show that both the mean and the mass are conserved over time using this boundary conditions .consider first the conservation of the total mass .we have , {a}^{b } = \frac{\partial u(x , t)}{\partial x}\big|_{a}-\frac{\partial u(x , t)}{\partial x}\big|_{b}=0.\end{aligned}\ ] ] where we used ( [ eq1 ] ) , ( [ eq4 ] ) and integration by parts .this shows that the total mass is conserved .consider now the conservation of the mean .we have , {a}^{b}- \left[u(x , t)\right]_{a}^{b}\\ & = ( b - a)\frac{\partial u(x , t)}{\partial x}\big|_{b } -u(b , t ) + u(a , t)\\ & = 0.\end{aligned}\ ] ] again ( [ eq1 ] ) , ( [ eq4 ] ) and integration by parts were used to obtain the desired result .this shows that the boundary conditions ( [ eq4 ] ) for problem ( [ eq1 ] ) conserve both mean and mass .now we proceed to make some numerical simulations using fem to show the consequences of the application of this boundary conditions in the process of estimation a probability density for a data sample ( [ eq2 ] ) .now the problem ( [ eq1]),([eq4 ] ) is written in a weak formulation in order to apply the finite element method to the problem .now for all we have , we solve this weak formulation using fem with low order elements in theinterval =[0,10] ] .,title="fig : " ] .47 ].,title="fig : " ] .47 and for the density estimation with neumann boundary conditions for ].,title="fig : " ] .47 and for the density estimation with mean conserving boundary conditions for $].,title="fig : " ] figures [ mean - conv - neu ] and [ mean - conv - mean ] present the real difference in the evolution of the density .we effectively see that the mean conserving boundary conditions conserve the mean in the density estimation process . on the other hand if we where to have an initial condition that is biased to one of the boundaries , the differences of the estimated densities by both boundary conditions would differ significantly .however there is no evidence to think that this phenomena occurs in real avenues . for the numerical experiment presented here we can see that the mean for the neumann boundary conditions has changed about 0.4% in .this change is small , in fact , for an avenue of 10 km , the change in mean would be about 40 m. we conclude that for this numerical experiment for the process of density estimation ( when the data has not change to much due to the smoothing process ) the neumann boundary condition provide a very fast ( since they are easy to implement ) and accurate way to estimate a continuous probability density. 
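Below is a minimal P1 finite-element sketch of the Neumann baseline discussed above: it evolves the L2 projection of the empirical delta comb on [0, 10] with backward Euler and prints the total mass and the mean at a few times, so the exact mass conservation and the slow mean drift can be observed. Mesh, time step and data are illustrative choices, and the proposed mean-conserving non-local boundary condition is not implemented here; it would modify the boundary rows of the discrete system.

```r
# P1 finite elements for u_t = u_xx on [0, 10] with natural (Neumann) boundary conditions.
a <- 0; bnd <- 10; nel <- 200                 # domain [a, bnd] and number of elements
x  <- seq(a, bnd, length.out = nel + 1)       # nodes
h  <- x[2] - x[1]
nn <- nel + 1

M <- matrix(0, nn, nn); K <- matrix(0, nn, nn)       # mass and stiffness matrices
Mloc <- h / 6 * matrix(c(2, 1, 1, 2), 2)
Kloc <- (1 / h) * matrix(c(1, -1, -1, 1), 2)
for (e in 1:nel) {
  idx <- c(e, e + 1)
  M[idx, idx] <- M[idx, idx] + Mloc
  K[idx, idx] <- K[idx, idx] + Kloc
}

# L2 projection of the empirical "delta comb" (1/N) * sum_i delta(x - b_i)
set.seed(7)
b  <- pmin(pmax(rnorm(150, mean = 2, sd = 0.7), a), bnd)   # data concentrated near x = 2
Fv <- numeric(nn)
for (bi in b) {
  e <- min(max(findInterval(bi, x), 1), nel)
  s <- (bi - x[e]) / h
  Fv[c(e, e + 1)] <- Fv[c(e, e + 1)] + c(1 - s, s) / length(b)
}
u <- solve(M, Fv)

dt <- 1e-3
Amat <- M + dt * K                            # backward Euler; Neumann needs nothing imposed
cat(sprintf("t=0.00  mass=%.6f  mean=%.4f\n", sum(M %*% u), sum((M %*% x) * u)))
for (step in 1:300) {
  u <- solve(Amat, M %*% u)
  if (step %% 100 == 0)
    cat(sprintf("t=%.2f  mass=%.6f  mean=%.4f\n",
                step * dt, sum(M %*% u), sum((M %*% x) * u)))
}
# mass stays at 1 to machine precision, while the mean drifts slowly toward the centre of
# the interval; removing that drift is exactly what the mean-conserving condition does.
```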
nevertheless the mean of the sample is not preserved exactly , on the other hand , the mean conserving boundary condition , apart from being also easily implementable , is accurate and do preserve the mean of the sample .we now present the problem for the diffusion equation in two dimensions , again we want the conservation of mass and mean in the time evolution of the density .consider first the conservation of the total mass .we have , where , and denotes the outward normal unit vector to . to deduce this relation we used ( [ eq9 ] ) , and the first green identity .consider now the conservation of the mean .we have , where , assuming cartesian unit vectors . again( [ eq9 ] ) and the first green s identity were used to obtain the desired result .then the conditions that we have to impose on in order to conserve mean and mass are : the advantage of two dimensional domains is that we are not restricted to impose only two conditions for the boundary(mean and mass conservation ) . for these domains we can in principle conserve additional higher moments of the density distribution that are meaningful for the particular problem .applications on two dimensional domains are of special interest for the author since a two dimensional map of the city can generate really robust results in the field of perception of security(pos ) .the proposed mean conserving boundary conditions were shown to effectively maintain the mean of the initial data sample over the continuous density estimation process .this was also confirmed by the numerical simulation of the estimation process where we used a list of uniformly distributed points in the interval [ 0,10 ] as an initial condition .the numerical experiments presented here show that even though neumann boundary conditions do not conserve the mean over time , they are accurate enough to maintain the mean in a very restricted interval before the over - smoothing of the density estimation process .we showed the application and some of the consequences of both the idea of ( kde ) and the new boundary conditions to avenues in a city .the consequences of implementing the diffusion equation with the proposed boundary conditions in companion of more special initial conditions and in 2d domains remains to be analyzed .i would like to express my gratitude to juan galvis , whose guidance was essential for the completion of this manuscript . i also want to thank francisco a. gmez and zdravko botev , whose comments were really appreciated for the analysis of the results . 1d inhomogenious diffusion equation solution with fempatarroyo k. available online .
we propose boundary conditions for the diffusion equation that maintain the initial mean and the total mass of a discrete data sample in the density estimation process . a complete study of this framework with numerical experiments using the finite element method is presented for the one - dimensional diffusion equation , and some possible applications of these results are presented as well . we also comment on a similar methodology for the two - dimensional diffusion equation for future applications in two - dimensional domains .
the term sitnikov problem " appeared originally in the context of studies of oscillatory solutions in the restricted three body problem .these studies were initiated by sitnikov ; they stimulated the application of symbolic dynamics in celestial mechanics .we recall that sitnikov considered the case when two primaries have equal masses and rotate around their barycenter , while the infinitesimal third body moves along a straight line normal to the plane defined by the motion of the primaries and passing through ( usually the motions of the third body perpendicularly to the plane of the primaries are called vertical " ; below we will follow this tradition ) .sitnikov concentrated his attention on phenomena taking place when the primaries move in elliptic orbits .more bibliography on `` elliptic '' sitnikov problem can be found , for example , in .if the primaries move in circular orbits , then the vertical motions are integrable .the corresponding quadratures were presented at the beginning of the xx century by pavanini and macmillan - much before the start of sitnikov s studies .relatively simple formulae for the vertical motions , written in terms of jacobi elliptic functions , can be found in . since the integrability of third body motion is something extraordinary within the restricted three body problem , many specialists investigated the properties of vertical motions in the case of primaries moving on circular orbit .very often the term circular sitnikov problem " is applied to describe this field of research .taking into account its popularity , we will use it too . nevertheless ,some authors prefer terms like pavanini problem " or macmillan problem " , which are probably more correct from the historical point of view . depending on the initial values , three types of vertical motionsare possible in the circular sitnikov problem : the hyperbolic escape ( i.e. , the escape of the third body with non - zero velocity at infinity ) , the parabolic escape ( i.e. 
, the escape of the third body with zero velocity as the limit at infinity ) and , finally , the periodic motion , in which third body goes away up to a distance from the plane defined by primaries and then returns to it .the first stability analysis of the periodic vertical motions in the circular sitnikov problem was undertaken by perdios and markellos , but they drew the wrong conclusion that vertical motions are always unstable ( perdios and markellos only analyzed the vertical motions with the initial conditions such that ; as it was established lately it is not enough to put any hypothesis about the stability properties of the motions with larger values of ) .the mistake was pointed out in , where the alternation of stability and instability of vertical motions were found numerically in the case of continuous monotone variation of their amplitude .lately the existence of such an alternation was confirmed by the results of computations presented in and .taking into account their numerical results , the authors of proposed the hypothesis that the lengths of stability and instability intervals have finite limits as increases .this hypothesis was formulated on the basis of computations in which did not exceed the value .our numerical investigations demonstrate that the rapidly decreasing difference of the stability intervals at is a manifestation of a local maximum of their lengths ; if is increased further , then the lengths of the stability and instability intervals tend to zero .there is one more important property of vertical motions , which can be observed only for : the intervals of complex saddle " instability , when all eigenvalues of the monodromy matrix are complex and do not lie on the unit circle . according to our computationsfirst such an interval begins at , its length is .it means the erroneous of the statement in ( p. 113 ) , that the stability indexes of the vertical motions in circular sitnikov problem are always real ( this statement was based on the results of numerical studies in which the amplitude of the motion was smaller ; as one can see it was not enough for such a general conclusion ) . to conclude our short review on previous investigations of vertical motions stability in circular sitnikov problemwe would like to mention the generalization of this problem for systems of four and more bodies .numerical results presented in demonstrate that in the generalized problem the absence of stability / instability alternation in the family of vertical motions persists .the aim of our paper is to study the stability property of the periodic vertical motions at large values of the oscillation amplitude " , both numerically and analytically .a special attention will be given to the phenomenon of infinite alternation of stability and instability in this family .in fact , the infinite alternation of stability and instability in the one - parameter family of periodic solutions is rather typical for hamiltonian systems , although the general investigation was carried out only for 2dof systems .different examples can be found in .nevertheless , an important difference exists between the circular sitnikov problem and other systems in which the alternation of stability and instability was established earlier . in the circular sitnikov problem the discussed family of periodic solutions possesses as a limit unbounded aperiodic motions - parabolic escapes , while in previously considered systems the corresponding families and their aperiodic limits were bounded . 
due to this difference, the alternation of stability and instability in the circular sitnikov problem can not be studied in the same way as it was done in ( one could try to compactify the phase space by means of certain changes of variables , but we were unable to find any reduction to what was investigated already ) .this paper is organized as follows . in sect .2 some general properties of the vertical motions are discussed . in sect .3 we present the linearized motion equations used in our studies of the vertical motions stability .the results of the numerical investigation of the stability are reported in sect .4 . in sect .5 we prepare for the analytical investigation : the approximate expression for the monodromy matrix is derived here . using this expression , some important stability properties of vertical periodic solutions with large amplitudes established in sect . 6( in particular , the asymptotic formulae for the intervals of stability and instability are obtained ) . in sect .7 we discuss briefly the vertical motions in the generalized circular sitnikov problem with four and more bodies .some concluding remarks can be found in sect .we consider the restricted , circular , three - body problem with primaries having equal masses , say .let be a synodic ( rotating ) reference frame with the origin at the barycenter ; the masses and are arranged on the axis , while the axis is directed along the rotation axis of the system .the coordinates of the infinitesimal third body in the synodic reference frame will be used as generalized variables : below we assume that all variables are dimensionless .the equations of motion of the third body can be written in hamiltonian form with hamiltonian function here and denote the distance between the third body and the corresponding primary , while are the momenta conjugated to .the phase space possesses a manifold which is invariant with respect to the phase flow .the phase trajectories lying on correspond to vertical motions with the third body staing always on the axis .consequently , the vertical motions are governed by a reduced 1dof system with hamiltonian the phase portrait of the system with the hamiltonian ( 1 ) is shown in fig .it is remarkable that the separatrices ( the borders between trajectories representing periodic motions and hyperbolic escapes ) intersect at infinity .-1.0 cm 2.cm .thick lines denote the separatrices ( ) . , title="fig:",width=453 ] -10 cm the periodic solutions associated to the system with hamiltonian form a one - parameter family where as parameter one can choose the amplitude " of the periodic motion ( i.e. , ) or the absolute value of at the passage trough the barycenter or the value of the hamiltonian in this periodic motion .the first variant is the most convenient for us , therefore in ( 2 ) will denote the amplitude " of the periodic motion . 
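Since the dimensionless form of the vertical equation was lost in extraction, the sketch below assumes the standard scaling (primaries of mass 1/2 separated by unit distance, unit angular velocity), for which the reduced equation is z'' = -z/(z^2 + 1/4)^(3/2). It classifies an initial condition by the sign of the energy and computes the period of a bounded oscillation of amplitude A by quadrature of the energy integral.

```r
# Reduced vertical problem in assumed dimensionless units:  z'' = -z / (z^2 + 1/4)^(3/2).
# Energy E = v^2/2 - 1/sqrt(z^2 + 1/4):  E < 0 periodic, E = 0 parabolic, E > 0 hyperbolic escape.
U <- function(z) 1 / sqrt(z^2 + 0.25)
energy <- function(z0, v0) v0^2 / 2 - U(z0)

# Period of the periodic vertical motion of amplitude A, from the energy integral;
# the substitution z = A*sin(theta) removes the turning-point singularity.
vertical_period <- function(A) {
  f <- function(th) {
    z <- A * sin(th)
    A * cos(th) / sqrt(2 * (U(z) - U(A)))
  }
  4 * integrate(f, 0, pi / 2)$value
}

energy(0, 2)          # = 0: the escape velocity at the barycentre is exactly 2 in these units
vertical_period(1)    # modest amplitude
vertical_period(50)   # large amplitude: the period grows roughly like A^(3/2)
```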
for definitenesswe assume that there exist explicit expressions for the solutions ( 2 ) in terms of jacobi elliptic functions .since they are not used in the forthcoming analysis , we do not rewrite them here , except for the formula about the period of vertical motion : .\eqno(3)\ ] ] here is the complete elliptic integral of the second kind , is the heuman lambda function , while the value of the modulus is given by the formula where for motions with large amplitudes ( ) the following approximate formula can be used in place of ( 3 ) : as it was mentioned before , the separatrices , representing the parabolic escapes , can be interpreted as a formal limit for periodic motions at .the parabolic escapes obey the approximate law formulae ( 4 ) and ( 5 ) are easily obtained if one suitably relates the properties of vertical motions with the properties of rectilinear motions of a particle in a newtonian field .our efforts are concentrated on the analysis of the vertical motions stability with respect to horizontal " perturbations , due to which the third body leaves the axis . under the linear approximation ,the behavior of the variables in the perturbed motion is described by the linear hamiltonian system of equations with periodic coefficients : here the symbol is used to denote the identity matrix of the -th order .the function depends periodically on time with a period , where denotes the period of the particular vertical motion whose stability is investigated .as it is known , the restricted circular three - body problem admits several types of symmetry ( for example , they are used for the numerical construction of 3d periodic solutions ) .the consequence of these symmetries is the following property of the variational equations ( 6 ) : if is a solution of ( 6 ) , then these equations admit the solution where it the -diagonal matrix , . according to floquet theory , in order to draw a conclusion about the stability or instability of the solutions of ( 6 ) , one should analyze the spectral properties of the monodromy matrix , where denotes the normal fundamental matrix corresponding to the system ( 6 ) ( i.e. , the matrix solution of ( 6 ) with the initial condition ) .the normal fundamental matrix corresponding to the linear hamiltonian system ( 6 ) is a symplectic one , i.e. it is also worthwhile to mention some other properties of this matrix : the first two equalities in ( 8) are elementary , while the last one is a consequence of the symmetry property ( 7 ) . using the relation ( 8) one easily obtains the characteristic equation of the system ( 6 ) is reciprocal and it can be written as where the quantities in the last formula are the elements of the monodromy matrix .it is also possible to rewrite the characteristic equation ( 9 ) as the product the coefficients in ( 10 ) are the roots ( real or complex ) of the quadratic equation : often enough the quantities are called the stability indices . the periodic vertical motion is stable whenever ( i.e. , when are real and their absolute values are smaller than 1 ) . in the case,\quad \{b_1,b_2\}\lefteqn{\subset}{\;\,/ } \ ; i\ ] ] an additional investigation is needed to draw a conclusion about stability or instability . in all other casesthe instability takes place .we recall that in the alternation of the stability and instability in the family of periodic vertical motions ( 2 ) was discovered . 
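The Floquet analysis outlined above can be reproduced numerically. The explicit variational equations did not survive extraction, so the sketch below uses the linearisation of the circular restricted three-body problem about the vertical orbit in the synodic frame, under the same assumed scaling as before; this is an assumption to be checked against the original paper. It propagates the vertical orbit together with a 4x4 fundamental matrix over one full period with fixed-step RK4 and inspects the moduli of the monodromy eigenvalues; linear stability requires all of them to lie on the unit circle. `vertical_period()` is reused from the previous sketch.

```r
# Monodromy matrix of the horizontal variational equations (assumed linearisation):
#   xi''  - 2 eta' = (1 - 1/r^3 + 3/(4 r^5)) xi
#   eta'' + 2 xi'  = (1 - 1/r^3) eta,          with r = sqrt(z(t)^2 + 1/4).
monodromy <- function(A, nsteps = 20000) {
  Tper <- vertical_period(A)
  dt <- Tper / nsteps
  rhs <- function(s) {                     # s = c(z, vz, Phi[1:16])
    z <- s[1]; vz <- s[2]
    Phi <- matrix(s[-(1:2)], 4, 4)
    r3 <- (z^2 + 0.25)^1.5
    r5 <- (z^2 + 0.25)^2.5
    alpha <- 1 - 1 / r3 + 0.75 / r5
    beta  <- 1 - 1 / r3
    J <- rbind(c(0, 0, 1, 0),
               c(0, 0, 0, 1),
               c(alpha, 0, 0, 2),
               c(0, beta, -2, 0))
    c(vz, -z / r3, as.vector(J %*% Phi))
  }
  s <- c(A, 0, as.vector(diag(4)))         # start at the turning point z = A, zdot = 0
  for (i in 1:nsteps) {                    # classical fixed-step RK4
    k1 <- rhs(s); k2 <- rhs(s + dt / 2 * k1)
    k3 <- rhs(s + dt / 2 * k2); k4 <- rhs(s + dt * k3)
    s <- s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
  }
  matrix(s[-(1:2)], 4, 4)
}

ev <- eigen(monodromy(2))$values
abs(ev)            # stability requires all multipliers on the unit circle
abs(prod(ev))      # should be close to 1 (symplecticity check of the integration)
```

Scanning the amplitude A over a grid and repeating this check is, in spirit, how the stability and instability intervals discussed below can be mapped out.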
later on ,more accurate results were published in : the length of the first 35 intervals of stability and of the first 34 intervals of instability was calculated . in an attempt was undertaken to establish certain regularity in the variation of these quantities : the existence of non - zero limits for the intervals lengths was proposed . in fig.2 and fig.3we present the results of some calculations , when the first 700 intervals of stability and instability are considered .the graph in fig .2 shows that for the first 30 intervals of stability the length of the intervals increases and only afterwards the decrease of the length takes place .the hypothesis formulated in was based on the wrong interpretation of the small variation of the intervals length in vicinity of the maximum . in fig .3 the length of the instability intervals decreases monotonically and it does not follow the empirical law derived in ( according to this law , the length of the instability intervals has the limit ; evidently , it is not so ) .our results allow us to propose the following approximate formulae to characterize the behavior of the stability and instability intervals length in fig . 2 and fig .3 : more precisely , these formulae are valid for the periodic vertical motions with amplitude smaller the critical value the reason of such a restriction and the situation for will be revealed a little bit later .-0.0 cm 2.cm , title="fig:",width=453 ] -9.0 cm -0.0 cm 2.2 cm ( the first interval is not presented : if it was shown in the same scale with all subsequent intervals , it would have been difficult to understand the behaviour of the graph for large ),title="fig:",width=453 ] -9.cm it is also useful to discuss here in what way the length of the stability intervals and the length of the instability intervals depend on the amplitude of the vertical oscillations . under the same restriction obtain from our numerical investigations _ remark_. if one needs a rigorous definition about the meaning of the quantity in the last formulae , one could interpret it as the boundary value between two successive intervals of stability and instability . in fig .4 the behavior of the coefficients appearing in the characteristic equations ( 10 ) is shown . fig .4a , 4b and 4c allow us to compare the properties of these coefficients , when the parameter varies in different intervals .all graphs demonstrate the approximate periodicity , their period with respect to the parameter corresponds to an increase of period of vertical oscillations of about .it is important to point out the small gaps in the fig .4c : for the corresponding value of the parameter ( i.e. 
, when belongs to the intervals where the graphs are not defined ) the stability indices have complex values and the so - called complex saddle " instability of the vertical motion takes place .the enlarged fragments of the graphs in the vicinity of the gaps are given in fig .-1.0 cm 2.cm and appearing in the characteristic equation ( 10),title="fig:",width=453 ] -10.0 cm 2.cm and appearing in the characteristic equation ( 10),title="fig:",width=453 ] -10.0 cm 2.cm and appearing in the characteristic equation ( 10),title="fig:",width=453 ] -10.0 cm -5.0cm 2.cm -9.0 cm 1.6 cm -10.0 cm as it follows from our calculations , the first interval of `` complex saddle '' instability begins at .since such a value of vertical motion amplitude is large enough , it provides us with an explanation why this kind of instability of vertical motions in the circular sitnikov problem was not recognized in previous studies where relatively small values of were considered . increasing further the parameter (i.e ., for ) , we observe a stability / instability alternation of more complicated type : wide " interval of instability - narrow " interval of stability - narrow " interval of complex saddle `` instability - ' ' wide " interval of stability - wide " interval of instability - ... an analog of the formulae ( 11 ) can be constructed in the case , but we prefer to present in sect .6 several asymptotics written in a more convenient way .finally it worth while to mention that the runge - kutta - fehlberg method of 7 - 8 order with variable step was used to integrate numerically the variation equations ( 6 ) .the accuracy of the integration procedure ( the local tolerance ) was taken .since the period of vertical oscillations increases proportionally the variation equations should be integrated over relatively large time intervals : if we take for example then the value of half - period . to check the influence of the round - off errors some computationswere done both with double and quadruple precision arithmetic .in this section an approximate expression for the monodromy matrix is derived .it will be used to discuss the phenomena described in sec. 4 ( the alternation of stability and instability , the decrease of stability and instability intervals by increasing the parameter , etc ) .we assume that the amplitude of the periodic solution ( 2 ) is so large , that we can define an auxiliary quantity such that to start with we write down the monodromy matrix as the product of three fundamental matrices : where and are the instants at which the third body is at distance from the barycenter in the periodic vertical motion ( 2 ) ( at the third body moves away from barycenter , at it approaches the barycenter ) . _ approximate expression for the matrix . if the condition ( 12 ) is satisfied the phase point moves on the manifold in close vicinity of the separatrix at ] .the behavior of at is described by the remarkable asymptotic formula : here the derivation of the formula ( 14 ) is based on some simple ideas .let us take and write down as the product where is the moment of time when the third body is at distance from the barycenter in the motion corresponding to the parabolic escape .as next step , we modify the equations ( 6 ) to find the approximate expression for at . 
since at third body is far enough from the primaries and , it looks natural to replace by in the right parts of the first two equations in system ( 6 ) and to neglect the small term .the system ( 6 ) takes the form with now it is worthwhile to make the following remark .let us consider the rectilinear parabolic escape of the material point in the field of an attracting center . under a proper choice of units ,the distance between the attracting center and the point varies as if the asymptotics ( 5 ) is used for in the equations ( 16 ) , then these equations coincide with the motion equations of the above mentioned material point , linearized in the vicinity of the solution ( 17 ) and written in the reference frame uniformly rotating around the line of the escape . taking this into account, we implement in ( 16 ) the change of variables where this change of variables can be interpreted as the transfer from the synodic reference frame to the sidereal ( fixed ) reference frame ( ) . as a result the linearized equations of motion split into two independent subsystems it is not difficult to find partial solutions to the system ( 18 ) and here and below the dots are used for derivatives with respect to time .four independent partial solutions allow us to write down the normal fundamental matrix in terms of the variables : coming back to the initial variables , we get substituting ( 19 ) into ( 15 ) we obtain the expression for the normal fundamental matrix as the product of three matrices with only one of them depending on time : here the formula ( 20 ) can be used to compute the elements of the matrix at .asymptotically their values should not depend on the choice of .it means that the following limit exists : substituting instead of into ( 20 ) we arrive to the formula ( 14 ) .the fundamental matrix was introduced in such a way that it provides the vertical motions satisfying ( 12 ) with a `` universal '' ( i.e. , independent on ) approximation at ] the third body is far enough from the primaries , we neglect again the difference between their gravity field and the gravity field of the attracting center placed at the barycenter . to obtain the expression for within such an approximationwe need to integrate the system where describes the motion in the newtonian field along the segment ] .of course the motion along a segment corresponds to the singular impact orbit , but it is used here to approximate the regular vertical motion on the time interval were the singularities are absent . the change of variables where allows us to rewrite the equations ( 22 ) in the more simple form : it is easy to check that the system ( 24 ) admits the following partial solutions : and to compute in ( 25 ) and ( 26 ) the energy integral can be used . in the case of the motion along the segment ] , where is a large enough integer number . 
using the formula ( 3 ) , which defines the dependence of on the amplitude , it is not difficult to prove that in terms of the lengths of the stability and instability intervals decrease proportionally to as .1.cm computed on the base of the approximate formula for the monodromy matrix .only real values are shown.,title="fig:",width=529 ] -13 cm taking into account the approximate expression for the monodromy matrix , we describe in more details the repeating pattern of stable and unstable intervals mentioned at the end of sec .this pattern consists of four intervals appearing in the following order as increases : _ wide " interval of instability ._ both coefficients are real , but one of them has absolute value greater than 1 ( `` saddle - center '' instability ) .the asymptotic length of the interval in terms of the amplitude of the motion is about , while the variation of the semiperiod equals to about . _ narrow " interval of stability . _the coefficients are real and belong to the interval .the approximate length is ; the variation of the semiperiod equals to about . _ narrow " interval of instability . _ the coefficients are complex ( `` complex saddle '' instability ) .the approximate length is ; the variation of the semiperiod equals to about ._ wide " interval of stability . _ the coefficients are real and belong to the interval again .the approximate length is ; the variation of the semiperiod equals to about . to conclude, we recall that before the first appearance of the interval of complex saddle `` instability at , a more simple pattern with only two intervals was observed .the ' ' transient " asymptotics for the length of the stability intervals in the case can be obtained by adding of the lengths of the narrow " instability interval and both stability intervals in the final pattern .it yields which is in good agreement with the corresponding numerical result presented in sec .the investigation of the generalized circular sitnikov problem with four and more bodies revealed that in contrast to the case of the three body problem there is no alternation of stability / instability in the family of vertical motions .for simplicity we limit our consideration to the case of the restricted four body problem .it is assumed that three primaries of equal mass rotate around the barycenter in circular orbit with the radius . under the linear approximationthe stability analysis of the fourth body periodic vertical motion is reduced to the study of the spectral properties of the monodromy matrix associated to the system of linear differential equations with periodic coefficients where it is remarkable that equations ( 32 ) possess a circular symmetry : for any real they are invariant with respect to transformations of the form while the non - linearized equations of motion of the fourth body in the synodic reference frame admit only the rotational symmetry of the 3rd order . 
the possibility for the linearized equations of motion to have a larger group of symmetries in comparison to the original non - linear systemwas pointed out by v.i .arnold (sec .in particular , in the case of the initial system rotational symmetry of -th order ( ) the linearized equations always have circular symmetry .this is the reason why the stability analysis of the vertical motions , based on the linearized equations , yields similar results for the sitnikov problem with four and more bodies and for the particle dynamics in the gravity field of the circular ring ( numerically it was shown in ) .it is convenient to rewrite the equations of motion ( 32 ) in a sidereal ( fixed ) reference frame by means of the transformation of variables where after that the equations of motion split into two identical independent subsystems : let denote the normal fundamental matrix for the system ( 33 ) in the case when is replaced by , which corresponds to the parabolic escape . using the same technique as in sect .5 we obtain the asymptotic formula where applying the main ideas of sect . 5 , we establish the following property of the monodromy matrix associated to ( 33 ) : at the matrix , where the constant matrix .the eigenvalues of the matrix are the asymptotic limits for multiplicators ( of multiplicity ) of the system ( 33 ) : finally , it is not difficult to derive the asymptotic formulae the for multiplicators of the original system ( 32 ) : on the complex plane are placed in the small vicinity of the circles with radii and .consequently in the circular sitnikov problem with four bodies the periodic vertical motions with large amplitudes are always unstable .finally we would like to note the another opportunity to introduce the generalized circular sitnikov problem with bodies using the appropriate straight line solution of the problem of bodies .if in such a solution primaries are arranged symmetrically with respect to the barycenter then the infinitesimal body can move periodically along an axis around which the rotation of the primaries takes place ( naturally , the proposed generalization is possible for odd only ) .likely this family of periodic motions exhibits the alternation of stability and instability .the combination of numerical and analytical approaches provided us with the opportunity to correct , clarify and extend some previously known results related to the circular sitnikov problem ( mainly about the stability of vertical motions ) .for the first time under the scope of this problem the possibility of the `` complex saddle '' instability was revealed within the family of vertical motions . for our theoretical constructions it was essential that the phase trajectories corresponding to the solution under consideration have lengthy parts in the vicinity of the peculiar separatrices of the problem - the parabolic escapes to infinity .often enough it is possible to introduce a suitable auxiliary mapping in the vicinity of the separatrix in order to study the local properties of the phase flow .it would be very interesting to develop similar for the circular sitnikov problem .. 1 in * acknowledgements .* the author would like to express his gratitude to a.i .neishtadt and a.b .batkhin for useful discussions during the accomplishment of this work .also the author thanks a.celletti for reading the manuscript and suggesting many improvements .neishtadt , a.i . ,sidorenko , v.v . 
, investigation of the stability of long - periodic planar motions of a satellite in a circular orbit . _ kosmicheskie issledovaniya _ , 2000 , * 38 * , 307 - 321 ( in russian ; english transl . : _ cosmic research _ , 2000 , * 38 * , 289 - 303 ) .
this paper is devoted to the special case of the restricted circular three - body problem , when the two primaries are of equal mass , while the third body of negligible mass performs oscillations along a straight line perpendicular to the plane of the primaries ( so called periodic vertical motions ) . the main goal of the paper is to study the stability of these periodic motions in the linear approximation . a special attention is given to the alternation of stability and instability within the family of periodic vertical motions , whenever their amplitude is varied in a continuous monotone manner .
synchronizability is one of the currently leading problems in the fast - growing field of complex networks .a number of studies have been devoted to scrutinize which network topologies are more prone to sustain a stable globally - synchronized state of generic oscillators defined at each of its nodes .this question is of broad interest since many complex systems in fields ranging from physics , biology , computer science , or physiology , can be seen as networks of coupled oscillators , whose functionality depends crucially on the network ability to maintain a synchronous oscillation pattern .in addition , it has been shown that networks with good synchronizability are also `` good '' for ( i ) fast random walk spreading and therefore for efficient communication , ( ii ) searchability in the presence of congestion , ( iii ) robustness in the absence of privileged hubs , ( iv ) performance of neural networks , ( v ) generating consensus in social networks , etc .another related and important problem that has received a lot of attention , but that we will not study here , is the dynamics _ towards _ synchronized states ( see for example ) . in general terms, we can say that the degree of synchronizability is high when all the different nodes in a given network can `` talk easily '' to each other , or information packets can travel efficiently from any starting node to any target one .it was first observed that adding some extra links to an otherwise regular lattice in such a way that a small - world topology is generated , enhances synchronizability .this was attributed to the fact that the node - to - node average distance diminishes as extra links are added .afterwards , heterogeneity in the degree distribution was shown to hinder synchronization in networks of symmetrically coupled oscillators , leading to the so called `` paradox of heterogeneity '' as heterogeneity is known to reduce in average the node - to - node distance but still it suppresses synchronizability .the effect of other topological features as betweenness centrality , correlation in the degree distribution and clustering has been also analyzed .for example , it has been shown that the presence of weighted links ( rather than uniform ones ) and asymmetric couplings do enhance further the degree of synchronizability , but here we focus on un - weighted and un - directed links . certainly , the main breakthrough was made by barahona and pecora who , in a series of papers , established a criterion based on spectral theory to determine the stability of synchronized states under very general conditions .their main contribution is to link graph spectral properties with network dynamical properties . in particular , they considered the laplacian matrix , encoding the network topology , and showed that the degree of synchronizability ( understood as the range of stability of the synchronous state ) is controlled by the ratio between its largest eigenvalue ( ) and the smallest non - trivial one ( ) , i.e. , where is the total number of nodes .the smaller the better the synchronizability .note that , as the range of variability of is quite limited ( it is directly related to the maximum connectivity ) , minimizing is almost equivalent to maximizing the denominator ( i.e the _ spectral gap _ ) when the degree distribution is kept fixed .it is worth noticing that , even if the eigenratio can be related to ( or bounded by ) topological properties such as the ones cited above ( average path length , betweenness centrality , etc . 
), none of these provides with a full characterization of a given network and therefore they are not useful to determine _ strict _ criteria for synchronizability .nevertheless , they can be very helpful as long as they give easy criteria to determine in a _ rough _ way synchronizability properties , without having to resort to lengthly eigenvalue calculations .in a couple of recent papers , we tackled the problem of finding the optimally synchronizable topology , given a fixed number of nodes and edges linking them , and assuming symmetric and un - weighted links .the strategy we followed was to implement a simulated annealing algorithm with a cost - function given by ; starting with a random topology with nodes and links , random rewirings that decrease the value of are accepted with larger probability than those increasing ( for more details see ) , until eventually a stationary ( optimal or close to optimal ) network is generated . employing this optimization algorithm, we identified the family of `` optimal network topologies '' which we called _ entangled networks_. the main topological trait of entangled networks is the absence of bottlenecks and hubs ; all sites are very much alike and the links form very intricate structures , which lead typically to ( i ) the absence of a well - defined community structure , ( ii ) poor modularity , and ( iii ) large shortest - loops . in this way , every single site is close to any other one owing to the existence of a very `` democratic '' or entangled structure in which properties such as site - to - site distance , betweenness , and minimum - loop - size are very homogeneously distributed ( see ) .entangled networks were identified as _ ramanujan graphs _ and they have been related to other interesting concepts in graph theory as _ expanders _ and cage - graphs ( see and references therein ) .these are used profusely in computer science and are under current intense study in the mathematical literature . for example, expanders and ramanujan graphs are very useful in the design of efficient communication networks , construction of error - correcting codes , or de - randomization of random algorithms .these applications greatly amplify the relevance of entangled networks in different contexts . despite of their mathematical beauty and excellent performance in network - design , entangled topologiesare not easily found in biological , social , or any other `` real - life '' networks .an exception are some food - webs , for which topologies very similar to entangled ones have been reported . as argued in the rarity in nature of entangled networks comes from the fact that they emerge out of a _ global _optimization process not easily fulfilled by means of any dynamical simple mechanism in growing networks where , usually , only _information is available . instead, real complex networks in very different contexts have been shown to exhibit , rather generically , scale - free degree distributions .these are much more heterogeneous than entangled topologies . keeping this in mind , in this paper we explore the question of ( global ) optimization of synchronizability within the realm of scale - free networks with a fixed degree - distribution . 
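as a concrete illustration of the spectral criterion just described, the sketch below computes the laplacian eigenratio of an undirected, unweighted graph with numpy and networkx (both assumed available); the smaller the ratio, the wider the stability range of the synchronized state. the sample graph is an illustrative choice only.

```python
import numpy as np
import networkx as nx

def eigenratio(G):
    """largest over smallest non-trivial Laplacian eigenvalue of a connected graph."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    ev = np.sort(np.linalg.eigvalsh(L))
    return ev[-1] / ev[1]        # ev[0] is ~0 for a connected graph

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(200, 0.05, seed=1)   # illustrative test graph
    if nx.is_connected(G):
        print("eigenratio =", eigenratio(G))
```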
in particular , constraining our optimization algorithm to preserve a scale - free architecture , we are able to find the optimally synchronizable networks and study the emergence of non - trivial degree - degree correlations .this study is related to previous works by sorrentino , di bernardo and others , who argued that disassortative networks ( in which nodes with similar degrees tend to be _ not _ connected among themselves ) are more synchronizable that assortative ones ( where nodes with similar degrees tend to be connected ) .our study differs from previous ones in that ( i ) we derive a rigorous lower - bound for in terms of a parameter measuring degree - degree correlations and ( ii ) we explicitly design optimal networks with a given degree - distribution and , by doing so , we verify that even if it is true that more disassortative networks typically exhibit better synchronizability , this is _ not _ always the case .finally , we also face the question of which are the _ pessimal _ networks for synchronization purposes .actually , in some applications , synchronization ( or consensus , or complete homogenization ) are not desirable properties .this might be the case , for example , in neural networks for which global synchronization implies epileptic - like activity .the question of how topology can hinder such states is both pertinent and relevant and also , it can give further insight on the key structural features of synchronization . with this goal in mind, we revert the optimization algorithm , and define an inverse optimization process just by maximizing ( rather than minimizing it ) , we analyze the topology of the resulting pessimal ( or optimally un - synchronizable ) networks .in this section we derive an upper bound for the spectral graph in terms of the correlation coefficient .this coefficient was introduced by newman in to quantify the tendency of nodes with similar degrees , , to be connected between themselves . in particular , calling the number of nodes and the total number of links ( ) the correlation coefficient can be computed as ( see for more details ) : ^ 2 } { l^{-1}\sum_{i \sim j}\frac12(k_i^2+k_j^2 ) - [ l^{-1}\sum_{i \sim j}\frac12(k_i+k_j)]^2 } \quad.\ ] ] where stands for the sum over links ( i.e. over all nodes i and j connected by a link ; every link is counted only once ) .this parameter takes positive ( negative ) values for assortative ( disassortative ) configurations . defining the laplacian matrix as nodes and are connected ( disconnected ) , and , we can obtain an upper bound for by recalling that the first non - trivial laplacian eigenvalue can be expressed as : where is is the set of all possible non - constant vectors ( in the space in which the laplacian operator acts ) .taking , which is one possible vector out of the set , we obtain : where is defined as : a different selection of the vector would lead to a different inequality .the advantage of our choice is that the obtained bound can be related to the correlation coefficient , even if it is not guaranteed that it provides a tight bound .the different terms in the numerator and denominator of equation ( [ r ] ) can be written as by substituting these expressions in eq .( [ r ] ) and rearranging them we readily obtain : and finally which provides a rigorous upper bound for the spectral gap in terms of .observe that the more negative the value of ( i.e. 
the more disassortative the network ) the larger the upper bound for and , therefore , is allowed to take smaller values , and the corresponding network can be more synchronizable . summing up , the inequality eq.([ineq ] )establishes that , as a rule of thumb , disassortative networks are more prone to have stable synchronized states than assortative ones , in agreement with previous results . as a word of caution ,let us underline that this does _ not _ imply that , given a fixed degree distribution , any disassortative network is better synchronizable than any assortative one , as we will illustrate in the following section . a similar result to ours has been recently derived . in particular , upper and lower bounds for were obtained in terms of a parameter quantifying the degree - degree correlation ( is a simplified version of the more detailed one , , defined by eq.([r ] ) ) .these upper and lower bounds were derived elaborating upon known bounds for the spectral gap in terms of the cheeger constant . in order to obtain them , the authors implicitly assume that , for a fixed degree distribution , the cheeger constant is an uni - parametric function of .however , given a fixed , as this parameter does not specify completely the graph topology , different graphs with different cheeger constants can be constructed .therefore , the derivation of the bounds in involves some type of mean - field - like approximation , while the upper bound here has been obtained in a rigorous way .in this section we describe the optimization algorithm suitable for finding the network topology which extremizes the stability range of a global synchronous state in networks subject to a topological constraint : a fixed degree distribution .in particular , we apply this method to networks with scale - free topology and analyze the degree - degree correlations of the resulting ( extremized ) graphs .the algorithm is a modified simulated - annealing aimed at minimizing a cost function , where .a detailed description of the algorithm can be found either in or in .it yields networks for which the synchronizability is close to extremal ( i.e. maximum or minimum ) , depending on the selected cost function .in particular , setting one gets networks with _ optimal _synchronizability while choosing the optimization procedure yields what we call _ pessimal _ networks .the simulated - annealing rewiring process starts from networks generated using the configuration model ; in particular , it starts from connected networks with nodes and links , such that their degree distribution sample a power law , , with trivial ( random ) degree - degree correlation between neighboring nodes ( see fig.[red ] ) .all the results in what follows correspond to . the graphs emerging out of the and minimization processes , starting from the network in fig .[ red ] , are depicted in fig.[opt - pes ] .naked eye inspection reveals the enormous differences between optimal and pessimal topologies . 
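a minimal sketch of a degree-preserving annealing of the kind described above (this is not the authors' exact implementation; the proposal move, temperature schedule and parameters are illustrative assumptions). each move swaps the endpoints of two randomly chosen edges, which leaves every node degree unchanged, and is accepted with a metropolis rule on the eigenratio; setting maximize=True searches instead for the "pessimal" networks discussed below.

```python
import numpy as np
import networkx as nx

def eigenratio(G):
    ev = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))
    return ev[-1] / ev[1]

def anneal(G, steps=500, T0=0.5, cooling=0.995, maximize=False, seed=0):
    """degree-preserving simulated annealing on the eigenratio.
    maximize=False searches for well-synchronizable ('optimal') networks,
    maximize=True for poorly synchronizable ('pessimal') ones."""
    rng = np.random.default_rng(seed)
    current, cost, T = G.copy(), eigenratio(G), T0
    for _ in range(steps):
        trial = current.copy()
        try:
            # swap the endpoints of two random edges: node degrees are left unchanged
            nx.double_edge_swap(trial, nswap=1, max_tries=100)
        except nx.NetworkXException:
            continue
        if not nx.is_connected(trial):
            continue
        new = eigenratio(trial)
        delta = (cost - new) if maximize else (new - cost)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            current, cost = trial, new
        T *= cooling
    return current, cost

if __name__ == "__main__":
    G0 = nx.barabasi_albert_graph(100, 2, seed=1)   # stand-in scale-free graph
    G_opt, q_opt = anneal(G0)
    print("initial eigenratio:", eigenratio(G0), "  annealed:", q_opt)
```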
while the optimal ones resemble very much the very intricate and as - homogeneous - as - possible topology of entangled networks , pessimal topologies are as chain - like as possible , with two non - linear `` heads '' at both extremes , necessary to preserve the scale - free topology constraint .note that large values of imply small values of and therefore , following the criterion for graph ( bi)partitioning described for instance in , pessimal graphs have to be easily divisible into two parts by cutting an as - small - as - possible number of links .this is , indeed , achieved in an optimal way for linear ( chain - like ) topologies .let us remark , that different initial conditions with the same scale - free distribution , lead to outputs indistinguishable statistically from the ones in fig.[opt - pes ] , rendering robust the previous results . to put these observations under a more quantitative basis , we measure degree correlations using ( i ) the average degree of the neighbors of a node with degree , and ( ii ) the correlation coefficient given by eq.([r ] ) .fig.[pkk ] shows , averaged over different realizations , for initially uncorrelated networks ( see , as an example , fig .[ red ] ) as well as for the final optimal ( fig .[ opt - pes ] , left ) and pessimal ( fig .[ opt - pes ] , right ) networks .it reveals that optimally - synchronizable scale - free networks tend to display disassortative mixing ( high - degree nodes tend to be connected with low - degree ones ) while , on the contrary , pessimal scale - free networks tend to be assortative .this result agrees with the tendency predicted by the bound on the spectral gap in eq .( [ ineq ] ) as well as with previous results . actually , one could have anticipated these conclusions knowing that a network with good synchronization properties is also able to efficiently communicate any two nodes . in this sense ,disassortative mixing , in which low connected nodes are preferentially linked to hubs which act as information distributors , seems most efficient . on the other hand ,pessimally synchronizable networks resulting from the minimization of ( or , equivalently , maximization of ) , tend to exhibit assortative mixing , i.e. 
grows with , at least up to a finite - size cutoff , as shown in fig .the origin of such a cutoff is evident after realizing that the probability of having large hubs connected to other very large hubs must go to zero since the total number of links present in the system is finite .obviously , the cutoff grows with system size and diverges asymptotically .the highly assortative chain - like topology of pessimal networks can be understood by the necessity of hampering the efficient communication between any two nodes in the system .this is achieved by maximizing the distance between any two hubs by interposing between them a linear chain of poorly - connected nodes .let us underline that the above observation on the effect of disassortative ( assortative ) mixing does not necessarily imply that maximizing disassortativity ( assortativity ) leads to optimal ( pessimal ) synchronizability .this is illustrated in figs .[ evol].a - b ( figs .[ evol].d - e ) , which plot respectively the time evolution of the eigenratio and the correlation coefficient during the optimization processes .the figure regarding optimization shows that the eigenratio is not a monotonic function of .this fact is made explicit in the insets to figs .[ evol].a - b : the asymptotic minimun value of does not correspond to the network for which is minimum ( obtained in the example shown around steps ) . this points out that , despite being a good indicator of synchronizability , disassortativity can not be regarded as an unique topological measure of the stability of the synchronous state .moreover , further correlations apart from the observed assortativity / disassortativity are built up during the optimization process . in fig .[ evol].c we plot the time evolution of the shortest - loop average length , defined as the average over all nodes of the shortest loop passing through each node ; it shows that grows during the minimization of , but again it exhibits a maximum before reaching its optimal - topology value .the tendency towards forming large loops for optimally synchronizable networks was reported before for entangled networks , where the smallest loops tend to be as large as possible .at this point we want to emphasize that the approach we have undertaken here is a constructive one , as opposed to that in , where different random networks with predefined degree distribution and correlations are explored to analyze how the externally - imposed assortative or disassortative correlation affects the eigenratio of the resulting networks .results here complement those in .we have studied the problem of network synchronizability , which is directly related to many other important problems as efficient communication , searchability in the presence of congestion , many computer - science tasks , etc . while generically , the optimal networks for synchronizability , assuming un - weighted and un - directed links , are super - homogeneous , _ entangled _ topologies , in which all nodes look very much alike , in this work we have investigated the nature of optimal scale - free networks . the final goal is to analyze how are the degree - degree correlations of optimal scale - free networks . for that, we have used the standard spectral approach consisting in relating the degree of synchronizability to the laplacian matrix eigenratio . 
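as a complement to the measurements reported above, the two correlation measures used there, the average degree of the neighbours of nodes of degree k and the coefficient r of eq. ([r]), can be computed directly with networkx; the functions below are part of its public api and should match the definitions used here.

```python
import networkx as nx

def correlation_summary(G):
    # Newman's degree-degree correlation coefficient
    r = nx.degree_assortativity_coefficient(G)
    # average degree of the neighbours of nodes of degree k
    knn = nx.average_degree_connectivity(G)
    return r, dict(sorted(knn.items()))

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(500, 3, seed=2)    # illustrative scale-free graph
    r, knn = correlation_summary(G)
    print("assortativity r =", round(r, 3))
    for k, v in knn.items():
        print(f"k = {k:3d}   k_nn(k) = {v:6.2f}")
```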
in a first part of this work ,we have derived a rigorous lower bound for in terms of the correlation coefficient ( as defined by eq.([r ] ) ) , which is a measure of the degree - degree correlations .this lower bound turns out to be proportional , hence , showing that the more negative ( i.e. the more disassortative the network ) the smaller the lower bound and , therefore , the smaller values is allowed to take , and the better the synchronizability of the resulting network . in the second part, we have explicitly constructed optimal networks ( with a fixed number of nodes , links , and a given scale - free degree distribution ) by employing a recently introduced simulated - annealing algorithm .we find that optimal networks tend to be disassortative , as found already in previous studies , and in agreement with the expectations from the previously found lower bound .however , as there is not a one - to - one correspondence between and the correlation coefficient , more disassortative networks do not always synchronize better .actually , we have illustrated how during the optimization process , at some point , the degree of assortativity increases ( i.e. increases ) as the network becomes more and more synchronizable .the emerging optimal networks exhibit also a tendency to have large loops and a rather intricate structure ( as occurs for entangled networks ) . finally , we have reverted the optimization process and , by minimizing , we have found what we call pessimally synchronizable networks .these topologies are characterized by a long string ended by two `` heads '' of nodes with degrees larger than ( required to preserve the scale - free degree distribution ) and are , therefore , highly assortative .contrarily to the case above , loops are very short .these topologies are the worst possible ones ( compatible with the imposed scale - free degree distribution ) if the task is to synchronize the network .but , on the contrary , they constitute the best choice if the goal is to avoid synchronization ( or , equivalently , avoid communicability , searchability , homogenization ) , which might be important for some applications .for instance , in order to maximize the average time that a random infection ( or random walk ) takes to reach an arbitrary target node , this is the type of network to design .we acknowledge financial support from the spanish ministerio de educacin y ciencia ( fis2005 - 00791 ) and junta de andaluca ( fqm-165 ) .10 s. h. strogatz , nature * 410 * , 268 ( 2001 ) .+ a. l. barabsi and r. albert , rev . mod. phys . * 74 * , 47 ( 2002 ) .+ s. n. dorogovtsev and j. f. f. mendes , _ evolution of networks : from biological nets to the internet and www _ , oxford univ . press ( 2003 ) .+ m. e. j. newman , siam review * 45 * , 167 ( 2003 ) .+ s. boccaletti , v. latora , y. moreno , m. chavez , and d .- u .hwang , phys .rep . * 424 * , 175 ( 2006 ) .m. barahona and l. m. pecora , phys .89 * , 054101 ( 2002 ) .+ see also , l. m. pecora and t. l. carroll , phys .lett . * 64 * , 821 ( 1990 ) ; ibid , * 80 * , 2109 ( 1998 ) .+ x. f. wang and g. chen , int .j. bifurcation chaos appl .* 12 * , 187 ( 2002 ) .a. e. motter , c. zhou , and j. kurths , phys .rev e * 71 * , 016116 ( 2005 ) ; aip conference proceedings * 776 * , 201 ( 2005 ) ; europhys . lett . * 69 * , 334 ( 2005 ) . + t. nishikawa and a.e. motter , phys .e * 73 * , 065106 ( 2006 ) .+ a. e. motter , new j. phys .* 9 * , 182 ( 2007 ) .hwang , m. chavez , a. amann , and s. boccaletti , phys .94 * , 138701 ( 2005 ) .+ m. 
chavez , d .- u .hwang , a. amann , h. g. e. hentschel , and s. boccaletti , phys .lett . * 94 * , 218701 ( 2005 ) .+ m. chavez , d .- u .hwang , and s. boccaletti , eur .j. special topics * 146 * , 129 ( 2007 ) .j. gmez - gardees , y. moreno , and a. arenas , phys .* 98 * , 034101 ( 2007 ) .+ j. gmez - gardees , y moreno , and a arenas , phys .e * 75 * , 066106 ( 2007 ) .+ p. n. graw and m. menzinger , phys .rev . * 72 * , 015101 ( 2005 ) .+ e. oh , k. rho , h. hong , and b. khang , phys .e * 72 * , 047101 ( 2005 ) .note that the laplacian eigenvalues , where is the largest degree in the graph .b. bollobs , _ extremal graph theory _ academic press , new york .+ w. tutte , _ graph theory as i have known it _ , oxford u. press , new york , ( 1998 ) .+ f. chung , _ spectral graph theory _, number 92 in cbms region conference series in mathematics .am . math . soc . 1997 .m. di bernardo , f. garofalo , and f. sorrentino , proceedings of the 44th ieee conference on decision and control , pp . 4616 ( 2005 ) .+ f. sorrentino , m. di bernardo , g. huerta , cellar , and s. boccaletti , physica d * 224 * , 123 ( 2006 ) .+ see also , b. wang , h. tang , t. zhou , and z. xiu , arxiv : cond - mat/0512079 .m. girvan , m. e. j. newman , proc .usa , * 99 * , 7821 - 7826 ( 2002 ) .m. e. j. newman , m. girvan , phys .e * 69 * , 026113 ( 2004 ) .see also , l. donetti and m. a. muoz , j. stat .: theor . exp .( 2004 ) p10012 ; l. donetti and m. a. muoz , in `` _ modeling cooperative behavior in the social sciences _ '' , aip conf . proc .779 , 104 ( 2005 ) .
by employing a recently introduced optimization algorithm we explicitly design optimally synchronizable (unweighted) networks for any given scale-free degree distribution. we explore how the optimization process affects degree-degree correlations and observe a generic tendency towards disassortativity. still, we show that there is not a one-to-one correspondence between synchronizability and disassortativity. on the other hand, we study the nature of optimally un-synchronizable networks, that is, networks whose topology minimizes the range of stability of the synchronous state. the resulting ``pessimal networks'' turn out to have a highly assortative string-like structure. we also derive a rigorous lower bound for the laplacian eigenvalue ratio controlling synchronizability, which helps to understand the impact of degree correlations on network synchronizability.
linear regression for interval - valued data has been attracting increasing interests among researchers .see , , , , , , , , , , for a partial list of references .however , issues such as interpretability and computational feasibility still remain .especially , a commonly accepted mathematical foundation is largely underdeveloped , compared to its demand of applications . by proposing our new model, we continue to build up the theoretical framework that deeply understands the existing models and facilitates future developments . in the statistics literature ,the interval - valued data analysis is most often studied under the framework of random sets , which includes random intervals as the special ( one - dimensional ) case .the probability - based theory for random sets has developed since the publication of the seminal book of .see for a relatively complete monograph . to facilitate the presentation of our results, we briefly introduce the basic notations and definitions in the random set theory .let be a probability space .denote by or the collection of all non - empty compact subsets of . in the space , a linear structure is defined by minkowski addition and scalar multiplication , i.e. , and .a natural metric for the space is the hausdorff metric , which is defined as where denotes the euclidean metric .a random compact set is a borel measurable function , being equipped with the borel -algebra induced by the hausdorff metric . for each ,the function defined on the unit sphere : is called the support function of x. if is convex almost surely , then is called a random compact convex set .( see , p.21 , p.102 . )the collection of all compact convex subsets of is denoted by or .when , the corresponding contains all the non - empty bounded closed intervals in .a measurable function is called a random interval .much of the random sets theory has focused on compact convex sets .let be the space of support functions of all non - empty compact convex subsets in .then , is a banach space equipped with the metric ^{\frac{1}{2}},\ ] ] where is the normalized lebesgue measure on . according to the embedding theorems ( see , ) , can be embedded isometrically into the banach space of continuous functions on , and is the image of into .therefore , , , defines a metric on .particularly , let =[x^c - x^r , x^c+x^r]\ ] ] be an bounded closed interval with center and radius , or lower bound and upper bound , respectively .then , the -metric of is and the -distance between two intervals and is ^{\frac{1}{2}}\\ & = & \left[\left(x^c - y^c\right)^2+\left(x^r - y^r\right)^2\right]^{\frac{1}{2}}.\end{aligned}\ ] ] existing literature on linear regression for interval - valued data mainly falls into two categories . in the first, separate linear regression models are fitted to the center and range ( or the lower and upper bounds ) , respectively , treating the intervals essentially as bivariate vectors .examples belonging to this category include the center method by , the minmax method by , the ( constrained ) center and range method by , and the model m by .these methods aim at building up model flexibility and predicting capability , but without taking the interval as a whole .consequently , their geometric interpretations are prone to different degrees of ambiguity .take the constrained center and range method ( ccrm ) for example . 
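before turning to that example, the following small sketch fixes the centre-radius representation of an interval and the metric just introduced (the helper names are ours, introduced only for illustration).

```python
import numpy as np

def interval(center, radius):
    """the interval [center - radius, center + radius] stored as a (center, radius) pair."""
    assert radius >= 0.0
    return np.array([center, radius], dtype=float)

def dist(x, y):
    """distance between two intervals: sqrt((xc - yc)^2 + (xr - yr)^2)."""
    return float(np.hypot(x[0] - y[0], x[1] - y[1]))

def minkowski_sum(x, y):
    return x + y                                    # centers add, radii add

def scalar_mult(lam, x):
    return np.array([lam * x[0], abs(lam) * x[1]])  # radius scales with |lambda|

if __name__ == "__main__":
    a = interval(1.0, 0.5)        # [0.5, 1.5]
    b = interval(2.0, 1.0)        # [1.0, 3.0]
    print(dist(a, b))             # sqrt(1.0**2 + 0.5**2)
    print(minkowski_sum(a, b), scalar_mult(-2.0, a))
```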
adopting the notations in ,it is specified as where and .it follows that ^ 2+\left[\beta_1^r\left(x_i^r - x_j^r\right)\right]^2.\end{aligned}\ ] ] because in general , a constant change in does not result in a constant change in .in fact , a constant change in any metric of as an interval does not lead to a constant change in the same metric of .this essentially means that the model is not linear in intervals . in the second category ,special care is given to the fact that the interval is a non - separable geometric unit , and their linear relationship is studied in the framework of random sets .investigation in this category began with developing a least squares fitting of compact set - valued data and considering the interval - valued input and output as a special case .precisely , he gave analytical solutions to the real - valued numbers and under different circumstances such that is minimized on the data .the pioneer idea of was further studied in , where the -metric was extended to a more general metric called -metric originally proposed by .the advantage of the -metric lies in the flexibility to assign weights to the radius and midpoints in calculating the distance between intervals .so far the literature had been focusing on finding the affine transformation that best fits the data , but the data are not assumed to fulfill such a transformation . a probabilistic model along this direction kept missing until , and simultaneously , proposed the same simple linear regression model for the first time .the model essentially takes on the form of with and , c\in\mathbb{r} ] , .it is equivalently expressed as this leads to the following center - radius specification where , , and the signs " correspond to the two cases in ( [ mod - cases ] ) . define our modelis specified as where , ] by interval - valued predictors ] , , and .we have assumed and are independent in this paper to simplify the presentation .the model that includes a covariance between and can be implemented without much extra difficulty .least squares method is widely used in the literature to estimate the interval - valued regression coefficients ( , , ) .it minimizes on the data with respect to the parameters .denote then the sum of squared -distance between and is written as \\ & = & \sum_{i=1}^{n}\left[\left(b+\sum_{j=1}^{p}a_jx_{j , i}^c - y_i^c\right)^2+\left(\sum_{j=1}^{p}\left|a_j\right|x_{j , i}^r+\mu - y_i^r\right)^2\right].\end{aligned}\ ] ] therefore , the lse of is defined as let be the sample covariances of the centers and radii of and , respectively .especially , when , we denote by and the corresponding sample variances .in addition , define as the sample covariances of the centers and radii of and , respectively .then , the minimization problem ( [ def - ls ] ) is solved in the following proposition .[ prop : ls_solu ] the least squares estimates of the regression coefficients , if they exist , are solution of the equation system : and then , are given by the variance of a compact convex random set in is defined via its support function as where the expectation is defined by aumann integral ( see , ) as see . 
for the case , it is shown by straightforward calculations that ,\\ & & \text{var}(x)=\text{var}\left(x^c\right)+\text{var}\left(x^r\right).\end{aligned}\ ] ] this leads us to define the sums of squares in to measure the variability of interval - valued data .a definition of the coefficient of determination in follows immediately , which produces a measure of goodness - of - fit .the total sum of squares ( sst ) in is defined as .\ ] ] the explained sum of squares ( sse ) in is defined as .\ ] ] the residual sum of squares ( ssr ) in is defined as .\ ] ] the coefficient of determination ( ) in is defined as where and are defined in ( [ def : sst ] ) and ( [ def : ssr ] ) , respectively .analogous to the classical theory of linear regression , our model ( [ mmod-1**])-([mmod-2 * * ] ) together with the ls estimates ( [ def - ls ] ) accommodates the partition of into and . as a result , the coefficient of determination ( )can also be calculated as the ratio of and .the partition has a series of important implications of the underlying model , one of which being that the residual / and the predictor are empirically uncorrelated in . [ thm : ss ] assume model ( [ mmod-1**])-([mmod-2 * * ] ) .let and in ( [ exp - c])-([exp - r ] ) be calculated according to the ls estimates in ( [ def - ls ] ) .then , it follows that the coefficient of determination in is equivalent to it is possible to get negative values of by its definition ( [ exp - r ] ) .theorem [ thm : pred - adjust ] gives an upper bound of the probability of this unfortunate event . if the model largely explains the variability of , should be very small and so is this bound .then , the rare cases of negative can be rounded up to 0 since is nonnegative . otherwise , if most of the variability of lies in the random error , the probability of getting negative predicts may not be ignorable , but it is essentially due to the insufficiency of the model and a different model should be pursued anyway .[ thm : pred - adjust ] consider model ( [ mmod-1**])-([mmod-2 * * ] ) .let be defined in ( [ exp - c])-([exp - r ] ) .then , this section , we study the theoretical properties of the lse for the univariate model ( [ mod-1**])-([mod-2 * * ] ) . applying proposition [ prop : ls_solu ] to the case , we obtain the two sets of half - space solutions , corresponding to and , respectively , as follows : and the final formula for the ls estimates falls in three categories . in the first, there is one and only one set of existing solution , which is defined as the lse . in the second ,both sets of solutions exist , and the lse is the one that minimizes . in the third situation , neither solution exists , but this only happens with probability going to .we conclude these findings in the following theorem .[ thm : ls_solu ] assume model ( [ mod-1**])-([mod-2 * * ] ) .let be the least squares solution defined in ( [ def - ls ] ) .if , then there exists one and only one half - space solution .more specifically , + * i. * if in addition , then the ls solution is given by * ii .* if instead , then the ls solution is given by otherwise , , and then either both of the half - space solutions exist , or neither one exists .in particular , + * iii . *if in addition , then both of the half - space solutions exist , and * iv . *if instead , then the ls solution does not exist , but this happens with probability converging to 0 . 
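a minimal numerical sketch of the least squares fit described in this section: the objective below is the sum of squared center and radius residuals written above, minimized over the coefficients with a derivative-free method (the closed-form half-space solutions of theorem [thm:ls_solu] are not used here; a generic optimizer is our implementation choice). the sums of squares follow the definitions of this section, so on fitted values sst should equal sse + ssr up to optimizer tolerance, as stated in theorem [thm:ss].

```python
import numpy as np
from scipy.optimize import minimize

def fit_interval_lse(Xc, Xr, Yc, Yr):
    """fit Yc ~ b + sum_j a_j Xc_j and Yr ~ mu + sum_j |a_j| Xr_j by minimizing the
    sum of squared (center, radius) residuals; Xc, Xr are (n, p), Yc, Yr are (n,)."""
    n, p = Xc.shape

    def objective(theta):
        a, b, mu = theta[:p], theta[p], theta[p + 1]
        rc = b + Xc @ a - Yc
        rr = mu + Xr @ np.abs(a) - Yr
        return np.sum(rc ** 2) + np.sum(rr ** 2)

    # Nelder-Mead: the |a_j| terms make the objective non-smooth at a_j = 0
    res = minimize(objective, np.zeros(p + 2), method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 50000})
    return res.x[:p], res.x[p], res.x[p + 1]

def sums_of_squares(Xc, Xr, Yc, Yr, a, b, mu):
    """SST, SSE, SSR and R^2, with centers and radii contributing additively."""
    Yc_hat = b + Xc @ a
    Yr_hat = mu + Xr @ np.abs(a)
    sst = np.sum((Yc - Yc.mean()) ** 2 + (Yr - Yr.mean()) ** 2)
    sse = np.sum((Yc_hat - Yc.mean()) ** 2 + (Yr_hat - Yr.mean()) ** 2)
    ssr = np.sum((Yc - Yc_hat) ** 2 + (Yr - Yr_hat) ** 2)
    return sst, sse, ssr, sse / sst
```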
unlike the classical linear regression , ls estimates for the model ( [ mod-1**])-([mod-2 * * ] )are biased .we calculate the biases explicitly in proposition [ prop : ls_exp ] , which are shown to converge to zero as the sample size increases to infinity. therefore , the ls estimates are asymptotically unbiased .[ prop : ls_exp ] let be the least squares solution in theorem [ thm : ls_solu ] . then ,,\end{aligned}\ ] ] .\end{aligned}\ ] ] [ thm : ls_consist ] consider model ( [ mod-1**])-([mod-2 * * ] ) .assume and .then , the least squares solution in theorem [ thm : ls_solu ] is asymptotically unbiased , i.e. as .we carry out a systematic simulation study to examine the empirical performance of the least squares method proposed in this paper .first , we consider the following three models : + * model 1 : , , , , ; * model 2 : , , , , ; * model 3 : , , , , ; where data show a positive correlation , a negative correlation , and a positive correlation with a negative , respectively .a simulated dataset from each model is shown in figure [ fig : sim - data ] , along with its fitted regression line . + to investigate the asymptotic behavior of the ls estimates , we repeat the process of data generation and parameter estimation 1000 times independently using sample size for all the three models. the resulting 1000 independent sets of parameter estimates for each model / sample size are evaluated by their mean absolute error ( mae ) and mean error ( me ) .the numerical results are summarized in table [ tab : sim ] .consistent with proposition [ prop : ls_exp ] , tends to underestimate when and overestimate when .this bias also causes a positive and negative bias in , when and , respectively .similarly , a positive bias in is induced by the negative bias in .all the biases diminish to 0 as the sample size increases to infinity , which confirms our finding in theorems [ thm : ls_consist ] ..evaluation of parameter estimation [ cols="^ , > , > , > , > , > , > , > " , ] [ tab : sim - com ]in this section , we apply our model to analyze the average temperature data for large us cities , which are provided by national oceanic and atmospheric administration ( noaa ) and are publicly available .the three data sets we obtained specifically are average temperatures for 51 large us cities in january , april , and july .each observation contains the averages of minimum and maximum temperatures based on weather data collected from 1981 to 2010 by the noaa national climatic data center of the united states .july in general is the hottest month in the us . 
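before continuing with the application, here is a sketch of the kind of monte carlo experiment reported in the simulation study above; the true parameter values and error laws of models 1-3 are not reproduced in this text, so the numbers below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

def fit_univariate(xc, xr, yc, yr):
    # minimize (b + a*xc - yc)^2 + (mu + |a|*xr - yr)^2 summed over the sample
    obj = lambda t: (np.sum((t[1] + t[0] * xc - yc) ** 2)
                     + np.sum((t[2] + abs(t[0]) * xr - yr) ** 2))
    return minimize(obj, np.zeros(3), method="Nelder-Mead").x

def monte_carlo(n, reps=300, a=2.0, b=1.0, mu=0.5, seed=0):
    """illustrative parameter values and error laws; not the paper's models 1-3."""
    rng = np.random.default_rng(seed)
    est = np.empty((reps, 3))
    for k in range(reps):
        xc = rng.normal(0.0, 2.0, n)        # predictor centers (assumed law)
        xr = rng.uniform(0.1, 1.0, n)       # predictor radii   (assumed law)
        lam = rng.normal(0.0, 0.5, n)       # center error
        eta = rng.normal(0.0, 0.2, n)       # radius error
        yc = a * xc + b + lam
        yr = abs(a) * xr + mu + eta
        est[k] = fit_univariate(xc, xr, yc, yr)
    truth = np.array([a, b, mu])
    return (est - truth).mean(axis=0), np.abs(est - truth).mean(axis=0)

if __name__ == "__main__":
    for n in (30, 100, 300):
        me, mae = monte_carlo(n)
        print(f"n = {n:4d}  mean error (a, b, mu) = {np.round(me, 4)}  mae = {np.round(mae, 4)}")
```

with growing sample size the mean errors should shrink towards zero, in line with the asymptotic unbiasedness established above.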
by this analysis, we aim to predict the summer ( july ) temperatures by those in the winter ( january ) and spring ( april ) .figure [ fig : real - data ] plots the july temperatures versus those in january and april , respectively .the parameters are estimated according to ( [ eqn : lse-1])-([eqn : lse-3 ] ) as denote by , , and , the average temperatures in a us city in january , april , and july , respectively .the prediction for based on and is given by the three sums of squares are calculated to be therefore , the coefficient of determination is finally , the variance parameters can be estimated as thus , by theorem [ thm : pred - adjust ] , an upper bound of on average is estimated to be which is very small and reasonably ignorable .we calculate for the entire sample and all of them are well above .so , for this data , although and it is possible to get negative predicted radius , it in fact never happens because the model has captured most of the variability .the empirical distributions of residuals are shown in figure [ fig : real - residual ] .both distributions are centered at 0 , with the center residual having a slightly bigger tail .we have rigorously studied linear regression for interval - valued data in the metric space .the new model we introduces generalizes previous models in the literature so that the hukuhara difference needs not exist .analogous to the classical linear regression , our model together with the ls estimation leads to a partition of the total sum of squares ( ssr ) into the explained sum of squares ( sse ) and the residual sum of squares ( ssr ) in , which implies that the residual is uncorrelated with the linear predictor in .in addition , we have carried out theoretical investigations into the least squares estimation for the univariate model .it is shown that the ls estimates in are biased but the biases reduce to zero as the sample size tends to infinity .therefore , a bias - correction technique for small sample estimation could be a good future topic .the simulation study confirms our theoretical findings and shows that the least squares estimators perform satisfactorily well for moderate sample sizes . 99 artstein z , vitale , ra . a strong law of large numbers for random compact sets .annals of probability . 1975;5:879882 .aumann rj .integrals of set - valued functions . j. math .1965;12:112 .billard l , diday e. regression analysis for interval - valued data . in : dataanalysis , classification and related methods , proceedings of the seventh conference of the international federation of classification societies ( ifcs00 ) .springer , belgium ; 2000.p.369374 .billard l , diday e. symbolic regression analysis . in : classification , clustering and dataanalysis , proceedings of the eighth conference of the international federation of classification societies ( ifcs02 ) .springer , poland ; 2002.p.281288 .billard l. dependencies and variation components of symbolic interval - valued data . in : selected contributions in data analysis and classification .springer , berlin heidelberg ; 2007.p.312 .blanco - fernndez a , corral n , gonzlez - rodrguez g. estimation of a flexible simple linear model for interval data based on set arithmetic .computational statistics & data analysis .2011;55:25682578 .blanco - fernndez a , colubi a , gonzlez - rodrguez g. confidence sets in a linear regression model for interval data . journal of statistical planning and inference .2012;142:13201329 .carvalho fat , lima neto ea , tenorio , cp . 
a new method to fit a linear regression model for interval - valued data .lecture notes in computer sciences .2004;3238 : 295306 .cattaneo megv , wiencierz a. likelihood - based imprecise regression . international journal of approximate reasoning .2012;53:11371154 .diamond p. least squares fitting of compact set - valued data . j. math .1990;147:531544 .gil ma , lopez mt , lubiano ma , and montenegro m. regression and correlation analyses of a linear relation between random intervals .2001;10,1:183201 .gil ma , lubiano ma , montenegro m , lopez mt .least squares fitting of an affine function and strength of association for interval - valued data . metrika .2002;56 : 97111 .gil ma , gonzlez - rodrguez g , colubi a , montenegro m. testing linear independence in linear models with interval - valued data .computational statistics & data analysis .2007;51:30023015 .gonzlez - rodrguez g , blanco a , corral n , colubi a. least squares estimation of linear regression models for convex compact random sets .advances in data analysis and classification . 2007;1:6781 . hrmander h. sur la fonction dappui des ensembles convexes dans un espace localement convexe .arkiv fr mat .1954;3:181186 .hukuhara m. integration des applications mesurables do nt la valeur est un compact convexe .funkcialaj ekvacioj .1967;10:205223 .kendall dg .foundations of a theory of random sets . in : harding ef and kendall dg ( ed ) stochastic geometry .john wiley & sons , new york ; 1974 .krner r. a variance of compact convex random sets .institut fr stochastik , bernhard - von - cotta - str .2 09599 freiberg ; 1995 .krner r. on the variance of fuzzy random variables .fuzzy sets and systems . 1997;92:8393 .krner r , nther w. linear regression with random fuzzy variables : extended classical estimates , best linear estimates , least squares estimates .information sciences .1998;109 : 95118 .lyashenko nn . limit theorem for sums of independent compact random subsets of euclidean space .journal of soviet mathematics .1982;20:21872196 .lyashenko nn . statistics of random compacts in euclidean space .journal of soviet mathematics .1983;21:7692 .manski cf , tamer t. inference on regressions with interval data on a regressor or outcome .2002;70:519546 .matheron g. random sets and integral geometry .john wiley & sons , new york ; 1975 .molchanov i. theory of random sets .springer , london ; 2005 .lima neto ea , carvalho fat .centre and range method for fitting a linear regression model to symbolic interval data .computational statistics & data analysis . 2008 ; 52:15001515 .lima neto ea , carvalho fat . constrained linear regression models for symbolic interval - valued variables .computational statistics & data analysis . 2010;54:333347 .radstrm h. an embedding theorem for spaces of convex sets .differentiating with respect to , , and , , respectively , and setting the derivatives to zero , we get equations ( [ lse-1])-([lse-2 ] ) yield equations ( [ eqn : lse-1 ] ) are obtained by plugging ( [ lse-4])-([lse-5 ] ) into ( [ lse-3 ] ) , and equations ( [ eqn : lse-2])-([eqn : lse-3 ] ) follow from ( [ lse-4])-([lse-5 ] ) .this completes the proof . 
according to definitions ( [ def : sst])-([def : ssr ] ) , \nonumber\\ & = & sse+ssr+2\sum_{i=1}^{n}\left[\left(y_i^c-\hat{y}_i^c\right)\left(\hat{y}_i^c-\overline{y^c}\right ) + \left(y_i^r-\hat{y}_i^r\right)\left(\hat{y}_i^r-\overline{y^r}\right)\right]\nonumber\\ &= & sse+ssr+2\sum_{i=1}^{n}\left[\left(y_i^c-\hat{y}_i^c\right)\hat{y}_i^c+\left(y_i^r-\hat{y}_i^r\right)\hat{y}_i^r\right].\label{ss : eqn-1}\end{aligned}\ ] ] the last equation is due to ( [ lse-1])-([lse-2 ] ) .further in view of ( [ exp - c])-([exp - r ] ) and ( [ lse-3 ] ) , we have \\ & = & \sum_{i=1}^{n}\left[\left(y_i^c-\hat{y}_i^c\right)\sum_{j=1}^{p}a_jx_{j , i}^c+\left(y_i^r-\hat{y}_i^r\right)\sum_{j=1}^{p}|a_j|x_{j , i}^r\right]\\ & = & \sum_{j=1}^{p}a_j\sum_{i=1}^{n}\left[\left(y_i^c-\hat{y}_i^c\right)x_{j , i}^c+\left(y_i^r-\hat{y}_i^r\right)sgn(a_j)x_{j , i}^r\right]\\ & = & 0.\end{aligned}\ ] ] this together with ( [ ss : eqn-1 ] ) completes the proof .notice that an application of markov s inequality completes the proof .parts * i * , * ii * and * iii * are obvious from proposition [ prop : ls_solu ] .part * iv * follows from lemma [ lem : cov_r ] in appendix ii .we prove the cases and separately .to simplify notations , we will use throughout the proof , but the expectation should be interpreted as being conditioned on . + * case i : *. + from lemma [ lem : cov_est ] , we have + \sum_{i < j}(x_i^r - x_j^r ) \left [ ( y_i^r - y_j^r)-a(x_i^r - x_j^r ) \right]}{\sum_{i< j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2}\\ = & \frac{\sum_{i < j}(x_i^c - x_j^c)(\lambda_i-\lambda_j)+\sum_{i < j}(x_i^r - x_j^r)(\eta_i-\eta_j ) } { \sum_{i < j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2}.\\\end{aligned}\ ] ] this immediately yields similarly , } { \sum_{i < j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2},\ ] ] and consequently , notice now \\ & & -\left[\int_{\{\hat{a}=a^-\}}(a^+-a ) \mathrm{d}\mathbb{p}+\int_{\{\hat{a}=a^-\}}(a^--a ) \mathrm{d}\mathbb{p } \right]\nonumber\\ & = & e\left(a^+-a\right)-\int_{\{\hat{a}=a^-\}}(a^+-a^- ) \mathrm{d}\mathbb{p}\nonumber\\ & = & -e(a^+-a^-)i_{\{\hat{a}=a^-\}}\label{eqn-3}.\end{aligned}\ ] ] here , equation ( [ eqn-3 ] ) is due to ( [ eqn-1 ] ) . recall that } { \sum_{i < j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2},\label{a+-a-}\end{aligned}\ ] ] since .therefore , } { \sum_{i < j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2}\right\}i_{\{\hat{a}=a^-\}}\nonumber\\ & = & -\frac{2\sum_{i < j}\left[|a|(x_i^r - x_j^r)^2p(\hat{a}=a^-)+(x_i^r - x_j^r)e(\eta_i-\eta_j)i_{\{\hat{a}=a^-\}}\right ] } { \sum_{i < j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2}\nonumber\\ & = & -\frac{2\sum_{i < j}(x_i^r - x_j^r)^2p(\hat{a}=a^-)}{\sum_{i < j}(x_i^c - x_j^c)^2+\sum_{i < j}(x_i^r - x_j^r)^2}\nonumber\\ & = & -\frac{2as^2(x^r)}{s^2(x^c)+s^2(x^r)}p(\hat{a}=a^-).\label{rst-1}\end{aligned}\ ] ] similar to the preceding arguments , recall again that }{s^2\left(x^c\right)+s^2\left(x^r\right)}.\label{a++a-}\end{aligned}\ ] ] it follows that * case ii : * + in this case , we have }{s^2(x^c)+s^2(x^r)},\\ & a^--a = \frac{\sum_{i < j}(x_i^c - x_j^c)(\lambda_i-\lambda_j)-\sum_{i < j}(x_i^r - x_j^r)(\eta_i-\eta_j)}{s^2(x^c)+s^2(x^r)}.\\\end{aligned}\ ] ] these imply similar to the case of , we obtain these , together with ( [ a+-a- ] ) and ( [ a++a- ] ) , imply , the desired result follows from ( [ rst-1 ] ) , ( [ rst-2 ] ) , ( [ rst-3 ] ) and ( [ rst-4 ] ) . 
from ( [ b+ ] ) and ( [ b- ] ) , similarly , from ( [ mu+ ] ) and ( [ mu- ] ) , hence , the desired result follows by proposition [ prop : ls_exp ] and lemma [ lem : sign - consist ] in the appendix .[ lem : cov_r ] assume model ( [ mod-1**])-([mod-2 * * ] ) and . then . consequently , with probability converging to 1 . according to ( [ mod-2 * * ] ) , -e\left(x^r\right)e\left(|a|x^r+\eta_1\right)\nonumber\\ & = & |a|e\left(x^r\right)^2+\mu e\left(x^r\right)-|a|\left[e\left(x^r\right)\right]^2-\mu e\left(x^r\right)\nonumber\\ & = & |a|\text{var}\left(x^r\right)\nonumber\\ & \geq & 0,\label{cov_true}\end{aligned}\ ] ] provided that . by the slln , ( [ cov_sample ] ) together with ( [ cov_true ] ) completes the proof . to prove , \\ & = n\sum_{i=1}^nx_i^vy_i^v-(\sum_{i=1}^nx_i^v)(\sum_{i=1}^ny_i^v)=n^2s\left(x^v , y^v\right).\\\end{aligned}\ ] ] follows by replacing with and with in the above calculations .[ lem : sign - consist ] assume model ( [ mod-1**])-([mod-2 * * ] ) .assume in addition that and .let be the least squares solution defined in ( [ def - ls ] ) .then as .we prove the case only .the case can be proved similarly . under the assumption that , and consequently , . according to theorem [ thm : ls_solu ] , the only other circumstance under which is when and simultaneously .it is therefore sufficient to show that notice \\ & & + \frac{1}{n}\sum_{i=1}^{n}\left[\left(a^+x_i^r+\mu - y_i^r\right)^2-\left(a^-x_i^r+\mu - y_i^r\right)^2\right]\\ : = & & \frac{1}{n}\left(i+ii\right).\end{aligned}\ ] ] the first term \\ & = & \sum_{i=1}^{n}\left[\left(a^{+}-a\right)^2\left(x_i^c-\overline{x^c}\right)^2+\left(\lambda_i-\overline{\lambda}\right)^2 -2\left(a^{+}-a\right)\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\right]\\ & & -\sum_{i=1}^{n}\left[\left(a^{-}-a\right)^2\left(x_i^c-\overline{x^c}\right)^2+\left(\lambda_i-\overline{\lambda}\right)^2 -2\left(a^{-}-a\right)\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\right]\\ & = & \left[\left(a^{+}-a\right)^2-\left(a^{-}-a\right)^2\right]\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2\\ & & -2\left(a^{+}-a^{-}\right)\sum_{i=1}^{n } \left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\\ & = & \left(a^{+}-a^{-}\right)\left[\left(a^{+}+a^{-}-2a\right)\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2 -2\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\right].\end{aligned}\ ] ] from this , and the assumption that , we see that is equivalent to on the other hand , \sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2 -\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\\ & & = \left[\frac{\sum_{i < j}\left(x_i^c - x_j^c\right)\left(\lambda_i-\lambda_j\right)}{\sum_{i < j}\left(x_i^c - x_j^c\right)^2+\sum_{i < j}\left(x_i^r - x_j^r\right)^2 } -a\frac{s^2\left(x^r\right)}{s^2\left(x^c\right)+s^2\left(x^r\right)}\right]\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2\\ & & -\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\\ & & = \frac{\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2}{\sum_{i < j}\left(x_i^c - x_j^c\right)^2+\sum_{i < j}\left(x_i^r - x_j^r\right)^2 } \sum_{i < j}\left(x_i^c - x_j^c\right)\left(\lambda_i-\lambda_j\right)\\ & & -\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right ) -a\frac{s^2\left(x^r\right)}{s^2\left(x^c\right)+s^2\left(x^r\right)}\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2\\ & & = 
\frac{\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2}{\sum_{i < j}\left(x_i^c - x_j^c\right)^2+\sum_{i < j}\left(x_i^r - x_j^r\right)^2 } \left[n\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right)\right]\\ & & -\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right ) -a\frac{s^2\left(x^r\right)}{s^2\left(x^c\right)+s^2\left(x^r\right)}\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2\\ & & = \sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)\left(\lambda_i-\overline{\lambda}\right ) \left[\frac{s^2\left(x^c\right)}{s^2\left(x^c\right)+s^2\left(x^r\right)}-1\right]\\ & & -a\frac{s^2\left(x^r\right)}{s^2\left(x^c\right)+s^2\left(x^r\right)}\sum_{i=1}^{n}\left(x_i^c-\overline{x^c}\right)^2\\ & & = -\frac{s^2\left(x^r\right)}{s^2\left(x^c\right)+s^2\left(x^r\right ) } n\left[as^2\left(x^c\right)+s\left(x^c , \lambda\right)\right],\end{aligned}\ ] ] where denotes the sample covariance of the random variables and , which converges to almost surely by the independence assumption .therefore , \nonumber\\ & & \to c_1<0\label{eqn : consist-2}\end{aligned}\ ] ] almost surely , as . + by the similar calculation , we have that the second term \nonumber\\ & & \to c_2<0\label{eqn : consist-3}\end{aligned}\ ] ] almost surely , as .( [ eqn : consist-2 ] ) and ( [ eqn : consist-3 ] ) together imply that this completes the proof .
linear regression for interval-valued data has been investigated for some time. in this paper, we study it within the framework of random sets. the model we propose generalizes a series of existing models. we establish important properties of the model in the space of compact convex subsets of , analogous to those of classical linear regression. furthermore, we carry out theoretical investigations into the least squares estimation that is widely used in the literature. a simulation study is presented that supports our theorems. finally, an application to a climate data set demonstrates the applicability of our model.
modelling the flow of complex fluids is a very intricate problem which is far from being solved up to now . besides studies which aim at improving phenomenological rheological models ( purely macroscopic constitutive laws ), only a few attempts are made to recover the rheological behavior of a complex fluid from elementary physical processes arising in its microstructure . + the mesoscopic model which has been proposed by hbraud and lequeux in deals with simple shear flows of concentrated suspensions .it is obtained by dividing the material in a large number of mesoscopic elements ( `` blocks '' ) with a given shear stress ( is a real number ; it is in fact an extra - diagonal term of the stress tensor in convenient coordinates ) and by considering the evolution of the probability density which represents the distribution of stress in the assembly of blocks . under various assumptions on the evolution of the stresses of the blocks which will be described below, the equation for the probability density for a block to be under stress at time may be written as : [ eq : syst - p ] _ tp =- b(t ) _p+d(p(t ) ) ^2_p - p+_0 ( ) ( 0;t);[eq : p ] + p0 ; + p(0,)=p_0 ( ) , [ eq : p0er ] where for , we denote by in equation ( [ eq : p ] ) , } ] and the dirac delta function on .each term arising in the above equation ( hl equation in short ) has a clear physical interpretation .when a block is sheared , the stress of this block evolves with a variation rate proportional to the shear rate ( is an elasticity constant ) ; in this study , the shear rate , and therefore the function , are assumed to be in .when the modulus of the stress overcomes a critical value , the block becomes unstable and may relax into a state with zero stress after a characteristic relaxation time .this phenomenon induces a rearrangement of the blocks and is modelled through the diffusion term .the diffusion coefficient is assumed to be proportional to the amount of stress which has to be redistributed by time unit and the positive parameter is supposed to represent the mechanical fragility " of the material . in all that follows ,the parameters , and are positive , and the initial data in is a given probability density ; that is we will be looking for solutions in such that belongs to to the nonlinear parabolic partial differential equation .the subscript refers to integration over with respect to , whereas the subscript refers to time integration on ] is the characteristic function of the interval - 1,1[ ] is a stationary solution to this equation and for this solution is identically zero .but it is not the unique solution to in .it is indeed possible to construct a so - called _ vanishing viscosity solution _ for which for all , and there are actually infinitely many solutions to this equation .( this statement is obtained as a corollary of lemma [ lem:2 ] in section [ sec : deg ] below . 
) as far as equation ( [ eq : syst - p ] ) is concerned , we show that , in the case when and , we may have either a unique or infinitely many solutions , depending on the initial data ( see proposition [ prop : deg ] in section [ sec : deg ] ) .on the other hand , we are able to prove the following existence and uniqueness result in the non - degenerate case when : [ th : main1 ] let the initial data satisfy the conditions and assume that then , for every , there exists a unique solution to the system in .moreover , , for all , and for every there exists a positive constant such that besides so that the average stress is well - defined by ( [ eq : def - tau ] ) in .the first step toward the existence proof of solutions to will consist in the study of so - called vanishing viscosity approximations , which are the unique solutions to the family of equations [ syst : p - eps ] _ tp_=-b(t)_p_+(d(p_(t))+)^2_p _ -_p_+ _ 0 ( ) ; [ eq : p - eps ] + p_0 ; + p_(0,)=p_0 ( recall that we have rescaled the time and stress units to get and ) .section [ sec : visc ] below is devoted to the proof of the following [ prop : visc ] let be given .we assume that the initial data satisfies the same conditions as in the statement of the theorem .then , for every and , there exists a unique solution to in .moreover , , , and for every , there exist positive constants , and which are independent of such that and theorem ( [ th : main1 ] ) is then proved in section [ sec : nondeg ] while the degenerate case is investigated in section [ sec : deg ] .lastly , the description of stationary solutions in the constant shear rate case is carried out in section [ sec : stat ] .this section is devoted to the proof of proposition [ prop : visc ] .+ we begin with the following : [ lem : unique ] let satisfy .then for every and , there exists at most one solution to in .moreover , ( thus , the initial condition makes sense ) and for almost every in ] , on ] to obtain we bound from above the terms on the right - hand side as follows . first , we have thanks to ( [ c.1 ] ) and using that and .next , thanks again to ( [ c.1 ] ) , cauchy - schwarz inequality and since is in . finally , and the right - hand side goes to as goes to infinity since is in .all this together yields for almost every in ] .therefore the source term in is in and the existence and the uniqueness of a solution to the system is well - known ( see for example ) . in particular, the initial condition makes sense . owing to the fact that the source term is non -negative , the proof that is also standard ( see again ) .we now check the pointwise inequality .+ this is ensured by the maximum principle with observing that and given respectively by and are the unique solutions to the systems and respectively .we now turn to the proof of statement _ i. _ and assume that belongs to .then , using the two facts that for every , and , is easily deduced from with the help of and since .+ suppose now that .this together with the assumption , guarantees that ( see also below ) . using again ,we now have since . with the help of and observing that , we then deduce .+ we now use this bound to check that and . indeed , for any , any sequence in ] to obtain * step 3 * : _ the function is continuous ._ we consider a sequence in such that converges to strongly in and converges to strongly in , and we denote .we have to prove that converges strongly to in and converges to strongly in , with . 
in virtue of ( [ eq : bd - l2 ] ) and ( [ in : endsigmap ] ) , the sequence is bounded in , is bounded in and is bounded in . since is bounded in , and is bounded in , is bounded in .this together with the fact that is bounded in implies that , up to a subsequence , converges strongly towards in ( the convergence being weak in ) thanks to a well - known compactness result .in particular , converges to almost everywhere .thus and by the fatou s lemma , almost everywhere on ] .+ being given an initial data which satisfies , existence of a solution is ensured from proposition [ prop:2 ] by applying the schauder fixed point theorem on `` short '' time interval ] with , where latexmath:[ ] .it is now clearly seen that for any integer we may build a solution to on ] to deduce with using that . main result of this section corresponds to the statement of theorem [ th : main1 ] and fully describes the issue of existence and uniqueness of solutions to the hl equation in the non - degenerate case .it is summarized in the following : [ prop : non - deg ] let satisfy .we assume that .then , the hl equation has a unique solution in and is the limit ( in ) of when goes to where is the vanishing viscosity solution whose existence and uniqueness is ensured by proposition [ prop : visc ] .moreover , , and .furthermore , and for every there exists a positive constant such that we begin with proving the following : [ lem : non - degeneracy ] we assume that satisfies . then ,if , for every ] .the function is a gaussian probability density with mean and squared width .therefore , for every , we have which implies in the zero shear case ( , thus ) the proof is over and in the general case , a strictly positive bound from below is available as long as the support of is not contained in .we thus define then ( possibly even infinite ) , the support of is contained in , and for every , holds for some positive constant defined by it is worth emphasizing that this quantity is independent of .if , the proof is over and fits .let us now examine the case when and .+ we go back to , take in ] which goes to as goes to infinity . to shorten the notation we denote by instead of the corresponding sequence of solutions to . with the above bound on and , we know that is bounded in independently of .moreover thanks to and is bounded in and we also dispose of a uniform bound on in virtue of . therefore arguing exactly as in the proof of proposition [ prop:2 ] ( step 4 ) where we have proved that the mapping is relatively compact in we show that converges to some strongly in and converges to in . then is a solution to the initial problem in , and . moreover , this non - degeneracy condition on the viscosity coefficient ensures that there is at most one solution to in ( this follows by an obvious adaptation of the proof of lemma [ lem : unique ] to this case ) .therefore the limiting function is uniquely defined and does not depend on the sequence .moreover the whole sequence converges to this unique limit and not only a subsequence . 
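To make the objects in these existence results concrete, the following sketch time-steps the viscosity-regularised evolution (the family used in the vanishing-viscosity argument above) on a truncated stress interval with an explicit upwind/centred finite-difference scheme. The truncation length, the grid, the time step and the values of the fragility, the shear term and the added viscosity are illustrative assumptions, with the time and stress units taken rescaled as in the text; setting the added viscosity to zero recovers the original equation whenever the self-consistent diffusion coefficient stays positive.

```python
import numpy as np

# Explicit time-stepping sketch for the viscosity-regularised HL equation on a truncated
# stress interval.  The truncation, grid, time step and parameter values (alpha, b, eta) are
# illustrative assumptions; eta = 0 recovers the original equation when D(p) stays positive.
L, J = 6.0, 601
sigma = np.linspace(-L, L, J)
h = sigma[1] - sigma[0]
chi = (np.abs(sigma) > 1.0).astype(float)        # indicator of the unstable zone |sigma| > 1
i0 = np.argmin(np.abs(sigma))                    # node at sigma = 0, carrying the Dirac mass

alpha, eta = 0.3, 1e-3                           # fragility and added viscosity
b = lambda t: 0.5                                # shear term b(t), taken constant here
p = np.exp(-sigma ** 2 / 0.1)
p /= h * p.sum()                                 # normalised initial probability density

dt, T = 2.0e-5, 0.5
for k in range(int(T / dt)):
    D = alpha * h * np.sum(chi * p) + eta        # self-consistent diffusion coefficient plus eta
    relax = h * np.sum(chi * p)                  # mass of unstable blocks relaxing per unit time
    dp = np.zeros_like(p)
    dp[1:-1] = (-b(k * dt) * (p[1:-1] - p[:-2]) / h            # upwind advection (b > 0)
                + D * (p[2:] - 2 * p[1:-1] + p[:-2]) / h ** 2  # diffusion
                - chi[1:-1] * p[1:-1])                          # loss of unstable blocks
    dp[i0] += relax / h                          # reinjection at sigma = 0 (discrete delta)
    p = np.maximum(p + dt * dp, 0.0)
print("mass:", h * p.sum(), "  D(p(T)) - eta:", alpha * h * np.sum(chi * p))
```

Running this, one can monitor the self-consistent coefficient and check that the total mass stays (approximately) equal to one, which is the property the a priori estimates above encode.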
+ as a conclusion of this subsection let us make the following comment which is a byproduct of proposition [ prop : non - deg ] .let be a solution to in , then as soon as is positive for some time it remains so afterwards since the solution can be continued in a unique way beginning from time .throughout this section we assume that and therefore the support of is included in ] .on , the hl equation reads the above system reduces to } \;.\ ] ] therefore there exists a maximal time interval ] .besides , [ lem:1 ] let such that the function is in ,+\infty[) ] .in addition , on ,+\infty[ ] , and that on ,+\infty[ ] .thus , for any it follows that for any , [ lem:2 ] let and such that let us consider the problem 1 .if satisfies ( [ eq : condfp0 ] ) then is the unique solution to ( [ eq : ii ] ) in ; 2 . otherwise , ( [ eq : ii ] ) has an infinite number of solutions in .the set of solutions to ( [ eq : ii ] ) is made of the steady state and of the functions defined by where is the unique solution to in such that on ,+\infty[ ] fulfills the assumptions of the above lemma and .therefore there are infinitely many solutions to the equation in the introduction .* proof of corollary [ cor:1 ] : * the only point to be checked is that . with the standard notation , and by using and symmetry considerations , simple calculations yield .\end{aligned}\ ] ] since for going to , near and the integrability of on ] the same reasoning as in the non - degenerate case leads to the conclusion that converges up to an extraction to in ,+\infty[\times \rr) ] for any , being a solution to in ,+\infty[,l^2_\sigma) ] .throughout this section the shear rate is assumed to be a given constant and we are looking for solutions in to the following system : * if , any probability density which is compactly supported in ] is a solution to the system since in that case all terms in equation cancel .we now examine the issue of existence of solutions of such that . for simplicitywe denote . for given constant ,it is very easy to calculate explicitly the solutions of on each of the three regions , $ ] and . using compatibility conditions on and the fact that has to be in one obtains : the compatibility condition happens to be then automatically satisfied and the normalization constraint imposes that solves since , we immediately reach a contradiction when , whereas when equation admits a unique positive solution ; namely * the case when * + first of all , we observe that if every term in equation but vanish .thus has to be a non - zero constant which is in contradiction with .so necessarily . forgiven positive constant , we then solve as above and obtain with and it is tedious but easy to check that this function always fulfills the self - consistency condition and that the normalization condition reads for any ( the negative values of are dealt with by replacing by ) , the left - hand side of ( [ eq : norm - new ] ) is a continuous function which goes to when goes to infinity and goes to zero when goes to .this already ensures the existence of at least one steady state for any .moreover , setting ( for example ) we may rewrite the left - hand side of ( [ eq : norm - new ] ) as f(z ) = + .next we check that the function is monotone decreasing ( thus , the left - hand side of is increasing with respect to ) , whence the uniqueness result .
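A purely numerical way to reach the same stationary states, without using the explicit piecewise formulas of this section, is a fixed-point iteration on the self-consistency relation: for a trial diffusion coefficient, solve the stationary linear problem (with the reinjection mass at the origin and a zero-flux truncation of the stress axis) for a normalised density, then update the coefficient from the mass lying outside the unit interval. The sketch below does this with a mass-conserving finite-volume discretisation; the grid, the truncation and the parameter values are assumptions made only for illustration.

```python
import numpy as np

# Fixed-point iteration for the stationary state at constant shear: for a trial D the linear
# stationary problem (with reinjection at sigma = 0 and zero-flux truncation) is solved for a
# normalised density, and D is then updated from D = alpha * int_{|sigma|>1} p.
L, J = 8.0, 401
sigma = np.linspace(-L, L, J); h = sigma[1] - sigma[0]
chi = (np.abs(sigma) > 1.0).astype(float)
i0 = np.argmin(np.abs(sigma))
alpha, b = 0.3, 0.8                              # fragility and shear term (illustrative)

def stationary_density(D):
    A = np.zeros((J, J))
    for i in range(J - 1):                       # flux through the face between cells i and i+1
        A[i, i] += -(b / h + D / h ** 2); A[i, i + 1] += D / h ** 2      # upwind advection, b > 0
        A[i + 1, i] += b / h + D / h ** 2; A[i + 1, i + 1] += -D / h ** 2
    A -= np.diag(chi)                            # loss of unstable blocks
    A[i0, :] += chi                              # reinjection of the same mass at sigma = 0
    p = np.linalg.svd(A)[2][-1]                  # columns of A sum to zero, so A has a kernel
    p = np.abs(p)
    return p / (h * p.sum())

D = 0.1
for _ in range(100):
    p = stationary_density(D)
    D = alpha * h * np.sum(chi * p)
print("stationary D:", D, "  self-consistency check:", alpha * h * np.sum(chi * p))
```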
the mathematical properties of a nonlinear parabolic equation arising in the modelling of non - newtonian flows are investigated . the peculiarity of this equation is that it may degenerate into a hyperbolic equation ( in fact a linear advection equation ) . depending on the initial data , at least two situations can be encountered : the equation may have a unique solution in a convenient class , or it may have infinitely many solutions .
communication networks are increasingly being taxed by the enormous demand for instantly available , streaming multimedia .ideally , we would like to maximize the reliability and data rate of a system while simultaneously minimizing the delay . yet , in the classical fixed blocklength setting , the reliability function of a code goes to zero as the rate approaches capacity even in the presence of feedback .this seems to imply that , close to capacity , it is impossible to keep delay low and reliability high .however , this lesson is partially an artifact of the block coding framework .the achievable tradeoff changes in a streaming setting where all bits do not need to be decoded by a fixed deadline , but rather , each individual bit must be recovered after a certain delay . in thissetting , the reliability function measures how quickly the error probability on each bit estimate decays as a function of the delay .surprisingly , the achievable error exponent can be quite large at capacity if a noiseless feedback link is available and cleverly exploited .the distinguishing feature of these streaming architectures with feedback is the use of an ultra - reliable special codeword that is transmitted to notify the decoder when it is about to make an error . while this `` red alert '' codeword requires a significant fraction of the decoding space to attain its very large error exponent , the remaining `` standard '' codewords merely need their error probability to vanish in the blocklength .one question that seems intimately connected to the streaming delay - reliability tradeoff is how large the red alert error exponent can be made for a fixed blocklength codebook of a given rate . beyond this streaming motivation, the red alert problem is also connected to certain sensor network scenarios .for example , consider a setting where sensors must send regular updates to the basestation using as little power as possible , i.e. , using the standard codewords .if an anomaly is detected , the sensors are permitted to transmit at higher power in order to alert the basestation with high reliability , which corresponds to our red alert problem .prior work has characterized the red alert exponent for discrete memoryless channels ( dmcs ) . in this paper , we determine the red alert exponent for point - to - point additive white gaussian noise ( awgn ) channels that operate under block power constraints on both the regular and red alert messages .we derive matching upper and lower bounds on the red alert exponent with a focus on the resulting high - dimensional geometry of the decoding regions .our code construction can be viewed as a generalization of that used in the discrete case .previous studies on protecting a special message over a dmc have relied on some variant of the following code construction .first , designate the special codeword to be the repetition of a particular input symbol .then , generate a fixed composition codebook at the desired rate .this composition is chosen to place the `` standard '' codewords as far as possible from the special codeword ( as measured by the kullback - leibler ( kl ) divergence between induced output distributions ) while still allocating each codeword a decoding region large enough to ensure a vanishing probability of error . by construction ,the rest of the space is given to the special codeword .early work by kudryashov used this strategy to achieve very high error exponents in the bit error setting under an expected delay constraint . 
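The composition choice described in the preceding paragraph can be phrased as a small optimisation: among input distributions that still support the target rate, pick the one whose induced output law lies farthest, in KL divergence, from the output law of the special repeated symbol. The sketch below carries this out by grid search for a binary-input channel; the channel matrix and the target rate are illustrative assumptions, not values taken from the works cited.

```python
import numpy as np

# Composition search for a binary-input DMC given by a row-stochastic matrix W
# (rows = inputs, columns = outputs).  The special codeword repeats input symbol 0; among
# compositions that still support rate R, we keep the one whose induced output distribution
# is farthest (in KL divergence) from the output distribution of the special symbol.
W = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # illustrative channel
R = 0.15                        # target rate in nats per channel use (illustrative)

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def mutual_information(px, W):
    py = px @ W
    return sum(px[i] * kl(W[i], py) for i in range(len(px)))

best = None
for rho in np.linspace(0.0, 1.0, 2001):          # probability of using input symbol 1
    px = np.array([1.0 - rho, rho])
    if mutual_information(px, W) >= R:           # composition still supports the desired rate
        exponent = kl(px @ W, W[0])              # divergence of induced output law from the special one
        if best is None or exponent > best[1]:
            best = (rho, exponent)
print("best composition rho =", best[0], "  red-alert exponent ~", best[1], "nats")
```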
in , borade , nakibolu , and zhengstudy `` bit''-wise and `` message''-wise unequal error protection ( uep ) problems and error exponents .the red alert problem is a message - wise uep problem in which one message is special and the remaining messages are standard . while focuses on general dmcs near capacity , lemma 1 of that paper develops a general sharp bound on the red alert exponent for dmcs at any rate below capacity ( both with and without feedback ) .specializing to the exponent achieved at capacity , let denote the input alphabet , the channel transition matrix , and the capacity - achieving output distribution of the dmc .then , the optimal red alert exponent at capacity is where is the kl divergence .we also mention recent work by nakibolu __ that considers the generalization where a strictly positive error exponent is required of the standard messages . for the binary symmetric channel ( bsc ), the optimal red alert exponent has a simple and illustrative form .this exponent can be inferred from the general expression in ( * ? ? ?* lemma 1 ) or via a direct proof due to sahai and draper ( which appeared concurrently with the conference version of ) .let denote the crossover probability of the bsc and the probability that a symbol in the codebook is a one .then , the optimal red alert exponent as a function of rate for the bsc is where , , and .csiszr studied a related problem where multiple special messages require higher reliability in .upper bounds for multiple special messages with different priority levels were also developed in . in ,borade and sanghavi examined the red alert problem from a coding theoretic perspective .as shown by wang , similar issues arise in certain sparse communication problems where the receiver must determine whether a codeword was sent or the transmitter was silent .the fundamental mechanism through which high red alert exponents are achieved is a binary hypothesis test . by designing the induced distributions at the output of the channel to be far apart as measured by kl divergence , we can distinguish whether the red alert or some standard codeword was sent .the test threshold is biased to minimize the probability of missed detection and is analyzed via an application of stein s lemma .this sort of biased hypothesis test occurs in numerous other communication settings with feedback , such as and , as mentioned earlier , these codes are also used as a component in streaming data systems ( see , for instance , ) .there is also a rich literature on the interplay between hypothesis testing and information theory , which we can not do justice to here ( see , for instance , ) .first , we mention some of our notational choices . we will use boldface lowercase letters to denote column vectors , to denote the all zeros vector , and to denote the all ones vector . throughout the paper , the functionis taken to be the natural logarithm and rate is measured in nats instead of bits .we use to denote the euclidean norm of the vector .the transmitter has a _ message _ that it wants to convey to the receiver .one of the messages , , is a red alert message that will be afforded extra error protection .we assume the red alert message is chosen with some probability greater than and the remaining messages are chosen with equal probability . 
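Returning to the binary example quoted above: under the constant-composition reading used again in appendix [ a : cone ] (output law Bernoulli(δ) when the all-zeros red-alert codeword is sent, and Bernoulli(ρ(1-δ)+(1-ρ)δ) when a composition-ρ standard codeword is sent), the rate and the red-alert exponent can be evaluated directly. Treat this pairing as a working assumption used for illustration rather than a restatement of the cited closed form.

```python
import numpy as np

def h_b(x):
    """Binary entropy in nats."""
    return 0.0 if x in (0.0, 1.0) else -x * np.log(x) - (1 - x) * np.log(1 - x)

def d_b(a, b):
    """Binary KL divergence D(Bern(a) || Bern(b)) in nats."""
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

delta = 0.1                       # BSC crossover probability (illustrative)
for rho in (0.5, 0.7, 0.9):       # composition of the standard codewords
    q = rho * (1 - delta) + (1 - rho) * delta    # output 1-probability under a standard codeword
    rate = h_b(q) - h_b(delta)                   # nats/use supported by composition rho
    exponent = d_b(q, delta)                     # missed-detection exponent of the Stein-type test
    print(f"rho={rho:.2f}  rate={rate:.3f}  red-alert exponent={exponent:.3f}")
```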
the _ encoder _ maps the message into a length- real - valued codeword for transmission over the channel , .let denote the codeword used for message and let denote the entire codebook , .the codebook must satisfy both an average block power constraint across codewords , in addition , the red alert codeword must satisfy a less stringent power constraint , for some . the _ rate _ of the codebook is nats per channel use .note that our codebook average power constraint ( [ e : avgpower ] ) is less strict that the usual block power constraint .our achievable scheme can be easily modified to meet this constraint using expurgation .furthermore , our red alert power constraint ( [ e : alertpower ] ) is less strict than a peak power constraint , where denotes the symbol of the red alert codeword .our scheme sets the red alert codeword to be , which naturally satisfies a peak power constraint .therefore , our main results hold under an average power constraint and peak power constraint as well .we omit the red alert codeword from the average block power constraint for the sake of simplicity .another possibility would be to consider only an average block power constraint over both the standard and red alert codeword .this would lead to two different tensions between maximizing the red alert exponent and maximizing the rate .the first would be the allocation of the decoding regions and the second would be the allocation of power based on the probability of a red alert message . by using two separate power constraints , we can state our results in a simpler form that does not depend on the red alert probability .the _ channel _ outputs the transmitted vector , corrupted by independent and identically distributed ( i.i.d . ) gaussian noise : where for some noise variance .the signal observed by the receiver is sent into a _ decoder _ which produces an estimate of the transmitted message , . we are concerned with three quantities , the _ probability of missed detection _ of the red alert message , the _ probability of false alarm _ , and the _ average probability of error _ of all other messages : we say that a _ red alert exponent _ of is achievable if for every and large enough , there exists a rate encoder and a decoder such that in other words , we would like the red alert codeword to have as large an error exponent as possible while keeping the other error probabilities small . the standard codewords do not need to have a positive error exponentof course , the rate must be lower than the awgn capacity , , where we now review some basic facts of high - dimensional geometry that will be useful in our analysis .let denote the -dimensional ball centered at with radius .recall that the volume of is where is the gamma function ( * ? ? ?we define to be the surface of .its surface area ( or , more precisely , the -dimensional volume of its surface ) is ( * ? ? ?1 , eq . 19 ) .the dimension of the function will always be clear from the context .we also define to be the spherical shell centered at from radius to . 
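The volume, surface-area and shell expressions just quoted are easiest to handle through logarithms of the Gamma function; the short check below (with arbitrary example values) also makes the familiar point that in high dimension almost all of a ball's volume sits in a thin outer shell.

```python
import numpy as np
from scipy.special import gammaln

def log_ball_volume(n, r):
    """log of Vol(B_n(r)) = pi^(n/2) r^n / Gamma(n/2 + 1)."""
    return 0.5 * n * np.log(np.pi) + n * np.log(r) - gammaln(n / 2 + 1)

def log_shell_volume(n, r_in, r_out):
    """log volume of the spherical shell between radii r_in < r_out."""
    lo, hi = log_ball_volume(n, r_in), log_ball_volume(n, r_out)
    return hi + np.log1p(-np.exp(lo - hi))

n = 1000
frac = np.exp(log_shell_volume(n, 0.99, 1.0) - log_ball_volume(n, 1.0))
print("fraction of the unit ball's volume inside the shell (0.99, 1.0):", frac)
print("log surface area of the unit sphere in R^n:",
      np.log(2.0) + 0.5 * n * np.log(np.pi) - gammaln(n / 2.0))
```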
the angle between two -dimensional vectors and is where takes values between and .let denote the -dimensional cone with its origin at , its center axis running from to , and of half - angle which takes values from to .the solid angle of an -dimensional cone of half - angle is the fraction of surface area that it carves out of an -dimensional sphere , note that the solid angle is the same for any sphere radius .[ l : surfaceratio]the solid angle of a cone with half - angle satisfies see the math leading up to equation 28 in for a proof .in the binary case , the simplest characterization of the optimal codebook is a statistical one : the red alert codeword is the zero vector and the remaining codewords are of a constant composition . from one perspective , this can be visualized as placing the red alert codeword in the `` center '' of the space with the other codewords encircling it ( see figure [ f : topdown ] ) .this corresponds to choosing the red alert codeword to be the all zeros ( or all ones ) vector .the standard codewords are generated using the distribution that maximizes the kl divergence between output distributions while still supporting a rate .while this two - dimensional illustration is quite useful for understanding the binary case , it can be misleading in the gaussian case .specifically , it suggests that we should place the red alert codeword at the origin which turns out to be suboptimal .( -30,-30)(30,30 ) ( 0,0)30 ( 0,0)(-2,-2)(2,2 ) another way of looking at the binary construction is to visualize each fixed composition as a parallel ( or circle of constant latitude ) on a sphere ( see figure [ f : sideview ] ) .that is , the code lives on the hamming cube in dimensions , which can be imagined as a sphere by taking the all zeros and all ones vectors as the two poles and specifying the parallels by their hamming weight . from this viewpoint ,the binary construction sets the red alert codeword to be one of the poles and chooses the remaining codewords on the furthest parallel that can support a codebook of rate .this perspective leads naturally to the right construction for the gaussian case .essentially , the standard codewords are placed uniformly along a constant parallel .this can be achieved by generating the standard codewords using a capacity - achieving code with a fraction of the total power .the red alert codeword is placed at the furthest limit of the red alert power constraint ( e.g. , at ) and the standard codewords are offset in the opposite direction ( e.g. , by ) .see figure [ f : offsetcodebook ] for an illustration .in the high - dimensional limit , most of the codewords will live on a parallel , thus mimicking the binary construction .this scheme leads us to the optimal red alert exponent .[ t : main ] for an awgn channel with red alert power constraint , average power constraint , and rate , the optimal red alert exponent is we prove achievability in lemma [ l : achievable ] and provide a matching upper bound in lemma [ l : converse ] . in the conference version of this paper , we used a different code construction that lead to a smaller achievable red alert exponent .the codewords were generated uniformly on the sphere of radius and we only kept those that fell within a cone of appropriate half - angle .this type of construction turns out not to achieve as dense a packing as the construction used in this paper . in appendix[ a : cone ] , we explore the reasons why this occurs in the binary case . 
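Lemma [ l : surfaceratio ] concerns the solid angle of a cone, i.e. the fraction of a sphere's surface the cone carves out. A convenient working form of that fraction is the standard integral used below (our own restatement, given here only to make the quantity tangible), and a Monte Carlo estimate over random directions provides an independent check.

```python
import numpy as np

# Fraction of the surface of an n-dimensional sphere inside a cone of half-angle theta,
# via  Omega_n(theta) = int_0^theta sin^(n-2)(phi) dphi / int_0^pi sin^(n-2)(phi) dphi,
# evaluated with a midpoint rule, plus a Monte Carlo cross-check.
def solid_angle_fraction(n, theta, m=200000):
    phi = (np.arange(m) + 0.5) * (np.pi / m)      # midpoint grid on (0, pi)
    w = np.sin(phi) ** (n - 2)
    return w[phi <= theta].sum() / w.sum()

def solid_angle_mc(n, theta, samples=100000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, n))         # uniform directions after normalisation
    cos_angle = x[:, 0] / np.linalg.norm(x, axis=1)
    return np.mean(cos_angle >= np.cos(theta))

n, theta = 50, np.deg2rad(75)
print("quadrature:", solid_angle_fraction(n, theta), "  monte carlo:", solid_angle_mc(n, theta))
```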
in appendix[ a : coneexponent ] , we state the achievable red alert exponent for the conical construction .( -50,-15)(55,15 ) ( -50,-2)(-46,2 ) ( 40,0)15 ( -48,0)(0,0 ) ( -24,-4 ) ( 0,0)(40,0 ) ( 12,-4 ) ( 40,0)(40,15 ) ( 44,3 ) ( 0,0)1.3our codebook construction for consists of the following steps : 1 .choose so that .the red alert codeword is placed at the boundary of the red alert power constraint , .3 . choose so that and choose so that 4 .draw codewords i.i.d . according to a gaussian distribution with mean zero and variance .5 . to each of these codewords ,add an offset so that the transmitted codeword for each message ( other than ) is .we will show that this procedure yields a random codebook whose false alarm probability and average probability of error are both less than .afterwards , we will characterize the probability of missed detection for the red alert codeword .this will in turn imply the existence of a good fixed codebook .in this section , we will show that the red alert error exponent stated in theorem [ t : main ] is achievable .we begin by stating useful large deviations bounds that will play a role in both the proof of the achievability and of the converse .next , we show that any standard codeword plus noise lies at a certain distance from the red alert codeword with high probability .afterwards , we argue that , with high probability , any standard codeword plus noise is contained in a cone of a certain half - angle that is centered on the red alert codeword . by combining the distance and angle bounds , we can constrain the decoding region for the standard codewords to the intersection of a cone with a shell .the remainder of can thus be allocated to the decoding region for the red alert codeword , for which we will bound the resulting probability of a missed detection .our upper and lower bounds on the probability of error are proven by deriving bounds on the size and shape of the decoding regions and then applying cramr s theorem to get large deviations bounds .define to be the moment generating function of a random variable , \ , \end{aligned}\ ] ] and to be the fenchel - legendre transform ( * ? ? ?* definition 2.2.2 ) of , \ . \end{aligned}\ ] ] [ t : cramer ] let be the normalized sum of i.i.d .variables with finite mean and rate function .then , for every closed subset , and , for every open subset , see , for instance , ( * ? ? ?* theorem 2.2.3 ) for a proof. we will be particularly interested in how this bound applies to the length of i.i.d .gaussian vectors , which corresponds to setting the to be chi - square random variables ( with one degree of freedom ) .the moment generating function for such random variables is which yields a rate function of .the following lemma formalizes the notion that the squared -norm of an i.i.d .gaussian vector concentrates sharply around its variance .thus , for large , the decoding region can be restricted to a thin spherical shell .[ l : gaussiannorm ] let be a length- vector with i.i.d .zero - mean gaussian entries of variance . 
then , for any , and , for any , see appendix [ a : distanceproofs ] for the proof .recall that the q - function returns the probability that a scalar gaussian random variable with mean zero and unit variance is greater than or equal to , and is upper bounded as the next lemma is about the well - known fact that an i.i.d .gaussian vector is approximately orthogonal to any fixed vector .-shell of power that is offset away from the origin with power .thus , with high probability , any random codeword meets the power constraint.,width=211 ] [ l : gaussianortho ] let be a length- vector with i.i.d .zero - mean gaussian entries with variance and let be a length- vector with for some fixed . then , for any and large enough , see appendix [ a : distanceproofs ] for the proof . in figure[ f : offsetcb ] , the codebook is illustrated from the perspective of the origin . using the above lemma, it can be shown that all but a vanishing fraction of codewords have power close to and are nearly orthogonal with respect to any fixed vector .we now characterize how far away a codeword plus noise is from the red alert codeword with high probability .[ l : distance ] for any and large enough , the distance from the red alert codeword to the codeword for a standard message , , plus noise is at least with high probability , see appendix [ a : distanceproofs ] for the proof .we now upper bound the -dimensional angle between a fixed vector and the same vector plus i.i.d .gaussian noise .[ l : gaussianangle ] let be a length- vector with i.i.d .zero - mean gaussian entries with variance and let be a length- vector with for some fixed . for any and large enough ,the probability that the angle between and exceeds is upper bounded by , see appendix [ a : angleproofs ] for the proof . and the angle between a codeword plus noise and the red alert codeword.,width=259 ] in figure [ f:3dangles4 ] , we have depicted the distance and the angle from the red alert codeword to a standard codeword plus noise .notice that both the noise and the codewords are ( nearly ) orthogonal to the axis along which the red alert codeword lies .now consider a cone centered on the red alert codeword that contains a standard codeword plus noise with high probability .the next lemma upper bounds the required half - angle for the cone .[ l : angle ] let denote the cone centered on the red alert codeword with axis running towards the origin and half - angle . for any , , and large enough, if the half - angle is greater than or equal to then the cone contains the codeword for message plus noise with high probability , i.e. , see appendix [ a : angleproofs ] for the proof .now that we know the decoding region can be confined to a conical shell , we can bound the probability of missed detection for the red alert codeword . [l : achievable ] for any rate , the following red alert exponent is achievable choose . in lemma[ l : distance ] , is a lower bound on the distance between the red alert codeword and a standard codeword plus noise . from lemma [ l : angle ] , we have an upper bound on the half - angle needed to capture a standard codeword plus noise in the cone centered on the red alert codeword . if the received vector lies in the cone and is at least distance from the red alert codeword , then the decoder assumes the red alert message was not transmitted .otherwise , it declares that the red alert message was sent .for large enough , we know that the probability that a random codeword plus noise , , leaves this region is at most . 
therefore , the probability of false alarm ( averaged over the randomness in the codebook ) is upper bounded by .if the received vector falls in the decoding region for standard messages , we simply subtract the offset and apply a maximum likelihood decoder to make an estimate of the transmitted message .since the rate of the codebook is chosen to be slightly less than the capacity ( for the power level ) , it is straightforward to show that the average probability of error for a given message is at most .since the average false alarm probability and average error probability are small , it follows that there exists at least one fixed codebook with a small false alarm probability and average error probability .we now turn to upper bounding the probability of missed detection .assume the red alert codeword is transmitted .define where is specified by step 3 ) of the codebook construction in section [ s : codebook ] .using lemma [ l : gaussiannorm ] , the probability that the noise pushes the red alert codeword further than ( as specified in lemma [ l : distance ] ) can be upper bounded by the probability that the received vector falls into the cone of half - angle is given by the fraction of surface area of a sphere carved out by the cone . using lemma [ l : surfaceratio ] ,this can be calculated as pulling terms into the exponent we get for large enough , we get that the probability is upper bounded by . since the noise is an i.i.d .gaussian vector , its magnitude and direction are independent .therefore , the probability of missed detection is upper bounded as for large enough . for and enough and large enough , the exponent can be made equal to finally , we can solve for in terms of to get . substituting this into the expression above yields the desired result .note that at , the coherent gain , which is the largest benefit we could hope for . at , the coherent gain vanishes .we can interpret our achievability result from a hypothesis testing perspective .let denote the event that a standard codeword is transmitted and let denote the event that the red alert codeword is transmitted . under ,the entries of are i.i.d . according to a gaussian distribution with mean and variance . under ,the entries are i.i.d .gaussian with mean and variance . using the chernoff - stein lemma ( * ? ? ?* theorem 11.8.3 ) , we can bound the missed detection probability of the optimal hypothesis test via the kl divergence between the two distributions , . a bit of calculation will reveal that this kl divergence corresponds exactly to the red alert exponent .one can obtain the same exponent by plugging these distributions into the red alert exponent expression from ( * ? ? ?* lemma 1 ) .however , this does not in itself constitute a proof as the results of are for dmcs without cost constraints .we now develop an upper bound on the red alert exponent .our bound relies on the fact that , in order to recover the standard messages reliably , we must allocate a significant volume of the output space for decoding them , which contributes to the probability of missed detection .an overview of the main steps in the proof is provided below . * in lemma [ l : shell ] , we argue that a constant fraction of the codewords live in a thin shell and strictly satisfy the power and error constraints . 
* with high probability , the standard codewords plus noise are concentrated in a thin shell .lemma [ l : volume ] establishes this fact as well as the minimum volume required for the decoding region to attain a given probability of error . * to minimize the probability of missed detection , we should pack this volume into the thin shell to maximize the distance from the red alert codeword ( see figure [ f : converseregion ] for an illustration ) .lemma [ l : redalert ] bounds the distance and angle from the red alert codeword to the resulting decoding region ( see figure [ f : converse ] for an illustration ) . *finally , in lemma [ l : converse ] , we bound the probability that the noise carries the red alert codeword into the decoding region for the standard codewords .[ l : shell ] assume that a sequence of codebooks satisfies the average block power constraint and has average probability of error that tends to zero .then for any and large enough , there exists a shell of width that contains codewords , each with probability of error at most , and average power at most .see appendix [ a : converseproofs ] for the proof .[ l : volume ] assume that , for some , codewords , each with probability of error at most lie in the shell . then , for large enough , the decoding region for these codewords must include a subset of the noise - inflated shell with volume at least see appendix [ a : converseproofs ] for the proof .[ l : redalert ] assume that a sequence of codebooks has rate and an average probability of error that tends to zero as increases .then , for sufficiently small and large enough , the probability of missed detection is lower bounded by the probability that the noise vector has squared norm between and and lies at an angle between and where consider the standard codewords from a red alert codebook . from lemma[ l : shell ] , for any and large enough , at least codewords with power at most and probability of error at most must lie in a shell for some . from lemma[ l : volume ] , it follows that the decoding region for these codewords falls within the noise - inflated shell and has volume at least .( -52,-44)(53,44 ) ( 25,-44)(55,44 ) ( -50,0)87 - 3232 ( -48,-2)(-52,2 ) ( -50,2)(-40,15 ) ( -40,23)red alert ( -40,18)codeword ( 15,0)38 - 67.967.9 ( 15,0)38 ( -50,0)87 - 23.7 - 15.5 ( -50,0)8715.523.7 ( 15,0)30 - 52.152.1 ( 15,0)3052.1 - 52.1 ( -50,0)87 - 3030 ( -50,0)(37,0 ) ( 0,3) ( 49,0 ) ( 40,-40 ) ( -28,-28)(-15,-15 ) ( -28,-32)noise - inflated shell to get our lower bound , we need to pack this volume in the noise - inflated shell such that it minimizes . 
since the noise vector is i.i.d .gaussian , the probability that the red alert codeword is pushed to a certain point is determined solely by a decreasing function of the distance .let denote the set of all points at distance or greater from the red alert codeword the optimal volume packing corresponds to the intersection of the set and the noise shell + with chosen such that the volume of the set is equal to .let denote the resulting region and see figure [ f : converseregion ] for an illustration .let denote the set of points in that sit at the minimum distance to the red alert codeword , and let be any of these points .let and denote the distance and angle from the red alert codeword to .we now seek to bound these quantities through a bound on the angle from the origin to .( -52,-38)(53,38 ) ( -48,-2)(-52,2 ) ( 15,0)38 - 67.967.9 ( 15,0)38 ( 15,0)30 ( 15,0)38 - 5959 ( -50,0)87 - 23.7 - 15.5 ( -50,0)8715.523.7 ( 15,0)30 - 52.152.1( 15,0)3052.1 - 52.1 ( -50,0)92 - 2020 ( -50,0)89 - 2020 ( -50,0)(34.5,32.8 ) ( -50,0)(34.5,-32.8 ) ( 15,0)1.3 ( -50,0)(45,0 ) ( -50,0)19020.5 ( -28,4) ( 15,0)8059 ( 25,6) ( 0,22.5) ( 20,14 ) ( 29.5,35)(0,0)1.2 ( 30.5,39) ( 34.5,32.8)(0,0)1.2 ( 34.5,36.3) ( 35.7,23.9)(0,0)(3,7 ) ( 35.7,-23.9)(0,-7)(3,0 ) ( 49,0 ) let denote the half - angle of a cone , centered on the origin that contains the region ( and thus includes ). the volume of this cone must be at least equal to that of since is a subset of the noise shell .therefore , is lower bounded by the half - angle of a cone whose volume is equal to the volume of ( see figure [ f : converse ] for an illustration ) . combining ( [ e : spherevol ] ) and lemma [ l : surfaceratio ] , for large enough , the volume of this cone is upper bounded by now , since we require this quantity to exceed , we can lower bound by we can further lower bound by setting to its maximum value , thus , for any , and small enough , and large enough , is lower bounded as follows the distance from to is upper bounded by the distance to a point that lies on the intersection of the outer shell ( at distance from the origin ) and the cone of half - angle . without loss of generality ,assume that the red alert codeword is placed at for some .the direction of the red alert codeword is not important since we will always fill the noise shell relative to this direction .then is at least for any , and small enough , and large enough , this quantity is itself upper bounded by the half - angle of a cone , centered on the red alert codeword , that contains the point is lower bounded by which , for any , , , and small enough , and large enough is itself lower bounded by the probability of missed detection decreases if the distance from to is increased. the angle will simultaneously decrease .thus , by setting , we further lower bound the probability of missed detection . 
using the relation combined with ( [ e: theta ] ) , we find that .plugging in and , we obtain the following upper bound on : and the following lower bound on : finally , it follows that , for small enough ( but greater than for finite ) and large enough , the optimal packing contains all points from squared distance to from the red alert codeword and angle to where and are as in the statement of the theorem .thus , the probability of missed detection is lower bounded by the event that the noise falls into this region .[ l : converse ] for any rate , the red alert exponent is upper bounded by lemma [ l : redalert ] established that the probability of missed detection is lower bounded by the event that the noise has squared length between to and angle between and for some that tends to as tends to infinity .we now lower bound the probability of this event .define since the magnitude and angle of an i.i.d .gaussian vector are independent , the probability of missed detection is lower bounded as follows : by lemma [ l : surfaceratio ] , for large enough , the second term in the product can be lower bounded by which , for large enough , is itself lower bounded by now , substituting in the lower bound on from lemma [ l : redalert ] , we arrive at the following lower bound using the upper bound on from lemma [ l : redalert ] and applying theorem [ t : cramer ] for chi - square random variables ( and noting that and go to zero as goes to infinity ) , it follows that combining this with the lower bound on the angle event in ( [ e : angleexp ] ) , the exponent of the probability of missed detection is lower bounded by desired .db with . an upper bound on the point - to - point awgn error exponentis provided for comparison . ,width=364 ] db with .an upper bound on the point - to - point awgn error exponent is provided for comparison ., width=364 ] in figures [ f : redalertplot1 ] and [ f : redalertplot2 ] , we have plotted the optimal red alert exponent for and , respectively , with red alert power constraints , and . for comparison , we have also plotted an upper bound on the awgn point - to - point error exponent from ( * ? ? ?* equation 4 ) .notice that the red alert exponent can be quite large at capacity , even when the red alert power constraint is equal to the average power constraint .we have developed sharp bounds on the error exponent for distinguishing a single special message from standard messages over an awgn channel . as discussed in the introduction , these bounds can be used to characterize the performance of certain data streaming architectures , where each bit must be decoded after a given delay . an interesting question for future studyis how well a single special message can be protected at a given finite blocklength , i.e. , understanding the limits of unequal error protection in the non - asymptotic regime .the squared euclidean distance is the sum of i.i.d . squared gaussian random variables with variance . therefore , is the sum of i.i.d .chi - square random variables . applying theorem [ t : cramer ] and plugging in the chi - square rate function of , it follows that substituting in yields the first bound and yields the second .first , we write the probability that is greater than in terms of the q - function , substituting yields , which can be driven arbitrarily close to zero for large enough .we simply wish to bound the length of the vector from the special codeword to a standard codeword plus noise , . 
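The norm, orthogonality and distance estimates proved in this appendix are easy to probe by simulation. The sketch below draws an offset codebook in the spirit of section [ s : codebook ], with the power split for the common offset and all other numbers being placeholders of our own choosing, adds noise, and reports how sharply the codeword powers, the inner products with the red-alert direction, and the distances to the red-alert codeword concentrate around their nominal values.

```python
import numpy as np

# Monte Carlo probe of the concentration facts used above, for an offset Gaussian codebook.
# S, S_R, sigma2 and the offset power beta are illustrative assumptions.
rng = np.random.default_rng(0)
n, trials = 1000, 2000
S, S_R, sigma2, beta = 1.0, 4.0, 0.5, 0.2

x_red = -np.sqrt(S_R) * np.ones(n)                        # red-alert codeword at its power limit
W = np.sqrt(S - beta) * rng.standard_normal((trials, n))  # zero-mean parts of sample codewords
X = np.sqrt(beta) + W                                     # offset applied opposite to x_red
Z = np.sqrt(sigma2) * rng.standard_normal((trials, n))    # channel noise
Y = X + Z                                                 # standard codeword plus noise

power = np.mean(X ** 2, axis=1)                           # should concentrate at S
inner = (W @ np.ones(n)) / n                              # <w, 1>/n, should concentrate at 0
dist2 = np.mean((Y - x_red) ** 2, axis=1)                 # squared distance to x_red, per dimension
coherent = (np.sqrt(beta) + np.sqrt(S_R)) ** 2 + (S - beta) + sigma2

print("codeword power   : mean %.4f, std %.4f (target S = %.2f)" % (power.mean(), power.std(), S))
print("<w, 1>/n         : max |.| %.4f (target 0)" % np.abs(inner).max())
print("dist^2/n to x_red: mean %.4f, std %.4f (coherent-gain value %.4f)" %
      (dist2.mean(), dist2.std(), coherent))
```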
by expanding terms, we obtain : the first term is .the second term is the inner product of a fixed vector and an i.i.d .gaussian vector since is an element of a random gaussian codebook .thus , using lemma [ l : gaussianortho ] , it can be shown that the probability that this inner product is less than is at most for large enough .the third term is the squared norm of an i.i.d .gaussian vector with mean zero and variance . from lemma[ l : gaussiannorm ] , it follows that is less than with probability at most for large enough .combining these three bounds completes the proof .the angle between and is from lemma [ l : gaussianortho ] , for any and large enough , the probability that is at most .therefore , since we have that with probability at most . combining lemmas [ l : gaussiannorm ] and [ l : gaussianortho ], we can also show that the probability that is at most for large enough .thus , the probability that is at most .choosing small enough yields the desired result .the angle between the axis of the cone and the standard codeword plus noise is since is a decreasing function of , an upper bound on the angle can be obtained by lower bounding .we will do this by lower bounding the numerator and upper bounding the denominator ( with high probability ) .expanding the numerator yields : the first term is simply .the second term is the inner product of a fixed vector and an i.i.d .gaussian vector .thus , using lemma [ l : gaussianortho ] , it can be shown that for any and large enough , the probability that the second term is less than is at most .the denominator is composed of two terms .the first is simply .following the proof of lemma [ l : distance ] , it can be shown that the second term is greater than with probability at most . combining these bounds, we get that the probability that is less than with probability at most .thus , so long as the half - angle is greater than or equal to the cone contains with probability at least for large enough . applying the trigonometric identity completes the proof .observe that at least one codeword has power at most , otherwise the average will be larger than .if we remove this codeword s contribution from the average , the remaining codewords have average power at most .now , we can find a codeword whose power must be at most .removing this codeword yields an average of . continuing this process, we can remove codewords that each have power at most . by the same argument, we can find codewords that each have probability of error at most .therefore , at least codewords must satisfy both these constraints simultaneously .the selected codewords live in the sphere of radius .we partition this sphere into shells of width each .it follows that at least one of these shells must contain codewords .finally , select large enough so that . [ proof of lemma [ l : volume ] ] assume that one of the codewords from the shell is transmitted .it follows from lemma [ l : gaussiannorm ] , that for any and large enough , the probability that is larger than or smaller than is upper bounded by . 
if the noise lands outside this `` noise shell , '' then we will assume that the transmitted codeword is decoded correctly .however , each codeword still needs to capture probability inside the shell to ensure the error probability does not exceed .now , consider the volume required for decoding a single codeword reliably .since the noise is i.i.d .gaussian , its probability distribution is rotationally invariant .this implies that the shape that uses the least volume to capture a given probability of error is a sphere centered on the codeword .let be the radius of this sphere .by lemma [ l : gaussiannorm ] , if , the probability that the noise falls inside this sphere goes to zero exponentially in which implies the probability of error goes to one .therefore , for large enough , the probability of error will always exceed the desired probability of error ( which is assumed to be bounded away from one ) . using ( [ e : spherevol ] ) , we get that the decoding region of each codeword must have volume at least for any .we find that we will need a volume of at least to reliably decode these codewords .we now develop some intuition for why the offset construction of section [ s : codebook ] is a better construction than the conical construction we used in our earlier work .the difference between these two constructions is easier to understand in a discrete setting so we will analyze the corresponding constructions for a bsc with crossover probability . for ease of analysis , we will calculate rate in bits per channel use ( rather than nats per channel use ) .first , recall that the bsc red alert exponent can be attained using a fixed composition codebook . specifically , each of the codewords is drawn independently and uniformly from the set of weight- binary sequences .if the rate is less than the induced mutual information , the average probability of error can be driven to zero the red alert codeword is taken to be all the all zeros vector . the decoder runs a hypothesis test between the two possible output distributions , bernoulli and bernoulli .the error exponent for the probability of missed detection is the kl divergence between the two distributions , as shown in , this is the optimal red alert exponent .we can construct a _ conical code _ of parameter by first drawing codewordsi.i.d . according to a bernoulli distribution for some .let denote the resulting set of codewords . to guarantee the same red alert exponent , we only keep those codewords with hamming weight or greater and set the red alert codeword to be the all zeros vector .we now bound the rate of this construction . using theorem [ t : cramer ], it can be shown that the probability that a bernoulli sequence has hamming weight at least is upper bounded as take to be the set of subset of codewords in with hamming weight or greater . 
using ( [ e : weightprob ] ) , the expected size of is upper bounded by & \leq 2^{n(c - d(q\|0.5 ) -\epsilon ) } \\ & = 2^{n(1 - h_b(p ) - ( 1 - h_b(q ) ) -\epsilon ) } \\ & = 2^{n(h_b(q ) - h_b(p ) - \epsilon ) } \ .\end{aligned}\ ] ] it can be shown with a chernoff bound that the probability contains significantly more codewords vanishes doubly exponentially in .furthermore , it can be shown that the average probability of error of vanishes with .therefore , the rate of the conical codebook is for a red alert exponent of .now , observe that ( unless either or is equal to ) so , meaning that the rate of the conical construction is strictly less than the ( optimal ) constant composition construction .intuitively , this means that the usual i.i.d .bernoulli construction used to approach capacity does not pack codewords of higher ( or lower ) weights efficiently .constraining the weight of codewords is essential to the hypothesis test that leads to the red alert exponent . the constant composition ( or offset )construction is successful since it optimizes the packing of codewords of a given weight .a similar phenomenon occurs in the awgn setting as shown below .for completeness , we review the awgn conical code that we proposed in and the resulting red alert exponent . the construction is comprised of three main steps : 1 .place the red alert codeword at the limit of the red alert power constraint , .2 . draw codewords i.i.d . according to a gaussian distribution with mean and variance .3 . of these codewords , only keep the first that lie in the cone where .( if there are fewer than such codewords , declare an error . )it can be shown that with high probability the resulting codebook contains codewords inside the cone of half - angle .we now turn to bounding the distance and angle from the standard decoding region to the red alert codeword .the distance can be bounded using the techniques used to prove lemma [ l : distance ] .it follows that for any and large enough , the squared distance from the red alert codeword to a standard codeword plus noise is at least with high probability , substituting in , we get that is equal to similarly , the techniques from lemma [ l : angle ] can be used to bound the angle .let denote the cone centered on the red alert codeword with axis running towards the origin and half - angle .for any and large enough , if the half - angle is larger than then the cone contains the codeword for message plus noise with high probability , i.e. , finally , these two bounds can be combined , as in the proof of lemma [ l : achievable ] , to get an an achievable red alert exponent of , , and with ., width=364 ] in figure [ f : conevsoffset ] , we have plotted this red alert exponent alongside the optimal one derived via the offset construction for average power constaints , and with .the authors would like to thank the anonymous reviewers whose suggestions improved the presentation of this work .they would also like to thank b. nakibolu for insightful discussions on the connections between this work and the discrete memoryless case .b. nakibolu , s. k. gorantla , l. zheng , and t. p. coleman , `` bit - wise unequal error protection for variable length block codes with feedback , '' _ ieee transactions on information theory _ , submitted january 2011 .available online : http://arxiv.org/abs/1101.1934 .s. k. gorantla , b. nakibolu , l. zheng , and t. p. 
coleman , `` bit - wise unequal error protection for variable length block codes with feedback , '' in _ proceedings of the ieee international symposium on information theory ( isit 2010 ) _ , ( austin , tx ) , june 2010 .available online : http://arxiv.org/abs/1101.1934 .s. borade , b. nakibolu , and l. zheng , `` some fundamental limits of unequal error protection , '' in _ proceedings of the ieee international symposium on information theory ( isit 2008 ) _ , ( toronto , canada ) , july 2008 .y. y. shkel and s. c. draper , `` cooperative reliability for streaming multiple access , '' in _ proceedings of the ieee international symposium on information theory ( isit 2010 ) _ , ( austin , tx ) , june 2010 .b. nazer and s. c. draper , `` gaussian red alert exponents : geometry and code constructions , '' in _ 48th annual allerton conference on communications , control , and computing _ , ( monticello , il ) ,september 2010 .
consider the following unequal error protection scenario . one special message , dubbed the `` red alert '' message , is required to have an extremely small probability of missed detection . the remainder of the messages must keep their average probability of error and probability of false alarm below a certain threshold . the goal then is to design a codebook that maximizes the error exponent of the red alert message while ensuring that the average probability of error and probability of false alarm go to zero as the blocklength goes to infinity . this red alert exponent has previously been characterized for discrete memoryless channels . this paper completely characterizes the optimal red alert exponent for additive white gaussian noise channels with block power constraints .
we consider the semilinear singularly perturbed problem , \label{uvod1 } \\y(0)&=0,\ y(1)=0 , \label{uvod2}\end{aligned}\ ] ] where we assume that the nonlinear function is continuously differentiable , i.e. that \times \mathbb{r}\right) ] of width .it is well known that the standard discretization methods for solving are unstable and do not give accurate results when the perturbation parameter is smaller than some critical value . with this in mind , we therefore need to develop a method which produces a numerical solution for the starting problem with a satisfactory value of the error .moreover , we additionally require that the error does not depend on ; in this case we say that the method is uniformly convergent with respect to or -uniformly convergent .numerical solutions of given continuous problems obtained using a -uniformly convergent method satisfy the condition where is the exact solution of the original continuous problem , is the discrete maximum norm , is the number of mesh points that is independent of and is a constant which does not depend on or .we therefore demand that the numerical solution converges to for every value of the perturbation parameter in the domain with respect to the discrete maximum norm the problem has been researched by many authors with various assumptions on .various different difference schemes have been constructed which are uniformly convergent on equidistant meshes as well as schemes on specially constructed , mostly shishkin and bakvhvalov - type meshes , where -uniform convergence of second order has been demonstrated , see e.g. , as well as schemes with -uniform convergence of order greater than two , see e.g. .these difference schemes were usually constructed using the finite difference method and its modifications or collocation methods with polynomial splines .a large number of difference schemes also belongs to the group of exponentially fitted schemes or their uniformly convergent versions .such schemes were mostly used in numerical solving of corresponding linear singularly perturbed boundary value problems on equidistant meshes , see e.g. .less frequently were used for numerical solving of nonlinear singularly perturbed boundary value problems , see e.g. .our present work represents a synthesis of these two approaches , i.e. we want to construct a difference scheme which belongs to the group of exponentially fitted schemes and apply this scheme to a corresponding nonequidistant layer - adapted mesh . the main motivation for constructing such a scheme is obtaining an -uniform convergent method , which will be guaranteed by the layer - adapted mesh , and then further improving the numerical results by using an exponentially fitted scheme .we therefore aim to construct an -uniformly convergent difference scheme on a modified shishkin mesh , using the results on solving linear boundary value problems obtained by roos , oriordan and stynes and green s function for a suitable operator .this paper has the following structure .section [ sec1 ] . provides background information and introduces the main concepts used throughout . in section [ sec2 ] .we construct our difference scheme based on which we generate the system of equations whose solving gives us the numerical solution values at the mesh points .we also prove the existence and uniqueness theorem for the numerical solution . 
in section [ sec3 ] .we construct the mesh , where we use a modified shiskin mesh with a smooth enough generating function in order to discretize the initial problem . in section [ sec4 ] .we show -uniform convergence and its rate . in section [ sec5 ] .we provide some numerical experiments and discuss our results and possible future research . * notation . * throughout this paper we denote by ( sometimes subscripted ) a generic positive constant that may take different values in different formulae , always independent of and .we also ( realistically ) assume that . throughout the paper ,we denote by the usual discrete maximum norm as well as the corresponding matrix norm .consider the differential equation ( [ uvod1 ] ) in an equivalent form , \label{konst1}\ ] ] where and is a chosen constant . in order to obtain a difference scheme needed to calculate the numerical solution of the boundary value problem , using an arbitrary mesh we construct a solution of the following boundary value problem for it is clear that ,\ i=0,1,\ldots , n-1. ] the solution of is given by ,\ ] ] where is the green s function associated with the operator on the interval ] .it follows from the boundary conditions ( [ konst32 ] ) that hence , the solution of on ] , , we have that , for using this in differentiating ( [ konst7 ] ) , we get that +y_{i+1}\left[-\left ( u_i^{ii}\right)'(x_i ) \right ] & \\ & = \dfrac{\partial}{\partial x}\left[\int_{x_i}^{x_{i+1}}{g_i(x , s)\psi(s , y(s))ds}-\int_{x_{i-1}}^{x_{i}}{g_{i-1}(x , s)\psi(s , y(s))ds } \right]_{x = x_i}. & \label{konst10}\end{aligned}\ ] ] since we have that equation becomes ,\end{aligned}\ ] ] for and .we can not in general explicitly compute the integrals on the rhs of ( [ konst13 ] ) . in order to get a simple enough difference scheme , we approximate the function on \cup \left[x_i , x_{i+1 } \right] ] the non - zero elements of this tridiagonal matrix are h_0,0=&h_n , n=1 , + h_i , i=&0 , + h_i , i+1=&>0 , where hence is an matrix .moreover , is an since & |h_i , i|-|h_i , i-1|-|h_i-1,i| = & + & 4 m . & consequently using hadamard s theorem ( see e.g. theorem 5.3.10 from ) , we get that is an homeomorphism .since clearly is non - empty and is the only image of the mapping , we have that has a unique solution .+ the proof of second part of the theorem [ teo21 ] is based on a part of the proof of theorem 3 from .we have that for some .therefore and finally due to inequality we have that the solution of the problem changes rapidly near and , the mesh has to be refined there .various meshes have been proposed by various authors .the most frequently analyzed are the exponentially graded meshes of bakhvalov , see , and piecewise uniform meshes of shishkin , see . herewe use the smoothed shishkin mesh from and we construct it as follows . let be the number of mesh points and and are mesh parameters . define the shishkin mesh transition point by let us chose for simplicity in representation , we assume that , as otherwise the problem can be analyzed in the classical way .we shall also assume that is an integer .this is easily achieved by choosing and divisible by 4 for example .the mesh is generated by with the mesh generating function ,\\ p(t - q)^3 + \frac{\lambda}{q}t & t\in[q,1/2],\\ 1-\varphi(1-t ) & t\in[1/2,1 ] , \end{array } \right . \label{mreza2}\ ] ] where chosen such that i.e. 
+ note that ] we have that [ teorema2 ] [ remark2 ] note that for ] these inequalities and the estimate imply that the analysis of the error value can be done on the part of the mesh which corresponds to ] but with the omision of the function and using the inequality from here on in we use and where we begin with a lemma that will be used further on in the proof on the uniform convergence . on the part of the modified shishkin mesh where ] , assuming that , for we have the following estimate [ lema1 ] we are using the decomposition from theorem [ teorema2 ] and expansions , . for the regular component we have that & ( + ) ^-1 |-| + & |r_i | + & + | | + || . [15 ] & first we want to estimate the expressions containing only the first derivatives in the rhs of inequality ( [ 15 ] ) . from the identity and the inequalities , , we get the inequality , which yields that , using inequality ( [ 17 ] ) together with ( [ regularna ] ) , we get that now we want to estimate the terms containing the second derivatives from the rhs of ( [ 15 ] ) . using inequality ( [ regularna ] ) , after some simplification , we get that for the layer component , first we have that & ( + ) ^-1 |-| & & + & | | & & + & + | | . & & [ 22 ] the first term of the rhs of ( [ 22 ] ) can be bounded by & || & + & ^2| s(^-_i)+s(^+_i)| . [ 23 ] for the second term of the rhs of ( [ 22 ] ) we get that & | | + & |s_i-1-s_i| + |s_i - s_i+1| , [ 24 ] in the first expression of the rhs of ( [ 24 ] ) we have the term although this ratio is bounded by , this quotient is not bounded for when this is why we are going to estimate this expression separately on the transition part and on the nonequidistant part of the mesh . in the case , using the fact that and and the fact that the function takes values from the interval when , we have that when , we can use and + for therefore using equations ( [ mreza31 ] ) , ( [ mreza32 ] ) and ( [ 18])([27 ] ) , we complete the proof of the lemma .now we state the main theorem on convergence of our difference scheme and specially chosen layer - adapted mesh . the discrete problem on the mesh from section [ sec2 ] .is uniformly convergent with respect to and where is the solution of the problem ( [ uvod1 ] ) , is the corresponding numerical solution of ( [ konst18a]) and is a constant independent of and .we shall use the technique from , i.e. since we have stability from theorem [ teo21 ] , we have that and since implies that , it only remains to estimate .let .the discrete problem ( [ konst18a]) can be written down on this part of the mesh in the following form f_0y= & 0 , & & + f_iy = & ( 3a_i+d_i+d_i+1 ) ( y_i-1-y_i ) -(y_i - y_i+1 ) + & . - ( d_i+d_i+1 ) ] + = & + = & ( 2 + 2(h_i))(y_i-1-y_i-(y_i - y_i+1 ) ) + & .-2((h_i)-1 ) ] , for . using the expansions and, we get that f_iy = & + & = & , for and hence for . + now let we rewrite equations ( [ konst18a]) as f_iy= & ( d_i+d_i+1)(y_i-1-y_i-(y_i - y_i+1 ) ) & + & + 4 ( a_i(y_i-1-y_i)-a_i+1(y_i - y_i+1 ) ) + & - .(d_i+d_i+1 ) ] .we estimate the linear and the nonlinear term separately . for the nonlinear termwe get & | ( d_i+d_i+1)| & & + & = |f(x_i-1,y_i-1)+2f(x_i , y_i)+f(x_i+1,y_i+1)|=^2|y_i-1 + 2y_i+y_i+1| . for the linear term we get & | ( d_i+d_i+1)+4| + & | y_i-1-y_i-(y_i - y_i+1)|+ |a_i(y_i-1-y_i)-a_i+1(y_i - y_i+1 ) | .[ teo8 ] for the first term in the rhs of ( [ teo8 ] ) we get while for the second term in the rhs of ( [ teo8 ] ) , using and , we get that hence , we get that for . 
+ the proof for is analogous to the case and the proof for is analogous to the case in view of remark [ remark2 ] and lemma [ lema1 ] . finally , the case is simply shown since , and for $ ] .in this section we present numerical results to confirm the uniform accuracy of the discrete problem ( [ konst18a]). to demonstrate the efficiency of the method , we present two examples having boundary layers . the problems from our examples have known exact solutions , so we calculate as where is the value of the numerical solutions at the mesh point , where the mesh has subintervals , and is the value of the exact solution at .the rate of convergence ord is calculated using where tables 1 and 2 give the numerical results for our two examples and we can see that the theoretical and experimental results match . consider the following problem , see the exact solution of this problem is given by [ primjer1 ] the nonlinear system was solved using the initial condition and the value of the constant . in the analysis of examples 1 and 2 from section [ sec5 ] and the corresponding result tables , we can observe the robustness of the constructed difference scheme , even for small values of the perturbation parameter .note that the results presented in tables 1 and 2 already suggest -uniform convergence of second order .the presented method can be used in order to construct schemes of convergence order greater than two . in constructing such schemes ,the corresponding analysis should not be more difficult that the analysis for our constructed difference scheme . in the case of constructing schemes for solving a two - dimensional singularly perturbed boundary value problem , if one does not take care that functions of two variables do not appear during the scheme construction , the analysis should not be substantially more difficult then for our constructed scheme .in such a case it would be enough to separate the expressions with the same variables and the analysis is reduced to the previously done one - dimensional analysis .the authors are grateful to nermin okii ' c and elvis barakovi ' c for helpful advice .helena zarin is supported by the ministry of education and science of the republic of serbia under grant no .174030 .28 zh.vychisl .mat i mat .* 9 * , 841859 ( in russian ) ( 1969 ) proceedings of the 7th international conference on operational research , croatian or society , 197208 ( 1999 ). numer.math . * 56 * , 675693 ( 1990 ) novi sad j.math . * 33 * , 2 , 173180 ( 2003 ) j.math .novi sad * 33 * 1 , 145162 ( 2003 ) novi sad j. math . , * 28 * 3 , 4149 ( 1998 ) linss , t. : layer - adapted meshes for reaction - convection - diffusion problems , springer - verlag , berlin , heidelberg ( 2010 ) numer. math . * 43 * , 175198 ( 1984 ) mathematics of computation , * 47 * 176 , 555570 ( 1986 ) .siam , philadelphia , usa ( 2000 ) journal of computational and applied mathematics * 29 * , 69 - 77 ( 1990 ) sov.j.numer.anal.math.modelling * 3 * 393407 ( 1988 ) numer . math . * 50 * , 519531 ( 1987 ) mathematics of computation , * 65 * 215 , 10851109 ( 1996 ) indian j.pure appl . math ., * 27 * 10 , 10051016 ( 1996 ) review of research faculty of science - university of novi sad , * 13 * ( 1983 ) review of research faculty of science - university of novi sad , * 23 * 2 , 363379 ( 1993 ) cmam * 4 * , 368383 ( 2004 ) novi sad journal of mathematics , * 33 * 1 , 133143 ( 2003 )
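As a complement to the reported tables, the following sketch shows how the quantities discussed in the numerical-results section can be computed from a numerical solution: the discrete maximum-norm error E_N against the exact solution at the mesh points, and an estimated order of convergence from errors on N and 2N subintervals. The solver is left abstract (solve(eps, N) below is a hypothetical routine), and the paper's ord formula may contain an additional logarithmic factor appropriate for Shishkin meshes, which is not reproduced here.

import numpy as np

def max_norm_error(x, y_num, y_exact):
    """E_N = max_i |y(x_i) - y_i^N| in the discrete maximum norm; y_exact is a callable."""
    return np.max(np.abs(y_exact(x) - y_num))

def observed_order(E_N, E_2N):
    """Plain dyadic estimate log2(E_N / E_2N); a Shishkin-adjusted formula would
    replace log(2) by a ratio of logarithmic mesh factors."""
    return np.log(E_N / E_2N) / np.log(2.0)

# usage with a hypothetical solver solve(eps, N) -> (mesh, values) and known y_exact:
# errors = {N: max_norm_error(*solve(eps, N), y_exact) for N in (64, 128, 256)}
# print(observed_order(errors[64], errors[128]), observed_order(errors[128], errors[256]))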
in this paper we consider a semilinear singularly perturbed reaction - diffusion boundary value problem , which contains a small perturbation parameter acting on the highest order derivative . we construct a difference scheme on an arbitrary nonequidistant mesh using a collocation method and green 's function . we show that the constructed difference scheme has a unique solution and that the scheme is stable . the central result of the paper is -uniform convergence of almost second order for the discrete approximate solution on a modified shishkin mesh . we finally provide two numerical examples which illustrate the theoretical results on the uniform accuracy of the discrete problem , as well as the robustness of the method .
second order leapfrog or splitting methods are a class of widely used time - symmetric , explicit and symplectic integration algorithms for hamiltonian systems .these characteristics make them a standard tool for very long integrations , as they preserve the phase space structure and first integrals of the system .being time - symmetric , second order leapfrog algorithms have an error expansion that only contains even powers of the timestep .this fact makes them convenient for use within extrapolation schemes , such as the gragg bulirsch stoer ( gbs ) method , which are often used when very high accuracy is required .the main problem with leapfrog methods is the fact that they can only be constructed for systems where the hamiltonian separates into two or more parts , where the flow of each part can be separately integrated .a solution to this problem for partitioned systems of the type was presented in . by means of auxiliary velocitycoordinates , the equations of motion were transformed into a separable form and thus amenable for integration with a leapfrog method .the method , called auxiliary velocity algorithm ( ava ) , can also be used for nonconservative systems as well .in this paper we propose an improved extension of the ava method , applicable for hamiltonian and non - hamiltonian cases where all equations of motion depend on both coordinates and momenta in general .we first briefly introduce leapfrog integration methods , and outline their properties .next , we demonstrate how the phase space of general hamiltonian systems can be extended and a new hamiltonian constructed so that the equations of motion are brought into a separated form .we then construct symmetric leapfrog integrators for the equations .these include maps that mix the extended phase space , which we find to be a requirement for good long term behaviour .finally , we investigate how the extended phase space can be projected back to the original number of dimensions so that extra accuracy can be gained in the process .we then show how the same principle can be applied to nonconservative systems as well .we apply the obtained leapfrog methods to illustrative example cases : hamiltonian geodesic flow , and a forced van der pol oscillator .in many applications to classical physics , such as gravitational interaction of point masses , the hamiltonian function of the system can be separated into two parts where is the kinetic energy , and is the potential energy . in these cases ,the hamiltonian equations of motion read where .the equations for coordinates can then be directly integrated , if the momenta are kept constant , and vice versa .the solutions can be combined in a time - symmetric manner to obtain the two forms of the archetypal second order leapfrog , also known as the strmer verlet method , or strang splitting : and where , , and is the timestep .equations and can also be written as and where , and are the hamiltonian vector fields of and , is the phase space flow along the vector field , is the symplectic form given in local coordinates , is the identity matrix and is the exponential mapping from a lie algebra to the corresponding lie group . 
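As a concrete illustration of the two equivalent forms of the second-order leapfrog above, the sketch below implements the kick-drift-kick variant for a separable Hamiltonian, under the additional assumption (made only so that the drift is explicit) that the kinetic energy is quadratic, T(p) = p.p/2; the harmonic-oscillator demo is arbitrary.

import numpy as np

def verlet_step(q, p, h, grad_V):
    """One kick-drift-kick (Stormer-Verlet) step for H(q, p) = p.p/2 + V(q)."""
    p = p - 0.5 * h * grad_V(q)   # half kick: exact flow of V
    q = q + h * p                 # drift: exact flow of T
    p = p - 0.5 * h * grad_V(q)   # half kick
    return q, p

# demo: harmonic oscillator, V(q) = q.q/2
grad_V = lambda q: q
q, p, h = np.array([1.0]), np.array([0.0]), 0.1
for _ in range(1000):
    q, p = verlet_step(q, p, h, grad_V)
print(0.5 * p @ p + 0.5 * q @ q)   # stays close to the initial energy 0.5, with no secular drift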
here the lie algebra is the algebra of smooth , real - valued functions on the phase space , with the lie product given by the poisson brackets .the group action ( written multiplicatively in equations and ) on the phase space manifold of the corresponding lie group is the phase space flow of the associated vector field .now , a reverse application of the baker campbell hausdorff ( bch ) formula on equation yields \\ & \quad\quad= \exp(h{\widehat{h } } ) , \end{split}\ ] ] where then and similarly for with .equations - are of interest for a number of reasons .first , the flows of hamiltonian vector fields are symplectic transformations for smooth functions , and thus preserve the geometric structure of the phase space , and all first integrals .since and are smooth , the leapfrog method has these properties as well .equation shows on the other hand that leapfrog integrators exactly solve a hamiltonian problem that is asymptotically related to the original one , with a perturbed hamiltonian .another desirable property of the second order leapfrogs is ( relatively ) easy composition of the basic second order method to yield methods of higher order .if is the numerical flow of a time - symmetric second order leapfrog , then can be shown to be a method of higher order for certain choices of and , with for time - symmetric methods such as the second order leapfrog .one particular example is the sixth order composition from ( composition ` s9odr6a ` in the paper ) , which we will use in section [ sc:5.2 ] .the second order leapfrog is also useful when used within an extrapolation scheme , such as the gbs scheme .using an extrapolation scheme does in principle destroy the desirable properties of the leapfrog , since the substeps require a change in timestep which destroys symplecticity , and the final linear combination of the symplectic maps is also not symplectic in general . in practice , the increase in accuracy per computational work spent often offsets this . for a comprehensive review of splitting methods in contexts not limited to hamiltonian ordinary differential equations ( odes ) , see , for geometric integration methods in general , see , and for extrapolation and other conventional methods for general odes , see .in the general case , the hamiltonian does not separate into additive parts , and the approach of the previous section can not be used .we can partially circumvent this problem in the following manner .we first extend the phase space ( essentially , when always operating in local coordinates ) by joining it with another identical copy , giving an extended phase space . 
in localcoordinates , the symplectic form on this phase space is where is the symplectic form on and is the kronecker product .we then introduce a new hamiltonian on this extended phase space with where now , equal to the hamiltonian function of the original system .hamilton s equations for the two parts of this split hamiltonian are then [ eq : h1 ] and [ eq : h2 ] we see that the derivatives of and depend only on and and vice versa , and the equations can be integrated to yield the actions of and .we can now apply the results from the previous section to find where , which gives the leapfrog algorithms [ eq : lf212 ] and [ eq : lf121 ] over one timestep .the leapfrog methods and then exactly solve the hamiltonian flows of the related hamiltonians if we now consider a hamiltonian initial value problem , with the initial values and set , , we see that the equations and give identical derivatives and identical evolution for both pairs and , equal to the flow of the original hamiltonian system .the numerical leapfrog solutions and then solve closely related hamiltonian problems given by and .we can write the problem in the form where the component vector fields are hamiltonian . we also define operators , , and similarly for and , where is a translation along the vector field . from thiswe see that we can also construct split solutions like where the last equality defines a shorthand notation based on the symmetry of the operator .pairs and commute , so e.g. original and auxiliary variables are also interchangeable , since they initially have the same values . as such , the only unique symmetric second order leapfrog compositions are ( with the above shorthand ) [ eq : all_lfs ] up to a reordering of commuting operators , and switching . from the leapfrogs ,the only one symplectic in the extended phase space is , as well as its conjugate method .it is _ not _ , however symplectic as a mapping within the original phase space if the final are obtained from by any pairwise choice .however , operating in the extended phase space and projecting only to obtain outputs , without altering the state in the extended phase space , leads to an algorithm that preserves the original hamiltonian with no secular growth in the error .this is not entirely surprising , since the algorithms have similarities with partitioned multistep methods , which can exhibit good long term behaviour despite a formal lack of symplecticity . by virtue of being leapfrogs , the algorithms are also explicit , time - symmetric , have error expansions that contain only even powers of the timestep and can be sequentially applied to form higher order methods .the idea behind equation can be generalised further .introducing yet another copy of the phase space with coordinates leads to a hamiltonian of the form .\ ] ] the hamiltonian gives three sets of separable equations of motion that can be integrated with a leapfrog method . as with the leapfrogs , only the leapfrogs obtained from sequential applications of the flows of the different hamiltonians in equation give a method symplectic in the extended phase space .again , no simple pairwise choice of the extended phase space variables leads to a method that is symplectic in the original phase space . 
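The construction above can be stated compactly in code. The flows of the two halves H1(q, y) = H(q, y) and H2(x, p) = H(x, p) are exact, because each half leaves its own arguments unchanged, and their Strang composition gives the extended-phase-space leapfrog. The test Hamiltonian below is an arbitrary inseparable example, and, as the text discusses, this bare version contains no mixing maps, so the two copies may slowly drift apart.

def flow_H1(q, p, x, y, h, Hq, Hp):
    """Exact flow of H1(q, y): only p and x change, with (q, y) held fixed."""
    return q, p - h * Hq(q, y), x + h * Hp(q, y), y

def flow_H2(q, p, x, y, h, Hq, Hp):
    """Exact flow of H2(x, p): only q and y change, with (x, p) held fixed."""
    return q + h * Hp(x, p), p, x, y - h * Hq(x, p)

def leapfrog_ext(q, p, x, y, h, Hq, Hp):
    """Strang composition: half step of H1, full step of H2, half step of H1."""
    q, p, x, y = flow_H1(q, p, x, y, 0.5 * h, Hq, Hp)
    q, p, x, y = flow_H2(q, p, x, y, h, Hq, Hp)
    return flow_H1(q, p, x, y, 0.5 * h, Hq, Hp)

# demo on an arbitrary inseparable Hamiltonian H(q, p) = (q^2 + 1)(p^2 + 1)/2
H  = lambda q, p: 0.5 * (q * q + 1.0) * (p * p + 1.0)
Hq = lambda q, p: q * (p * p + 1.0)
Hp = lambda q, p: p * (q * q + 1.0)

q = x = 0.5
p = y = 0.0
for _ in range(1000):
    q, p, x, y = leapfrog_ext(q, p, x, y, 0.01, Hq, Hp)
print(H(q, p), H(x, y))   # energies of the two copies; they start equal but may drift apart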
in general , using sets of variables in total , with a phase space , one can use a hamiltonian of the form where where is a cyclic permutation of the indexes of the momenta .the equations of motion of any can be integrated with a leapfrog , as is the case for their symmetric combination .however , we will not investigate this general case in the paper . while equations and can be integrated with a leapfrog ,the fact that they are coupled only through the derivatives is a problem .the hamiltonian vector field of one solution at a point depends on the other and vice versa , but not on the solution itself .since the two numerical flows will not in general agree with each other or the exact flow , the derivative function for one numerical flow at one point will end up depending on a different point of the other solution , and both solutions may diverge with time .this problem is quickly confirmed by numerical experiments . an seemingly straightforward solution for this problem would be to introduce feedback between the two solutions , in the form of _ mixing maps _ , .we now amend the leapfrogs to obtain , e.g. so that the resulting algorithm is still symmetric , since at the last step , can be subsumed into the projection map described below .if are symplectic , then the leapfrogs that are symplectic on the extended phase space , i.e. , retain this character .there is no need to restrict to strictly symplectic maps , however , since the extended phase space leapfrogs are not symplectic when restricted to the original phase space in any case . without this restriction ,potentially attractive candidates might stem from e.g. symmetries of the extended hamiltonian and its exact solution .for example , for , permutations of coordinates , , or momenta , , do not change the exact solution for given ( equal ) initial conditions . permuting both switchesthe component hamiltonians , but since they are equal , this has no effect for the exact solution .we will find that for the numerical method , the permutations can be beneficial .a related problem is how to project a vector in extended phase space back to the dimension of the original .this should be done in a manner that minimizes the error in the obtained coordinates , momenta and original hamiltonian .in addition , the final algorithm should be symplectic , or as close to symplectic as possible . to this end, we introduce a _ projection map _ . in principle, the projection map could be used in two different ways .the first is to obtain formal outputs at desired intervals , while the algorithm always runs in the extended phase space .the second is to use the projection map after each step , and then copy the projected values to create the extended set of variables for the next step .the final algorithm over steps would then be either of where is function composition , and is the cloning map . it should be emphasized that in equation , the projection map is included only to obtain the formal output after steps , while the current state is preserved , and used to continue the integration .in contrast , the algorithm can evidently be considered as a mapping in the original phase space , and as such represents a conventional method such as a partitioned runge kutta method , as seen in section [ sc:3.2 ] .unfortunately , for hamiltonian problems , it leads to secular increase of the error in the original hamiltonian . in the hamiltonian case, it seems that operating in the extended phase space is necessary , and as such , is to be used . 
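A sketch of how the mixing and projection maps enter the algorithm is given below. The mixing map here permutes the two momentum copies, one of the symmetry-based choices mentioned in the text, and the projection simply averages the copies when output is required; both are illustrative choices rather than the specific coefficients the paper finds optimal, and the step is the two-copy leapfrog from the previous sketch written inline.

def step(q, p, x, y, h, Hq, Hp):
    """One extended-phase-space leapfrog step followed by a mixing map."""
    p -= 0.5 * h * Hq(q, y); x += 0.5 * h * Hp(q, y)
    q += h * Hp(x, p);       y -= h * Hq(x, p)
    p -= 0.5 * h * Hq(q, y); x += 0.5 * h * Hp(q, y)
    return q, y, x, p        # mixing: permute the two momentum copies (illustrative choice)

def project(q, p, x, y):
    """Projection used only to read off output; plain averaging is an assumption."""
    return 0.5 * (q + x), 0.5 * (p + y)

# demo on the same inseparable H(q, p) = (q^2 + 1)(p^2 + 1)/2
H  = lambda q, p: 0.5 * (q * q + 1.0) * (p * p + 1.0)
Hq = lambda q, p: q * (p * p + 1.0)
Hp = lambda q, p: p * (q * q + 1.0)

q = x = 0.5
p = y = 0.0
for _ in range(10000):
    q, p, x, y = step(q, p, x, y, 0.01, Hq, Hp)
qo, po = project(q, p, x, y)
print(abs(H(qo, po) - H(0.5, 0.0)))   # energy error of the projected state after 10000 steps

Note that the integration itself keeps running in the extended phase space; the projection only produces output, as in the second of the two algorithm forms discussed above.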
to take a first stab at determining a suitable choice for and , we turn to analyzing the error in the hamiltonian function and deviations from symplecticity . we start the search of suitable candidates for and from symmetric linear maps of the form and where , and all matrix elements are real .we then look for the coefficients that give the best results according to conservation of hamiltonian function or symplecticity .we can thus use , say , take two steps to make and independent , and expand in terms of , where , and look at the coefficients . in this examplecase , the zeroth order coefficient of reads with to make the cofficient identically zero , we need to have for all the maps and , which makes them linear interpolations between the original and auxiliary coordinates and momenta . with the substitutions , also the first order coefficient of becomes identically zerothe same substitution zeroes the coefficients of up to and including the second order .the third order coefficient of becomes independent of the map matrix elements , and as such we will focus on the expansion of .the second order coefficient is ,\ ] ] where and the derivatives are -linear operators where , , and juxtaposition in implies contraction .from here , there are several choices available .we will consider some interesting combinations .choosing makes the second order term identically zero .taking and then makes also the third order term be identically zero , and as such apparently gives an additional order of accuracy compared to the standard leaprog .however , despite this , these choices lead to poor long term behaviour , which numerically is quickly apparent .if and , so that the momenta are permuted , then choosing will also identically zero the second order coefficient . in this case , the third order coefficient ca nt be brought to zero , but the numerical long term behaviour found in the problem of section [ sc:5.1 ] is good , if is chosen so that there is a permutation symmetry between coordinates and momenta .numerically , the best results were obtained with a choice , and , .this necessitates to zero the second order coefficient .we conclude that the long term behaviour of the method is not evident from considering the conservation of the hamiltonian alone .in addition to the conservation of the hamiltonian , we are interested in the conservation of the symplectic form , or the symplecticity of the method . in local coordinates ,the condition for symplecticity of an integration method over one step is where is the jacobian of the map .we consider first symplecticity in the extended phase space .it is clear that if and are identity maps , then the method is symplectic .however , we know that this does not lead to good numerical results in the long term . 
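The symplecticity condition J^T Omega J = Omega quoted above can also be tested numerically for any one-step map by differencing the map; a generic sketch follows, demonstrated on the exact flow of a harmonic oscillator, which is known to be symplectic. The sign convention for Omega, the finite-difference increment and all names are arbitrary choices, and the same routine can be pointed at any candidate integrator step.

import numpy as np

def symplecticity_defect(step, z, h, delta=1e-6):
    """Finite-difference Jacobian J of a one-step map z -> step(z, h) and the
    maximum entry of J^T Omega J - Omega at the phase-space point z = (q..., p...)."""
    n = z.size
    J = np.empty((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = delta
        J[:, k] = (step(z + e, h) - step(z - e, h)) / (2.0 * delta)
    m = n // 2
    Omega = np.block([[np.zeros((m, m)), np.eye(m)],
                      [-np.eye(m), np.zeros((m, m))]])   # one common sign convention
    return np.max(np.abs(J.T @ Omega @ J - Omega))

# demo: the exact harmonic-oscillator flow, a symplectic map
def rotate(z, h):
    q, p = z
    return np.array([q * np.cos(h) + p * np.sin(h),
                     -q * np.sin(h) + p * np.cos(h)])

print(symplecticity_defect(rotate, np.array([0.3, -0.2]), 0.1))   # close to zero: the map is symplectic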
to investigate other possibilities , we again apply the method for two steps and expand the left side of equation in terms of .the first order term gives two independent conditions \\& \quad\quad\times\left[- ( { { \beta } _ { { m}_1}}(1 - 2 { { \beta } _ { { m}_2}})^2 ) + { { \beta } _ { { m}_1}}^2(1 - 2 { { \beta } _ { { m}_2}})^2 + ( -1 + { { \beta } _ { { m}_2 } } ) { { \beta } _ { { m}_2}}\right ] \\ & \quad + \left[1 - 2 { { \alpha } _ { { m}_1}}(1 - 2 { { \alpha } _ { { m}_2}})^2 + 2 { { \alpha } _ { { m}_1}}^2(1 - 2 { { \alpha } _ { { m}_2}})^2 - 2 { { \alpha } _ { { m}_2 } } + 2 { { \alpha } _ { { m}_2}}^2\right ] \\ & \quad\quad\times\left[1 - 2 { { \beta } _ { { m}_1}}(1 - 2 { { \beta } _ { { m}_2}})^2 + 2 { { \beta } _ { { m}_1}}^2(1 - 2 { { \beta } _ { { m}_2}})^2 - 2 { { \beta } _ { { m}_2 } } + 2 { { \beta } _ { { m}_2}}^2\right ] = 1 \end{split } \label{eq : scond01}\\ \begin{split } & 2\left[1 - 2 { { \alpha } _ { { m}_1}}(1 - 2 { { \alpha } _ { { m}_2}})^2 + 2 { { \alpha } _ { { m}_1}}^2(1 - 2 { { \alpha } _ { { m}_2}})^2 - 2 { { \alpha } _ { { m}_2 } } + 2 { { \alpha } _ { { m}_2}}^2\right ] \\ & \quad\quad\times\left[- ( { { \beta } _ { { m}_1}}(1 - 2 { { \beta } _ { { m}_2}})^2 ) + { { \beta } _ { { m}_1}}^2(1 - 2 { { \beta } _ { { m}_2}})^2 + ( -1 + { { \beta } _ { { m}_2 } } ) { { \beta } _ { { m}_2}}\right ] \\ & \quad + 2\left[- ( { { \alpha } _ { { m}_1}}(1 - 2 { { \alpha } _ { { m}_2}})^2 ) + { { \alpha } _ { { m}_1}}^2(1 - 2 { { \alpha } _ { { m}_2}})^2 + ( -1 + { { \alpha } _ { { m}_2 } } ) { { \alpha } _ { { m}_2}}\right ] \\ & \quad\quad\times\left[1 - 2 { { \beta } _ { { m}_1}}(1 - 2 { { \beta } _ { { m}_2}})^2 + 2 { { \beta } _ { { m}_1}}^2(1 - 2 { { \beta } _ { { m}_2}})^2 - 2 { { \beta } _ { { m}_2 } } + 2 { { \beta } _ { { m}_2}}^2\right ] = 0 .\end{split}\label{eq : scond02}\end{aligned}\ ] ] solving any coefficient from these requires that none of the others is . as such, simply averaging the extended variable pairs leads to destruction of symplecticity in the extended phase space already in the zeroth order . on the other hand, any combination of makes equations and identically true .of these , combinations with are exactly symplectic , since the corresponding maps are .other combinations give a non - zero cofficient at second order .to investigate symplecticity in the original phase space , we append the projection map . in this case , the zeroth and first order terms are zero independely of the coefficients of the maps and .the second order term is a cumbersomely lengthy function of the map components and derivatives of the hamiltonian .however , it is reduced to zero by those substitutions from that have or . in this case , if we also put , the first non - zero term is of the fifth order . 
in the casethat or , setting does zero out the second order term , as well as the third order term , but not the fourth .the combination of identity maps with and , shown above to conserve hamiltonian to an extra order of accuracy , leads to a non - zero error already in the third order .the method , which gives the best results for the application in section [ sc:5.1 ] in both accuracy and long term behaviour , is the one with , and .interestingly , these choices give a non - zero contribution already at second order , and this is not affected by subsituting in the specific choice of hamiltonian for the problem .many common integration methods can be written in some general formulation as well , which can give additional insight to the numerical behaviour of the algorithm . here , we will consider partitioned runge - kutta ( prk ) methods .prk methods form a very general class of algorithms for solving a partitioned system of differential equations where , , and may be vector valued . a partitioned runge kutta algorithm for system can be written as where , and , are the coefficients of two ( possibly different ) runge kutta methods , respectively .the second order leapfrog can be written as a prk algorithm using the coefficients in table [ tb : lfcoeff ] .if and are functions of only and , respectively , the resulting algorithm is explicit . in the extended phase spacethe equations of motion and can be written as with , and .if the maps are identity maps , the leapfrogs and can be written as prk algorithms with coefficients from table [ tb : lfcoeff ] as well .if this is not the case , the final result for will involve both and , and similarly for , but in general during the integration , at the beginning of any given step .the resulting and will both also involve a mix of and , as well .this can not be obtained with a prk scheme .if , however , , we can write a prk scheme for the _ first _ step , where , and also . the resulting coefficients are listed in table [ tb : ephcoeff ] ..the butcher tableaus for the second order leapfrog as a partitioned runge kutta system . [ cols="^,^,^ " , ] to assess the long term behaviour of the methods , we did another integration for 3000 orbital periods , and investigated the error in the conservation of the hamiltonian .figure [ fig : geo_hami ] shows the absolute error for the first 10 orbits and the maximum error up to a given time during the integration for the whole run .while the lsode method is very accurate , we see that it eventually displays a secular power law increase in the maximum error with time , typical for nonsymplectic algorithms . the symplectic implicit midpoint method shows no secular growth in the error , and neither does the method with either projection .this result is not completely unexpected , since symmetric non - symplectic methods can also display behaviour similar to symplectic ones , particularly for quadratic hamiltonians , such as in this case . for the method with ( solid line ) and ( dotted line ) , implicit midpoint method ( dashed line ) , and the lsode method ( dash - dotted line ) ._ left : _ absolute error , for first orbits ._ right : _ maximum absolute error up to given time during integration . 
note the different -axis scaling ., title="fig:",scaledwidth=50.0% ] for the method with ( solid line ) and ( dotted line ) , implicit midpoint method ( dashed line ) , and the lsode method ( dash - dotted line ) ._ left : _ absolute error , for first orbits ._ right : _ maximum absolute error up to given time during integration . note the different -axis scaling ., title="fig:",scaledwidth=50.0% ] for this particular problem , the method with produces excellent results .the resulting orbit is much more closely aligned with the lsode solution than the orbit given by a basic symplectic integrator , the implicit midpoint method , and there is no secular growth in the error of the hamiltonian .it is notable also that since the method is explicit , the number of vector field evaluations required and thereby the computing time used is much less than for the implicit midpoint method , or the lsode method . as such , the could be used with a smaller timestep to obtain even better results relative to the other methods .we test the extended phase space method also on a non - conservative system , the forced van der pol oscillator which can be written in the equivalent form where parametrizes the non - linearity of the system , and are the amplitude and period of the sinusoidal forcing .the van der pol oscillator is essentially a damped non - linear oscillator that exhibits a limit cycle . for our test , we set , and ; a choice for which the oscillator is known to exhibit chaotic behaviour . as initial conditions, we take . to integrate the system, we use split systems of types ( method 1 in the following ) ( method 2 ) , in the same symmetric shorthand as in equations .method 1 is of type while method 2 is of type . for both methods ,we employ the 6th order composition coefficients from to yield a 6th order method . in this case, we use the method , and set so that after one composited step , the original and auxiliary variables are averaged .for the mixing maps , we take and . as in the previous section ,we compare these methods to the implicit midpoint method , iterated to precision in relative error , and the lsode solver with a relative accuracy parameter of and absolute accuracy parameter of .we propagate the system until using a timestep .figure [ fig : vdp_orbits ] shows the numerical orbits of the four methods in the phase space .the behaviour of the system is characterized by slow dwell near points , and quick progression along the curved paths when not . in figure[ fig : vdp_errors ] we have plotted the maximum absolute errors in and up to given time with respect to the lsode method .we find that all the methods display a secular growth in the coordinate errors , with method 1 performing best , followed by method 2 and the implicit midpoint method .methods 1 and 2 show similar qualitative behaviour as the lsode solution , while the midpoint method shows a clear divergence .these results needs to be contrasted with the amounts of vector field evaluations , which are ( method 1 ) , ( method 2 ) , ( implicit midpoint ) and ( lsode ) .the number of evaluations is roughly similar for each method , and as such method 1 is rather clearly the best of the constant timestep methods , with lsode likely the best overall . , and , integrated until , with method 1 ( solid line , top left ) , method 2 ( dotted line , top right ) , the lsode method ( dash - dotted line , bottom left ) , and the implicit midpoint method ( dashed line , bottom right ) . 
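To indicate how the splitting carries over to the non-Hamiltonian case, the sketch below integrates a forced van der Pol oscillator with a two-copy splitting of the autonomized first-order system z' = f(z), z = (t, x, v): each half-step reads only the other copy, which it leaves unchanged, so both halves are exactly integrable, and the copies are averaged after each step. The parameter values, the initial condition, the decision to autonomize time and the averaging are placeholders chosen only to make the sketch runnable; the paper's method-1 and method-2 splittings and its sixth-order composition are not reproduced here.

import numpy as np

MU, AMP, PER = 4.0, 1.0, 10.0   # placeholder forcing parameters, not the paper's values

def f(z):
    """Autonomized forced van der Pol field, z = (t, x, v)."""
    t, x, v = z
    return np.array([1.0, v, MU * (1.0 - x * x) * v - x + AMP * np.sin(2.0 * np.pi * t / PER)])

def split_step(z, w, h):
    """Strang splitting of the doubled system z' = f(w), w' = f(z); each half is exact."""
    z = z + 0.5 * h * f(w)
    w = w + h * f(z)
    z = z + 0.5 * h * f(w)
    zm = 0.5 * (z + w)          # mixing: average the two copies after the step
    return zm, zm.copy()

z = w = np.array([0.0, 1.0, 0.0])   # placeholder initial condition (t, x, v)
h = 1e-3
for _ in range(int(50.0 / h)):
    z, w = split_step(z, w, h)
print(z)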
, scaledwidth=100.0% ]perhaps the most obvious benefit of splitting methods is that they re explicit , which eliminates all problems inherent in having to find iterative solutions .specifically , if evaluating the vector field , or parts of it , is very computationally intensive , then explicit methods are to be preferred , as they only require a single evaluation of the vector field for each step of the algorithm .leapfrog methods can also be composited with large degrees of freedom , making it possible to optimize the method used for the specific task at hand .more subtly , when the problem separates to asymmetric kinetic and potential parts , different algorithmic regularization schemes can be used to yield very powerful ( even exact ) , yet simple integrators . however , for the case when the hamiltonian remains unspecified , algorithmic regularization seems to yield little benefit , since both parts of the new hamiltonian are essentially identical .this is problematic , since the leapfrog integration of an inseparable hamiltonian typically leads to wrong results only when the system is in `` difficult '' regions of the phase space , such as near the schwarzschild radius for the case in section [ sc:5.1 ] , where the derivatives have very large numerical values and the expansions - may not converge for the chosen value of timestep .this is exactly the problem that algorithmic regularization solves , and it would be greatly beneficial if such a scheme could be employed even for the artificial splitting of the hamiltonian in . despite the lack of algorithmic regularization , the extended phase space methods seem promising .the results in section [ sc:5.1 ] demonstrate that the extended phase space methods can give results comparable to an established differential equation solver , lsode , but with less computational work .more importantly , the results are superior to a known symplectic method , the implicit midpoint method .the results in section [ sc:5.2 ] are less conclusive with respect to the lsode method , but clear superiority versus the implicit midpoint method is still evident .we find this encouraging , and believe that the extended phase space methods should be investigated further .obvious candidate for further research is the best possible form and use of the mixing and projection maps .the optimal result is likely problem dependent .another issue that would benefit from investigation is how to find algorithmic regularization schemes for the split , preferably with as loose constraints on the form of the original hamiltonian as possible .finally , whether useful integrators can be obtained from the splits of types - should be investigated .we have presented a way to construct splitting method integrators for hamiltonian problems where the hamiltonian is inseparable , by introducing a copy of the original phase space , and a new hamiltonian which leads to equations of motion that can be directly integrated .we have also shown how the phase space extension can be used to construct similar leapfrogs for general problems that can be reduced to a system of first order differential equations .we have then implemented various examples of the new leapfrogs , including a higher order composition .these methods have then been applied to the problem of geodesics in a curved space and a non - linear , non - conservative forced oscillator . 
with these examples , we have demonstrated that utilizing both the auxiliary and original variables in deriving the final result , via the mixing and projection maps , instead of discarding one pair as is done in and , can yield better results than established methods , such as the implicit midpoint method .the new methods share some of the benefits of the standard leapfrog methods in that they re explicit , time - symmetric and only depend on the state of the system during the previous step . for a hamiltonian problem of the type in section [ sc:5.1 ]they also have no secular growth in the error in the hamiltonian . however , the extended phase space methods leave large degrees of freedom in how to mix the variables in the extended phase space , and how to project them back to the original dimension . assuch , there is likely room for further improvement in this direction , as well as in the possibility of deriving a working algorithmic regularization scheme for these methods . in conclusion , we find the extended phase space methods to be an interesting class of numerical integrators , especially for hamiltonian problems .hairer , e. , lubich , c. , wanner , g. : geometric numerical integration : structure - preserving algorithms for ordinary differential equations , springer series in computational mathematics , vol 31 .springer , berlin ( 2006 ) radhakrishnan , k. , hindmarsh , a. : description and use of lsode , the livermore solver for ordinary differential equations .nasa , office of management , scientific and technical information program ( 1993 )
we present a method for explicit leapfrog integration of inseparable hamiltonian systems by means of an extended phase space . a suitably defined new hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog ( splitting ) methods . when the leapfrog is combined with coordinate mixing transformations , the resulting algorithm shows good long term stability and error behaviour . we extend the method to non - hamiltonian problems as well , and investigate optimal methods of projecting the extended phase space back to original dimension . finally , we apply the methods to a hamiltonian problem of geodesics in a curved space , and a non - hamiltonian problem of a forced non - linear oscillator . we compare the performance of the methods to a general purpose differential equation solver lsode , and the implicit midpoint method , a symplectic one - step method . we find the extended phase space methods to compare favorably to both for the hamiltonian problem , and to the implicit midpoint method in the case of the non - linear oscillator .
a binary matrix satisfies the consecutive - ones property ( c1p ) if its columns can be ordered in such a way that , in each row , all 1 entries appear consecutively .the c1p has been studied in relation to a wide range of problems , from theoretical computer science to genome mapping ( see and references there ) .the c1p can be naturally described in terms of covering hypergraph edges by walks .assume a binary matrix is the incidence matrix of a hypergraph , where columns represent vertices and rows encode edges ; then is c1p if and only if can be covered by a path that contains all vertices and where every edge appears as a contiguous subpath .deciding if a binary matrix is c1p can be done in linear time and space ( see and references there ) .if a matrix is not c1p , a natural approach is to remove the smallest number of rows from this matrix in such a way that the resulting matrix is c1p .this problem , equivalent to an edge - deletion problem on hypergraphs that solves the hamiltonian path problem , is np - complete , although fixed - parameter tractability ( fpt ) results have recently been published . at a high level of abstraction ,genome assembly problems can be seen as graph or hypergraph covering problems : vertices represent small genomic sequences , edges encode co - localisation information , and one wishes to cover the hypergraph with a set of linear walks ( or circular walks for genomes with circular chromosomes ) that respect co - localisation information .such walks encode the order of elements along chromosomal segments of the assembled genome .one of the major issues in genome assembly problems concerns _repeats_- genomic elements that appear , up to limited changes , in several locations in the genome being assembled .such repeats are known to confuse assembly algorithms and to introduce ambiguity in assemblies .modeling repeats in graph theoretical models of genome assembly can be done by associating to each vertex a _ multiplicity _ : the multiplicity of a vertex is an upper bound on the number of occurrences of this vertex in linear / circular walks that cover the hypergraph , and thus a vertex with a multiplicity greater than can traversed several times in these walks ( _ i.e. _ , encodes a repeat as defined above ) .this hypergraph covering problem naturally translates into a variant of the c1p , called the c1p with multiplicity ( mc1p ) that received little attention until recently , when it was investigated in several recent papers in relation to assembling ancestral genomes that describee both hardness and tractability results for decision and edge - deletion problems . 
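The consecutive-ones property itself is easy to state operationally; the brute-force check below, which tries every column order, is meant only to make the definition concrete on tiny instances, since it is exponential in the number of columns, whereas the linear-time PQ-tree algorithms cited above are what one would use in practice.

from itertools import permutations

def is_c1p(matrix):
    """Brute-force C1P test: try every column order and require that the 1s of
    each row occupy consecutive positions.  For illustration on tiny matrices only."""
    rows = [tuple(r) for r in matrix]
    n_cols = len(rows[0]) if rows else 0
    for order in permutations(range(n_cols)):
        ok = True
        for r in rows:
            ones = [i for i, c in enumerate(order) if r[c] == 1]
            if ones and ones[-1] - ones[0] + 1 != len(ones):
                ok = False
                break
        if ok:
            return True
    return False

print(is_c1p([[1, 1, 0], [0, 1, 1]]))              # True
print(is_c1p([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))   # False: every ordering leaves one row split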
in the present paper , we formalize the previously studied c1p and mc1p notions in terms of _ covering of assembly hypergraphs _ by linear and circular walks and edge - deletion problems ( section [ sec : preliminaries ] ) .next , we describe new tractability results for decision and edge - deletion problems ( section [ sec : results ] ) : we show that deciding if a given assembly hypergraph admits a covering by linear and circular walks that respects the multiplicity of all vertices is fpt and we describe polynomial time algorithms for decision and edge - deletion problems for families of assembly hypergraphs which encode information allowing us to clear ambiguities due to repeats .we conclude with several open questions ( section [ sec : conclusion ] ) .[ def : hypergraph ] an _ assembly hypergraph _ is a quadruple where is a hypergraph and are three mappings such that , , where is either a sequence on the alphabet where each element appears at least once , or ( the empty sequence ) . from now , we consider that , , , , , .a vertex such that is called a _ repeat _ ; is the set of repeats and .edges s.t . are called _ adjacencies _ ; from now , without loss of generality , we assume that if is an adjacency .edges s.t . ( resp . ) are called _ intervals _ ( resp . _triples _ ) .we denote the set of adjacencies ( resp .weights of adjacencies ) by ( resp . ) and the set of intervals ( resp .weights of intervals ) by ( resp . ) .an interval is _ ordered _if ; an assembly graph with no ordered interval is _unordered_. from now , unless explicitly specified , our assembly hypergraphs will be unordered and unweighted .we call the _ multiplicity _ of .[ def : adjacencygraph ] an assembly hypergraph with no interval is an _adjacency graph_. given an assembly hypergraph , we denote its _ induced adjacency graph _ by for every , as adjacencies are unordered . ] .[ def : compatibility ] let be an assembly hypergraph and ( resp . ) a linear ( resp .circular ) sequence on the alphabet .an unordered interval is _ compatible _ with ( resp . ) if there is a contiguous subsequence of ( resp . ) whose content is equal to .an ordered interval is compatible with ( resp . ) if there exists a contiguous subsequence of ( resp . ) equal to or its mirror .[ def : assembly ] an assembly hypergraph admits a _ linear assembly _ ( resp . _ mixed assembly _ ) if there exists a set of linear sequences ( resp .linear and/or circular sequences ) on such that every edge is compatible with at least one sequence of , and every vertex appears at most times in .the weight of an assembly is .an assembly as defined above can naturally be seen as a set of walks ( some possibly closed in mixed assemblies ) on such that every edge of is traversed by a contiguous subwalk . in the following , we consider two kinds of algorithmic problems that we investigate for different families of assembly hypergraphs and genome models , a decision problem and an edge - deletion problem . * the _ assembly decision problem _ : given an assembly hypergraph and a genome model ( linear or mixed ) , does there exist an assembly of in this model ? * the _ assembly maximum edge compatibility problem _ : given an assembly hypergraph and a genome model , compute a maximum weight subset of such that the assembly hypergraph admits an assembly in this model .[ def : repeatcluster ] let be an assembly hypergraphmaximal repeat cluster _ is a connected component of the hypergraph whose vertex set is and edge set is . 
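The compatibility notion of definition [def:compatibility] can be phrased directly as code. The sketch below reads the "content" of a window as its vertex set and only handles linear sequences (a circular sequence would additionally require checking windows that wrap around); both are simplifying readings made for illustration, and all names are mine.

def compatible_unordered(edge, seq):
    """Unordered interval: some contiguous window of seq has exactly the vertex set of edge."""
    target = set(edge)
    n = len(seq)
    return any(set(seq[i:j]) == target for i in range(n) for j in range(i + 1, n + 1))

def compatible_ordered(order, seq):
    """Ordered interval: the sequence 'order' or its mirror occurs contiguously in seq."""
    s, o, r = list(seq), list(order), list(order)[::-1]
    k = len(o)
    return any(s[i:i + k] == o or s[i:i + k] == r for i in range(len(s) - k + 1))

print(compatible_unordered({'a', 'b', 'c'}, ['d', 'a', 'c', 'b', 'e']))   # True
print(compatible_ordered(['a', 'c', 'b'], ['d', 'a', 'c', 'b', 'e']))     # True
print(compatible_ordered(['a', 'b', 'c'], ['d', 'a', 'c', 'b', 'e']))     # False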
as outlined in the introduction , vertices in an assembly hypergraphrepresent genomic elements , each with an associated copy number , while edges and their order ( for intervals ) encode hypothetical co - localisation information , each with an associated weight .linear and/or circular sequences of vertices defining an assembly represent the order of these genomic elements along chromosomal segments , the circular ones representing circular chromosomes .a maximal repeat cluster encodes a group of elements that are believed to appear in several locations of the genome to assemble , although different occurrences might differ in terms of content and/or order ( see for example ) .such repeated structures cause ambiguity in genome assemblies based solely on adjacencies ; for example , if , with and , and , then there are essentially three possible linear assemblies ( ) , while adding the ordered interval leads to a single possible assembly .when no repeats are allowed ( ) , the assembly decision problem in the linear genome model is equivalent to asking if a binary matrix has the c1p , which can be solved in time and space .the set of all linear assemblies can be encoded into a compact data structure , the _ pq - tree_. in the mixed genome model , the problem can also be solved in linear time , as it reduces to testing the circular c1p for every connected component of the overlap graph of the matrix . the _ pc - tree _ , a slightly modified pq - tree , can be used to encode all mixed genome assemblies .we summarize some of these results in the following theorem and refer to for a survey on these questions .[ thm : cis1 ] the assembly decision problem can be solved in time and space when , in the linear and mixed genome models . in the linear genome model ,the assembly maximum edge compatibility problem is hard for adjacency graphs it solves the problem of computing a set of paths that cover a maximum number of edges of the graph butfpt results have recently appeared .tractability results are less general when repeats are allowed , as shown below .[ thm : adadjacencies1]_ _ ( 1 ) the assembly decision problem can be solved in time and space for adjacency graphs ( ) in the linear and mixed genome models .( 2 ) in both genome models , the assembly decision problem is np - hard if and .the principle of the proof for ( 1 ) is that an adjacency graph admits a valid assembly if and only if every vertex has at most neighbours and , in the linear model , if every connected component satisfies .this result , combined with the use of pq - trees on the assembly hypergraph without its repeats , can be extended slightly in the linear genome model .[ thm : adadjacencies2]_ _ the assembly decision problem can be solved in polynomial time and space in the linear genome model for unordered assembly hypergraphs where , for every edge containing a repeat , either is an adjacency or is an interval that contains a single repeat and there exists an edge .finally , to the best of our knowledge , the following is the only tractability result for edge - deletion problems when repeats are allowed , limited to adjacency graphs and the mixed genome model .[ thm : amedadjacencies1]_ _ ( 1 ) the assembly maximum edge compatibility problem can be solved in polynomial time and space in the mixed genome model for adjacency graphs ( ) .( 2 ) the assembly maximum edge compatibility problem is np - hard in the mixed genome model if , even if .we first show that the assembly decision problem is fpt with respect to parameters and 
.then we describe positive results for the case where the induced adjacency graph is assumed to admit an assembly and specific families of intervals are added to clear ambiguities caused by repeats .we discuss the practical implications of our positive results at the end of the section . the assembly decision problem can be solved in space and time in the linear and mixed genome models .the principle of the proof is , for the given assembly hypergraph and here as the weight does not impact decision problems and we deal with unordered hypergraphs .so , we eliminate both mappings from our notation . ] , to build another assembly hypergraph such that for all , by making copies of each and considering each possible set of choices of 2 neighbors for each of these copies . can then be checked for the existence of an assembly with theorem [ thm : cis1 ] . the sets of choicesare made in such a way that has an assembly if and only if , for at least one of these sets of choices , has an assembly . finally ,if and are fixed , we prove that there is a fixed number of such sets .let be the set of copies we shall introduce for each ( and ) , be the _ neighborhood _ of in , that is the set of vertices belonging to edges containing , and be the `` new neighborhood '' from which we choose neighbors for vertices in .we represent each set of possible choices of 2 neighbors of each with a mapping , where .let be the collection of these mappings ( itself a mapping where ) .we can now state the full algorithm as follows . 1 .for each , make copies of , which defines the set .let .2 . for each , choose neighbours from , thus defining for every .this also defines as the collection of mappings over all .3 . construct a new assembly hypergraph with , for all , and defined as follows : ( 1 ) for each , , for some , add and to ( -edges ) and ( 2 ) for each , add an edge containing .4 . for each to a vertex of , let be the unique path in s.t . and all of to for each such that .5 . use theorem [ thm : cis1 ] on .output yes and exit if admits an assembly in the chosen genome modeliterate over all possible sets of neighbour choices in step 2 .output no if no admits an assembly in the chosen genome model .[ [ algorithm - correctness . ] ] algorithm correctness .+ + + + + + + + + + + + + + + + + + + + + + the premise for the algorithm is the following claim , which we state and prove below . has an assembly if and only if , for some , has an assembly .[ lem : mapping ] first , if has the assembly , in , we replace each occurrence of a vertex by copies where .let this new assembly be called .each such is adjacent to at most 2 other distinct vertices .we consider the mapping which maps each such to its two neighbours in this assembly .if we can establish that the hypergraph obtained from this mapping and the new edges we introduce admits as an assembly , we are done . to decide if has an assembly , we first note that any set of covering walks on is a set of paths ( we can not visit the same vertex twice because for all ) . since is a covering walk of , by splitting the vertices of into distinct copies ,we ensure that no vertex of is visited twice by .now , let us look at the set of edges .if all of them are covered as contiguous subsequences in , we are done .we show this by the following observations . 1 . in , every edge occurs as a contiguous subsequence .let be the edge in corresponding to .then , by definition of , must occur in it as a contiguous subsequence .2 . 
for each for some , we defined using the assembly .so , we definitely get both adjacencies in .so , must be an assembly for , which implies that has an assembly .conversely , if the graph has an assembly , it contains all vertices , and occurrences of each for all repeat vertices . if we remove the subscripts , _i.e. _ , becomes for all , we get an assembly , which we claim is an assembly for , as will have the following properties . 1 .every vertex appears at least once , and at most times .2 . for every edge only of vertices in , we get a contiguous occurrence of , which is the corresponding edge in .3 . for every edge , such that for some , there is an edge such that has two neighbours and . in this case, we get a contiguous occurrence of including .removing the subscripts gives us a contiguous occurrence of in the new assembly .so , contains occurrences of every edge in as contiguous subsequences , which proves that is an assembly for .this proves the claim .this proof holds for both genome models as theorem [ thm : cis1 ] considers them both .[ [ algorithm - complexity . ] ] algorithm complexity .+ + + + + + + + + + + + + + + + + + + + + the space complexity follows obviously from the construction of .the choice of neighbours can be made in at most ways for each new vertex .so , in total , we get at most possible mappings .the procedure on each can be done in time , since we just need to check its neighbours , which are at most .doing so for all vertices in takes time at most .the final step , checking for the existence of an assembly for a given , can be done in time , since we add at most new edges , and new vertices .now , we assume we are given an assembly hypergraph whose induced adjacency graph is known to have a mixed assembly . to state our result ,we extend slightly the notion of compatibility : an unordered interval is said to be _ compatible _ with if there exists a walk in whose vertex set is exactly .we consider the interval compatibility problem defined below . the _ assembly maximum interval compatibility problem _ : given an assembly hypergraph such that admits a mixed assembly , compute a maximum weight subset of , , such that admits a mixed assembly .[ thm : maxcompatibility1 ] let be a weighted assembly hypergraph such that admits a mixed genome assembly , and each interval is a triple containing at most one repeat and compatible with .the assembly maximum interval compatibility problem in the mixed genome model can be solved for in linear space and time .the proof proceeds in two stages : we first show that repeat - free triples , as well as triples whose non - repeat vertices form an adjacency , must always be included in a maximum weight compatible set of triples .then , we present an algorithm which uses the adjacency compatibility algorithm of mauch et al . to decide which of the remaining triples to include . from now, we denote by a maximum weight subset of such that admits a mixed assembly .if a triple satisfies , with , then . as assumed to be compatible with by hypothesis , there is a walk on these three vertices in . 
as a walk on three non - repeat verticesis a path , w.l.o.g we assume that the adjacencies in the path are and ( the argument holds by symmetry for the other cases ) .then , in any mixed assembly of , in order to contain both adjacencies , and to make sure that appears exactly once in the assembly , the assembly must contain , in the order .so , it must be included in , as is a maximum weight subset of .if a triple satisfies , with , and , then . for the triple to be compatible with , needs to be adjacent to at least one of and .assume , w.l.o.g , that .if admits a mixed assembly , both and must occur in a path or a cycle .furthermore , since , these two adjacencies must occur in the same path or cycle , in the order .this is an occurrence of as a contiguous sequence , which implies that such a triple must occur in every assembly of , and must be included in .we are now left with the set of triples such that is a repeat and , which means that is adjacent to both and , and we need to find a maximum weight subset of triples of this form . to do this, we rely on the optimal edge - deletion algorithm designed by mauch et al . for adjacency graphs as shown below . 1 .initialize an empty set and .2 . for every : a. add an adjacency to , label with the triple , and set . b. remove and from , if present .3 . for every remaining adjacency ,set .4 . apply the linearization algorithm ( theorem [ thm : amedadjacencies1 ] ) on .add the triples corresponding to the labels of the adjacencies from retained by the linearization algorithm to ._ algorithm correctness ._ given a triple with a repeat vertex and no adjacency , we consider a candidate mixed assembly of containing the elements of contiguously . in such an assembly , we would encounter the consecutive substring .we can contract this substring and label the newly formed adjacency , signifying that there is a path of length between and which passes through and contains no other vertices , _i.e. _ , it encodes the triple .so , we construct the new assembly hypergraph ( an adjacency graph ) by deleting the adjacencies and and encoding the path containing into the adjacency added to . the optimal edge - deletion algorithm from a maximum weight set of adjacencies such that the assembly graph has a mixed assembly , where and are the restrictions of and to . in this assembly, we can replace every by the corresponding triple and the two corresponding adjacencies from .note that none of the adjacencies from are discarded during linearization since they are weighted so that discarding any one would be suboptimal when compared to discarding the entire set of adjacencies from .so the assembly obtained by this process will contain all the edges from , as well as a maximum weight set such that every is present .this implies that we computed a maximum weight compatible set of triples from ._ algorithm complexity ._ checking the compatibility of a triple with can be done in constant time , since we just need a -step graph search from any vertex , and proceed until we find a path connecting all vertices in .we can also check the number of repeats in in constant time . 
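A sketch of the contraction step in the algorithm above: each remaining triple {x, u, y} with repeat u becomes a labelled adjacency {x, y} carrying the triple's weight, the two adjacencies through u are removed, and every untouched adjacency is given a weight larger than the total weight of the labelled ones so that the subsequent linearization never discards it. The final step, the maximum-weight-matching-based linearization from the cited work, is not reproduced here, and all identifier names are mine.

def contract_triples(adjacencies, triples, weight):
    """Build the auxiliary adjacency instance used in the algorithm above.

    adjacencies: set of frozensets {a, b}
    triples:     list of (x, u, y) with u the repeat adjacent to both x and y
    weight:      dict mapping each triple to its weight
    """
    new_adj = set(adjacencies)
    labels, new_w = {}, {}
    total = sum(weight.values())
    for (x, u, y) in triples:
        e = frozenset((x, y))
        new_adj.add(e)
        labels[e] = (x, u, y)          # remember which triple this adjacency encodes
        new_w[e] = weight[(x, u, y)]
        new_adj.discard(frozenset((x, u)))
        new_adj.discard(frozenset((u, y)))
    for e in new_adj:
        if e not in new_w:
            new_w[e] = 1 + total       # heavy enough that these edges are never dropped
    return new_adj, labels, new_w      # then run the cited linearization on this instance

adj = {frozenset(p) for p in [("a", "r"), ("r", "b"), ("c", "r"), ("r", "d")]}
tri = [("a", "r", "b"), ("c", "r", "d")]
print(contract_triples(adj, tri, {tri[0]: 1, tri[1]: 2}))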
to deal with triples from the set , the new assembly hypergraph can obviously be constructed in time and space , and contains vertices and edges .so the optimal edge - deletion algorithm is the main component of the process , and is based on a maximum weight matching algorithm of time complexity .related to this theorem , we have the following corollary .[ thm : maxcompatibility2 ] let be an assembly hypergraph such that admits a mixed genome assembly , maximal repeat clusters are all of size , and each interval is an unordered compatible triple .the assembly maximum interval compatibility problem in the mixed genome model can be solved for in linear space and time .we already know that we can find a maximal weight compatible subset if there is no containing more than repeat .we now show that for the current problem , a triple , where and are repeats , and , can also be included in the set if it is compatible with .note that and can not have an adjacency between them , since the size of a maximal cluster can not exceed .so , for to be compatible , the corresponding adjacencies will be and . for to have a mixed assembly which contains both adjacencies , the assembly must contain in the order .this is a contiguous appearance of the elements of , and it must occur in every mixed assembly .it can thus be included in .theorem [ thm : maxcompatibility1 ] concludes the proof .[ def : repeatinterval ] let be an assembly hypergraph .an interval is an _ ordered repeat spanning interval _ for a maximal repeat cluster if with , and , where is a sequence on the set , containing every element at least once .the subset of ordered repeat spanning intervals in is denoted by [ thm : repeat_spanning_compatibility ] let be an assembly hypergraph such that every repeat is either contained in an adjacency , or it is contained in an interval of one of the following forms . 1 . is an ordered repeat spanning interval . is the only repeat in , , and .the assembly decision problem in the linear genome model can be solved for in polynomial time and space .the basic idea of the proof is to realize the sequence for every repeat spanning interval by creating unique copies of the repeats in and decreasing the multiplicity accordingly .this leads to an assembly graph that can then be checked using theorem [ thm : adadjacencies2 ] .formally we define an extended assembly hypergraph , , as follows ( we omit from the notation , since we are addressing a decision problem ) .1 . , , , , .2 . for every repeat spanning interval .1 . let , possibly for ( the are repeats ) .2 . for from to 1 . add a unique vertex to , with multiplicity , 2 .add an adjacency to for , 3 .decrease by 1 . 3 .add edges and to .4 . if the adjacencies and are present , add them to .3 . check if the assembly hypergraph , admits a linear genome assembly using theorem [ thm : adadjacencies2 ] . admits a valid genome assembly in the linear genome model if and only if for every repeat and admits one .assume admits an assembly . by construction , every repeat of maps to a subset of composed of and the vertices added when reading occurrences of in the ordered repeat spanning intervals of . for a repeat ,let be this subset of and the inverse map . 
by construction, the adjacencies added to when reading the order of an interval , when the inverse map is applied to their vertices , define a walk in corresponding exactly to , which allows us to unambiguously translate the set of linear walks on defining into a set of linear walks on .this implies that every edge of is compatible with ( as defined in def .[ def : compatibility ] ) , and we only need to consider potential problems caused by multiplicities .assume that for every repeat one has and that for every , appears at most times in an assembly of , _i.e. _ , exactly time , since for all . for a vertex that , by construction , so an assembly of also satisfies the constraints of an assembly of for . for a repeat , the number of occurrences of elements of in is at most . by construction , , so assuming that implies that the constraint on is satisfied in the linear walks on .now , consider admits an assembly in the linear genome model . by definition , for every repeat spanning interval , appears as a walk in . by replacing the repeats in such a walk by new vertices with multiplicity done in step 2.b of the algorithm above , one clearly obtains an assembly for , and the identity ensures that . __ the polynomial time and space complexity follows from theorem [ thm : adadjacencies2 ] , since the the construction of results in an assembly hypergraph with the structure in which no two repeats are contained in an interval ( the repeat spanning intervals being resolved ) , and if an interval contains a repeat , there exists an edge in , since we added them directly from .the following corollary follows easily from the previous theorem .[ cor : repeatspanning ] let be an assembly hypergraph such that each interval is an ordered repeat spanning interval .the assembly decision problem in the mixed and linear genome models can be solved for in time and space .we make the same construction as in theorem [ thm : repeat_spanning_compatibility ] . 
the extended assembly graph we create now is composed entirely of adjacencies , since .an application of theorem [ thm : adadjacencies1 ] completes the proof .the time and space complexities follow immediately from the linear time and space complexities stated in theorem [ thm : adadjacencies1 ] and from the size of .the results above have interesting practical implications that we outline now .first , corollary [ cor : repeatspanning ] shows that , if provided with ordered repeat spanning intervals , one can check for the existence of an assembly in both genome models .ordered repeat spanning intervals can be obtained in practice in several ways , such as mapping the elements of onto related genomes or long reads ( see appendix for more details ) .the tractability of the assembly decision problem , with linear time and space complexities , makes it possible to combine it with the tractability result of theorem [ thm : amedadjacencies1 ] to select a subset of adjacencies , followed by a greedy heuristic for the assembly maximum interval compatibility problem .note also that the condition on the unordered intervals in the statement of theorem [ thm : repeat_spanning_compatibility ] allows one to account for the important notion of _ telomeres _ .regarding theorem [ thm : maxcompatibility1 ] , it can be used to partially clear the ambiguities caused by repeats in assembly hypergraphs where triples are obtained from mate - pairs of reads from sequencing libraries defined with inserts of length greater than the length of repeats .if all maximal repeat clusters are `` collapsed '' into a single vertex ( with the maximum multiplicity among all initial repeats of the cluster ) , such mate - pairs spanning repeat clusters define the triples .solving the assembly maximum interval compatibility problem allows us to specify the locations of the different occurrences of the spanned repeat clusters in the assembled genome , thus resolving part of the ambiguity due to repeats and leaving only the internal structure of each repeat cluster ( content and order ) unresolved .in the present work , we presented a set of positive results on some hypergraph covering problems motivated by genome assembly questions . to the best of our knowledge , these are the first such results for handling repeats in assembly problems in an edge - deletion approach , as previous results focused on superstring approaches , and these new methods have been applied on real data .moreover , the initial results we presented suggest several open problems .first , our results about triples assume that they are compatible with ( _ i.e. _ , appear as walks in ) ; we conjecture that similar positive results can be obtained when relaxing this condition ( in particular when triple elements might not appear in the same connected component ) .next , our edge - deletion positive results assume that admits a genome assembly , and only intervals are considered for being deleted .this leads to a two - stage assembly process where adjacencies are deleted first , followed by intervals .it remains open to see if both adjacencies and limited families of intervals can be considered jointly . 
also of interestwould be to see if the size of maximal repeat clusters or of intervals can be used as parameters for fpt results .regarding repeat - spanning intervals , it can be asked if one can relax the total order structure to account for uncertainty ; for example , if they are defined from the comparison of pairs of related genomes , it might happen that specific rearrangements lead to conserved genome segments that can be described by partial orders , which opens the question of solving the assembly decision problem with partial orders to describe repeat - spanning intervals .along the same line , it might happen that intervals spanning only prefixes or suffixes of repeat occurrences ( called _ repeat - overlapping intervals _ ) can be detected , and the tractability of the assembly decision problem with such intervals is open ; we conjecture it is fpt in the number of such intervals . finally , _gaps _ , that can be described in terms of binary matrices , as entries appearing between entries , appears naturally in genome scaffolding problems ; the notion of gaps can naturally be described , for graphs , in terms of _ bandwidth _ and has been extended to binary matrices / hypergraphs in .very limited tractability result exist when gaps are allowed , whether it is for graphs or hypergraphs , none considering repeats , which opens a wide range of questions of practical importance .in this appendix , we describe how the assembly hypergraph relates to practical genome assembly problems . our initial motivation for investigating the algorithmic problems described in this paper follows from earlier computational paleogenomics methods developed to compute genome maps and scaffolds for ancestral genomes .in this problem , the vertex set represents a set of ancestral genomic markers , obtained either through whole genome alignment , the analysis of gene families , or the sequencing of an ancient genome .the function encodes the multiplicity , that is an upper bound on the allowed number of copies of each marker in potential assemblies . for ancestral genomes, it can be obtained from traditional parsimony methods .an edge encodes the hypothesis that appear _ contiguously _ in an assembly of the elements of . for ordered intervals , that are edges , such that and , encodes a total ordering information about the genomic elements they contain . in computational paleogenomics , edges and intervals( including order ) can be obtained from the comparison of pairs of genomes related to the ancient genome that is being assembled .the function is a weight that can be seen as a confidence measure on every edge ( the higher , the better ) , that can be based on phylogenetic conservation . more generally , the assembly hypergraph is a natural model for genome mapping problems .however , the assembly hypergraph also allows us to formalize other assembly problems .for example , in the _ scaffolding _problem , would represent _ contigs _ and can be obtained by methods based on the reads depth of coverage .co - localization information can be obtained from mate - pairs libraries with an insert that is short with respect to the minimum contig length , thus describing adjacencies , while ordered intervals can be obtained from mapping contigs onto long reads or related genome sequences .the assembly hypergraph can also be used to model the problem of assembling short reads into contigs , although contig assembly is generally based on eulerian superstring approaches instead of edge deletions approaches . 
in this problem ,the vertices represent short sequence elements , such as reads in the overlap graph approach or -mers ( substrings of length ) in the widely used de bruijn graph approach .the function can here again be obtained from the reads depth of coverage .adjacencies follow from overlaps between elements of , whose statistical significance , combined with the read quality for example , can be used to define .intervals can here again be obtained from mapping short reads on long reads . finally , it is important to remember that genomic segments are _ oriented _ along a chromosome , due to the double stranded nature of most genomes .the algorithms we described in the present paper can handle this problem in a very easy way .each genomic element is represented by two vertices , one for each extremity , with an adjacency linking them ( called a _ required _ adjacency , while adjacencies between extremities of different elements are called _ inferred _ adjacencies ) .a compatible assembly then needs to be composed of linear or circular walks where required adjacencies alternate with inferred adjacencies .this property can be handled naturally by the decision algorithms ( see ) , and also by the optimization algorithms by weighting each required adjacency by a weight greater than the cumulative weight of all inferred adjacencies .also , triples that overlap repeats need to be replaced by quadruples containing both extremities of a same initial genomic element , which can be handled by our algorithms ( full details will be given in the complete version of our work ) .
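the double-extremity encoding just described is easy to set up in code. the sketch below (python, with made-up contig names) represents every oriented element by a head and a tail vertex joined by a required adjacency, and checks that a candidate walk alternates between required and inferred adjacencies, which is the validity condition stated above.

def extremities(element):
    return element + "_h", element + "_t"

def required_edges(elements):
    # one required adjacency per element, linking its two extremities
    return {frozenset(extremities(c)) for c in elements}

def alternates(walk, required):
    """walk: list of extremity vertices; true if consecutive pairs alternate
    between required and inferred adjacencies."""
    kinds = [frozenset(p) in required for p in zip(walk, walk[1:])]
    return all(a != b for a, b in zip(kinds, kinds[1:]))

contigs = ["c1", "c2", "c3"]
req = required_edges(contigs)
walk = ["c1_h", "c1_t", "c2_h", "c2_t", "c3_t", "c3_h"]  # c3 traversed in reverse orientation
print(alternates(walk, req))  # True: a valid linear walk over oriented elements

in an optimization setting, the same alternation can be enforced by giving each required adjacency a weight larger than the cumulative weight of all inferred adjacencies, as noted above.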
the consecutive - ones property ( c1p ) is a classical concept in discrete mathematics that has been used in several genomics applications , from physical mapping of contemporary genomes to the assembly of ancient genomes . a common issue in genome assembly concerns repeats , genomic sequences that appear in several locations of a genome . handling repeats leads to a variant of the c1p , the c1p with multiplicity ( mc1p ) , that can also be seen as the problem of covering edges of hypergraphs by linear and circular walks . in the present work , we describe variants of the mc1p that address specific issues of genome assembly , and polynomial time or fixed - parameter algorithms to solve them .
in 1984 , an unconditionally secure key distribution protocol using quantum resources was proposed by bennett and brassard .the scheme , which is now known as bb84 protocol drew considerable attention of the cryptography community by its own merit as it offered unconditional security , which was unachievable by any classical protocol of key distribution .however , the relevance of bb84 quantum key distribution ( qkd ) protocol and a set of other schemes of qkd were actually established very strongly in 1994 , when the seminal work of shor established that rsa and a few other schemes of classical cryptography would not remain secure if a scalable quantum computer is built .the bb84 protocol , not only established the possibility of obtaining unconditional security , but also manifested enormous power of quantum resources that had been maneuvered since then .specifically , this attempt at the unconditional security of qkd was followed by a set of protocols for the same task .interestingly , the beautiful applications of quantum mechanics in secure communication did not remain restricted to key distribution .in fact , it was realized soon that the messages can be sent in a secure manner without preparing a prior key .exploiting this idea various such schemes were proposed which fall under the category of secure direct quantum communication ( and references therein ) .the schemes for secure direct quantum communication can be categorized into two classes on the basis of additional classical communication required by the receiver ( bob ) to decode each bit of the transmitted message- ( i ) quantum secure direct communication ( qsdc ) and ( ii ) deterministic secure quantum communication ( dsqc ) . in the former, bob does not require an additional classical communication to decode the message , while such a classical communication is involved in the latter ( see for review ) .it is worth noting that in a scheme of qsdc / dsqc meaningful information flows in one direction as it only allows alice to send a message to bob in an unconditionally secure manner using quantum resources and without generation of a key .however , in our daily life , we often require two way communication ( say , when we speak on a telephone ) .interestingly , a modification of one of the first few qsdc schemes ( i.e. , ping - pong scheme ) led to a new type of protocol that allows both alice and bob to communicate simultaneously using the same quantum channel .this scheme for simultaneous two way communication was first proposed by ba an and is known as quantum dialogue ( qd ) . due to its similarity with the task performed by telephones , a scheme for qdare also referred as quantum telephone or quantum conversation scheme , but in what follows , we will refer to them as qd . due to its practical relevance , schemes of qd received much attention and several new schemes of qd have been proposed in the last decade .however , all these schemes of qd , and also the schemes of qsdc and dsqc , mentioned here are restricted to the two - party scenario .this observation led to two simple questions- ( i ) do we need a multiparty qd for any practical purpose ? 
and( ii ) if answer of the previous question is yes , can we construct such a scheme ?it is easy for us ( specially for the readers of this paper and the authors of the similar papers who often participate in conferences and meet as members of various committees ) to recognize that conferences and meetings provide examples of situation where multiparty dialogue happens .specifically , in a conference a large number of participants can exchange their thoughts ( inputs , which may be viewed as classical information ) .although , usually participants of the conference / meeting are located in one place , but with the advent of new technologies , tele - conferences , webinar , and similar ideas that allow remotely located users to get involved in multiparty dialogue , are becoming extremely popular .for the participants of such a conference or meeting that allows users to be located at different places , desirable characteristics of the scheme for the conference should be as follows- ( a ) a participant must be able to communicate directly with all other participants , or in other words , every participant must be able to listen the talk / opinion delivered by every speaker as it happens in a real conference .( b ) a participant should not be able to communicate different opinion / message to different users or user groups .( c ) illegitimate users or unauthorized parties ( say those who have not paid conference registration fees ) will not be able to follow the proceedings of the conference .it is obvious that criterion ( c ) requires security and a secure scheme for multiparty quantum dialogue satisfying ( a)-(c ) is essential for today s society .we refer to such a scheme for multiparty secure communication that satisfies ( a)-(c ) as ascheme for quantum conference ( qc ) because of its analogy with the traditional conferences ( specially with the tele - conferences ) .the analogy between the communication task performed here and the traditional conference can be made clearer by noting that wikipedia defines conference as `` a conference is a meeting of people who confer about a topic '' .similarly , oxford dictionary describes a conference as `` a linking of several telephones or computers , so that each user may communicate with the others simultaneously '' .this is exactly the task that the proposed protocol for qc is aimed to perform using quantum resources and in a secure manner .thus , qc is simply a conference , which is an -party communication , where each participant can communicate his / her inputs ( classical information ) using quantum resources to remaining participants .however , it should be made clear that it is neither a multi - channel qsdc nor a multi - channel qd scheme . to be precise, one may assume that each participant maintains private quantum channels with all other participants and uses those to communicate his / her input to others via qsdc or qd .this is against the idea of a conference , as in this arrangement , a participant may send different information / opinion to different participants , in violation of criterion ( b ) listed above .the fact that to the best of our knowledge , no such scheme for multiparty secure quantum communication exists has motivated us to introduce the notion of qc and to aim to design a scheme for the same . 
hereit would be apt to note that although no scheme for qc is yet proposed , various schemes for other multiparty quantum communication tasks have already been proposed .for example , quantum schemes for voting , auction , and e - commerce are necessarily expected to be multiparty quantum communication schemes .interestingly , there are a few schemes for all these tasks proposed in the past ( and references therein ) .another recently discussed multiparty task isquantum key agreement ( qka ) ( and references therein ) , where the final key is generated by the contribution of all the parties involved , and a single or a few parties can not decide the final key . for instance , a multiparty qka scheme was proposed in the recent past , in which encoded qubits travel in a circular manner among all the parties .in fact , most of these multiparty quantum communication schemes , except qka , can be intrinsically viewed as a ( many ) sender(s ) sending some useful information in a secure manner to a ( many ) receiver(s ) under the control of a third party .further , all these schemes can be broadly categorized as secure multiparty quantum communication and secure multiparty quantum computation . though the line between the two is very faint to distinguish and categorize a scheme among one of them , qka and e - commercemay be considered in the former , while voting and auction fall under the latter .some efforts have also been made to introduce a notion of qc as a multiparty quantum communication task .however , earlier ideas of qc can be viewed as special cases of the notion of qc presented here and they are not sufficient to perform a conference as defined above in analogy with the definition provided in oxford dictionary and other sources .bose , vederal and knight proposed a generalized entanglement - swapping - based scheme for multiparty quantum communication that led to a set of quantum communication schemes related to qc , viz ., cryptographic conference , conference key agreement and conference call , and a scheme where many senders send their messages to single receiver via generalized superdense coding . in cryptographic conference, all parties share a multipartite entangled state .they perform measurement in the computational or diagonal basis , and the results of those measurements in which the bases chosen by all the users coincide are used to establish the secret key which will be known to all the users within the group .a similar notion of conference key agreement was used in , where a generalized notion of dense coding was used . clearlythe notion of conference is weaker here , and in our version of conference such keys can be distributed easily if all the users communicate random bits instead of meaningful messages .recent success of designing the above mentioned schemes for multiparty quantum communication further motivated us to look for a scheme for qc .a two party analogue of qc can be considered as qd , where both parties can communicate simultaneously .the group theoretic structure of ba - an - type qd schemes has been discussed in ref .the group theoretic structure discussed in will be exploited here to introduce the concept of qc .further , an asymmetric counterpart of the ba - an - type qd scheme is proposed in the recent past . following which we will also introduce and briefly discuss an asymmetric qc ( aqc ) , where all the parties involved need not to send an equal amount of information . 
with the recent interest of quantum communication community on quantum internet and experimental realization of multiparty quantum communication schemes , the motivation for introducing a qc or aqc scheme can be established .remaining part of the paper is organized as follows .[ sec : ba - an - protocol ] is dedicated to a brief review of qd and the group theoretic approach of qd for the sake of completeness of the paper , which has been used in the forthcoming sections to develop the idea of qc .two general schemes for the task of qc have been introduced in sec .[ sec : quantum - conferencepro ] . in the next section ,we have considered a few specific examples of both these schemes .the feasibility of an aqc scheme has also been discussed in sec .[ sec : examples - and - possible ] .finally , the security and efficiency of the proposed schemes have been discussed in sec .[ sec : security - analysis ] before concluding the paper in sec .[ sec : conclusion ] .[ [ section ] ] it would be relevant to mention that some of the present authors had presented the general structure of qd protocols in and established that the set of unitary operators used by alice and bob must form a group under multiplication . the group structure has also been found to be suitable for the asymmetric qd schemes , where alice and bob use encoding operations from different subgroups of a modified pauli group , like .this particular abelian group ( ) is of order 4 under multiplication and is called a modified pauli group as we neglect the global phase in the product of any two elements of this group , which is consistent with the quantum mechanics ( for detail see ) .the generalized group can be formed by -fold tensor products of , i.e. , . in the original qd protocol ,the encoding is done by alice and bob , respectively , using the same set of operations from the modified pauli group .the entire scheme of ba an can be summed up in the formula , where are the bell states .it is required that all the possible final states obtained after alice s and bob s encoding operations should remain orthonormal to each other and also with the initial state .once the initial and final states are known to both the legitimate users , they can exploit knowledge of their own encoding operation to extract each other s message .interestingly , alice and bob encode information with the same operators , say , for 00 , for 01 , for 10 , and for 11 . in this scenario , alice obtains a unique bijective mapping from the composite encoding of alice and bob ( ) to bob s operation ( ) using her unitary operation ( ) .this is obvious where there are only 2 parties , we may ask , is it possible to extend this scheme for qd to design a scheme for multiparty conference ? let us examine two cases with 3 parties : in case 1 : when all the parties encode the same bits say , 00 i.e. , they apply and ; and in case 2 : when one of them encodes the same bits used in case 1 , i.e. , 00 and other two will encode the similar bits but other than 00 , say 01 , i.e. , they apply and , respectively . in these two cases ,the resultant state is always the same as what was prepared initially , and none of the parties can deterministically conclude each others encoding .in fact , there will be many such cases , hence , ba an s original protocol for qd can not be generalized directly to design a scheme for multiparty conference . 
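the ambiguity in the two cases above is easy to verify numerically. the short sketch below (python/numpy) multiplies the single-qubit operators applied to the travel qubit and compares the products up to a global phase, which is exactly the equivalence used in the modified pauli group; the assignment of bit strings to operators (00 -> i, 01 -> x, 10 -> iy, 11 -> z) is an assumption made here only for illustration.

import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
iY = np.array([[0, 1], [-1, 0]], dtype=complex)   # i * sigma_y
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def same_up_to_phase(a, b, tol=1e-12):
    k = int(np.argmax(np.abs(a)))
    if abs(b.flat[k]) < tol:
        return False
    phase = a.flat[k] / b.flat[k]
    return np.allclose(a, phase * b, atol=tol)

# all three parties use the same operator set, as in the two-party scheme
case1 = I @ I @ I      # case 1: every party encodes 00
case2 = I @ X @ X      # case 2: one party encodes 00, the other two encode 01

print(same_up_to_phase(case1, case2))  # True: the composite operation on the travel
# qubit is identical, so the final state is the same and the two encodings cannot
# be distinguished -- the reason the two-party scheme does not extend directly.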
to design a scheme for qc, we will use the idea of disjoint subgroups introduced by some of the present authors in the recent past .disjoint subgroups refer to subgroups , say and , of a group such that they satisfy .thus , except identity and do not contain any common element .the modified pauli group has 3 mutually disjoint subgroups : and .whenever there are more than two parties , we can encode using disjoint subgroups of operators , i.e. , each party may be allowed to encode with a unique disjoint subgroup .for example , if alice , bob and charlie want to set up a qc among them , then alice can encode using , bob can encode using and charlie can encode using the use of disjoint subgroups circumvents the limitations of the original two - party qd scheme and provides a unique mapping required for multiparty conversation . in what follows ,we have proposed two protocols to accomplish the task of a qc scheme .[ [ section-1 ] ] here , we have designed two multiparty quantum communication schemes where prior generation of key is not required .these schemes may be used for qc , i.e. , for multiparty communication of meaningful information among the users . additionally , it is easy to observe that these schemes naturally reduce to the schemes for multiparty key distribution if the parties send random bits instead of meaningful messages .let us start with the simplest case , where parties send their message to party .this can be thought of as a multiparty qsdc .suppose all the parties decide to encode or communicate -bit classical messages . in this case , each user would require a subgroup of operators with at least operators . in other words ,each party would need at least a subgroup of order of a group . here, we would like to propose one such multiparty qsdc scheme .step 1.1 : : first party alice be given one subgroup to encode her -bit information .similarly , other parties ( say bob and charlie ) can encode using subgroups , and , and so on for party diana , whose encoding operations are .+ all these subgroups are pairwise disjoint subgroups , i.e. , they are chosen in such a way that . as the requirement for encoding operations to be from disjoint subgroups has been already established beforehand .+ additionally , here we assume that all the parties do nothing ( equivalent to operator identity ) on their qubits for encoding a string of zeros .as identity is the common element in the set of encoding operations to be used by each party it will be convenient to consider this as a convention in the rest of the paper .step 1.2 : : nathan ( the party ) prepares an -qubit entangled state ( with ) . + it is noteworthy that maximum information that can be encoded on the -qubit quantum channel is bits and here parties are sending bits each .in other words , after encoding operation of all the parties the quantum states should be one of the possible orthogonal states . step 1.3 : : nathan sends qubits ( ) of the entangled state to alice in a secure manner , who applies one of the operations ( which is an element of the subgroup of operators available with her ) on the travel qubits to encode her message .this will transform the initial state to .subsequently , alice sends all these encoded qubits to the next user bob .step 1.4 : : bob encodes his message which will transform the quantum state to .finally , he also sends the encoded qubits to charlie in a secure manner .step 1.5 : : charlie would follow the same strategy as followed by alice and bob . 
in the end , diana receives all the encoded travel qubits and she also performs theoperation corresponding to her message to transform the state into .she returns all the travel qubits to nathan .step 1.6 : : nathan can extract the information sent by all parties by measuring the final state using an appropriate basis set . + it may be noted that nathan can decode messages sent by all parties , if and only if the set of all the encoding operations gives orthogonal states after their application on the quantum state , i.e. , are orthogonal for all . in other words , after the encoding operation of all the parties the quantum states should be a part of a basis set with orthogonal states for unique decoding of all possible encoding operations .this scheme can be viewed as the generalization of ping - pong protocol to a multiparty scenario , where multiple sender s can simultaneously send their information to a receiver . in a similar way ,if all the senders wish to send and receive the same amount of information , then all of them can also choose to prepare their initial state independently and send it to all other parties in a sequential manner .subsequently , all of them may follow the above protocol faithfully to perform simultaneous multiparty qsdc protocols .in fact , simultaneous multiparty qsdc schemes of the above form will perform the task required in an ideal qc scheme .however , as each sender has to encode his secret multiple times ( times ) , it would allow him to encode different information in each round . though it may be advantageous in some communication schemes , where a sender is allowed to send different bit values to different receivers , but is undesirable in a scheme for qc .specifically , to stress on the relevance of a scheme that allows each sender to encode different bits to all the receivers , we may consider a situation where each party ( or a few of them ) publicly asks a question , and the receivers answer the question independently ( for an analogy think of a panel discussion in television ) . in this case , all the receivers may have different opinions ( say one may agree with some of them and may not with the remaining ) about various questions being asked . as far as a scheme for qc is concerned , protocol 1 described here would work under the assumption of semi - honesty . specifically , a semi - honest party may try to cheat , but he / she would follow the protocol faithfully .this assumption would enable us to consider that each party is encoding the same information every time . in what follows .we will establish that such an assumption is not required . specifically , in protocol 2 , we aim to design a genuine qc scheme , which does not require the semi - honesty assumption to restrict a user from sending different information to different receivers . here, we will attempt to design an efficient qc scheme , which can be thought of as a generalized qd scheme . in analogy of the original ba - an - type qd scheme, we will need the set of encoding operations for the party ( nathan ) . here, firstly we propose the protocol which is followed by a prescription to obtain the set of operations for party , assuming a working scheme designed for the protocol 1 . 
step 2.1 : : same as that of step 1.1 of protocol 1 with a simple modification that also provide nathan a subgroup enables him to encode a -bit message at a later stage .+ the mathematical structure of this subgroup will be discussed after the protocol .step 2.2 : : same as step 1.2 of protocol 1 .step 2.3 : : same as step 1.3 of protocol 1 .step 2.4 : : same as step 1.4 of protocol 1 .step 2.5 : : same as step 1.5 of protocol 1 .step 2.6 : : nathan applies unitary operation to encode his secret and the resulting state would be . step 2.7 : : nathan measures using the appropriate basis as was done in step 1.6 of protocol 1 and announces the measurement outcome . now ,with the information of the initial state , final state and one s own encoding all parties can extract the information of all other parties .+ it is to be noted that the information can be extracted only if the set of all the encoding operations gives orthogonal states after their application on the quantum state , i.e. , all the elements of are required to be mutually orthogonal for . in other words , after the encoding operation of all the parties the set of all possible quantum states should form a dimensional basis set .nathan s unitary operation can be obtained using the fact that the remaining parties have already utilized the channel capacity .hence , his encoding should be in such a way that after his encoding operation , the final quantum state should remain an element of the basis set in which the initial state was prepared .however , the bijective mapping between the initial and final states present in protocol 1 would disappear here .this is not a limitation .it is actually a requirement .this is so because , in contrast to protocol 1 where the initial and final states are secret , in protocol 2 , the choice of the initial state and the final state are publicly broadcasted .existence of a bijective mapping would have revealed all the secrets to eve .this condition provides us a mathematical advantage .specifically , it allows us to construct the set of unitary operations that nathan can apply . to doso we need to use the information about the disjoint subgroups of operators that are used by other parties . the procedure for construction of nathan s set of operationsis described below . for simplicity ,let us write the encoding operations of all the parties as follows : here , corresponds to the binary value of the decimal number , and it represents the classical information to be encoded by user x * * * ( listed in column 1 ) using the the operator ( listed in column in the row corresponding to the user x * * * ) . for example , to encode alice would use the operator , whereas for the same encoding bob and charlie would use and , respectively .further , we would like to note that by construction operators as is an element of the modified pauli group , and it is assumed that the encoding operations of the different users are chosen from the disjoint subgroups of the modified pauli groups in such a way that the product of operations listed in any column is identity , i.e. , this implies that if all the parties encode the same secret then the final state and the initial state would be the same . to illustrate this we may consider following example from eqs .( [ eq : condition])-([eq : example ] ) , it is clear that the choice of encoding operations of the other users ( i.e. 
, would uniquely determine .further , it is assumed that the encoding operations used by different users to encode are selected in a particular order that ensures and particular choice of for example , this condition implies that if alice s operators satisfy then bob and charlie would be given the encoding operators in an order that satisfy and respectively , and the same ordering of operators will be applicable to all other users .now , using the above mentioned facts and convention , we need to establish that forms a group under multiplication .( [ eq : condition])and the self reversibility of the elements lead to following identity- this may be used to establish the closure property of the group as .this is so because the pauli operators commute with each other under the operational definition of multiplication used in defining the modified pauli group .all the remaining properties of the group follows directly from the nature of pauli operators used to design .thus , it is established that the generalized multiparty qsdc scheme can be modified to a generalized qd scheme .it will be interesting to obtain the original ba an s qd scheme as a limiting case as follows . this particular case and all the discussions leave us with which is identical with alice s operations . in table[ tab : conference ] , we have provided a list comprising of the number of participants in the qc and the number of cbits they want to encode .the table explicitly mentions different multipartite states or quantum channels that can be utilized for the same ..[tab : conference ] various possibilities of qc scheme with a maximum number of parties each encoding bits using a group of unitary operators with at least elements . the quantum states suitable in each case and corresponding number of travel qubits are also mentioned .[ cols="^,^,^,^,^ " , ] the proposed qc scheme may also be extended to an asymmetric counterpart of the qc scheme , where each party may not be encoding the same amount of information .one such easiest example is a lecture , where the orator speaks most of the time while the remaining users barely speak . in such cases ,the parties sending redundant bits to accommodate the qc scheme may choose an aqc scheme . to exploit the maximum benefit of such schemes a party encoding more information than others ( say alice ) should prepare ( and also measure ) the quantum state ( in other words , start the qc scheme ) .in this case , the choice of unitary operations by each party would also become relevant and alice should use a subgroup of higher order than the remaining users .for instance , in a 3-party scenario , alice may use a from row 2 of table [ tab : multiparty - quantum - conference . 
] to encode 2 bits message , while the remaining three users may choose and , respectively .it is worth noting here that the security of the qc scheme discussed in the following section ensures the security of the aqc scheme designed here as well .further , the proposed schemes can also be easily modified to obtain corresponding schemes for controlled qc , where an additional party ( who is referred to as the controller ) would prepare the quantum channel in such a way that the qc task can only be accomplished after the controller allows the other users to do so .controlled qc can be achieved in various ways .for example , the controller may prepare the initial state and keep some of the qubits with himself , and in absence of the measurement outcome of the corresponding qubits the other legitimate parties would fail to accomplish the task . the same feat can also be achieved by the controller without keeping a single qubit with himself by using permutation of particles .thus , it is easy to generalize the proposed schemes for qc to yield schemes for controlled qc .such a scheme for controlled qc would have many applications .for example , a direct application to that scheme would be quantum telephone where the controller can be a telephone company that provides the channel to the respective users after authentication .thus , the present scheme can be used to generalize the scheme proposed in and thus to obtain a scheme for multiparty quantum telephone or quantum teleconference .additionally , the multiparty communication schemes proposed here can be reduced to schemes for secure multiparty quantum computation .interestingly , a recently proposed secure multiparty computation scheme designed for quantum sealed - bid auction task can be viewed as a reduction of the protocol 1 proposed here .therefore , we hope that the proposed schemes may also be modified to obtain solutions of various other real life problems .a qc protocol is expected to confront the disturbance attack ( or denial of service attack ) , the intercept - and - resend attack , the entangle - and - measure attack , man - in - the - middle attack and trojan - horse * * attack by implementing the bb84 subroutine strategy ( for detail see ) , which allows senders to insert decoy qubits prepared randomly in -basis or -basis in analogy with bb84 protocol and to reveal the traces of eavesdropping by comparing the initial states of the decoy qubits with the states of the same qubits after measured by the receivers randomly using -basis or -basis .in fact , quantum communication of all the qubits from one party to other , as mentioned in both the protocols ( for example , in step 1.3 ) , is performed in a secure manner . to accomplish the secure communication of message qubits using bb84 subroutine ,an equal number of decoy qubits ( the number of decoy qubits are required to be equal to the number of message qubits traveling through the channel ) are inserted randomly in the string of travel qubits . 
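as an illustration of the bookkeeping involved in the bb84 subroutine described above, the sketch below (python; purely classical bookkeeping, with qubits represented by placeholder records) interleaves one randomly prepared decoy per message qubit and keeps the (position, basis, bit) records that the sender later discloses for the security check.

import random

def insert_decoys(message_qubits):
    """returns the enlarged travel sequence and the decoy records kept by the sender."""
    n = len(message_qubits)
    decoys = [("decoy", random.choice("zx"), random.randint(0, 1)) for _ in range(n)]
    slots = ["m"] * n + ["d"] * n
    random.shuffle(slots)                 # random positions for the decoy qubits
    seq, records, mi, di = [], [], 0, 0
    for pos, kind in enumerate(slots):
        if kind == "m":
            seq.append(message_qubits[mi]); mi += 1
        else:
            seq.append(decoys[di]); records.append((pos, decoys[di][1], decoys[di][2])); di += 1
    return seq, records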
on the authenticated receipt of this enlarged sequence of travel qubits , the sender discloses the positions of the decoy qubits and those qubits are then measured by the receiver randomly in -basis or -basis .subsequent comparison of the initial states and the measurement outcomes reveals the error rate .if the computed error rate is obtained below a tolerable limit , then the quantum communication of message qubits is considered to be accomplished in a secure manner , and the steps thereafter are followed .therefore , the above mentioned attacks on the proposed schemes can be defeated simply by adding decoy qubits and following bb84 subroutine .further , bob s intimation by alice that she has sent her qubits and bob s acknowledgment of the receipt of qubits , via an authenticated classical channel , is necessary to avoid the unwanted circumstances under which eve pretends as the desired party .there also exist some technical procedures to circumvent the trojan - horse attack ( and references therein ) . as a scheme of qc incorporates multiusers we have discussed below the security in two scenarios where ( 1 ) an outsider ( eve ) attacks the protocol , or ( 2 ) an insider ( one or some of the legitimate users ) attacks the protocol .further , all the attacks and counter measures mentioned in this section are applicable on both the schemes , unless specified . in the * entangle - and - measure attack* , eve entangles her qubit with the travel qubit in the channel .eve can extract the information by performing the -basis measurement on her ancillae . to counter this attack ,the decoy states , , and are randomly inserted and when they are examined for security , then eve is detected with probability when she attacks and states , otherwise the states remain separable for and .consequently , the total detection probability of eve is taking into account that the probability of generation of each decoy qubit state is . in the * intercept - and - resend attack* , eve prepares some fresh qubits and swaps one of her qubits with the accessible qubit in the channel when user sends it to user .thereafter , eve retrieves her qubit during their communication from user to user and obtains the encoding of user by performing a measurement on her qubits .this attack will also be defended by incorporating decoy qubits .however , eve may modify her strategy to measure the intercepted qubits randomly in either the computational or diagonal basis before sending the freshly prepared qubits corresponding to the measurement outcomes .it is evident that eve s measurement of the decoy qubits will produce disturbance if she measures in the wrong basis .let be the total number of travel qubits such that are decoy and message qubits each .eve intercepts qubits which will entail both decoy and message . without a loss of generality, we assume that half of the qubits are decoy and the other half are message qubits .since the security check is performed on the decoy qubits alone , we are interested in the decoy qubits which eve measures in her lab out of the decoy qubits in the channel . 
the fraction of qubits measured by eve out of the total decoy qubits is given by .from which the information gained by eve is this implies that times the correct basis will be chosen by eve .the error induced by eve is observed by alice and bob only when bob measures in the same basis as of alice and is .the amount of information bob receives is given by ) ] is the shannon binary entropy .the security is ensured until .one can calculate the fraction for secure communication with the tolerable error rate ( and references therein ) .eve s success probability is and it would decrease with the increasing value of as .* information leakage attack * is inherent in the qd schemes , and consequently , is applicable to protocol 2 proposed here as well .it refers to the information gained by eve about the encoding of the legitimate parties by analyzing the classical channel only . in brief, the leakage can be thought of as the difference between the total information sent by both the legitimate users and the minimum information required by eve to extract that information ( i.e. , eve s ignorance ) .the mathematical prescription for an average gain of eve s information is where is the total classical information all the legitimate parties have encoded ; and is eve s ignorance after the announcement of the measurement outcome and is averaged over all the possible measurement outcomes as , with the conditional entropy .if the party authorized to prepare and measure the quantum state selects the initial state randomly and sends it to all the remaining users by using a standard unconditionally secure protocol for qsdc or dsqc then the leakage can be avoided as it increases the , and thus decreases to zero corresponding to no leakage .* participant attack * is possible in both the schemes proposed here . * * in the first scheme , a participant can send different cbits to different members unless we assume semi - honest parties .although this scheme is advantageous in certain applications , like sealed bid auction ( where this attack is detected in post - confirmation steps ) or where each participant wants to encode different values to respective participant , but in the conference scenario where it is required that each participant encodes the same message to all other participants then this attack is prominent , and it is wise to follow the second scheme , which is free from the assumption of semi - honest parties . in the second scheme ,the authorized party ( authorized to prepare and measure the quantum state ) encodes his information at the end just before performing the joint measurement and announcing the outcome .if he wants to cheat he can disclose an incorrect measurement outcome corresponding to his modified encoding once he comes to know others encoding .this action can be circumvented , and we can implement this protocol either with a trusted party or we can randomly select any two participants and run the scheme twice considering that respective party encodes same information .another solution would be that the initiator sends the hash value of his message at the beginning to all the remaining users , and if the hash value of his encoding revealed at the last do not match with that of the initially sent hash value , then he had cheated and will be certainly identified . 
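the hash-based countermeasure against a cheating initiator sketched above can be realized with any standard cryptographic hash function; the snippet below (python) uses sha-256 purely as an illustrative, assumed choice, since no particular function is prescribed here.

import hashlib

def commit(bits: str) -> str:
    # digest broadcast by the initiator before the protocol starts
    return hashlib.sha256(bits.encode()).hexdigest()

def verify(revealed_bits: str, commitment: str) -> bool:
    # recomputed by every participant once the initiator's encoding is revealed
    return commit(revealed_bits) == commitment

c = commit("0110")        # the initiator commits to the intended message up front
print(verify("0110", c))  # True : announcement consistent with the commitment
print(verify("0011", c))  # False: a modified encoding is detected as cheating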
* collusion attack * is a kind of illegal collaboration of more than one party who are not adjacent to each other , to cheat other members of a group to learn their encoding ( precisely of those who are in between them ) .the proposed schemes are circular in nature . in this type of an attack , the attackers generate an entangled state and circulate the same number of fake qubits as that of the travel qubits .the attackers at the end already possess the home photons of the fake qubits circulated by the first attacker and performs a joint measurement to learn the encoding of the participants in between them .it will be more effective if and participants collude .this is so , as both of them get the access of the travel particles at least once after knowing the secret of all the remaining parties .this attack can be averted by breaking the larger circle into sub - circles such that if less than attackers collude , they will not be able to cheat ( see for details ) .this attack and the solution are applicable in both the proposed schemes .the qubit efficiency of a quantum communication scheme is calculated as where bits of classical information is transmitted using number of qubits , and an additional classical communication of bits . in the first qc scheme , , and as each party sends bits and prepares -qubit entangled state and decoy qubits in each round of quantum communication .therefore , the efficiency is calculated to be .similarly , the qubit efficiency of the second qc scheme among parties such that each party encodes bits can be computed by noting that in this case , and . here , as the classical communication of cbitsis associated with the broadcast of the measurement outcome by the authorized party .thus , the qubit efficiency is obtained as . from the onecan easily calculate the qubit efficiency of various possible qc schemes detailed in table [ tab : conference ] .for example , one can check that the qubit efficiency of a two party qc with each party encoding 2 bits ( which is ba an s qd protocol ) using bell state as quantum channel is 67% .similarly , the qubit efficiency for a qc scheme involving three parties sending 1 bit each with bell state as the quantum channel can be obtained as 43% .hence , we find that for the same initial state as quantum channel the efficiency decreases as the number of parties increases and / orthe number of encoded bits decreases .in summary , the notion of qc is introduced as a multiparty secure quantum communication task which is analogous with the notion of classical conference , and two protocols for secure qc are designed .the proposed protocols are novel in the sense that they are the first set of protocols for qc , as the term qc used earlier were connected to communication tasks that were not analogous to classical conference .further , it is shown that protocols proposed here can be reduced to protocols for qc proposed earlier considering much weaker notion of conference .one of the proposed protocols can be viewed as a generalization of the ping - pong protocol for qsdc , whereas the other one can be viewed as a generalization of the schemes for qd . it is noted that protocol 1 composes number of rounds of multiple - sender to single receiver secure direct communication , which accomplishes the task of qc under the assumption of semi - honesty of the users . 
however , this semi - honesty assumption is not required for protocol 2 , which is proposed here as multiple - sender to multiple - receiver scheme , where the task is performed in a single round .subsequently , both the proposed schemes are elaborated with the help of an explicit example .we have discussed the utility and applications of these protocols in different scenarios .specifically , the proposed schemes may be reduced to a set of multi - party qkd and qka schemes , if the parties involved in qc send random bits instead of meaningful messages .further , feasibility and significance of the controlled and asymmetric counterparts of the proposed qc schemes have also been established .the modified versions of the proposed schemes may also be found useful in accomplishing some real - life problems , whose primitive is secure multiparty computation .for example , one can employ the proposed schemes for voting among the five countries having power of veto in united nations , where it is desired that the choice of a voter is not influenced by the choice of the others .the proposed scheme can also be extended to obtain a dynamic version of qc , where a participant can join the conference once it has started and leave it before its termination .such a generalization is possible using the method introduced by some of the present authors in ref . .further , the effect of various types of markovian and non - markovian noise on the schemes proposed here can be investigated easily using the approach adopted in .security of the proposed schemes has been established against various types of insider and outsider attacks .further , the qubit efficiency analysis established that protocol 2 is more efficient than protocol 1 .further , one can easily observe that the proposed schemes are much more efficient compared to a simple minded scheme that performs the same task by using multiple two - party direct communication schemes , which will again work only under the assumption of semi - honest users .finally , we have also presented a set of encoding operations suitable with a host of quantum channels for performing the qc schemes for number of parties .this provides experimentalists a freedom to choose the encoding operations and the quantum state to be used as quantum channel as per convenience .further , experimental realization of quantum secure direct communication scheme , which can demonstrate protocols , like quantum dialogue , quantum authentication , has been successfully performed in , and it paves way for experimental realization of qc . keeping these facts in mind , we conclude this paper with a hope that the schemes proposed here and/or their variants will be realized in the near future .* acknowledgment : * ab acknowledges support from the council of scientific and industrial research , government of india ( scientists pool scheme ) .cs thanks japan society for the promotion of science ( jsps ) , grant - in - aid for jsps fellowskt and ap thank defense research & development organization ( drdo ) , india for the support provided through the project number erip / er/1403163/m/01/1603 .10 bennett , c. h. , brassard , g. : quantum cryptography : public key distribution and coin tossing . in proceedings of the ieee international conference on computers , systems , and signal processing , bangalore , india , pp .175 - 179 ( 1984 ) shor , p. w. polynomial - time algorithms for prime factorization and discrete logarithms on a quantum computer , in proceedings of 35th annual symp . 
on foundations of computer science , santa fe , ieee computer society press .( 1994 ) thapliyal , k. , pathak , a. : applications of quantum cryptographic switch : various tasks related to controlled quantum communication can be performed using bell states and permutation of particles . quantum inf. process .* 14 * , 2599 - 2616 ( 2015 ) sharma , v. , thapliyal , k. , pathak , a. , banerjee , s. : a comparative study of protocols for secure quantum communication under noisy environment : single - qubit - based protocols versus entangled - state - based protocols .inf . process .* 15 * , 4681 ( 2016 )
a notion of quantum conference is introduced in analogy with the usual notion of a conference that happens frequently in today s world . quantum conference is defined as a multiparty secure communication task that allows each party to communicate their messages simultaneously to all other parties in a secure manner using quantum resources . two efficient and secure protocols for quantum conference have been proposed . the security and efficiency of the proposed protocols have been analyzed critically . it is shown that the proposed protocols can be realized using a large number of entangled states and group of operators . further , it is shown that the proposed schemes can be easily reduced to protocol for multiparty quantum key distribution and some earlier proposed schemes of quantum conference , where the notion of quantum conference was different . keywords : quantum conference , quantum cryptography , secure quantum communication , multiparty quantum communication .
the statistical model of atom developed by e.fermi and known as thomas - fermi model , although based on a highly simplified theoretical framework , has been proven surprisingly good in predicting basic properties of condensed matter .a particular feature of such a model is the description of compressed atoms , in this case the predicted properties were confirmed experimentally . in a preceding publication , we focused the attention on this particular aspect of the theory and proposed a simple model for describing systems under pressure .the central point of that work was the development of the concept of `` statistical ionization '' .in simple terms , this is a universal semianalytical function which enables one to describe , as a function of the distance from the point - like nucleus , the balancing process between the antibinding and binding contribution to the total energy within the compressed atom . in spite of the approximations done and extensively discussed, we underlined the utility of the proposed model as a tool for investigating at a basic level and at low computational cost , properties of systems under pressure .however it was also underlined that the properties of semianalycity and universality of the function disappear when higher order of approximation are introduced in the basic tf theory . in the light of what stated before , in this publication we intend to extend the treatment of the previous work to more sophisticated models of the tf approach . that is to say , to include the effects of exchange and correlation into the original tf model and obtain in this framework a `` generalized statistical ionization function '' .the extension of the original tf model is due to p.a.m.dirac ( tfd ) for the exchange while for the part relative to the correlation , several approaches has been proposed ; in this work the one proposed by h.lewis ( tfdl ) has been chosen because it is simple and appropriate for the compressed case , since the treatment of the electrons becomes exact in high density limit ( see appendix ) .the paper is organized as follows ; first we obtain the tf , tfd and tfdl equation within a single generalized approach , then we show numerical solutions for different atomic numbers in the neutral uncompressed case , next we illustrate numerical results for the `` generalized statistical ionization function '' in case of ( atomic number ) .finally the equation of state of a compressed system is calculated .comments on the results obtained as well as on advantages and limitations of this model , conclude the work .in this section we derive a unique general form for the tf , tfd and tfdl equation . in a semiclassical approximation the local electronic density of states can be defined as ( see for example ) : =\frac{8\pi}{h^3}p^2({{\bf r } } ) dp({{\bf r}}).\ ] ] the local electron density is therefore given by : which determines the local fermi vector . 
in the spirit of the ( generalized ) thomas - fermi approachwe can express the one - particle energy ] of free ( interacting ) electrons plus an electrostatic field arising from the direct electron - nucleus and electron - electron interaction ( we must remind that the nucleus is considered a positive point - like charge ) : = h_{\rm el}[p({{\bf r}})]-ev({{\bf r}}),\ ] ] for a system at the equilibrium the chemical potential , defined by the maximum energy of \}=e[p_{\rm f}({{\bf r}}),{{\bf r}}] ] is needed .of course this task is in principle quite hard since the solution of a many - body system is necessary , and some kind of approximation is required .the original tf model represents the simplest approximation , i.e. the hamiltonian ] through the relation e[p(r),r]$ ] .we can identify three different contributions : where \frac{p^2(r)}{2m},\ ] ] where of course .\ ] ] a particularly meaningful quantity is also the integrated energy defined by : where is a generic distance . represents therefore the total energy contained in the atomic volume enclosed within the distance from the nucleus .just as , we can think as composed by a kinetic , a coulomb electron - nucleus and a coulomb electron - electron term .after some straightforward manipulations ( see also ) , by using once more the reduced variables , we can write each of these contributions as : \int_0^x dx v^5(x ) x^{-1/2},\ ] ] \int_0^x dx v^3(x ) x^{-1/2},\ ] ] \int_0^x dx v^3(x ) x^{1/2 } \left[\frac{1}{x}\int_0^x v^3(x ' ) { x'}^{1/2 } + \int_0^{x_0 } v^3(x ' ) { x'}^{-1/2 } \right],\ ] ] where is the scaled variable related to through the relation and where we made use of the relation : ,\ ] ] valid for an isotropic . as it was done in , we consider a variable distance within the atom , and , as stated before , is the total energy in classical terms , of the sphere of radius inside the spherical atoms . in simple terms , we do divide the atom into infinitesimally thin concentric shells ; the energy at is the sum of the contributions of all of the shells inside . as a consequencethe distance at which has its minimum can be interpreted as the distance at which the binding ( nucleus - electrons ) and antibinding ( electron - electron and kinetic energy ) contributions to the total energy are in exact balance .this allows us to define a sort of electron ionization criterion , where the term `` ionization '' stands for electrons with zero or positive energy ; thus we can address to as the `` generalized statistical ionization function '' .the exact meaning of statistical ionization with the related limitations has been extensively discussed in our previous work , for such a reason we do not spend more discussion about it and focus the attention onto the numerical results . 
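before turning to the numerical results , it may help to sketch the kind of calculation involved . the following minimal python sketch solves only the plain neutral - atom tf equation by a shooting ( bisection ) method on the initial slope ; the tfd and tfdl cases discussed above add extra terms to the right - hand side and are not reproduced here . the slope bracket , tolerances and cut - offs are illustrative choices .

```python
import numpy as np
from scipy.integrate import solve_ivp

# shooting sketch for the neutral, uncompressed Thomas-Fermi equation
#   phi''(x) = phi(x)^{3/2} / sqrt(x),  phi(0) = 1,  phi(x -> inf) -> 0.
# plain TF only (no exchange/correlation); it just illustrates the kind of
# numerical solution referred to in the text.

def rhs(x, y):
    phi, dphi = y
    return [dphi, max(phi, 0.0) ** 1.5 / np.sqrt(x)]

def blow_up(x, y):      # solution diverging: slope was not steep enough
    return y[0] - 10.0
blow_up.terminal = True

def cross_zero(x, y):   # solution crossing zero: slope was too steep
    return y[0]
cross_zero.terminal = True

def too_steep(slope, x_max=50.0, x0=1e-8):
    sol = solve_ivp(rhs, (x0, x_max), [1.0 + slope * x0, slope],
                    events=(blow_up, cross_zero), rtol=1e-10, atol=1e-12)
    return len(sol.t_events[1]) > 0          # True if phi crossed zero

lo, hi = -2.0, -1.0                          # bracket for the initial slope phi'(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if too_steep(mid):
        lo = mid
    else:
        hi = mid
print("phi'(0) ~", 0.5 * (lo + hi))          # ~ -1.588, the known TF value
```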
the number of ionized electrons is therefore given by which , written in reduced variables , reads : unlike the procedure adopted in our previous work , in this case can be studied only numerically , since it is not possible to solve the integrals analytically and so obtain a semianalytical expression . moreover the dependence of on does not allow one to obtain a universal function in which passing from one atom to another is possible by simply scaling in . next we show results obtained by studying numerically the `` generalized statistical ionization function '' for the case ; however , since we did find quantitative agreement between tf and tfdl , the results shown are valid for any atomic number by suitably scaling them . in this section we show results obtained for the `` generalized statistical ionization function '' for the case . as already underlined , we expect the results to be quantitatively and qualitatively equivalent whether the tf or the tfdl model is used , since the solutions of the tf and tfdl equations do not differ , as was shown in the previous section . this means that the generalized ionization function is the numerical equivalent of the semianalytical and universal statistical ionization function of our previous work ; thus the results ( and the equation of state ) shown here for can be easily generalized to any by simply scaling . indeed the function does not show a different behavior when tf or tfdl is used ; in figure [ etot_s - s ] the plots obtained using tf and using tfdl coincide . as can be seen in figure [ etot_s - s ] , a minimum is always obtained for any compression , and in figure [ etot_s = n_x ] a particular compression has been chosen and the determination of the number of ionized electrons is pictorially illustrated . according to the procedure of our previous work , at this point we can therefore model the compressed atom as a core with radius plus a number of ionized electrons which in principle are free and spread over the whole atomic volume . the pressure is therefore determined by the density of `` free '' ( ionized ) electrons , as we proposed in the previous work ( again , limitations are extensively discussed there ) : where is given by eq . ( [ nions ] ) and the atomic volume is simply . first we study the ionization as a function of the compression for the single atom ( see fig . [ nion - xcore - x0 ] ) and then we extend the procedure to a macroscopic level , so that the resulting equation of state is shown in fig . [ p - vf ] , where the pressure in units of rydberg over the bohr atomic volume is plotted as a function of the volume . we generalized the concept of the `` statistical ionization function '' for electrons in a compressed atom , obtained in a previous work , within the tf approach . the generalization was developed by extending the previous analysis to more sophisticated tf models where exchange and correlation are considered . we found that there is no qualitative or quantitative difference between the original and the generalized approach . although one could have expected this result , we explicitly proved such an equivalence , and for the first time , solutions of the tfdl equation were shown .
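a rough numerical sketch of the ionization count described above is given below . it assumes that the ionized charge is identified with the electronic charge lying outside the balance radius obtained from the minimum of the integrated energy , and that the enclosed charge fraction is proportional to the standard tf integral of v^{3/2 } \sqrt{x } ; both the normalization and the placeholder arrays are assumptions , not values taken from this work .

```python
import numpy as np

# sketch of the statistical-ionization count: ionized charge = charge outside
# the balance radius x_c at which the integrated energy e_tot(x) is minimal.
# the v^{3/2} sqrt(x) weighting is the standard TF enclosed-charge integrand
# in reduced units; this is an assumption about the paper's conventions.

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def n_ionized(x, v, e_tot, z):
    """x, v, e_tot sample the screening function and the integrated energy on
    [0, x0]; z is the atomic number.  returns (x_c, number of ionized electrons)."""
    i_c = int(np.argmin(e_tot))                    # balance radius from min of e_tot
    dens = v ** 1.5 * np.sqrt(x)                   # radial charge density, reduced units
    total = trapz(dens, x)
    inside = trapz(dens[: i_c + 1], x[: i_c + 1])
    return x[i_c], z * (total - inside) / total    # normalized so the atom holds z electrons

# placeholder profiles standing in for an actual tf/tfdl solution:
x = np.linspace(1e-6, 5.0, 2000)
v = np.exp(-x)
e_tot = (x - 2.0) ** 2
print(n_ionized(x, v, e_tot, z=20))
```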
the important conclusion of this work concerns the fact that because of the equivalence shown , the results reported for the particular case of are valid for any atomic number , provided that a rescaling , where is the atomic number wished , is applied ; above all the results legitimate the use of the semianalytical and universal ionization function obtained in our previous work with the evident advantages of the low computational cost and its extreme simplicity and immediacy .moreover the fact that lewis formula is exact for the high density limit , means that in our case the effects of correlation were well described , as a consequence one can conclude that such effects are not relevant for describing atoms under pressure , at least in first approximation ; this result it is not obvious in an _ a priori _ analysis .as it was discussed in the previous work , the equation of state is a simplification and as a consequence far from being rigorous ; for instance an open problem of our model is the distribution of the `` ionized '' electrons which in the present work is considered simply uniform .however , due to its simplicity and feasibility , the model can be applied for a basic study of compressed matter not only via the determination of the equation of state , but also as a basis for developing more efficient analysis within more sophisticated theories . the example shown in our previous publication , was the determination of the `` ionized '' electrons for a certain compression and the consequent description , in an _ ab initio _ method , of the ionized electrons via plane - waves wavefunctions while the other electrons can be represented as a core or as localized orbitals ; this would speed up the convergence for self - consistent calculations of compressed matter .here we can say more , as it is well known , the _ ab initio _ calculations are based on the concept of pseudopotential ; only the valence electrons are explicitly taken into account while the rest are placed in a core described by an opportune potential which interacts with the valence electrons .valence electrons and core are known for the uncompressed atoms but one may ask what happens in case the system is under high pressure , in this case the valence electrons and the core should be redefined according to the degree of compression . in this case our model , at basic level would be very helpful for estimating the number of ionized ( valence ) electrons and define what is the core .the evident advantage of such a procedure stays in the simplicity of such an estimate and in the very low computational cost . in conclusionwe think that the study performed in this work furnishes important information and tools for a computational inexpensive and well founded basic analysis of compressed matter .we briefly illustrate the procedure followed by lewis to develop a suitable formula for the electron correlation , for more details see and references therein .consider an electronic fermi gas whose maximum momentum is the fermi momentum and density ; the correlation energy at high density can be calculated via the gell - mann s scheme and leads to the following expression and are respectively the electron mass and the electron charge .this expression must be modified in such a way that its validity could be reasonably extended to any density and in particular must be exact at low density . at this pointlewis invokes wigner procedure for the calculation of electron correlation for a dilute gas ; i.e. 
one simply needs to note that a very dilute electron gas in the ground state crystallize into a body - centered cubic lattice and at this point the correlation energy can be determined exactly via a madelung type technique .the expression for low density obtained is where .finally the procedure leads to what lewis defines as `` a suitable interpolation formula '' for the correlation energy valid for any density : \label{formula3}\ ] ] where .this is the expression used by lewis to incorporate the correlation effects into the tfd model .e.fermi , rend.accad.naz.lincei 6 ( 1927 ) 602 .r.p.feynmann , n.metropolis , e.teller , phys.rev . 75 ( 1949 ) 1561 r.f.trunin , m.a.podurets , g.v.simakov,l.v.popov , b.n.moiseev , sov.phys .jetp 35 ( 1972 ) 550 . l.delle site , physica a , 293 ( 2001 ) 71 ; + l.delle site , phisica a , 295 ( 2001 ) 562 . p.a.m.dirac , proc.cambridge phil.soc . 26( 1930 ) 376 .h.lewis , phys.rev . 111( 1958 ) 1554 .l.landau , m.j.kearsley , e.m.lifshits , statistical physics : part 1 , buttenworth - heinemann , oxford , 1980 .j.f.barnes , phys.rev . 140 ( 1965 )j.f.barnes , phys.rev . 153 ( 1967 ) 269 .e.e.sapleter , h.s.zapolsky , phys.rev . 158( 1967 ) 876 .k.ebina , t.nakamura , j.phys.soc . of jap .52 ( 1983 ) 1658 .s.l.shapiro , s.a.teukolsky , black holes , white dwarfs and neutron star , wiley , new york , 1983 .
in a previous work one of the authors proposed a simple model for studying systems under pressure based on the thomas - fermi ( tf ) model of single atom . in this work we intend to extend the previous work to more general thomas - fermi models where electronic exchange and correlation are introduced . to do so , we first study numerically the equation obtained by h.w.lewis ( tfdl ) which introduces the effects of exchange and correlation into the original tf equation ; next the procedure followed in the previous work is extended to the new approach and a specific example is illustrated . although one could expect that no big differences were produced by the generalized tf model , we show the qualitative as well as quantitative equivalence with detailed numerical results . these results support the robustness of our conclusions with regards to the model proposed in the previous work and give the character of universality ( i.e. to pass from one atom to another , the quantities calculated must be simply scaled by a numerical factor ) to the properties of compressed systems shown in this work .
we consider a general system of chemical species inside a fixed volume with denoting the number of molecules .the stoichiometric matrix describes changes in the population size due to different reaction channels , where each describes the change in the number of molecules of type from to caused by an event of type .the probability that event occurs during the time interval equals , where the are called the transition rates .this specification leads to a poisson birth and death process described by the following stochastic equations where the are independent poisson processes with rate 1 . in order to define the contribution of the reaction to the variability of first define as the expectation of conditioned on the processes so that is a random variable where timings of reaction have been averaged over all possible times , keeping all other reactions fixed .therefore is a random variable representing the difference between the native process and a time - averaged process of the reaction . nowthe contribution of the reaction to the total variability of is where denotes the temporal average over all reactions .this definition is similar to the one proposed in to quantify the contributions of promoter states and mrna fluctuations , respectively , to the protein level variability . in general , it is difficult to calculate or study properties of equation ( [ contrib_def ] ) using a poisson birth and death process framework ( [ kurtz_x ] ) . hereinstead we use the linear noise approximation ( lna ) , which allows us to model stochastic systems using wiener processes driven instead of poisson processes driven stochastic differential equations .the lna is valid if the number of interacting molecules is sufficiently large and decomposes the system s state , , into a deterministic part and a stochastic part here and are described by the deterministic and stochastic differential equations respectively , and their coefficients are given by the following formulae the lna presents a simple way to compute contributions , and here we demonstrate how the total variance can be decomposed into the sum of individual contributions .we first write the explicit solution for the process as and , where is the fundamental matrix of the non - autonomous system of ordinary differential equations and instead of and , respectively . 
] now it is straightforward to verify that where , , , and from ( [ sol_xi ] ) and ( [ xi_j ] ) we have with and the time derivative of we obtain for , with this is , of course , analogous to the fluctuation dissipation theorem , with the exception that the diffusion matrix contains zeros at all entries not corresponding to the reaction .now the fact that the total variance can be represented as the sum of individual contributions results directly from the decomposition of the diffusion matrix and the linearity of the equation for , given by the standard fluctuation dissipation theorem the decomposition ( [ sigmaj_sum ] ) it is in principle possible to detect reactions that make large contributions to the output variability of biochemical reactions .but even simple systems , for which analytic formulae exist , usually have complicated noise structures ; we can nevertheless prove two general propositions that assign surprisingly substantial contributions to the degradation of an output signal .we formulate them in this section ( proofs are in the _ appendix _ ) and illustrate them further below .consider a general system such as described at the beginning of the _ noise decomposition _ section .in addition , assume that the deterministic component of the system has a unique and stable stationary state ( all eigenvalues of matrix have negative real parts ) . if is an output of this system being produced at rate and degraded in the reaction at rate then the contribution of the output s degradation is equal to the half of its mean ; more specifically , {nn}=\frac{1}{2 } \langle x_n \rangle.\ ] ] now consider again a general system but assume that reaction rates are of mass action form and that only three types of reactions are allowed : production from source : : : degradation : : : conversion : : : .to satisfy the openness assumption each species can be created and degraded either directly or indirectly ( via a series of preceding conversion reactions ) . as in _ proposition 1 _let be the output of the system .under this assumption the degradation of the output contributes exactly half of the total variance of the system s output , {nn}=\frac{1}{2 } \left[\sigma\right]_{nn},\ ] ] where is again the index of the output s degradation reaction ._ proposition 2 _ can be understood as a balance between production and degradation reactions .if we consider all reactions except as producing species , then production and degradation contribute the same amount of noise .usually , however , there is more than one production reaction , and therefore it is more convenient to interpret this result as the contribution of a single reaction .both propositions indicate that a substantial part of noise is a result of the signal s degradation reaction .this observation is particularly interesting as open conversion systems are often good approximations to important biochemical reactions such as michaelis - menten enzymatic conversions or linear signal transduction pathways , which we discuss below ; it suggests also that controlled degradation is an effective mechanism to decrease the overall contribution to variation in protein levels .in this section we demonstrate how our methods can be used to study origins of noise in biochemical systems and present consequences of the above propositions . 
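before turning to the examples , the decomposition itself can be sketched numerically in a few lines : at a stable steady state the total covariance solves the lyapunov ( fluctuation - dissipation ) equation with the full diffusion matrix , while the contribution of reaction j solves the same equation with the diffusion matrix restricted to reaction j . the helper below takes a generic stoichiometric matrix and a steady - state rate vector ; the birth - death check at the end uses arbitrary rate constants .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# per-reaction variance decomposition within the LNA:
#   A Sigma + Sigma A^T + D = 0,   D = sum_j D_j,   D_j = w_j(phi) S_j S_j^T,
# and the contribution of reaction j solves the same Lyapunov equation with D
# replaced by D_j.  S: stoichiometric matrix (species x reactions),
# w: steady-state rates, A: drift Jacobian at the steady state.

def decompose(A, S, w):
    """return (total covariance, list of per-reaction contributions)."""
    parts = []
    for j in range(S.shape[1]):
        D_j = w[j] * np.outer(S[:, j], S[:, j])
        parts.append(solve_continuous_lyapunov(A, -D_j))
    return sum(parts), parts

# tiny check on a birth-death process  0 -> X (rate k),  X -> 0 (rate g*x):
k, g = 10.0, 1.0
x_ss = k / g
A = np.array([[-g]])
S = np.array([[1.0, -1.0]])
w = np.array([k, g * x_ss])
total, parts = decompose(A, S, w)
print(total[0, 0], [p[0, 0] for p in parts])   # 10.0 and [5.0, 5.0]: equal shares
```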
for illustrative purposes we start with a basic birth and death process , where molecules arrive at rate and degrade at rate .in the lna this process can be expressed by the following stochastic differential equation the stationary distribution of this system is poisson with the mean .therefore in the stationary state death events occur at rate which must be equal the birth rate .the noise terms in the above equation are equal at stationarity indicating that contributions of birth and death reactions are equal .using formula ( [ sigma_j ] ) it is straightforward to verify that the decomposition holds indeed .here we first focus on conversion pathways , where molecules are born at a certain rate and information is transmitted to molecules using ( possibly reversible ) conversion reactions .all molecules can degrade with arbitrary first order rates , and we assume that the final product is not the substrate of any conversion reaction . under these assumptions ,regardless of parameter values , length of the pathway , and degradation of intermediates , the degradation of the output , rather unexpectedly , contributes exactly half of the variance of the signalling output , . on the other hand ,if information is transmitted not using conversion reactions , but each species catalyses the creation of the subsequent species , i.e. , then according to the _ proposition 1 _ , the contribution of the degradation of to the output s variance equals half of its mean regardless of the cascade length and parameter values . in order to study contributions of intermediate reactions , we considered cascades of the length three ( see figure [ cascades_arrows ] ) and decomposed variances numerically for both conversion and catalytic cases .decompositions are presented in figure [ cascades ] . for conversion cascades we used two parameter sets corresponding to fast and slow conversions to demonstrate that increasing the rates of intermediate steps decreases their contribution asmight intuitively be predicted ( top panel of figure [ cascades ] ) . for catalytic cascades increasing the rates of reactions at step two and three ( going from slow to fast )increases the contribution of reactions at the bottom of the cascades . for slow dynamics during steps two and three ,the relatively fast fluctuations at the start are filtered out ( low pass filtering ) .if the dynamics of all species occur at similar times scales then these fluctuation can efficiently propagate downstream ( bottom panel of figure [ cascades ] ) .michaelis - menten enzyme kinetics are fundamental examples of biochemical reactions ; substrate molecules ( ) bind reversibly to enzyme molecules ( ) with the forward rate constant and the backward rate constant to form a complex ( ) , which then falls apart into the enzyme and a product ( ) at rate : . to ensure existence of the steady state, we assume that substrate molecules arrive at rate and are degraded at rate . at the unique steady state , if the concentration of an enzyme is large compared to the substrate , the system is well approximated by a set of mono - molecular reactions : our theory therefore predicts that half of the noise in michaelis - menten kinetics operating in its linear range ( abundant enzyme ) is generated by degradation of the product molecules . 
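the conversion - cascade result can be checked directly with the same lyapunov construction ; the sketch below builds a three - species open conversion chain with arbitrary rate constants and verifies that the output - degradation contribution equals half of the output variance .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# open conversion chain 0 -> X1 -> X2 -> X3 -> 0 (all species may also degrade);
# the rate constants are arbitrary, only the network structure matters here.
k_in, c12, c23, g1, g2, g3 = 5.0, 2.0, 1.5, 0.3, 0.2, 1.0

# steady state of the linear rate equations
x1 = k_in / (c12 + g1)
x2 = c12 * x1 / (c23 + g2)
x3 = c23 * x2 / g3

# columns: [production, 1->2, 2->3, deg X1, deg X2, deg X3]
S = np.array([[ 1, -1,  0, -1,  0,  0],
              [ 0,  1, -1,  0, -1,  0],
              [ 0,  0,  1,  0,  0, -1]], dtype=float)
w = np.array([k_in, c12 * x1, c23 * x2, g1 * x1, g2 * x2, g3 * x3])

A = np.array([[-(c12 + g1), 0.0, 0.0],
              [c12, -(c23 + g2), 0.0],
              [0.0, c23, -g3]])          # drift Jacobian

contrib = [solve_continuous_lyapunov(A, -w[j] * np.outer(S[:, j], S[:, j]))
           for j in range(S.shape[1])]
total = sum(contrib)
print(total[2, 2] / 2.0, contrib[5][2, 2])   # the two numbers coincide
```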
in figure [ michaelis ]we have calculated the contributions of each of the four reactions to the variance of all four species for the full model ( without themonomolecular approximation ) .the contribution of the output degradation is as predicted by the monomolecular approximation .the canonical example of a linear catalytic pathway is the gene expression process , which can simply be viewed as the production of rna ( ) from source at the rate , and production of protein ( ) in a catalytic reaction at rate together with first order degradation of both species at rates , ._proposition 1 _ states that the part of the variance resulting from the protein degradation equals half of the protein mean .moreover , formula ( [ sigma_j ] ) allows us to derive the complete decomposition a similar decomposition was first presented by , however , only into contributions resulting from fluctuating species and without theoretical explanation ; the latter was provided later by .below we investigate two extensions of the above model .first , we assume that the promoter can fluctuate between `` on '' and `` off '' states ( similarly to ) and calculate contributions for different time - scales of these fluctuations .second , we assume that the protein is a fluorophore that undergoes a two - step maturation process before it becomes visible ( folding and joint cyclization with oxidation ) .figure [ promoter ] presents contributions for fast and slow promoter kinetics , showing that fast fluctuations are effectively filtered out ( contributing 10 % ) but remain a substantial contributor when they are slow ( contributing 40 % ) .variability in gene expression is often measured by means of fluorescent proteins that undergo maturation before becoming visible for detection techniques ; but the process of maturation itself is subject to stochastic effects , and can thus contribute significantly to the observed variability .we used typical parameters for fast and slow maturing fluorescent proteins and found that maturation contributed 4 and 25% , respectively ( figure [ maturation ] ) to the overall variability ; here our method allows for the rigorous quantification compared to the previous qualitative observations of . 
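the two - stage gene expression decomposition quoted above is also easy to reproduce ; in the sketch below the rate constants are arbitrary , and the last line checks that the protein - degradation contribution equals half of the mean protein level , as stated in _ proposition 1 _ .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# two-stage gene expression: 0 -> mRNA (k_r), mRNA -> 0 (g_r),
# mRNA -> mRNA + P (k_p), P -> 0 (g_p).  rate constants are arbitrary.
k_r, g_r, k_p, g_p = 2.0, 1.0, 10.0, 0.1
m = k_r / g_r
p = (k_p / g_p) * m                                   # steady-state means

A = np.array([[-g_r, 0.0],
              [ k_p, -g_p]])                          # drift Jacobian (mRNA, P)
S = np.array([[ 1, -1, 0,  0],
              [ 0,  0, 1, -1]], dtype=float)          # columns: the 4 reactions
w = np.array([k_r, g_r * m, k_p * m, g_p * p])        # steady-state propensities

contrib = [solve_continuous_lyapunov(A, -w[j] * np.outer(S[:, j], S[:, j]))
           for j in range(4)]
print([round(c[1, 1], 3) for c in contrib])           # per-reaction shares of var(P)
print(sum(c[1, 1] for c in contrib), p / 2.0, contrib[3][1, 1])  # last two are equal
```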
in order to demonstrate that our methodology and predictions are valid also for more general out - of - steady - state models , we next focus on the p53 regulatory system , which incorporates a feedback loop between the tumour suppressor p53 and the oncogene mdm2 , and is involved in the regulation of the cell cycle and the response to dna damage . we consider the model introduced in and later analysed in that reduces the system to three molecular species , p53 , mdm2 precursor and mdm2 , denoted here by and . the model incorporates six reactions ( ) , and its deterministic version can be written in the form of the following ordinary differential equations . using equation ( [ sigma_j ] ) we calculated the contributions of all six reactions present in the model . results are presented in figure [ p53_contrib ] . the contribution of each reaction oscillates over an approximately constant range over the time course considered here , except for the degradation reaction : the contribution of the degradation accumulates over time . this observation is consistent with our propositions , which predict for steady - state models that the contribution of output signal degradation is significant and generally overrides the contributions of other reactions . in the authors studied the applicability of the lna for the analysis of oscillatory models and found that the approximation fails in this type of model when long time periods are considered . in these cases the total variance diverges to infinity with time . they also decomposed the total variance into an oscillatory and a diverging part ; the diverging part corresponds to the variability in the oscillation s period . the diverging part in figure [ p53_contrib ] results from the p53 degradation , indicating that stochasticity of this reaction is responsible for the variability of the period in this system . the propositions exemplified by the above models demonstrate that the noise resulting from degradation of an output is a substantial source of variability in biochemical systems . in a general system it contributes a fraction of the variance equal to half of the mean output . here we show how controlling the degradation can reduce this contribution . we consider a simple birth and death process , but the same mechanism is valid for the general system considered in _ proposition 1 _ .
if births and deaths occur at the state dependent rates and , respectively , such a system is described by at equilibrium , we have , and using formula ( [ sigma_j ] ) we can calculate that contribution of degradation is for , this reduces , of course , to the previously discussed case .nevertheless , if functions and are of hill and michaelis - menten type , respectively , then the contribution is no longer directly related to the mean and can be reduced to an arbitrary low level according to the above formula .any decrease in the contribution compared to the mean is achieved by the reduction of the variance resulting from the reduced flux through both reactions and autoregulatory control effects .recent experimental work on protein degradation _ in vitro _ provides the evidence that degradation indeed exhibits michaelis - menten type kinetics , instead of the linear first order kinetics that are usually used to model degradation .the effect of autoregulation is depicted in figure [ figure1 ] .the noise decomposition method introduced here allows us to investigate in detail where and how noise enters biochemical processes , is propagated through reaction systems , and affects cellular decision making processes .we have shown analytically that in a wide class of systems the signal degradation reaction contributes half of the noise in a system s output regardless of parameter values .we have also carried out numerical study for a system that never reaches steady - state and confirmed the surprisingly important role of degradation in controlling stochasticity of the p53 system , as well as in a range of generic systems biology models .quite generally , the ability to dissect noise propagation through biological systems does enable researchers better to understand the role of noise in function ( and evolution ) , and will also enable synthetic biologists to either harness or dampen the effects of noise in molecular signalling and response networks .one of the central results that we found and report here is the crucial role of degradation of the signal on the overall noise levels .the relevance of degradation has not been studied before in the context of stochastic biochemical dynamics and to our knowledge our study is the first that draws attention to the importance of degradation .this is particularly important as this may indicate new therapeutic targets : in humans and other other sequenced organisms , certainly , the repertoire of proteins involved in protein degradation , in particular ubiquitin - ligases , is as rich and diverse as the repertoire of proteins regulating their activation , the kinases ( and phosphatases ) .thus targeting the degradation of proteins appears as important to biological systems as protein activation and offers an attractive and broad range of new potential therapeutic targets .mk and mphs acknowledge support from the bbsrc ( bb / g020434/1 ) .jm would like to thank the polish ministry of science and higher education for a financial support under the grant n201 362536 .mphs is a royal society wolfson research merit award holder .* 1 ) * interactions of with other species imply that for and . thus matrix can be written as *2 ) * formula ( [ d_j ] ) implies that all elements of matrix are equal to except , therefore has the form * 3 ) * it is straightforward to verify that the matrix satisfies the equation an open conversion system ( see e.g. 
) , therefore from _ proposition 1 _ we have , and being the stationary solution of .the following parameters were used : , , , .for fast fluctuations we used , , and for slow , .all rates are per hour .[ promoter ] ] . for slow maturation ( average maturation time approx. 5 h. ) we assumed folding and maturation rates to be and , respectively .for fast maturation ( average maturation time approx . 0.5 h ) we set , .all rates are per hour .[ maturation ] ] trajectories ( top left ) and decomposition of variance into the contributions corresponding to each of the reactions for mdm2 ( top right ) , mdm2 rna ( bottom left ) and p53 ( bottom right ) .we used the index of the parameters in equations ( [ p53_1]-[p53_3 ] ) to denote reaction they describe .we used model and parameters published in .figure demonstrates accumulation of the noise contributed by degradation of p53 ( reaction ) .the following parameters were used : all rates are per hour . ] the effect of noise reduction resulting from regulated degradation .each colour describes production rate ( dotted line ) , degradation rate ( dashed line ) and density of stationary distribution of ( solid line ) . for blue : , ; for black : , ; for green : , . plotted densities are kernel density estimates based on 10000 independent stationary samples generated using gillespie s algorithm . ]
the phenomena of stochasticity in biochemical processes have been intriguing life scientists for the past few decades . we now know that living cells take advantage of stochasticity in some cases and counteract stochastic effects in others . the source of intrinsic stochasticity in biomolecular systems are random timings of individual reactions , which cumulatively drive the variability in outputs of such systems . despite the acknowledged relevance of stochasticity in the functioning of living cells no rigorous method have been proposed to precisely identify sources of variability . in this paper we propose a novel methodology that allows us to calculate contributions of individual reactions into the variability of a system s output . we demonstrate that some reactions have dramatically different effects on noise than others . surprisingly , in the class of open conversion systems that serve as an approximate model of signal transduction , the degradation of an output contributes half of the total noise . we also demonstrate the importance of degradation in other relevant systems and propose a degradation feedback control mechanism that has the capability of an effective noise suppression . application of our method to some well studied biochemical systems such as : gene expression , michaelis - menten enzyme kinetics , and the p53 system indicates that our methodology reveals an unprecedented insight into the origins of variability in biochemical systems . for many systems an analytical decomposition is not available ; therefore the method has been implemented as a matlab package and is available from the authors upon request . + living cells need to constantly adapt to their changing environment . they achieve this through finely honed decision making and stress response machineries that regulate and orchestrate the physiological adaptation to new conditions . in all studied genomes a large number of proteins have as their primary function the transfer and processing of such information . such proteins are linked through a host of different mechanisms into biochemical circuits that perform a variety of information processing tasks including storage , amplification , integration of and marshalling the response to environmental and physiological signals . the functioning of these information processing networks depends on thermal or probabilistic encounters between molecules , resulting in a distortion of a transferred information that is best understood as noise . each reaction in the information processing machinery leads to an inevitable loss of information . therefore , cell functions do not only rely on the necessity to make good " decisions , but also on appropriate ways to cope with the uncertainties arising from the noisy " signal transmission . to deal with the latter type of difficulty , we believe , evolution equipped cells with reliable signal transduction systems by using less noisy reactions or reaction configurations , where needed . the question , however , which reactions , molecular species or parts of a network contribute most of the variability of a system or are responsible for most of the information loss has not gained much attention , except for some models of gene expression . origins of stochasticity in biochemical systems have been discussed and argued over at length , but a unified and robust mathematical framework to study this problem has been lacking . 
here we present a novel and general method to calculate contributions of individual reactions to the total variability of outputs of biochemical systems . this enables us to identify the origins of cell - to - cell variability in dynamical biochemical systems . we derive a modified fluctuation - dissipation theorem which enables us to determine how much of the total variance results from each of the system s reactions . we then obtain some unexpected but general rules governing the noise characteristics of biochemical systems . in particular , we shall show that in an arbitrary system with a sufficiently large number of molecules , degradation of the output ( e.g. a reporter protein or a transcription factor ) contributes to the total variance of the system half the of the output s mean ; for the important class of open conversion systems exactly half of the variance derives from the degradation step of the output signal . these results demonstrate that some reactions may be responsible for higher information loss than others ; but our results also reveal that cells have the option of optimising biochemical network structures in order to avoid the most noisy reactions if necessary . based on these results we propose a mechanism of controlled protein degradation based on a positive feedback that allows an arbitrary noise reduction resulting from the protein degradation . below we first introduce the general framework for modelling chemical reactions and derive a new method to decompose the noise in a biochemical system into contributions from different individual reactions . furthermore , two general properties governing noise are presented . finally , we use biological examples of signal transduction systems to provide novel insights into the origins of variability . in particular , we decompose the variance of the outputs of linear transduction cascades and michaelis - menten enzyme kinetics . in addition , for the oscillatory p53 system we show that stochasticity in p53 protein degradation is responsible for the variability in the oscillations periodicity .
if elongated active , _ i.e. _ self - propelled , objects interact by pushing each other in a dissipative medium or substrate , the objects will tend to locally align , as shown in fig . [ fig - model ] . since these objects are self - propelled , once aligned they will move together in the same direction for a given time . this simple effective alignment mechanism among active objects leads to interesting collective effects , such as the formation of moving clusters , as illustrated in fig . [ fig : myxo ] with experiments of myxobacteria . there is a broad range of real - world active systems that consist of active elongated objects where this mechanism is at work : gliding bacteria , dried self - propelled rods , chemically - driven rods , and it has been recently argued that , neglecting hydrodynamic effects in favor of steric effects , the same mechanism is also at work in swimming bacteria and motility assays . here , we review the large - scale properties of collections of active brownian elongated objects , in particular rods , moving in a dissipative medium / substrate . we address the problem by presenting three different models of decreasing complexity , which we refer to as model i , ii , and iii , respectively . model i is the full physical active brownian rod model introduced in , where particles exhibit a well - defined shape , possess an active force acting along the longest axis of the rod , and interact via volume exclusion effects by pushing each other . in model i there exists a coupling of local density , orientational order , and speed , known to lead to density instabilities and collective phenomena in other active models . more importantly , in model i active stresses coexist with an effective local alignment mechanism . due to the combined effect of these two elements , model i displays exciting new physics unseen in other active models , such as the formation of highly dynamical aggregates that constantly eject giant polar clusters containing thousands of active rods . if we remove the active force from model i , we end up with an equilibrium system ( if the noise terms have been adequately chosen ) . with the elongated rods interacting through steric repulsive forces , onsager s argument on thin rods applies and the system exhibits local nematic order above a given critical density . we discuss the possibility of local nematic order and quasi - long - ranged order ( qlro ) in two dimensions by introducing model ii , which is a simplified version of model i without an active force . model ii allows us to argue that the symmetry of the interaction potential in model i is nematic . we introduce model iii to show that the peculiar large - scale properties displayed by model i do not result , as has been argued , from the combined effect of self - propulsion and an effective nematic velocity alignment mechanism . model iii is an active version of model ii and a simplified version of model i without volume exclusion interactions . let us recall that most flocking models assume a velocity alignment mechanism whose symmetry is ferromagnetic . from model iii we learn that active particles with a nematic velocity alignment exhibit macroscopic nematic structures , which are not present in model i , which instead displays polar order at short scales and a highly dynamical , highly fluctuating phase - separated state . comparing models i , ii , and iii we disentangle the role of activity and interactions and identify the contribution of every modeling element .
in particular, we find that by ignoring volume exclusion effects , local and global nematic order seems to be possible , while by adding steric interactions the system is dominated by the interplay of active stresses and local alignment , which prevents the formation of orientational order at large scales in two - dimensions .the physics of active elongated objects , despite its ubiquity in experimental systems , remains still poorly understood . here, we present a detailed state of the art of the unique collective properties of this fascinating physical system .let us consider active brownian rods ( abr ) moving in a two - dimensional space of linear size with periodic boundary conditions .each rod is driven by an active stress / force that is applied along the long axis of the particle .interactions among rods are modeled through a repulsive potential , which we denote , for the -th particle , by .the substrate where the rods move acts as a momentum sink .there are three friction drag coefficients , , , and , which correspond to the drags experienced by the rods as the rod moves along the long axis , perpendicular to it , or as it rotates , respectively . in the over - damped limit ,the equations of motion of the -th rod are given , as in , by : \\ \label{eq : evol_theta } \dot{\theta}_i & = & \frac{1}{\zeta_{\theta } } \left [ - \frac{\partial u_i}{\partial \theta_i } + \xi_{i}(t ) \right ] \ , , \end{aligned}\ ] ] where the dot denotes a temporal derivative , corresponds to the position of the center of mass and the orientation of the long axis of the rod .the term models the interactions with other rods and is the self - propelling force .the symbol in eq .( [ eq : evol_x ] ) is the mobility tensor defined as , with and such that .drag friction coefficients can be computed assuming that the rods are surrounded by a liquid , move on a dried surface as in experiments with granular rods , or by assuming that eqs .( [ eq : evol_x ] ) and ( [ eq : evol_theta ] ) represent gliding bacteria , in which case the friction coefficients are arguably connected to presence of the so - called focal adhesion points . in short ,the friction coefficients depend on the specific rod system eqs .( [ eq : evol_x ] ) and ( [ eq : evol_theta ] ) are supposed to model . here , we use , , , and .( [ eq : evol_theta ] ) represents the temporal evolution of the orientation of the rod , which is assumed to result from the torque generated by the interactions with the others rods , modeled by , and thus we express this torque as . note that eqs .( [ eq : evol_x ] ) and ( [ eq : evol_theta ] ) are subject to fluctuations through the terms and , which correspond to delta - correlated vectorial and scalar noise , respectively . for simplicity , we neglect and specify in eq .( [ eq : evol_theta ] ) and , with ( for more details see ) .the interactions among the rods are modeled by a soft - core potential that penalizes particle overlapping . 
for the -th rod, the potential takes the form : , where denotes the repulsive potential interaction between the -th and -th rod , both of length and width , such that .the rods ( in two - dimensions ) can be represented as soft rectangles or spherocylinders as in or equivalently as straight chains of disks of diameter , as implemented in , whose centers are separated a given distance ; here .we notice that results obtained with active brownian rods represented by disk - chains are qualitatively identical to those produced with the original abr model introduced in if and only if , _ i.e. _ as long as the border of the rods is smooth . using this implementation , can be expressed as , where is the potential between disk of the -th rod and disk of the -th rod , which here we assume to be given by a harmonic repulsive potential : , for and zero otherwise , where is the distance between the centers of the disks and . if we remove the active force and make in eq .( [ eq : evol_x ] ) , while requiring that and allow us to define a temperature , the resulting system is the equilibrium system of passive rods studied by onsager to describe , in a simple way , rod - like liquid crystals in two - dimensions .let us recall that according to the so - called mermin - wagner theorem , this equilibrium system in two - dimensions can not exhibit long - ranged order ( lro ) .furthermore , it is believed that this system exhibits ( in two - dimensions ) a defect - mediated phase transition analogous to the berezinskii - kosterlitz - thouless ( bkt ) transition .this means that above a given critical point , the system exhibits quasi - long - ranged order ( qlro ) , implying that by increasing the system size , while keeping all intensive parameter fixed , the order parameter ( associated to orientational order ) decreases algebraically . at the mean - field level , _i.e. _ by neglecting fluctuations , the equilibrium model ii displays an isotropic - nematic transition , which can be understood by considering two rods separated ( their center of mass ) a given distance . to simplify the reasoning ,let us neglect the dynamics of the center of mass of the rods and consider them fixed . in this scenario ,it is clear that if the interaction between the rods is modeled by an interaction potential that penalizes the overlapping of the rods ( as in model i ) , by making the orientation of the rods parallel to each other , we minimize the interaction potential ( moreover , particles may not even interact ) . 
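the disk - chain representation of the repulsive potential described above can be sketched as follows ; the number of disks , the rod dimensions and the harmonic stiffness are illustrative choices , the only requirement being that the disk diameter exceeds the disk spacing so that the rod border stays smooth .

```python
import numpy as np

# rod-rod repulsion with each rod of length L and width w represented as a
# straight chain of n disks of diameter sigma = w; every overlapping disk pair
# contributes a harmonic repulsion u(r) = 0.5*k*(sigma - r)^2 for r < sigma.
# the stiffness k and the number of disks are illustrative choices.

def disk_centers(x, theta, L, n):
    n_hat = np.array([np.cos(theta), np.sin(theta)])
    s = np.linspace(-0.5 * (L - L / n), 0.5 * (L - L / n), n)   # spacing L/n
    return x + s[:, None] * n_hat

def pair_force(x1, th1, x2, th2, L=5.0, w=1.0, n=8, k=10.0):
    """total repulsive force on rod 1 due to rod 2 (rod 2 feels minus this)."""
    c1, c2 = disk_centers(x1, th1, L, n), disk_centers(x2, th2, L, n)
    f = np.zeros(2)
    for a in c1:
        d = a - c2                               # vectors from rod-2 disks to disk a
        r = np.linalg.norm(d, axis=1)
        overlap = np.clip(w - r, 0.0, None)
        f += (k * overlap / np.maximum(r, 1e-12)) @ d
    return f

print(pair_force(np.array([0.0, 0.0]), 0.0, np.array([2.0, 0.4]), 0.5))
```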
in addition , since the interaction potential does not distinguish head and tail of the particles , it should be such that we obtain the same by flipping the orientation of one of the particles by .in short , the interaction potential has to be a function of overlapping area of the two rods , which is , as observed by onsager , proportional to , where and refer to the label of the particles we are looking at .since , for simplicity we express the potential directly as : where the sum runs over all particles such that , with defining the interaction range , which we could assume to be , and a constant .this defines a simplified dynamics where the equation of motion of the -th particle is given by : where and where we have simplified further the model by assuming that is an isotropic delta - correlated vectorial noise .it is important to stress that eqs .( [ eq : evol_x_model_ii ] ) and ( [ eq : evol_theta_model_ii ] ) do not correspond ( exactly ) to eqs .( [ eq : evol_x ] ) and ( [ eq : evol_theta ] ) with , but to a simplified version of model i for that shares the same symmetry .the fundamental difference between model i with and model ii is the absence of a term proportional to in eq .( [ eq : evol_x_model_ii ] ) . despite of this difference, we expect that on large scales eqs .( [ eq : evol_x_model_ii ] ) and ( [ eq : evol_theta_model_ii ] ) to display a behavior analogous to that of model i with . we know that a system evolving according to eqs .( [ eq : evol_x_model_ii ] ) and ( [ eq : evol_theta_model_ii ] ) do not exhibit lro , and that the observed transition is a defect - mediated transition with order being qlro as expected for model i with .in addition , and very important for us , at the mean - field level eqs .( [ eq : evol_x_model_ii ] ) and ( [ eq : evol_theta_model_ii ] ) exhibit an isotropic - nematic transition , characterized by the absence of local and global polar order .we will use model ii as a reference model .the message is that in the absence of activity ( meaning for ) and in two - dimesions , we do not expect the system to exhibit lro but rather qlro .furthermore , from model ii we could imagine that if activity induces order at large scales ( _ i.e. _ lro ) such order should be nematic .we will see that this is the case in model iii , which we introduce next , but counterintuitive not for model i. the third and final model we introduce here is an active version of model ii .the equations of motion of the -th particle are given by : where . despite the fact that the interaction potentials in model i and model iii are short - ranged and share the same symmetry, there is a fundamental difference between both models : the equation for in model iii does not possess a term .the absence of this term implies that in model iii there is no volume exclusion , and thus particles are point - like .as it will become clear below , this leads to the absence of active stresses in model iii , which are present in model i. 
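a minimal numerical sketch of model iii is given below . since the precise functional form of the potential is not reproduced in the text , the sketch uses the smooth stand - in u_i = -(\gamma/2 ) \sum_j \cos 2(\theta_i - \theta_j ) , which has the required nematic ( head - tail ) symmetry ; all parameter values are illustrative .

```python
import numpy as np

# model III sketch: point particles moving at speed v0 along their orientation,
# with a short-range *nematic* angular coupling.  the smooth pair potential
# u = -(gamma/2) cos 2(theta_i - theta_j) is an assumed stand-in with head-tail
# symmetry; its torque is -dU/dtheta_i = -gamma * sum_j sin 2(theta_i - theta_j).

L_box, N, v0, gamma, r0, D_theta, dt = 20.0, 200, 0.5, 1.0, 1.0, 0.05, 1e-2
rng = np.random.default_rng(1)
x = rng.uniform(0, L_box, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(1000):
    dx = x[:, None, :] - x[None, :, :]
    dx -= L_box * np.round(dx / L_box)              # periodic boundaries
    neigh = np.linalg.norm(dx, axis=2) < r0
    np.fill_diagonal(neigh, False)
    dth = theta[:, None] - theta[None, :]
    torque = -gamma * np.sum(np.sin(2.0 * dth) * neigh, axis=1)
    theta += dt * torque + np.sqrt(2.0 * D_theta * dt) * rng.normal(size=N)
    x += dt * v0 * np.column_stack((np.cos(theta), np.sin(theta)))
    x %= L_box

S_nem = np.abs(np.mean(np.exp(2j * theta)))         # global nematic order parameter
print("nematic order parameter:", round(S_nem, 3))
```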
due to this difference , which may appear at first glance minor , model i and iii exhibit qualitatively different large - scale properties .another difference between model i and iii is the absence of a noise terms in eq .( [ eq : evol_x_model_iii ] ) , while such terms are present in eq .( [ eq : evol_x ] ) .such noise terms do not have an impact on the ( qualitative ) large - scale properties of the systems .finally , the difference between model ii and iii is given by the so - called active term , , in eq .( [ eq : evol_x_model_iii ] ) , replacing the noise term in eq .( [ eq : evol_x_model_ii ] ) .note that in model iii the equations for and are thus coupled through , while this does not occur in model ii .we will see below that the presence of the active term in model iii has a strong impact on the large - scale properties , which turn to be qualitatively different from those of model ii .we characterize the resulting macroscopic patterns by their level of orientational order through where the averages run over the number of particles and time .polar order corresponds to and nematic order to .the and correspond to global order parameters , _i.e. _ obtained by performing an average over the entire system .local order parameters can be also defined and should be noticed that a system can exhibit no global order , while still displaying local order .the distribution of particles in space as well as information on the interaction structure can be studied by looking at the cluster size distribution ( csd ) .we use the ( weighted steady state ) csd defined as the time average ( after an initial transient ) of the instantaneous csd : where is the number of clusters of mass present in the system at time .notice that the normalization of this distribution is ensured since ( with the number of particles in the system as defined above ) .clusters are collections of interconnected particles , where any two particles are considered as ( directly ) connected if they interact to each other . 
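the diagnostics just defined , i.e. the global polar and nematic order parameters and the weighted csd , can be computed with a short sketch like the following ; the clustering uses the distance - threshold relaxation of the connectivity criterion mentioned next in the text , and the threshold and the random test configuration are illustrative .

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# global polar/nematic order parameters and the weighted cluster-size
# distribution, with clusters defined through a distance threshold r_cl of the
# order of the interaction range (an illustrative value below).

def order_parameters(theta):
    return np.abs(np.mean(np.exp(1j * theta))), np.abs(np.mean(np.exp(2j * theta)))

def cluster_sizes(x, L_box, r_cl):
    d = x[:, None, :] - x[None, :, :]
    d -= L_box * np.round(d / L_box)                  # periodic distances
    adj = csr_matrix(np.linalg.norm(d, axis=2) < r_cl)
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels)                        # size of each cluster

def weighted_csd(sizes, n_total):
    return np.bincount(sizes, weights=sizes) / n_total   # p(m) = m * n_m / N

rng = np.random.default_rng(2)
x = rng.uniform(0, 10.0, size=(300, 2))
theta = rng.uniform(-np.pi, np.pi, size=300)
sizes = cluster_sizes(x, L_box=10.0, r_cl=0.5)
print(order_parameters(theta))
print("largest cluster / N:", sizes.max() / 300,
      " sum p(m) =", weighted_csd(sizes, 300).sum())
```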
for simplicity and to speed up the numerics , the criterion is usually relaxed and particles are considered as connected if they are separated by a given distance which is typically of the order of the interaction radius ( the details vary from model to model ) . the functional form of indicates whether the system is `` homogeneous '' , when the csd displays an exponential tail , or phase - separated , when the csd exhibits a peak at large cluster sizes . particles can also self - organize in a kind of `` critical '' state where clusters of all sizes can be observed . this corresponds to a heavy - tailed csd . a detailed analysis of the csd of these models can be found in . an alternative way to monitor phase separation is by studying the ratio between average cluster size and system size , where , with the advantage of dealing simply with a scalar , . ( a - c ) stationary state typical snapshots at different noise values ( and ) ; the arrows indicate the particles direction of motion ; only a fraction of the total particles are shown for clarity reasons . ( a ) , ( b ) , ( c ) , ( d ) ( for clarity reasons particles are represented only by points ) , ( e ) . roman numerals refer to the four phases observed in finite size systems . figure from . in order to study the thermodynamical behavior of the models , we analyze the scaling of and with the system size , while keeping all intensive parameters fixed . if , with , the system exhibits lro . qlro , on the other hand , corresponds to , where with in two dimensions for the xy model . it is worth recalling that for a fully disordered system we expect , which means that observing an algebraic decay does not necessarily imply qlro . finally , for a homogeneous system , indicating that there is a finite value of with . a departure from this behavior is an indication of either phase separation or the organization of particles in a `` critical '' csd . here , we review results for models i and iii . the large - scale properties of model ii have already been summarized in the model section . as mentioned before , model ii , _ i.e. _ the brownian rod model , is used as a reference model in order to understand the role of the active term . the comparison between models i and iii will allow us to disentangle the role of volume exclusion and nematic alignment . we start in inverse order , that is , we first review results for model iii , since the model is simpler and the results are , somehow , more intuitive . nematic order parameter as a function of the noise amplitude for a density and a linear system size . the figure illustrates the four different regimes observed for a given density and system size . the numerical results that we are going to review here correspond to an implementation of model iii _ la vicsek _ , _ i.e. _ using a numerical scheme as in . it is important to stress that direct numerical integration of the continuum - time eqs . ( [ eq : evol_x_model_iii ] ) and ( [ eq : evol_theta_model_iii ] ) leads to the same qualitative behavior as the one reported in ( data not shown ) . results were generated by using the following specific scheme ( the actual implementation used in differs slightly in one detail ) : \begin{aligned } \theta_i^{t+1 } & = \arg \left [ \sum_{j \sim i } \mathrm{sign}\left ( \cos ( \theta_j^t - \theta_i^t ) \right ) \mathbf{v}(\theta_j^t ) \right ] + \gamma \psi_{i}^{t } \ , , \\ \mathbf{x}_i^{t+1 } & = \mathbf{x}_i^t + v_0 \ , \mathbf{v}(\theta_i^{t+1 } ) \ , , \end{aligned } where the sum runs over the neighbors of particle , \mathbf{v}(\theta ) = ( \cos\theta , \sin\theta ) , and is a zero - mean , delta - correlated angular noise . the term is the noise amplitude . simulations were performed with and particle density . a fluctuating ordered phase exists at low noise ( or high density , if is used as control parameter ) , and an order / disorder transition line lies in the main parameter plane . varying , we observe in a square domain of linear size , as shown in fig . [ fig:2 ] and [ fig:3 ] , that only _ nematic _ orientational order arises . despite the polar nature of the particles , the polar order parameter remains near zero for all noise strengths ( fig . [ fig:3 ] ) . both the ordered and the disordered phases are divided in two by the spontaneous segregation , at intermediate values , of the system into high - density , ordered regions and sparse , disordered ones ( fig . [ fig:2]b - d ) . a total of four phases thus seems to exist , labeled i to iv by increasing noise strength hereafter . phases i and ii are nematically ordered , phases iii and iv are disordered . phase i , present at the lowest values , is ordered and spatially homogeneous ( fig . [ fig:2]a ) . its nematic order , which arises quickly from any initial condition , is the superposition , at any time , of two polarly aligned opposite subpopulations of statistically equal size ( fig . [ fig:4]a ) . these subpopulations constantly exchange particles , those which `` turn around '' , an event that occurs on exponentially - distributed times ( fig . [ fig:4]b ) . therefore it is natural to define a typical persistence time and its corresponding persistence length . they are found to decrease as the noise amplitude is increased . while the breaking of a continuous symmetry in a two - dimensional equilibrium system can only lead to quasi - long - range order ( qlro ) , as discussed for model ii , in model iii the numerical evidence points towards lro . let us recall that what has been called `` active nematics '' , which is equivalent to model ii where we explicitly require the use of a local diffusion tensor depending on , displays qlro . here the _ nematic _ order parameter , for the tested system sizes , decays slower than a power law . a good fit of this decay is given by an algebraic approach to a constant asymptotic value ( fig . [ fig:4]c ) . this phase is characterized by the presence of giant number fluctuations , fig . [ fig:4]d ( for details see ) . in short , the numerical data suggest the existence of true long - range nematic order . ordered homogeneous phase ( , , ) . ( a ) moving direction ( given by ) distribution in a system of linear system size . ( b ) distribution of particles transition times between the two peaks of panel ( a ) , i.e. particles which `` turn around '' their orientation by . ( c ) nematic order parameter as a function of system size . the vertical red dashed line marks the persistence length . in the inset : a constant value has been subtracted from the nematic order parameter to highlight the power law decay to a nonzero constant . the red dashed line decays as . ( d ) number fluctuations as a function of average particle number in a system with . the dashed line marks the power law growth . figure taken from . phase ii differs from phase i by the presence , in the steady - state , of a low - density disordered region . in large - enough systems , a narrow , low - density channel emerges ( fig .
[ fig:2]b ) when increasing .it becomes wider and wider at larger values , so that one can then speak of a high - density nematically ordered band amidst a disordered sea ( fig . [ fig:2]c ) .inside the band , nematic order similar to phase i is found , with particles traveling in roughly equal number along each direction , and turning around or leaving the band at exponentially - distributed times , albeit with a shorter typical persistence time .giant number fluctuations similar to those reported in fig .[ fig:4]d occur .in rectangular domains , the band is typically oriented along the small dimension of the box . bands along the longer dimension can be artificially created , but in sufficiently long and narrow boxes they become wavy under what looks like a finite - wavelength instability and are eventually destroyed , leaving a thicker band along the small dimension . for large - enough ( square ) domains, the band possesses a well - defined profile with sharper and sharper edges as increases . in phase iii , spontaneous segregation into bandsstill occurs ( for large - enough domains ) , but these now thin structures constantly bend and elongate , getting thinner and vanishing , or merging with others .they never reach a static steady state shape .correspondingly , the nematic order parameter fluctuates on very large time scales and decrease like providing a clear indication that the order in these thin highly dynamical bands self - averages , making phase iii a disordered phase albeit one with huge correlation lengths and times .finally , phase iv , observed for the highest noise strengths , is disordered on small length- and time - scales , without any particular emerging structure ( fig .[ fig:2]e ) . here , we study the long - scale properties of model i by direct integration of eqs .( [ eq : evol_x ] ) and ( [ eq : evol_theta ] ) .we use as control parameter the aspect ratio , but we could use instead the packing fraction or the noise amplitude .it is important to clarify that when we vary the aspect ratio , we keep fixed the particle area . for a fixed system size ,we observe , , and take off above a critical value as shown in fig .[ fig : pt ] .we notice that decreases when is increased . below ,we observe a gas phase characterized by the absence of orientational order and an exponential cluster size distribution such that .for the system undergoes a symmetry breaking as observed previously in .the emerging order is , however , polar as evidenced by the behavior of .we recall that in the presence of polar order , is slaved to . the behavior of , right panel in fig .[ fig : pt ] , indicates that the system starts to spontaneously self - segregate for .here , we find that the onsets of orientational order and phase separation coincide and share the same critical point , as predicted using a simple kinetic model for the clustering process : due to the effective velocity alignment large polar clusters emerge , which in turn lead to macroscopic polar order . notice that in an equilibrium system of ( hard ) rods ( i.e. ) for and , according to de las heras et al . , we should observe only an homogeneous disordered phase for this range of parameters .this indicates that the observed phase transition requires , i.e. 
the active motion of the rods .we performed a finite size study , by increasing simultaneously and while keeping the packing fraction and all other parameters constant .[ fig : fss ] shows the scaling of the ( global ) polar order parameter and average cluster size with respect to system size for aspect ratio and several packing fractions . at low values , i.e. for , we are in the situation ( we recall that the critical value depends on ) . for system is not phase - separated and does not exhibit orientational order . as expected , the scaling of with shows that , with , which means that the system is fully disordered .in addition , we observe that , with , which indicates that there is a well - defined characteristic cluster size for the system ( that is independent of ) and consequently the system is not phase - separated . at large values ,i.e. for and , we observe phase separation and ( global ) polar order for small finite systems ( ) .the finite size study shows that , for , does not decrease ( asymptotically ) with .moreover , for large values of , we even observe an increase .this indicates that is at least proportional to , which implies that the system is phase - separated in the thermodynamical limit as well as in finite systems . at the level of the orientational order parameter , we observe an abrupt change in scaling of with .for ( e.g. , ) for and , is high _ i.e. _ the system displays global polar order . on the other hand , for ,while the system remains phase - separated , sharply decreases with . in short, the finite size study reveals that , although phase separation does take place in the thermodynamical limit as well as in finite systems , the phase transition to an orientationally ordered phase is observed only for small finite systems .global order patterns are not present in the thermodynamical limit , with the phase transition occurring , in this limit , between a disordered gas and a phase - separated state with no global orientational order .the reason for observing non - vanishing global order in small systems is the presence of few giant polar clusters as illustrated by the simulation snapshot in fig .[ fig : pt ] .such giant polar clusters can become so big and elongated that they can can even percolate the system , as shown in panel i ) of fig .[ fig : band ] .we refer to such polar percolating structures as bands . inside bands ,rods are densely packed , point into the same direction , and exhibit positional order .notice that these bands are distinct from the bands observed in the vicsek model , which are elongated in the direction orthogonal to the moving direction of the particles .more importantly , the observed polar bands are also fundamentally different from those observed in model iii , where we saw that the point - like self - propelled particles form nematic bands , inside which 50% of the particles move in one direction and 50% in the opposite one . more importantly , our finite size study indicates that the polar patterns observed in abr are a finite size effect that disappear for large enough systems . in short , several of the phases reported for inprevious abr works such as the so - called swarming phase and the bio - turbulence phase vanish in the thermodynamical limit . 
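the classification into lro , qlro and full disorder used in this finite - size study amounts to estimating how the global order parameter decays with the linear system size . a minimal sketch of such an estimate on synthetic data is given below ( the numbers and thresholds are illustrative only , not values taken from the simulations discussed here ) ; note that , as discussed for model iii , an algebraic approach to a nonzero constant can mimic a power law , so a fit of this kind must be complemented by inspecting the asymptotic value itself .

```python
import numpy as np


def decay_exponent(L, P):
    """least-squares estimate of zeta in P(L) ~ L**(-zeta) on a log-log scale."""
    zeta, _ = np.polyfit(np.log(L), -np.log(P), 1)
    return zeta


# synthetic example: hypothetical system sizes and order parameters
L = np.array([32, 64, 128, 256, 512])
P = 0.6 * L ** -0.05            # slow algebraic decay, a qlro-like scenario

zeta = decay_exponent(L, P)
if zeta < 0.01:                 # illustrative thresholds, not taken from the text
    label = "long-range order (P saturates at a nonzero value)"
elif zeta < 1.0:
    label = "algebraic decay: qlro or a slow approach to a constant"
else:
    label = "fully disordered (P ~ 1/sqrt(N) ~ L**-1 in two dimensions)"
print(f"zeta = {zeta:.3f} -> {label}")
```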
the abrupt change in scaling of with in fig .[ fig : fss ] suggests that above the crossover system size the polar structures are no longer stable .arguably , the decay in with is due to the fact that rods inside polar clusters are densely packed and hold fixed positions , not being able to exchange neighbors in contrast to other active systems .[ fig : band ] shows that , _i.e. _ the total time the system spends in the aggregate phase with respect to the total simulation time .we observe that increases with , in such a way that .this means that the probability of observing the system in an aggregate phase also increases with . for small system sizes we observe moving clusters and bands .large polar structures such as bands form , remain in the system for quite some time , and then quickly break and reform , typically adopting a new orientation . as ,bands survive for relatively short periods of time , and quickly bend and break . interestingly , at such large system sizes other macroscopic structures start to frequently emerge . these new macroscopic structures which we refer to as _ aggregates _ are formed by polar clusters of rods that exert stresses on each other and exhibit vanishing polar order , see panel ii ) of fig . [fig : band ] . in summary , for , the system continuously transitions between highly ordered phases e.g. phases with either a few giant polar clusters or a band and aggregates , as illustrated in fig .[ fig : band ] . for system sizelarger than bands and polar phases disappear in the thermodynamical limit , while the aggregate phase survives .the transitions between aggregates and bands ( or highly ordered phases ) for results from the competition between elastic energy and the impossibility of the system to sustain long - range polar order . for not too large system sizes , _i.e. _ for , the shape of the aggregates is roughly circular ( fig .[ fig : band ] , panel ii ) ) and at the center of the aggregate we find one single topological defect : i.e. at the mesoscale , at the center of the aggregate we can not define an average orientation for the rods . due to the active forces , at the center of the aggregate rods are strongly compressed , which implies that the potentials adopts high values .this implies that when one of these aggregates is formed , the total elastic energy of the system increases . on the contrary , in large polar structures such as bands , rods are roughly parallel to each other and therefore are much less compressed by their neighbors , andthe total elastic energy is low .the dynamics at can be summarized as follows .large polar clusters form and eventually a band emerges , but since the system is too big for the band to remain stable , at some point the band breaks .the collapse of the band gives rise to the formation of new giant polar clusters which eventually collide head on leading to a large aggregate : a process reminiscent of a traffic jam .the formation of the aggregate leads to a sharp increase of the total elastic energy .let us recall that forces and torques act in such a way that they tend to minimize . in short , the system relaxes by destroying the new formed aggregate , which give rise to the formation of new polar clusters and the cycle starts again . in larger system sizes ,i.e. for , aggregates are more complex . inside aggregates ,the competition between active forces and local polar alignment leads to new physics .this is particularly evident for very large system sizes , i.e. 
for , that is when aggregates are big enough to exhibit multiple topological defects of the local orientation of the rods .let us recall that aggregates are formed by polar clusters of rods that are trapped inside these structures .topological defects are areas where , at the mesoscale , as mentioned above , we can not define an average orientation of the rods as illustrated in inset of fig .[ fig : aggregate ] ( areas where the arrows meet ) . in such areas , due to the active forces, rods are strongly compressed by the active push of all surrounding abr ( see inset in fig . [fig : aggregate ] ) .since the compression is due to the presence of active forces , we refer to this phenomenon as _active stresses_. for , we observe the emergence of multiple defects that lead to an increase of the elastic energy and the build - up of stresses .notice that more topological defects imply larger values of the elastic energy .there are two clear consequence of the presence of multiple topological defects . on the one hand , aggregates are no longer roundish but rather irregular as illustrated in fig .[ fig : aggregate ] . on the other hand ,now the system can relax the elastic energy by reducing the number of topological defects .notice that for , aggregates are relatively small and exhibit one topological defect , and thus , the only way to eliminate the topological defects is by destroying the aggregate . for ,given the presence of multiple topological defects , eliminating one topological defect does not require to eliminate the aggregate . as a matter of fact ,for very large system sizes , the interplay between topological defects and active stresses lead to large fluctuations of the aggregate boundary and aggregate mass ( i.e. aggregate size ) as indicated in fig .[ fig : aggregate ] .the most distinctive feature of the observed phenomenon is the large fluctuations experienced by the aggregate mass correspond to ejections of remarkably large macroscopic polar clusters from the aggregate , that can be as large as 10% of the system size ( i.e. involving more than rods ) , fig .[ fig : aggregate ] . by this process ,i.e. the ejection of large polar clusters , the aggregate manages to decrease its elastic energy .the ejected polar clusters typically dissolve while moving through the gas phase outside the aggregate , leading to a sudden increase of the gas density , top panel in fig .[ fig : aggregate ] .this results in a higher absorption rate of abr by the aggregate that starts again to increase its mass .the comparison of models i , ii , and iii is insightful .model ii and iii share the same interaction potential , which we refer here to as alignment mechanism : eq .( [ eq : evol_theta_model_ii ] ) is identical to ( [ eq : evol_theta_model_iii ] ) . despite of this , on large scales , model ii and model iii display qualitatively different features . while model ii exhibits for low noise values qlro , as expected for an equilibrium system with continuum symmetry in two - dimensions , model iii at low noise displays nematic lro .the active term in eq .( [ eq : evol_theta_model_iii ] ) , , makes possible the emergence of stable nematic bands and an homogeneous nematically ordered phase ( phase i ) . in summary , in model ii we observe local nematic order and qlro at large scales , while in model iii we find again local nematic order and lro at large scales .importantly , at the meso and macroscopic sale model ii and iii do not exhibit polar order . 
in model i , on the other hand , nematic order is never observed ( at least up to packing fractions ) . the difference between model i and model iii is due to the presence of the repulsive term in the equation for . we have learned that the addition of volume - exclusion interactions in the evolution of the particle positions has a dramatic effect on the large - scale properties of the system . at small system sizes and for large enough aspect ratios ( or , equivalently , packing fractions ) particles self - organize into large moving polar clusters and the system displays global polar order . moreover , the observed bands are polar instead of being nematic like the ones observed in model iii . however , the performed finite - size study reveals that polar order ( or any orientational order ) vanishes in the thermodynamical limit ( at least up to packing fractions ) . thus , from model i we learn that volume - exclusion effects induce local polar order and the absence of orientational order at large scales . despite the lack of orientational order , model i undergoes a genuine phase separation . the phase - separation process , as well as the phase - separated state , are qualitatively different from the ones observed in model iii , where the system spontaneously phase - separates in the form of nematically ordered , high - density bands ( referred to above as phases ii and iii ) . in model i , phase separation starts with the formation of giant polar clusters that jam to form what we have called aggregates . the most distinctive feature of these aggregates is the constant ejection from the aggregate of thousands of particles in the form of densely packed and polarly ordered clusters , which leads to large fluctuations of the aggregate size and its boundary . this occurs due to the combined effect of an effective alignment mechanism ( also present in model iii ) and the active pushing ( or stresses ) acting among the particles , which requires not only an active force but also a term in the evolution of the position of the particles . the combination of these elements ( active stresses and alignment ) is not present in other active systems , making the physics of active brownian rods unique . for instance , self - propelled disks ( spd ) exhibit active stresses but no alignment among the spd . thus , phase separation in spd resembles a classical coarsening process which can be described by an effective cahn - hilliard equation . finally , it is important to stress that the phase separation in model i is different from the one observed in active systems with an alignment mechanism and a density - dependent speed . there , while alignment among particles is present , there are no active stresses . in summary , we have characterized the large - scale properties of abr for packing fractions smaller than or equal to . we have shown that large - scale ( orientational ) order patterns can not exist in the thermodynamic limit for ( physical ) abr ( _ i.e. _ in model i ) due to the combined effect of active stresses and alignment , which leads to exciting new physics . in particular , we have seen that the interplay of these two elements ( active stresses and alignment ) gives rise to a novel , highly fluctuating phase - separated state characterized by the ejection of giant polar clusters . we point out that the presented evidence can not be used to preclude the emergence of orientational order at higher packing fractions . let us recall that even for brownian rods we expect , for large enough packing fractions , some kind of isotropic - nematic transition .
in simulations with abr at high packing fractions a transition to laning has been reported . the observed laning phase may remain even in the thermodynamic limit , though its existence should be confirmed by a careful finite - size study . finally , the model presented in for penetrable abr suggested a possible crossover between the large - scale properties exhibited by models i and iii . while this scenario can not be excluded _ a priori _ in , a finite - size study may reveal that some of the reported phases vanish in the thermodynamical limit .
lecture notes for the summer school `` microswimmers from single particle motion to collective behaviour '' at forschungszentrum jlich , 2015 .
in the last decade we entered the data - intensive era of astrophysics , where the size of data has rapidly increased , reaching in many cases dimensions overcoming the human possibility to handle them in an efficient and comprehensible way . in a very close future petabytes of datawill be the standard and , to deal with such amount of information , also the data analysis techniques and facilities must quickly evolve .for example the current exploration of petabyte - scale , multi - disciplinary astronomy and earth observation synergy , by taking the advantage from their similarities in data analytics , has issued the urgency to find and develop common strategies able to achieve solutions in the data mining algorithms , computer technologies , large scale distributed database management systems as well as parallel processing frameworks .astrophysics is one of the most involved research fields facing with this data explosion , where the data volumes from the ongoing and next generation multi - band and multi - epoch surveys are expected to be so huge that the ability of the astronomers to analyze , cross - correlate and extract knowledge from such data will represent a challenge for scientists and computer engineers . to quote just a few ,the esa euclid space mission will acquire and process about 100 gbday over at least 6 years , collecting a minimum amount of about tb of data ; pan - starrs is expected to produce more than tb of data ; the gaia space mission will build a map of the milky way galaxy , by collecting about one petabyte of data in five years ; the large synoptic survey telescope ( ) will provide about / night of imaging data for ten years and petabytes / year of radio data products .many other planned instruments and already operative surveys will reach a huge scale during their operational lifetime , such as kids ( kilo - degree survey ; ) , des ( dark energy survey , ) , herschel - atlas , hi - gal , ska and e - elt .the growth and heterogeneity of data availability induce challenges on cross - correlation algorithms and methods .most of the interesting research fields are in fact based on the capability and efficiency to cross - correlate information among different surveys .this poses the consequent problem of transferring large volumes of data from / to data centers , _ de facto _ making almost inoperable any cross - reference analysis , unless to change the perspective , by moving software to the data .furthermore , observed data coming from different surveys , even if referred to a same sky region , are often archived and reduced by different systems and technologies .this implies that the resulting catalogs , containing billions of sources , may have very different formats , naming schemas , data structures and resolution , making the data analysis to be a not trivial challenge .some past attempts have been explored to propose standard solutions to introduce the uniformity of astronomical data quantities description , such as in the case of the uniform content descriptors of the virtual observatory .one of the most common techniques used in astrophysics and fundamental prerequisite for combining multi - band data , particularly sensible to the growing of the data sets dimensions , is the cross - match among heterogeneous catalogs , which consists in identifying and comparing sources belonging to different observations , performed at different wavelengths or under different conditions .this makes cross - matching one of the core steps of any standard modern pipeline of 
data reduction / analysis and one of the central components of the virtual observatory .the massive multi - band and multi - epoch information , foreseen to be available from the on - going and future surveys , will require efficient techniques and software solutions to be directly integrated into the reduction pipelines , making possible to cross - correlate in real time a large variety of parameters for billions of sky objects .important astrophysical questions , such as the evolution of star forming regions , the galaxy formation , the distribution of dark matter and the nature of dark energy , could be addressed by monitoring and correlating fluxes at different wavelengths , morphological and structural parameters at different epochs , as well as by opportunely determining their cosmological distances and by identifying and classifying peculiar objects . in such context ,an efficient , reliable and flexible cross - matching mechanism plays a crucial role . in this work we present ( _ command - line catalog cross - match tool and the user guide are available at the page http://dame.dsf.unina.it/c3.html.]_ , ) , a tool to perform efficient catalog cross - matching , based on the multi - thread paradigm , which can be easily integrated into an automatic data analysis pipeline and scientifically validated on some real case examples taken from public astronomical data archives .furthermore , one of major features of this tool is the possibility to choose shape , orientation and size of the cross - matching area , respectively , between elliptical and rectangular , clockwise and counterclockwise , fixed and parametric .this makes the tool easily tailored on the specific user needs .the paper is structured as follows : after a preliminary introduction , in sec . [sec : techniques ] we perform a summary of main available techniques ; in sec .[ sect : c3design ] , the design and architecture of the tool is described ; in sections [ sect : config ] and [ sect : optimization ] , the procedure to correctly use is illustrated with particular reference to the optimization of its parameters ; some tests performed in order to evaluate performance are shown in sec .[ sect : performances ] ; finally , conclusions and future improvements are drawn in sec .[ sect : conclusion ] .cross - match can be used to find detections surrounding a given source or to perform one - to - one matches in order to combine physical properties or to study the temporal evolution of a set of sources .the primary criterion for cross - matching is the approximate coincidence of celestial coordinates ( positional cross - match ) .there are also other kinds of approach , which make use of the positional mechanism supplemented by statistical analysis used to select best candidates , like the bayesian statistics . in the positional cross - match , the only attributes under consideration are the spatial information .this kind of match is of fundamental importance in astronomy , due to the fact that the same object may have different coordinates in various catalogs , for several reasons : measurement errors , instrument sensitivities , calibration , physical constraints , etc . 
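the naive version of this positional criterion , discussed in the next paragraph , simply compares every source of one catalog with every source of the other and keeps the pairs closer than a chosen tolerance . a minimal sketch is given below ( plain numpy , catalogs as ( n , 2 ) arrays of ra / dec in degrees ; function names are ours and no sky partitioning is used , so the cost is o(n1 * n2) ) .

```python
import numpy as np


def angular_separation(ra1, dec1, ra2, dec2):
    """great-circle separation (degrees) between sky positions given in degrees (haversine)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    sin_half = np.sqrt(np.sin((dec2 - dec1) / 2) ** 2
                       + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2) ** 2)
    return np.degrees(2 * np.arcsin(sin_half))


def naive_positional_match(cat1, cat2, radius_arcsec):
    """all pairs (i, j) whose separation is below radius_arcsec; illustration only."""
    pairs = []
    for i, (ra, dec) in enumerate(cat1):
        sep = angular_separation(ra, dec, cat2[:, 0], cat2[:, 1])
        for j in np.where(sep * 3600.0 < radius_arcsec)[0]:
            pairs.append((i, j, sep[j] * 3600.0))
    return pairs
```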
in principle , at the base of any kind of catalog cross - match , each source of a first catalog should be compared with all counterparts contained in a second catalog .this procedure , if performed in the naive way , is extremely time consuming , due to the huge amount of sources .therefore different solutions to this problem have been proposed , taking advantage of the progress in computer science in the field of multi - processing and high performing techniques of sky partitioning .two different strategies to implement cross - matching tools basically exist : web and stand - alone applications .web applications , like openskyquery , or cds - xmatch , offer a portal to the astronomers , allowing to cross - match large astronomical data sets , either mirrored from worldwide distributed data centers or directly uploadable from the user local machine , through an intuitive user interface .the end - user has not the need to know how the data are treated , delegating all the computational choices to the backend software , in particular for what is concerning the data handling for the concurrent parallelization mechanism .other web applications , like arches , provide dedicated script languages which , on one hand , allow to perform complex cross - correlations while controlling the full process but , on the other hand , make experiment settings quite hard for an astronomer .basically , main limitation of a web - based approach is the impossibility to directly use the cross - matching tool in an automatic pipeline of data reduction / analysis . in other words , with such a tool the user can not design and implement a complete automatic procedure to deal with data .moreover , the management of concurrent jobs and the number of simultaneous users can limit the scalability of the tool . for example, a registered user of cds - xmatch has only mb disk space available to store his own data ( reduced to mb for unregistered users ) and all jobs are aborted if the computation time exceeds 100 minutes .finally , the choice of parameters and/or functional cases is often limited in order to guarantee a basic use by the end - users through short web forms ( for instance , in cds - xmatch only equatorial coordinate system is allowed ) .stand - alone applications are generally command - line tools that can be run on the end - user machine as well as on a distributed computing environment .a stand - alone application generally makes use of apis ( application programming interfaces ) , a set of routines , protocols and tools integrated in the code .there are several examples of available apis , implementing astronomical facilities , such as stil , and astroml , that can be integrated by an astronomer within its own source code .however , this requires the astronomer to be aware of strong programming skills .moreover , when the tools are executed on any local machine , it is evident that such applications may be not able to exploit the power of distributed computing , limiting the performance and requiring the storage of the catalogs on the hosting machine , besides the problem of platform dependency . 
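as an aside , embedding a stand - alone command - line matcher in a larger reduction pipeline typically reduces to a simple system call , a point developed in the next paragraph . a minimal python sketch is shown below ; the executable name , options and file names are hypothetical placeholders and do not reproduce the actual interface of any of the tools mentioned in this section .

```python
import subprocess

# hypothetical command line: a stand-alone cross-matcher driven by a config file.
# the executable name and options below are placeholders, not a real tool's interface.
cmd = ["crossmatch_tool", "--config", "experiment.ini", "--output", "matched.fits"]

result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    raise RuntimeError(f"cross-match step failed:\n{result.stderr}")

# downstream steps of the reduction/analysis pipeline can then consume matched.fits
```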
on the contrary ,a ready - to - use stand - alone tool , already conceived and implemented to embed the use of apis in the best way , will result an off - the - shelf product that the end - user has only to run .a local command - line tool can be put in a pipeline through easy system calls , thus giving the possibility to the end - user to create a custom data analysis / reduction procedure without writing or modifying any source code . moreover , being an all - in - one package , i.e including all the required libraries and routines , a stand - alone application can be easily used in a distributed computing environment , by simply uploading the code and the data on the working nodes of the available computing infrastructure .one of the most used stand - alone tools is stilts ( stil tool set , ) .it is not only a cross - matching software , but also a set of command - line tools based on the stil libraries , to process tabular data .it is written in pure java ( almost platform independent ) and contains a large number of facilities for table analysis , so being a very powerful instrument for the astronomers .on one hand , the general - purpose nature of stilts has the drawback to make hard the syntax for the composition of the command line ; on the other hand , it does not support the full range of cross - matching options provided by . in order to provide a more user - friendly tool to the astronomers ,it is also available its graphical counterpart , tool for operations on catalogs and tables ( topcat , ) , an interactive graphical viewer and editor for tabular data , based on stil apis and implementing the stilts functionalities , but with all the intrinsic limitations of the graphical tools , very similar to the web applications in terms of use .regardless the approach to cross - match the astronomical sources , the main problem is to minimize the computational time exploding with the increasing of the matching catalog size . in principle , the code can be designed according to multi - process and/or multi - thread paradigm , so exploiting the hosting machine features .for instance , evaluated to use a multi - gpu environment , designing and developing their own xmatch tool , .other studies are focused to efficiently cross - match large astronomical catalogs on clusters consisting of heterogeneous processors including both multi - core cpus and gpus , ( , ) .furthermore , it is possible to reduce the number of sources to be compared among catalogs , by opportunely partitioning the sky through indexing functions and determining only a specific area to be analyzed for each source .cds - xmatch and the tool described in use hierarchical equal area isolatitude pixelisation ( healpix , ) , to create such sky partition . , instead , proposed a combined method to speed up the cross - match by using htm ( hierarchical triangle mesh , ) , in combination with healpix and by submitting the analysis to a pool of threads .healpix is a genuinely curvilinear partition of the sphere into exactly equal area quadrilaterals of varying shape ( see fig . 
3 in )the base - resolution comprises twelve pixels in three rings around the poles and equator .each pixel is partitioned into four smaller quadrilaterals in the next level .the strategy of htm is the same of healpix .the difference between the two spatial - indexing functions is that htm partitioning is based on triangles , starting with eight triangles , on the northern and on the southern hemisphere , each one partitioned into four smaller triangles at the next level ( see also fig . 2 in ) .by using one or both functions combined together , it is possible to reduce the number of comparisons among objects to ones lying in adjacent areas .finally openskyquery uses the _ zones _ indexing algorithm to efficiently support spatial queries on the sphere , . the basic idea behind the _ zones _method is to map the sphere into stripes of a certain height , called zones .each object with coordinates ( , ) is assigned to a zone by using the formula : a traditional b - tree index is then used to store objects within a zone , ordered by _ zoneid _ and right ascension . in this way, the spatial cross - matching can be performed by using bounding boxes ( b - tree ranges ) dynamically computed , thus reducing the number of comparisons ( fig . 1 in ) .finally , an additional and expensive test allows to discard false positives .all the cross - matching algorithms based on a sky partitioning have to deal with the so - called block - edge problem , illustrated in fig .[ fig : block - edge ] : the objects and in different catalogs correspond to the same object but , falling in different pieces of the sky partition , the cross - matching algorithm is not able to identify the match . to solve this issue , it is necessary to add further steps to the pipeline , inevitably increasing the computational time .for example , the zhao s tool , , expands a healpix block with an opportunely dimensioned border ; instead , the algorithm described by , combining healpix and htm virtual indexing function shapes , is able to reduce the block - edge problem , because the lost objects in a partition may be different from one to another . and in two catalogs . even if corresponding to the same source , they can be discarded by the algorithm , since they belong to two different blocks of the sky partition.,title="fig:",width=226 ] + is a command - line open - source python script , designed and developed to perform a wide range of cross - matching types among astrophysical catalogs .the tool is able to be easily executed as a stand - alone process or integrated within any generic data reduction / analysis pipeline .based on a specialized sky partitioning function , its high - performance capability is ensured by making use of the multi - core parallel processing paradigm . it is designed to deal with massive catalogs in different formats , with the maximum flexibility given to the end - user , in terms of catalog parameters , file formats , coordinates and cross - matching functions . in different functional cases and matching criteriahave been implemented , as well as the most used join function types .it also works with the most common catalog formats , with or without header : flexible image transport system ( fits , version tabular ) , american standard code for information interchange ( ascii , ordinary text , i.e. 
space separated values ) , comma separated values ( csv ) , virtual observatory table ( votable , xml based ) and with two kinds of coordinate system , equatorial and galactic , by using stilts in combination with some standard python libraries , namely _ _ numpy _ _ , and __ pyfits _ _ ] .+ despite the general purpose of the tool , reflected in a variety of possible functional cases , is easy to use and to configure through few lines in a single configuration file .main features of are the following : 1. _ command line _ : is a command - line tool. it can be used as stand - alone process or integrated within more complex pipelines ; 2 . _ python compatibility _ : compatible with python 2.7.x and 3.4.x ( up to the latest version currently available , ) ; 3 . _multi - platform _ : has been tested on ubuntu linux , windows and , mac os and fedora ; 4 . _ multi - process _ :the cross - matching process has been developed to run by using a multi - core parallel processing paradigm ; 5 . _ user - friendliness _ : the tool is very simple to configure and to use ; it requires only a configuration file , described in sec .[ sect : config ] .the internal cross - matching mechanism is based on the sky partitioning into cells , whose dimensions are determined by the parameters used to match the catalogs . the sky partitioning procedure is described in [ sect : preproc ] .the fig .[ fig : flowchart ] shows the most relevant features of the processing flow and the user parameters available at each stage . as mentioned before, the user can run to match two input catalogs by choosing among three different functional cases : 1 ._ sky _ : the cross - match is done within sky areas ( elliptical or rectangular ) defined by the celestial coordinates taken from catalog parameters ; 2 ._ exact value _ : two objects are matched if they have the same value for a pair of columns ( one for each catalog ) defined by the user ; 3 . _ row - by - row _ : match done on a same row - id of the two catalogs .the only requirement here is that the input catalogs must have the same number of records .the positional cross - match strategy of the method is based on the same concept of the q - fulltree approach , an our tool introduced in and : for each object of the first input catalog , it is possible to define an elliptical , circular or rectangular region centered on its coordinates , whose dimensions are limited by a fixed value or defined by specific catalog parameters .for instance , the two full width at half maximum ( fwhm ) values in the catalog can define the two semi - axes of an ellipse or the couple width and height of a rectangular region .it is also possible to have a circular region , by defining an elliptical area having equal dimensions .once defined the region of interest , the next step is to search for sources of the second catalog within such region , by comparing their distance from the central object and the limits of the area ( for instance , in the elliptical cross - match the limits are defined by the analytical equation of the ellipse ) .+ in the _ sky _ functional case , the user can set additional parameters in order to characterize the matching region and the properties of the input catalogs .in particular , the user may define : 1 .the shape ( elliptical or rectangular ) of the matching area , i.e. 
the region , centered on one of the matching sources , in which to search the objects of the second catalog ; 2 .the dimensions of the searching area .they can be defined by fixed values ( in arcseconds ) or by parametric values coming from the catalog .moreover , the region can be rotated by a position angle ( defined as fixed value or by a specific column present in the catalog ) ; 3 .the coordinate system for each catalog ( galactic , icrs , fk4 , fk5 ) and its units ( degrees , radians , sexagesimal ) , as well as the columns containing information about position and designation of the sources .an example of graphical representation of an elliptical cross - match is shown in fig .[ fig : crossmatch ] . in the _ exact value _ case , the user has to define only which columns ( one for each input catalog ) have to be matched , while in the most simple _ row - by - row _ case no particular configuration is needed . produces a file containing the results of the cross - match , consisting into a series of rows , corresponding to the matching objects . in the case of _ exact value _ and_ sky _ options , the user can define the conditions to be satisfied by the matched rows to be stored in the output .first , it is possible to retrieve , for each source , all the matches or only the best pairs ( in the sense of closest objects , according to the match selection criterion ) ; then , the user can choose different join possibilities ( in fig . [fig : joins ] the graphical representation of available joins is shown ) : and : : only rows having an entry in both input catalogs , ( fig .[ fig : joins]a ) ; or : : all rows , matched and unmatched , from both input catalogs , ( fig .[ fig : joins]b ) ; all from ( all from ) : : all matched rows from catalog ( or ) , together with the unmatched rows from catalog ( or ) , ( fig .[ fig : joins]c - d ) ; not ( not ) : : all the rows of catalog ( or ) without matches in the catalog ( or ) , ( fig .[ fig : joins]e - f ) ; xor : : the `` exclusive or '' of the match - i.e. only rows from the catalog not having matches in the catalog and viceversa , ( fig .[ fig : joins]g ) .+ + any experiment with the tool is based on two main phases ( see fig .[ fig : flowchart ] ) : 1 ._ pre - matching : _ this is the first task performed by during execution . the tool manipulates input catalogs to extract the required information and prepare them to the further analysis ; 2 . _matching : _after data preparation , performs the matching according to the criteria defined in the configuration file .finally , the results are stored in a file , according to the match criterion described in sec .[ sect : join ] , and all the temporary data are automatically deleted .this is the preliminary task performed by execution . during the pre - matching phase, performs a series of preparatory manipulations on input data .first of all , a validity check of the configuration parameters and input files .then it is necessary to split the data sets in order to parallelize the matching phase and improve the performance . in the _ exact value _functional case only the first input catalog will be split , while in the _ sky _ case both data sets will be partitioned in subsets . 
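the join possibilities listed above map onto familiar relational joins . the toy pandas sketch below illustrates them once the matched pairs are known ; the catalogs , identifiers and matched pairs are invented for the example and do not correspond to the tool's internal data structures .

```python
import pandas as pd

# toy catalogs; "matches" holds id pairs found by the positional match (illustrative only)
a = pd.DataFrame({"id_a": ["a1", "a2", "a3"], "flux_a": [1.2, 3.4, 5.6]})
b = pd.DataFrame({"id_b": ["b1", "b2", "b3"], "flux_b": [0.9, 2.1, 7.7]})
matches = pd.DataFrame({"id_a": ["a1", "a3"], "id_b": ["b2", "b3"]})

m = (a.merge(matches, on="id_a", how="outer")
      .merge(b, on="id_b", how="outer", indicator=True))

join_and   = m[m["_merge"] == "both"]          # "and": entries present in both catalogs
join_or    = m                                 # "or": every row, matched or not
all_from_a = m[m["_merge"] != "right_only"]    # "all from a": all of a plus its matches
not_a      = m[m["_merge"] == "left_only"]     # "not a": rows of a with no counterpart in b
join_xor   = m[m["_merge"] != "both"]          # "xor": unmatched rows from either catalog
```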
in the latter case, makes always use of galactic coordinates expressed in degrees , thus converting them accordingly if expressed in different format .when required , the two catalogs are split in the following way : in the first catalog all the entries are divided in groups , whose number depends on the multi - processing settings ( see sec . [ sect : config ] ) , since each process is assigned to one group ; in the second catalog the sky region defined by the data set is divided into square cells , by assigning a cell to each entry , according to its coordinates ( fig .[ fig : partitioning ] ) .we used the python multiprocess module to overcome the gil problem , by devoting particular care to the granularity of data to be handled in parallel .this implies that the concurrent processes do not need to share resources , since each process receives different files in input ( group of object of the 1st catalog and cells ) and produces its own output .finally the results are merged to produce the final output .the partitioning procedure on the second catalog is based on the dimensions of the matching areas : the size of the unit cell is defined by the maximum dimension that the elliptical matching regions can assume .if the `` size type '' is `` parametric '' , then the maximum value of the columns indicated in the configuration is used as cell size ; in the case of `` fixed '' values , the size of the cell will be the maximum of the two values defined in the configuration ( fig .[ fig : partitioning]a ) . in order to optimize the performance ,the size of the unit cell can not be less than a threshold value , namely the _ minimum partition cell size _ , which the user has to set through the configuration file .the threshold on the cell size is required in order to avoid the risk to divide the sky in too many small areas ( each one corresponding to a file stored on the disk ) , which could slow down the cross - matching phase performance . in sec .[ sect : optimization ] we illustrated a method to optimize such parameter as well as the number of processes to use , according to the hosting machine properties . once the partitioning is defined , each object of the second catalog is assigned to one cell , according to its coordinates .having defined the cells , the boundaries of an elliptical region associated to an object can fall at maximum in the eight cells surrounding the one including the object , as shown in fig .[ fig : partitioning]b .this prevents the block - edge problem previously introduced .once the data have been properly re - arranged , the cross - match analysis can start . in the _ row - by - row _ case , each row of the first catalogis simply merged with the corresponding row of the second data set through a serial procedure . in the other functional cases ,the cross - matching procedure has been designed and implemented to run by using parallel processing , i.e. by assigning to each parallel process one group generated in the previous phase . in the _ exact value _case , each object of the group is compared with all the records of the second catalog and matched according to the conditions defined in the configuration file . in the _ sky _ functional case ,the matching procedure is slightly more complex . 
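the partitioning just described can be sketched as follows : square cells whose side is the largest possible matching dimension , a dictionary mapping each cell to the sources it contains , a lookup of the 3x3 moore neighbourhood around a given position , and the membership test for a rotated elliptical matching area . this is a flat - sky toy version written for this text ( our own names , no spherical geometry or wrap - around at the coordinate boundaries ) , not the actual implementation .

```python
import numpy as np
from collections import defaultdict


def build_cells(coords, cell_size):
    """assign each source (lon, lat in degrees) to a square cell of side cell_size (degrees)."""
    cells = defaultdict(list)
    for idx, (lon, lat) in enumerate(coords):
        key = (int(np.floor(lon / cell_size)), int(np.floor(lat / cell_size)))
        cells[key].append(idx)
    return cells


def neighbourhood(cells, lon, lat, cell_size):
    """indices of the second catalog falling in the 3x3 moore neighbourhood of a position."""
    cx, cy = int(np.floor(lon / cell_size)), int(np.floor(lat / cell_size))
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(cells.get((cx + dx, cy + dy), []))
    return out


def in_rotated_ellipse(dlon, dlat, a, b, pa_deg):
    """true if the offset (dlon, dlat) lies inside an ellipse of semi-axes a, b
    rotated by the position angle pa_deg (small-angle, flat-sky approximation)."""
    pa = np.radians(pa_deg)
    u = dlon * np.cos(pa) + dlat * np.sin(pa)
    v = -dlon * np.sin(pa) + dlat * np.cos(pa)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```

choosing the cell side equal to the largest matching dimension is what guarantees that all candidate counterparts of a source lie in its moore neighbourhood , which is the property exploited below .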
as described in sec .[ sect : usecases ] , the cross - match at the basis of the method is based on the relative position of two objects : for each object of the first input catalog , defines the elliptical / rectangular region centered on its coordinates and dimensions .therefore a source of the second catalog is matched if it falls within such region . in practice ,as explained in the pre - matching phase , having identified a specific cell for each object of a group , this information is used to define the minimum region around the object used for the matching analysis .the described choice to set the dimensions of the cells ensures that , if a source matches with the object , it must lie in the nine cells surrounding the object ( also known as moore s neighborhood , , see also fig . [fig : partitioning]b ) .therefore it is sufficient to cross - match an object of a group only with the sources falling in nine cells . in the _ sky _ functional case, performs a cross - matching of objects lying within an elliptical , circular or rectangular area , centered on the sources of the first input catalog .the matching area is characterized by configuration parameters defining its shape , dimensions and orientation . in fig .[ fig : pa ] is depicted a graphical representation of two matching areas ( elliptical and rectangular ) with the indication of its parameters . in particular , to define the orientation of the matching area , requires two further parameters besides the offset and the value of the position angle , representing its orientation .the position angle , indeed , is referred , by default , to the greatest axis of the matching area with a clockwise orientation .the two additional parameters give the possibility to indicate , respectively , the correct orientation ( clockwise / counterclockwise ) and a shift angle ( in degrees ) . finally , the results of the cross - matching are stored in a file , containing the concatenation of all the columns of the input catalogs referred to the matched rows . in the _ sky _functional case the column reporting the separation distance between the two matching objects is also included .the tool is interfaced with the user through a single configuration file , to be properly edited just before the execution of any experiment . if the catalogs do not contain the source s designation / id information , will automatically assign an incremental row - id to each entry as object designation . for the _ sky _ functional case , assuming that both input catalogs contain the columns reporting the object coordinates , is able to work with galactic and equatorial ( icrs , fk4 , fk5 ) coordinate systems , expressed in the following units : degrees , radians or sexagesimal .if the user wants to use catalog information to define the matching region ( for instance , the fwhms or a radius defined by the instrumental resolution ) , obviously the first input catalog must contain such data .the position angle value / column is , on the contrary , an optional information ( default is 0 , clockwise ) . is conceived for a community as wide as possible , hence it has been designed in order to satisfy the requirement of user - friendliness . therefore , the configuration phase is limited to the editing of a setup file , can also automatically generate a dummy configuration file that could be used as template . 
] containing all the information required to run .this file is structured in sections , identified by square brackets : the first two are required , while the others depend on the particular use case . in particular , the user has to provide the following information : 1 . the input files and their format ( fits , ascii , csv or votable ) ; 2 .the name and paths of the temporary , log and output files ; 3 . the match criterion , corresponding to one of the functional cases ( _ sky , exact value , row - by - row _ ) . gives also the possibility to set the number of processes running in parallel , through an optional parameter which has as default the number of cores of the working machine ( minus one left available for system auxiliary tasks ) .the configuration for the _ sky _ functional case foresees the setup of specific parameters of the configuration file : those required to define the shape and dimensions of the matching area , the properties of the input catalogs already mentioned in sec .[ sect : usecases ] , coordinate system , units as well as the column indexes for source coordinates and designation .in addition , a parameter characterizing the sky partitioning has to be set ( see sec .[ sect : preproc ] for further information ) .the parameters useful to characterize the matching area are the following : area shape : : it can be elliptical or rectangular ( circular is a special elliptical case ) ; size type : : the valid entries are _ fixed _ or _parametric_. in the first case , a fixed value will be used to determine the matching area ; in the second , the dimensions and inclination of the matching area will be calculated by using catalog parameters ; first and second dimensions of matching area : : the axes of the ellipse or width and height of the rectangular area . in case of fixed `` size type '' ,they are decimal values ( in arcsec ) , otherwise , they represent the index ( integer ) or name ( string ) of the columns containing the information to be used ; parametric factor : : it is required and used only in the case of parametric `` size type '' .it is a decimal number factor to be multiplied by the values used as dimensions , in order to increase or decrease the matching region , as well as useful to convert their format ; pa column / value : : it is the position angle value ( in the `` fixed '' case , expressed in degrees ) or the name / id of the column containing the position angle information ( in the `` parametric '' case ) ; pa settings : : the position angle , which in is referred , by default , to the main axis of the matching area ( greatest ) with a clockwise orientation .the two parameters defined here give the possibility to indicate the correct orientation ( clockwise / counterclockwise ) and a shift angle ( in degrees ) .the user has also to specify which rows must be included in the output file , by setting the two parameters indicating the match selection and the join type , as described in sec .[ sect : join ] . for the _ exact value _functional case it is required to set the name or i d of the columns used for the match for both input files .the user has also to specify which rows must be included in the output file , by setting the two parameters indicating the match selection and the join type , as described in sec .[ sect : join ] . 
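to give a flavour of such a setup file , the snippet below writes a hypothetical ini - style configuration for the _ sky _ functional case using python's configparser . the section and option names are invented for illustration and do not necessarily match the tool's actual keywords .

```python
import configparser

# hypothetical c3-style setup: section and option names are illustrative only
cfg = configparser.ConfigParser()
cfg["io"] = {
    "catalog1": "gps_sources.csv",
    "catalog2": "glimpse_sources.csv",
    "format1": "csv",
    "format2": "csv",
    "output": "matched.csv",
}
cfg["match"] = {
    "criterion": "sky",            # sky | exact value | row-by-row
    "match_selection": "best",     # best | all
    "join_type": "1 and 2",
}
cfg["sky"] = {
    "area_shape": "ellipse",       # ellipse | rectangle
    "size_type": "fixed",          # fixed | parametric
    "first_dimension": "5.0",      # arcsec, or a column name if parametric
    "second_dimension": "5.0",
    "pa_value": "0",               # degrees, clockwise by default
    "coord_system_1": "galactic",
    "coord_units_1": "degrees",
    "min_cell_size": "100",        # arcsec, sky-partition threshold
    "processes": "3",
}
with open("experiment.ini", "w") as fh:
    cfg.write(fh)
```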
for the _ row - by - row _ functional case ,no other settings are required .the only constrain is that both catalogs must have the same number of entries .as reflected from the description of , the choice of the best values for its internal parameters ( in particular the number of parallel processes and the minimum cell size , introduced in sec .[ sect : preproc ] ) , is crucial to obtain the best computational efficiency .this section is dedicated to show the importance of this choice , directly depending on the features of the hosting machine . in the following tests we used a computer equipped with an intel(r )core(tm ) , with one , cpu , gb of ram and hosting ubuntu linux as operative system ( os ) on a standard hard disk drive .we proceeded by performing two different kinds of tests : 1 .a series of tests with a fixed value for the minimum cell size ( ) and different values of the number of parallel processes ; 2 . a second series by using the best value of number of parallel processes found at previous step and different values for the minimum cell size .the configuration parameters used in this set of tests are reported in table [ test1:settings ] .the input data sets are two identical catalogs ( csv format ) consisting of objects extracted from the ukidss gps public data , in the range of galactic coordinates ] .each record is composed by columns . the choice to cross - match a catalog with itself represent the worst case in terms of cross - matching computational time , since each object matches at least with itself . by setting `` match selection '' as best and`` join type '' as 1 and 2 ( see table [ test1:settings ] ) , we obtained an output of objects matched with themselves as expected .we also performed all the tests by using a random shuffled version of the same input catalog , obtaining the same results .this demonstrates that the output is not affected by the particular order of data in the catalogs .. settings in the first set of tests performed to evaluate the impact of the number of parallel processes and the minimum cell size configuration parameters on the execution time .the choice of same dimensions for the ellipse axes was due to perform a fair comparison with stilts and cds - xmatch , which allow only circular cross - matching . [ cols="^,^",options="header " , ]the first input catalog has been extracted by the ukidss gps data in the range of galactic coordinates ] , while the second input catalog has been extracted by the glimpse _data , ( and ) , in the same range of coordinates . from each catalog , different subsets with variable number of objects have been extracted .in particular , data sets with , respectively , , , , and objects have been created from the first catalog , while , from second catalog , data sets with , , and rows have been extracted .then , each subset of first catalog has been cross - matched with all the subsets of the second catalog . for uniformity of comparison , due to the limitations imposed by cds - xmatch in terms of available disk space , it has been necessary to limit to only the number of columns for all the subsets involved in the tests performed to compare c and cds - xmatch ( for instance , i d and galactic coordinates ) . 
for the same reason, the data set with rows has not been used in the comparison between c and cds - xmatch .the common internal configuration used in these tests is shown in table [ test1:settings ] , except for the match selection there was , in fact , the necessity to set it to _ all _ for uniformity of comparison with the cds - xmatch tool ( which makes available only this option ) .then the _ best _ type has been used to compare with stilts and topcat .furthermore , in all the tests , the number of parallel processes was set to and the minimum cell size to , corresponding to the best conditions found in the optimization process of ( see sec . [ sect : optimization ] ) . finally , we chose same dimensions of the ellipse axes in order to be aligned with other tools , which allow only circular cross - matching areas . concerning the comparison among and the three mentioned tools , in the cases of both _ all _ and _ best _ types of matching selection , all tools provided exactly the same number of matches in the whole set of tests , thus confirming the reliability of with respect to other tools ( table [ tab : matchres ] ) .rows has not been used . ] in terms of computational efficiency , has been evaluated by comparing the computational time of its cross - matching phase with the other tools .the pre - matching and output creation steps have been excluded from the comparison , because strongly dependent on the host computing infrastructure .the other configuration parameters have been left unchanged ( table [ test1:settings ] ) .the complete setup for the described experiments is reported in the appendix . in fig .[ fig : c3vsstrows ] we show the computational time of the cross - matching phase for and stilts , as function of the incremental number of rows ( objects ) in the first catalog , and by varying the size of the second catalog in four cases , spanning from to rows . in all diagrams, it appears evident the difference between the two tools , becoming particularly relevant with increasing amounts of data . in the second set of tests performed on the cross - matching phase and stilts ,the computational time has been evaluated as function of the incremental number of columns of the first catalog ( from the minimum required up to , the maximum number of columns of catalog 1 ) , and by fixing the number of columns of the second catalog in five cases , respectively , , , , and , which is the maximum number of columns for catalog 2 . in terms of number of rows , in all cases both catalogswere fixed to of entries . in fig .[ fig : c3vsstcols ] the results only for and columns of catalog 2 are reported , showing that is almost invariant to the increasing of columns , becoming indeed faster than stilts from a certain amount of columns .such trend is confirmed in all the other tests with different number of columns of the second catalog .this behavior appears particularly suitable in the case of massive catalogs .finally , in the case of two fits input files instead of csv files , stilts computational time as function of the number of columns is constant and slightly faster than . in the last series of tests , we compared the computational efficiency between the cross - matching phase and cds - xmatch .in this case , due to the limitation of the catalog size imposed by cds - xmatch , the tests have been performed by varying only the number of rows from to as in the analogous tests with stilts ( except the test with rows ) , fixing the number of columns to . 
moreover , in this case , the cross - matching phase of has been compared with the duration of the phase _ execution _ of the cds - xmatch experiment , thus ignoring latency time due to the job submission , strongly depending on the network status and the state of the job queue , but taking into account the whole job execution . the results , reported in fig . [fig : c3vsxmrows ] , show a better performance of , although less evident when both catalogs are highly increasing their dimensions , where the differences due to the different hardware features become more relevant . at the end of the test campaign ,two other kinds of tests have been performed : ( i ) the verification of the portability of on different oss and ( ii ) an analysis of the impact of different disk technology on the computing time efficiency of the tool . in the first case , we noted , as expected , a decreasing of overall time performance on the windows versions ( and ) , with respect to same tests executed on linux versions ( ubuntu and fedora ) and mac os . onaverage execution was times more efficient on linux and mac os than windows .this is most probably due to the different strategy of disk handling among various oss , particularly critical for applications , like cross - matching tools , which make an intensive use of disk accesses .this analysis induced us to compare two disk technologies : hdd ( hard disk drive ) vs ssd ( solid state disk ) . both kinds of disks have been used on a sample of the tests previously described , revealing on average a not negligible increasing of computing time performance in the ssd case of times with respect to hdd . for clarity ,all test results presented in the previous sections have been performed on the same hdd .in this paper we have introduced , a new scalable tool to cross - match astronomical data sets .it is a multi - platform command - line python script , designed to provide the maximum flexibility to the end users in terms of choice about catalog properties ( i / o formats and coordinates systems ) , shape and size of matching area and cross - matching type .nevertheless , it is easy to configure , by compiling a single configuration file , and to execute as a stand - alone process or integrated within any generic data reduction / analysis pipeline . 
in order to ensure the high - performance capability ,the tool design has been based on the multi - core parallel processing paradigm and on a basic sky partitioning function to reduce the number of matches to check , thus decreasing the global computational time .moreover , in order to reach the best performance , the user can tune on the specific needs the shape and orientation of the matching region , as well as tailor the tool configuration to the features of the hosting machine , by properly setting the number of concurrent processes and the resolution of sky partitioning .although elliptical cross - match and the parametric handling of angular orientation and offset are known concepts in the astrophysical context , their availability in the presented command - line tool makes competitive in the context of public astronomical tools .a test campaign , done on real public data , has been performed to scientifically validate the tool , showing a perfect agreement with other publicly available tools .the computing time efficiency has been also measured by comparing our tool with other applications , representative of different paradigms , from stand - alone command - line ( stilts ) and graphical user interface ( topcat ) to web applications ( cds - xmatch ) . such tests revealed the full comparable performance , in particular when input catalogs increase their size and dimensions . for the next release of the tool , the work will be mainly focused on the optimization of the pre - matching and output creation phases , by applying the parallel processing paradigm in a more intensive way .moreover , we are evaluating the possibility to improve the sky partitioning efficiency by optimizing the calculation of the minimum cell size , suitable also to avoid the block - edge problem .the tool , , and the user guide are available at the page http://dame.dsf.unina.it/c3.html .the authors would like to thank the anonymous referee for extremely valuable comments and suggestions .mb and sc acknowledge financial contribution from the agreement asi / inaf i/023/12/1 .mb , am and gr acknowledge financial contribution from the 7th european framework programme for research grant fp7-space-2013 - 1 , _ vialactea - the milky way as a star formation engine_. mb and am acknowledge the prin - inaf 2014 _ glittering kaleidoscopes in the sky : the multifaceted nature and role of galaxy clusters_. 99 agrafioti , i. 2012 , from the geosphere to the cosmos , synergies with astroparticle physics , astroparticle physics for europe ( aspera ) , contributed volume , http://www.aspera-eu.org annis , j. t. , 2013 , in american astronomical society meeting abstracts # 221 , des survey strategy and expectations for early science , 221 , id.335.05 becciani , u. , bandieramonte , m. , brescia , m. , et al .2015 , in proc .adass xxv conf . ,advanced environment for knowledge discovery in the vialactea project , in press .( arxiv:1511.08619 )benjamin , r. a. , churchwell , e. , babler , b.l .2003 , , 115 , 953 , doi : 10.1086/376696 boch , t. , pineau , f. x. , & derriere , s. 2014 , cds xmatch service documentation , http://cdsxmatch.u-strasbg.fr/xmatch/doc/ braun , r. 2015 , in proc . of `` the many facets of extragalactic radio surveys : towards new scientific challenges '' ( extra - radsur2015 ) .20 - 23 october 2015 .bologna , italy .http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=267 , id.34 budavri , t. , & lee , m. a. 
2013 , xmatch : gpu enhanced astronomic catalog cross - matching , astrophysics source code library , record ascl:1303.021 budavri , t. , & szalay , a. s. 2008 , , 679 , 301 cavuoti , s. , brescia , m. , longo , g. 2012 , proc .spie , 8451 , 845103 , doi : 10.1117/12.925321 churchwell , e. , babler , b.l . , meade , m.r .2009 , , 121 , 213 de jong , j. t. a. , verdoes kleijn , g. a. , boxhoorn , d. r. , et al .2015 , a&a , 582 , a62 douglas , j. , de bruijne , j. , oflaherty , k. , et al .2007 , esa bulletin , 132 , 26 du , p. , ren , j. j. , pan , j. c. , luo , a. 2014 , scpma , 57 , 577 gorski , k. m. , hivon , e. , banday , a. j. , et al .2005 , 622 , 759 gray , j. , nieto - santisteban , m. a. & szalay , a. s. 2006 , the zones algorithm for finding points - near - a - point or cross - matchin spatial datasetes , microsoft tech .: msr - tr-2006 - 52 gray , l. 2003 , not .50 , 200 ivezic , z. , 2009 , in aps april meeting abstracts , lsst : the physics of the dark universe , 54 , w4.003 , http://adsabs.harvard.edu/abs/2009aps..apr.w4003i ivoa recommendation 2005 , an ivoa standard for unified content decriptors version 1.1 ( http://adsabs.harvard.edu/abs/2005ivoa.spec.0819d ) jia , x. & luo , q. 2016 , in proc .conf . on scientific and statistical database management ( ssdbm 16 ) , ed .p. baumann et al .( new york , ny , acm ) , 12 , doi : 10.1145/2949689.2949705 jia , x. , luo , q. & fan , d. 2015 , in proc .ieee xxi int . conf . on parallel and distributed systems ( icpads ) , 617 ,doi : 10.1109/icpads.2015.83 kaiser , n. , 2004 , proc .spie , 5489 , 11 kunszt , p. z. , szalay , a. s. , thakar , a. r. in proc .mpa / eso / mpe workshop , eds banday , a. j. , zaroubi , s. , bartelmann , m. ( berlin : springer ) , 631 , doi : 10.1007/10849171_83 laureijs , r. , racca , g. , stagnaro , l. , et al .2014 , in proc .spie , 9143 , 91430h , doi : 10.1117/12.2054883 lee , m. a. , & budavri , t. , 2013 , in asp conf .475 , proc .astronomical data analysis software and systems xxii conf . , cross - identification of astronomical catalogs on multiple gpus , ed .friedel , d. n. , ( san francisco , ca : asp ) , 235 lucas , p. w , hoare , m. g , longmore , a. , et al .2008 , , 391 , 136 malkov , o. , dluzhnevskaya , o. , karpov , s. , et al .2012 , balta , 21 , 319 martins , c. j. a. p. , leite , a. c. o. , pedrosa , p. o. j. , 2014 , in statistical challenges in 21st century cosmology , fundamental cosmology with the e - elt , proc . of the international astronomical union ,iau symposium , ed .heavens , a. , starck , j .- l . &krone - martins , a. , 306 , 385 - 387 , doi : 10.1017/s1743921314013441 molinari , s. , schisano , e. , elia , d. , et al .2016 , a&a 591 , a149 , doi : 10.1051/0004 - 6361/201526380 motch , c. , & arches consortium , 2015 , in astronomical data analysis software and systems xxiv , the arches project , ed .a. r. taylor and e. rosolowsky ( san francisco : astronomical society of the pacific ) , 437 nieto - santisteban , m. a. , thakar , a. r. , szalay , a. s. , 2006 , cross - matching very large datasets ( baltimore , md : johns hopkins university ) pineau , f. x. , boch , t. , & derriere , s. 2011 , in asp conf .442 , proc astronomical data analysis software and systems xx , ed .evans , i. n. , accomazzi , a. , mink , d. j. , & rots , a. h. ( san francisco , ca : asp ) , 85 riccio , g. , brescia , m. , cavuoti , s. , mercurio , a. 2016 , c3 : command - line catalog crossmatch for modern astronomical surveys , astrophysics source code library , record ascl:1610.006 sciacca , e. 
, vitello , f. , becciani , u. , et al . , 2016 , milky way analysis through a science gateway : workflows and resource monitoring , proceedings of 8th international workshop on science gateways , june 2016 , rome , italy , submitted to ceur - ws , http://ceur-ws.org , issn : 1613 - 0073 .taylor , m. b. , 2005 , in asp conf . ser . 347 , astronomical data analysis software and systems xiv , ed .p. shopbell , m. britton , & r. ebert ( san francisco , ca : asp ) , 29 taylor , m. b. , 2006 , in asp conf . ser . 351 , astronomical data analysis software and systems xv , ed .c. gabriel et al .( san francisco , ca : asp ) , 666 valiante , e. 2015 , in iau general assembly , meeting 29 , the herschel - atlas survey : main results and data release , 22 , 2257414 van der walt , s. , colbert , s. c. & varoquaux , g. , 2011 , cse , 13 , 22 vanderplas , j. t. , connolly , a. j. , ivezi , & gray , a. 2012 , in proc .conf . on intelligent data understanding ( cidu ) ,introduction to astroml : machine learning for astrophysics , 47 - 54 , doi : 10.1109/cidu.2012.6382200 varga - verebelyi , e. , dobos , l. , budavari , t. , 2016 , in from interstellar clouds to star - forming galaxies : universal processes ?, iau symposium 315 , herschel footprint database and service , eprint arxiv:1602.01050 zhao , q. , sun , j. , yu , c. , et al . , 2009 , in algorithms and architectures for parallel processing , proc . of 9th international conference , ica3pp 2009 , a paralleled large - scale astronomical cross - matching function , eds .arrems , h. , chang , s .,-l . , 604 - 614 , isbn : 978 - 3 - 642 - 03095 - 6this appendix reports the configuration file as used in the example described in sec .[ sect : comparison ] .the text preceded by the semicolon is a comment . ....\textcolor{red}{[i / o files ] } input catalog 1 : \textcolor{olive}{./input / ukidss.csv } format catalog 1 : \textcolor{olive}{csv } \textcolor{blue}{;csv , fits , votable or ascii } input catalog 2 : \textcolor{olive}{./input / glimpse.csv } format catalog 2 : \textcolor{olive}{csv } \textcolor{blue}{;csv , fits , votable or ascii } output : \textcolor{olive}{./output / out.csv } output format : \textcolor{olive}{csv } \textcolor{blue}{;csv , fits , votable or ascii } log file : \textcolor{olive}{./output / out.log } stilts directory : \textcolor{olive}{./libs } working directory : \textcolor{olive}{./tmp } \textcolor{blue}{;temporary directory , removed when completed } \textcolor{red}{[sky parameters ] } area shape : \textcolor{olive}{ellipse } \textcolor{blue}{;ellipse or rectangle } size type : \textcolor{olive}{fixed } \textcolor{blue}{;parametric or fixed } matching area first dimension : \textcolor{olive}{5 } \textcolor{blue}{;arcsec for fixed type - column name / number for parametric type } matching area second dimension : \textcolor{olive}{5 } \textcolor{blue}{;arcsec for fixed type - column name / number for parametric type } parametric factor : \textcolor{olive}{1 } \textcolor{blue}{;multiplicative factor for dimension columns - required for parametric type } pa column / value : \textcolor{olive}{0 } \textcolor{blue}{;degrees for fixed type - column name / number for parametric type } pa settings : \textcolor{olive}{clock , 0 } \textcolor{blue}{;orientation ( clock , counter ) , shift ( degrees ) -empty or default = clock,0 } catalog 2 minimum partition cell size : \textcolor{olive}{100 } \textcolor{blue}{;arcsec } \textcolor{red}{[catalog 1 properties ] } coordinate system : \textcolor{olive}{galactic } \textcolor{blue}{;galactic , 
icrs , fk4 , fk5 } coordinate units : \textcolor{olive}{deg } \textcolor{blue}{;degrees ( or deg ) , radians ( or rad ) , sexagesimal ( or sex ) } glon / ra column : \textcolor{olive}{l } \textcolor{blue}{;column number or name - required for sky algorithm } glat / dec column : \textcolor{olive}{b } \textcolor{blue}{;column number or name - required for sky algorithm } designation column : \textcolor{olive}{sourceid } \textcolor{blue}{;column number or name - -1 for none } \textcolor{red}{[catalog 2 properties ] } coordinate system : \textcolor{olive}{galactic } \textcolor{blue}{;galactic , icrs , fk4 , fk5 } coordinate units : \textcolor{olive}{deg } \textcolor{blue}{;degrees ( or deg ) , radians ( or rad ) , sexagesimal ( or sex ) } glon / ra column : \textcolor{olive}{l } \textcolor{blue}{;column number or name - required for sky algorithm } glat / dec column : \textcolor{olive}{b } \textcolor{blue}{;column number or name - required for sky algorithm } designation column : \textcolor{olive}{designation } \textcolor{blue}{;column number or name , -1 for none } \textcolor{red}{[output rows ] } match selection : \textcolor{olive}{all } \textcolor{blue}{;all or best } join type : \textcolor{olive}{1 and 2 } \textcolor{blue}{;1 and 2 , 1 or 2 , all from 1 , all from 2 , 1 not 2 , 2 not 1 , 1 xor 2 } ....
modern astrophysics is based on multi - wavelength data organized into large and heterogeneous catalogs . hence , the need for efficient , reliable and scalable catalog cross - matching methods plays a crucial role in the era of the petabyte scale . furthermore , multi - band data have often very different angular resolution , requiring the highest generality of cross - matching features , mainly in terms of region shape and resolution . in this work we present ( command - line catalog cross - match ) , a multi - platform application designed to efficiently cross - match massive catalogs . it is based on a multi - core parallel processing paradigm and conceived to be executed as a stand - alone command - line process or integrated within any generic data reduction / analysis pipeline , providing the maximum flexibility to the end - user , in terms of portability , parameter configuration , catalog formats , angular resolution , region shapes , coordinate units and cross - matching types . using real data , extracted from public surveys , we discuss the cross - matching capabilities and computing time efficiency also through a direct comparison with some publicly available tools , chosen among the most used within the community , and representative of different interface paradigms . we verified that the tool has excellent capabilities to perform an efficient and reliable cross - matching between large data sets . although the elliptical cross - match and the parametric handling of angular orientation and offset are known concepts in the astrophysical context , their availability in the presented command - line tool makes competitive in the context of public astronomical tools .
gaussian process classifiers are a very effective family of non - parametric methods for supervised classification . in the binary case ,the class label associated to each data instance is assumed to depend on the sign of a function which is modeled using a gaussian process prior . given some data , learning is performed by computing a posterior distribution for .nevertheless , the computation of such a posterior distribution is intractable and it must be approximated using methods for approximate inference .a practical disadvantage is that the cost of most of these methods scales like , where is the number of training instances .this limits the applicability of gaussian process classifiers to small datasets with a few data instances at most .recent advances on gaussian process classification have led to sparse methods of approximate inference that reduce the training cost of these classifiers .sparse methods introduce inducing points or pseudoinputs , whose location is determined during the training process , leading to a training cost that is .a notable approach combines in the sparse approximation suggested in with stochastic variational inference .this allows to learn the posterior for and the hyper - parameters ( inducing points , length - scales , amplitudes and noise ) using stochastic gradient ascent .the consequence is that the training cost is , which does not depend on the number of instances .similarly , in a recent work , expectation propagation ( ep ) is considered as an alternative to stochastic variational inference for training these classifiers .that work shows ( i ) that stochastic gradients can also be used to learn the hyper - parameters in ep , and ( ii ) that ep performs similarly to the variational approach , but does not require one - dimensional quadratures .a disadvantage of the approach described in is that the memory requirements scale like since ep stores in memory parameters for each data instance .this is a severe limitation when dealing with very large datasets with millions of instances and complex models with many inducing points . to reduce the memory cost , we investigate in this extended abstract , as an alternative to ep , the use of stochastic propagation ( sep ) . unlike ep, sep only stores a single global approximate factor for the complete likelihood of the model , leading to a memory cost that scales like .we now explain the method for gaussian process classification described in .consider the observed labels .let be a matrix with the observed data .the assumed labeling rule is , where is a non - linear function following a zero mean gaussian process with covariance function , and is standard normal noise that accounts for mislabeled data .let be the matrix of inducing points ( _ i.e. _ , virtual data that specify how varies ) .let and be the vectors of values associated to and , respectively .the posterior of is approximated as , with a gaussian that approximates , _i.e. _ , the posterior of the values associated to . to get , first the full independent training conditional approximation ( fitc ) of employed to approximate and to reduce the training cost from to : where , and , with , , and is the marginal likelihood .furthermore , is a matrix with the prior covariances among the entries in , is a row vector with the prior covariances between and and is the prior variance of .finally , denotes the p.d.f of a gaussian distribution with mean vector equal to and covariance matrix equal to .next , the r.h.s . 
of ( [ eq : posterior ] ) is approximated in via expectation propagation ( ep ) to obtain .for this , each non - gaussian factor is replaced by a corresponding un - normalized gaussian approximate factor .that is , , where is a dimensional vector , and , and are parameters estimated by ep so that is similar to in regions of high posterior probability as estimated by .namely , , where is the kullback leibler divergence .we note that each has a one - rank precision matrix and hence only parameters need to be stored per each . the posterior approximation is obtained by replacing in the r.h.s . of ( [ eq : posterior ] )each exact factor with the corresponding .namely , , where is a constant that approximates , which can be maximized for finding good hyper - parameters via type - ii maximum likelihood .finally , since all factors in are gaussian , is a multivariate gaussian . in order for gaussian process classification to work well, hyper - parameters and inducing points must be learned from the data .previously , this was infeasible on big datasets using ep . in gradient of w.r.t ( _ i.e. _ , a parameter of the covariance function or a component of ) is : where and are the expected sufficient statistics under and , respectively , are the natural parameters of , and is the normalization constant of .we note that ( [ eq : gradient ] ) has a sum across the data .this enables using stochastic gradient ascent for hyper - parameter learning . a batch iteration of ep updates in parallel each .after this , is recomputed and the gradients of with respect to each hyper - parameter are used to update the model hyper - parameters .the ep algorithm in can also process data using minibatches of size . in this case , the update of the hyper - parameters and the reconstruction of is done after processing each minibatch .the update of each corresponding to the data contained in the minibatch is also done in parallel . when computing the gradient of the hyper - parameters , the sum in the r.h.s .of ( [ eq : gradient ] ) is replaced by a stochastic approximation , _i.e. _ , , with the set of indices of the instances of the current minibatch . when using minibatches and stochastic gradients the training cost is method described in the previous section has the disadvantage that it requires to store in memory parameters for each approximate factor .this leads to a memory cost that scales like .thus , in very big datasets where is of the order of several millions , and in complex models where the number of inducing points may be in the hundreds , this cost can lead to memory problems . to alleviate this, we consider training via stochastic expectation propagation ( sep ) as an alternative to expectation propagation .sep reduces the memory requirements by a factor of . r0.5 ll + & for each approximate factor to update : + 1.1 : & + 1.2 : & + 2 : & reconstruct : + ll + & set the new global factor to be uniform .+ 2 : & for each exact factor to incorporate : + 2.1 : & + 2.2 : & + 2.3 : & + 3 : & reconstruct : + ll + & set to the prior . 
for each to process :+ 1.1 : & + 1.2 : & + 2 : & update : + in sep the likelihood of the model is approximated by a single global gaussian factor , instead of a product of gaussian factors .the idea is that the natural parameters of approximate the sum of the natural parameters of the ep approximate factors .this approximation reduces by a factor of the memory requirements because only the natural parameters of need to be stored in memory , and the size of is dominated by the precision matrix of , which scales like . when sep is used instead of ep for finding some things change .in particular , the computation of the cavity distribution is now replaced by , .furthermore , in the case of the batch learning method described in the previous section , the corresponding approximate factor for each instance is computed as to then set .this is equivalent to adding natural parameters , _i.e. _ , . in the case of minibatch training with minibatches of size the update is slightly different to account for the fact that we have only processed a small amount of the total data . in this case , , where is a set with the indices of the instances contained in the current minibatch .finally , in sep the computation of the gradients for updating the hyper - parameters is done exactly as in ep .figure [ fig : fig_ep_vs_sep ] compares among ep , sep and adf when used to update . in the figure trainingis done in batch mode and the update of the hyper - parameters has been omitted since it is exactly the same in either ep , sep or adf . in adf the cavity distribution is simply the posterior approximation , and when is recomputed , the natural parameters of the approximate factors are simply added to the natural parameters of .adf is a simple baseline in which each data point is _ seen _ by the model several times and hence it underestimates variance .we evaluate the performance of the model described before when trained using ep , sep and adf .* performance on datasets from the uci repository : * first , we consider 7 datasets from the uci repository .the experimental protocol followed is the same as the one described in . in these experimentswe consider a different number of inducing points .namely , , and of the total training instances and the training of all methods is done in batch mode for 250 iterations .table [ tab : ll_uci ] shows the average negative test log likelihood of each method ( the lower the better ) on the test set .the best method has been highlighted in boldface .we note that sep obtains similar and sometimes even better results than ep .by contrast , adf performs worse , probably because it underestimating the posterior variance . in terms of the average training timeall methods are equal ..average negative test log likelihood for each method and average training time in seconds . [ cols="<,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^ " , ]* performance on big datasets : * we carry out experiments when the model is trained using minibatches .we follow and consider the mnist dataset , which has 70,000 instances , and the airline delays dataset , which has 2,127,068 data instances ( see for more details ) . in both casesthe test set has 10,000 instances .training is done using minibatches of size 200 , which is equal to the number of inducing points . 
in the case of the mnist datasetwe also report results for batch training ( in the airline dataset batch training is infeasible ) .figure [ fig : stochastic ] shows the avg .negative log likelihood obtained on the test set as a function of training time . in the mnist dataset training using minibatchesis much more efficient .furthermore , in both datasets sep performs very similar to ep .however , in these experiments adf provides equivalent results to both sep and ep .furthermore , in the airline dataset both sep and adf provide better results than ep at the early iterations , and improve a simple linear model after just a few seconds .the reason is that , unlike ep , sep and adf do not initialize the approximate factors to be uniform , which has a significant cost in this dataset .r0.5 the results obtained in the large datasets contradict the results obtained in the uci datasets in the sense that adf performs similar to ep .we believe the reason for this is that adf may perform similar to ep only when the model is simple ( small ) and/or when the number of training instances is very large ( large ) . to check that this is the case , we repeat the experiments with the mnist dataset with an increasing number of training instances ( from to ) and with an increasing number of inducing points ( from to ) .the results obtained are shown in figure [ fig : n_vsm ] , which confirms that adf only performs similar to ep in the scenario described .by contrast , sep seems to always perform similar to ep .finally , increasing the model complexity ( ) seems beneficial .stochastic expectation propagation ( sep ) can reduce the memory cost of the method recently proposed in to address large scale gaussian process classification .such a method uses expectation propagation ( ep ) for training , which stores parameters in memory , where is some small constant and is the training set size. this cost may be too expensive in the case of very large datasets or complex models .sep reduces the storage resources needed by a factor of , leading to a memory cost that is .furthermore , several experiments show that sep provides similar performance results to those of ep .a simple baseline known as adf may also provide similar results to sep , but only when the number of instances is very large and/or the chosen model is very simple . finally , we note that applying bayesian learning methods at scale makes most sense with large models , and this is precisely the aim of the method described in this extended abstract . *acknowledgments : * yl thanks the schlumberger foundation for her faculty for the future phd fellowship .jmhl acknowledges support from the rafael del pino foundation .ret thanks epsrc grant # s ep / g050821/1 and ep / l000776/1 .tb thanks google for funding his european doctoral fellowship .dhl and jmhl acknowledge support from plan nacional i+d+i , grant tin2013 - 42351-p , and from comunidad autnoma de madrid , grant s2013/ice-2845 casi - cam - cm .dhl is grateful for using the computational resources of _ centro de computacin cientfica _ at universidad autnoma de madrid .j. hensman , a. matthews , and z. ghahramani .scalable variational gaussian process classification . in _ proceedings of the eighteenth international conference on artificial intelligence and statistics _ , 2015 .
a method for large scale gaussian process classification has been recently proposed based on expectation propagation ( ep ) . such a method allows gaussian process classifiers to be trained on very large datasets that were out of the reach of previous deployments of ep and has been shown to be competitive with related techniques based on stochastic variational inference . nevertheless , the memory resources required scale linearly with the dataset size , unlike in variational methods . this is a severe limitation when the number of instances is very large . here we show that this problem is avoided when stochastic ep is used to train the model .
_ markov chain monte carlo _ ( mcmc ) _ methods _ allow samples from virtually any target distribution , known up to a normalizing constant , to be generated . in particular , the celebrated _ metropolis hastings algorithm _ ( introduced in and )simulates a markov chain evolving according to a reversible markov transition kernel by first generating , using some instrumental kernel , a candidate and then accepting or rejecting the same with a probability adjusted to satisfy the detailed balance condition . when choosing between several metropolis hastings algorithms , it is desirable to be able to compare the efficiencies , in terms of the asymptotic variance of sample path averages , of different -reversible markov chains . despite the practical importance of this question , only a few results in this direction exist the literature .peskun defined a partial ordering for finite state space markov chains , where one transition kernel has a higher order than another if the former dominates the latter on the off - diagonal ( see definition [ defipeskunordering ] ) .this ordering was extended later by tierney to general state space markov chains and another even more general ordering , the covariance ordering , was proposed in . in general , it holds that if a homogeneous -reversible markov transition kernel is greater than another according to one of these orderings , then the asymptotic variance of sample path averages for a markov chain evolving according to the former is smaller for all square integrable ( with respect to ) target functions .we provide an extension of this result to inhomogeneous markov chains that evolve alternatingly according to two different -reversible markov transition kernels . to the best of our knowledge ,this is the first work dealing with systematic comparison of asymptotic variances of inhomogeneous markov chains .the approach is linked with the operator theory for markov chains but does not make use of any spectral representation . after some preliminaries ( section [ secpreliminaries ] ) , our main result , theorem [ teomainresult ] , is stated in section [ secmain ] . in section [ secappl ] , we apply theorem [ teomainresult ] in the context of mcmc algorithms by comparing the efficiency , in terms of asymptotic variance , of some existing data - augmentation - type algorithms . moreover , we propose a novel pseudo - marginal algorithm ( in the sense of ) , referred to as the _ random refreshment _algorithm , which on the contrary to the pseudo - marginal version of the _ monte carlo within metropolis _ ( mcwm ) algorithm turns out to be exact and more efficient than the pseudo - marginal version of the _ grouped independence metropolis hastings _ ( gimh ) algorithm . here, the analysis is again driven by theorem [ teomainresult ] .the proof of theorem [ teomainresult ] is given in section [ secproofmain ] and some technical lemmas are postponed to appendix [ app ] .finally , appendix [ secappb ] relates some existing mcmc algorithms to the framework considered in this paper .we denote by and the sets of nonnegative and positive integers , respectively . in the following ,all random variables are assumed to be defined on a common probability space .let be a measurable space ; then we denote by and the spaces of positive measures and measurable functions on , respectively . 
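as a point of reference for the algorithms compared later on, recall that the metropolis hastings acceptance probability mentioned in the introduction takes, in its standard form with target density \( \pi \) and proposal density \( q \),
\[
\alpha(x, y) = 1 \wedge \frac{\pi(y)\, q(y, x)}{\pi(x)\, q(x, y)},
\]
which is exactly what makes the resulting kernel \( \pi \)-reversible; the augmented versions of this acceptance probability used in this paper are introduced in section [ secappl ].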
the lebesgue integral of over with respect to the measure is , when well - defined , denoted by that a _ markov transition kernel _ on is a mapping ] the space of square integrable functions with respect to and furnish the same with the scalar product , g \in\ltwo[\pi ] \bigr)\ ] ] and the associated norm \bigr).\ ] ] here , we have expunged the measure from the notation for brevity .if is a markov kernel on admitting as an invariant distribution , then the mapping defines an operator on ] , that is , for all and belonging to ] , where we have defined , for a markov chain [\mathbb{n}] ] , then is guaranteed to exist ( but may be infinite ) .nevertheless , the ordering in question does not allow markov kernels lacking probability mass on the diagonal , that is , kernels satisfying for all , to be compared .this is in particular the case for gibbs samplers in general state space . to overcome this limitation ,one may consider instead the following covariance ordering based on lag - one autocovariances .[ deficovarordering ] let and be markov transition kernels on with invariant distribution .we say that _ dominates in the covariance ordering _, denoted p_0 ] , the covariance ordering , which was introduced implicitly in , page 5 , and formalized in , is an extension of the off - diagonal ordering since according to , lemma 3 , implies p_0 ] implies ( see the proof of , theorem 4 ) .all these results concern homogeneous markov chains , whereas many mcmc algorithms such as the gibbs or the metropolis - within - gibbs samplers use several kernels , for example , and in the case of two kernels .a natural idea would then be to apply theorem [ teoefficiencyordering ] to the homogeneous markov chain having the block kernel as transition kernel ; however , even when the kernels and are both -reversible , the product of the same is usually not -reversible , except in the particular case when and commute , that is , .thus , theorem [ teoefficiencyordering ] can not in general be applied directly in this case .in the following , let and , , be markov transition kernels on .define and as the markov chains evolving as follows : {i } \stackrel{p_i } { \longrightarrow } \x[1]{i } \stackrel{ q_i } { \longrightarrow } \x[2]{i } \stackrel{p_i } { \longrightarrow } \x[3]{i } \stackrel { q_i } { \longrightarrow } \cdots.\ ] ] this means that for all , and : * {i } \in\mathsf{a } { |}\mathcal{f}_{2 k}^{(i ) } ) = p_i(\x[2 k]{i } , \mathsf{a}) ] , where {i},\ldots , \x[n]{i}) p_1 \pgeq[1 ] p_0 q_1 \pgeq[1 ] q_0 ] ; thus , in practice , a sufficient condition for ( ii ) is that and .[ teomainresult ] assume that and , , satisfy and let , , be markov chains evolving as in ( [ eqeq1markov ] ) with initial distribution .then for all ] , for some markov kernels and .[ propaltcondition ] if the markov kernel is -geometrically ergodic , then for all functions such that and , { } \bigr ) } { f \bigl(\x[k ] { } \bigr)}\bigr|+\bigl|\covardu{f \bigl(\x [ 1 ] { } \bigr ) } { f \bigl ( \x[k+1 ] { } \bigr)}\bigr| \bigr ) < \infty,\ ] ] where { } ; k \in\mathbb{n}\} ] of valued random variables . to this aim ,tanner and wong suggest writing as the marginal of some distribution defined on the product space in the sense that , where is some markov transition kernel on . in most cases ,the marginal is of sole interest , while the component is introduced for convenience as a means of coping with analytic intractability of the marginal .( it could also be the case that the marginal is too computationally expensive to evaluate . 
) a first solution consists in letting [\mathbb{n}] ] of the -reversible markov chain {1 } , \aux[k]{1 } ) ; k \in\mathbb{n}\} ] by algorithm [ algalg1 ] .{1 } , \aux[k]{1 } ) = ( y , u) \displaystyle\alpha ( y , u , \hat{y } , \hat{u}) \displaystyle:=1 \wedge\frac { \pi^{\ast}(\hat{y } ) r(\hat{y } , \hat{u})s(\hat{y } , \hat{u } ; y ) t(\hat{y } , \hat{u } , y ; u)}{\pi^{\ast}(y ) r(y , u ) s(y , u ; \hat{y } ) t(y , u , \hat{y } ; \hat{u})} ] is a -reversible markov chain . as a consequence , the sequence [\mathbb{n}] ] : draw , draw , draw , set {2 } \gets \cases { \hat{y } , & \quad with probability [ defined in ( \ref{eqacceptmetropolis } ) ] , \vspace*{2pt}\cr y , & \quad otherwise . } ] be the sequence [\mathbb{n}] ] implies -reversibility of .[ propinduces_rever ] the sequence generated in algorithm [ algalg2 ] is a -reversible markov chain . in ,the authors use the terminology _ randomized mcmc _( r - mcmc ) for a -reversible metropolis hastings chain generated using a set of auxiliary variables with a particular expression of the acceptance probability .although only one of these auxiliary variables is sampled at each time step , one may actually cast this approach into the framework of algorithm [ algalg2 ] by creating artificially another auxiliary variable according to the deterministic kernel where is any continuously differentiable involution on . even though is not dominated, it is possible to verify ( [ eqcondradon ] ) using that is an involution .we prove in appendix [ apprmcmc ] that the r - mcmc algorithm is a special case of algorithm [ algalg2 ] with this particular choice of and with the general form of the acceptance probability described in remark [ remgeneralradon ] . the _ generalized multiple - try metropolis _ ( gmtm ) _ algorithm _ is an extension of the _ multiple - try metropolis hastings algorithm _ proposed in .given {}=y ] and {2 } \sim\pi^{\ast} ] satisfying {i } \bigr ) } { h \bigl(\y[k]{i } \bigr ) } \bigr| < \infty\qquad \bigl(i \in\{1,2 \ } \bigr)\ ] ] it holds that {2 } \bigr ) } \leq\lim _ {n \to\infty } \frac{1}{n } \var{\sum_{k = 0}^{n - 1 } h \bigl(\y[k]{1 } \bigr)}.\ ] ] we preface the proof of theorem [ teocompalg1alg2 ] by the following lemma , which may serve as a basis for the comparison of _ homogeneous _ markov chains evolving according to ( or ) , , where and , , are kernels satisfying on some product space .[ lemhomogeneouscomp ] let and , , be kernels satisfying on , with and .in addition , assume that for all , then for all ] . 
by construction ,{i } \bigr ) } { f \bigl(\x[k]{i } \bigr)}\bigr|+\bigl|\covardu { f \bigl ( \x[1]{i } \bigr ) } { f \bigl(\x[k+1]{i } \bigr)}\bigr| \bigr ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad = \pi f^2 - \pi^2 f + 4 \sum _ { k = 1}^\infty\bigl|\covardu{h \bigl(\y[0]{i } \bigr ) } { h \bigl ( \y[k]{i } \bigr)}\bigr| < \infty\qquad \bigl(i \in\{0 , 1\ } \bigr ) , \nonumber\end{aligned}\ ] ] where finiteness follows from the assumption ( [ eqassbddnessproduct ] ) .moreover , for all and , {i } \bigr ) } = \var{\sum_{k = 0}^{n - 1 } h \bigl ( \cy[k]{i } \bigr ) } = \frac{1}{4 } \var{\sum_{k = 0}^{2n - 1 } f \bigl(\x[k]{i } \bigr)},\ ] ] which implies , by ( [ eqimpliedbddness ] ) , {i } \bigr ) } \qquad \bigl(i \in\{0 , 1\ } \bigr).\ ] ] finally , by ( [ eqimpliedbddness ] ) we may now apply theorem [ teomainresult ] to the chains {i } ; k \!\in\!\mathbb{n}\} ] : * draw , * set draw , draw , set {3 } , \aux[k+1]{3 } ) \gets \cases { ( \hat{y } , \hat{u } ) , & \quad with probability , \vspace*{3pt}\cr ( y , \check{u } ) , & \quad otherwise . } ] according to an acceptance probability that turns out to be a standard metropolis hastings acceptance probability ( which will be seen in the proof of theorem [ teocompalg1alg4 ] below ) .interestingly , this allows the desired distribution as the target distribution of {3 } , \aux[k]{3 } ) ; k \in\mathbb{n}\} ] , where is a kernel integrating to unity , providing the classical abc discrepancy measure between the observed data summary statistics and that evaluated at the simulated data . rejuvenating gimh - abc comprises an intermediate step in which the simulated data , generated under the current parameter , are refreshed systematically .however , since sampling from is typically infeasible , the auxiliary variables are refreshed through in the spirit of algorithm [ algalg2 ] . therefore , in accordance with algorithm [ algalgrefresh ] , a _ -reversible _ alternative to rejuvenating gimh - abc is obtained by , instead of refreshing systematically the data , performing refreshment with probability ( [ eqdefrho ] )note that the fact that the constant in the denominator of is typically not computable does not prevent computation of ( [ eqdefrho ] ) , since this constant appears in as well as .this provides a _ random refreshment gimh - abc _ , which can be compared quantitatively , via the theorem [ teocompalg1alg4 ] below , to the gimh - abc while at the same time avoiding the possible gimh - abc trapping states mentioned in . [ teocompalg1alg4 ]let and be the sequences of random variables generated by algorithms [ algalg1 ] and [ algalgrefresh ] , respectively , where {i},\break \aux[0]{i } ) \sim \pi ] satisfying {i } \bigr ) }{ h \bigl(\y[k]{i } \bigr ) } \bigr| < \infty\qquad \bigl(i \in\{1,3 \ } \bigr)\ ] ] it holds that {3 } \bigr ) } \leq\lim _ { n \to\infty } \frac{1}{n } \var{\sum_{k = 0}^{n - 1 } h \bigl(\y[k]{1 } \bigr)}.\ ] ] let the kernels and be defined as in the proof of theorem [ teocompalg1alg2 ] and introduce furthermore : * defined implicitly by the transition {3 } , \aux [ k]{3 } ) \rightarrow(\y[k]{3 } , \check{u}) ] , as each is -reversible , it holds that for all \times\ltwo[\pi] ] .then , for all ] exists , and { } \bigr)}\nonumber \\ & & \qquad = \pi f^2 - \pi^2 f \\ & & \quad\qquad { } + \sum_{k = 1}^{\infty } \covardu{f \bigl(\x[0 ] { } \bigr ) } { f \bigl(\x[k ] { } \bigr ) } + \sum_{k = 1}^{\infty } \covardu{f \bigl(\x[1 ] { } \bigr ) } { f \bigl(\x[k+1 ] { } \bigr)}. 
\nonumber\end{aligned}\ ] ] as covariances are symmetric , { } \bigr)}=\pi f^{2 } - \pi^{2}f + 2 n^{-1 } \sum_{0 \leq i < j \leq n - 1}\covardu{f \bigl(\x[i ] { } \bigr ) } { f \bigl(\x[j ] { } \bigr)}.\ ] ] we now consider the limit , as tends to infinity , of the last term on the right - hand side .let and denote the two complementary subsets of consisting of the even and odd numbers , respectively . for all such that , we have { } \bigr ) } { f \bigl(\x[j ] { } \bigr ) } = \cases { \covardu{f \bigl(\x[0 ] { } \bigr ) } { f \bigl(\x[j - i ] { } \bigr ) } , & \quad if , \vspace*{3pt}\cr \covardu{f \bigl(\x[1 ] { } \bigr ) } { f \bigl(\x[j - i+1 ] { } \bigr ) } , & \quad if .}\ ] ] this implies that { } \bigr ) } { f \bigl(\x[j ] { } \bigr ) } \\ & & \qquad = \sum_{k = 1}^{n - 1 } n^{-1 } \biggl ( \biggl\lfloor\frac{n - 1 - k}{2 } \biggr\rfloor+ 1 \biggr ) \covardu { f \bigl(\x[0 ] { } \bigr ) } { f \bigl(\x[k ] { } \bigr)}\end{aligned}\ ] ] and { } \bigr ) } { f \bigl(\x[j ] { } \bigr ) } \\ & & \qquad = \sum_{k = 1}^{n - 2 } n^{-1 } \biggl ( \biggl\lfloor\frac{n - 2 - k}{2 } \biggr\rfloor+ 1 \biggr ) \covardu { f \bigl(\x[1 ] { } \bigr ) } { f \bigl(\x[k+1 ] { } \bigr)}.\end{aligned}\ ] ] under ( [ eqeq2asslem2 ] ) , the dominated convergence theorem applies , which provides that the limit , as goes to infinity , of {})} ] the markov kernel {i}:=p_i \mathbh{1}_{\mathcal{e}}(n ) + q_i \mathbh{1}_{\mathcal{o}}(n) ] be such that for , {i } \cdots\r[k-1]{i}f \bigr\rangle \bigr\vert < \infty.\ ] ] then for all , {1 } \cdots\r[k-1]{1}f \bigr\rangle + \bigl\langle f , \r [ 1]{1 } \cdots\r[k]{1 } f \bigr\rangle \bigr ) \\ & & \qquad \leq\sum_{k=1}^{\infty } \lambda^k \bigl ( \bigl\langle f , \r[0]{0 } \cdots\r[k-1]{0}f \bigr\rangle + \bigl\langle f , \r [ 1]{0 } \cdots \r[k]{0}f \bigr\rangle \bigr).\end{aligned}\ ] ] for all and all , define {\alpha}:=(1-\alpha)\r[n]{0}+\alpha\r[n]{1} ] , where {\mathcal{e}}(\alpha ) & : = & \sum _ { k = 1}^{\infty } \lambda^k \bigl\langle f , \r[0 ] { \alpha } \cdots \r[k-1]{\alpha } f \bigr\rangle , \\\kh[\lambda]{\mathcal{o}}(\alpha ) & : = & \sum_{k = 1}^{\infty } \lambda^k \bigl\langle f , \r[1]{\alpha } \cdots\r[k]{\alpha } f \bigr \rangle.\end{aligned}\ ] ] now , fix a distinguished ; we want show that for all ] : {\mathcal { e}}}{\mathrm{d}\alpha } ( \alpha ) = \frac{\mathrm{d}}{\mathrm{d } \alpha } \sum _ { k = 1}^{\infty } \lambda^k \bigl\langle f , \r[0 ] { \alpha}\cdots \r[k-1]{\alpha}f \bigr\rangle.\ ] ] to interchange and in the previous equation , we first note that {\alpha } \cdots \r[k-1]{\alpha}f \bigr\rangle&= & \sum _ { \ell= 0}^{k - 1 } \frac{\partial}{\partial \alpha_\ell } \bigl\langle f , \r[0]{\alpha_0 } \cdots\r [ k - 1]{\alpha_{k - 1}}f \bigr \rangle \bigg\vert_{(\alpha_0,\ldots,\alpha_{k - 1})=(\alpha,\ldots,\alpha ) } \\ & = & \sum_{\ell= 0}^{k - 1 } \bigl \langle f , \r[0 { \nearrow}\ell-1]{\alpha } \bigl(\r[\ell]{1 } - \r[\ell ] { 0 } \bigr ) \r[\ell+ 1 { \nearrow}k - 1]{\alpha}f \bigr\rangle,\end{aligned}\ ] ] where {\alpha}:=\r[s]{\alpha } \r[s+1]{\alpha } \cdots\r[t]{\alpha} ] otherwise . 
by ( [ eqmajonormp ] ) , {\alpha } \|\leq1 ] .thus , as we may interchange , in ( [ eqderivh ] ) , and , yielding {\mathcal{e}}}{\mathrm{d}\alpha } ( \alpha)=\sum_{k=1}^{\infty } \lambda^{k}\sum_{\ell=0}^{k-1 } \bigl \langle f , \r[0 { \nearrow}\ell-1]{\alpha } \bigl(\r[\ell ] { 1}-\r [ \ell]{0 } \bigr ) \r [ \ell+1 { \nearrow}k-1]{\alpha}f \bigr\rangle.\ ] ] similarly , it can be established that {\mathcal{o}}}{\mathrm{d}\alpha } ( \alpha ) = \sum_{k=1}^{\infty } \lambda^{k}\sum_{\ell=1}^{k } \bigl \langle f , \r[1 { \nearrow}\ell-1]{\alpha } \bigl(\r[\ell ] { 1}-\r [ \ell]{0 } \bigr ) \r[\ell + 1 { \nearrow}k]{\alpha}f \bigr\rangle.\ ] ] we now apply lemma [ lemlem1 ] to the two previous sums . for this purpose , we will use the following notation : {\alpha } : = \r[s]{\alpha } \r[s-1]{\alpha } \cdots\r[t]{\alpha} ] otherwise . then {}}{\mathrm { d}\alpha } ( \alpha ) & = & \sum_{k=1}^{\infty } \lambda^k \biggl\{\sum_{\ell=0}^{k-1 } \bigl\langle\r[\ell-1 { \searrow}0]{\alpha}f , \bigl(\r[\ell]{1}-\r[\ell]{0 } \bigr)\r [ \ell+1 { \nearrow}k-1]{\alpha}f \bigr\rangle \\[1pt ] & & \hspace*{33pt}{}+\sum_{\ell=1}^{k } \bigl\langle\r[\ell-1 { \searrow}1]{\alpha}f , \bigl(\r[\ell]{1}-\r[\ell]{0 } \bigr)\r [ \ell+1 { \nearrow}k ] { \alpha}f \bigr\rangle \biggr\ } \\[1pt ] & = & \sum_{\ell=0}^{\infty}\sum_ { m=0}^{\infty}\lambda^{\ell+m+1 } \bigl\langle\r [ \ell-1 { \searrow}0]{\alpha}f , \bigl(\r[\ell]{1}-\r[\ell]{0 } \bigr)\r [ \ell+1 { \nearrow}\ell+m]{\alpha}f \bigr\rangle \\[1pt ] & & { } { } + \sum_{\ell=1}^{\infty}\sum _ { m=1}^{\infty}\lambda^{\ell+m-1 } \bigl\langle\r [ \ell-1 { \searrow}1]{\alpha}f , \bigl(\r[\ell]{1}-\r[\ell]{0 } \bigr)\r [ \ell+1 { \nearrow}\ell+m-1]{\alpha}f \bigr\rangle.\end{aligned}\ ] ] now , note that {\alpha}=\r[n']{\alpha} ] for all ; hence , separating , in the two previous sums , odd and even indices provides {}}{\mathrm { d}\alpha } ( \alpha ) & = & \sum_{\ell\in\mathcal{e}}\sum_{m=0}^{\infty } \lambda^{\ell+m+1 } \bigl\langle\r[1 { \nearrow}\ell]{\alpha}f , \bigl(\r[0]{1}- \r[0]{0 } \bigr)\r[1 { \nearrow}m]{\alpha}f \bigr\rangle \\[1pt ] & & { } + \sum_{\ell\in\mathcal{e}\setminus\{0\}}\sum_{m=1}^{\infty } \lambda^{\ell + m-1 } \bigl\langle\r[1 { \nearrow}\ell-1]{\alpha}f , \bigl(\r [ 0]{1}- \r[0]{0 } \bigr)\r[1 { \nearrow}m-1]{\alpha}f \bigr\rangle \\[1pt ] & & { } + \sum_{\ell\in\mathcal{o}}\sum_{m=0}^{\infty } \lambda^{\ell+m+1 } \bigl\langle\r[0 { \nearrow}\ell-1]{\alpha}f , \bigl ( \r[1]{1}- \r[1]{0 } \bigr)\r[0 { \nearrow}m-1]{\alpha}f \bigr\rangle \\[1pt ] & & { } + \sum_{\ell\in\mathcal{o}}\sum_{m=1}^{\infty } \lambda^{\ell+m-1 } \bigl\langle\r[0 { \nearrow}\ell-2]{\alpha}f , \bigl ( \r[1]{1}- \r[1]{0 } \bigr)\r[0 { \nearrow}m-2]{\alpha}f \bigr\rangle.\end{aligned}\ ] ] finally , by combining the even and the odd sums , {}}{\mathrm { d}\alpha } ( \alpha ) & = & \biggl\langle\sum_{\ell=0}^{\infty } \lambda^{\ell}\r[1 { \nearrow}\ell]{\alpha}f , \bigl(\r[0]{1}-\r[0]{0 } \bigr ) \sum_{m=0}^{\infty } \lambda^m \r[1 { \nearrow}m]{\alpha}f \biggr\rangle \\ & & { } + \biggl\langle\sum_{\ell=0}^{\infty } \lambda^\ell\r[0 { \nearrow}\ell-1]{\alpha}f , \bigl(\r[1]{1}-\r[1]{0 } \bigr ) \sum_{m=0}^{\infty}\lambda^m\r [ 0{\nearrow}m-1]{\alpha}f \biggr\rangle.\end{aligned}\ ] ] since {1}\pgeq[1 ] \r[n]{0} ] is nonnegative on ] it holds that {1}-\r [ n]{0})f \rangle\leq0 ] is nonincreasing on .the proof is complete .proof of theorem [ teomainresult ] according to lemma [ lemlem2 ] , for all functions ] it holds , for all and in , which establishes ( [ eqstargrev ] 
) by letting and .this completes the proof .{2 } = y \displaystyle \alpha^{(\mathrm{r})}(y , u , \hat{y}) \displaystyle:=1\wedge\frac{\pi^{\ast}(\hat{y } ) \check{r}(\hat{y } , y ) \check{s}(\hat{y } , y ; f(u))}{\pi^{\ast}(y)\check{r}(y,\hat{y } ) \check{s}(y , \hat{y } ; u ) } \bigg\vert\frac{\partial f}{\partial u}(u ) \bigg\vert ] : draw , let take the value w.pr . , let , draw , let , let {2 } \gets\cases { \hat{y } , & \quad with probability \vspace*{5pt}\cr & \quad\qquad , \vspace*{3pt}\cr y , & \quad otherwise.}\ ] ] in algorithm [ algalgmtm ] , the auxiliary variables are defined on and for all and , are sample weights .moreover , is an instrumental kernel defined on having the transition density with respect to some dominating measure on .[ lemmtm ] the gmtm algorithm is a special case of algorithm [ algalg2 ] . denoting by the random variables generated in step ( i ) in algorithm [ algalgmtm ] , the proposed candidate is obtained as , where is generated in step ( ii ) .let , where to obtain the joint distribution of conditionally on {2} ] is given by if is dominated by a nonnegative measure , then ( [ eqmtmr ] ) , ( [ eqmtms ] ) and ( [ eqmtmt ] ) show that the kernels , and are dominated as well . denoting by , and the corresponding transition densities ,it can be checked readily that so that defined in ( [ eqacceptmtm ] ) corresponds to the acceptance probability defined in ( [ eqacceptmetropolis ] ) with these particular choices of , and .consequently , the gmtm algorithm is a special case of algorithm [ algalg2 ] .note that in the previous proof , we have chosen the auxiliary variable as the vector of rejected candidates after step ( ii ) .another natural idea would consist in choosing , where the are obtained in step ( i ) ; however , since belongs to this set of candidates , the model would then not be dominated , which would make the proof more intricate .we thank the anonymous referees for insightful comments that improved significantly the presentation of the paper .a special thanks goes to the referee who provided the two counterexamples in remarks [ remcounterexsummability ] and [ remcounterexlemma ] , as well as the possible application of our methodology to the abc context in example [ exrrabc ] .
in this paper , we study the asymptotic variance of sample path averages for inhomogeneous markov chains that evolve alternatingly according to two different -reversible markov transition kernels and . more specifically , our main result allows us to compare directly the asymptotic variances of two inhomogeneous markov chains associated with different kernels and , , as soon as the kernels of each pair and can be ordered in the sense of lag - one autocovariance . as an important application , we use this result for comparing different data - augmentation - type metropolis hastings algorithms . in particular , we compare some pseudo - marginal algorithms and propose a novel exact algorithm , referred to as the _ random refreshment _ algorithm , which is more efficient , in terms of asymptotic variance , than the grouped independence metropolis hastings algorithm and has a computational complexity that does not exceed that of the monte carlo within metropolis algorithm .
wireless sensor and actor networks ( wsans ) are composed of sensor nodes and actors that are coordinated via wireless communications to perform distributed sensing and acting tasks . in wsans , sensor nodes collect information about the physical world , while actors use the collected information to take decisions and perform appropriate actions upon the environment .the sensor nodes are usually small devices with limited energy resources , computation capabilities and short wireless communication range . in contrast , the actors are equipped with better processing capability , stronger transmission powers and longer battery life .the number of actors in wsan is significantly lower than the number of sensor nodes .the wsans technology has enabled new surveillance applications , where sensor nodes detect targets of interest over a large area .the information collected by sensor nodes allows mobile actors to achieve surveillance goals such as target tracking and capture .several examples of the wsan - based surveillance applications can be found in the related literature , including land mine destruction , chasing of intruders , and forest fires extinguishing .the surveillance applications of wsans require real - time data delivery to provide effective actions .a fast response of actors to sensor inputs is necessary .moreover , the collected information must be up to date at the time of acting .on the other hand , the sensor readings have to be transmitted to the mobile actors through multi - hop communication links , which results in transmission delays , failures and random arrival times of packets .the energy consumption , transmission delay , and probability of transmission failure can be reduced by decreasing the amount of transmitted data .thus , minimization of data transmission is an important research issue for the development of the wsan - based surveillance applications .it should be noted that other methods can be used in parallel to alleviate the above issues , e.g. , optimisation of digital circuits design for network nodes .this paper introduces an approach to reduce the data transmission in wsan by means of suppression methods that were originally intended for wireless sensor networks ( wsns ) . the basic idea behind data suppression methodsis to send data to actors only when sensor readings are different from what both the sensor nodes and the actors expect . in the suppression schemes , a sensor node reports only those data readings that represent a deviation from the expected behaviour .thus , the actor is able to recognize relevant events in the monitored environment and take appropriate actions .the data suppression methods available in the literature were designed for monitoring applications of wsns . in such applications ,a sink node needs to collect information describing a given set of parameters with a defined precision or recognize predetermined events .these state - of - the - art suppression methods are based on an assumption that a large subset of sensor readings does not need to be reported to the sink as these readings can be inferred from the other transferred data . 
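as a concrete illustration of this reporting rule, the sketch below implements the simplest possible case, in which the value expected by the sink (or actor) is just the last value reported by the node; this corresponds to the naive temporal scheme recalled below and is illustrative pseudocode rather than code taken from any of the cited suppression methods.

....
# illustrative sketch of threshold-based temporal suppression at a single
# sensor node: a reading is transmitted only when it deviates from the last
# reported value by more than a given error bound.
class TemporalSuppression:
    def __init__(self, threshold):
        self.threshold = threshold     # maximum tolerated deviation
        self.last_reported = None      # value the sink / actor currently assumes

    def should_report(self, reading):
        deviates = (self.last_reported is None
                    or abs(reading - self.last_reported) > self.threshold)
        if deviates:
            self.last_reported = reading
            return True                # transmit: reading cannot be inferred
        return False                   # suppress: sink keeps the old value
....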
in order to infer suppressed data, the sink uses a predictive model of the monitored phenomena. the same model is used by sensor nodes to decide if particular data readings have to be transmitted. a sensor node suppresses transmission of a data reading only when it can be inferred within a given error bound. temporal suppression techniques exploit correlations between current and historical data readings of a single sensor node. the simplest scheme uses a naive model, which assumes that the current sensor reading is the same as the last reported reading. when using this method, a sensor node transmits its current reading to the sink only if the difference between the current reading and the previously reported reading is above a predetermined threshold. parameters monitored by wsns usually exhibit correlations in both time and space. thus, several more sophisticated spatiotemporal suppression methods were proposed that combine the basic temporal suppression with detection of spatially correlated data from nearby nodes. according to the spatiotemporal approach, sensor nodes are clustered based on spatial correlations. sensor readings within each cluster are collected at a designated node (cluster head), which then uses a spatiotemporal model to decide if the readings have to be transmitted to the sink. in previous work of the first author a decision-aware data suppression approach was proposed, which eliminates transfers of sensor readings that are not useful for making control decisions. this approach was motivated by the observation that for various control tasks large amounts of sensor readings often do not have to be transferred to the sink node, as the control decisions made with and without these data are the same. the decision-aware suppression was used for optimizing transmission of target coordinates from sensor nodes to a mobile sink which has to track and catch a moving target. according to that approach, only selected data are transmitted that can be potentially useful for reducing the time in which the target will be reached by the sink. to the authors' knowledge, there is a lack of data suppression methods in the literature dedicated to the surveillance applications of wsans. in this paper the available data suppression methods are adapted to meet the requirements of the wsans. effectiveness of these methods is evaluated by using a model of wsan, where mobile actors have to capture randomly distributed targets in the shortest possible time. the paper is organized as follows. details of the wsan model are discussed in section 2. section 3 introduces algorithms that are used by actors to navigate toward targets as well as algorithms of sensor-actor communication that are based on the data suppression concept. results of simulation experiments are presented in section 4. finally, conclusions are given in section 5. in this study a model of wsan is considered, which includes 16 actors and 40000 sensor nodes. the monitored area is modelled as a grid of 200 x 200 square segments. discrete coordinates are used to describe positions of segments, sensor nodes, actors, and targets ( , ). the sensor nodes are placed in the centres of the segments. each sensor node detects the presence of a target in a single segment. the communication range of a sensor node covers the segment where this node is located as well as the eight neighbouring segments. the radius of the actor's communication range equals 37 segments.
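as an aside, the naive temporal-suppression rule described above is easy to make concrete. the short python sketch below models a single sensor node that transmits a reading only when it deviates from the last reported value by more than a threshold; the class name, the threshold and the sample readings are illustrative assumptions of ours rather than values taken from this paper.

```python
# Minimal sketch of the naive temporal-suppression rule: a node reports a
# reading only if it differs from the last reported value by more than a
# fixed threshold.  Names and numbers are illustrative assumptions.

class TemporalSuppressionNode:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_reported = None          # value the actor/sink currently assumes

    def maybe_report(self, reading):
        """Return the reading if it must be transmitted, otherwise None."""
        if self.last_reported is None or abs(reading - self.last_reported) > self.threshold:
            self.last_reported = reading
            return reading                 # transmit: deviation exceeds the error bound
        return None                        # suppress: the receiver can infer the value

if __name__ == "__main__":
    node = TemporalSuppressionNode(threshold=0.5)
    for r in [20.0, 20.2, 20.4, 21.1, 21.2, 19.9]:
        sent = node.maybe_report(r)
        print(r, "->", "sent" if sent is not None else "suppressed")
```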
in most cases ,the sensor nodes have to use multi - hop transmission for reporting their readings to actors . due to the long communication range, each actor can transmit data to a large number of nodes ( up to 4293 ) directly in one hop .the task of sensor nodes is to detect stationary targets in the monitored area and report their positions to actors . on the basis of the received information, each actor selects the nearest target and moves toward it .this process is executed in discrete time steps .maximum speed of actor equals two segments per time step . at each timestep three new targets are created at random positions .a target is eliminated if an actor reaches the segment in which the target was detected .the targets may correspond to fires , intruders , landmines , enemy units , etc .default ( initial ) positions of actors were determined to ensure that the communication ranges of the 16 actors cover the entire monitored area ( fig .[ fig : fig1bp ] ) .an actor , which has not received information from sensor nodes about current target locations moves toward its default position . such situation occurs when there is no target within the actor s range or the information about detected target is suppressed by sensor node . for the above wsan model , data communication costis evaluated by using two metrics : number of data transfers ( packets sent from sensor nodes to actors ) , and total hop count .the hop count is calculated assuming that the shortest path is used for each data transfer .performance of the targets elimination by actors is assessed on the basis of average time to capture , i.e. , the time from the moment when a target is created to the moment when it is captured by an actor and eliminated .during target chasing , the mobile actors decide their movement directions based on the navigation algorithm , which is presented in tab . [tab : tab1bp ] .each actor holds a target map to collect the information delivered by particular sensor nodes .an element of target map equals 1 if the -th actor has received information that there is target detected in segment .in opposite situation , the target map element equals 0 .the following symbols are used in the pseudo - code of the navigation algorithm : denotes a segment in which the -th actor is currently located , is the nearest target according to the information collected by -th actor in its target map , is the selected destination segment , and denotes the default actor position .it should be remembered that discrete coordinates are used to identify the segments .current position of actor , the destination segment as well as the targets map are broadcasted by the actor to all sensor nodes in its communication range .an actor moves toward its default position unless a target is registered in its target map .each actor takes a decision regarding the segment into which it will move during the next time step .the actor s decision is taken by solving the following optimization problem : where denotes the euclidean distance between segments and ( segments per time step ) is the maximum speed of actor ..pseudo - code of navigation algorithm executed by -th actor [ cols= " < , < " , ] [ tab : tab3bp ] it should be noted that for the sts algorithm corresponds to ts . similarly , das corresponds to ts for as the radius of actor s communication range equals 37 . 
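for readers who prefer code to pseudo-code, the core of the navigation rule of tab. [tab:tab1bp] can be sketched as follows. the helper names are ours, grid-boundary clipping and tie-breaking are omitted, and we assume that the speed limit bounds the euclidean displacement per step, as the optimisation problem above suggests; this is an illustrative sketch, not the authors' implementation.

```python
import math

def nearest_target(actor_pos, target_map):
    """Segment of the closest known target, or None when the target map is empty."""
    return min(target_map, key=lambda t: math.dist(actor_pos, t)) if target_map else None

def next_segment(actor_pos, destination, max_speed=2):
    """One navigation step: pick the reachable segment closest to the destination."""
    x, y = actor_pos
    reachable = [(x + dx, y + dy)
                 for dx in range(-max_speed, max_speed + 1)
                 for dy in range(-max_speed, max_speed + 1)
                 if math.hypot(dx, dy) <= max_speed]     # assumed Euclidean speed bound
    return min(reachable, key=lambda seg: math.dist(seg, destination))

actor = (10, 10)
default_position = (50, 50)
targets = {(3, 4), (30, 25)}                 # segments whose target-map entry equals 1
goal = nearest_target(actor, targets)
dest = goal if goal is not None else default_position
print(next_segment(actor, dest))             # a segment up to two steps closer to (3, 4)
```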
the shortest time to capturewas achieved by using the das-1 algorithm with segments .the lowest hop counts were obtained for the sts algorithms with high values .figures [ fig : fig4bp ] and [ fig : fig5bp ] present detailed results for selected settings that allow the compared algorithms to achieve the maximum performance , i.e. , minimum average time to capture .the error bars show the range between minimum and maximum of the metrics obtained from the 20 simulation runs .average values are depicted as columns . ) , and das-1 ( ),width=415 ] ) , and das-2 ( ),width=415 ] according to the presented results , it can be concluded that the spatiotemporal and decision aware suppression methods reduce the number of data transfers and hop counts in comparison with the temporal suppression .moreover , these approaches decrease the average time in which actors eliminate the targets . the effect of decreased time to capture is especially visible for the algorithms that are based on the decision aware suppression .the reason underlying these results arises from the fact that when using the sts and das algorithms the sensor nodes do not report the detected targets if it is not necessary for effective navigation of actors .the information about target is transmitted from a sensor node to an actor when the distance between them is shorter .therefore , the probability that a new target will appear closer to the selected actor before it reaches the previously reported target is diminished and there is smaller chance that the assignment of targets to actors will be non - optimal .reduction of data transmission is an important issue for the development of wsan - based surveillance applications that require real - time data delivery , energy conservation , and effective utilization of the bandwidth - limited wireless communication medium . in this paperan approach is introduced to reduce the data transmission in wsan by means of suppression methods that were originally intended for wireless sensor networks .communication algorithms based on temporal , spatiotemporal , and decision aware data suppression methods are proposed for a wsan system in which mobile actors have to capture distributed targets in the shortest possible time .effectiveness of the proposed data communication algorithms was verified in computational experiments by using a wsan model .the experimental results show that the spatiotemporal and decision aware suppression methods reduce the number of data transfers and hop counts in comparison with the temporal suppression , which ensures that the actors receive complete information about targets detected in their communication ranges .further research will be conducted to test the proposed approach in more complex network scenarios . moreover , an interesting topic for future works is to investigate the impact of transmission failures on performance of the presented algorithms .vedantham , r. , zhuang , z. , sivakumar , r. : mutual exclusion in wireless sensor and actor networks . in : 3rd annual ieee communications society on sensor and ad hoc communications and networks secon06 ,1 , pp . 346355 ( 2006 ) alippi , c. , anastasi , g. , di francesco , m. , roveri , m. : an adaptive sampling algorithm for effective energy management in wireless sensor networks with energy - hungry sensors . ieee transactions on instrumentation and measurement , vol . 59 , no .2 , pp . 335344 ( 2010 ) evans , w. c. , bahr , a. , martinoli , a. 
: distributed spatiotemporal suppression for environmental data collection in real-world sensor networks. in: ieee international conference on distributed computing in sensor systems dcoss, pp. 70-79 (2013)
silberstein, a., gelfand, a., munagala, k., puggioni, g., yang, j.: making sense of suppressions and failures in sensor data: a bayesian approach. in: proceedings of the 33rd international conference on very large data bases, pp. 842-853 (2007)
paczek, b.: communication-aware algorithms for target tracking in wireless sensor networks. in: kwiecien, a., gaj, p., stera, p. (eds.) ccis, vol. . springer, heidelberg (2014)
paczek, b., bernas, m.: optimizing data collection for object tracking in wireless sensor networks. in: kwiecien, a., gaj, p., stera, p. (eds.) ccis, vol. , pp. 485-494. springer, heidelberg (2013)
this paper introduces algorithms for surveillance applications of wireless sensor and actor networks ( wsans ) that reduce communication cost by suppressing unnecessary data transfers . the objective of the considered wsan system is to capture and eliminate distributed targets in the shortest possible time . computational experiments were performed to evaluate effectiveness of the proposed algorithms . the experimental results show that a considerable reduction of the communication costs together with a performance improvement of the wsan system can be obtained by using the communication algorithms that are based on spatiotemporal and decision aware suppression methods . sensor and actor networks , data suppression , target tracking , surveillance applications
nature offers a wide range of phenomena characterized by power - law distributions : diameter of moon craters , intensity of solar flares , the wealth of the richest people and intensity of terrorist attacks , to name a few .these distributions are so - called _ heavy - tailed _ , where the fractional area under the tail of the distribution is larger than that of a gaussian and there is thus more chance for samples drawn from these distributions to contain large fluctuations from the mean . anatomical defectsaside , the cosmic ray ( cr ) energy spectrum follows a power - law for over ten orders of magnitude .the predicted abrupt deviation at the very highest energies ( the gzk - cutoff ) has generated a fury of theoretical and experimental work in the past half century .recently , bahcall and waxman ( 2003 ) have asserted that the observed spectra ( except agasa ) are consistent with the expected flux suppression above .however , the incredibly low fluxes combined with as much as % uncertainty in the absolute energy determination means that there has yet to be a complete consensus on the existence of the gzk - cutoff energy . with this in mind , we consider statistics which suggest an answer to a different question : _ do the observed cr spectra follow a power - law ?_ specifically , these studies are designed to inquire whether or not there is a flux deviation relative to the power - law form by seeking to minimize the influence of the underlying parameters .the two experimental data sets considered in this study are the agasa experiment and the preliminary flux result of the pierre auger observatory .the discussion in [sec : data ] uses these spectra to introduce and comment on the power - law form .the first distinct statistical test is applied to this data in [sec : dlv ] where we explore the distribution of the largest value of a sample drawn from a power - law . in [sec : tp ] we apply the tp - statistic to the cr flux data .this statistic is asymptotically zero for pure power - law samples _regardless _ of the value power index and therefore offers a ( nearly ) parameter free method of determining deviation from the power - law form .the final section summarizes our results .a random variable is said to follow a power - law distribution if the probability of observing a value between and is where .normalizing this function such that gives , it is convenient to choose , and doing so yields for reference , one minus the cumulative distribution function is given by , taking the log of both sides of equation ( [ eq : pwlpdf ] ) yields where is an overall normalization parameter , and suggests a method of estimating ; the _ power index _ is the slope of the best fit line to the logarithmically binned data ( i.e. bin - centers with equally spaced logarithms ) . 
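the log-log fitting procedure just described is straightforward to reproduce. the sketch below draws a sample from a pure power law by inverse-transform sampling, assuming the standard normalisation in which the density is proportional to (x/x_min)^(-gamma) above x_min, bins it logarithmically and estimates the power index from the slope of the binned densities; the parameter values are placeholders, and the known bias of log-binned estimates relative to maximum likelihood is ignored here.

```python
import numpy as np

def sample_power_law(n, gamma, x_min=1.0, rng=None):
    """Inverse-transform sampling from p(x) ~ x**(-gamma), x >= x_min, gamma > 1."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(n)
    return x_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

def log_binned_index(sample, n_bins=20):
    """Estimate gamma as minus the slope of log(density) vs log(bin centre)."""
    edges = np.logspace(np.log10(sample.min()), np.log10(sample.max()), n_bins + 1)
    counts, _ = np.histogram(sample, bins=edges)
    widths = np.diff(edges)
    centres = np.sqrt(edges[:-1] * edges[1:])            # geometric bin centres
    keep = counts > 0
    density = counts[keep] / widths[keep]
    slope, _ = np.polyfit(np.log10(centres[keep]), np.log10(density), 1)
    return -slope

x = sample_power_law(100000, gamma=2.7)                   # placeholder index and size
print(log_binned_index(x))                                # roughly 2.7, up to binning bias
```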
in what follows, we refer to the logarithmically binned estimate of the power index as and assume that the typical /ndf is indicative of the goodness of fit .the fitting is done with two free parameters , namely and .this figure displays published agasa and auger cr energy spectra .both axis have logarithmic scales to illustrate the power - law behavior .the vertical axis is the flux in ( m sr sec ev) and the horizontal axis is the energy in ev .the best fit lines ( see [ eq : loglogfit ] ) have slope and ( statistical error only).,width=419,height=264 ] the energy flux of two publicly available data sets are shown in fig .[ fig : specs ] .the the red point - down triangles represent the log of the binned agasa flux values in units of ( m sr sec ev) and the blue point - up triangles correspond to the auger flux .the vertical error bars on each bin reflect the poisson error based on the number of events in that bin . the log - binned estimates for each complete cr data set are the slopes of the dashed lines plotted in fig .[ fig : specs ] . to check the stability of we estimate the power index as a function of the minimum energy considered for the agasa and auger cr data sets ;see fig.[fig : specs ] .the left most point is the slope of the best fit lines plotted in fig.[fig : specs ] .the vertical error bars represent deviation.,width=419,height=264 ] in order to check the stability of to bound on our estimate , we compute the estimated power index as a function of the minimum energy considered for each of the two cr data sets . the left - most blue ( red ) point in fig .[ fig : gests ] shows for the auger ( agasa ) data taking into account all of the bin values above ( ) , the next point to the right represents that for all bins above ( ) , and so on .the vertical error bars on these points represent the error of the estimate . to ensure an acceptable chi - squared statistic, we demand that at least five bins be considered , thereby truncating at for the auger and for the agasa data set .the /ndf for the left - most points is and it increases to for the right - most for both experiments .we note that these estimates do not vary widely for the lowest s and that the values of from these experiments are consistent .the analyses discussed in [sec : dlv ] and [sec : tp ] will depend on the total number of events in the data set .since these numbers are not published we use a simple method for estimating them from the cr flux data .if the exposure is a constant function of the energy , then we may take the flux to be proportional to the number of events in the bin and the exposure , namely .the auger exposure is reported to be constant over the energy range reported with ( m sr sec ) .the agasa collaboration report flux data all the way down to but the exposure of the experiment can be considered approximately constant only for energies above ( see fig .14 of ) where ( m sr sec ) . using this methodwe get a total of 3567 events with for the auger flux and 1914 with for the agasa experiment .as evidence suggestive of a gzk - cutoff , an often cited quantity is the flux suppression , or the ratio of the flux one would expect from a power - law to that actually observed above a given maximum , say , . 
since one may estimate the flux suppression by estimating the number of events out of expected above a given maximum as = n_{tot}z_{max}^{1-\gamma} ] .in this section we derive a similar test statistic based on the distribution of the maximum event from a power - law sample .the statistic discussed here approaches for large and allows us to show that the estimation errors associated with are enough to disallow any significant conclusion about the presence of flux suppression for the highest energy cr s .the form of the power - law distribution allows us to calculate the pdf of the largest value , , out of n events . using the equations ( [ eq : pwlpdf ] ) and ( [ eq : pwlcdf ] ) we can say that the probability that any one value falls between and _ and _ that all of the others are less than it is . there are n ways to choose this event andso the probability for the largest value to be between and is in terms of the ratio , this can be written as fig .[ fig : maxpdf ] contains a plot of this distribution for with three choices of n. the glaring implication of this plot is that even for `` small '' n nearly all of the integral of is above .this implies that the probability of the maximum energy event falling below 10 times the minimum is very small , for a power - law with these parameters . a plot of the probability distribution of the maximum of a sample drawn from a power - law with power index .this is the distribution defined in equation ( [ eq : maxesz ] ) where is the ratio of the maximum to the minimum .the sample sizes are and .,width=359,height=226 ] motivated by the location and shape of we consider the probability that the maximum ratio from a given sample is less than or equal to a particular value , , equation ( [ eq : pxc ] ) approaches the poisson probability mentioned above ; ^{n_{tot } } \rightarrow \exp[-n_{tot}z_{max}^{1-{{\gamma } } } ] = \mathcal{p}(0 , n_{sup}) ] and a reasonable guess for the power index . since a larger will lead to a larger value of we will conservatively take the highest energy agasa ( resp .auger ) event to fall on the upper edge of the highest energy bin .the method of determining the number of events in each bin is described in [sec : data ] and here the parameter represents the total number above a given minimum .we will use the logarithmically binned estimates and errors of discussed in [sec : data ] . a plot of the probability that the maximum of a sample drawn from a power - law will be less than or equal to the maximum observed by the auger ( , blue point - up ) and agasa ( , red point - down ) experiments as a function of the minimum energy considered .the vertical error bars represent the effect of a deviation and the hatched area shows the 5% significance level.,width=419,height=264 ] the plot in fig .[ fig : pmaxlast ] shows given and as a function of minimum energy considered for each of the cr data sets in fig .[ fig : gests ] . 
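the probability in equation ([eq:pxc]) and its poisson limit are simple to evaluate once the number of events, the power index and the observed maximum ratio are fixed. the sketch below assumes the cumulative form [1 - z^(1-gamma)]^n_tot written above; the numbers plugged in are placeholders of roughly the right order, not the published values.

```python
import numpy as np

def p_max_leq(z_max, n_tot, gamma):
    """P(max ratio <= z_max) for n_tot draws from a power law with index gamma,
    using the exact form [1 - z**(1-gamma)]**n_tot."""
    return (1.0 - z_max ** (1.0 - gamma)) ** n_tot

def p_max_leq_poisson(z_max, n_tot, gamma):
    """Poisson limit exp(-n_sup) with n_sup = n_tot * z_max**(1-gamma)."""
    return np.exp(-n_tot * z_max ** (1.0 - gamma))

# Placeholder numbers, roughly in the regime discussed in the text
# (a few thousand events spanning somewhat less than two decades in energy).
gamma, n_tot, z_max = 2.7, 3567, 10 ** 1.8
print(p_max_leq(z_max, n_tot, gamma), p_max_leq_poisson(z_max, n_tot, gamma))
# Both forms give a few percent here; the exact and Poisson values agree closely.
```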
in particular , for each the values of , and are estimated from the cr flux andthe resulting are plotted for the auger ( blue ) and agasa ( red ) data .for example , the left - most auger point represents the probability that if events are drawn from a power - law with then there is a chance that the maximum log - ratio would be less than or equal to that reported by the auger experiment , namely (ev ) .taken at face value , one may reject the null hypothesis at the 5% s.l .for this data set .the left - most agasa point represents the same probability for the complete set of agasa data , namely for events drawn from a power - law with .thus we can not reject the null hypothesis for the agasa data .the upper ( lower ) vertical error bars depicted in fig .[ fig : pmaxlast ] represent the value of if we have under ( over ) estimated the power index by , that is if , keeping the log - ratio and the total number of events constant .( the possible errors in the total number of events are on the order of a few percent and are negligible . ) since the fitting scheme considers successively lower energy bins , the points ( and errors ) for each experiment plotted in fig .[ fig : pmaxlast ] are highly correlated .the upper error bars fall above the 5% s.l .for all minimums considered and therefore the statistical error associated with is enough to disallow rejection of the power - law hypothesis .the biggest systematic measurement uncertainty in the cr data is the calibration of the energy .this uncertainty leads to an error in the reported absolute energy values of for the agasa data and as much as % for the highest energy events in the auger data set .since the probability considered here depends only on the ratio of the observed energies , it is independent of any constant systematic uncertainty in the energy determination .however , this probability is sensitive to energy errors which vary over the range considered and will thus cause uncertainty in . for example , if we take the maximum to be 50% higher ( but hold and constant ) the value of represented by the left most auger point in fig .[ fig : pmaxlast ] changes from 1.9% to 17% .thus the large uncertainty in combined with the errors associated with implies that the preliminary auger data set does not suggest sufficient evidence to reject the pure power - law hypothesis for all events above (ev ) .considering the error and extra degree of freedom associated with , an analysis of a distribution s adherence to the power - law form without reference to , or regard for , this parameter is could lead to enhanced statistical power .first proposed by v. pisarenko and d. sornette , the so - called _ tp - statistic_ is a function of random variables that ( in the limit of large ) tends to zero for samples drawn from a power - law , regardless of the value of .( tp stands for _ tail power _ , as oppossed to te , also introduced in , which stands for _ tail exponential_. ) this section will describe the tp - statistic and apply it to the cr data . the raw moments of the pdf equation ( [ eq : pwlpdf ] ) are thus power - laws with have a finite mean but an infinite variance ( in the limit of large n ) and sample statistics created from these moments are not particularly helpful . however , taking the natural logarithm of allows the integrals to converge and one may write ( for all and ) , the tp - statistic is calculated by noting that . 
therefore ,if we use the sample analog of these quantities , namely then we can define ( for all ) , by the law of large numbers this sample statistic tends to zero as , independent of the value of .the tp - statistic allows us to test for a power - law like distribution without comment about the value of the power index .furthermore , for any one sample we can vary from the sample minimum to the sample maximum and calculate the tp - statistic over the range of in the sample . given complete event lists one may use equation ( [ eq : tp ] ) to calculate the tp - statistic for the unbinned data . since only the binned cr flux is publicly available we adapt the statistic to a binned analysis and apply it first to an example distribution with a cutoff and then to the cr data sets . in order to build intuition about the tp - statistic and its variance before studying the cr data, we first apply this statistic to simulated event sets drawn from both a pure power - law distribution and a similar distribution with a cut - off .the cut - off pdf is chosen so that it mimics a power - law for the lowest values but has an abrupt ( and smooth ) cut - off at a particular value , say .the functional form we will use here is the normalization of this pdf is , the value of which must be computed numerically .[ fig : fdhisto ] contains a logarithmically binned histogram of 3000 events drawn from a pure power - law ( black circles ) with and , and two pdf s in the from of equation ( [ eq : fdpdf ] ) ; the magenta squares have and the green triangles have . while arbitrary , the values of these parameters are chosen to be similar to the agasa and auger data ( see fig.[fig : specs ] ) .logarithmically binned histogram of 3000 events drawn from a pure power - law with and two power - laws with a cut , see equation ( [ eq : fdpdf ] ) .the magenta squares are drawn from the distribution with and the green triangles have . as noted in the text , while arbitrary , the values of these parameters are chosen to be similar to the agasa and auger data ( see fig.[fig : specs]).,width=359,height=226 ] if we write the sorted ( from least to greatest ) values from a sample as , the solid black line in fig .[ fig : fdtp ] is created by calculating for each value of the 3000 events drawn from the pure power - law histogram in fig .[ fig : fdhisto ] .the circles represent the mean of the the statistic within the bin , say , and the vertical error bars show the root - mean - squared deviation of the statistic within the bin .note that the total number of events considered by the statistic decreases quickly from left to right which leads to a bias in and an increasing variance of the statistic .the tp - statistics , defined in equation equation ( [ eq : tp ] ) , as a function of minimum value `` '' for the 3 sets of 3000 events plotted in fig .[ fig : fdhisto ] . 
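because the explicit expressions are lost in the extraction above, the sketch below assumes the pisarenko-sornette form of the sample statistic, tp_u = (mean of ln(x/u))^2 - (1/2)(mean of ln^2(x/u)) taken over the events with x >= u, which is consistent with the integral spelled out in the appendix and does tend to zero, for large samples, for a pure power law of any index. the hard truncation used for contrast is only a crude stand-in for the smooth cut-off pdf of equation ([eq:fdpdf]).

```python
import numpy as np

def tp_statistic(sample, u):
    """Sample TP-statistic of the tail above u, assuming the Pisarenko-Sornette
    form (mean ln(x/u))**2 - 0.5 * mean ln(x/u)**2, which tends to zero for a
    pure power law regardless of the power index."""
    tail = np.log(sample[sample >= u] / u)
    return np.mean(tail) ** 2 - 0.5 * np.mean(tail ** 2)

rng = np.random.default_rng(2)
pure = (1.0 - rng.random(3000)) ** (-1.0 / 1.7)   # power law with gamma = 2.7, x_min = 1
capped = pure[pure < 10.0]                        # crude hard cut-off, stand-in for eq. ([eq:fdpdf])

for u in (1.0, 2.0, 4.0):
    print(f"u = {u}:  pure {tp_statistic(pure, u):+.4f}   cut-off {tp_statistic(capped, u):+.4f}")
# The pure sample stays consistent with zero, while the truncated sample is
# pushed towards positive values: its tail lacks the largest events, so the
# second log-moment is small compared with the square of the first.
```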
also plottedis the mean of the tp - statistic within each of the logarithmically spaced bins which is referred to in the text as .the vertical error bars represent the rms deviation of the statistic within each bin .parenthetically , with increased statistics , say 10,000 events , the distinct characteristics of the tp - statistic for a pure power - law , a power - law with a cutoff or a power - law with a cutoff become more clearly different.,width=419,height=264 ] the jagged magenta line fig .[ fig : fdtp ] shows the most obvious deviation from the power - law form ; it is systematically offset from zero for nearly all minima of the data set .of course , with 3000 events the histograms ( see fig .[ fig : fdhisto ] ) are enough to distinguish between these two distributions . but the tp - statistic allows us to see this deviation by considering the entire data set ( the left most magenta point in fig . [ fig : fdtp ] ) , not just by analyzing the events in the upper most bins .the green line in the figure shows for events drawn from equation ( [ eq : fdpdf ] ) with .the histogram for this set is not as clearly different from the power - law as the magenta points and neither is the tp - statistic ; the left - most green point shows no more deviation from zero than the power - law .however , as the minimum increases ( and nears ) the statistic moves away from zero ( more noise not withstanding ) and suggests that the data above the minimum deviate from the power - law .it is important to note that the tp - statistic is positive for both of the cutoff distributions .recall that for a pure power - law , . the cutoff distribution, however , lacks an extended tail and will therefore have a smaller second log - moment as compared with ( the square of ) the first log - moment and will thus result in a positive tp - statistic .a distribution with an enhancement , rather than a cutoff , in the tail would result in a negative tp - statistic , since it would have a larger second log - moment ( i.e. a larger `` variance '' ) .see the appendix ( [sec : app ] ) for a detailed discussion of the tp - statistic applied to the double power - law . to quantify the significance of the tp - statistics deviation from zero , sets of 3000 eventswere generated for each of the three distributions discussed in this section .for each set we calculate the _ mean _ tp - statistic within each of the logarithmically spaced bins .the resulting distribution of s within each bin is then fitted to a gaussian .the fitted mean and deviation of the s ( see definition in text ) within each bin for the three distributions described in the text .this plot is the result of simulated sets of events , where fig.[fig : fdtp ] is one example , and where each set contains 3000 events.,width=419,height=264 ] the black circles in fig .[ fig : fdtp_skies ] represent the mean of the gaussian fit to the distribution of s within each bin for a power - law and the error bars on the points represent the fitted deviation of the s .we interpret the left - most of these points in the following way : for 3000 events drawn from a power - law the `` expected value '' of in the first bin is effectively indistinguishable from zero , as expected .though the statistic itself does not depend on , the variance on this value does .the reason for this is that the variance of the s depends on the average total number of events greater than a given minimum , which is influenced by . 
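the monte-carlo procedure described here can be reproduced in outline as follows: many synthetic power-law sets of a fixed size are drawn, the tp-statistic is evaluated at a few logarithmically spaced minima (for brevity one value per bin, rather than averaging over every sample value as in the text), and the mean and spread per bin are reported. the tp form and all parameter values are the same assumptions as in the previous sketch.

```python
import numpy as np

def tp(sample, u):
    l = np.log(sample[sample >= u] / u)
    return np.mean(l) ** 2 - 0.5 * np.mean(l ** 2)

def null_tp_bands(gamma, n_events, n_sets=200, n_bins=6, z_top=30.0, seed=1):
    """Mean and rms of the TP-statistic over many pure power-law sets,
    evaluated at the geometric centre of each logarithmic bin of minima."""
    rng = np.random.default_rng(seed)
    edges = np.logspace(0.0, np.log10(z_top), n_bins + 1)
    centres = np.sqrt(edges[:-1] * edges[1:])
    vals = np.full((n_sets, n_bins), np.nan)
    for k in range(n_sets):
        x = (1.0 - rng.random(n_events)) ** (-1.0 / (gamma - 1.0))
        for b, u in enumerate(centres):
            if np.count_nonzero(x >= u) > 1:      # need at least two surviving events
                vals[k, b] = tp(x, u)
    return centres, np.nanmean(vals, axis=0), np.nanstd(vals, axis=0)

centres, mean, spread = null_tp_bands(gamma=2.7, n_events=3000)
for u, m, s in zip(centres, mean, spread):
    print(f"u = {u:6.2f}   <tp> = {m:+.4f}   rms = {s:.4f}")
# The means stay close to zero (with a small bias at the largest minima),
# while the spread grows with u because fewer events survive the cut --
# the behaviour described in the text.
```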
in this casethe total number of events per set for minima in the first bin is at least a few thousand and the variance of the s is .these errors increase from left to right since each successively higher bin will contain s based on fewer and fewer events .the magenta squares represent the fitted mean as a function of for sets drawn from a power - law with a cut - off at .they deviate from zero for all but the largest .furthermore , this offset is statistically significant for the lowest few bins of , where the statistic reflects the deviation from power - law considering most of the events in the set .the green triangles show the fitted means for the distribution .they also display some deviation from zero , but they are not as significant since they fall near the errors for the pure power - law distribution . the distribution of the s in the first bin of fig.[fig : fdtp_skies ] ( the bin with minimum ) for the simulated pure power - law ( black , shaded ) and a power - law with a cut - off at ( magenta , hatched ) .for these distributions ( see equation ( [ eq : ptp])).,width=359,height=226 ] indeed , one may inquire as to which of the bins deviate the most from the simulated power - law .this is equivalent to asking , `` above what minimum do the data generated from this cut - off distribution maximally deviate from a pure power - law ? '' to quantify this deviation , we use a p - value given by where is the mean of the fitted gaussian and is the standard deviation .we reject the pure power - law hypothesis ( at the 5% s. l. ) if .the mean of the gaussian fit to the distribution of s for the power - law in the bin with minimum is with a standard deviation .the mean of the fitted gaussian for the distribution in this bin is with a standard deviation . therefore , the significance level of the deviation is and we can reject the pure power - law hypothesis for this distribution .the distribution of the s for this bin is plotted in fig . [fig : fdthe1tp_100skies ] for the pure power - law ( black , shaded ) and the ( magenta , hatched ) pdf .the maximum deviation for the pdf occurs in the bin with minimum and the corresponding distributions of are plotted in fig .[ fig : fdthe2tp_100skies ] .the significance of this deviation is lower ; . the distribution of the s in the fifth bin of fig.[fig : fdtp_skies ] ( bin with minimum ) for the simulated pure power - law ( black , shaded ) and a power - law with a cut - off at ( green , hatched ) . for these distributions ( see equation ( [ eq : ptp])).,width=359,height=226 ] in order to apply the tp - statistic to the cr data , monte - carlo simulations were conducted and analyzed in a manner similar to that discussed in [subsec : tp.anexample ] ; we generate sets of events from the reported flux and the resulting distribution of ( within each bin ) is fitted to a gaussian . since the significance of deviation from zero depends on both the power index and the number of events , we will compare each of the auger and agasa data sets with a unique power - law .we will take the agasa experiment to have events above and we will compare the resulting tp - statistics with those of a power - law with the same minimum and . 
the auger spectrum has a power - index estimate of considering all of the data above and a total of 3570 events , so we will therefore compare the tp - statistics arising from the auger flux to those of a pure power - law with these parameters .the application of this scheme to the agasa spectrum is plotted in fig .[ fig : tpstatsaga ] in red triangles .the black circles represent average tp - statistic value for data drawn from a pure power - law with .both plots have events per sky .the error bars on each point represent the 1-sigma deviation of the gaussian fit to the distribution of the mean tp - statistic .since the agasa values do not significantly deviate from zero ( or the power - law values ) this plot suggests that the agasa distribution does not significantly deviate from a pure power - law .the most significant deviation occurs in the bin with minimum (ev ) and gives , which is consistent with the p - value for this bin discussed in [sec : dlv ] .these distributions are plotted in fig [ fig : thetpaga ] .the fitted mean and deviation of the s ( see definition on the text ) within each bin for the agasa spectrum ( red triangles ) and a pure power - law distribution ( black circles ) .this plot is the result of simulated sets of events where each set contains 1916 events and the power - law has index .,width=419,height=264 ] the distribution of s in the fifth bin of fig.[fig : tpstatsaga ] ( the bin with minimum (ev ) ) for the pure power - law ( black , shaded ) and the agasa spectrum ( red , hatched ) .for these distribution ( see equation ( [ eq : ptp])).,width=359,height=226 ] the simulation results from the auger spectrum are plotted in fig .[ fig : tpstatsaug ] .this plot shows deviation from a power - law for the lowest minimums considered . for the bin with minimum find .thus we can say that the auger spectrum with energies greater than (ev ) deviate from a power - law by , where .the distribution of s for this minimum energy is plotted in fig .[ fig : thetpaug ] . the fitted mean and deviation of the s ( see definition on the text ) within each bin for the auger spectrum ( blue triangles ) and a pure power - law distribution ( black circles ) .this plot is the result of simulated sets of events where each set contains 3570 events and the power - law has index .,width=419,height=264 ] the distribution of s in the second bin of fig.[fig : tpstatsaga ] ( the bin with minimum (ev ) ) for the pure power - law ( black , shaded ) and the auger spectrum ( blue , hatched ) . for these distribution ( see equation ( [ eq : ptp])).,width=359,height=226 ] since the tp - statistic nearly eliminates the need to estimate , the biggest systematic uncertainty in analyzing the cr data with the tp - statistic is likely to be errors in the event energies .similar to the -value discussed in [sec : dlv ] , it is only the relative energy errors which can effect the result , since the tp - statistic depends only on the ratio .however , any elongation of the observed spectrum brought about by this relative uncertainty effect the tp - statistic . 
without further study of the cr energy systematics , we can not draw a conclusion from the deviation in fig .[ fig : thetpaug ] .in [sec : data ] we use the reported ( agasa and auger ) cr fluxes to discuss the power - law form and illustrate the logarithmically binned estimates of the power index .the probability that the maximum value of a sample drawn from a power - law is less than or equal to a particular value is defined in equation ( [ eq : pxc ] ) . using reasonable estimates for , and from the cr data sets we calculate in [sec : dlv ] .the value of is used to test the null hypothesis that these data sets follow a power - law .the agasa data give no reason to reject the hypothesis ; for the data with .the auger data give more reason to reject the null hypothesis , for the data with .however , consideration of the errors on prevent any solid conclusion . for the purpose of statistical analysis it would be useful to eliminate , or at least minimize , the importance of .the tp - statistic tends ( asymptotically ) to zero regardless of the value of and is the subject of [sec : tp ] .we apply the tp - statistic to the cr data sets using a monte - carlo method described in [subsec : tp.data ] .the agasa data give a value of for energies greater than (ev ) . a value consistent with the -value discussed in [sec : dlv ] ( fig.[fig : pmaxlast ] ) .the preliminary auger flux results in a tp - statistic with more significant deviation from the power - law form : for (ev ) . comparing this value with the -value for this bin derived in [sec : dlv ] , namely , illustrates the power of the method based on the tp - statistic which is essentially independent of gamma .better understanding of the relative errors on the cr energies should lead to a definitive conclusion on the question of a cut - off in the cr spectrum .99 m. e. j. newman , contemporary physics * 46 * , 323 - 351 ( 2005 ) .a. clauset , m. young and k.s .gleditsch , * arxiv : physics/0606007 * ( 2006 ) .k. greisen , phys .rev , lett .* 16 * , 748 ( 1966 ) .zatsepin and v.a .kuzmin , jetp .lett . 4 * 78 * , ( 1966 ) .john n. bachall and eli waxman * arxiv : hep - ph/0206217 v5 * ( 2003 ) .auger collaboration , proceedings icrc , pune , india , * 10 * 115 ( 2005 ) , * arxiv : astro - ph/0604114*. t. yamamoto and the pierre auger collaboration , * arxiv : astro - ph/0601035 v1 * ( 2006 ) .m. takeda _et al . _ ,81 * 6 ( 1998 ) . unbinned maximum likelihood methods have less error and bias when applied to power - law ( or similar ) distributions than binned methods .they can also be modified to include energy error and variable acceptance information . lacking this information, we use the logarithmically binned estimate of where necessary . the minimum variance for _ any _ estimator of given by the cramer - rao lower bound ; .m. l. goldstein , s. a. morris , and g. g. yen , eur .j. b. * 41 * , 255 - 258 ( 2004 ) .howell , _ statistical properties of maximum likelihood estimators of power law spectra information _ , nasa / tp-2002 - 212020/rev1 , marshall space flight center , dec . , 2002 .v. pisarenko , d. sornette and m rodkin , _ deviations of the distributions of seiesmic energies from the gutenberg - richter law _ , computational seismology * 35 * , 138 - 159 ( 2004 ) , * arxiv : physics/0312020*. v. pisarenko , d. 
sornette , _ new statistic for financial return distributions : power - law or exponential ?_ , * arxiv : physics/0403075*.in [subsec : tp.anexample ] we state that the tp - statistic will be distinctly positive for distributions which contain a tail - suppression and negative for distributions which contain a tail - enhancement ( relative to the pure power - law form ) . in this sectionwe numerically compute the tp - statistic for a `` double power - law '' distribution and describe the parameter space associated with this statistic .consider the following probability distribution function : where and are chosen such that and this distribution follows a power - law with index for , and for .+ given the parameter set , we define the tp - statistic for this distribution as ^{2 } - \frac{1}{2 } \int_{u}^{\infty } \ln^{2 } \left ( \frac{x}{u } \right ) f(x)dx .\label{eq : tpbend}\ ] ] for and/or equation ( [ eq : tpbend ] ) is identically zero since it is equal to ( see equation ( [ eq : lm ] ) ) .however , equation ( [ eq : tpbend ] ) is non - trivial when and . inwhat follows , we calculate for and various values of and with and fixed .fig.[fig : appxbendpdf ] contains a plot of versus with for several choices of ( namely , for varying from 1 to 2 in steps of 0.2 ) .the red curves correspond to ( tail - suppression ) and the blue curves have ( tail - enhancement ) .the tp - statistic for each of these distributions is shown in fig.[fig : appxbendtp ] as a function of .examination of fig.[fig : appxbendtp ] suggests the following conclusions for a given and : * is positive for all values of and if and only if , and it is negative if and only if .* for much greater than , is approximately zero . specifically , as , . *the location of the maximum deviation , say where is highly correlated with the location of the bend .indeed , we have found that there is a linear relationship between and and that this relationship is independent of whether is less than or greater than . *the maximum deviation of the tp - statistic , i.e. , is independent of . a plot of ( see equation ( [ eq : pdfbend ] ) ) versus with for several choices of ( namely , for varying from 1 to 2 in steps of 0.2 ) .the red curves correspond to ( tail - suppression ) and the blue curves have ( tail - enhancement ) .the more black the color of the curve , the larger .,width=359,height=226 ] a plot of ( see equation ( [ eq : tpbend ] ) ) for each of the distributions plotted in fig.[fig : appxbendpdf ] .those distributions with tail - suppression ( red ) have and those with tail - enhancement ( blue ) have .,width=359,height=226 ] to isolate the effects of power index choice , consider the family of distributions where is fixed but is allowed to vary . since the integrals in equation ( [ eq : tpbend ] ) only converge if , the minimum we can choose is .there is no upper bound on so we vary this parameter over the interval in steps of 0.2 and over the interval in steps of 0.5 .fig.[fig : appdeltapdf ] contains a plot of versus with and .the blue curves have ( i.e. ) and the red curves have ( i.e. ) .the more black the color of these curves , the closer is to . a plot of ( see equation ( [ eq : pdfbend ] ) ) versus with and .the blue curves have ( i.e. ) and the red curves have ( i.e. ) .the more black the color of these curves , the closer is to .,width=359,height=226 ] fig.[fig : appdeltatp ] contains a plot of for the distributions plotted in fig.[fig : appdeltapdf ] .as noted earlier , if and only if and if and only if . 
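the appendix calculation can be reproduced numerically once a concrete bent density is chosen. since the exact functional form used above is not recoverable from the text, the sketch below takes a broken power law that is continuous at the bend as a stand-in, renormalises the density above each minimum, and evaluates the tp integrals by quadrature; the parameter values are arbitrary and the normalisation convention is our assumption.

```python
import numpy as np
from scipy.integrate import quad

def double_power_law(gamma1, gamma2, x_bend):
    """Unnormalised density: x**-gamma1 below the bend, x**-gamma2 above it,
    continuous at x_bend (a stand-in for the bent pdf discussed above)."""
    c2 = x_bend ** (gamma2 - gamma1)
    return lambda x: x ** (-gamma1) if x <= x_bend else c2 * x ** (-gamma2)

def _tail_integral(g, u, x_bend):
    """Integral of g over [u, inf), split at the bend to help the quadrature."""
    if u < x_bend:
        return quad(g, u, x_bend)[0] + quad(g, x_bend, np.inf)[0]
    return quad(g, u, np.inf)[0]

def tp_of_u(f, u, x_bend):
    """Population TP-statistic of the tail above u, with the density
    renormalised on [u, inf) so that a pure power law gives exactly zero."""
    norm = _tail_integral(f, u, x_bend)
    m1 = _tail_integral(lambda x: np.log(x / u) * f(x), u, x_bend) / norm
    m2 = _tail_integral(lambda x: np.log(x / u) ** 2 * f(x), u, x_bend) / norm
    return m1 ** 2 - 0.5 * m2

f = double_power_law(gamma1=2.0, gamma2=3.5, x_bend=30.0)
for u in (1.0, 3.0, 10.0, 30.0, 100.0):
    print(f"u = {u:6.1f}   tp = {tp_of_u(f, u, 30.0):+.5f}")
# For this tail-suppressed choice (gamma2 > gamma1) the statistic is positive
# below the bend and, for this simple broken form, falls back to zero once the
# minimum passes the bend.
```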
the colored points on these curves show where each curve maximally deviates from zero ; the coordinates of these points are for each curve ( see equation ( [ eq : u0def ] ) ) .these points show a weak dependence of on , for a given . a plot of ( see equation ( [ eq : tpbend ] ) ) for the distributions plotted in fig.[fig : appdeltapdf ] .the colored points on these curves show where each curve maximally deviates from zero ; the coordinates of these points are for each curve ( see equation ( [ eq : u0def])).,width=359,height=226 ]the value of the maximum deviation also shows dependence on . in fig.[fig : appdeltatpu0 ] we plot versus for each of the points in fig.[fig : appdeltatp ] .these plots suggest the following : * for ( blue and black ) , a small change in will lead to a large change in . *if ( bright red ) , however , a large change in will result in a small change in .this case is of particular interest since a large will mimic the cutoff distribution defined in equation ( [ eq : fdpdf ] ) .* by inspection of fig.[fig : appdeltatpu0 ] we note that for . *comparison with fig.[fig : appxbendtp ] suggests that the limiting value of is roughly independent of .the studies described in this section show that the tp - statistic can distinguish tail - suppressed ( ) from tail - enhanced ( ) distributions , i.e. if and only if and if and only if .furthermore , they show that in the limiting case of the most important parameter in determining is but that the limiting value of is roughly independent of and .
two separate statistical tests are applied to the agasa and preliminary auger cosmic ray energy spectra in an attempt to find deviation from a pure power-law. the first test is constructed from the probability distribution for the maximum event of a sample drawn from a power-law. the second employs the tp-statistic, a function defined to deviate from zero when the sample deviates from the power-law form, regardless of the value of the power index. the agasa data show no significant deviation from a power-law when subjected to both tests. applying these tests to the auger spectrum suggests deviation from a power-law. however, potentially large systematics on the relative energy scale prevent us from drawing definite conclusions at this time. high energy cosmic ray flux, power-law, tp-statistic
functional logic programming joins in a single paradigm the features of functional programming with those of logic programming .logic programming contributes logic variables that are seamlessly integrated in functional computations by narrowing .the usefulness and elegance of programming with narrowing is presented in . at the semantics levelfree variables are equivalent to _ non - deterministic functions _ , i.e. , functions that for some arguments may return any one of many results .thus , at the implementation level variables can be replaced by non - deterministic functions when non - deterministic functions appear simpler , more convenient and/or more efficient to implement .this paper focuses on a graph transformation recently proposed for the implementation of non - determinism of this kind .this transformation is intended to ensure the completeness of computations without cloning too eagerly a large portion of the context of a non - deterministic step .the hope is that steps following the transformation will create conditions that make cloning the not yet cloned portion of the context unnecessary .non - determinism is certainly the most characterizing and appealing feature of functional logic programming .it enables encoding potentially difficult problems into relatively simpler programs .for example , consider the problem of abstracting the dependencies among the elements of a set such as the functions of a program or the widgets of a graphical user interface . in abstractions of this kind , _ component parts _ `` build '' _ composite objects_. a non - deterministic function , , defines which objects are dependent on each part .the syntax is curry . - 35pt -.0em a part can build many objects , e.g. : part builds objects and .likewise , an object can be built from several parts , e.g. : object is built by parts and .many - to - many relationships , such as that between objects and parts just sketched , are difficult to abstract and to manipulate in deterministic languages . however , in a functional logic setting , the non - deterministic function is straightforward to define and is sufficient for all other basic functions of the abstraction .for example , a function that non - deterministically computes a part of an object is simply defined by : - 35pt -.0em where is defined using a _ functional pattern _ .the set of all the parts of an object is computed by , the implicitly defined _ set function _ of .the simplicity of design and ease of coding offered by functional logic languages through non - determinism do not come for free .the burden unloaded from the programmer is placed on the execution .all the alternatives of a non - deterministic choice must be explored to some degree to ensure that no result of a computation goes missing .doing this efficiently is a subject of active research .below , we summarize the state of the art .there are three main approaches to the execution of non - deterministic steps in a functional logic program . a fourth approach , called _pull - tabbing _ , still underdeveloped , is the subject of this paper .pull - tabbing offers some appealing characteristics missing from the other approaches .we borrow from a simple example to present the existing approaches and understand their characteristics : - 35pt -.0em we want to evaluate the expression - 35pt -.0em we recall that ` ' is a library function , called _ choice _ , that returns either of its arguments , i.e. 
, it is defined by the rules : - 35pt -.0em and that the clause introduces a _ shared _ expression .every occurrence of in ( [ value ] ) has the same value throughout the entire computation according to the _ call - time choice _ semantics .by contrast , in each occurrence of is evaluated independently of the other .[ fig : sharing ] highlights the difference between these two expressions when they are depicted as graphs .=5pt=12pt=1pt & @-[dl ] @-[dr ] + @-[dr ] & & @-[dl ] + & =5pt=12pt=1pt & @-[dl ] @-[dr ] + @-[d ] & & @-[d ] + & & a _ context _ is an expression with a distinguished symbol called _ hole _ denoted ` ] is the expression obtained by replacing the hole in with .e.g. , the expression in ( [ value ] ) can be written as ] is called _ empty _ context . an expression rooted by a node labeled by the choice symbolis informally referred to as _ a choice _ and each argument of the choice symbol , or successor of ,is referred to as a choice s __ backtracking _ is the most traditional approach to non - deterministic computations in functional logic programming . evaluating a choice in some context , say ] , and continuing the computation . in typical interpreters ,if and when the computation of ] .backtracking is well - understood and relatively simple to implement .it is employed in successful languages such as prolog and in language implementations such as pakcs and .the major objection to backtracking is its incompleteness .if the computation of ] is ever obtained .+ _ copying _ ( or _ cloning _ ) is an approach that fixes the inherent incompleteness of backtracking . evaluating a choice in some context , say ] and ] can be seen as ] and .evaluating a choice in some context , say ] and the root of is a proper dominator of the choice , and evaluating ?c_2[v]] ] , distinguishes whether or not is empty .if is empty , and are evaluated simultaneously and independently , as in copying and bubbling , without any context to clone .otherwise , the expression to evaluate is of the form ] . without some caution ,this transformation is unsound .unsoundness may occur when some choice has two predecessors , as in our running example .the choice will be pulled up along two paths creating _ two pairs _ of strands that eventually must be pair - wise combined together .some combinations will contain mutually exclusive alternatives , i.e. , subexpressions impossible to obtain in languages such as curry and that adopt the call - time choice semantics .[ fig : pull - tab ] presents an example of this situation .we will show that the soundness is recovered if the left and right alternative of a choice are _ not _ combined in the same expression . to this aim, we attach an identifier to each choice of an expression .we preserve this identifier when a choice is pulled up .if eventually the choice is reduced to either of its alternatives every other choice with the same identifier must be reduced to the same alternative .a very similar idea in a rather different setting is proposed in .a pull - tab step clones a single node , a predecessor of the choice being pulled up . 
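the following small python model (a model of the graphs only, not curry code, with names of our own choosing) illustrates the two ingredients just introduced: a choice carries an identifier, and a pull-tab step clones only the immediate predecessor of the choice while the identifier travels with the choice.

```python
# Toy model of expression graphs with identified choices and of the pull-tab
# transformation.  Sharing is represented by Python object identity.

import itertools
_fresh = itertools.count()

class Node:
    def __init__(self, label, succs=(), choice_id=None):
        self.label = label
        self.succs = list(succs)
        self.choice_id = choice_id        # only meaningful when label == '?'

def choice(left, right, cid=None):
    return Node('?', [left, right], cid if cid is not None else next(_fresh))

def pull_tab(target, i):
    """Pull the choice sitting at position i of `target` up one level.

    Returns a new choice node carrying the same identifier; its alternatives
    are clones of `target` that point, respectively, to the left and right
    alternative of the original choice.  Only `target` is cloned; every other
    node stays shared."""
    ch = target.succs[i]
    assert ch.label == '?', "pull-tab source must be a choice"
    left_clone  = Node(target.label, target.succs[:i] + [ch.succs[0]] + target.succs[i+1:])
    right_clone = Node(target.label, target.succs[:i] + [ch.succs[1]] + target.succs[i+1:])
    return Node('?', [left_clone, right_clone], ch.choice_id)

# Example: t = coin + coin with a shared choice coin = 0 ? 1.
coin = choice(Node('0'), Node('1'))
t = Node('+', [coin, coin])
t1 = pull_tab(t, 0)                        # (0 + coin) ? (1 + coin)
print(t1.choice_id == coin.choice_id)      # True: the identifier travels with the choice
# Pulling the choice that is still inside each alternative gives the situation
# of fig. [fig:pull-tab]; the shared identifier is what later rules out the
# mutually exclusive combinations.
```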
if the choice is pulled all the way up to the root of an expression , the choice s entire spine is cloned .but if an alternative of the choice fails before the choice reaches the root , further cloning of the choice s context becomes unnecessary .we define a term graph in the customary way , but extend the decorations of nodes with choice identifiers .[ def : expression ] let be a _ signature _ , a countable set of _ variables _ , a countable set of _ nodes _ , a countable set of _ choice identifiers_. a _ ( rooted ) _ _ graph _ over is a 5-tuple such that : 1 . is the set of nodes of ; 2 . is the _ labeling _ function mapping each node of to a signature symbol or a variable ; 3 . is the _ successor _ function mapping each node of to a possibly empty string of nodes of such that if , where , and ( for the following condition , we assume that a variable has arity zero ) , then there exist in such that ; 4 . is a subset of nodes of called the _ roots _ of ; 5 . is a partial function mapping nodes labeled by the choice symbol to a choice identifier ; 6 . if and and , then , i.e. , every variable of labels one and only one node of ; and 7 . for each , either or there is a path from to where , i.e. , every node of is reachable from some root of .a graph is called a _ term ( graph ) _ , or more simply an _ expression _ , if is a singleton .typically we will use `` expression '' when talking about programs and `` graph '' when making formal claims .choice identifiers play a role in computations .thus , we will explicitly define the mapping only after formally defining the notion of computation .term graphs can be seen , e.g. , in figs .[ fig : sharing ] and [ fig : bubbling ] .every choice node of every graph of fig .[ fig : pull - tab ] would be decorated with the same choice identifier .choice identifiers are arbitrary and only compared for equality .node names are arbitrary and irrelevant to most purposes and are typically omitted .however , some definitions and proofs of our claims need to explicitly refer to some nodes of a graph . for this purpose, we adopt the _ linear notation for graphs _ ( * ? ? ?* def . 4 ) .with this convention , the left graph of fig .[ fig : sharing ] is denoted , where the node names are the italicized identifiers starting with ` ' .we also make the convention that names of nodes that do not need to be referenced can be omitted , hence .the latter is conceptually identical to ( [ value ] ) . in the linear notation for graphs ,infix operators are applied in prefix notation , e.g. , see lemma [ invariance - by - pull - tab ] .this practice eases understanding the correspondence between a node identifier and the label of that node .the definition of graph rewriting is more laborious than , although conceptually very similar to , that of term rewriting .sections 2 and 3 of formalize key concepts of graph rewriting such as _ replacement _ , _ matching _ , _ homomorphism _ , _ rewrite rule _, _ redex _ , and _ step _ in a form ideal for our discussion .therefore , we adopt entirely these definitions , including their notations , and only discuss the manipulation of choice identifiers , since they are absent from .we now formalize the class of rewrite systems that we consider in this paper .a _ program _ is a rewrite system in a class called _ limited overlapping inductively sequential _ , abbreviated _lois_. 
in _ lois _ systems , the rules are left - linear and constructor - based .the left - hand sides of the rules are organized in a hierarchical structure called a _ definitional tree _ that guides the evaluation strategy . in _systems , there is a single operation whose rules left - hand sides overlap . this operation is called _ choice _ , is denoted by the infix binary operation `` '' , and is defined by the rules of ( [ binary - choice - rules ] ) ._ lois _ systems have been investigated in some depth .below we highlight informally the key results that justify our choice of _ lois _ systems . 1 .any _ lois _ system admits a complete , sound and optimal evaluation strategy .any constructor - based conditional rewrite system is semantically equivalent to a _system .any _ narrowing _ computation in a _system is semantically equivalent to a _ rewriting _ computation in another similar _ lois _ system . for the above reasons , _ lois _ systems are an ideal core language for functional logic programs . informally summarizing , _lois _ systems are general enough to perform any functional logic computation and powerful enough to compute by simple rewriting and without wasting steps . in our setting ,a _ computation _ of is a sequence [0pt]{\kern.95ex\tiny}}\to\,\rlap{\raisebox{.15ex}[0pt][0pt]{\kern.95ex\tiny}}\to\, ] such that [0pt]{\kern.95ex\tiny}}\to\, ] is a _ step _, i.e. , is either one of two graph transformations : a rewrite , denoted by `` '' , or a pull - tab , denoted by `` [0pt]{\tiny}}} ] and we write [0pt]{\tiny}}}\ , g[n \leftarrow g']\xi } } \, ] and [0pt]{\kern.95ex\tiny}}\to\,}}g_2 \xi } } g_1 { \mbox{\xi } } \ldots\xi } } g_i\xi ] a pull - tab and , then , where is the source node of the pull - tab .the above definition is articulated , but conceptually simple .below , we give an informal account of it . in a typical step [0pt]{\kern.95ex\tiny}}\to\, ] , most nodes of end up in .the choice identifier , for choices , of these nodes remains the same . in a rewrite ,some nodes are created .any choice node created in the step gets a fresh choice identifier . in a pull - tab , informally speaking , the source ( a choice ) `` moves '' and the target ( not a choice ) `` splits . ''the choice identifier `` moves '' with its source .split nodes have no choice identifier .each node in the `` universe '' of nodes may belong to several graphs . in , and accordingly in our extension ( see defs . [def : expression ] and [ decorations ] ) , the function mapping a node to a decoration depends on each graph to which the node belongs .it turns out that some decorations of a node , e.g. , the label , are _ immutable _ , i.e. , the function mapping a node to such decorations does not depend on any graph .we prove the immutability claim for our extension , the choice identifier .obviously , there is no notion of time when one discusses expressions and considers the decorations of a node .hence immutable decorations `` are set '' with the nodes . in practice , these decorations `` become known '' when a node is `` placed in service '' for the purpose of a computation or is created by a step . 
in view of this result , we drop the subscript from since this practice simplifies the notation and attests a fundamental invariant . pull - tab steps may produce an expression with distinct choices with the same choice identifier . the same identifier tells us that to some extent these redexes are the `` same '' . therefore , when a computation replaces one such redex with the left , resp . right , alternative , every other `` same '' redex should be replaced with the left , resp . right , alternative , too . if this does not happen , the computation is unacceptable . the notion of consistency of computations introduced next abstracts this idea . [ def : consistent ] a rewrite step that replaces a redex rooted by a node labeled by the choice symbol is called a _ choice step _ . a computation is _ consistent _ iff for all , there exists an such that every choice step of at a node identified by applies rule of `` '' defined in _ ( [ binary - choice - rules ] ) _ . a _ strategy _ determines which step(s ) of an expression to execute . a strategy is usually defined as a function that takes an expression and returns a set of steps of this expression or , equivalently , the reducts of according to the steps of . we will not define any specific strategy . a major contribution of our work is showing that the correctness of pull - tabbing is strategy - independent . the classic definition of correctness of a strategy is stated as the ability to produce for any expression ( in the domain of the strategy ) all and only the results that would be produced by rewriting . `` all and only '' leads to the following notions . + _ soundness : _ if is a computation of in which each step is according to and is a value ( constructor normal form ) , then . + _ completeness : _ if , where is a value ( constructor normal form ) , then there exists a computation in which each step is according to . + in the definitions of soundness and completeness proposed above , the same expression is evaluated both according to and by rewriting . this is adequate , given some conventions . rewriting is not concerned with choice identifiers . this decoration can be simply ignored in rewriting computations . in particular , in rewriting ( as opposed to rewriting and pull - tabbing ) a computation is always consistent . in graph rewriting , _ equality of graphs _ is modulo a renaming of nodes . a precise definition of this concept is given in def . 2.5 of the formalization we adopt . typically , the proof of soundness is trivial for strategies that execute only rewrite steps , but our strategy also executes pull - tab steps , hence it creates expressions that can not be produced by rewriting . indeed , some of these expressions will have to be discarded to ensure soundness . the proof of correctness of pull - tabbing is non - trivial and relies on two additional concepts , _ representation _ and _ invariance _ , which are presented in the following sections . ( fig . [ fig : parallel - moves ] ) proofs of properties of a computation are often accomplished by `` rearranging '' the computation s steps in some convenient order . a fundamental result in rewriting , known as the parallel moves lemma , shows that in orthogonal systems the steps of a computation can be rearranged at will . a slightly weaker form of this result carries over to _ lois _ systems .
a pictorial representation of this result is provided in fig .the symbol `` '' denotes the reflexive closure of the rewrite relation .the notation `` '' , where is a node and is a rule , denotes either equality or a rewrite step at node with rule .a characteristic of pull - tabbing , similar to bubbling and copying , is that the completeness of computations is obtained by avoiding or delaying a commitment to either alternative of a choice . in pull - tabbing ,similar to bubbling , _ both _ the alternatives of a choice are kept or `` represented '' in a _single _ expression throughout a good part of a computation .the proof of the correctness of pull - tabbing is obtained by reasoning about this concept , which we formalize below .[ representation ] we define a mapping that takes an expression and returns a set called the _ represented set of _ as follows .let be an expression .an expression is in iff there exists a consistent computation modulo a renaming of nodes that makes all and only the choice steps of . in other words , we select either alternative for every choice of an expression . for choices with the same identifier , we select the same alternative . since distinct choice steps occur at distinct nodes , by lemma [ parallel - moves ] the order in which the choice steps are executed to produce any member of the represented set is irrelevant .therefore , the notion of represented set is _well defined_. the notion of represented set of is a simple syntactic abstraction not to be confused with the notion of set of values of an expression , which is a semantic abstraction fairly more complicated .the proof of correctness of pull - tabbing is based on two results that informally speaking establish that the notion of represented set is invariant both by pull - tab steps and by non - choice steps .we combine the previous lemmas into computations of any length .theorem [ correctness ] suggests to apply both non - choice and pull - tab steps to an expression .choices pulled up to the root are reduced consistently and without context cloning . of course ,by the time a choice is reduced , all its spines have been cloned similar to bubbling and copying .a better option , available to pull - tabbing only , is discussed in the next section .the pull - tab transformation is meant to be used in conjunction with some evaluation strategy .we showed that pull - tabbing is not tied to any particular strategy .however , the strategy should be pull - tab - aware in that : ( 1 ) a choice should be evaluated ( to a head normal form ) only when it is _ needed _ , ( 2 ) a choice in a root position is reduced ( consistently ) , whereas in a non - root position is pulled , and ( 3 ) before pulling a choice , one of the choice s alternatives should be a head - normal form .the formalization of such a strategy would take us well beyond the scope of this paper . inwell - designed , non - deterministic programs , either or both alternatives of most ( but not all ) choices should fail . under the assumption that a choice is evaluated to a head normal form only when it is _ needed _ , if an alternative of the choice fails , the choice is no longer non - deterministic the failing alternative can not produce a value .thus , the choice can be reduced to the other alternative without loss of completeness and without context cloning .this is where pull - tabbing is advantageous over copying and bubbling any portion of a choice s context not yet cloned when an alternative fails no longer needs to be cloned . 
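in the same illustrative representation , the represented set of def . [ representation ] can be enumerated by fixing , per choice identifier , one alternative and applying it consistently everywhere , and the failure optimization just described amounts to committing an identifier to its surviving alternative . both functions below are illustrations of the definitions , not of an actual implementation .

```python
from itertools import product as assignments
from typing import Dict, Iterator

def choice_ids(g: Graph):
    return sorted({n.choice_id for n in g.nodes() if n.label == "?"})

def select(node: Node, pick: Dict[int, int]) -> Node:
    """reduce every choice according to `pick` (0 = left, 1 = right)."""
    if node.label == "?":
        return select(node.successors[pick[node.choice_id]], pick)
    return Node(node.label, [select(s, pick) for s in node.successors])

def represented_set(g: Graph) -> Iterator[Graph]:
    ids = choice_ids(g)
    for alts in assignments((0, 1), repeat=len(ids)):
        pick = dict(zip(ids, alts))
        yield Graph(roots=[select(r, pick) for r in g.roots])

# consistency bookkeeping: once an alternative of an identified choice fails,
# every choice with that identifier is committed to the other alternative.
committed: Dict[int, int] = {}

def collapse_on_failure(choice: Node, failed: int) -> Node:
    surviving = 1 - failed
    assert committed.setdefault(choice.choice_id, surviving) == surviving
    return choice.successors[surviving]
```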
of course , the implementation must still identify the choice , and the choice s single remaining strand as either left or right , to ensure consistency . we investigated pull - tabbing , an approach to non - deterministic computations in functional logic programming . section [ approaches ] recalls copying and bubbling , the competitors of pull - tabbing . here , we briefly highlight the key differences between these approaches . pull - tabbing ensures the completeness of computations in the sense that no alternative of a choice is left behind until all the results of some other alternative have been produced . similar to every approach with this property , it must clone portions of the context of a choice . in contrast to copying and bubbling , it clones the context of a choice in minimal increments with the intent and the possibility of stopping cloning the context as soon as an alternative of the choice fails . the idea of identifying choices to avoid combining in some expression the left and right alternatives of the same choice appears in earlier work . the idea is developed in the framework of a natural semantics for the translation of ( flat ) curry programs into haskell . a proof of the correctness of this idea will appear in forthcoming work , which also addresses the similarities between the natural semantics and graph rewriting . this discussion , although informal , is enlightening . we formally defined the pull - tab transformation , characterized the class of programs for which the transformation is intended , extended the computations in these programs to include the transformation , proved the correctness of these extended computations , and described the condition that reduces context cloning . in contrast to its competitors , in pull - tabbing any step is a simple and localized graph transformation . this fact should ease executing the steps in parallel . future work aims at defining a pull - tab - aware parallel strategy and implementing it to measure the effectiveness of pull - tabbing . antoy , s. , fischer , s. , and reck , f. 2010 . the pull - tab transformation . in _ proceedings of the third international workshop on graph computation models _ . enschede , the netherlands , 127133 . available at http://gcm-events.org/gcm2010/pages/gcm2010-preproceedings.pdf . optimal non - deterministic functional logic computations . in _ proceedings of the sixth international conference on algebraic and logic programming ( alp97 ) _ . springer lncs 1298 , southampton , uk , 1630 . brown , d. , and chiang , s. 2006 . lazy context cloning for non - deterministic graph rewriting . in _ proc . of the 3rd international workshop on term graph rewriting , termgraph06 _ . vienna , austria , 6170 . set functions for functional logic programming . in _ proceedings of the 11th acm sigplan international conference on principles and practice of declarative programming ( ppdp 2009 ) _ . lisbon , portugal , 7382 . hanus , m. , liu , j. , and tolmach , a. 2005 . a virtual machine for functional logic computations . in _ proc . of the 16th international workshop on implementation and application of functional languages ( ifl 2004 ) _ . springer lncs 3474 , lübeck , germany , 108125 . 2007 . on a tighter integration of functional and logic programming . in _ aplas07 : proceedings of the 5th asian conference on programming languages and systems _ . springer - verlag , berlin , heidelberg , 122138 . 1997 . on constructor - based graph rewriting systems . tech .
rep . 985-i , imag . available at ftp://ftp.imag.fr/pub/labo-leibniz/old-archives/pmp/c-graph-rewriting.ps.gz . rodríguez - hortalá , j. , and sánchez - hernández , j. 2007 . a simple rewrite notion for call - time choice semantics . in _ ppdp 07 : proceedings of the 9th acm sigplan international conference on principles and practice of declarative programming _ . acm , new york , ny , usa , 197208 . rodríguez - hortalá , j. , and sánchez - hernández , j. 2008 . rewriting and call - time choice : the ho case . in _ proc . of the 9th international symposium on functional and logic programming ( flops 2008 ) _ . springer lncs 4989 , 147162 . antoy , s. , and nita , m. 2004 . implementing functional logic languages using multiple threads and stores . in _ proc . of the 2004 international conference on functional programming ( icfp ) _ . acm , snowbird , utah , usa , 90102 .
pull - tabbing is an evaluation approach for functional logic computations , based on a graph transformation recently proposed , which avoids making irrevocable non - deterministic choices that would jeopardize the completeness of computations . in contrast to other approaches with this property , it does not require an upfront cloning of a possibly large portion of the choice s context . we formally define the pull - tab transformation , characterize the class of programs for which the transformation is intended , extend the computations in these programs to include the transformation , and prove the correctness of the extended computations . functional logic programming , non - determinism , graph rewriting , pull - tabbing
methods for protein structure prediction , simulation and design rely on an energy function that represents the protein s free energy landscape ; a protein s native state typically corresponds to the state with minimum free energy .so - called knowledge based potentials ( kbp ) are parametrized functions for free energy calculations that are commonly used for modeling protein structures .these potentials are obtained from databases of known protein structures and lie at the heart of some of the best protein structure prediction methods .the use of kbps originates from the work of tanaka and scheraga who were the first to extract effective interactions from the frequency of contacts in x - ray structures of native proteins .miyazawa and jernigan formalized the theory for contact interactions by means of the quasi - chemical approximation .many different approaches for developing kbps exist , but the most successful methods to date build upon a seminal paper by sippl published two decades ago which introduced kbps based on probability distributions of pairwise distances in proteins and reference states .these kbps were called `` potentials of mean force '' , and seen as approximations of free energy functions .sippl s work was inspired by the statistical physics of liquids , where a `` potential of mean force '' has a very precise and undisputed definition and meaning .however , the validity of the application to biological macromolecules is vigorously disputed in the literature .nonetheless , pmfs are widely used with considerable success ; not only for protein structure prediction , but also for quality assessment and identification of errors , fold recognition and threading , molecular dynamics , protein - ligand interactions , protein design and engineering , and the prediction of binding affinity . in this article , the abbreviation `` pmf '' will refer to the pairwise distance dependent kbps following sippl , and the generalization that we introduce in this article ; we will write `` potentials of mean force '' in full when we refer to the real , physically valid potentials as used in liquid systems . at the end of the article, we will propose a new name for these statistical quantities , to set them apart from true potentials of mean force with a firm physical basis . despite the progress in methodology and theory , and the dramatic increase in the number of experimentally determined protein structures ,the accuracy of the energy functions still remains the main obstacle to accurate protein structure prediction .recently , several groups demonstrated that it is the quality of the coarse grained energy functions , rather than inadequate sampling , that impairs the successful prediction of the native state .the insights presented in this article point towards a new , theoretically well - founded way to construct and refine energy functions , and thus address a timely problem .we start with an informal outline of the general ideas presented in this article , and then analyze two notable attempts in the literature to justify pmfs .we point out their shortcomings , and subsequently present a rigorous probabilistic explanation of the strengths and shortcomings of traditional pairwise distance pmfs .this explanation sheds a surprising new light on the nature of the reference state , and allows the generalization of pmfs beyond pairwise distances in a statistically valid way .finally , we demonstrate our method in two applications involving protein compactness and hydrogen bonding . 
in the latter case, we also show that pmfs can be iteratively optimized , thereby effectively sculpting an energy funnel .in order to emphasize the practical implications of the theoretical insights that we present here , we start with a very concrete example that illustrates the essential concepts ( see fig . [fig : simple ] ) .currently , protein structure prediction methods often make use of fragment libraries : collections of short fragments derived from known protein structures in the protein data bank ( pdb ) . by assembling a suitable set of fragments ,one obtains conformations that are protein - like on a local length scale .that is , these conformations typically lack non - local features that characterize real proteins , such as a well - packed hydrophobic core or an extensive hydrogen bond network. such aspects of protein structure are not , or only partly , captured by fragment libraries .formally , a fragment library specifies a probability distribution , where is for example a vector of dihedral angles . in order to obtain conformations that also possess the desired non - local features , needs to be complemented with another probability distribution , with being for example a vector of pairwise distances , the radius of gyration , the hydrogen bonding network , or any combination of non - local features .typically , is a deterministic function of ; we use the notation when necessary . for the sake of argument, we will focus on the radius of gyration ( ) at this point ; in this case becomes .we assume that a suitable was derived from the set of known protein structures ; without loss of generality , we leave out the dependency on the amino acid sequence for simplicity .the problem that we address in this article can be illustrated with the following question : how can we combine and in a rigorous , meaningful way ?in other words , we want to use the fragment library to sample conformations whose radii of gyration are distributed according to .these conformations should display a realistic _ local _ structure as well , reflecting the use of the fragment library .simply multiplying and does not lead to the desired result , as and are not independent ; the resulting conformations will not be distributed according to .the solution is given in fig .[ fig : simple ] ; it involves the probability distribution , the probability distribution over the radius of gyration for conformations sampled solely from the fragment library .the subscript stands for _ reference state _ as will be explained below .the solution generates conformations whose radii of gyration are distributed according to .the influence of is apparent in the fact that for conformations with a given , their local structure will be distributed according to .the latter distribution has a clear interpretation : it corresponds to sampling an infinite amount of conformations from a fragment library , and retaining only those with the desired . 
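the construction of fig . [ fig : simple ] can be checked on a small discrete example : with a toy ` fragment library ' f over three conformations , a coarse grained function y and a target distribution g over y , multiplying f by g yields the wrong marginal over y , whereas additionally dividing by the reference distribution implied by f recovers g exactly . the numbers below are arbitrary and purely illustrative .

```python
from collections import defaultdict

f = {"x1": 0.5, "x2": 0.3, "x3": 0.2}        # toy fine-grained distribution
y = {"x1": "A", "x2": "A", "x3": "B"}        # coarse-grained function y(x)
g = {"A": 0.25, "B": 0.75}                   # target distribution over y

def y_marginal(p):
    m = defaultdict(float)
    for x, px in p.items():
        m[y[x]] += px
    return dict(m)

f_ref = y_marginal(f)                        # reference distribution implied by f

naive = {x: f[x] * g[y[x]] for x in f}               # wrong combination
ratio = {x: f[x] * g[y[x]] / f_ref[y[x]] for x in f} # reference ratio

for label, p in (("naive", naive), ("ratio", ratio)):
    z = sum(p.values())
    print(label, y_marginal({x: v / z for x, v in p.items()}))
# naive: {'A': 0.571..., 'B': 0.428...}  -- does not match g
# ratio: {'A': 0.25, 'B': 0.75}          -- matches g exactly
```

the second printed marginal reproduces the target exactly , which is why the division by the reference distribution is essential .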
note that even if we chose the uniform distribution for , the resulting will_ not _ ( necessarily ) be uniform .intuitively , provides correct information about the radius of gyration , but no information about local structure ; provides approximately correct information about the structure of proteins on a local length scale , but is incorrect on a global scale ( leading to an incorrect probability distribution for the radius of gyration ) ; finally , the formula shown in fig .[ fig : simple ] merges these two complementary sources of information together .another viewpoint is that and are used to correct the shortcomings of .this construction is statistically rigorous , provided that and are proper probability distributions . after this illustrative example, we now review the use of pmfs in protein structure prediction , and discuss how pmfs can be understood and generalized in the theoretical framework that we briefly outlined here .many textbooks present pmfs as a simple consequence of the boltzmann distribution , as applied to pairwise distances between amino acids .this distribution , applied to a specific pair of amino acids , is given by : where is the distance , is boltzmann s constant , is the temperature and is the partition function , with .the quantity is the free energy assigned to the pairwise system .simple rearrangement results in the _ inverse boltzmann formula _ , which expresses the free energy as a function of : to construct a pmf ,one then introduces a so - called _ reference state _ with a corresponding distribution and partition function , and calculates the following free energy difference : the reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent .the second term involving and can be ignored , as it is a constant . in practice , is estimated from the database of known protein structures , while typically results from calculations or simulations .for example , could be the conditional probability of finding the atoms of a valine and a serine at a given distance from each other , giving rise to the free energy difference .the total free energy difference of a protein , , is then claimed to be the sum of all the pairwise free energies: where the sum runs over all amino acid pairs ( with ) and is their corresponding distance .it should be noted that in many studies does not depend on the amino acid sequence .intuitively , it is clear that a low free energy difference indicates that the set of distances in a structure is more likely in proteins than in the reference state .however , the physical meaning of these pmfs have been widely disputed since their introduction . indeed , why is it at all necessary to subtract a reference state energy ? what is the optimal reference state ? can pmfs be generalized and justified beyond pairwise distances , and if so , how ? before we discuss and clarify these issues , we discuss two qualitative justifications that were previously reported in the literature :the first based on a physical analogy , and the second using a statistical argument .the first , qualitative justification of pmfs is due to sippl , and based on an analogy with the statistical physics of liquids . for liquids , the potential of mean force is related to the _ pair correlation function _ , which is given by : where and are the respective probabilities of finding two particles at a distance from each other in the liquid and in the reference state__. 
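the display equations referred to in this and the preceding paragraphs were lost in extraction ; their standard forms , with symbol names chosen here for readability ( P for the distance distribution observed in native structures , Q_R for the reference distribution , and subscripts to distinguish the liquid from its reference ) , are :

```latex
% boltzmann distribution and inverse boltzmann formula for a pair of amino acids
P(r \mid a_i,a_j) \;=\; \tfrac{1}{Z}\, e^{-F(r \mid a_i,a_j)/kT},
\qquad
F(r \mid a_i,a_j) \;=\; -kT \ln P(r \mid a_i,a_j) \;-\; kT \ln Z

% free energy difference with respect to a reference state, and its sum over all pairs
\Delta F(r \mid a_i,a_j) \;=\; -kT \,\ln \frac{P(r \mid a_i,a_j)}{Q_R(r \mid a_i,a_j)},
\qquad
\Delta F_{\mathrm{total}} \;=\; \sum_{i<j} \Delta F(r_{ij} \mid a_i,a_j)

% pair correlation function of a liquid relative to its reference state,
% and the associated two-particle potential of mean force
g(r) \;=\; \frac{P_{\mathrm{liquid}}(r)}{P_{\mathrm{reference}}(r)},
\qquad
w(r) \;=\; -kT \ln g(r)
```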
_ _ for liquids, the reference state is clearly defined ; it corresponds to the ideal gas , consisting of non - interacting particles .the two - particle potential of mean force is related to by : according to the _reversible work theorem _ , the two - particle potential of mean force is the reversible work required to bring two particles in the liquid from infinite separation to a distance from each other .sippl justified the use of pmfs a few years after he introduced them for use in protein structure prediction by appealing to the analogy with the reversible work theorem for liquids . for liquids, can be experimentally measured using small angle x - ray scattering ; for proteins , is obtained from the set of known protein structures , as explained in the previous section .the analogy described above might provide some physical insight , but , as ben - naim writes in a seminal publication : `` the quantities , referred to as ` statistical potentials , ' ` structure based potentials , ' or ` pair potentials of mean force ' , as derived from the protein data bank , are neither ` potentials ' nor ` potentials of mean force , ' in the ordinary sense as used in the literature on liquids and solutions . ''another issue is that the analogy does not specify a suitable reference state for proteins .this is also reflected in the literature on statistical potentials ; the construction of a suitable reference state continues to be an active research topic . in the next section ,we discuss a second , more recent justification that is based on probabilistic reasoning . baker and co - workers _ _ justified pmfs from a bayesian point of view and used these insights in the construction of the coarse grained rosetta energy function ; samudrala and moult used similar reasoning for the rapdf potential .according to bayesian probability calculus , the conditional probability of a structure , given the amino acid sequence , can be written as : is proportional to the product of the likelihood times the prior . by assuming that the likelihood can be approximated as a product of pairwise probabilities , and applying bayes theorem , the likelihood can be written as : where the product runs over all amino acid pairs ( with ) , and is the distance between amino acids and . obviously , the negative of the logarithm of expression ( [ eq_rosettapairs ] ) has the same functional form as the classic pairwise distance pmfs , with the denominator playing the role of the reference state in eq .[ eq : classic ] .the merit of this explanation is the qualitative demonstration that the functional form of a pmf can be obtained from probabilistic reasoning .although this view is insightful it rightfully drew the attention to the application of bayesian methods to protein structure prediction there is a more quantitative explanation , which does not rely on the incorrect assumption of pairwise decomposability , and leads to a different , _ quantitative _ conclusion regarding the nature of the reference state .this explanation is given in the next section .expressions that resemble pmfs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction : how to improve an imperfect probability distribution over a first variable using a probability distribution over a second variable ( see fig .[ fig : full ] , fig .[ fig : simple ] and materials and methods ) .we assume that is a deterministic function of ; we write when necessary . 
in that case , and are called _ fine _ and _ coarse grained variables _ , respectively .when is a function of , the probability distribution automatically implies a probability distribution .this distribution has some unusual properties : ; and if , it follows that .typically , represents _ local _ features of protein structure ( such as backbone dihedral angles ) , while represents _ nonlocal _ features ( such as hydrogen bonding , compactness or pairwise distances ) .however , the same reasoning also applies to other cases ; for example , could represent information coming from experimental data , and could be embodied in an empirical force field as used in molecular mechanics ( see fig .[ fig : full ] ) .typically , the distribution in itself is not sufficient for protein structure prediction : it does not consider important nonlocal features such as hydrogen bonding , compactness or favorable amino acid interactions . as a result , is incorrect with respect to , and needs to be supplemented with a probability distribution that provides additional information . by construction , is assumed to be correct ( or at least useful ) .the above situation arises naturally in protein structure prediction .for example , could be a probability distribution over the radius of gyration , hydrogen bond geometry or the set of pairwise distances , and could be a fragment library or a probabilistic model of local structure . in fig.[fig : simple ] , we used the example of a distribution over the radius of gyration for and a fragment library for . obviously , sampling from a fragment library and retaining structures with the desired nonlocal structure ( radius of gyration , hydrogen bonding , etc . )is in principle possible , but in practice extremely inefficient . how can be combined with in a meaningful way ?as mentioned previously , simply multiplying the two distributions resulting in does not lead to the desired result as the two variables are obviously not independent .the correct solution follows from simple statistical considerations ( see materials and methods ) , and is given by the following expression : we use the notation , as this distribution implies the desired distribution for .the distribution in the denominator is the probability distribution that is implied by over the coarse grained variable .conceptually , dividing by takes care of the signal in with respect to the coarse grained variable .the ratio in this expression corresponds to the probabilistic formulation of a pmf , and corresponds to the reference state ( see materials and methods ) . in practice , is typically not evaluated directly , but brought in through conformational monte carlo sampling ( see materials and methods ) ; often sampling is based on a fragment library , although other methods are possible , including sampling from a probabilistic model or a suitable energy function .the ratio , which corresponds to the probabilistic formulation of a pmf , also naturally arises in the markov chain monte carlo ( mcmc ) procedure ( see materials and methods ) .an important insight is that , in this case , the conformational sampling method uniquely defines the reference state .thus , in the case of a fragment library , the reference distribution is the probability distribution over that is obtained by sampling conformations solely using the fragment library . 
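written out with symbols chosen here ( f for the fine grained sampling distribution , g for the target distribution over the coarse grained variable y , and f_R for the reference distribution that f implies over y ) , the expression referred to above and the reference distribution in its denominator take the form :

```latex
\tilde{p}(x) \;=\; \frac{g\bigl(y(x)\bigr)}{f_R\bigl(y(x)\bigr)}\, f(x),
\qquad
f_R(y) \;=\; \int \delta\bigl(y - y(x)\bigr)\, f(x)\, dx
```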
as the method we have introduced here invariably relies on the ratio of two probability distributions , one regarding protein structure and the other regarding a well - defined reference state , we refer to it as the _ reference ratio method _ . in the next section , we show that the standard pairwise distance pmfs can be seen as an approximation of the reference ratio method . in this section , we apply the reference ratio method to the standard , pairwise distance case . in the classic pmf approach , one considers the vector of pairwise distances between the amino acids . in this case , it is usually assumed that we can write where the product runs over all amino acid pairs ( with ) , and is their matching distance . clearly , the assumption that the joint probability can be written as a product of pairwise probabilities is not justified , but in practice this assumption often provides useful results . in order to obtain protein - like conformations , needs to be combined with an appropriate probability distribution that addresses the local features of the polypeptide chain . applying eq . [ eq : ratio ] to this case results in the following expression : where the denominator is the probability distribution over the pairwise distances as induced by the distribution . the ratio in this expression corresponds to the probabilistic expression of a pmf . the reference state is thus determined by : it reflects the probability of generating a set of pairwise distances using local structure information alone . obviously , as is conditional upon the amino acid sequence , the reference state becomes sequence dependent as well . we again emphasize that the assumption of pairwise decomposability in eq . [ eq : pairwise ] is incorrect . therefore , the application of the reference ratio method results in a useful approximation , at best . as a result , the optimal definition of the reference state also needs to compensate for the errors implied by the invalid assumption . as it is well established that distance dependent pmfs perform well with a suitable definition of the reference state , and the incorrect pairwise decomposability assumption impairs a rigorous statistical analysis , we do not discuss this type of pmf further . indeed , for pairwise distance pmfs , the main challenge lies in developing better probabilistic models of sets of pairwise distances . the pairwise distance pmfs currently used in protein structure prediction are thus not statistically rigorous , because they do not make use of a proper joint probability distribution over the pairwise distances , which are strongly intercorrelated due to the connectivity of molecules . a rigorous application of the reference ratio method would require the construction of a proper joint probability distribution over pairwise distances . this is certainly possible in principle , but currently , as far as we know , a challenging open problem and beyond the scope of this article . however , we have clarified that the idea of using a reference state is correct and valid , and that this state has a very precise definition . therefore , in the next two sections , we show instead how statistically valid quantities , similar to pmfs , can be obtained for very different coarse grained variables . as a first application of the reference ratio method , we consider the task of sampling protein conformations with a given probability distribution for the radius of gyration .
for , we chose a gaussian distribution with mean and standard deviation .this choice is completely arbitrary ; it simply serves to illustrate that the reference ratio method allows imposing an exact probability distribution over a certain feature of interest . applying eq .[ eq : ratio ] results in: for , we used torusdbn a graphical model that allows sampling of plausible backbone angles and sampled conditional on the amino acid sequence of ubiquitin ( see materials and methods ) . is the probability distribution of the radius of gyration for structures sampled solely from torusdbn , which was determined using generalized multihistogram mcmc sampling ( see materials and methods ) . in fig .[ fig : rg_plot ] , we contrast sampling from eq . [ eq : rg ] with sampling from . in the latter case , the reference state is not properly taken into account , which results in a significant shift towards higher radii of gyration .in contrast , the distribution of for the correct distribution , given by eq .[ eq : rg ] , is indistinguishable from the target distribution .this qualitative result is confirmed by the kullback - leibler divergence a natural distance measure for probability distributions expressed in bits between the target distribution and the resulting marginal distributions of .adding to the denominator diminishes the distance from 0.08 to 0.001 bits .for this particular pmf , the effect of using the correct reference state is significant , but relatively modest ; in the next section , we discuss an application where its effect is much more pronounced . here, we demonstrate that pmfs can be optimized iteratively , which is particularly useful if the reference probability distribution is difficult to estimate .we illustrate the method with a target distribution that models the hydrogen bonding network using a multinomial distribution .we describe the hydrogen bonding network ( ) with eight integers ( for details , see materials and methods ) .three integers represent the number of residues that do not partake in hydrogen bonds in -helices , -sheets and coils , respectively .the five remaining integers represent the number of hydrogen bonds within -helices , within -strands , within coils , between -helices and coils , and between -strands and coils , respectively . as target distribution over these eight integers ,we chose a multinomial distribution whose parameters were derived from the native structure of protein g ( see materials and methods ) . provides information , regarding protein g , on the number of hydrogen bonds and the secondary structure elements involved , but does not specify _ where _ the hydrogen bonds or secondary elements occur .as in the previous section , we use torusdbn as the sampling distribution ; we sample backbone angles conditional on the amino acid sequence of protein g. native secondary structure information was _ not _ used in sampling from torusdbn .the reference distribution , due to torusdbn , is very difficult to estimate correctly for several reasons : its shape is unknown and presumably complex ; its dimensionality is high ; and the data is very sparse with respect to -sheet content .therefore , can only be approximated , which results in a suboptimal pmf .a key insight is that one can apply the method iteratively until a satisfactory pmf is obtained ( see fig .[ fig : full ] , dashed line ) . 
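before turning to the iterative procedure , the radius of gyration application just described can be summarized in a short sketch : the reference distribution over the radius of gyration is first estimated from conformations sampled from the local model alone , after which the reference ratio supplies the pmf - like energy of a conformation . names such as sample_local_model and radius_of_gyration are placeholders for torusdbn - style sampling and standard geometry code , and the gaussian parameters below are illustrative defaults , not the values used in the experiment .

```python
import math
import numpy as np

def estimate_reference(sample_local_model, radius_of_gyration,
                       n_samples=100_000, bins=200, r_max=40.0):
    """histogram of r_g for conformations sampled from the local model alone."""
    rg = [radius_of_gyration(sample_local_model()) for _ in range(n_samples)]
    hist, edges = np.histogram(rg, bins=bins, range=(0.0, r_max), density=True)
    return hist, edges

def target_g(rg, mean=24.0, sd=3.0):
    """illustrative gaussian target distribution over the radius of gyration."""
    return math.exp(-0.5 * ((rg - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def reference_ratio_energy(rg, hist, edges, eps=1e-12):
    """-log [ g(r_g) / f_R(r_g) ]; lower values are more favorable, as in a pmf."""
    i = int(np.clip(np.searchsorted(edges, rg) - 1, 0, len(hist) - 1))
    return -math.log(target_g(rg) + eps) + math.log(hist[i] + eps)
```

the simple histogram estimate of the reference distribution stands in for the generalized multihistogram estimate used in the actual simulations .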
in each iteration , the ( complex ) reference distribution is approximated using a simple probability distribution ; we illustrate the method by using a multinomial distribution , whose parameters are estimated by maximum likelihood estimation in each iteration , using the conformations generated in the previous iteration . in the first iteration , we simply set the reference distribution equal to the uniform distribution .formally , the procedure works as follows . in iteration , the distribution is improved using the samples generated in iteration : where is the reference distribution estimated from the samples generated in the -th iteration , stems from torusdbn , and is the uniform distribution . after each iteration ,the set of samples is enriched in hydrogen bonds , and the reference distribution can be progressively estimated more precisely .note that in the first iteration , we simply use the product of the target and the sampling distribution ; no reference state is involved .[ fig : hbond_counts ] shows the evolution of the fractions versus the iteration number for the eight hydrogen bond categories ; the structures with minimum energy for all six iterations are shown in fig .[ fig : hbond_structures ] . in the first iteration ,the structure with minimum energy ( highest probability ) consists of a single -helix ; -sheets are entirely absent ( see fig . [fig : hbond_structures ] , structure 1 ) . already in the second iteration, -strands start to pair , and in the third and higher iterations complete sheets are readily formed .the iterative optimization of the pmf quickly leads to a dramatic enrichment in -sheet structures , as desired , and the fractions of the eight categories become very close to the native values ( fig.[fig : hbond_counts ] ) . 
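the iterative scheme can be paraphrased as follows : in each round , the samples of the previous round are used to re - fit a simple ( here multinomial ) model of the reference distribution , which then enters the denominator of the next round of sampling . the helper names below ( sample_with_weight , fit_multinomial , hbond_counts ) are placeholders , and the uniform initialization mirrors the first iteration described above .

```python
def iterate_reference_ratio(sample_with_weight, hbond_counts, fit_multinomial,
                            target_g, n_iterations=6):
    """sketch of the iterative estimation of the reference distribution.

    `fit_multinomial` is assumed to return a callable mapping a vector of
    hydrogen bond counts to a probability; all helper names are placeholders.
    """
    reference = lambda h: 1.0              # iteration 1: uniform reference
    all_samples = []
    for _ in range(n_iterations):
        # weight of a conformation x under the current pmf: g(h(x)) / reference(h(x));
        # the local model (e.g. torusdbn) acts as the sampler and is not evaluated here
        weight = lambda x, ref=reference: (target_g(hbond_counts(x))
                                           / ref(hbond_counts(x)))
        samples = sample_with_weight(weight)   # e.g. 50,000 mcmc samples
        all_samples.append(samples)
        # re-estimate the reference distribution from the samples just generated
        reference = fit_multinomial([hbond_counts(x) for x in samples])
    return all_samples
```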
the strengths and weaknesses of pmfs can be rigorously explained based on simple probabilistic considerations , which leads to some surprising new insights of direct practical relevance .first , we have made clear that pmfs naturally arise when two probability distributions need to be combined in a meaningful way .one of these distributions typically addresses local structure , and its contribution often arises from conformational sampling .each conformational sampling method thus requires its own reference state and corresponding reference distribution ; this is likely the main reason behind the large number of different reference states reported in the literature .if the sampling method is conditional upon the amino acid sequence , the reference state necessarily also depends on the amino acid sequence .second , conventional applications of pairwise distance pmfs usually lack two necessary features to make them fully rigorous : the use of a proper probability distribution over pairwise distances in proteins for , and the recognition that the reference state is rigorously defined by the conformational sampling scheme used , that is , .usually , the reference state is derived from external physical considerations .third , pmfs are not tied to pairwise distances , but generalize to any coarse grained variable .attempts to develop similar quantities that , for example , consider solvent exposure , relative side chain orientations , backbone dihedral angles or hydrogen bonds are thus , in principle , entirely justified .hence , our probabilistic interpretation opens up a wide range of possibilities for advanced , well - justified energy functions based on sound probabilistic reasoning ; the main challenge is to develop proper probabilistic models of the features of interest and the estimation of their parameters .strikingly , the example applications involving radius of gyration and hydrogen bonding that we presented in this article _ are _ statistically valid and rigorous , in contrast to the traditional pairwise distance pmfs .finally , our results reveal a straightforward way to optimize pmfs .often , it is difficult to estimate the probability distribution that describes the reference state . in that case, one can start with an approximate pmf , and apply the method iteratively . in each iteration ,a new reference state is estimated , with a matching probability distribution . in that way ,one iteratively attempts to sculpt an energy funnel .we illustrated this approach with a probabilistic model of the hydrogen bond network .although iterative application of the inverse boltzmann formula has been described before , its theoretical justification , optimal definition of the reference state and scope remained unclear .as the traditional pairwise distance pmfs used in protein structure prediction arise from the imperfect application of a statistically valid and rigorous procedure with a much wider scope , we consider it highly desirable that the name `` potential of mean force '' should be reserved for true , physically valid quantities . 
because the statistical quantities we discussed invariably rely on the use of a ratio of two probability distributions , one concerning protein structure and the other concerning the ( now well defined ) reference state , we suggest the name `` reference ratio distribution '' deriving from the application of the `` reference ratio method '' .pairwise distance pmfs , as used in protein structure prediction , are not physically justified potentials of mean force or free energies and the reference state does not depend on external physical considerations ; the same is of course true for our generalization . however , these pmfs are approximations of statistically valid and rigorous quantities , and these quantities can be generalized beyond pairwise distances to other aspects of protein structure .the fact that these quantities are not potentials of mean force or free energies is of no consequence for their statistical rigor or practical importance both of which are considerable .our results thus vindicate , formalize and generalize sippl s original and seminal idea .after about twenty years of controversy , pmfs or rather the statistical quantities that we have introduced in this article are ready for new challenges .we consider a joint probability distribution and a probability distribution over two variables of interest , and , where is a deterministic function of ; we write when relevant .note that because is a function of , it follows that ; and if , then .we assume that is a meaningful and informative distribution for .next , we note that implies a matching marginal probability distribution ( where the subscript refers to the fact that corresponds to the reference state , as we will show below): we consider the case where differs substantially from ; hence , can be considered as incorrect . on the other hand, we also assume that the conditional distribution is indeed meaningful and informative ( see next section ) .this distribution is given by : where is the delta function .the question is now how to combine the two distributions and each of which provide useful information on and in a meaningful way .before we provide the solution , we illustrate how this problem naturally arises in protein structure prediction . in protein structure prediction, is often embodied in a fragment library ; in that case , is a set of atomic coordinates obtained from assembling a set of polypeptide fragments . of course, could also arise from a probabilistic model , a pool of known protein structures , or any other conformational sampling method .the variable could , for example , be the radius of gyration , the hydrogen bond network or the set of pairwise distances .if is a deterministic function of , the two variables are called _ coarse grained _ and _ fine grained _ variables , respectively .for example , sampling a set of dihedral angles for the protein backbone uniquely defines the hydrogen bond geometry between any of the backbone atoms .above , we assumed that is a meaningful distribution .this is often a reasonable assumption ; fragment libraries , for example , originate from real protein structures , and conditioning on protein - like compactness or hydrogen bonding will thus result in a meaningful distribution .of course , sampling solely from is not an efficient strategy to obtain hydrogen bonded or compact conformations , as they will be exceedingly rare .we now provide the solution of the problem outlined in the previous section , and discuss its relevance to the construction of pmfs . 
a first step on the way to the solution is to note that the product rule of probability theory allows us to write : as only is given , we need to make a reasonable choice for . we assume , as discussed before , that is a meaningful choice , which leads to : in the next step , we apply the product formula of probability theory to the second factor , and obtain : the distribution has the correct marginal distribution . in the next two sections , we discuss how this straightforward result can be used to great advantage for understanding and generalizing pmfs . first , we show that the joint distribution specified by eq . [ eq : bh ] can be reduced to a surprisingly simple functional form . second , we discuss how this result can be used in mcmc sampling . in both cases , expressions that correspond to a pmf arise naturally . using the product rule of probability theory , eq . [ eq : bh ] can be written as : because the coarse grained variable is a deterministic function of the fine grained variable , is the delta function : finally , we integrate out the , now redundant , coarse grained variable from the expression : and obtain our central result ( eq . [ eq : ratio ] ) . sampling from will result in the desired marginal probability distribution . the influence of the fine grained distribution is apparent in the fact that is equal to . the ratio in this expression corresponds to the usual probabilistic formulation of a pmf ; the distribution corresponds to the reference state . in the next section , we show that pmfs also naturally arise when and are used together in metropolis - hastings sampling . here , we show that metropolis - hastings sampling from the distribution specified by eq . [ eq : bh ] , using as a proposal distribution , naturally results in expressions that are equivalent to pmfs . the derivation is also valid if the proposal distribution depends on the previous state , provided satisfies the detailed balance condition . according to the standard metropolis - hastings method , one can sample from a probability distribution by generating a markov chain where each state depends only on the previous state . the new state is generated using a proposal distribution , which includes as a special case . according to the metropolis - hastings method , the proposal is accepted with a probability : where is the starting state , and is the next proposed state . we assume that the proposal distribution satisfies the detailed balance condition : as a result , we can always write eq . [ eq : methas ] as : the metropolis - hastings expression ( eq . [ eq : methas ] ) , applied to the distribution specified by eq . [ eq : bh ] and using or as the proposal distribution , results in : which reduces to : hence , we see that the metropolis - hastings method requires the evaluation of ratios of the form when or is used as the proposal distribution ; these ratios correspond to the usual probabilistic formulation of a pmf . finally , when is a deterministic function of , the proposal distribution reduces to or , and eq . [ eq : mhratio ] becomes : conformational sampling from a suitable was done using torusdbn as implemented in phaistos ; backbone angles ( and ) were sampled conditional on the amino acid sequence . we used standard fixed bond lengths and bond angles in constructing the backbone coordinates from the angles , and represented all side chains ( except glycine and alanine ) with one dummy atom with a fixed position .
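in code , the acceptance test derived above only needs the two reference - ratio factors , because the proposal density of the local model cancels . the sketch below assumes a proposal drawn from the local model ( e.g. torusdbn - style resampling of backbone angles ) and placeholder functions target_g and reference_f_R for the two distributions over the coarse grained variable .

```python
import math
import random

def metropolis_hastings_step(x, propose_from_local_model, coarse, target_g,
                             reference_f_R):
    """one step of sampling from  f(x) * g(y(x)) / f_R(y(x))  with proposal f."""
    x_new = propose_from_local_model(x)
    y_old, y_new = coarse(x), coarse(x_new)
    # the factors of the proposal distribution f cancel, leaving the pmf-like ratio
    acceptance = min(1.0, (target_g(y_new) / reference_f_R(y_new)) /
                          (target_g(y_old) / reference_f_R(y_old)))
    return x_new if random.random() < acceptance else x
```

in practice the ratio is evaluated on a log scale , i.e. as a difference of pmf - like energies , to avoid numerical underflow .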
for the radius of gyration application, we first determined using the multi - canonical mcmc method to find the sampling weights that yield a flat histogram .sampling from the resulting joint distribution ( eq .[ eq : rg ] ) was done using the same method . in both cases , we used 50 million iterations ; the bin size was 0.08 .sampling from torusdbn was done conditional on the amino acid sequence of ubiquitin ( 76 residues , pdb code 1ubq ) . for the hydrogen bond application ,sampling from the pmfs was done in the -ensemble , using the metropolis - hastings algorithm and the generalized multihistogram method for updating the weights . in each iteration , 50,000 samples ( out of 50 million metropolis - hastings steps )were generated , and the parameters of the multinomial distribution were subsequently obtained using maximum likelihood estimation .hydrogen bonds were defined as follows : the distance is below 3.5 , and the angles formed by and are both greater than 100 .each carbonyl group was assumed to be involved in at most one hydrogen bond ; in case of multiple hydrogen bond partners , the one with the lowest distance was selected .each residue was assigned to one of the eight possible hydrogen bond categories based on the presence of hydrogen bonding at its carbonyl group and the secondary structure assignments ( for both bond partners ) by torusdbn . the target distribution the multinomial distribution used in eq .[ eq : hbond_iterative ] was obtained by maximum likelihood estimation using the number of hydrogen bonds , for all eight categories , in the native structure of protein g ( 56 residues , pdb code 2gb1 ) . sampling from torusdbnwas done conditional on the amino acid sequence of protein g ; native secondary structure information was _ not _ used .t.h . , m.b . and m.p .are joint first authors .t.h . and j.f.b .developed the theory .m.b . and m.p .performed the simulations .the remaining authors contributed new tools .t.h . wrote the paper .we acknowledge funding by the _ danish program commission on nanoscience , biotechnology and it _( nabiit , project : `` simulating proteins on a millisecond time - scale '' , 2106 - 06 - 0009 ) , the _ danish research council for technology and production sciences _ ( ftp , project : `` protein structure ensembles from mathematical models '' , 274 - 09 - 0184 ) and the _ danish council for independent research _ (fnu , project : `` a bayesian approach to protein structure determination '' , 272 - 08 - 0315 ) . 
anfinsen cb ( 1973 ) principles that govern the folding of protein chains . science 181 : 223230 . moult j ( 1997 ) comparison of database potentials and molecular mechanics force fields . curr opin struct biol 7 : 194199 . shen my , sali a ( 2006 ) statistical potential for assessment and prediction of protein structures . protein sci 15 : 25072524 . tanaka s , scheraga ha ( 1976 ) medium- and long - range interaction parameters between amino acids for predicting three - dimensional structures of proteins . macromolecules 9 : 945950 . miyazawa s , jernigan r ( 1985 ) estimation of effective interresidue contact energies from protein crystal structures : quasi - chemical approximation . macromolecules 18 : 534552 . miyazawa s , jernigan r ( 1999 ) an empirical energy potential with a reference state for protein fold and sequence recognition . proteins 36 : 357369 . sippl mj ( 1990 ) calculation of conformational ensembles from potentials of mean force . an approach to the knowledge - based prediction of local structures in globular proteins . j mol biol 213 : 859883 . chandler d ( 1987 ) introduction to modern statistical mechanics . oxford university press , usa . mcquarrie d ( 2000 ) statistical mechanics . university science books , usa . finkelstein a , badretdinov a , gutin a ( 1995 ) why do protein architectures have boltzmann - like statistics ? proteins struct func gen 23 : 142150 . rooman m , wodak s ( 1995 ) are database - derived potentials valid for scoring both forward and inverted protein folding ? protein eng 8 : 849 - 858 . thomas pd , dill ka ( 1996 ) statistical potentials extracted from protein structures : how accurate are they ? j mol biol 257 : 457469 . ben - naim a ( 1997 ) statistical potentials extracted from protein structures : are these meaningful potentials ? j chem phys 107 : 3698 - 3706 . koppensteiner wa , sippl mj ( 1998 ) knowledge - based potentials back to the roots . biochemistry mosc 63 : 247252 . shortle d ( 2003 ) propensities , probabilities , and the boltzmann hypothesis . protein sci 12 : 12981302 . kirtay c , mitchell j , lumley j ( 2005 ) knowledge based potentials : the reverse boltzmann methodology , virtual screening and molecular weight dependence . qsar & combinatorial sci 24 : 527536 . muegge i ( 2006 ) pmf scoring revisited . j med chem 49 : 58955902 . simons kt , kooperberg c , huang e , baker d ( 1997 ) assembly of protein tertiary structures from fragments with similar local sequences using simulated annealing and bayesian scoring functions . j mol biol 268 : 209225 . colubri a , jha a , shen m , sali a , berry r , et al . ( 2006 ) minimalist representations and the importance of nearest neighbor effects in protein folding simulations . j mol biol 363 : 835857 . sippl mj ( 1993 ) recognition of errors in three - dimensional structures of proteins . proteins 17 : 355362 . eramian d , shen m , devos d , melo f , sali a , et al .
( 2006 ) a composite score for predicting errors in protein structure models .protein sci 15 : 16531666 .rykunov d , fiser a ( 2010 ) new statistical potential for quality assessment of protein models and a survey of energy functions .bmc bioinformatics 11 : 128 .jones dt , taylor wr , thornton jm ( 1992 ) a new approach to protein fold recognition .nature 358 : 8689 .mjek p , elber r ( 2009 ) a coarse - grained potential for fold recognition and molecular dynamics simulations of proteins .proteins 76 : 822836 .gohlke h , hendlich m , klebe g ( 2000 ) knowledge - based scoring function to predict protein - ligand interactions1 .j mol biol 295 : 337356 .gilis d , rooman m ( 1997 ) predicting protein stability changes upon mutation using database - derived potentials : solvent accessibility determines the importance of local versus non - local interactions along the sequence1 .j mol biol 272 : 276290 .gilis d , rooman m ( 2000 ) popmusic , an algorithm for predicting protein mutant stability changes .application to prion proteins .protein eng 13 : 849856 .su y , zhou a , xia x , li w , sun z ( 2009 ) quantitative prediction of protein - protein binding affinity with a potential of mean force considering volume correction .protein sci 18 : 25502558 .chandler d ( 2005 ) interfaces and the driving force of hydrophobic assembly .nature 437 : 640647 .bowman gr , pande vs ( 2009 ) simulated tempering yields insight into the low - resolution rosetta scoring functions .proteins 74 : 777788 .shmygelska a , levitt m ( 2009 ) generalized ensemble methods for de novo structure prediction .proc natl acad sci u s a 106 : 14151420 .bryngelson j , wolynes p ( 1987 ) spin glasses and the statistical mechanics of protein folding .proc natl acad sci u s a 84 : 75247528 .leopold p , montal m , onuchic j ( 1992 ) protein folding funnels : a kinetic approach to the sequence - structure relationship .proc natl acad sci u s a 89 : 87218725 .dill k , chan h ( 1997 ) from levinthal to pathways to funnels .nat struct biol 4 : 1019 .reith d , ptz m , mller - plathe f ( 2003 ) deriving effective mesoscale potentials from atomistic simulations .j comput chem 24 : 16241636 .fain b , levitt m ( 2003 ) funnel sculpting for in silico assembly of secondary structure elements of proteins .proc natl acad sci u s a 100 : 1070010705 .sippl mj , ortner m , jaritz m , lackner p , flockner h ( 1996 ) helmholtz free energies of atom pair interactions in proteins .fold des 1 : 28998 .zhang c , liu s , zhou h , zhou y ( 2004 ) an accurate , residue - level , pair potential of mean force for folding and binding based on the distance - scaled , ideal - gas reference state .protein sci 13 : 400411 .cheng j , pei j , lai l ( 2007 ) a free - rotating and self - avoiding chain model for deriving statistical potentials based on protein structures .biophys j 92 : 38683877 .rykunov d , fiser a ( 2007 ) effects of amino acid composition , finite size of proteins , and sparse statistics on distance - dependent statistical pair potentials . proteins 67 : 559568 .bernard b , samudrala r ( 2008 ) a generalized knowledge - based discriminatory function for biomolecular interactions .proteins 76 : 115128 .samudrala r , moult j ( 1998 ) an all - atom distance - dependent conditional probability discriminatory function for protein structure prediction .j mol biol 275 : 895916 .pearl j ( 1988 ) probabilistic reasoning in intelligent systems , morgan kaufmann , san francisco , usa , chapter 3 . 
pp .108115 .lazaridis t , karplus m ( 2000 ) effective energy functions for protein structure prediction .curr opin struct biol 10 : 139145 .boomsma w , mardia kv , taylor cc , ferkinghoff - borg j , krogh a , et al .( 2008 ) a generative , probabilistic model of local protein structure .proc natl acad sci u s a 105 : 89328937 .sippl m , hendlich m , lackner p ( 1992 ) assembly of polypeptide and protein backbone conformations from low energy ensembles of short fragments : development of strategies and construction of models for myoglobin , lysozyme , and thymosin .protein sci 1 : 625640 .hamelryck t , kent j , krogh a ( 2006 ) sampling realistic protein conformations using local structural bias .plos comput biol 2 : e131 .zhao f , peng j , debartolo j , freed k , sosnick t , et al .( 2010 ) a probabilistic and continuous model of protein conformational space for template - free modeling .j comput biol 17 : 783 - 798 .hamelryck t ( 2009 ) probabilistic models and machine learning in structural bioinformatics .stat methods med res 18 : 505526 .kullback s , leibler r ( 1951 ) on information and sufficiency .annals math stat 22 : 7986 .zhou h , zhou y ( 2002 ) distance - scaled , finite ideal - gas reference state improves structure - derived potentials of mean force for structure selection and stability prediction .protein sci 11 : 27142726 .bowie j , luthy r , eisenberg d ( 1991 ) a method to identify protein sequences that fold into a known three - dimensional structure .science 253 : 164164 .liithy r , bowie j , eisenberg d ( 1992 ) assessment of protein models with three - dimensional profiles .nature 356 : 8385 .buchete nv , straub je , thirumalai d ( 2004 ) development of novel statistical potentials for protein fold recognition .curr opin struct biol 14 : 225232 .rooman m , kocher j , wodak s ( 1991 ) prediction of protein backbone conformation based on seven structure assignments : influence of local interactions .j mol biol 221 : 961979 .kocher j , rooman m , wodak s ( 1994 ) factors influencing the ability of knowledge - based potentials to identify native sequence - structure matches .j mol biol 235 : 15981613 .simons kt , ruczinski i , kooperberg c , fox ba , bystroff c , et al . ( 1999 ) improved recognition of native - like protein structures using a combination of sequence - dependent and sequence - independent features of proteins .proteins 34 : 8295 .thomas p , dill k ( 1996 ) an iterative method for extracting energy - like quantities from protein structures .proc natl acad sci u s a 93 : 1162811633 .huang s , zou x ( 2006 ) an iterative knowledge - based scoring function to predict protein - ligand interactions : i. derivation of interaction potentials .j comp chem 27 : 18661875 .gilks w , richardson s , spiegelhalter d ( 1996 ) markov chain monte carlo in practice . chapman & hall / crc , usa . borg m , mardia k , boomsma w , frellsen j , harder t , et al .( 2009 ) a probabilistic approach to protein structure prediction : phaistos in casp9 . 
in : gusnanto a , mardia k , fallaize c , editors , lasr2009 - statistical tools for challenges in bioinformatics .leeds university press , leeds , uk , pp .ferkinghoff - borg j ( 2002 ) optimized monte carlo analysis for generalized ensembles .eur phys j b 29 : 481 - 484 .hesselbo b , stinchcombe r ( 1995 ) monte carlo simulation and global optimization without parameters .phys rev lett 74 : 21512155 .delano wl ( 2002 ) the pymol molecular graphics system .palo alto , ca , usa : delano scientific .* illustration of the central idea presented in this article . * in this example , the goal is to sample conformations with a given distribution for the radius of gyration , and a plausible local structure . could , for example , be derived from known structures in the protein data bank ( pdb , left box ) . is a probability distribution over local structure , typically embodied in fragment library ( right box ) . in order to combine and in a meaningful way ( see text ) ,the two distributions are multiplied and divided by ( formula at the bottom ) ; is the probability distribution over the radius of gyration for conformations sampled solely from the fragment library ( that is , ) .the probability distribution will generate conformations with plausible local structures ( due to ) , while their radii of gyration will be distributed according to , as desired .this simple idea lies at the theoretical heart of the pmf expressions used in protein structure prediction . ] * general statistical justification of pmfs . *the goal is to combine a distribution over a fine grained variable ( top right ) , with a probability distribution over a coarse grained variable ( top left ) . could be , for example , embodied in a fragment library ( ) , a probabilistic model of local structure ( ) or an energy function ( ) ; could be , for example , the radius of gyration , the hydrogen bond network , or the set of pairwise distances . usually reflects the distribution of in known protein structures ( pdb ) , but could also stem from experimental data ( ) . sampling from results in a distribution that differs from .multiplying and does not result in the desired distribution for either ( red box ) ; the correct result requires dividing out the signal with respect to due to ( green box ) .the _ reference _ distribution in the denominator corresponds to the contribution of the reference state in a pmf . if is only approximately known , the method can be applied iteratively ( dashed arrow ) . in that case , one attempts to iteratively sculpt an energy funnel .the procedure is statistically rigorous provided and are proper probability distributions ; this is usually not the case for conventional pairwise distance pmfs . ] * a pmf based on the radius of gyration . *the goal is to adapt a distribution which allows sampling of local structures such that a given target distribution is obtained . for , we used the amino acid sequence of ubiquitin .sampling from alone results in a distribution with an average of about 27 ( triangles ) .sampling using the correct expression ( open circles ) , given by eq .[ eq : rg ] , results in a distribution that coincides with the target distribution ( solid line ) .not taking the reference state into account results in a significant shift towards higher ( black circles ) . ]* iterative estimation of a pmf .* for each of the eight hydrogen bond categories ( see text ) , the black bar to the right denotes the fraction of occurrence in the native structure of protein g. 
the gray bars denote the fractions of the eight categories in samples from each iteration ; the first iteration is shown to the left in light gray . in the last iteration ( iteration 6 ; dark gray bars , right ) the values are very close to the native values for all eight categories . note that hydrogen bonds between β-strands are nearly absent in the first iteration ( category ) . ]
understanding protein structure is of crucial importance in science , medicine and biotechnology . for about two decades , knowledge based potentials based on pairwise distances so - called `` potentials of mean force '' ( pmfs ) have been center stage in the prediction and design of protein structure and the simulation of protein folding . however , the validity , scope and limitations of these potentials are still vigorously debated and disputed , and the optimal choice of the reference state a necessary component of these potentials is an unsolved problem . pmfs are loosely justified by analogy to the reversible work theorem in statistical physics , or by a statistical argument based on a likelihood function . both justifications are insightful but leave many questions unanswered . here , we show for the first time that pmfs can be seen as approximations to quantities that do have a rigorous probabilistic justification : they naturally arise when probability distributions over different features of proteins need to be combined . we call these quantities `` reference ratio distributions '' deriving from the application of the `` reference ratio method '' . this new view is not only of theoretical relevance , but leads to many insights that are of direct practical use : the reference state is uniquely defined and does not require external physical insights ; the approach can be generalized beyond pairwise distances to arbitrary features of protein structure ; and it becomes clear for which purposes the use of these quantities is justified . we illustrate these insights with two applications , involving the radius of gyration and hydrogen bonding . in the latter case , we also show how the reference ratio method can be iteratively applied to sculpt an energy funnel . our results considerably increase the understanding and scope of energy functions derived from known biomolecular structures .
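to make the reference ratio construction above concrete , the following toy python / numpy sketch reweights samples from a fine - grained model so that a coarse - grained variable follows a prescribed target distribution . the gaussian `` local structure '' model , the root - mean - square stand - in for the radius of gyration , and the narrow target are illustrative assumptions , not the authors' protein model or its implementation .

```python
import numpy as np

def reference_ratio_weights(samples, coarse_fn, target_pdf, bins=50):
    """Importance weights w(x) = q(y(x)) / p_ref(y(x)): p_ref is a histogram
    estimate of the coarse variable's distribution under the fine-grained model
    that produced `samples`, i.e. the 'reference state' of the construction."""
    y = np.array([coarse_fn(x) for x in samples])
    hist, edges = np.histogram(y, bins=bins, density=True)
    idx = np.clip(np.digitize(y, edges) - 1, 0, bins - 1)
    p_ref = np.maximum(hist[idx], 1e-12)      # guard against empty bins
    return target_pdf(y) / p_ref

# toy setup (all names below are illustrative assumptions):
# fine-grained model = 10 i.i.d. standard normals ("local structure"),
# coarse variable    = their root-mean-square ("radius of gyration"),
# target q           = a narrow Gaussian centred at 0.5 instead of the prior's ~1.0
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 10))
rg = lambda x: np.sqrt(np.mean(x ** 2))
target = lambda y: np.exp(-0.5 * ((y - 0.5) / 0.05) ** 2)
w = reference_ratio_weights(X, rg, target)
keep = rng.choice(len(X), size=5000, p=w / w.sum())
print(np.mean([rg(x) for x in X[keep]]))      # roughly 0.5: the target is recovered
```

the histogram estimate of the reference distribution plays the role of the reference state ; in the iterative hydrogen - bond example above it would be re - estimated after each round of sampling .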
in recent years , indoor visible light communication by light emitting diodes ( leds ) has attracted extensive academic attention ( and references therein ) , driven by advancements in designing and manufacturing of leds .adoption of leds as lighting source can significantly reduce energy consumption and at the same time offering high speed wireless communication , which is the primary focus of visible light communication ( vlc ) research .most of the existing schemes employ blue leds with a yellow phosphor coating , while with red / green / blue ( rgb ) leds higher data rate is possible because of wavelength division multiplexing . with rgb leds , color - shift keying ( csk )was recommended by the ieee 802.15.7 visible light communication task group .a few authors have promoted this idea by designing constellations using signal processing tools .drost et al . proposed an efficient constellation designed for csk based on billiard algorithm .monteiro et al . designed the csk constellation using an interior point method , operating with peak and color cross - talk constraints .bai et al .considered the constellation design for csk to minimize the bit error rate ( ber ) subject to some lighting constraints .despite the fact that the three - dimensional constellation design problems have been formulated in , a few important questions have not been addressed .they include how to compare a system with csk employed and a conventional decoupled system , the constellation design , and the peak - to - average power ratio ( papr ) reduction . in this paper , we propose a novel constellation design scheme in high dimensional space , termed csk - advanced . in our design , arbitrary number of red , blue , and green leds can be selected . with any average optical intensity and average color selected, we formulate an optimization problem to minimize the system symbol error rate ( ser ) by maximizing the minimum euclidean distance ( med ) among designed symbol vectors .further , other important lighting factors such as color rendering index ( cri ) and luminous efficacy rate ( ler ) are also considered .further , optical papr is included as an additional constraint .the remainder of this paper is organized as follows . in sectionii , we consider the constellation design problem assuming ideal channel . in section iii , we consider the constellation design for channel with cross - talks ( cwc ) . an svd - based pre - equalizer is applied and the constellations are redesigned subject to a transformed set of constraints . in section iv , we discuss the optimization of constellations under arbitrary color illuminations . in sectionv , we compare our scheme with a decoupled scheme and provide performance evaluation .finally , section vi provides conclusions .the system diagram is shown in fig .1 , with red leds , green leds , and blue leds . in one symbol interval of length , a random bit sequence of size first mapped by a bsa mapper to a symbol vector of size , where . the symbol is chosen from a constellation where denotes the constellation size .each component is applied to the corresponding led as intensity to transmit , such that .the intensity vector is then multiplied with the optical channel of size .the output of the color filters can be written as follows , where is the electro - optical conversion factor , is the photodetector responsivity . without loss of generality ( w.l.o.g . 
) , assume .the noise is the combination of shot noise and thermal noise , assuming .it should be noted that the imaging detector is followed by imperfect color filters such that cross - talks may exist .the received intensity vector is passed through a symbol detector to obtain an estimate of the transmitter symbol , which is then de - mapped by to recover the bit sequence .we assume line - of - sight ( los ) links without inter - symbol interference .we first consider ideal channel , i.e. . define a joint constellation vector ^t ] is a selection matrix with all zeros except for an identity matrix at the -th block .our objective is to minimize the system ser subject to several visible lighting constraints .we aim to max the minimum med , i.e. , maximize such that the following holds for all where the parameter will be optimized and we obtain through this optimization . , ( kronecker product ) , of size has all zeros except the -th element being one , , and the distance constraints are nonconvex in .we approximate by a first order taylor series approximation around , i.e. where is either a random initialization point or a previously attained estimate .a designer may wish to constrain the average color , as non - white illumination could be useful in many places .the average of all leds intensities can be written as the following vector ^t.\end{aligned}\ ] ] we consider the average power of each color , i.e. , a vector given as follows , ^t=\mathbf{k}\bar{\mathbf{c}}=\mathbf{k}\bar{\mathbf{j}}\boldsymbol{\mathbf{c_t}},\ ] ] where is a selection matrix summing up r / g / b intensities accordingly , is the average optical power , and where . by properly selecting , the cri and ler constraints can be met . for each led , the optical papr is defined as the ratio of the highest power over the average power .mathematically , the papr of the -th led can be written as follows , ,\ ] ] where denotes the summation of all elements of vector , is a selection matrix of size , denotes the largest element of vector .the papr of an individual led can be constrained as follows .\ ] ] cri stands for a quantitative measure of ability of light sources to reproduce the colors of objects faithfully , comparing with an ideal lighting source .ler measures how well light sources creates visible light .it is the ratio of luminous flux to power .depending on context , the power can be either the radiant flux of the source s output , or it can be the total power ( electric power , chemical energy , or others ) consumed by the source .the cri and ler are important practical lighting constraints . by properly selecting ,specific cri and ler constraints can be met . when the problem can be formulated as follows , which can be straightforwardly proven as a convex optimization problem . with the first three constraints , it is termed as a regular optimization problem and with all constraints a papr - constrained problem . byiteratively solving , a local optimal constellation can be obtained . with multiple runs starting from different initial point , the best of solutions , is selected .the channel cross - talks exist when the transmitting led s emission spectral does not match the receiver filter s transmission spectral .it can be described by the following structure assuming single rgb led is employed based on and experiments , where the parameter characterizes both attenuation and interference effects . by singular value decomposition ( svd ) , , where and are unitary matrices of size , is a diagonal matrix of size . 
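as a rough illustration of the successive - linearization scheme described above ( the `` regular '' problem , before cross - talk is introduced ) , the sketch below maximizes the minimum pairwise distance with the nonconvex distance constraints replaced by first - order expansions around the previous iterate . it is a simplified stand - in , not the paper's exact program : cvxpy is assumed , and a per - led average - intensity constraint plus a per - led papr cap replace the full color / cri / ler constraints .

```python
import numpy as np
import cvxpy as cp

def design_constellation(M=8, n_led=3, p_avg=1.0, papr=3.0, n_rounds=30, seed=0):
    """One convex program per round: pairwise squared distances are replaced by
    their first-order expansion around the previous constellation C."""
    rng = np.random.default_rng(seed)
    C = rng.uniform(0.0, 2.0 * p_avg, size=(M, n_led))        # starting point
    for _ in range(n_rounds):
        X = cp.Variable((M, n_led), nonneg=True)               # LED intensities per symbol
        d = cp.Variable()                                      # surrogate min squared distance
        cons = [cp.sum(X, axis=0) == M * p_avg * np.ones(n_led)]          # average intensity
        cons += [cp.max(X[:, k]) <= papr * p_avg for k in range(n_led)]   # per-LED PAPR cap
        for i in range(M):
            for j in range(i + 1, M):
                g = C[i] - C[j]
                lin = 2 * cp.sum(cp.multiply(g, X[i] - X[j])) - g @ g
                cons.append(lin >= d)
        cp.Problem(cp.Maximize(d), cons).solve()
        C = X.value
    return C

C = design_constellation()
dmin = min(np.linalg.norm(C[i] - C[j]) for i in range(len(C)) for j in range(i + 1, len(C)))
print(round(dmin, 3))    # achieved minimum Euclidean distance of the designed constellation
```

the cross - talk case below only changes the feasible set through the svd transform ; the same loop applies to the transformed variables .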
in this case, is the dimension of space for constellation design instead of .we apply a pre - equalizer at the transmitter - side and a post - equalizer at the receiver - side to equalize the channel and have the same distribution , since is unitary . ] .define , and the optimization in can be transformed as it should be noted that now is of dimension , i.e. , the constellation is designed in a -dimensional space . to further minimize the system ber with a fixed ser , a good bit - to - symbol mapping function as shown in fig .1 need to be designed . in this paper, we apply the binary switching ( bsa ) algorithm to optimize the mapping .since it is not the main focus of this paper , the details of bsa are omitted ( we refer the readers interested to ) .we provide numerical illustration of advantages of the csk - advanced with one rgb led , i.e. . both the csk - advanced and the conventional decoupled scheme can work with arbitrary color illumination .with one rgb led , ) ] and the average power . for the conventional scheme ,each led can simply take value independently from the following binary constellations .\ ] ] the med for each branch is . for our scheme ,the optimized constellation is as follows ( column 1 to 4 and column 5 to 8 are separated due to space limit . ) the med equals , such that we could expect a lower ser with sufficient snr .the asymptotic power gain is approximately ( = ) .we choose the average color as ^t ] . with the conventional scheme ,the leds take value from constellations ~~\mathcal{c}_{e , g}=[0,3]~~\mathcal{c}_{e , b}=[0,3].\ ] ] with our scheme , the optimized constellation is the med is approximately , which is smaller than med of one branch but larger than meds of two branches of the conventional scheme ..med with varying papr and average color .[ cols="^,^,^,^,^",options="header " , ] instead of redesign the constellations subject to a transformed set of constraints due to employment of a pre - equalizer , zero - forcing ( zf ) or linear minimum - mean - squared - error ( lmmse ) based post - equalizer can be employed at the receiver to mitigate the cross - talks .5 shows the corresponding bers against increased crosstalks for a balanced system employing different schemes when osnr is fixed to 5db .it is seen that our svd - based scheme significantly outperforms systems employing either zf or lmmse post - equalizers .6 shows the bers against osnr for a balanced system when is fixed to 0.1 . with thisparticular parameters chosen , there is no significant difference between zf and lmmse based system performance and therefore we only included the lmmse based results . with osnr=5db for a balanced system .] for a balanced system . ]a novel constellation design scheme , named csk - advanced , for vlc with arbitrary number of rgb leds , is proposed in this paper . with both optimized constellation and bits - to - symbols mapping , significant power gainsare observed compared with conventional decoupled systems . for more unbalanced color illumination, the larger power gains can be expected . to avoid excessive nonlinear distortion , optical papr constraintsis included into the optimization .furthermore , to deal with cwc , an svd - based pre - equalizer is introduced .it is shown by simulations that the proposed scheme significantly outperforms various benchmarks employing zf or lmmse - based post - equalizers .9 s. watson , m. tan , s.p .najda , p. perlin , m. leszczynski , g. targowski , s. grzanka , and a. e. 
kelly , `` visible light communications using a directly modulated 422 nm gan laser diode , '' _ opt .3792 - 3794 , 2013 . c. chen , p. wu , h lu , y. lin , j. wen , and f. hu , `` bidirectional phase - modulated hybrid cable television / radio - over - fiber lightwave transport systems , '' _ opt .404 - 406 , 2013 .j. k. kim and e. f. schubert , `` transcending the replacement paradigm of solid - state lighting , '' _ opt . express _ ,21835 - 21842 , 2008 . h. elgala and t.d.c .little , `` reverse polarity optical - ofdm ( rpo - ofdm ) : dimming compatible ofdm for gigabit vlc links , '' _ opt .24288 - 24299 , 2013 . j. vucic and k.d .langer , `` high - speed visible light communications : state - of - the - art , '' _ ofc / nfoec _ , pp .1 - 3 , mar .y. wang , y. wang , n. chi , j. yu , and h. shang , `` demonstration of 575-mb / s downlink and 225-mb / s uplink bi - directional scm - wdm visible light communication using rgb led and phosphor - based led , '' _ opt .21 , no . 1 , pp . 1203 - 1208 , 2013 .ieee 802.15.7 visible light communication task group , https://mentor.ieee.org/802.15/documents?is group=0007 .r.j . drost and b.m .sadler , `` constellation design for color - shift keying using billiards algorithms , '' _ ieee globecom workshop _980 - 984 , dec . 2010 .e. monteiro and s. hranilovic , `` constellation design for color - shift keying using interior point methods , '' _ ieee owc - ws _ , pp .1224 - 1228 , dec . 2012 .b. bai , q. he , z. xu , and y. fan , `` the color shift key modulation with non - uniform signaling for visible light communication , '' _ ieee iccc - ws - owcc _ , pp .37 - 42 , aug . 2012 .z. yu , r.j .baxley , and g.t .zhou , `` peak - to - average power ratio and illumination - to - communication efficiency considerations in visible light ofdm systems , '' _ ieee icassp _ , pp .5397 - 5401 , may . 2013 .l. zeng , d. obrien , h. minh , g. faulkner , k. lee , d. jung , y. oh , and e. won , `` high data rate multiple input multiple output ( mimo ) optical wireless communications using white led lighting , '' jsac , vol .27 , no.9 , pp .1654 - 1662 , dec . 2009 .broadbent , `` a critical review of the development of the cie1931 rgb color - matching functions , '' _ color research and applications _267 - 272 , aug . 2004 .k. zeger and a. gersho , `` pseudo - gray coding , '' _ ieee trans .2147 - 2158 , dec . 1990 .m. beko and r. dinis , `` designing good multi - dimensional constellations , '' _ ieee wireless commun ._ , vol . 1 , no . 3 , pp .221 - 224 , 2012 .f. schreckenbach , n. gortz , j. hagenauer , and g. bauch , `` optimization of symbol mappings for bit - interleaved coded modulation with iterative decoding , '' _ ieee commun ._ , vol . 7 , no . 12 , pp . 593 - 595 , decj. karout , e. agrell , k. szczerba , and m. karlsson , `` optimizing constellations for single - subcarrier intensity - modulated optical systems , '' _ ieee trans .inf . theory58 , no . 7 , pp . 4645 - 4659 , apr .cie ( 1999 ) , `` colour rendering ( tc 1 - 33 closing remarks ) , '' _ publication 135/2 , vienna : cie central bureau , isbn 3 - 900734 - 97 - 6_. a. stimson , `` photometry and radiometry for engineers , '' _ new york : wiley and son_. t.p .crauss , m.d .zoltowski , and g. leus , `` simple mmse equalizers for cdma downlink to restore chip sequence : comparison to zero - forcing and rake , '' _ ieee icassp _ , vol . 5 , pp .2865 - 2868 2000 .
in this paper , we propose a novel high dimensional constellation design scheme for visible light communication ( vlc ) systems employing red / green / blue light emitting diodes ( rgb leds ) . it is in fact a generalized color shift keying ( csk ) scheme which does not suffer efficiency loss due to a constrained sum intensity for all constellation symbols . crucial lighting requirements are included as optimization constraints . to control non - linear distortion , the optical peak - to - average - power ratio ( papr ) of leds is individually constrained . fixing the average optical power , our scheme is able to achieve a much lower bit - error rate ( ber ) than conventional schemes , especially when the illumination color is more `` unbalanced '' . when cross - talks exist among the multiple optical channels , we apply a singular value decomposition ( svd)-based pre - equalizer and redesign the constellations , and such a scheme is shown to outperform post - equalized schemes based on zero - forcing or linear minimum - mean - squared - error ( lmmse ) principles . to further reduce the system ber , a binary switching algorithm ( bsa ) is employed for the first time for labeling a high dimensional constellation . we thus obtain the optimal bits - to - symbols mapping . * optical wireless communication , constellation design , constellation labeling , multi - color optical , csk , im / dd . *
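the svd - based pre / post - equalization summarized above can be sketched in a few lines of numpy ; the cross - talk matrix with a single leakage parameter is an assumed toy channel , not a measured one .

```python
import numpy as np

# toy cross-talk channel for a single RGB LED: eps models colour-filter leakage
# between the red / green / blue detection branches (an assumed value)
eps = 0.1
H = np.array([[1 - 2 * eps, eps, eps],
              [eps, 1 - 2 * eps, eps],
              [eps, eps, 1 - 2 * eps]])

U, s, Vt = np.linalg.svd(H)
pre, post = Vt.T, U.T          # transmit pre-equalizer and receive post-equalizer

x = np.array([0.5, 1.0, 1.5])             # a symbol designed in the transformed space
y = post @ (H @ (pre @ x))                # noiseless end-to-end response
print(np.allclose(y, np.diag(s) @ x))     # True: the channel reduces to parallel gains
# post = U^T is unitary, so white Gaussian receiver noise keeps the same statistics
```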
we consider a two - dimensional configuration of particles with contacts and polygons . for convenience of notation ,only single digit particle indices are used in this example , so that the notation means the cartesian component of the unit vector from the center of particle to that of particle . + and matrices are shown .arrows represent the normal vectors used to construct the and matrices ( before normalization ) .different arrow colors are for visualization purposes only . ]the convention for ordering of the contacts is demonstrated in eq .[ eq : c ] ( and see also fig .[ fig : m_configuration ] ) : the matrix is used to describe the force balance condition ( eq . 1 in the main text ) and has dimension in the most general case when contact forces have both normal and tangential components .each row is associated with a given particle and each column describes one contact and has non - zero entries corresponding only to the pair of particles and forming that contact .its first rows store the components and the next rows store the components of unit normal vectors and unit tangential vectors ( counter - clockwise orthogonal to ) .the first columns of correspond to the normal directions and the next columns correspond to the tangential directions ( which can also of course be expressed using the normal directions via a simple rotation transformation ) .an example of some of the terms of the matrix for the configuration of fig .[ fig : m_configuration ] is given in eq .[ eq : m ] : the matrix is used to describe the torque balance condition ( see eq . 9 in the main text ) and is of dimensions . again, the row indices correspond to particles and the column indices refer to contacts .the non - zero entries in each column correspond to the radii of particles and forming that contact .an example of some of the terms of the matrix for the configuration of fig .[ fig : m_configuration ] is given in eq .[ eq : t ] : when the external torque is zero , as in our loading protocol using compression , the radii are eliminated from the torque balance equation and the matrix can be further simplified to the form of eq .[ eq : t_alt ] : the matrix ( cf .eq . 7 in the main text ) is used to describe the presence of closed polygons formed by particles in contact and and is of dimensions . hererow indices correspond to polygons and column indices refer to the contacts .non - zero entries in each row describe the unit normal directions joining two particles in contact which are members of a given polygon .the first rows store the components and the next rows store the components of unit vectors .an example for some of the terms of the matrix is given in eq .[ eq : q ] ( and see fig .[ fig : q_configuration ] ) :
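as a complement to the matrix descriptions above , the following numpy sketch shows how the force - balance matrix could be assembled from particle positions and a contact list . the equal - and - opposite sign convention on the two particles of a contact is an assumption consistent with newton's third law ; the text above only fixes which entries are non - zero .

```python
import numpy as np

def force_balance_matrix(pos, contacts):
    """Assemble the 2N x 2C force-balance matrix: the first N rows carry
    x-components, the next N rows y-components; the first C columns hold the
    unit normals n_ij and the last C columns the counter-clockwise tangentials.
    Opposite signs on the two particles of each contact are an assumed convention."""
    N, C = len(pos), len(contacts)
    M = np.zeros((2 * N, 2 * C))
    for c, (i, j) in enumerate(contacts):
        n = pos[j] - pos[i]
        n = n / np.linalg.norm(n)            # unit normal from particle i to particle j
        t = np.array([-n[1], n[0]])          # unit tangential, counter-clockwise to n
        for sign, p in ((+1, i), (-1, j)):
            M[p, c], M[p + N, c] = sign * n[0], sign * n[1]
            M[p, c + C], M[p + N, c + C] = sign * t[0], sign * t[1]
    return M

# two touching disks on the x-axis, one contact: a 4 x 2 matrix
pos = np.array([[0.0, 0.0], [2.0, 0.0]])
print(force_balance_matrix(pos, [(0, 1)]))
```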
the determination of the normal and transverse ( frictional ) inter - particle forces within a granular medium is a long standing , daunting , and yet unresolved problem . we present a new formalism which employs the knowledge of the external forces and the orientations of contacts between particles ( of any given sizes ) , to compute all the inter - particle forces . having solved this problem we exemplify the efficacy of the formalism showing that the force chains in such systems are determined by an expansion in the eigenfunctions of a newly defined operator . in a highly influential paper from 2005 majmudar and behringer wrote : inter - particle forces in granular media form an inhomogeneous distribution of filamentary force chains . understanding such forces and their spatial correlations , specifically in response to forces at the system boundaries , represents a fundamental goal of granular mechanics . the problem is of relevance to civil engineering , geophysics and physics , being important for the understanding of jamming , shear - induced yielding and mechanical response . " a visual example of such force chains in a system of plastic disks is provided in fig . [ mahesh ] . in this letter we present a solution of this goal . to be precise , the problem that we solve is the following : consider a granular medium with known sizes of the granules , for example the 2-dimensional systems analyzed in ref . and shown in fig . [ mahesh ] , of disks of known diameters . given the external forces , denoted below as and the external torques exerted on the granules , and given the angular orientations of the vectors connecting the center of masses of contacting granules ( but not the distance between them ! ) , determine all the inter - particle normal and tangential forces and . the method presented below applies to granular systems in mechanical equilibrium ; the issue of instabilities and abrupt changes in the force chains will be discussed elsewhere . for the sake of clarity and simplicity we will present here the two - dimensional case ; the savvy reader will recognize that the formalism and the solution presented will go smoothly also for the three - dimensional case ( as long as the system is in mechanical equilibrium ) . the full formalism will be presented in a longer publication in due course . the obvious conditions for mechanical equilibrium are that the forces and the torques on each particle have to sum up to zero . the condition of force balance is usefully presented in matrix form using the following notation . denote the ( signed ) amplitudes of the inter - particle forces as a vector , where the amplitudes appear first and then the amplitudes . the vector of and components and is denoted as where all the components are presented in first and then all the components . the vector has entries where is the number of contacts between particles . the vector has entries where is the number of particles , with zero entries for all the particles on which there is no external force . we can then write the force balance condition as where is a matrix . the entries in the matrix contain the directional information , see supplemental material at [ url will be inserted by publisher ] for an example of an matrix . denote the unit vector in the direction of the vector distance between the centers of mass of particles and by , and the tangential vector by orthogonal to . then the entries of display the projections and or and as appropriate . 
we thus guarantee that eq.([m ] ) is equivalent to the mechanical equilibrium condition as is well known , the friction - less granular system in the thermodynamic limit is jammed exactly at the isostatic condition . in the friction - less case is a matrix and as long as one can solve the problem by multiplying eq . ( [ m ] ) by the transpose , getting in this case the matrix has generically exactly three goldstone modes ( two for translation and one for rotation ) , and since the external force vector is orthogonal to the goldstone modes ( otherwise the external forces will translate or rotate the system ) , eq . ( [ mmt ] ) can be inverted with impunity by multiplying by ^{-1}$ ] . in fact even when but the system is small enough to be jammed , this method can be used since there are enough constraints to solve for the forces . this last comment is important for our developments below . the problem becomes under - determined above isostaticity in the frictionless case , when force chains begin to build up that span from one boundary to the other . with friction we anyway have twice as many unknowns and we need to add the constrains of torque balance . the condition of torque balance is on every particle , where is the external torque exerted on the disk . for disks , is in the normal direction , and therefore the torque balance becomes a condition that the sum of tangential forces has to balance the external tangential force . this condition can be added to eq . ( [ m ] ) using a new matrix in the form the order of the extended matrix is , see supplemental material at [ url will be inserted by publisher ] for an example of . above jamming when the number of contacts increases . the matrix is not square , and the matrix which is of size , has at least zero modes . accordingly it can not be inverted and one can conclude that * the conditions of mechanical equilibrium are not sufficient to determine all the forces . * obviously what is missing are additional constraints to remove the host of zero modes . these additional constraints are _ geometrical _ constraints which can be read from those disks which describe connected polygons . in other words , since we know the orientation of each contact in our system , we can determine which granules are stressed in a triangular arrangement , and which in a square or pentagonal etc . , see fig . [ geometry ] . each such arrangement is a constraint on the radius vectors adjoining the centers of mass . for example if particles and are in a triangular arrangement then , with the analogous constraint on squares , pentagons etc . these constraints can be written in a matrix form by denoting the _ amplitudes _ of inter - particle vector distances as where we again arrange the components first and the components second : where the matrix again has entries or as appropriate to represent the vectorial geometric constraints , see supplemental material at [ url will be inserted by publisher ] for an example of . denoting the total number of polygons by the dimension of the matrix is . of course has entries while had entries . note that in generic situations there can be also disks which are not stressed at all . these are referred to as rattlers " . for example in the configuration shown in fig . [ geometry ] there exist 14 rattlers . at this point we specialize the treatment to hookean normal forces with a given force constant . non hookean forces result in a nonlinear theory that can still be solved but much less elegantly . for the present case \ . 
\label{sig}\ ] ] denoting the amplitudes of the vectors as the vector ( again with first the and then the components ) , we can rewrite eq . ( [ q ] ) in the form having this result at hand we can formulate the final problem to be solved . arrange now a new matrix , say g , operating on a vector , with a rhs being a vector , say , made of a stacking of , and , as before with and then components : using these objects our problem is now the dimension of the matrix is and the matrix has the dimension . we can use now the euler characteristic to show that the situation has been returned here to the analog of the invertible matrix when : the euler characteristic in two dimensions requires that where is the number of rattlers " i.e. disks on which there is no force . accordingly we find that consequently , the matrix has no zero eigenmodes . thus the final solution for the forces can be obtained as where is the set of eigenfunctions of associated with eigenvalues . we compared the inter - particle forces obtained from direct numerical simulations ( see below for details ) to those computed from eq . ( [ final ] ) . both normal and tangential forces are of course identical to machine accuracy . we reiterate that we did not need to know the distances between particles . this is important in applying the formalism to experiments since it is very difficult to measure with precision the degree of compression of hard particles like , say , metal balls or sand particles . note also the remarkable fact that we never had to provide the frictional ( tangential ) force law in the formalism to obtain the correct forces ! at this point we can discuss the force chains . by definition these are the large forces in the system that provide the tenuous network that keeps the system rigid . observing eq . ( [ final ] ) we should focus on the eigenfunction of that have the smallest eigenvalues and the largest overlaps with . these can be found and arranged in order of the magnitude of independently of the calculation of . in fig . [ order ] we show the contribution to the total energy , learning that about 20% of the leading eigenfunction are responsible for 90% of the energy . we can therefore hope that the force chains will be determined by the same relatively small number of eigenfunctions . this is not guaranteed ; due to contributions to the forces that oscillate in sign the convergence can be much slower than in the case of the energy where the sum is of positive contributions . in fig . [ chains ] we show in upper left panel the force chains in the configuration of fig . [ geometry ] . in the other panels we show the prediction of the force chains using 100 , 200 and 300 of the ( energy ) leading modes . we learn that with 100 out of the 864 modes the main force chains begin to be visible . with 200 out of the 864 modes the full structure of the force chains is already apparent , although with 300 it is represented even better . since the number of geometric constraints is very large , one can ask whether all the geometric constraints are necessary , as eq . ( [ euler ] ) shows that . the answer is no , we could leave out constraints as long as we have enough conditions to determine the solution . there is the obvious question then why do we have a unique solution when the number of equations is larger than the number of unknowns . the answer to this question lies in the properties of the vector and the matrix which does have many zero modes . 
a condition for the existence of a solution is that is orthogonal to all the zero modes of , as can be easily checked . we have ascertained in our simulations that this condition is always met . in the near future we will present an extension of this formalism to three dimensions and the use of the formalism to study the instabilities of the force networks to changes in the external forces . as a final comment we should note that in fact only _ one _ external force is necessary to determine _ all _ the inter - disk forces . this single external force is necessary to remove the re - scaling freedom that this problem has by definition . * simulations * : for the numerical experiments in 2-dimensions we use disks of two diameters , a ` small ' one with diameter and a ` large ' one with diameter . such disks are put between virtual walls at and . these walls exert external forces on the disks . the external forces are taken as hookean for simplicity . for disks near the wall at we write here denotes the component of the position vector of the center of mass of the disk , and we have a similar equation for the components with replaced by . when two disks , say disk and disk are pressed against each other we define their amount of compression as : where is the actual distance between the centers of mass of the disks and . in our simulations the normal force between the disks acts along the radius vector connecting the centers of mass . we employ a hookean force . to define the tangential force between the disks we consider ( an imaginary ) tangential spring at every contact which is put at rest whenever a contact between the two disks is formed . during the simulation we implement memory such that for each contact we store the signed distance to the initial rest state . for small deviations we require a linear relationship between the displacement and the acting tangential force . this relationship breaks when the magnitude of the tangential force reaches where due to coulomb s law the tangential loading can no longer be stored and is thus dissipated . when this limit is reached the bond breaks and after a slipping event the bond is restored with a the tangential spring being loaded to its full capacity ( equal to the coulomb limit ) . the numerical results reported above were obtained by starting with particles on a rectangular grid ( ratio 1:2 ) with small random deviations in space and no contacts . we implement a large box that contains all the particles . the box acts on the system by exerting a restoring harmonic normal force as described in eq . ( [ external ] ) . the experiment is an iterative process in which we first shrink the containing box infinitesimally ( conserving the ratio ) . the second step is to annul all the forces and torques , to bring the system back to a state of mechanical equilibrium . we therefore annul the forces using a conjugate gradient minimizer acting to minimize the resulting forces and torque on all particles . we iterate these two steps until the system is compressed to the desired state . this work had been supported in part by an ideas " grant stanpas of the erc . we thank deepak dhar for some very useful discussions . we are grateful to edan lerner for reading an early version of the manuscript with very useful remarks . 99 t.s . majmudar and r.p . behringer , contact force measurements and stress - induced anisotropy in granular materials " , nature * 435 * , 1083 ( 2005 ) . s. 
alexander , amorphous solids : their structure , lattice dynamics and elasticity " , phys . rep . * 296 * 65 - 236 ( 1998 ) . m. wyart , s.r.nagel and t. a. witten geometric origin of excess low - frequency vibrational modes in weakly connected amorphous solids " europhys . lett . * 72 * 486492 ( 2005 ) . the existence of rattlers " that are not in close contact with other particles may increase the number of zero modes . to see this note that the matrices and have the same rank , but can not have more than non - zero eigenmodes . therefore must have at least zero modes . r. c. ball and r. blumenfeld , `` stress field in granular systems : loop forces and potential formulation '' , phys . rev . lett . * 88 * , 115505 ( 2002 ) . r. blumenfeld , `` stresses in isostatic granular systems and emergence of force chains '' , phys . rev . lett . * 93 * , 108301 ( 2004 ) . the relevance of the linear forces to laboratory experiments was shown in t.s . majmudar , m. sperl , s. luding and r.p . behringer , jamming transition in granular systems " , phys . rev . lett . * 98 * , 058001 ( 2007 ) and suppl . information . s.v matveev , euler characteristic " , in hazewinkel , michiel , encyclopedia of mathematics , springer , isbn 978 - 1 - 55608 - 010 - 4 , ( 2001 ) .
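a minimal numpy sketch of the final solve described in the letter above : once the stacked matrix g and the right - hand side ( here called B , collecting external forces , torques and geometric constraints ) have been assembled , the force amplitudes follow from the eigen - expansion of the square matrix g^t g , and truncating to the leading modes gives the force - chain reconstructions . ordering the modes by the magnitude of their coefficients is a simplification of the energy ordering used in the text .

```python
import numpy as np

def contact_forces(G, B, n_modes=None):
    """Solve G f = B via the eigen-expansion f = sum_k c_k a_k, with a_k the
    eigenvectors of G^T G and c_k = <a_k, G^T B> / lambda_k.  Keeping only the
    leading modes gives truncated reconstructions (force chains)."""
    lam, vecs = np.linalg.eigh(G.T @ G)       # ascending eigenvalues, no zero modes
    coef = (vecs.T @ (G.T @ B)) / lam
    if n_modes is not None:
        keep = np.argsort(np.abs(coef))[::-1][:n_modes]
        return vecs[:, keep] @ coef[keep]
    return vecs @ coef                         # full solution, = (G^T G)^{-1} G^T B

# tiny consistency check on an assumed random over-determined system
rng = np.random.default_rng(1)
G = rng.normal(size=(12, 8))
f_true = rng.normal(size=8)
B = G @ f_true
print(np.allclose(contact_forces(G, B), f_true))   # True when G^T G is invertible
```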
bayesian networks or graphical models based on directed acyclic graphs ( dags ) are widely used to represent complex causal systems in applications ranging from computational biology to epidemiology , and sociology .a dag entails a set of conditional independence relations through the markov properties .two dags are said to be _ markov equivalent _ if they entail the same conditional independence relations . in general , observational datacan only identify a dag up to markov equivalence . for statistical causal inference it is therefore important to enumerate and describe the set of markov equivalence classes ( mecs ) and their sizes .if the mecs are large in size , then causal inference algorithms that operate in the space of mecs as compared to dags could significantly increase efficiency .however , gaining a full understanding of the causal relationships in a system with a large mec requires many interventional experiments that deliberately and carefully alter one or more components of the system .the purpose of this paper is to recast this important combinatorial and enumerative question from statistics in the language of combinatorial optimization .this new perspective yields complexity results on the problem in general , as well as solutions to the problem in some special cases . the problem of enumerating mecs has been studied from two fundamental perspectives : ( 1 ) enumerate all mecs on nodes ( as in ) , and ( 2 ) enumerate all mecs of a given size ( as in ) . at the heart of these studies is a result of verma and pearl , which states that a mec is determined by the underlying undirected graph ( or _ skeleton _ ) and the placement of immoralities , i.e. induced subgraphs of the form .this characterization leads to a representation of an mec by a graph with directed and undirected edges known as the _ essential graph _ ( or _ cpdag _ or _ maximally oriented graph _ ) . in ,gillespie and perlman use this characterization to identify all mecs on nodes ; namely , they fix a skeleton on nodes , and then count the number of ways to compatibly place immoralities within the skeleton . the works give inclusion - exclusion formulae for mecs of a fixed size by utilizing the combinatorial structure of the essential graph described in .however , since essential graphs can be quite complicated , these formulae are only realizable for relatively constrained classes of mecs. in particular , and only consider mecs of size one , and must fix the undirected edges of the essential graphs to be enumerated .as exhibited by these results , the implementation of combinatorial enumeration techniques appears difficult from perspective ( 2 ) .on the other hand , perspective ( 1 ) has only been considered via computer - enumeration . a common approach to difficult graphical structure enumeration problemsis to specify a type of graph for which to solve the problem .this approach is used in such problems as the enumeration of independent sets , matchings , and colorings . given a graph, it can be useful to consider a refined set of combinatorial statistics each of which plays a role in the enumeration question .for instance , given a graph researchers examine the total number of independent sets ( or the _ fibonacci number _ of ) , the maximum size of an independent set ( or _ independence number _ of ) , and/or the number of independent sets of a fixed size .these refined statistics work together to give a complete understanding of the problem of enumerating independent sets for . 
in the present paper, we initiate the combinatorial enumeration of mecs with respect to a fixed undirected graph and thereby recast this enumeration problem in the language of combinatorial optimization . for a graph amounts to enumerating all possible placements of immoralities within .thus , we are interested in the following combinatorial statistics : 1 . , the total number of mecs on , 2 . , the maximum number of immoralities on , 3 . , the number of ways to place exactly immoralities on , and 4 . , where denotes the number of mecs on of size .the first three statistics fit together naturally in the polynomial presentation in general , computing any or all of these statistics for a given type of graph appears to be difficult . in this paper, we will prove the following theorem in support of this observation .[ thm : np - complete ] given an undirected graph , the problem of computing a dag with skeleton and immoralities is np - hard . here ,we use the notion of np - hardness as defined in ( * ? ? ?* chapter 5 ) . as with most np - hard problems , restricting to special cases can make the problem tractable . in this paper , we will compute some or all of ( 1 ) , ( 2 ) , ( 3 ) , and ( 4 ) for some special types of graphs that are important in both statistical and combinatorial settings . moreover , these special cases can offer useful structural insights on the general problem .for example , it appears that the number and size of equivalence classes is guided by the number of cycles and high degree nodes in the skeleton . in order to test and verify these types of observations ,we develop a computer program for the enumeration of the combinatorial statistics ( 1 ) , ( 2 ) , ( 3 ) , and ( 4 ) that expands on the original program of gillespie and perlman . using this program we can not only verify the observations that high degree nodes and cycles in the skeleton play an important role , but we are also able to make the following interesting observation , indicating the profound role played by the underlying skeleton . [ thm : frequency determinism ] for , every connected graph on nodes has a unique frequency vector .the remainder of this paper is organized as follows . in section [ sec : some first examples ] , we examine some first and fundamental examples including paths , cycles , and the complete bipartite graph .we compute all the desired combinatorial statistics specified by ( 1 ) , ( 2 ) , ( 3 ) , and ( 4 ) for these graphs . the first two examples exhibit an important connection to independent sets and vertex covers . in section [ sec : trees ] , we consider our enumeration question in the special setting of trees . here , we derive results for stars , bistars , complete binary trees , and caterpillar graphs .the former two examples play an important role in bounding the number and size of mecs on tree graphs , and the latter two examples are fundamental to phlyogenetic modeling . following this , we identify bounds on the number of mecs on a given tree that exactly parallel the classically known bounds for independent sets in trees .we also identify tight bounds on the size of a mec on a given tree using properties of the associated essential graphs . 
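for readers who want to experiment with the independent - set statistics used as the guiding analogy here , a small brute - force python / networkx sketch ( suitable only for very small graphs ) is given below .

```python
import itertools
import networkx as nx

def independence_poly_coeffs(G):
    """Brute-force coefficients c_k = number of independent sets of size k in G.
    sum(c) is the 'Fibonacci number' of G and the largest k with c_k > 0 is the
    independence number."""
    nodes = list(G.nodes())
    coeffs = [0] * (len(nodes) + 1)
    for k in range(len(nodes) + 1):
        for S in itertools.combinations(nodes, k):
            if all(not G.has_edge(u, v) for u, v in itertools.combinations(S, 2)):
                coeffs[k] += 1
    return coeffs

print(independence_poly_coeffs(nx.path_graph(4)))   # [1, 4, 3, 0, 0]; 8 sets in total
```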
in section [ sec : immorality numbers and star decompositions ] , we prove theorem [ thm : np - complete ] via a reduction of the minimum vertex cover problem .to do so , we prove a correspondence between minimum vertex covers of a given triangle - free graph and minimum decompositions of into non - overlapping stars , which we call _ minimum star decompositions_. using this correspondence , we can compute the number for triangle - free graphs whose minimum star decompositions are isomorphic as forests .we apply this result to recover for the complete bipartite graph and some special types of circulant graphs . in section [ sec : computational analysis ] , we describe our computer program for the computation of the statistics ( 1 ) , ( 2 ) ( 3 ) , and ( 4 ) .this program collects a variety of data on markov equivalence classes and the skeleton of each class for all connected graphs on nodes and for triangle - free graphs on nodes .in particular , we compare class size and the number of mecs per skeleton to skeletal features including average degree , max degree , clustering coefficient , and the ratio of number of immoralities in the essential graph of the mec to the number of induced -paths in the skeleton .finally , we see that this program validates theorem [ thm : frequency determinism ] , and we also use it to address the analogous result in the case of unconnected graphs . since this work draws heavily on different concepts from two different fields , statistics and combinatorics , we provide an extensive review of the required concepts and definitions from both fields in the appendix .in this section , we provide some first examples for which we can compute all of the desired combinatorial statistics ( 1 ) , ( 2 ) ( 3 ) , and ( 4 ) .the first two examples are the path and cycle on nodes . using some well - known results on the independent sets within these graphs, we can quickly obtain the desired numbers .the third example presented in this section is the graph .unlike the path and cycle , requires a more detailed analysis . to compute the polynomial and the vector for paths and cycles ,we will use the notion of independent sets .we refer the unfamiliar reader to section [ subsec : graphs ] for all the necessary definitions . in this section, we will use two well - studied combinatorial sequences , and their associated polynomial filtrations . recall that the fibonacci number is defined by the recursion the _ fibonacci polynomial _ is defined by and it has the properties that for all and for all .analogously , the lucas number is given by the fibonacci - like recursion the _ lucas polynomial _ is given by it is a well - known result that the independence polynomial of the path of length , which we denote by , is equal to the fibonacci polynomial and the independence polynomial of the -cycle is given by the lucas polynomial ; that is to say , with these facts in hand we prove the following theorem .[ thm : path and cycle polynomials ] for the path and the cycle on nodes we have that in particular , the number of mecs on and , respectively , is and the maximum number of immoralities is the result follows from a simple combinatorial bijection . since paths and cycles are the graphs with the property that the degree of any vertex is at most two , then the possible locations of immoralities are exactly the degree two nodes .that is , the unique head node in an immorality must be a degree two node . 
in the path , this corresponds to all non - leaf vertices , and for the cycle this is all the vertices of the graph .notice then that no two adjacent degree two nodes can simultaneously be the unique head node of an immorality , since this would require one arrow to be bidirected .thus , a viable placement of immoralities corresponds to a choice of any subset of degree two nodes that are mutually non - adjacent , i.e. that form an independent set .conversely , given any independent set in , a dag can be constructed by placing the head node of an immorality at each element of the set and directing all other arrows in one direction .similarly , this works for any nonempty independent set in .( notice that any mec on the cycle must have at least one immorality since all dags have at least one sink node . ) the resulting formulas are then which completes the proof . it remains to compute the vectors and and the maximum number of immoralities and . the formulae for these combinatorial statistics follow naturally from the description of the placement of immoralities given in theorem [ thm : path and cycle polynomials ] .[ thm : path vectors ] the number of mecs of size with skeleton is the number of compositions of into parts that satisfy as varies from let be a dag with skeleton . we denote the markov equivalence class of by ] correspond to the nodes in an independent -subset ] is a dag with skeleton that has no immoralities on these paths , then each path contains a unique sink .each independent -subset yields a distinct forest of paths on \backslash\mathcal{i} ] denotes the partitions of with parts with largest part at most .since is a graph in which every node is degree 2 , then each mec of containing immoralities corresponds to an independent -subset of :=\{1,2 , \dots , p\} ] and are collectively referred to as the _ spine _ of .this labeling of is depicted on the left in figure [ fig : k - two - p ] .first , it is easy to see that the maximum number of immoralities is given by orienting the edges such that all edge heads are at the nodes and .this results in .next , we compute a closed form formula for the number of mecs for .[ thm : k_2,p number of mecs ] the number of mecs with skeleton is to arrive at the desired formula , we divide the problem into three cases : 1 .the number of immoralities at node is .the number of immoralities at node is strictly between and .3 . there are no immoralities at node .notice that cases ( 1 ) and ( 2 ) have a natural interpretation via the indegree at node of the essential graph of the corresponding mecs .readers unfamiliar with the theory of essential graphs can find the basics in section [ subsec : essential graphs ] .if the indegree at is two or more , all edges adjacent to are essential , and the number of immoralities at node is given by its indegree .thus , we can rephrase cases ( 1 ) and ( 2 ) as follows :the indegree of node in the essential graph of the mec is .the indegree of node in the essential graph of the mec is . in case( 1 ) , the mec is determined exactly by the mec on the star with center node and edges .one can easily check ( this is also proven as part of theorem [ thm : stars ] in the following section ) that this yields mecs .case ( 2 ) is more subtle .first , assume that the indegree at node is , and the arrows with head have the tails ] along the spine , but some may occur at the nodes \backslash[k] ] that are the heads of immoralities by for , then the nodes are tails of the arrows adjacent to node . 
thus , if the number of immoralities with heads in \backslash[k] ] yields a single mec .figure [ fig : k - two - p ] depicts an example of one such choice of immoralities .we start by selecting the arrows to form immoralities at node which forces the remaining arrows at to point towards the spine .we then select some of these to form immoralities at the spine , and this forces the remaining arrows to be directed inwards towards .however , if , the star induced by nodes determines the mecs .this yields classes ( see again theorem [ thm : stars ] ) . in total , for case ( 2 ) the number of mecs is in case ( 3 ) , we consider when there are no immoralities at node , and we count via placement of immoralities along the spine .there are ways to place immoralities along the spine , one for each subset of ] .since there are no immoralities with head in the set \backslash[k] ] recall that the _ ( global ) clustering coefficient _ of a graph is defined as the ratio of the number of triangles in to the number of connected triples of vertices in .the clustering coefficient serves as a measure of how much the nodes in cluster together . figure [ fig : clustering coefficient all graphs ] presents two plots : one compares the clustering coefficient to the log average class size and the other compares it to the average number of mecs .this data is taken over all connected graphs on nodes with edges ( to achieve a large number of mecs ) .as we can see , the average class size grows as the clustering coefficient increases .this is to be expected , since an increase in the number of triangles within the dag should correspond to an increase in the size of the chordal components of the essential graph .on the other hand , the average number of mecs decreases with respect to the clustering coefficient , which is to be expected given that the class sizes are increasing .this decrease in the average number of mecs empirically captures the intuition that having many triangles in a graph results in fewer induced -paths , which represent the possible choices for distinct mecs with the same skeleton .{average - degree - and - log - average - class - size.pdf } & \includegraphics[width=0.48\textwidth]{average - degree - and - average - number - of - mecs.pdf } \\\end{array } ] the left - most plot in figure [ fig : max degree vs log average class size ] depicts the relationship between the maximum degree of a node in a skeleton and the average class size on the skeleton for all connected graphs and for triangle - free graphs on at most nodes . for all graphs, the relationship appears to be almost linear beginning with maximum degree , suggesting that average class size grows linearly with the maximum degree of the underlying skeleton .this growth in class size is due to the introduction of many triangles as the maximum degree grows . on the other hand , in the triangle - free settingwe actually see a decrease in average class size as the maximum degree grows , which empirically reinforces this intuition . the right - most plot in figure [ fig : max degree vs log average class size ] records the relationship between the maximum degree of a node in a skeleton and the average number of mecs supported by that skeleton for all connected graphs and triangle - free graphs on at most nodes . for all graphs , we see that the average number of mecs grows with the maximum degree of the graphs , and this growth is approximately exponential . 
in the triangle - free setting, the average number of mecs appears to be unimodal , but would be increasing if we considered also all graphs on . for triangle - free graphsthere is only one graph with maximum degree 9 , namely the star , where the number of mecs is . for connected graphs the average number of mecsis pushed up by those cases consisting of a complete bipartite graph where in addition one node is connected to all other nodes .-paths in the essential graph.,scaledwidth=46.0% ] the final plot of interest is in figure [ fig : class size versus immorality ratio ] , and it shows the relationship between markov equivalence class size and the ratio of the number of immoralities in the essential graph to the number of induced -paths in the skeleton for all connected graphs and triangle - free graphs on nodes .that is , it shows the relationship between the class size and how many of the potential immoralities presented by the skeleton are used by the class .it is interesting to note that , in the triangle - free setting , as the class size grows , this ratio appears to approach , suggesting that most large mecs use about a third of the possible immoralities in triangle - free graphs . in the connected graph setting , as the class size grows, we see a steady decrease in the value of this ratio .this supports the intuition that a larger class size corresponds to an essential graph with large chordal components and few immoralities .we wish to thank brendan mckay for some helpful advice in the use of the programs nauty and traces .liam solus was partially supported by an nsf mathematical sciences postdoctoral research fellowship ( dms - 1606407 ) .caroline uhler was partially supported by darpa ( darpa - sn-16 - 37 ) and onr ( n00014 - 16-s - ba10 ) .an _ ( undirected ) graph _ is a pair of sets in which is some _ node _ set and is the _ edge _ set , where an _ edge _ is taken to be an unordered pair of nodes for .we say that two nodes are _ adjacent _ in if , or equivalently , we say that is a _ neighbor _ of and vice versa .the _ neighborhood _ of a node in a graph is the set of all neighbors of in including the node itself .a node is said to be _ incident _ to an edge if is one of the two defining nodes of .the _ degree _ of a node in a graph is the number of edges incident to in , and it is denoted by . when the graph is understood , we often write .a node is called a _ leaf _ of when .a _ path _ in an undirected graph is an alternating sequence of nodes and edges in which for all ] , and .a graph is called _ connected _ if there exists a path in between every pair of nodes in .an undirected graph is called a _ forest _ if it contains no cycles .a _ tree _ is a connected forest .a graph is _ directed _ if the edge set is a set of _ arrows _ , where an _ arrow _ is defined to be any ordered pair of nodes , denoted or for . in an arrow the node is referred to as the _ tail _ of the arrow and the node is referred to as the _head_. a _ directed cycle _ in a directed graph is an alternating sequence of nodes and arrows in which for all ] .if the path includes no arrows , it is called _ undirected _ , and if it includes at least one arrow , it is called _directed_. 
a _ cycle _ in an undirected graph is an alternating sequence of nodes and edges / arrows in which ( or ) for all ] for which \} ] for which \}\cup\{\{1,n\}\} ] for which \} ] .there is a well - known characterization of when two dags are markov equivalent , and it is given in terms of their skeleta and their set of immoralities .immorality _ in a dag is a triple of node for which contains the arrows and but does not contain either of the arrows or .let be a dag and let ] .that is , is an essential arrow of if and only if is an arrow of for all ] .the _ essential graph _ of the class ] and whose set of edges is those edges in the skeleton of that support non - essential edges in .we denote the essential graph of ] denote the set of partitions of with parts , and we let ] , then and so is uniquely identified with the integer vector . in this case , we simply write $ ] .a wealth of enumerative combinatorics , including a complete treatment of compositions and integer partitions , can be found in .m. r. garey and d. s. johnson . _ computers and intractability : a guide to the theory of np - completeness_. a series of books in the mathematical sciences .wh freeman and company , new york , ny 25.27 ( 1979 ) : 141 .s. b. gillispie and m. d. perlman . _ enumerating markov equivalence classes of acyclic digraph models ._ proceedings of the seventeenth conference on uncertainty in artificial intelligence .morgan kaufmann publishers inc . , 2001 .t. verma and j. pearl ._ an algorithm for deciding if a set of observed independencies has a causal explanation_. proceedings of the eighth international conference on uncertainty in artificial intelligence .morgan kaufmann publishers inc .
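the characterization of markov equivalence used throughout (identical skeleta and identical sets of immoralities) is easy to check mechanically. the python sketch below is purely illustrative; the arrow-set encoding and the function names are assumptions, not notation from the paper.

```python
from itertools import combinations

def skeleton(arrows):
    """Underlying undirected graph of a DAG given as a set of arrows (i, j) meaning i -> j."""
    return {frozenset(a) for a in arrows}

def immoralities(arrows):
    """Triples i -> k <- j with i and j non-adjacent, in canonical form (min, k, max)."""
    skel = skeleton(arrows)
    parents = {}
    for i, j in arrows:
        parents.setdefault(j, set()).add(i)
    return {(min(i, j), k, max(i, j))
            for k, ps in parents.items()
            for i, j in combinations(ps, 2)
            if frozenset((i, j)) not in skel}

def markov_equivalent(d1, d2):
    """Verma-Pearl criterion: same skeleton and same set of immoralities."""
    return skeleton(d1) == skeleton(d2) and immoralities(d1) == immoralities(d2)

# 1 -> 2 -> 3 and 1 <- 2 <- 3 lie in the same class; 1 -> 2 <- 3 does not,
# because the collider at node 2 is an immorality
print(markov_equivalent({(1, 2), (2, 3)}, {(3, 2), (2, 1)}))  # True
print(markov_equivalent({(1, 2), (3, 2)}, {(1, 2), (2, 3)}))  # False
```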
bayesian networks or directed graphical models are widely used to represent complex causal systems . from observational data alone , a bayesian network can only be identified up to _ markov equivalence _ and interventional experiments are required to distinguish between markov equivalent bayesian networks . it is therefore important to describe and understand the set of markov equivalence classes ( mecs ) and their sizes . in this paper , we initiate the combinatorial enumeration of mecs on a fixed undirected graph , and thereby recast this important statistical problem into the language of combinatorial optimization . combinatorially , two bayesian networks are markov equivalent if their underlying undirected graphs and their set of immoralities are the same . we show that the np - complete minimum vertex cover problem reduces to the computation of a bayesian network with the maximum number of immoralities . the np - hardness of this problem shows the complexity of enumerating mecs from the perspective of combinatorial optimization . we then solve this enumeration problem for classically studied families of graphs including paths , cycles , and some types of complete bipartite graphs , trees , and circulants . we also provide tight bounds on the number and size of mecs on a given tree . these bounds exactly parallel the well - known bounds for the independent set problem . finally , a computer program was written to analyze the enumeration problem on all graphs with at most 10 nodes and all triangle - free graphs with at most nodes . the results show a trade - off between degree and clustering coefficient with respect to the number of mecs and their sizes . in addition , this program established that the frequency vector consisting of the number of mecs of each size uniquely determines the undirected graph .
helical structures in the solar features like sunspot whirls were reported long back by george e. hale in 1925 .he found that about 80% of the sunspot whirls were counterclockwise in the northern hemisphere and clockwise in the southern hemisphere .later , in 1941 the result was confirmed by richardson by extending the investigation over four solar cycles .this hemispheric pattern was found to be independent of the solar cycle . since the 90 s, the subject has been rejuvenated and this hemispheric behaviour independent of sunspot cycle is claimed to be observed for many of the solar features like active regions , filaments , coronal loops , interplanetary magnetic clouds ( imcs ) , coronal x - ray arcades and network magnetic fields etc .helicity is a physical quantity that measures the degree of linkages and twistedness in the field lines .magnetic helicity h is given by a volume integral over the scalar product of the magnetic field * b * and its vector potential * a * . with * * b**= * a*. it is well known that the vector potential * a * is not unique , thereby preventing the calculation of a unique value for the magnetic helicity from the equation ( 1 ) . pointed out that the helicity of magnetic field can best be characterized by the force - free parameter , also known as the helicity parameter or twist parameter .the force - free condition is given as , alpha is a measure of degree of twist per unit axial length ( see appendix - a for details of physical meaning of alpha ) .this is a local parameter which can vary across the field but is constant along the field lines .researchers have claimed to have determined the sign of magnetic helicity on the photosphere by calculating alpha , e.g. , averaged alpha e.g. = with current density .some authors have used current helicity density and .a good correlation was found between and by and .but the sign of magnetic helicity can not be inferred from the force - free parameter under all conditions ( see appendix b ) .it is well known that the reliable measurements of vector magnetic fields are needed to study various important parameters like electric currents in the active regions , magnetic energy dissipation during flares , field geometry of sunspots , magnetic twist etc .the study of error propagation from polarization measurements to vector field parameters is very important . studied the effects of realistic errors e.g. , due to random polarization noise , crosstalk between different polarization signals , systematic polarization bias and seeing induced crosstalk etc . on known magnetic fields .they derived analytical expressions for how these errors produce errors in the estimation of magnetic energy ( calculated from virial theorem ) .however , they simulated these effects for magnetographs which sample polarization at few fixed wavelength positions in line wings .it is well known that such observations lead to systematic under - estimation of field strength and also suffer from magneto - optical effects .whereas in our analysis , we simulate the effect of polarimetric noise on field parameters as deduced by full stokes inversion .the details are discussed in the section 6 . found large variations in the global values from repeated observations of the same active regions .it is important to model the measurement uncertainties before looking for physical explanations for such a scatter . 
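the displayed equations referred to above as (1) and (2) did not survive extraction; in standard notation they are the volume-integral definition of magnetic helicity and the linear force-free condition, reproduced here as a reconstruction from the surrounding text:

\[ H \;=\; \int_{V} \mathbf{A}\cdot\mathbf{B}\; dV , \qquad \mathbf{B} \;=\; \nabla\times\mathbf{A} , \qquad (1) \]

\[ \nabla\times\mathbf{B} \;=\; \alpha\,\mathbf{B} . \qquad (2) \]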
in a study by the noise levels in the observed fieldswere analyzed , but a quantitative relationship between the uncertainties in fields and the uncertainties in global value were not established .they could only determine the extent to which the incremental introduction of noise affects the observed value of alpha .however , for the proper tracking of error propagation , we need to start with ideal data devoid of noise and with known values of and magnetic energy .we follow the latter approach in our present analysis . here , we estimate the accuracy in the calculation of the parameter and the magnetic energy due to different noise levels in the spectro - polarimetric profiles .modern instruments measure the full stokes polarization parameters within the line profile .basically there are two types of spectro - polarimeters : ( i ) spectrograph based e.g. , advanced stokes polarimeter ( asp : ) , zurich imaging polarimeter ( zimpol : ) , themis - mtr ( ) , solis - vector spectro - magnetograph ( vsm : ) , polarimetric littrow spectrograph ( polis : ) , diffraction limited spectro - polarimeter ( dlsp : ) , hinode ( sot / sp : ) , etc . and ( ii ) filter - based e.g. , imaging vector magnetograph ( ivm ) at mees solar observatory , hawaii , solar vector magnetograph at udaipur solar observatory ( svm - uso) etc .earlier magnetographs like crimea , msfc , hsp , oao , hsos , potsdam vector magnetograph , sft etc . were mostly based on polarization measurements at a few wavelength positions in the line wings and hence subject to zeeman saturation effects as well as magneto - optical effects like faraday rotation .the magnetic field vector deduced from stokes profiles by modern techniques are almost free from such effects .this paper serves three purposes .first , we estimate the error in the calculation of field strength , inclination and azimuth and thereafter in the calculation of the vector field components , and .second , we estimate the error in the determination of global due to noise in polarimetric profiles constructed from the analytical vector field data .third , we estimate the error in the calculation of magnetic energy derived using virial theorem , due to polarimetric noise . in the next section ( section 2 )we discuss a direct method for calculation of a single global for an active region . in section 3 , we describe the method of simulating an analytical bipole field .section 4 contains the analysis and the results .error estimation in global is given in section 5 . in section 6we discuss the process of estimating the error in the virial magnetic energy .section 7 deals with discussion and conclusions .taking the z - component of magnetic field , from the force - free field equation ( 2 ) can be written as , for a least squares minimization , we should have where is the local value at each pixel , is the global value of for the complete active region and n is total number of pixels .+ since eqn.(4 ) will lead to singularities at the neutral lines where b approaches 0 , therefore the next moment of minimization , should be used . 
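the displayed equations of this derivation were also lost; a reconstruction consistent with the surrounding text (so the numbering (3)-(5) is only approximate) is as follows. the local force-free parameter at each pixel follows from the z-component of equation (2),

\[ \alpha_i \;=\; \frac{(\nabla\times\mathbf{B})_{z,i}}{B_{z,i}} \;=\; \frac{1}{B_{z,i}}\left(\frac{\partial B_y}{\partial x}-\frac{\partial B_x}{\partial y}\right)_i , \qquad (3) \]

the first moment of minimization is

\[ \sum_{i=1}^{N}\left(\alpha_i-\alpha_g\right)^{2} \;\rightarrow\; \min , \qquad (4) \]

and the second moment, weighted by the square of the vertical field to avoid the neutral-line singularities, is

\[ \sum_{i=1}^{N}\left(\alpha_i-\alpha_g\right)^{2} B_{z,i}^{2} \;\rightarrow\; \min , \qquad (5) \]

whose minimizer is \( \alpha_g = \sum_i \alpha_i B_{z,i}^{2} \big/ \sum_i B_{z,i}^{2} = \sum_i (\nabla\times\mathbf{B})_{z,i}\, B_{z,i} \big/ \sum_i B_{z,i}^{2} \).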
from eqn.(5 ) we have which leads to the following result , this formula gives a single global value of in a sunspot and is the same as of .we prefer this direct way of obtaining global which is different from the method discussed in for determining .the main differences are : ( 1 ) .the singularities at neutral line are automatically avoided in our method by using the second moment of minimization and ( 2 ) .the computation of constant force - free fields for different test values of is not required . used a different parameter to avoid the effect of faraday rotation in sunspot umbrae .however , modern inversion techniques using complete stokes profiles are free of this problem .it must be noted that one can generate different values of using higher moments of minimization , e.g. , by weighting with , with n=3 , 5 , 7 , ... etc .the higher moments will be more sensitive to spatial variation of .such large and complex variation of is found generally in flare productive active regions .thus we can try to use higher order as a global index for predicting the flare productivity in active regions .finally , to compute we need all the three components of magnetic field which is obtained from the measurements of vector magnetograms .however , here we use the analytically generated bipole , as discussed in the following section , with known values of all the magnetic parameters to investigate the effect of polarimetric noise .we use the analytic , non - potential force - free fields of the form derived by .these fields describe an isolated bipolar magnetic region which is obtained by introducing currents into a potential field structure .this potential field is produced by an infinite straight line current running along the intersection of the planes y = 0 and z = -a , where negative sign denotes planes below the photosphere z = 0 . at the photosphere ( z = 0 ), the field has the following form : where is the magnitude of the field at origin and .the function is a free generating function related to the force - free parameter ( see eqn ( 2 ) ) by which determines the current structure and hence the amount and location of shear present in the region .by choosing we can obtain a simple potential ( current - free , ) field produced by the infinite line current lying outside the domain .steeper gradient of results in a more sheared ( non - potential ) field . in equation( 11 ) the sign on the right hand side is taken positive in the paper by low ( 1982 ) which is a typing mistake ( confirmed by b. c. low , private communication ) .we mention this here to avoid carrying forward of this typo as was done in . a grid of 100 x 100 pixels was selected for calculating the field components .the magnitude of field strength at origin has been taken as 1000 g and the value of ` a ' is taken as 15 pixels ( below the photosphere , z = 0 ) .the simulated field components with corresponding contours are shown in the figure 1 .here we use the following function ( e.g. , ) for the generation of the field components ( b , b , b ) : results for the fields generated by different are quantitatively similar . in this way we generate a set of vector fields with known values of . 
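as a concrete illustration of the global value reconstructed above, the python sketch below evaluates it from gridded field components by finite differences; the function name, the grid-spacing handling and the omission of physical unit factors (such as mu_0 or c/4 pi) are simplifying assumptions.

```python
import numpy as np

def global_alpha(bx, by, bz, dx=1.0, dy=1.0):
    """Second-moment estimate alpha_g = sum(Jz * Bz) / sum(Bz**2).

    bx, by, bz: 2-d arrays indexed as [y, x]; Jz is approximated by
    dBy/dx - dBx/dy with centred finite differences (unit factors omitted).
    """
    jz = np.gradient(by, dx, axis=1) - np.gradient(bx, dy, axis=0)
    return np.sum(jz * bz) / np.sum(bz ** 2)
```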
most of the time one of the bipoles of a sunspot observed on the sun is compact ( leading ) and the other ( following ) is comparatively diffuse .observations of compact pole gives half of the total flux of the sunspot and is mostly used for analysis .one can derive the twist present in the sunspot using one compact pole of the bipolar sunspot for constant .thus we have selected a single polarity of the analytical bipole as shown in figure [ fig1 ] to calculate the twist .fine structure in real sunspots is difficult to model .our analysis applies to the large scale patterns of the magnetic field regardless of fine structure .all the following sections discuss the analysis and results obtained .using the analytical bipole method the non - potential force - free field components b , b & b in a plane have been generated and are given as in equations ( 8) , ( 9 ) & ( 10 ) .we have shown b , b , & b maps ( generated on a grid of 100 x 100 pixels ) in figure [ fig1 ] . from these componentswe have derived magnetic field strength ( b ) , inclination ( ) and azimuth ( : free from 180 ambiguity ) . in order to simulate the effect of typical polarimetric noise in actual solar observations on magnetic field measurements and study the error in the calculation of and magnetic energy , we have generated the synthetic stokes profiles for each b , and in a grid of 100 x 100 pixels , using the he - line information extractor `` helix '' code .this code is a stokes inversion code based on fitting the observed stokes profiles with synthetic ones obtained by unno - rachkovsky solutions to the polarized radiative transfer equations ( rte ) under the assumption of milne - eddington ( me ) atmosphere and local thermodynamical equilibrium ( lte ) .however , one can also use this code for generating synthetic stokes profiles for an input model atmosphere .the synthetic profiles are functions of magnetic field strength ( b ) , inclination ( ) , azimuth ( ) , line of sight velocity ( ) , doppler width ( ) , damping constant ( ) , ratio of the center to continuum opacity ( ) , slope of the source function ( ) and the source function ( ) at = 0 .the filling factor is taken as unity . 
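the way polarimetric noise enters the experiment described in the following paragraphs can be sketched in a few lines; the array layout, the function name and the use of a single gaussian noise level per profile are assumptions.

```python
import numpy as np

def add_noise(stokes, noise_level, rng=np.random.default_rng()):
    """Add normally distributed random noise to synthetic Stokes profiles.

    stokes: array of shape (4, n_wavelengths) holding I, Q, U, V normalised
    to the continuum intensity Ic; noise_level is the standard deviation of
    the noise in units of Ic (e.g. 0.005 for a 0.5% noise level).
    """
    return stokes + rng.normal(0.0, noise_level, size=stokes.shape)
```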
in our profile synthesisonly magnetic field parameters b , , are varied while other model parameters are kept same for all pixels .the typical values of other thermodynamical parameters are given in table [ tbl-1 ] .we use the same parameters for all pixels .further , all the physical parameters at each pixel are taken to be constant in the line forming region .however , one must remember that real solar observations have often stokes v area asymmetries as a result of vertical magnetic and velocity field gradients present in the line forming region .this has not been taken into account in our simulations .a set of stokes profiles with 0.5% and 2.0% noise for a pixel is shown in figure [ fig2 ] .+ the wavelength grid used for generating synthetic spectral profiles is same as that of hinode ( sot / sp ) data which are as follows : start wavelength of 6300.89 , spectral sampling 21.5 m / pixel , and 112 spectral samples .we add normally distributed random noise of different levels in the synthetic stokes profiles .typical noise levels in stokes profiles obtained by hinode ( sot / sp ) normal mode scan are of the order of 10 of the continuum intensity , i .we add random noise of 0.5 % of the continuum intensity i to the polarimetric profiles .in addition , we also study the effect of adding a noise of 2.0% level to stokes profiles as a worst case scenario .we add 100 realizations of the noise of the orders mentioned above to each pixel and invert the corresponding 100 noisy profiles using the `` helix '' code .the guess parameters to initialize the inversion are generated by perturbing known values of b , and by 10% .thus after inverting 100 times we get 100 sets of b , & maps for the input b , & values from bipole data . in this way we estimate the spread in the derived field values for various field strengths , inclinations etc .first , the inversion is done without adding any noise in the profiles to check the accuracy of inversion process .we get the results retrieved in this process which are very similar to that of the initial analytical ones .the scatter plot of input field strength , inclination , azimuth against the corresponding retrieved strength , inclination , azimuth after noise addition and inversion is shown in figure [ fig3 ] ( upper panel ) .typical b , b & b maps with different noise levels are shown in the lower panel . as the noise increases b , b & b become more grainy . from the plots shown in figure [ fig3 ]we can see that the error in the field strength for a given noise level decreases for strong fields .this is similar to results of .as the noise increases in the profiles , error in deriving the field strength increases .we find that the error in the field strength determination is for 0.5% noise and for 2% noise in the profiles .inclination shows more noise near 0 & 180 than at . the error is less even for large noisy profiles for the inclination angles between .the reason for this may be understood in the following way .linear polarization is weaker near 0 and 180 inclinations and is therefore more affected by the noise .the azimuth determination has inherent 180 ambiguity due to insensitivity of zeeman effect to orientation of transverse fields .thus in order to compare the input and output azimuths we resolve this ambiguity in by comparing it with i.e. 
, the value of which makes acute angle with has been taken as correct .we can see azimuth values after resolving the ambiguity in this way show good correlation with input azimuth values .some scatter is due to the points where ambiguity was not resolved due to 90 difference in and .first , the was calculated from the vector field components derived from the noise free profiles to verify the method of calculating global alpha and also the inversion process .we have used the single polarity to calculate global alpha present in sunspot as discussed in section 3 .we retrieved the same value of as calculated using the initial analytical field components . from the figure [ fig4 ] .we can see that the effect of noise on the field components is not much for the case of 0.5% noise but as the noise in the profiles is increased to 2.0% , the field components specially transverse fields show more uncertainty .the vertical field is comparatively less affected with noise . the scatter plot in figure [ fig4 ] .shows that the inversion gives good correlation to the actual field values .the points with large scatter are due to poor `` signal to noise '' ratio in the simulated profiles .the mean percentage error in the further discussions is given in terms of weighted average of error .we calculate the percentage error in global alpha each time after getting the inverted results , for both the cases when 0.5% and 2.0% ( of i ) noise is added in the profiles , by the following relation : where is calculated global alpha and is the analytical global alpha .the histogram of the results obtained is shown in figure [ fig5 ] .first , we inverted the profiles without adding any noise and calculated from retrieved results to compare it with the ` true ' calculated from the analytically generated vector field components .we get less than 0.002% difference in the both values . for the case of 0.5% noise in polarimetric profiles we get a mean error of 0.3% in the calculation of and erroris never more than 1% .thus the calculation of is almost free from the effect of noise in this case .hence , by using data from hinode ( sot / sp ) , one can derive the accurate value of twist present in a sunspot .if 2.0% noise is present in the polarization , then maximum error is obtained .weighted average shows only 1% error .thus the estimation of alpha is not influenced very much even from the data obtained with old and ground based magnetographs . 
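the acute-angle rule used above to remove the 180 degree azimuth ambiguity against the known input azimuth can be written compactly; this sketch works on angles in degrees and is an illustration only (the function name and the vectorised form are assumptions).

```python
import numpy as np

def resolve_180(azimuth_out, azimuth_in):
    """Return azimuth_out or azimuth_out + 180 deg, whichever makes the
    smaller (acute) angle with the reference azimuth_in; angles in degrees."""
    def sep(a, b):
        # angular separation mapped to [0, 180]
        return np.abs((a - b + 180.0) % 360.0 - 180.0)
    flipped = (azimuth_out + 180.0) % 360.0
    return np.where(sep(azimuth_out, azimuth_in) <= sep(flipped, azimuth_in),
                    azimuth_out, flipped)
```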
in any event it is unlikely that a realistic error will be large enough to create a change in the sign of .the magnetic energy has been calculated using virial theorem .one form of the general virial theorem states that for a force - free magnetic field , the magnetic energy contained in a volume v is given by a surface integral over the boundary surface s , \cdot { \bf \hat{n}}\ \ ds\ ] ] where is the position vector relative to an arbitrary origin , and n is the normal vector at surface .let us adopt cartesian coordinates , taking as z=0 plane for photosphere .this assumption is reasonable because the size of sunspots are very small compared to the radius of the sun .if we make the further reasonable assumption that the magnetic field strength decreases with distance more rapidly than r whereas a point dipole field falls off as r , then the equation ( 15 ) can be simplified to where x and y are the horizontal spatial coordinates .b , b & b are the vector magnetic field components .this equation ( 16 ) is referred as the `` magnetic virial theorem '' .thus magnetic energy of an active region can be calculated simply by substituting the derived vector field components into the surface integral of equation ( 16 ) .magnetic field should be solenoidal and force - free as is the case for our analytical field .so the energy integral is independent of choice of the origin .if all the above conditions are satisfied then the remaining source of uncertainty in the magnetic energy estimation is the errors in the vector field measurements themselves .so , before the virial theorem can be meaningfully applied to the sun , it is necessary first to understand how the errors in the vector field measurements produce errors in the calculated magnetic energies .earlier , the efforts were made to estimate the errors for magnetographs like marshall space flight center ( msfc ) magnetograph . constructed a potential field from msfc data and computed its virial magnetic energy .then , they modified the vector field components by introducing random errors in b , b and b and recomputed the energy .they found the two energies differ by 11% . approached the problem differently .they introduced errors in the polarization measurements from which the field is derived instead of introducing errors to magnetic fields directly . in this way they were able to approximate reality , more closely and were able to include certain type of errors such as crosstalk which were beyond the scope of the treatment by .they found that the energy uncertainties are likely to exceed 20% for the observations made with the vector magnetographs present at that time ( e.g. msfc ) . here, our approach is very similar to that of except that we consider full stokes profile measurements to derive the magnetic fields like in the most of the recent vector magnetographs e.g. , hinode ( sot / sp ) , svm - uso etc . 
as mentioned earlier .we begin with an analytical field , determine polarization signal as explained in earlier parts , introduce the random noise of certain known levels ( 0.5% & 2.0% of i ) in the polarization profiles , infer an ` observed ' magnetic field after doing the inversion of the noisy profiles , compute an ` observed ' magnetic energy from the ` observed ' field and then compare this energy with the energy of the ` true ' magnetic field .the percentage error is calculated from the following expression : where e is ` observed ' energy and e is ` true ' energy .all the above processes have been described in detail in section 4 .figure [ fig6 ] shows the uncertainty estimated in the calculation of magnetic energy in two cases when error in the polarimetric profiles is 0.5% and 2.0% of i .needless to say , we first checked the procedure by calculating the magnetic energy from the vector fields derived from inverted results with no noise in the profiles .we found the same energy as calculated from the initial analytical fields .we can see that the magnetic energy can be calculated with a very good accuracy when less noise is present in the polarization as is observed in the modern telescopes like hinode ( sot / sp ) for which very small ( of the order of 10 of i ) noise is expected in profiles .we find that a mean of 0.5% and maximum up to 2% error is possible in the calculation of magnetic energy with such data .so , the magnetic energy calculated from the hinode data will be very accurate provided the force - free field condition is satisfied .the error in the determination of magnetic energy increases for larger levels of noise . in the case of high noise in profiles( e.g. 2.0% of i ) the energy estimation is very much vulnerable to the inaccuracies of the field values .we replaced the inverted value of the field parameters with the analytical value wherever the inverted values deviated by more than 50% of the ` true ' values .we then get the result shown in the right panel .we can see that the error is very small even in this case .the mean value of error is .we have discussed the direct method of estimating from vector magnetograms using the moment of minimization .the higher order moments also hold promise for generating an index for predicting the flare productivity in active regions .the global value of twist of an active region can be measured with a very good accuracy by calculating .accurate value of twist can be obtained even if one polarity of a bipole is observed .the magnetic energy calculation is very accurate as seen from our results .very less error ( approximately 0.5% ) is seen in magnetic energy with 0.5% noise in the profiles .thus we conclude that the magnetic energy can be estimated with very good accuracy using the data obtained from modern telescopes like hinode ( sot / sp ) .this gives us the means to look for magnetic energy changes released in weak c - class flares which release radiant energy of the order of 10 ergs ( see appendix - c ) , thereby improving the statistics .these energy estimates are however subject to the condition that the photospheric magnetic field is force - free , a condition which is not always met with .we must then obtain the energy estimates using vector magnetograms observed at higher atmospheric layers where the magnetic field is force - free .the 180 azimuthal ambiguity ( aa ) is another source of error for determining parameters like and magnetic free energy in real sunspot observations .the smaller the polarimetric noise , 
the smaller is the uncertainty in azimuth determination , thereby allowing us to extend the range of the acute angle method used in our analysis . on the other handit is difficult to predict the level of uncertainty produced by aa .influence of aa is felt more at highly sheared regions which will anyway deviate from the global alpha value .thus , avoiding such pixels will improve determination of .magnetic energy calculation at such pixels could be done by comparing energy estimates obtained by ` flipping ' the azimuths and choosing the mean of the smallest and the largest estimate of the energy .here we assume that half the number of pixels have the true azimuth .this is the best one can do for a problem that really has no theoretical solution allowed by the zeeman effect ( but see also , and references therein ) .observational techniques such as use of chromospheric chirality or use of magnetograms observed from different viewing angles could perhaps resolve the aa .patches of both signs of alpha can be present in a single sunspot . in those casesthe physical meaning of becomes unclear .efforts are needed to understand the origin of such complex variation of in a sunspot .real sunspots show filamentary structures . if this structure is accompanied by local variations of , then does the global result from correlations in the local values ? or , are the small scale variations due to a turbulent cascade from the large scale features ? the answers to these questions are beyond the scope of our present study .modeling sunspots with such complex fine structures is a great challenge .however , we plan to address the question of fine structure of twists in real sunspots observed from hinode ( sot / sp ) , in our forthcoming study . for the present, we demonstrate that the global twist present in an active region can be accurately measured without ambiguity in its sign .furthermore , the high accuracy of magnetic energy estimation that can be obtained using data from modern instruments will improve the probability for detecting the flare related changes in the magnetic energy of active regions .+ + we thank professor e. n. parker for discussion leading to our understanding about the physical meaning of parameter during his visit to udaipur solar observatory in november 2007 .we also thank him for looking at an earlier draft of the manuscript and for making valuable comments to improve it .one of us ( jayant joshi ) acknowledge financial support under isro / cawses - india programme .we thank dr .a. lagg for providing the helix code .we are grateful for the valuable suggestions and comments of the referee which have significantly improved the manuscript . +( derived from the discussions with professor eugene n. parker during his visit to udaipur solar observatory ) taking surface integral on both sides of eqn .( 2 ) , we get in the cylindrical coordinate we can write eqn.(a2 ) as where z and are axial and radial distances from origin , respectively the equation of field lines in cylindrical coordinates is given as : or , using eqns .( a3 ) & ( a5 ) , we get from equation ( a6 ) it is clear that the gives twice the degree of twist per unit axial length .if we take one complete rotation of flux tube i.e. 
, , and loop length , then comes out of the order of approximately per meter .( 2 ) can be written as giving vector potential in terms of scalar potential as which is valid only for constant .+ using this relation in eqn.(1 ) , we get magnetic helicity as second term in the right hand side of eqn .( b3 ) can be written as , ( from gauss divergence theorem ) which is equal to zero for a closed volume where magnetic field does not cross the volume boundary ( ) provided that remains finite on the surface .therefore , we get magnetic helicity in terms of as which shows that the force free parameter has the same sign as that of the magnetic helicity .however , if , then the contribution of the second term in eqn .( b3 ) remains unspecified .thus it is not correct to use alpha to determine the sign of magnetic helicity for the half space above the photosphere since at the photosphere .with the simplifying assumption that all classes of soft x - ray flares have a typical duration of 16 min , we can see that the energy released in the different classes of flares will be proportional to their peak power .since x - class flares typically release radiant energy of the order of ergs , therefore m - class , c - class , b - class and a - class flares will release radiant energy of the order of respectively 10 , 10 , 10 and 10 ergs .lr doppler velocity , v ( ms ) & 0 + doppler width , v ( m ) & 20 + ratio of center to continuum opacity , & 20 + source function , s & 0.001 + slope of the source function , s & 1.0 + damping constant , & 1.4 + , d. f. , et al .1992 , in society of photo - optical instrumentation engineers ( spie ) conference series , vol . 1746 , society of photo - optical instrumentation engineers ( spie ) conference series , ed .d. h. goldstein & r. a. chipman , 22 , h. p. , harvey , j. w. , henney , c. j. , hill , f. , & keller , c. u. 2002 , in esa special publication , vol .505 , solmag 2002 .proceedings of the magnetic coupling of the solar atmosphere euroconference , ed .h. sawaya - lacoste , 15 , c. u. , harvey , j. w. , & giampapa , m. s. 2003 , in presented at the society of photo - optical instrumentation engineers ( spie ) conference , vol .4853 , society of photo - optical instrumentation engineers ( spie ) conference series , ed .s. l. keil & s. v. avakyan , 194 , k. , et al .2004 , in society of photo - optical instrumentation engineers ( spie ) conference series , vol . 5171 , society of photo - optical instrumentation engineers ( spie ) conference series , ed . s. fineschi & m. a. gummin , 207
the force - free parameter , also known as helicity parameter or twist parameter , bears the same sign as the magnetic helicity under some restrictive conditions . the single global value of for a whole active region gives the degree of twist per unit axial length . we investigate the effect of polarimetric noise on the calculation of global value and magnetic energy of an analytical bipole . the analytical bipole has been generated using the force - free field approximation with a known value of constant and magnetic energy . the magnetic parameters obtained from the analytical bipole are used to generate stokes profiles from the unno - rachkovsky solutions for polarized radiative transfer equations . then we add random noise of the order of 10 of the continuum intensity ( i ) in these profiles to simulate the real profiles obtained by modern spectropolarimeters like hinode ( sot / sp ) , svm ( uso ) , asp , dlsp , polis , solis etc . these noisy profiles are then inverted using a milne - eddington inversion code to retrieve the magnetic parameters . hundred realizations of this process of adding random noise and polarimetric inversion is repeated to study the distribution of error in global and magnetic energy values . the results show that : ( 1 ) . the sign of is not influenced by polarimetric noise and very accurate values of global twist can be calculated , and ( 2 ) . accurate estimation of magnetic energy with uncertainty as low as 0.5% is possible under the force - free condition .
there are two important phenomena observed in evolutionary dynamical systems of any kind : _ self - organization _ and _ emergence_. both phenomena are the exclusive result of endogenous interactions of the individual elements of an evolutionary dynamical system .emergence characterizes the patterns that are situated at a higher macro level and that arise from interactions taking place at the lower micro level of the system .self - organization , besides departing from the individual micro interactions , implies an increase in order of the system , being usually associated to the promotion of a specific functionality and to the generation of patterns .typically , complex patterns emerge in a system of interacting individuals that participate in a self - organizing process .self - organization is more frequently related to the process itself , while emergence is usually associated to an outcome of the process .although less frequently mentioned , the emergence of patterns from self - organizing processes may be strongly dependent on _ locality_. emergence and self - organization are not enough to distinguish between two important and quite different circumstances : the presence of an influence that impacts the system globally and , conversely , the absence of any global influence and the lack of information about any global property of the system . in the latter case ,the system itself is the exclusive result of local interactions . such a global influence ( entity or property )is often associated with the concept of _ environment_. noteworthy , the latter circumstance may be considered a case of the former : when that global entity does not exist , the environment for each agent is just the set of all the other agents .conversely , when the global entity exists , it is considered part of the environment and may have an inhomogeneous impact on the individual dynamics .regardless of the environmental type , economical , ecological and social environments share as a common feature the fact that the agents operating in these environments usually try to improve some kind of utility , related either to profit , to food , to reproduction or to comfort and power .a general concept that is attached to this improvement attempt is the idea of _adaptation_. in the economy , adaptation may be concerned with the development of new products to capture a higher market share or with the improvement of the production processes to increase profits : that is , innovation . in ecology ,adaptation concerns better ways to achieve security or food intake or reproduction chance and , in the social context , some of the above economical and biological drives plus a few other less survival - oriented needs . in all cases, adaptation aims at finding strategies to better deal with the surrounding environment ( ) .natural selection through fitness landscapes or geographic barriers are good examples how global influences are considered when modeling adaptation in an evolutionary process . on the other hand, adaptation also operates in many structure generating mechanisms that can be found in both physical and social sciences but that are built on the exclusive occurrence of local interactions . 
in biology , the ultimate domain of evolution and natural selection , we are confronted with tremendous organic diversity virtually infinite forms and shapes none of which found twice but the distribution is well structured in a way that allows us to order this diversity and to speak of species , families , orders etc .a quite illustrative description is given by the evolutionary geneticist theodusius dobzhanski ( : p.21 ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suppose that we make a fairly large collection , say some 10,000 specimens , of birds or butterflies or flowering plants in a small territory , perhaps 100 square kilometers .no two individuals will be exactly alike .let us , however , consider the entire collection .the variations that we find in size , in color , or in other traits among our specimens do not form continuous distributions .instead , arrays of discrete distributions are found .the distributions are separated by gaps , that is , by the absence of specimens with intermediate characteristics .we soon learn to distinguish the arrays of specimens to which the vernacular names english sparrow , chickadee , bluejay , blackbird , cardinal , and the like , are applied ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if we had to make a visual representation of this description of intra and interspecies variations it would perhaps look like the multi - modal distribution shown in figure [ fig : distribution01 ] .what we call a species , is in fact some norm or mean characteristics of a cluster of individuals .evolutionary theory is ultimately a theory about the history which led to such a pattern . andif the organic diversity we observe nowadays evolved in a way that is characterized by some kind of > > tree of live < < , then there must be events that may lead to the split of a connected set of individuals ( protospecies ) into ( at least ) two sets that are not connected any longer ( see figure [ fig : speciation ] ) . in biology , this is called _ speciation_. as we will see in this article , though , the generation of such a split with simple but well known evolutionary models in which `` natural selection impels and directs evolutionary changes '' ( ibid .p.2 ) is not straightforward .it so happens that constraints on the interaction behavior are required .the phenotype of living beings is not the only domain where patterns of structured diversity as illustrated in figure [ fig : distribution01 ] are observed .phenomena include certain phases of structure formation in physical cosmology , distribution of cultural behavior , languages and dialects , herd behavior in finance , among others . especially for the latter examples in the field of socio - cultural dynamicsa variety of models has been proposed which do not rely on the evolutionary concept of ( natural ) selection .they are rather based on the idea of exclusively _ local interactions ( li ) _ implemented in form of a system of agents that interact locally according to simple rules like assimilation or conformity . in these systems , finding strategies to better deal with the surrounding environment ( and thus improving fitness ) is not constrained by any global property .it may , however , be constrained by local ( individual ) rules .as we shall see later in this paper , constraints on the mechanisms of selection , interaction and replacement and the way they are combined in the modeling of an evolutionary process have an important bearing on both adaptation and emergence of speciation .locality operating in each of these mechanisms seems to be the fundamental modeling principle by which emergence of a multi - modal distribution as shown in figure [ fig : distribution01 ] can be explained .on the basis of these observations about the > > modelability < < of speciation with evolutionary and self - organisatory models , we study in this paper the conditions and mechanisms required for speciation and the emergence of a multi - modal distribution . 
in this analysis, we use computational ( section [ sec : computation ] ) as well as mathematical ( section [ sec : mathematic ] ) arguments .our models simulate how a population of individuals evolves in time in an abstract attribute space that represent phenetic traits , attitudes , verbal behavior , etcetera .modeling agents as points in an attribute space of this kind is of course a highly artificial abstraction from the complexity and multi dimensionality of real agents . for the purposes of this paper ,let us conceptualize an _ interaction event _ , defining the system evolution from one time step to the other , by the following three components : 1 .selection of agents , 2 .application of interaction rules , 3 .replacement of agents .any interaction event ( e.g. , mating , communication , ... ) that takes place in the course of a simulation of the model consists of the sequential application of these three steps .the reason to dissect the interaction events in this way is two fold : 1 . we want to look at the dynamical and structural effects of constraints applied to each of the three components independently ; 2 .the scheduling of interaction events may have a crucial effect on the model behavior , and with the distinction between selection and interaction on the one hand , and replacement on the other , we are able to make this effect explicit .the way interaction events are scheduled in the implementation of the models is not always given much importance in existing simulation studies .in the presence of constraints on the selection and interaction mechanisms , however , the outcome as well as the dynamical properties depend in a crucial way on the different choices . on the other hand , there are studies that do analyze the differences between synchronous and asynchronous update ( see , for instance , ) as well as studies on non overlapping ( nolg ) and respectively overlapping generations ( olg ) in biology and economics ( for instance , ) . herewe show that especially when the interaction is constrained ( as in the case of assortative mating ) there emerges an important qualitative difference between olg and nolg models .namely , speciation is observed in the former , but not in the latter case , whereas adaptation is favored by the latter and hindered by the former .however , by the distinction of selection , interaction and replacement we are able to show that in fact the difference between local and non - local replacement plays the determinant role ( and not the distinction between olg and nolg ) . even though locality also impacts selection and interaction mechanisms , it is on the replacement mode where relies the fundamental difference with respect to the conditions required for either adaptiveness or speciation .this paper is organized as follows : section 2 addresses the main issues of both the fitness landscape and the self - organizing models from a computer simulation framework . in both cases ,microscopic implementation rules are tested against their capability of reproducing adaptiveness and speciation . 
in section 3 ,the emergence of speciation is analytically shown to be dependent on the choice of different replacement modes .this is accomplished through a probabilistic description of a minimal model of just three phenetic traits where the transition probabilities between traits follow a markov chain .section 4 is targeted at presenting concluding remarks and a framework that relates interaction events to the emergence of collective structures in adaptive and self - organizing complex systems .in biology , and population genetics in particular , adaptive walks on fitness landscapes have been studied intensively .the main questions addressed by fitness landscapes approaches are related to the possible structure of the landscapes ( e.g. , ) , to how populations climb an adaptive peak in the landscape ( e.g. , ) , and to the circumstances under which a population might wander from one peak to another by crossing adaptive valleys ( e.g. ) .one of the best known models for populations on fitness landscapes is the wright - fisher model with non overlapping generations ( sometimes called wright - fisher sampling and shortened in the sequel by wf model , see and also ) . consider a population of individuals which is said to constitute the original generation ( ) .we consider only the case of sexual reproduction in this paper , in which the genotype of a new born individual is obtained by the recombination of the genoms of two randomly chosen parent individuals . as noted above, the choice of two parents and the application of a recombination rule is referred to as interaction ( or mating ) event . in the wf model , mating events are performed until a new generation of individuals is complete .as soon as it is complete , the parent generation is canceled and the process is repeated taking the new generation as parents . therefore , in the wf model the population size is always maintained at .we will denote the generation number by .we implemented this simple model and performed simulations on different toy fitness landscapes .the microscopic rules involved into the creation of a new individual , that is , the mating event , are as follows : 1 .selection of two individuals with a probability proportional to their fitness , 2 .application of recombination and mutation rules , 3 .replacement of an agent from the parent generation . in this toy model , we consider only one phenetic trait ( locus ) that takes discrete values ( from 0 to 99 ) .we denote the traits of the two chosen parent individuals and as and respectively and model recombination by taking the average of the two , . to model mutationswe add a random value to . in the wf model , is stored at an arbitrary place in the children array and one of the main objectives of this paper is clarify that this has important consequences for the model dynamics .an adaptive landscape is introduced into the model by assigning a fitness value to each of the 100 traits .for the first analysis shown in figure [ fig : onepeak.wf ] , a single peaked fitness function with a peak at trait 75 is used and the fitness assigned to trait is given by we have used the normal distribution with and in the construction of the fitness landscape ( solid line in figure [ fig : onepeak.wf ] ) . in the iteration process , individuals are chosen as parents with a probability proportional to , being the trait of the respective individual . 
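a minimal python sketch of the wright-fisher toy model just described is given below; the width of the fitness peak, the mutation step and the initial spread are assumptions, since the corresponding numerical values did not survive extraction, while the trait range (0-99), the peak position (trait 75), the fitness-proportional selection, the averaging recombination and the population size of 500 used in the illustrations are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRAITS, N_POP = 100, 500
SIGMA = 10.0                                   # width of the fitness peak (assumed)
fitness = np.exp(-((np.arange(N_TRAITS) - 75) ** 2) / (2 * SIGMA ** 2))

def next_generation(pop):
    """One Wright-Fisher step with non-overlapping generations.

    Parents are drawn with probability proportional to the fitness of their
    trait, the child trait is the rounded mean of its two parents plus a small
    random mutation, and the children replace the whole parent generation.
    """
    p = fitness[pop] / fitness[pop].sum()
    children = np.empty_like(pop)
    for i in range(N_POP):
        a, b = rng.choice(N_POP, size=2, p=p)
        child = int(round((pop[a] + pop[b]) / 2)) + int(rng.integers(-2, 3))  # mutation step assumed
        children[i] = min(max(child, 0), N_TRAITS - 1)
    return children

pop = rng.integers(40, 61, size=N_POP)         # initial cluster around trait 50 (assumed)
for _ in range(200):
    pop = next_generation(pop)
print(pop.mean())                              # the population climbs towards the peak at 75
```

here the children are simply filled in order, which for the non-overlapping-generation model is equivalent to storing them at arbitrary positions in the children array; it is precisely this unconstrained replacement whose consequences the paper sets out to examine.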
for the illustrative model realizations in this section , we set .initially , the 500 individuals are distributed in this space according to a normal distribution with mean and ( see first image of figure [ fig : onepeak.wf ] ) .this section is mainly thought as an illustration of the different behaviors and patterns generated by certain constraints on the interaction mechanism . as the qualitative effects of different assumptions become evident and comprehensible in single simulations of the model, there is no need for a rigorous statistical analysis of suites of simulations with varying initial conditions .moreover , a mathematical analysis of the model dynamics is presented in the second part of this paper ( section [ sec : mathematic ] ) .[ cols="^,^,^ " , ] the framework presented in table 1 schematically shows the consequences of adopting ( un)constrained mechanisms to the emergent outcome of a so process .it helps to emphasize that the emergence of some specific patterns may be strongly dependent on the way constraints dictate limitations on the selection , interaction and replacement mechanisms .more specifically , it shows that differently ( un)constraining the replacement mechanism of an so process provides the conditions required for either speciation ( the emergence of multi - modal distributions ) or adaptation , since these features appear as two opposing phenomena , not achieved by one and the same model . in the same way that random interbreeding leads to conservative dynamics , randomreplacement is also an opposing force to speciation since newcomers may take the place of former - distant agents . in so doing , at the macro level , random replacement sets aside the effect of bounded confidence and - likeundirected genetic drift - may lead to the merging of subpopulations . even though we show in this paper that natural selection, operating as an external , environmental mechanism , is neither necessary nor sufficient for the creation of clustered populations , we do not want to argue against natural selection as an important mechanism in the biological domain and a substantive driving force in the speciation process .to the contrary , the concept of ( natural ) selection operating at a global level may provide us with plausible interpretations of the model results , even in disciplines where such interpretations are still lacking .in the words of t. dobzhanski ( , p.5 - 6 ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ ... ] in biology nothing makes sense except in the light of evolution .it is possible to describe living beings without asking questions about their origins .the descriptions acquire meaning and coherence , however , only when viewed in the perspective of evolutionary development . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _financial support of the german federal ministry of education and research ( bmbf ) through the project _ linguistic networks _ is gratefully acknowledged ( http://project.linguistic-networks.net ) .this work has also benefited from financial support from the fundao para a cincia e a tecnologia ( fct ) , under the _ 13 multi - annual funding project of uece , iseg , technical university of lisbon_.
in this paper , we inspect well known population genetics and social dynamics models . in these models , interacting individuals , while participating in a self - organizing process , give rise to the emergence of complex behaviors and patterns . while one main focus in population genetics is on the adaptive behavior of a population , social dynamics is more often concerned with the splitting of a connected array of individuals into a state of global polarization , that is , the emergence of speciation . applying computational and mathematical tools we show that the way the mechanisms of selection , interaction and replacement are constrained and combined in the modeling have an important bearing on both adaptation and the emergence of speciation . differently ( un)constraining the mechanism of individual replacement provides the conditions required for either speciation or adaptation , since these features appear as two opposing phenomena , not achieved by one and the same model . even though natural selection , operating as an external , environmental mechanism , is neither necessary nor sufficient for the creation of speciation , our modeling exercises highlight the important role played by natural selection in the interplay of the evolutionary and the self organization modeling methodologies . _ keywords _ : * emergence*,*self - organization*,*agent based models * , * speciation * , * markov chains*. _ msc : _ 37l60 , 37n25 , 05c69 .
the general class of inventory - production systems is often associated with cost optimization problems . indeed , one must deal with three major matters : the storage of components , the possible random behavior of the manufacturing process , and random client demand . the controller must decide which production rate of the components fits best . a production rate that is too slow leads to low stock levels but might not meet client demand . conversely , a fast production rate does meet the demand , but may raise stock levels . one must then find a balance between the two to minimize costs . this paper focuses on the optimization of a real - life industrial launcher integration process studied in collaboration with airbus defence and space . clients order a certain number of launches to be performed at specific dates . the controller has to determine the production rates in order to minimize costs . only storage and lateness costs are taken into account here . in general , the costs may also take into account several other constraints such as exploitation cost , workforce salary , the cost related to the unavailability of the structure including any penalty , or the maintenance and inspection cost , among others . in addition , a part of the architecture of the process is not set . indeed , the controller has to decide on the maximum capacity of one warehouse between two options . the originality of this problem is twofold . on the one hand , the optimization horizon is rather long , 30 years , but the controller can only make decisions once a year concerning the production rates . on the other hand , the launches must be performed according to a prescribed calendar corresponding to client orders . our goal is to find an optimization procedure usable in practice . it should provide explicit decision rules applicable to each trajectory , in the form of a table giving the controller the best action to take according to the current state and time . a preliminary study was performed on a simplified process using petri nets . although they are easy to simulate , they are not suitable for performing dynamic decisional optimization . a more suitable framework is that of markov decision processes ( mdps ) . mdps are a class of stochastic processes suitable for cost and decision optimization . briefly , at each state , a controller makes a decision which has an influence on the transition law to the next state and on a cost function . the latter depends on the starting state and the decision made . the sequence of decisions is called a policy , and its quality is gauged by a cost criterion ( typically , the sum of all the costs generated by the transitions ) . the first step to solve our problem is to implement an mdp - based simulator of the launcher integration process . simulation results were presented at the esrel conference in 2015 . this paper deals with the optimization itself . it is a non - standard optimization problem within the mdp framework because the transition law is not analytically explicit ; it can only be simulated . thus , standard optimization techniques for mdps such as dynamic programming or linear programming do not apply . in addition , the cost function is unusual , as the actual lateness can be computed only at the end of a year , and not at its beginning when the controller makes their decisions . as the launcher integration process can be simulated , we investigate simulation - based algorithms for mdps . these extensively use monte - carlo methods to estimate the performance of a policy .
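to make the mdp vocabulary above concrete , the following sketch evaluates fixed policies on a tiny synthetic finite mdp by monte - carlo simulation , i.e. by averaging the total cost accumulated along simulated trajectories . the transition probabilities , costs , horizon and policies are invented for the illustration and are unrelated to the launcher model described later .

```python
import numpy as np

rng = np.random.default_rng(0)

# toy finite mdp: 3 states, 2 actions (all numbers are illustrative).
# P[a, s, s'] = transition probability, c[s, a] = one-step cost.
P = np.array([[[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.3, 0.7]],
              [[0.5, 0.5, 0.0],
               [0.0, 0.6, 0.4],
               [0.0, 0.1, 0.9]]])
c = np.array([[1.0, 2.0],
              [0.5, 1.5],
              [4.0, 0.5]])

def evaluate_policy(policy, s0=0, horizon=30, n_traj=2_000):
    """average total cost of a deterministic policy (state -> action)."""
    total = 0.0
    for _ in range(n_traj):
        s, cost = s0, 0.0
        for _ in range(horizon):
            a = policy[s]
            cost += c[s, a]
            s = rng.choice(3, p=P[a, s])
        total += cost
    return total / n_traj

print(evaluate_policy(np.array([0, 0, 0])))   # a "naive" policy
print(evaluate_policy(np.array([1, 0, 1])))   # a hand-made alternative
```

averaging over many simulated trajectories is exactly what the simulation - based algorithms have to do for every candidate policy , which is why the speed of the simulator becomes critical .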
thus , they require a fast enough simulator for the algorithms to give a result within a reasonable time . new difficulties arise here . first , the state space of our mdp , though finite , is huge . second , the first simulator , written in matlab , is not fast enough . third , the algorithms require the computation of a product of numerous numbers between 0 and 1 , and although the result is non - zero on paper , it is treated as zero numerically , leading to erroneous results . to overcome these difficulties , we reduce the state space by aggregating states in a manner that makes sense regarding our application , and we use the c language and a special logarithmic representation of numbers . the results we obtained are presented and discussed . this paper is organized as follows . section [ lauint ] is dedicated to the description of the assembly line under study and the statement of the optimization problem . in section [ mardec ] , we present how the optimization problem for the assembly line fits into the mdp framework . section [ optlau ] presents the main difficulties encountered while trying to optimize our mdp , and solutions to bypass them . in section [ numres ] , we present and comment on the numerical results obtained . finally , a last section gives some concluding remarks . technical details regarding the implementation of the algorithms are provided in the appendix . airbus defence and space ( airbus ds ) , as prime contractor , is in charge of launchers and ground facilities design . this paper is dedicated to the optimization of an assembly line representative of a launcher integration process managed by airbus ds . for confidentiality matters , all parameter values and random distributions given in this paper are arbitrary but realistic . the launcher integration process we study in this paper is depicted in figure [ process ] . this assembly line is composed of several workshops and storage facilities that are described in detail in the following sections , and is operational typically for 30 years . the subassemblies are the input of the assembly line . a launcher needs four types of subassemblies to be manufactured . these are * the insulated motor cases ( imcs ) , which are powder - free boosters , * the lower liquid propulsion modules ( llpms ) and the upper liquid propulsion modules ( ulpms ) , which are the lower part of a launcher , * the upper launchers , which are the fairings of the launchers . the upper launchers are always available when needed . so in the following , their production rate and storage cost are not taken into account . the production time of the other subassemblies ( imc , llpm , ulpm ) is assumed to be random . its distribution is symmetric around a mean and takes 5 different values with probabilities given in table [ lawsub ] . the average production time is computed by dividing the total number of workdays by the target number of subassemblies the controller decides to produce in a year , and taking its integer part . the number of workdays in a year is set at 261 . so for instance , if the controller wants an average production of 12 llpms a year , the mean production time for llpms will be [ 261/12 ] = 21 workdays , where [ x ] stands for the integer part of a real number x . table [ lawsub ] : distribution of the production time of a subassembly around its mean . storing 4 srms at most induces a lower cost . however , the gain compared to an 8 - unit storage is only 0.61% ( it is 14.36% when compared to the naive policy ) .
taking into account the variance of the costs ,one may conclude that , in this case , the two scenarios lead to similar performances . with these three examples, one sees that the question of the optimal srm storage capacity is not trivial at all .indeed , it seems to be impossible to answer this question with only prior knowledge .one may consider static costs to have a more accurate answer to this problem ( for instance exploitation costs , a larger warehouse being naturally more expensive ) .however , it seems that a maximum capacity of 4 srms is a better choice in general .using the mdp framework together with a simulation based algorithm , we performed the optimization of the launcher integration process .several problems had to be addressed , such as different time scales , state space reduction computation speed or numerical representation of numbers .given a launch calendar , optimal policies are computed and stored in the form of a matrix . to apply the optimal policy in practice , the controller looks up the best action to select in the matrix , given the current state of the process and the current time .such policies do not have a trivial form and can not be easily explained .they lead to up to 70% gain compared to trivial policies prescribing a manufacturing rate corresponding to the exact number of launches to be performed in the year . to address real life optimization problem regarding launcher operations , one should now work with a calendar that is known only two years advance .one option that would fit the present framework would be to consider that the calendar of year is randomly drawn at the beginning of year .further exchange with practitioners is required to derive realistic distributions for such calendars .another possible extension of interest to airbus ds is to model the production of subassemblies with more detail , inducing longer delays for selecting the production rates .instructions for the mras are given on algorithm [ mrasa ] and for the asa on algorithm [ asaa ] .they use the following notation . for some set , denotes its indicator function , that is if and otherwise .the upper integer part of is denoted .let and define the function on by for all states , all decisions and all times , let the set of policies which prescribe action in state at time . for a policy , let and , for , let with a given probability matrix .initial 3-dimensional probability matrix , ] , initial state , iteration count , limit number of iterations . draw policy from matrix with probability and matrix with probability . simulate trajectories with as initial state using policy . compute the cost generated for the trajectories . . arrange the in descending order to get a sequence such that . . . . . policy such that . . . . policy such that . where . . . . . + . . . initial 3-dimensional probability matrix , , , temperature parameter , , , initial state , iteration count , limit number of iterations . draw policy from matrix with probability and matrix with probability . simulate trajectories with as initial state using policy . compute the cost generated for the trajectories . . . ) ] . . . . .elegbede c , brard - bergery d et al .dynamical modelling and stochastic optimization for the design of a launcher integration process . 
in _ safety , reliability and risk analysis : beyond the horizon _( eds rdjm steenbergen et al ) , amsterdam , the netherlands , 29 september2 october 2013 , pp .3039 3046 , london : taylor and francis group , crc press .puterman ml ._ markov decision processes : discrete stochastic dynamic programming_. wiley series in probability and mathematical statistics : applied probability and statistics .new - york : john wiley & sons , inc , 1994 .nivot c , de saporta b , dufour f et al .modeling and optimization of a launcher integration process . in _safety and reliability of complex engineered systems : esrel 2015 _ ( eds l podofillini et al ) , zrich , switzerland , 7 september10 september 2015 , pp .2281 2288 , london : taylor and francis group , crc press .
this paper is dedicated to the numerical study of the optimization of an industrial launcher integration process . it is an original case of inventory - production system where a calendar plays a crucial role . the process is modeled using the markov decision processes ( mdps ) framework . classical optimization procedures for mdps can not be used because of specificities of the transition law and cost function . two simulation - based algorithms are tuned to fit this special case . we obtain a non trivial optimal policy that can be applied in practice and significantly outperforms reference policies .
solar - cycle prediction , _i.e. _ forecasting the amplitude and/or the epoch of an upcoming maximum is of great importance as solar activity has a fundamental impact on the medium - term weather conditions of the earth , especially with increasing concern over the various climate change scenarios .however , predictions have been notoriously wayward in the past .there are basically two classes of methods for solar cycle predictions : empirical data - analysis - driven methods and methods based on dynamo models .most successful methods in this regard can give reasonably accurate predictions only when a cycle is well advanced ( _ e.g. , _ three years after the minimum ) or with the guidance from its past .hence , these methods show very limited power in forecasting a cycle which has not yet started .the theoretical reproduction of a sunspot series by most current models shows convincingly the illustrative nature " of the existing record .however , they generally failed to predict the slow start of the present cycle 24 .one reason cited for this is the emergence of prolonged periods of extremely low activity .the existence of these periods of low activity brings a big challenge for solar - cycle prediction and reconstruction by the two classes of methods described above , and hence prompted the development of special ways to evaluate the appearance of these minima .moreover , there is increasing interest in the minima since they are known to provide insight for predicting the next maximum .some earlier authors have both observed and made claims for the chaotic or fractal features of the observed cycles , but the true origin of such features has not yet been fully resolved .for instance , the hurst exponent has been used as a measure of the long - term memory in time series an index of long - range dependence that can be often estimated by a rescaled range analysis .the majority of hurst exponents reported so far for the sunspot numbers are well above , indicating some level of predictability in the data .nonethteless , it is not clear whether such predictability is due to an underlying chaotic mechanism or the presence of correlated changes due to the quasi-11-year cycle .it is the irregularity ( including the wide variations in both amplitudes and cycle lengths ) that makes the prediction of the next cycle maximum an interesting , challenging and , as yet , unsolved issue .in contrast to the 11-year cycle _ per se _ , we concentrate on the recently proposed hypothetical long - range memory mechanism on time scales shorter than the quasi - periodic 11-year cycle . in this work ,we provide a distinct perspective on the strong maximal activities and quiescent minima by means of the so - called visibility graph analysis .such graphs ( mathematica graphs , in the sense of networks ) have recently emerged as one alternative to describe various statistical properties of complex systems .in addition to applying the standard method , we generalize the technique further making it more suitable for studying the observational records of the solar cycles .both the international sunspot number ( isn ) and the sunspot area ( ssa ) series are used in this work , and we have obtained consistent conclusions in either case . 
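since the hurst exponent and rescaled - range analysis are invoked above as a measure of long - term memory , the sketch below shows one standard way such an estimate is obtained ; the window sizes and the white - noise test signal are arbitrary choices for the illustration and are not taken from any of the cited studies .

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """estimate the hurst exponent by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    rs_means = []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)
        rs_means.append(np.mean(rs_vals))
    # R/S grows roughly like (window size)^H, so H is the log-log slope
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.normal(size=10_000)
print(hurst_rs(white, [16, 32, 64, 128, 256, 512]))   # close to 0.5 for uncorrelated noise
```

values well above 0.5 would point to persistence , in line with the hurst exponents quoted above for the sunspot numbers ; the ambiguity discussed in the text is whether such values reflect genuine memory or simply the quasi - 11 - year cyclicity .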
the length of the data sets are summarized in table [ tab : tspan ] .we perform a visibility - graph analysis using both monthly and daily sunspot series , which yields , respectively , month - to - month and day - to - day correlation patterns of the sunspot activities .note that we depict the annual numbers _ only _ for graphical visualization and demonstration purposes ( we use the annual numbers to demonstrate our method the actual analysis is performed in daily and monthly data ) .we discuss the results with the isn ( in figs .[ sn_sa_data ] , [ ts_deg_maxmin_cp ] ) in the main text and illustrate the results for the ssa ( in figs .[ sa_nasa_data ] , [ ts_deg_maxmin_cpnasa ] ) with notes in the captions .moreover , we compare our findings based on observational records to the results obtained from data produced by simulations from computational models ..temporal resolution and the length of the data sets .values in parentheses are the number of points of each series .note that the annual isn is used _ only _ for graphical visualization purposes and to provide a reference time interval for models .[ cols="<,^,^",options="header " , ] recently a variety of methods have been proposed for studying time series from a complex networks viewpoint , providing us with many new and distinct statistical properties . in this work ,we restrict ourselves to the concept of the visibility graph ( vg ) , where individual observations are considered as vertices and edges are introduced whenever vertices are visible .more specifically , given a univariate time series , we construct the 01 binary adjacency matrix of the network .the algorithm for deciding non - zero entries of considers two time points and as being mutually connected vertices of the associated vg if the following criterion is fulfilled for all time points with .therefore , the edges of the network take into account the temporal information explicitly . by default ,two consecutive observations are connected and the graph forms a completely connected component without disjoint subgraphs .furthermore , the vg is known to be robust to noise and not affected by choice of algorithmic parameters most other methods of constructing complex networks from time series data are dependent on the choice of some parameters ( _ e.g. _ the threshold of recurrence networks , see more details in ) .while the inclusion of these parameters makes these alternative schemes more complicated , they do gain the power to reconstruct ( with sufficient data ) the underlying dynamical system .for the current discussion we prefer the simplicity of the visibility graph .the vg approach is particularly interesting for certain stochastic processes where the statistical properties of the resulting network can be directly related with the fractal properties of the time series .figure [ visi_intro ] illustrates an example of how we construct a vg for the sunspot time series .it is well known that the solar cycle has an approximately 11-year period , which shows that most of the temporal points of the decreasing phase of one solar cycle are connected to those points of the increasing phase of the next cycle ( figure [ visi_intro](a ) ) .therefore , the network is clustered into communities , each of which mainly consists of the temporal information of two subsequent solar cycles ( figure [ visi_intro](b ) ) . 
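the construction just described can be implemented directly from its geometric formulation : two samples are linked whenever every intermediate sample lies strictly below the straight line joining them . the brute - force sketch below uses an arbitrary quasi - periodic toy series standing in for the sunspot data ; it is meant only as an illustration of the criterion , not as the code behind the actual analysis .

```python
import numpy as np

def visibility_graph(y):
    """adjacency matrix of the (natural) visibility graph of a series y.

    nodes a < b are linked if every intermediate point c lies strictly
    below the straight line joining (a, y[a]) and (b, y[b]).
    brute force O(N^3); fine for short illustrative series.
    """
    n = len(y)
    A = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                if y[c] >= y[b] + (y[a] - y[b]) * (b - c) / (b - a):
                    visible = False
                    break
            if visible:
                A[a, b] = A[b, a] = 1
    return A

# toy quasi-periodic series standing in for annual sunspot numbers
t = np.arange(60)
y = 50 * (1 + np.sin(2 * np.pi * t / 11)) + 5 * np.random.default_rng(3).random(60)
A = visibility_graph(y)
degree = A.sum(axis=0)
print(degree.max(), degree.argmax())   # hubs tend to sit near the cycle maxima
```

for the daily series a brute - force loop of this kind is far too slow ; faster divide - and - conquer implementations of the same criterion exist and would be used in practice .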
when the sunspot number reaches a stronger but more infrequent extreme maximum , we have inter - community connections , since they have a better visibility contact with more neighbors than other time points hence , forming hubs in the graph .the inter - community connections extend over several consecutive solar cycles encompassing the temporal cycle - to - cycle information .( open circles ) and the negatively inverted series ( filled circles ) , respectively.,title="fig : " ] ( open circles ) and the negatively inverted series ( filled circles ) , respectively.,title="fig : " ] ( open circles ) and the negatively inverted series ( filled circles ) , respectively.,title="fig : " ] depending on various notions of `` importance '' of a vertex with respect to the entire network , various centrality measures have been proposed to quantify the structural characteristics of a network ( _ c.f ._ ) .recent work on vgs has mainly concentrated on the properties of the degree and its probability distribution , where degree measures the number of direct connections that a randomly chosen vertex has , namely , .the degree sequence reflects the maximal visibility of the corresponding observation in comparison with its neighbors in the time series ( figure [ visi_intro](c ) ) .based on the variation of the degree sequence , we consider , , , and as hubs of the network , which can be used to identify the approximately 11-year cycle reasonably well ( figure [ visi_intro](b ) ) . furthermore in the case of sunspot time series, one is often required to investigate what contributions local minimum values make to the network something that has been largely overlooked by the traditional vgs .one simple solution is to study the negatively inverted counterpart of the original time series , namely , , which quantifies the properties of the local minima .we use and to denote the case of . here, we remark that this simple inversion of the time series allows us to create an entirely different complex network this is because the vg algorithm itself is extremely simple and does not attempt to reconstruct an underlying dynamical system .as shown in figure [ visi_intro](c ) , captures the variation of the local minima rather well .we will use this technique later to understand the long - term behavior of strong minima of the solar cycles .the degree distribution is defined to be the fraction of nodes in the network with degree .thus if there are nodes in total in a network and of them have degree , we have . for many networks from various origins , has been observed to follow a power - law behavior : .in the case of vgs , is related to the dynamical properties of the underlying processes .more specifically , for a periodic signal , consists of several distinct degrees indicating the regular structure of the series ; for white noise , has an exponential form ; for fractal processes , the resulting vgs often have power law distributions with the exponent being related to the hurst exponent of the underlying time series .it is worth pointing out that when one seeks to estimate the exponent it is often better to employ the cumulative probability distribution so as to have a more robust statistical fit . estimating the exponent of the hypothetical power law model for the degree sequence of vg can be done rather straightforwardly , but , the statistical uncertainties resulting from the observability of sunspots are a challenge for reliable interpretation . 
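following the text , the degree sequence of the graph built from the original series characterizes the maxima , the graph of the negatively inverted series characterizes the minima , and the cumulative degree distribution is the more robust object to fit . a minimal continuation of the previous sketch , reusing the visibility_graph helper and the toy series y defined there :

```python
import numpy as np

def degree_sequence(series):
    return visibility_graph(np.asarray(series, dtype=float)).sum(axis=0)

def cumulative_degree_distribution(k):
    """P(K >= k) evaluated at each distinct degree value."""
    k = np.sort(np.asarray(k))
    values = np.unique(k)
    pcum = np.array([(k >= v).mean() for v in values])
    return values, pcum

k_max = degree_sequence(y)      # hubs of the original series <-> strong maxima
k_min = degree_sequence(-y)     # hubs of the inverted series <-> strong minima
vals, pcum = cumulative_degree_distribution(k_min)
```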
meanwhile ,fitting a power law to empirical data and assessing the accuracy of the exponent is a complicated issue .in general , there are many examples which had been claimed to have power laws but turn out not to be statistically justifiable .we apply the recipe of to analyze the hypothetical power - law distributed data , namely , ( i ) estimating the scaling parameter by the maximum likelihood ( ii ) generating a -value that quantifies the plausibility of the hypothesis by the kolmogorov smirnov statistic . aside from the aforementioned degree and degree distribution , in the appendix [sapp : bet ] , some alternative higher order network measures are suggested which may be applied to uncover deeper dynamical properties of the time series from the vg .figure [ sn_sa_data](a , b ) show the degree distributions of the vgs derived from the isn with heavy - tails corresponding to hubs of the graph , which clearly deviates from gaussian properties .in contrast , of the negatively inverted sunspot series shows a completely different distribution , consisting of a bimodal property ( figure [ sn_sa_data]c , d ) , extra large degrees are at least two orders of magnitude larger than most of the vertices ( figure [ sn_sa_data](d ) ) .of vgs from monthly ( a , c ) and daily data ( b , d ) .( a , b ) is for , and ( c , d ) .upper insets of all plots are in linear scale , while lower inset of ( a , b ) shows cumulative distribution in double logarithmic scale , where a straight line is expected if would follow a power law . in ( a, b ) could be suspected to be in the range $ ] , while a fit to the first part of yields that the slope of dashed line in ( c ) is 1.79 , and that of ( d ) is 3.61 , _ but _ all -values are , rejecting the hypothetical power laws . ] , while circles ( strong minima ) are that of the vg from .( b , c ) cumulative probability distribution of the time intervals between subsequent strong maxima ( b ) , and minima ( c ) .a cumulative exponential distribution is plotted as a dashed line in ( b ) , while the dashed line in ( c ) is a linear fit with slope being equal to .the corresponding -value of ( c ) is , indicating that the power law is a plausible hypothesis for the waiting times of strong minima . ]since well - defined scaling regimes are absent in either or ( nor do they appear in the cumulative distributions as shown in the insets , see captions of figure [ sn_sa_data ] for details of the statistical tests which we apply ) , we may reject the hypothetical power laws in contrast to what has been reported in other contexts . in the context of studying grand maxima / minima over ( multi-)millennial timescale using some particular indirect proxy time series, the main idea lies in an appropriately chosen threshold , excursions above which are defined as grand maxima , respectively , below which are defined as grand minima . in this work , we use a similar concept but here in terms of degrees of the corresponding vg .we define a strong maximum if its degree is larger than ( figure [ sn_sa_data]a , b ) , a strong minimum if its degree is over ( figure [ sn_sa_data]c , d ) .note that , in general , our definition of strong maxima / minima coincides _ neither _ with the local maximal / minimal sunspot profiles since degree takes into account the longer term inter - cycle variations _ nor _ with those defined for an individual cycle .our definition avoids the choice of maximum / minimum for one cycle , which suffers from moving - average effects ( _ e.g. 
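the two - step recipe referred to above ( maximum - likelihood estimation of the scaling parameter , followed by a kolmogorov - smirnov based plausibility check ) can be sketched in its continuous approximation as follows ; the p - value itself is obtained by refitting many synthetic power - law samples , which is omitted here , so this is an illustration of the idea rather than the exact procedure behind the quoted p - values .

```python
import numpy as np

def fit_power_law(data, xmin):
    """continuous-approximation MLE for P(x) ~ x^(-alpha), x >= xmin,
    together with the KS distance between data and fitted model."""
    x = np.sort(np.asarray(data, dtype=float))
    x = x[x >= xmin]
    n = len(x)
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    cdf_emp = np.arange(1, n + 1) / n                 # empirical CDF above xmin
    cdf_mod = 1.0 - (x / xmin) ** (1.0 - alpha)       # model CDF above xmin
    ks = np.max(np.abs(cdf_emp - cdf_mod))
    return alpha, ks

rng = np.random.default_rng(4)
# synthetic power-law sample with alpha = 2.5 via inverse-CDF sampling
sample = (1.0 - rng.random(5000)) ** (-1.0 / 1.5)
print(fit_power_law(sample, xmin=1.0))   # alpha close to 2.5
```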
_ , ) .in contrast , our results below are robust with respect to the choice of the threshold degrees , especially in the case of the definition of strong minima , since large degrees are very well separated from others .the gray line in figure [ ts_deg_maxmin_cp](a ) shows the sunspot numbers overlaid by the maxima / minima identified by the large degrees .we find that the positions of strong maxima are substantially homogeneously distributed over the time domain , while that of the strong minima are much more clustered in the time axis although irregularly ( figure [ ts_deg_maxmin_cp](a ) ) .we emphasize that the clustering behavior of the strong minima on the time axis as shown in figure [ ts_deg_maxmin_cp](a ) and figure [ ts_deg_maxmin_cpnasa](a ) does _ not _ change if threshold degrees are varied in the interval of ( 200 , 8000 ) to define strong minima , _i.e. _ , , as shown in figure [ ts_degk7000](a , b ) . therefore , the bimodality as observed in figure [ sn_sa_data](c , d ) and figure [ sa_nasa_data](c , d ) are not due to the finite size effects of time series .the hidden regularity of the time positions of maxima / minima can be further characterized by the waiting time distribution : the interval between two successive events is called the waiting - time . the statistical distribution of waiting - time intervals reflects the nature of a process which produces the studied events .for instance , an exponential distribution is an indicator of a random memoryless process , where the behavior of a system does not depend on its preceding states on both short or long time scales .any significant deviation from an exponential law suggests that the underlying event occurrence process has a certain level of temporal dependency .one representative of the large class of non - exponential distributions is the power laws , which have been observed in many different contexts , ranging from the energy accumulation and release property of earthquakes to social contacting patterns of humans . in the framework of vgs ,the possible long temporal correlations are captured by edges that connect different communities ( the increasing and decreasing phases of one solar cycle belong to two temporal consecutive clusters ) . as shown in figure [ ts_deg_maxmin_cp](b ) , the distribution of the waiting time between two subsequent maxima sunspot deviates significantly from an exponential function , although the tail part could be an indicator of an exponential form .in contrast , we show in figure [ ts_deg_maxmin_cp](c ) that the waiting times between subsequent strong minima have a heavy - tail distribution where the exponent is estimated in the scaling regime .this suggests that the process of the strong minima has a positive long term correlation , which might be well developed over the time between 151000 days where a power - law fit is taken .waiting time intervals outside this range are due to either noise effects on shorter scales or the finite length of observations on longer time scales .again , the power law regimes identified by the waiting time distributions are robust for various threshold degree values in the interval of ( 200 , 8000 ) ( for instance , the case of is shown in figure [ ts_degk7000](c , d ) ) . 
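the waiting - time analysis above reduces to two small steps : mark the time points whose degree exceeds the chosen threshold , then take the gaps between consecutive marked points and inspect the tail of their distribution . a minimal sketch , reusing the degree sequence k_min of the toy series from the earlier visibility - graph example ; for the real daily series the thresholds quoted in the text ( degrees between 200 and 8000 ) would be used instead of the percentile cutoff chosen here .

```python
import numpy as np

def waiting_times(degrees, threshold):
    """gaps (in samples) between consecutive events with degree above threshold."""
    events = np.flatnonzero(np.asarray(degrees) > threshold)
    return np.diff(events)

def cumulative_distribution(w):
    """P(W >= w) at each observed waiting time."""
    w = np.sort(np.asarray(w))
    values = np.unique(w)
    return values, np.array([(w >= v).mean() for v in values])

wt = waiting_times(k_min, threshold=np.percentile(k_min, 90))
if len(wt) > 1:
    vals, pcum = cumulative_distribution(wt)
    # a straight line of log(pcum) against log(vals) over an intermediate range
    # would correspond to the power-law regime discussed in the text
```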
of vgs from monthly ( a , c ) and daily data ( b , d ) .( a , b ) is for , and ( c , d ) .upper insets of all plots are in linear scale , while lower inset of ( a , b ) shows cumulative distribution in double logarithmic scale , where a straight line is expected if would follow a power law . in ( c ,d ) , could be fit by dashed lines ( slopes : c , ; d , respectively ) , _ but _ all -values are rejecting the hypothetical power laws . ] , while circles ( strong minima ) are that of the vg from .( b , c ) cumulative probability distribution of the time intervals between subsequent strong maxima ( b ) , and minima ( c ) .a cumulative exponential distribution is plotted as a dashed line in ( b ) , while the dashed line in ( c ) is a linear fit with slope being equal to .the corresponding -value of ( c ) is , indicating that the power law is a plausible hypothesis for the waiting times of strong minima . ] ) of vgs reconstructed from negatively inversed series .( a ) isn , and ( b ) ssa .cumulative probability distribution of the time intervals between subsequent strong minima ,( c ) isn , and ( d ) ssa .the dashed lines in ( c , d ) are linear fits which are obtained in the same way as shown in figs .[ ts_deg_maxmin_cp]c , [ ts_deg_maxmin_cpnasa]c , respectively . ]in contrast to the computations described above with observational data , we now demonstrate the inadequacy of two models of solar cycles . by applying the vg methods to model simulationswe demonstrate that the observed data has , according to the complex network perspective , features absent in the models .we first choose a rather simple yet stochastic model which describes the temporal complexity of the problem , the barnes model , consisting of an autoregressive moving average arma(2 , 2 ) model with a nonlinear transformation where , , , , and are identically independent distributed gaussian random variables with zero mean , and standard deviation sd = 0.4 .the second model is a stochastic relaxation van der pol oscillator which is obtained from a spatial truncation of the dynamo equations .the equations read ,\end{aligned}\ ] ] where , , , is gaussian noise with zero mean and sd = 1 , and is adjustable but often chosen to be . the variable is associated with the mean toroidal magnetic field , and therefore the sunspot number is considered to be proportional to , which prompts us to construct vgs from ( respectively ) .both models reproduce the rapid growth in the increasing phase and slow decay in the decreasing phase of the activity cycles adequately . for the truncated model of the dynamo equations , a statistical significant correlation between instantaneous amplitude and frequency has been established , while the barnes model shows virtually no correlation , which are generally termed as the waldmeier effect . from both models ,we generate independent realizations , each of them has a one month temporal resolution and the same time span as we have for the observations ( namely , over years ) .we then construct vgs from both and from each realization in the same way as we processed for the original observation . as shown in figure [ barnes_mininni_model ] , neither can mimic the heavy tails of the distributions we observed in figure [ sn_sa_data](a , b ) , nor can capture the bimodality of the large degrees for strong minima as we have observed in the case of observational raw records in figure [ sn_sa_data](c , d ) . 
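the surrogate test described above can be sketched as follows . the arma(2 , 2) coefficients and the final nonlinearity used below are placeholders chosen only to produce a stable , non - negative quasi - cyclic series ( the actual values belong to the original barnes reference and are not reproduced here ) ; only the noise level sd = 0.4 is taken from the text . the point of the sketch is the comparison logic : simulate many realizations , build the visibility graphs of each realization and of its negative inverse , and pool the degree sequences before comparing their distributions with the observational ones . it reuses the hypothetical degree_sequence helper from the earlier sketch .

```python
import numpy as np

rng = np.random.default_rng(5)

def arma22(n, phi=(1.90, -0.98), theta=(0.5, 0.2), sd=0.4):
    """stable ARMA(2,2) driven by gaussian noise (placeholder coefficients)."""
    eps = rng.normal(0.0, sd, size=n + 2)
    z = np.zeros(n + 2)
    for t in range(2, n + 2):
        z[t] = (phi[0] * z[t - 1] + phi[1] * z[t - 2]
                + eps[t] + theta[0] * eps[t - 1] + theta[1] * eps[t - 2])
    return z[2:]

def surrogate_sunspots(n):
    return arma22(n) ** 2      # placeholder nonlinearity: non-negative, peaked cycles

deg_pos, deg_neg = [], []
for _ in range(5):                     # the paper averages over far more realizations
    s = surrogate_sunspots(150)        # and over much longer series
    deg_pos.append(degree_sequence(s))
    deg_neg.append(degree_sequence(-s))
deg_pos = np.concatenate(deg_pos)
deg_neg = np.concatenate(deg_neg)
# the pooled histograms of deg_pos and deg_neg play the role of the model degree
# distributions that are compared with the observational ones in the text
```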
this does not occur even if the parameter of the second model is adjusted ( figure [ barnes_mininni_model](c , d ) ) .one reason for the absence of the bimodality of in the nonlinear - oscillator model is the fact that the model was designed to reproduce smoothed sunspot - number time series .the often - used 13-month running - average method in the literature is known to suppress the maximum / minimum amplitudes of the series .therefore , the visibility condition for each time point is changed if the 13-month smoothing technique is applied to the original data .we show the degree distribution of vgs reconstructed from smoothed isn series in appendix ( figure [ pdf_smooth](b ) ) , where the bimodality is absent . of vgs constructed from : ( a , b ) and of barnes model , and ( c , d ) and of mininni s model . all and estimated by an average over independent realizations using a kernel smoother . in ( c ,d ) ( filled circles ) , ( open triangles ) . ] of vgs reconstructed from smoothed monthly isn data .( a ) is for , and ( b ) . ]as shown in figure [ barnes_mininni_model ] , strong maxima / minima are _ not _ well separated .consequently we are prevented from identifying unique waiting time sequences .the corresponding analysis then depends significantly on the choice of threshold degrees .dynamo theory provides several hints that might explain the features observed in the long - term evolution of the solar activity .the two theoretical models tested in this work can reproduce qualitative features of the system reasonably well , however , within the context performed in this study , can not yield a complete and conclusive rendition of the statistical properties of .the power - law regimes obtained from waiting time sequences suggest that the interaction patterns for two subsequent minima can be much more complicated than what has been previously described as the instantaneous amplitudes frequencies correlation using rather simple models .in this work , we apply a recently proposed network approach , namely the visibility graph , to disclose the intricate dynamical behavior of both strong maximal and minimal sunspot numbers with observational records .more specifically , we show that : 1 .there is a strong degree of memory af the time scale of 151000 days in the occurrence of low - activity episodes , observed as clusters of inactive days .the identified persistence time scale of the strong minima agrees with the recently proposed hypothetical long range memory on time scales shorter than the 11-year cycle .the occurrence of high activity episodes is nearly random , _i.e. _ strong active regions appear more or less independently of each other .the distinctive long - term correlations of the strong maxima and strong minima are reflected by the structural asymmetries of the degree distributions of the respective vgs .3 . there is no evidence for a long term inter - cycle memory .this is in agreement with the present paradigm based on alternative methods ( see , _ e.g. _ , reviews by ) , and provides an observational constraint for solar - activity models .since the long term intra - cycle memory is relatively easy to establish but inter - cycle memory remains largely unclear , therefore we propose that our results could be used for evaluating models for solar activity at this time scale because they reflect important properties that are not included in other measures reported in the literature . 
from the methodological perspective , we propose an interesting generalization for the construction of vgs from the negatively - inverted time series .this has been seen , via our analysis of sunspot observations , to show complementary aspects of the original series .note that the negatively - inverse transformation is crucial for understanding when asymmetry is preserved in the time series .therefore , it is worth analyzing the dependence of the resulted vgs on an arbitrary monotonic nonlinear transformation .furthermore , as presented in appendix [ sapp : cor ] , a systematic investigation of the general conclusion as to whether large sampled points correspond to hubs of vgs will be a subject of future work especially in the presence of cyclicity and asymmetry .it is worth stressing that we construct vgs directly based on the raw sunspot series without any preprocessing .many researchers prefer to base their studies on some kind of transformed series since most common methods of data analysis in the literature rely on the assumption that the solar activity follows gaussian statistics .it is certain that the conclusions will then show some deviations depending on the parameters chosen for the preprocessing .the pronounced peaked and asymmetrical sunspot - cycle profiles prompt one to develop techniques such that the possible bias due to the unavoidable choice of parameters should be minimized .the complex network perspective offered by vg analysis has the clear advantage of being independent of any priori parameter selection .the procedure for network analysis outlined here can be directly applied to other solar activity indicators , for instance , the total solar irradiance and the solar flare index .we compared our results to two rather empirical models and showed that the distinctive correlation patterns of maximal and minimal sunspots are currently absent from these two models . using our analysis for more refined dynamo - based models ( _ e.g. _ ) would be straightforward .certainly , further work on this line of research will examine any differences given by the particular quantity and strengthen the understanding of the hypothetical long - range memory process of the solar activity from a much broader overview .this work was partially supported by the german bmbf ( projects progress ) , the national natural science foundation of china ( grant no .11135001 ) , and the hong kong polytechnic university postdoctoral fellowship .ms is supported by an australian research council future fellowship ( ft110100896 ) .besides the fact that the maximal sunspot numbers are identified as hubs of vgs by the degree sequence , convincing links between further network - theoretic measures and distinct dynamic properties can provide some additional interesting understanding for the time series . in this study, we provide a graphical visualization on the relationship between degree and node betweenness centrality , which characterizes the node s ability to transport information from one place to another along the shortest path . using the annual isn as a graphical illustration , here only the relationship between some relatively large degrees ( ) and betweenness centrality values ( )is highlighted in figure [ year_sspndegbet ] for the entire series available .for instance in the annual series , time points , , , , , , and are all identified simultaneously as large degrees and high betweenness , indicating strong positive correlations . ) , and ( b ) high betweenness centrality ( ) . 
]certainly many other measures can be directly applied to the sunspot numbers , however , providing the appropriate ( quantitative ) interpretations of the results in terms of the particular underlying geophysical mechanisms remains a challenging task and is largely open for future work .note that there is in general a strong interdependence between these different network structural quantities .a general rule of understanding the scale - free property of the degree distribution of complex networks is the effects of a very few hubs having a large amount of connections . in the particular case of vgs , hubs are related to maxima of the time series since they have better visibility contact with more vertices than other sampled points .however , this result can not be generalized to all situations , for instance , it is easy to generate a time series such that its maxima are not always mapped to hubs in the vgs .one simple way to better explore this correlation is to use scatter plots between the degree sequence and the sunspot time series . as we show in figure [ deg_ts_rhob ] , the spearman correlation coefficients as very small ( still significantly larger than zero ) in the case of vgs reconstructed from the original time series .this provides an important cautionary note on the interpretation of hubs of vgs by local maximal values of the sunspot numbers . on the contraryif the network is reconstructed from , hubs of vgs could be better interpreted by local minimal values of the sunspot data since the correlations become larger .these results hold for both the isn and ssa series ( figure [ deg_ts_rhog ] ) .one reason for the lack of strong correlation between the degree and is because of the ( quasi-)cyclicity of the particular time series , which has a similar effect as the conway series .it is this concave behavior over the time axis ( although quasi - periodic from cycle to cycle ) that prevents the local maxima from having highly connected vertices .it remains unclear how local maxima of a time series are mapped to hubs of vgs we defer this topic for future work especially in the presence of cyclicity .this situation becomes even more challenging if some sort of asymmetric property is preserved in the data , as we have found for the sunspot series . for vg constructed from isn series based on ( a ) monthly and ( b ) daily data respectively .spearman is indicated .( c , d ) are based on the vgs constructed from negatively inverted series . ] for vg constructed from ssa series based on ( a ) monthly and ( b ) daily data respectively .spearman is indicated .( c , d ) are based on the vgs constructed from negatively inverted series . ] : 2011 , the international sunspot number & sunspot area data ._ monthly report on the international sunspot number , http://www.sidc.be/sunspot-data/ , royal observatory greenwich , http://solarscience.msfc.nasa.gov / greenwch.shtml/_.
complex network approaches have recently been developed as an alternative framework to study the statistical features of time - series data . we perform a visibility - graph analysis on both the daily and monthly sunspot series . based on the data , we propose two ways to construct the network : one is from the original observable measurements and the other is from a negative - inverse - transformed series . the degree distribution of the derived networks for the strong maxima has clear non - gaussian properties , while the degree distribution for minima is bimodal . the long - term variation of the cycles is reflected by hubs in the network which span relatively large time intervals . based on standard network structural measures , we propose to characterize the long - term correlations by waiting times between two subsequent events . the persistence range of the solar cycles has been identified over 15 - 1000 days by a power - law regime , with a scaling exponent characterizing the occurrence times of two subsequent strong minima . in contrast , a persistent trend is not present in the maximal numbers , although maxima do have significant deviations from an exponential form . our results suggest some new insights for evaluating existing models . the power - law regime suggested by the waiting times does indicate that there is some level of predictability in the minima .
to uncover the neural circuit mechanisms underlying animal behavior , e.g. , working memory or decision making , is a fundamental issue in systems neuroscience .recent developments in multi - neuron recording methods make simultaneous recording of neuronal population activity possible , which gives rise to the challenging computational tasks of finding basic circuit variables responsible for the observed collective behavior of neural populations .the collective behavior arises from interactions among neurons , and forms the high dimensional neural code . to search for a low dimensional and yet neurobiologically plausible representation of the neural code ,thus becomes a key step to understand how the collective states generate behavior and cognition .correlations among neurons spiking activities play a prominent role in deciphering the neural code .various models were proposed to understand the pairwise correlations in the population activity .modeling these correlations sheds light on the functional organization of the nervous system .however , as the population size grows , higher order correlations have to be taken into account for modeling synchronous spiking events , which are believed to be crucial for neural information transmission .in addition , the conclusion drawn from small size populations may not be correct for large size populations .theoretical studies have already proved that high order interactions among neurons are necessary for generating widespread population activity .however , introduction of high order multi - neuron couplings always suffers from a combinatorial explosion of model parameters to be estimated from the finite neural spike train data . to account for high order correlations ,various models with different levels of approximation were proposed , for example , the reliable interaction model with the main caveat that the rare patterns are discarded during inference of the coupling terms , the dichotomized gaussian model in which correlations among neurons are caused by common gaussian inputs to threshold neurons , the k - pairwise model in which an effective potential related to the synchronous firing of neurons was introduced , yet hard to be interpreted in terms of functional connectivity , and the restricted boltzmann machine where hidden units were shown to be capable of capturing high order dependences but their number should be predefined and difficult to infer from the data .one can also take into account part of the statistical features of the population activity ( e.g. , simultaneous silent neural pattern ) and assume homogeneity for high order interactions among neurons due to the population size limitation . in this paper ,i provide a low dimensional neurobiological model for describing the high order correlations and extracting useful information about neural functional organization and population coding . in this study ,i interpret correlations in terms of population coupling , a concept recently proposed to understand the multi - neuron firing patterns of the visual cortex of mouse and monkey .the population coupling characterizes the relationship of the activity of a single neuron with the population activity ; this is because , the firing of one neuron is usually correlated with the firing pattern of other neurons .i further generalize the original population coupling to its higher order form , i.e. 
, the relationship of pairwise firing with the population activity .i then derive the practical dimensionality reduction method for both types of population couplings , and test the method on different types of neural data , including ganglion cells in the salamander retina onto which a repeated natural movie was projected , and layer 2/3 as well as layer 5 cortical cells in the medial prefrontal cortex ( mpc ) of behaving rats . in this paper ,i develop a theoretical model of population coupling and its advanced form , to explain higher order correlations in the neural data .methodologically , i propose the fast mean field method not only to learn the population couplings but also to evaluate the high order correlations .note that this is computationally hard in a traditional maximum entropy model by using sampling - based method .conceptually , i generalize the normal population coupling by introducing the second - order population coupling , which reveals interesting features from the data . first , it can explain a significant amount of three - cell correlations , and it works much better in cortical data than in retinal data .second , the second - order population coupling matrix has distinct features in retinal and cortical data .the cortical one shows clear stripe - like structure while the retinal one has no such apparent structure .altogether , this work marks a major step to understand the low - order representation of complex neural activity in both concepts and methods .for a neuronal population of size , the neural spike trains of duration are binned with temporal resolution , yielding samples of -dimensional binary neural firing patterns .i use to denote firing state of neuron , and for silent state .neural responses to repeated stimulus ( or the same behavioral tasks ) vary substantially ( so - called trial - to - trial variability ) . to model the firing pattern statistics , i assign each firing pattern a cost function ( energy in statistical physics jargon ) , then the probability of observing one pattern can be written as , where this is the first low dimensional representation to be studied .high energy state corresponds to low probability of observation . is the firing bias constraining the firing rate of neuron , while characterizes how strongly neuron s spiking activity correlates with the population activity measured by the sum of other neurons activity .i name the first order population coupling ( ) .thus , only parameters needs to be estimated from the neural data .this number of model parameters is much less than that in conventional maximum entropy model . to model the high order correlation ( e.g. , three neuron firing correlation ) , i further generalize to its advanced form , i.e., the second order population coupling , namely , describing the relationship of pairwise firing with the population activity , and the corresponding energy is given by where characterizes how strongly the firing state of the neuron pair correlates with the firing activities of other neurons .this term is expected to increase the prediction ability for modeling high order correlations in the neural data . under the framework of ,the total number of parameters to be estimated from the data is . and have a clear neurobiological interpretation ( for , see a recent study , and the results obtained under the can also be experimentally tested ) , and moreover they can be interpreted in terms of functional interactions among neurons ( as shown later ) . 
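written out explicitly , the two cost functions above couple either single - neuron activity or pairwise activity to the summed activity of the rest of the population . the sketch below assumes the +/-1 spin convention that is common in such maximum - entropy analyses and one particular reading of the prefactors ; both are assumptions of this illustration rather than a statement of the original definitions .

```python
import numpy as np

def energy_pc1(sigma, h, J):
    """E = -sum_i h_i sigma_i - sum_i J_i sigma_i * sum_{j != i} sigma_j
    (first-order population coupling; sigma is a +/-1 vector here)."""
    total = sigma.sum()
    return -np.dot(h, sigma) - np.dot(J, sigma * (total - sigma))

def energy_pc2(sigma, h, J2):
    """second-order population coupling: each pair (i, j) is coupled to the
    summed activity of the remaining neurons; J2 is symmetric, zero diagonal."""
    total = sigma.sum()
    e = -np.dot(h, sigma)
    n = len(sigma)
    for i in range(n):
        for j in range(i + 1, n):
            e -= J2[i, j] * sigma[i] * sigma[j] * (total - sigma[i] - sigma[j])
    return e

rng = np.random.default_rng(6)
N = 10
sigma = rng.choice([-1, 1], size=N)
h, J = rng.normal(0, 0.1, N), rng.normal(0, 0.05, N)
J2 = np.triu(rng.normal(0, 0.02, (N, N)), 1)
J2 = J2 + J2.T
print(energy_pc1(sigma, h, J), energy_pc2(sigma, h, J2))
```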
to find the model parameters as a low dimensional representation , i apply the maximum likelihood learning principle corresponding to maximizing the log - likelihood with respect to the parameters . the learning equation for given by [ le01 ] where and denote the learning step and learning rate , respectively .the maximum likelihood learning shown here has a simple interpretation of minimizing the kullback - leibler divergence between the observation probability and the model probability . in an analogous way, one gets the learning equation for , in the learning equations eq .( [ le01 ] ) and eq .( [ le02 ] ) , the data dependent terms can be easily computed from the binned neural data .however , the model expectations of the firing rate ( magnetization in statistical physics ) and correlations are quite hard to evaluate without any approximations . herei use the mean field method to tackle this difficulty .first , i write the energy term into a unified form , where denotes the interaction index and denotes the neuron set involved in the interaction . for and for .therefore , introduces the pairwise interaction as , while introduces the triplet interaction as . the multi - neuron interaction in the conventional ising modelis decomposed into first order or second order population coupling terms .this decomposition still maintains the functional heterogeneity of single neurons or neuron pairs , but reduces drastically the dimensionality of the neural representation for explaining high order correlations . in principle , one can combine and to predict both pairwise and triplet correlations .however , in this work , i focus on the pure effect of each type of population coupling . in fact , the conventional ising model can be recovered by setting , which is pairwise interaction .the learning equation is derived similarly , and is run by reducing the deviation between the model pairwise correlation and the clamped one ( computed from the data ) .second , the statistical properties of the model ( eq . [ energyising ] ) can be analyzed by the cavity method in the mean field theory .the self - consistent equations are written in the form of message passing ( detailed derivations were given in refs , see also appendix [ app : deriv ] ) as [ bp ] where denotes the member of interaction except , and denotes the interaction set is involved in with removed . is interpreted as the message passing from the neuron to the interaction it participates in , while is interpreted as the message passing from the interaction to its member .this iteration equation is also called the belief propagation ( bp ) , which serves as the message passing algorithm for the statistical inference of the model parameters .iteration of the message passing equation on the inferred model would converge to a fixed point corresponding to a global minimum of the free energy ( in the cavity method approximation ) where is the normalization constant of the model probability . the free energy contribution of one neuron is and the free energy contribution of one interaction is .i define the function . at the same time, the model firing rate and multi - neuron correlation can be estimated as [ magcorre ] magnetization and correlation are defined as and , respectively . a brief derivation of eq .( [ magcorre ] ) is given in appendix [ app : deriv ] . 
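the learning loop described above is ordinary gradient ascent on the log - likelihood : each parameter is moved in proportion to the gap between the corresponding data moment and model moment . in the sketch below the model moments are computed by exhaustive enumeration over all 2^N states , which is feasible only for very small networks and stands in for the much faster mean - field / message - passing estimates used in the text ; the energy form and spin convention are the same assumptions as in the previous sketch .

```python
import numpy as np
from itertools import product

def exact_model_stats(h, J):
    """exact <sigma_i> and <sigma_i * sum_{j!=i} sigma_j> for the first-order
    population-coupling model, by brute-force enumeration (small N only)."""
    N = len(h)
    m, c, Z = np.zeros(N), np.zeros(N), 0.0
    for conf in product([-1, 1], repeat=N):
        s = np.array(conf)
        tot = s.sum()
        w = np.exp(np.dot(h, s) + np.dot(J, s * (tot - s)))   # boltzmann weight exp(-E)
        Z += w
        m += w * s
        c += w * s * (tot - s)
    return m / Z, c / Z

def fit_pc1(data, eta=0.05, steps=300):
    """gradient ascent: push model moments toward the data moments."""
    data = np.asarray(data, dtype=float)            # shape (samples, N), entries +/-1
    m_data = data.mean(axis=0)
    c_data = (data * (data.sum(axis=1, keepdims=True) - data)).mean(axis=0)
    h, J = np.zeros(data.shape[1]), np.zeros(data.shape[1])
    for _ in range(steps):
        m_mod, c_mod = exact_model_stats(h, J)
        h += eta * (m_data - m_mod)
        J += eta * (c_data - c_mod)
    return h, J

rng = np.random.default_rng(8)
toy_data = rng.choice([-1, 1], size=(500, 8))       # stand-in for binned spike patterns
h_fit, J_fit = fit_pc1(toy_data)
```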
herethe multi - neuron correlation is calculated directly from the cavity method approximation and expected to be accurate enough for current neural data analysis .this is because , correlations under the model are evaluated taking into account nearest - neighbor interactions , rather than naive full independence among neurons .this approximation is expected to work well in a weakly - correlated neural population , where long - range strong correlations do not develop .a similar application of this principle revealed a non - trivial geometrical structure of population codes in salamander retina .another advantage is the low computation cost .both the free energy and the pairwise correlations can be estimated by the time complexity of the order for , and for triplet correlations in , which is one order of magnitude lower than the tractable model of recently proposed in ref .a more accurate expression could be derived from linear response theory with much more expensive computational cost ( increased by an order of magnitude ( ) . to estimate the information carried by a neural population, one needs to compute the entropy , which is defined as , and it measures the capacity of the neural population for information transmission .the more obvious variability the neural responses have , the larger the entropy value is .the entropy of the model can be estimated from the fixed point of the message passing equation .based on the standard thermodynamic relation , , where is the energy of the neural population and given by [ energ ] \\ & \times\prod_{a\in\partial i\backslash b}\cosh\gamma_a(1+x\hat{m}_{a\rightarrow i } ) .\end{split}\end{aligned}\ ] ] the basic procedure to infer population couplings is given as follows .at the beginning , all model parameters are assigned zero value .it is followed by three steps : ( ) messages are initialized randomly and uniformly in the interval .( ) eq . ( [ bp ] ) are then run until converged , and the magnetizations as well as multi - neuron correlations are estimated using eq .( [ magcorre ] ) .( ) the estimated magnetizations and correlations are used at each gradient ascent learning step ( eq .( [ le01 ] ) or eq .( [ le02 ] ) ) .when one gradient learning step is finished , another step starts by repeating the above procedure ( from ( ) to ( ) ) . to learn the higher order population coupling ,the damping technique is used to avoid oscillation behavior , i.e. , where is the damping factor taking a small value .the inferred model can also be used to generate the distribution of spike synchrony , i.e. , the probability of simultaneous spikes .this distribution can be estimated by using monte carlo ( mc ) simulation on the model .the standard procedure goes as follows .the simulation starts from a random initial configuration , and tries to search for the low energy state , then the energy is lowered by a series of elementary updates , and for each elementary update , proposed neuronal state flips are carried out .that is , the transition probability from state to with only flipped ( ) is expressed as where .the equilibrium samples are collected after sufficient thermal equilibration .these samples ( a total of samples in simulations ) are finally used to estimate the distribution of spike synchrony . 
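the sampling procedure just described is standard single - spin - flip metropolis : propose flipping one neuron and accept with probability min(1 , exp(-dE)) . the sketch below recomputes the full energy at every proposal for clarity ( an incremental update would be used in practice ) , keeps the +/-1 convention of the earlier sketches , and uses small illustrative couplings ; the burn - in and sample spacing are likewise arbitrary .

```python
import numpy as np

def metropolis_pc1(h, J, n_samples=2_000, spacing_sweeps=2, burn_sweeps=50, rng=None):
    """single-spin-flip metropolis sampling of the first-order
    population-coupling model (sigma in {-1, +1})."""
    rng = rng if rng is not None else np.random.default_rng()
    N = len(h)

    def energy(s):
        tot = s.sum()
        return -np.dot(h, s) - np.dot(J, s * (tot - s))

    s = rng.choice([-1, 1], size=N)
    E = energy(s)
    samples, spacing, burn = [], spacing_sweeps * N, burn_sweeps * N
    for k in range(burn + n_samples * spacing):
        i = rng.integers(N)
        s[i] = -s[i]
        E_new = energy(s)
        if rng.random() < np.exp(min(0.0, E - E_new)):   # accept with prob min(1, e^{-dE})
            E = E_new
        else:
            s[i] = -s[i]                                  # reject: undo the flip
        if k >= burn and (k - burn) % spacing == 0:
            samples.append(s.copy())
    return np.array(samples)

def synchrony_distribution(samples):
    """P(K): fraction of samples with exactly K neurons in the firing (+1) state."""
    K = (samples == 1).sum(axis=1)
    return np.bincount(K, minlength=samples.shape[1] + 1) / len(K)

rng = np.random.default_rng(9)
N = 20
h, J = rng.normal(0, 0.1, N), rng.normal(0, 0.02, N)
print(synchrony_distribution(metropolis_pc1(h, J, rng=rng)))
```

the estimated P(K) from such samples is what is compared with the measured spike - synchrony distribution in the figures discussed below .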
on the retinal data ( , one typical example ) .( a ) firing rate in the data is reproduced by the model .( b ) two - cell correlations are explained partially by ( ) .a monte - carlo ( mc ) sampling of the model yields similar results to belief propagation ( bp ) , which is much faster .( c ) from the mc samples , three - cell correlations can also be estimated .( d ) probability of synchronous spiking under the model is compared with that of the data . ]by using the mean field method , i first test both types of population couplings on the retina data , which is the spike train of ganglion cells in a small patch of the salamander retina .the retina was stimulated with a repeated natural movie .the spike train data is binned with the bin size equal to reflecting the temporal correlation time scale , yielding about binary firing patterns for data modeling .i then test the same concepts on the cortical data of behaving rats .rats performed the odor - place matching working memory during one task session , and spiking activities of cells in both superficial layer and deep layer ( layer ) of medial prefrontal cortex were simultaneously recorded ( for detailed experiments , see ref .one task session consists of about trials , yielding a spike train of these cortical cells binned with the temporal resolution ( a total of firing patterns ) .[ retinapc1 ] reports the inference result on a network example of neurons selected randomly from the original dataset .the firing rate is predicted faithfully by the model using either mc or bp ( fig .[ retinapc1 ] ( a ) ) . inferring only , one could predict about of entire pairwise correlation ( a precision criterion is set to in this paper ) ( fig .[ retinapc1 ] ( b ) ) .this means that of the whole correlation set have the absolute value of the deviation between the predicted correlation and measured one ( ) smaller than the precision criterion . using the sampled configurations of neural firing activity from the mc simulation, one could also predict three - cell correlations ( fig .[ retinapc1 ] ( c ) ) , whereas , the prediction fraction can be improved by a significant amount after introducing , as i shall show later . in addition , fitting only model parameters in analysis could not predict the tail of spike synchrony distribution ( fig .[ retinapc1 ] ( d ) ) ; this is expected as no higher order interaction terms are included in the model , and rare events of large spikes are also difficult to observe in a finite sampling during mc simulations . on the retinal data ( , one typical example ) .( a ) firing rate in the data is reproduced by the model .( b ) three - cell correlations are explained partially by ( ) .( c ) interaction matrix for .( d ) probability of synchronous spiking under the model is compared with that of the data . ]the inference results of are given in fig .[ retinapc2 ] .note that , by considering the correlation between the pairwise firing activity and the global population activity , i.e. , the second order population coupling , the three - cell correlation could be predicted partially ( ) , and this fraction is much larger than that of ( fig .[ retinapc1 ] ( c ) ) .this is due to the specific structure of , which incorporates explicitly three - cell correlations into the construction of couplings ( eq . 
( [ le02 ] ) ) .technically , the mean - field theory for avoids the slow sampling and evaluates the high order correlations in a fast way .alternatively , one could fit the data using the conventional ising model with the same number of model parameters as , whereas , the three - cell correlations are hard to predict using mc samplings , and a similar phenomenon was also observed in a previous work for modeling pairwise correlations . therefore i speculate that acts as a key circuit variable for third order correlations .the interaction matrix of reveals how important each pair of neurons is for the entire population activity ( emergent functional state of the whole network ) . as shown in fig .[ retinapc2 ] ( c ) , matrix has no apparent structure of organization , i.e. , each neuron can be paired with both positive and negative couplings .some pairs have large negative , suggesting that these components are anti - correlated with the population activity .that is to say , the activity of these neuron - pairs is not synchronized to the population activity characterized by the summed activity over all neurons except these pairs . in the network, there also exist positive , which shows that these neuron - pairs are positively correlated with the population in neural activity .the interaction matrix shown here may be related to the revealed overlapping modular structure of retinal neuron interactions . in this structure , neurons interact locally with their adjacent neurons , and in particular this feature is scalable and applicable for larger networks .it seems that one individual neuron does not impact directly the entire population , and a small group of neighboring neurons have similar visual feature selectivity .this result is also consistent with two - neuron interaction map of the conventional ising model ( fig .[ retinaising ] ( a ) ) .note that in functional interpretation , these two - neuron interactions are inherently different from , which is designed to explain high - order correlations by using less model parameters than necessary . , one typical example ) compared with population coupling .( a ) two - neuron interaction matrix .( b ) probability of synchronous spiking under the model is compared with that of the data . ] behaves better than in predicting the spike synchrony distribution ( fig .[ retinapc2 ] ( d ) ) in the small regime ( the prediction is improved from for to for ) .an intuitive explanation is that introduces equivalently triplet interactions among neurons , and it is known that high order interactions are necessary for generating widespread population activity .however , overestimates the distribution when rare events of synchronous spiking are considered .this may be related to the difficulty of obtaining sufficient equilibrium samples of the model , especially those samples with large population activity .the spike synchrony distribution is also compared with that obtained under ising model ( fig .[ retinaising ] ( b ) ) .different performances are related to the multi - information measure of neural population explained below .network samples for each .( a ) multi - information ( in bits ) versus the network size .( b ) the prediction fraction versus the network size . is used to predict two - cell correlations , and is used to predict three - cell correlations ., title="fig : " ] .1 cm network samples for each .( a ) multi - information ( in bits ) versus the network size .( b ) the prediction fraction versus the network size . 
is used to predict two - cell correlations , and is used to predict three - cell correlations ., title="fig : " ] .1 cm the amount of statistical structure in the neural data due to introducing interactions among neurons can be measured by the multi - information .i first introduce an independent model where only the firing rates of individual neurons are fitted and the corresponding entropy is defined as .the multi - information is then defined as , in which , where , and is assumed to be an upper bound to the true entropy .the true entropy for large populations is difficult to estimate since it requires including all possible interactions among neurons .however , the model entropy with low order interaction parameters could be an approximate information capacity for the neural population , which depends on how significant the higher order correlations are in the population .[ retinaif ] ( a ) shows the multi - information as a function of the network size . and are compared with the ising model , which reconstructs faithfully the pairwise correlations . improves significantly over in capturing the information content of the network , but its multi - information is still below that of the ising model , which is much more evident for larger network size . thisis expected , because only part of third order correlations are captured by , while the ising model describes accurately the entire pairwise correlation profile which may be the main contributor to the collective behavior observed in the population .however , provides us an easy way to understand the higher order correlation , while in the ising model , it is computationally difficult to estimate the higher order correlations . the average prediction fraction of correlations by and is plotted in fig .[ retinaif ] ( b ) . predicts more than of the pairwise correlations , while predicts more than of the triplet correlations .the prediction fraction changes slightly with the network size . on the cortical data ( , one typical example ) .( a ) firing rate in the data is reproduced by the model .( b ) two - cell correlations are explained partially by ( ) .a monte - carlo ( mc ) sampling of the model yields similar results to belief propagation ( bp ) .( c ) from the mc samples , three - cell correlations can also be estimated .( d ) probability of synchronous spiking under the model is compared with that of the data . ] on the cortical data ( , one typical example ) .( a ) firing rate in the data is reproduced by the model .( b ) three - cell correlations are explained partially by ( ) .( c ) interaction matrix for .( d ) probability of synchronous spiking under the model is compared with that of the data . ] to show the inference performance of both types of population couplings on the cortical data , i randomly select a typical network example of neurons from the original dataset , and then apply the computation scheme to this typical example .results are shown in fig .[ mpcpc1 ] .surprisingly , the simplified is able to capture as high as of pairwise correlations , implying that when a rat performed working memory tasks , there exists a simplified model to describe emergent functional states in the medial prefrontal cortical circuit .moreover , mc sampling of the model also predicts well the spike synchrony distribution ( fig .[ mpcpc1 ] ( d ) ) .this is very different from that observed in the retinal data . 
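The two summary statistics used in this section, the multi-information and the prediction fraction, can be computed with the short helpers below. These are schematic: firing rates are assumed to be per-bin spike probabilities, the model entropy is assumed to be supplied separately (for example, from the Bethe free energy of the fitted model), and the precision criterion `eps` is left as a parameter because its numerical value is not recoverable from the extracted text.

```python
import numpy as np

def independent_entropy_bits(rates):
    """Entropy of the independent model fitted only to single-neuron rates."""
    p = np.clip(np.asarray(rates, dtype=float), 1e-12, 1.0 - 1e-12)
    return float(np.sum(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)))

def multi_information_bits(rates, model_entropy_bits):
    """I_N = S_independent - S_model: structure captured by the couplings."""
    return independent_entropy_bits(rates) - model_entropy_bits

def prediction_fraction(c_model, c_data, eps):
    """Fraction of correlations reproduced to within the precision criterion."""
    c_model = np.asarray(c_model, dtype=float)
    c_data = np.asarray(c_data, dtype=float)
    return float(np.mean(np.abs(c_model - c_data) < eps))
```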
in this sense ,the mpc circuit is simple in its functional states when the subject is performing specified tasks .more interesting circuit features are revealed by , which is shown in fig .[ mpcpc2 ] . about of three - cell correlationsare explained by in the mpc circuit .the interaction matrix of in fig .[ mpcpc2 ] ( c ) shows a clear non - local structure in the cortical circuit ( stripe - like structure ) .that is , some neurons interact strongly with nearly all the other neurons in the selected population , and these interactions have nearly identical strength of .such neurons having stripe - like structure in the matrix may receive a large number of excitatory inputs from pyramidal neurons , and thus play a key role in shaping the collective spiking behavior during the working memory task .the non - local effects are consistent with findings reported in the original experimental paper ( cross - correlogram analysis ) and the two - neuron interaction map under ising model ( fig .[ cortexising ] ( a ) ) .thus , to some extent , may reflect intrinsic connectivity in the cortical circuit , although the relationship between functional connections and anatomical connections has not yet been well established .lastly , overestimates the tail of the spike synchrony distribution ( fig .[ mpcpc2 ] ( d ) ) , which may be caused by the sampling difficulty of the inferred model ( a model with triplet interactions among its elements ) .the spike synchrony distribution of ising model is also compared ( fig .[ cortexising ] ( b ) ) . , one typical example ) compared with population coupling .( a ) two - neuron interaction matrix .( b ) probability of synchronous spiking under the model is compared with that of the data . ]network samples for each . (a ) multi - information ( in bits ) versus the network size .( b ) the prediction fraction versus the network size . is used to predict two - cell correlations , and is used to predict three - cell correlations ., title="fig : " ] .1 cm network samples for each .( a ) multi - information ( in bits ) versus the network size .( b ) the prediction fraction versus the network size . is used to predict two - cell correlations , and is used to predict three - cell correlations ., title="fig : " ] .1 cm multi - information versus the cortical network size is plotted in fig .[ mpcif ] ( a ) . in the cortical circuit, behaves comparably with the ising model ; even for some network size ( ) , it reports a higher information content than the ising model in the randomly selected subpopulations , which may be caused by the nature of the selected neurons ( e.g. 
, inhibitory interneurons , and they have stripe - like structure in the matrix ) .note that gives an information close to zero for small network sizes , suggesting that by introducing , one could not increase significantly the amount of statistical structure in the network activity explained by the model .however , the multi - information of grows with the network size , indicating that the role of would be significant for larger neural populations .[ mpcif ] ( b ) reports the prediction fraction of the correlation profile by applying and .both population couplings can capture over of correlations , which is significantly different from that observed in the retinal data .the emergent properties of the neural code arise from interactions among individual neurons .a complete characterization of the population activity is difficult , because on the one hand , the number of potential interactions suffers from a combinatorial explosion , on the other hand , the collective behavior at the network level would become much more complex as the network size grows . in this paper ,i develop a theoretical framework to understand how pairwise or higher order correlations arise and the basic circuit variables corresponding to these correlation structures .the model is based on the concept of population coupling , characterizing the relationship between local firing activity of individual neuron or neuron - pair and the global neural activity .an advantage is that , it provides a low dimensional and neurobiologically interpretable representation to understand the functional interaction between neurons and their correlation structures . in particular , the concept of population coupling and the associated mean field method used in this paper offer an easy way to evaluate higher order correlations , while the usual sampling method is computationally hard and traditional models ( e.g. , ising model ) lack a direct interpretation of higher order correlations in terms of simplified ( population ) couplings . with the mean field method ,the concept of population coupling is tested on two different types of neural data .one is the firing neural activities of retinal ganglion cells under natural movie stimuli .the other is the population activities of medial prefrontal cortex when a rat was performing odor - place matching working memory tasks . for the retinal data , on average accounts for more than of pairwise correlations , and accounts for over of three - cell correlations .the interaction matrix of contains information about the functional interaction features in the retinal circuitry .it seems that a retinal neuron can be paired with not only negatively strong couplings , but also slightly positive couplings .only a few pairs of neurons have strong correlations with the global activity of the population .to describe the spike synchrony distribution , performs better than , nevertheless , both of them could not capture the trend of the tail ( rare events related to higher order interactions existing in the network ) .this is not surprising , because and are simplified descriptions of the original high dimensional neural activity , taking the trade - off between the computation complexity and the description goodness . 
to extract the statistical structure embedded in the neural population , improves significantly over , and has further additional benefit of describing the third - order correlations observed in the data , as could be used to construct triplet interactions among neurons , although direct constructing all possible triplet interactions is extremely computationally difficult . unlike the retinal circuit , the cortical circuit yields a much smaller absolute value of the multi - information , implying that no significant higher order correlations ( interactions ) were present in the neural circuit when the circuit was carrying out task - related information processing rather than encoding well - structured stimuli ( as in the retinal network )this also explains why a simplified description such as and is accurate enough to capture the main features of the population activity , including the spike synchrony distribution .the inferred model on the cortical data reveals a different interaction map from that of the retinal circuit . in the cortical circuit, neurons form the stripe - like structure in the interaction matrix , suggesting that these neurons may receive a large number of excitatory inputs .these inputs may come from different layers of cortex , and they can execute top - down or bottom - up information processing , thus modulate the global brain state in the target cortex during behavioral tasks . before summary of this work, i made some discussions about two relevant recent studies on population coupling ( see _ notes added _ ) . modeled directly the joint probability distribution of individual neural response and population rate ( the number of neurons simultaneously active ) by linear coupling and complete coupling models .the linear coupling reproduces separately the distribution of individual neuronal state and the population rate distribution , and their couplings , while introduced in my work reproduces mean firing rate and the correlation between individual neuronal state and the background population activity ( except the neuron itself ) . note that does not model population rate distribution explicitly , which is hard to interpret in terms of functional connectivity .the complete - coupling model reproduces the joint probability distributions between the response of each neuron and the population rate , from which it is hard to conclude that the high - order interactions responsible for high - order correlations can be interpreted and tested .however , reproduces mean firing rate and the correlation between neuron - pair activities and the background population activity ( except the neuron - pair itself ) , and thus explains high - order correlations by an energy model .furthermore , in this sense , this work overcame a weakness pointed out in another independent later work of population coupling , which fitted directly the population rate distribution and the firing probability for each neuron conditioned on the population rate , and analogously the corresponding model parameters can not be readily interpretable in a biological setting . due to intrinsic difference in model definitions ,these two relevant works have nice properties of studying tuning curves of individual neurons to the population rate , and sampling from the model to reproduce the population synchrony distribution . 
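Since both orders of population coupling are defined through correlations between local activity and the remaining (background) population activity, the corresponding clamped moments can be computed directly from the binned data. The sketch below is an illustration with spins encoded as ±1, up to the normalization convention adopted in the text; it is not the original analysis code.

```python
import numpy as np

def clamped_moments(spikes):
    """Empirical moments entering the first/second-order population coupling.

    spikes : (T, N) array of +/-1 neural states (T time bins, N neurons).
    Returns
      m1 : per-neuron moment   < s_i * sum_{j != i} s_j >
      m2 : per-pair moment     < s_i s_j * sum_{k != i, j} s_k >
    Any normalization by the population size used in the text can be applied
    on top of these raw moments.
    """
    s = np.asarray(spikes, dtype=float)
    T, N = s.shape
    total = s.sum(axis=1, keepdims=True)              # population activity per bin
    m1 = (s * (total - s)).mean(axis=0)               # exclude the neuron itself
    m2 = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            rest = total[:, 0] - s[:, i] - s[:, j]    # exclude the pair
            m2[i, j] = m2[j, i] = float(np.mean(s[:, i] * s[:, j] * rest))
    return m1, m2
```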
in summary ,i develop a theoretical model of population coupling and its advanced form , to relate the correlation profile in the neural collective activity to the basic circuit variables .the practical dimensional reduction method is tested on different types of neural data , and specific features of neural circuit are revealed .this model aiming at describing high order correlations with a low order representation , is expected to be useful for modeling big neural data .note that the interaction matrices shown in fig .[ retinapc2 ] and fig .[ mpcpc2 ] are qualitatively robust to changes of the data size to only the first half ( data not shown ) , verifying that the revealed features are not an artifact of overfitting .however , it still deserves further studies by introducing regularization in the learning equation .it is also very interesting to incorporate more physiologically plausible parameters to explain how the collective spiking behavior arises from the microscopic interactions among the basic units .another interesting study is to clarify the role of higher order correlations in decoding performances based on maximum likelihood principles .i am grateful to shigeyoshi fujisawa and michael j berry for sharing me the cortical and retinal data , respectively .i also thank hideaki shimazaki and taro toyoizumi for stimulating discussions .this work was supported by the program for brain mapping by integrated neurotechnologies for disease studies ( brain / minds ) from japan agency for medical research and development , amed ._ note added._after i submitted this work to arxiv:1602.08299 , i became aware of ref . ( arxiv:1606.08889 ) , and later ref . ( biorxiv , 2016 ) .discussions about these two recent relevant works are made in the last section of this paper .in this appendix , i give a simple derivation of mean - field equations given in sec . [ model ] . more details can be obtained from refs .first , after removing an interaction , one defines the cavity probability , where the product comes from the physical meaning of the second kind of cavity probability , namely defined as the cavity probability when only the connection from to is retained while other neighbors of are removed ( so - called cavity probability ) . is thus formulated as . with these two probabilities, it follows that the cavity magnetization , where is named cavity bias in physics .it is related to by .[ biassm02 ] +e^{-\gamma_b}[p_{j\rightarrow b}(+1)p_{k\rightarrow b}(-1)+p_{j\rightarrow b}(-1)p_{k\rightarrow b}(+1 ) ] } { e^{-\gamma_b}[p_{j\rightarrow b}(+1)p_{k\rightarrow b}(+1)+p_{j\rightarrow b}(-1)p_{k\rightarrow b}(-1)]+e^{\gamma_b}[p_{j\rightarrow b}(+1)p_{k\rightarrow b}(-1)+p_{j\rightarrow b}(-1)p_{k\rightarrow b}(+1)]}\\ & = \frac{1}{2}\ln\frac{1+e^{-2\gamma_b}\frac{1-m_{j\rightarrow b}m_{k\rightarrow b}}{1+m_{j\rightarrow b}m_{k\rightarrow b}}}{e^{-2\gamma_b}+\frac{1-m_{j\rightarrow b}m_{k\rightarrow b}}{1+m_{j\rightarrow b}m_{k\rightarrow b}}}\\ & = \tanh^{-1}\left(\tanh\gamma_b m_{j\rightarrow b}m_{k\rightarrow b}\right ) .\end{split}\end{aligned}\ ] ] similarly , the single neuron magnetization is obtained via , where is a normalization constant , and the multi - neuron correlation , where is a normalization constant .note that and are also related to the free energy contribution of single neuron and neuronal interaction , respectively .the full ( non - cavity ) magnetization can be derived in a similar manner to eq .( [ magsm ] ) , as . 
In detail, the two-point correlation ( ) is computed as follows, where I have used and the magnetization parameterization of . For the triplet interaction, the unnormalized cavity message is proportional to $e^{\gamma_b s_i}[p_{j\rightarrow b}(+1)p_{k\rightarrow b}(+1)+p_{j\rightarrow b}(-1)p_{k\rightarrow b}(-1)]+e^{-\gamma_b s_i}[p_{j\rightarrow b}(+1)p_{k\rightarrow b}(-1)+p_{j\rightarrow b}(-1)p_{k\rightarrow b}(+1)]$, and the corresponding . Following the same line, the partition function is evaluated for the pairwise interaction as .

M. Okun, N. A. Steinmetz, L. Cossell, M. F. Iacaruso, H. Ko, P. Bartho, T. Moore, S. B. Hofer, T. D. Mrsic-Flogel, M. Carandini, and K. D. Harris. Diverse coupling of neurons to populations in sensory cortex. Nature, 521:511, 2015.

M. Okun, P. Yger, S. L. Marguet, F. Gerard-Mercier, A. Benucci, S. Katzner, L. Busse, M. Carandini, and K. D. Harris. Population rate dynamics and multineuron firing patterns in sensory cortex. Journal of Neuroscience, 32:17108, 2012.
To understand the collective spiking activity in neuronal populations, it is essential to reveal the basic circuit variables responsible for these emergent functional states. Here, I develop a mean-field theory for the population coupling recently proposed in studies of mouse and monkey visual cortex, which relates the activity of an individual neuron to the population activity, and I extend the original form to second order, relating the activity of a neuron pair to the population activity, in order to explain the higher-order correlations observed in neural data. I test the computational framework on salamander retinal data and on cortical spiking data from behaving rats. For the retinal data, the original form of population coupling and its advanced form can explain a significant fraction of two-cell and three-cell correlations, respectively. For the cortical data, the performance becomes much better, and the second-order population coupling reveals non-local effects in local cortical circuits.
the intermediate silicon tracker ( ist ) is a silicon based particle detector installed at brookhaven national laboratory ( bnl ) in the relativistic heavy ion collider ( rhic ) .the detector is located in the central region of the solenoidal tracker at rhic ( star ) experiment and makes up the third tracking layer in a 4-layer vertex detector upgrade .it is comprised 24 staves , each of which is a flexible pcb wrapped around a carbon fiber core .attached to each stave are 6 silicon sensors which can detect energetic particles which pass through them .these silicon sensors are read out by 36 analog front - end chips which are connected to an external data acquisition system .each stave holds the electrical substrate known as a `` hybrid . ''the hybrid is the flexible circuit onto which the silicon sensors , analog front - end chips , and passive components are mounted .the hybrid is permanently bonded to the stave during the stave manufacturing .the hybrid is broken up into 2 areas , the sensor area and the connector area . in the sensor areawe desire as little mass as possible to reduce interactions with passing particles . for this reason , the sensor area , which makes up most of the hybrid ,is made of 2 layers of copper on a kapton substrate .the connector area is made of the same 2 layers of copper but with an additional copper layer and kapton layer . over both areasis a kapton coverlay attached with of adhesive .this coverlay has no cutouts on the bottom layer and acts as insulation between the copper and the structural material in the stave . on the top layerlarge areas of this cover lay are cut out around any area with bonding pads or component pads .conventional solder mask is applied to this area as shown in figure [ fig : hybrid ] .the hybrids are finished with enepig for compatibility with both wire bonding and conventional soldering . in order to keep the copper weight down, all vias are selectively plated through .this selective plating can be seen in figure [ fig : passive ] ., width=480 ] each hybrid provides the electrical connection from the apv chips to the connector site on one end of the stave .electrically the hybrid is broken up into three separate circuits which share a common ground .each circuit serves 12 apv chips and 2 sensors .the apv chips require power , analog output , and control signals to be routed to each chip .the sensors require a bias line which must withstand up to . in the sensor areawe require as little mass as possible .because of this , the sensor area has only two routing layers , so careful routing was needed to optimize the power net distribution and signal distribution .the power nets are distributed as three large busses per section for , , and ground .the power supplies for the staves operate on a remote regulation scheme , so a sense net from each bus is routed locally away from the apv chips in each section and back to the connectors . 
for the analog output signals , a differential pair from each apv chip is routed to the connectors .the analog output is a differential current mode signal .source termination is provided at the board which plugs into the stave .in addition , the apv chips use i2c for slow controls to set various parameters of the chip .these nets are also routed as a bus to each of the chips in a section .finally , two differential pairs provide a clock and a trigger to each apv chip .these are routed as a bus to each of the chips in a section .the hybrid has 36 apv placement sites where apv chips are attached and 6 sensor placement sites where sensors are attached .the apv chips generate nearly all of the heat in the stave so an aluminum cooling tube is embedded in the stave directly beneath the apv chips as shown in figure [ fig : cooling_tube ] . to reduce the thermal resistance ,14 vias are placed on the apv placement sites to conduct heat through the kapton core .the 6 sensors do not produce any appreciable thermal load ., title="fig:",height=163 ] , title="fig:",height=163 ]each hybrid was assembled into a stave at the lawrence berkeley national laboratory ( lbnl ) composites shop in berkeley , california .two carbon fiber cover sheets were co - cured to a hybrid and then a carbon fiber honeycomb was glued to one side .a block of carbon foam with a channel cut in it for cooling was also glued on .the carbon foam has a lower radiation length but is homogeneous unlike the carbon fiber honeycomb .this uniformity makes it more suitable for wire bonding , so the carbon foam is located underneath any area where wire bonding will be performed .each stave nominally generates about of heat .cooling is provided by flowing a synthetic heat transfer fluid , 3m^tm^ novec^tm^ 7200 engineered fluid , through the aluminum tube which runs through the carbon foam . because the ist is installed in the central region of star , we needed a cooling fluid which would not damage the other detectors surrounding it in the event of a leak. novec has a very low vapor pressure , leaves no residue when it evaporates and has a very high resistivity , making it a good candidate .novec was chosen because it has a lower ozone - depletion potential than other engineered fluids with similar properties .other detectors using a similar design have observed galvanic corrosion between the carbon foam and the aluminum tube . in order to prevent this ,the carbon foam channel was coated with epoxy before the tube was laid in to provide a barrier .the target running temperature is which is above the dew point in the experimental hall ; this will prevent moisture from condensing on the detector which could facilitate galvanic corrosion .in addition , all parts of the stave and cooling system are electrically grounded .two end pieces called `` closeouts '' were then attached on either end of the stave .one is made of carbon filled peek and the other of aluminum .the closeouts have holes and slots to provide a way to mount the stave to the finished detector .the aluminum closeout is electrically bonded to the carbon face sheets , the carbon fiber honeycomb , and the carbon foam .this provides a means to ground the materials in the stave since they are electrically isolated from the hybrid .once everything was bonded to one of the face sheets , glue was applied to the other side and it was folded over to complete the stave . figure [ fig : stave_assmb ] shows the stave right before this step . 
fully cured staves were inspected and then sent to mit for further assembly ., width=528 ]the ist sensors were designed by mit and manufactured by hamamatsu .the sensors are silicon with 2 metallization layers .the backside has a sputtered aluminum coating to provide the bias voltage .this voltage reverse biases the pn junctions in the sensor which form the individual sensing elements ( see figure [ fig : sensor_sch ] .particles passing through the sensor will interact with the silicon lattice and produce charge pairs which will drift apart . a capacitor above each sensing element ac couples the signal and routes it to a bonding pad located on one edge of the chip .this is a proven , standard technology for particle detection . , width=384 ] each sensor has 6 groups of sensing elements with 128 sensing elements in each group .a bond pad for each one of the sensing elements is grouped at one edge of the sensor and is arranged in a pattern which mirrors the bonding pattern for the apv chip ( see figure [ fig : bonding_pads ] ) .this allows for much simpler bonding between the apv and the sensor , especially useful because the bonding pitch is so small .the sensing elements are arranged in a grid at a pitch of in 64 rows and 12 columns .the bonding pads are arranged in 2 rows of 64 pads .the pads are with between each pad and between the 2 rows . , width=288 ]the apv25-s1 chip is the readout and pre - amplifier asic for the sensors .it has 128 channels each with a charge sensitive pre - amplifier , shaper , and long pipeline .events are read into the pipeline at .events in the pipeline are selected by triggers and marked for readout .a single differential pair per chip reads out each of the 128 channels in series for a selected event ., height=192 ] this asic was designed by the imperial college in london for the compact muon solenoid ( cms ) running at the large hadron collider ( lhc ) .it was designed for the high radiation environment present in the middle of a physics experiment and used an ibm radiation hard process .detailed information about the apv chip can be found in and .figure [ fig : apv ] shows the apv chip bonded to a stave .in addition to the sensors and apvs , a few passive components are needed on the stave .each stave has 195 additional components , mostly 0402 sized capacitors for bypassing the apv chips .in addition , there is a small temperature sensor , termination resistors for the clock and trigger lines , protection resistors for the sensor bias lines , and a connector for each section .the connectors used were the samtec tem and sem series .they provide a low mating height and high pin density which is ideal for our application .the pins are extremely fragile , however , so extreme care had to be taken during testing and assembly to reduce the number of mating cycles and prevent any connectors from being damaged .the temperature sensor used was a texas instruments tmp102 which shares the same i2c bus as the apv chips in one of the sections .a computational fluid dynamics ( cfd ) study was done to determine the location of the hottest spot on a stave assuming nominal operating conditions ( see figure [ fig : cfd ] ) .the temperature sensor is placed in this location . ,width=288 ] the passives were soldered onto the hybrids after stave assembly . 
because the hybrid was already attached to the carbon fiber parts of the stave , it could not be put into a solder oven and had to be hand - assembled .proxy manufacturing in methuen , massachusetts assembled all of the staves .the assembly process was further complicated because there is no silk screen legend for most of the components on the board . because the bypass capacitors are located so close to the apv chips there was no space for a silk screen legend ( see figure [ fig : passive ] ) . instead, the pattern of component placement was made the same for all apv placement sites and detailed documentation was produced to prevent any parts from being incorrectly placed . a visual inspection at mit after the passives were attached to the staves provided an additional quality check for such an error ., width=288 ] during production one stave was found to have a short between two power nets on the hybrid , away from any solder areas . using a thermal camera we were able to determine the location of the short ( see figure [ fig : thermal ] ) . until that point in production, the staves were not electrically tested after receipt from lbnl or after passives were attached . after this incident ,the power nets were electrically tested for shorts .we found no additional problems during production ., width=288 ]during production we performed separate assembly steps for the apv chips and the silicon sensors .our production volume was limited by the silicon sensors as they were the single most expensive component to purchase and also had the longest lead time . running out of sensors late in production would have delayed installation of the full detector . to increase the overall yield , the apv chips were installed on the stavesfirst , wire bonded , and tested to make sure they were fully functional . only after all apv chips on a stave were verified to be working properly were silicon sensors attached .the apv chips were glued directly to the hybrid using a conductive epoxy , tra - duct 2902 .a pneumatic dispenser was used to apply a uniform amount of glue for each apv chip .the glue is both thermally and electrically conductive and acts as a means to connect the back side to vss and as a means to conduct heat away from the chip .it is important that the alignment of the apv chips is precise . since there are bonding pads on all sides of the chip , both `` x '' and `` y '' alignment are important .to achieve good alignment we made custom tooling which would allow the technician to align each chip manually .the tooling provides an edge which all apv chips are pressed against to set the `` y '' alignment . then the technician aligns the bonding pads on the apv chip with copper features on the hybrid to ensure a precise alignment in `` x '' . 
the tooling used for thisis shown in figure [ fig : apv_attach ] .this alignment step was most important because the mounted apv chips would then have to match exactly the location of the output pad sites on the silicon sensors .misalignment would make the bonding process more difficult and increase the likelihood of shorts between wire bonds .all of the alignment was done by hand under a microscope by a skilled technician ., width=480 ]staves with apv chips attached were sent to the instrumentation division at brookhaven national laboratory for wire bonding and testing .wire bonding was done on a hesse & knipps bond jet 815 bonding machine using aluminum wire .a program for the bonding machine was written to wire bond each apv chip automatically , but inspection was done by a bonding technician during the bonding process to ensure the operation went smoothly .example images from the inspection are shown in figure [ fig : apv_bond ] . during this operation31 control nets were bonded from the hybrid to the apv chip and 21 power nets are bonded from the hybrid to the apv chip .there were a total of 1872 wire bonds for this operation .the stave was held down using only the stock vacuum chuck on the bonding machine .no additional fixture was needed . ,title="fig:",width=240 ] , title="fig:",width=240 ] bonded apv chips had to be tested before silicon sensors could be attached . in an experimental assembly building located next to the star detectorwe set up a clean room with a testing station for the ist .the testing station consists of a rack to hold detectors with a cosmic ray trigger inside a light - tight box ( ambient light will create noise on the sensors due to the photoelectric effect . )the staves were attached to a data acquisition system ( daq ) ( see for more information ) which provided power and control signals and digitized the data from the apv chips . to test each stave ,it was attached to the daq and operated .noise and pedestal levels were recorded over many events and then analyzed .all staves were then sent back to mit for further work .staves that had apv chips on them which did not pass the noise and pedestal level tests had those chips removed and replaced and then sent back to bnl .the replaced apv chips were bonded and then re - tested before they were sent back to mit. 
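The pedestal and noise analysis mentioned above can be summarized by a short sketch. It assumes the DAQ provides an array of raw ADC samples per channel and per event; the event-by-event common-mode subtraction is a conventional step in silicon-strip readout and is included here as an assumption, not as a statement of the exact procedure used.

```python
import numpy as np

def pedestal_and_noise(adc):
    """Per-channel pedestal and noise from recorded events.

    adc : (n_events, n_channels) array of raw ADC samples taken without signal.
    Pedestal is the mean of each channel; noise is the RMS about that mean
    after subtracting an event-by-event common-mode shift (estimated here as
    the median residual across channels in each event).
    """
    adc = np.asarray(adc, dtype=float)
    pedestal = adc.mean(axis=0)
    residual = adc - pedestal                         # remove the static baseline
    common_mode = np.median(residual, axis=1)         # coherent per-event shift
    noise = (residual - common_mode[:, None]).std(axis=0)
    return pedestal, noise
```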
staves with fully functioning apv chips then had the silicon sensors installed .the silicon sensor attachment was the final assembly step done at mit .before the process began , all tooling and staves were cleaned with isopropyl alcohol to ensure that no contaminants were transferred onto the sensors .staves were placed in mit - made tooling which held the stave in place .an adjustable blade at the top edge of the stave was moved until it marked the top edge of where each sensor would sit .the blade was fastened in position and then the sensor attach began .the sensors are held down with two different types of epoxy as shown in figure [ fig : si_glue ] .the first is a conductive epoxy needed to make a good contact between the aluminized back side of the sensor and the traces below , which carry the bias voltage .we used the same tra - duct 2902 adhesive for this .the second epoxy is epon 828 resin with versamid 140 hardener .we used this epoxy for mechanical strength and stability since the conductive epoxy is not very strong .a single line of conductive epoxy was put down and then several lines of conventional epoxy were applied to make a strong bond .we found that the working time after applying the glue was about 30 minutes after which it would be much more difficult to get the sensor to lie flat ., width=576 ] the silicon sensors were aligned in a similar way to the apv chips .the tooling provided a stop in the `` y '' direction so the technician needed only focus on alignment in the `` x '' direction . under a microscope, a technician would align the sensor bonding pads with the apv bonding pads checking all 6 apv chips for each sensor .an example view through the microscope during this alignment procedure is shown in figure [ fig : si_align ] .after all sensors were aligned , weights were applied to the sensors . in early prototypes we found that the sensors would drift during the curing process .adding weights to the tops of the sensors prevented the sensors from drifting .after the weights were applied , the alignment was rechecked and adjusted if necessary .staves would be allowed to cure for 24 hours before they would be packaged and transported ., width=192 ]the bonding of the silicon sensors took place at the same instrumentation division at bnl on the same bonding equipment .again aluminum wire was used .an automatic bonding program was used but we required close supervision by the technician . because of the size and pitch of the bonding pads it was important to catch any error and repair it before bonding continued .the bonding process put down 128 bonds between each apv chip and each silicon sensor for a total of 4608 wire bonds .in addition , 24 bonds were put down between each sensor and the hybrid as part of the biasing circuitry for a total of 144 bonds .depending on the amount of technician intervention , this process would take between 1 and 2 hours .the end result can be seen in figure [ fig : si_bonded ] and figure [ fig : full_stave ] ., width=624 ] fully bonded staves were brought back into the ist clean room and tested again . using the daq system, we were able to identify any shorted bonding wires and determine if it was possible to repair them . in addition , baseline noise and pedestal information was again recorded .we found very few errors with the bonding done at bnl and nearly all of the staves produced are in use in the final detector . 
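As a quick arithmetic check of the bond counts quoted in this and the previous bonding step, using the stave composition given earlier (36 APV chips and 6 sensors per stave):

```python
apv_chips_per_stave = 36
sensors_per_stave = 6

# Hybrid-to-APV bonding: 31 control nets + 21 power nets per chip.
hybrid_to_apv_bonds = apv_chips_per_stave * (31 + 21)    # = 1872

# Sensor-to-APV bonding: 128 channels per chip.
sensor_to_apv_bonds = apv_chips_per_stave * 128          # = 4608

# Sensor bias bonding: 24 bonds per sensor.
bias_bonds = sensors_per_stave * 24                      # = 144

print(hybrid_to_apv_bonds, sensor_to_apv_bonds, bias_bonds)
```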
fully functioning staves then had an encapsulant applied to the bonding wires to protect them during handling and installation .the first prototypes used dymax 9001-e - v3.1 cured with a dymax 5000-ec series ultraviolet curing lamp .we found that , after exposure to the uv light source , some apv chips would no longer function or have higher input noise . while the exact cause of this was unknown , we suspect that the silicon was damaged by the uv treatment .for production staves , dow corning sylgard 186 elastomer was used .this encapsulant has a room temperature cure and did not affect the performance of the apv chips .a fully completed stave , ready for installation , is shown in figure [ fig : full_stave ] ., width=624 ] after the encapsulation the staves were tested extensively and optically surveyed before being installed .the structure which the staves are mounted to has a mechanical alignment accuracy of about . however , once installed we require knowing the position of the detector to a higher degree of accuracy .we are able to do this by aligning tracks which pass through the ist as well as other detectors .we can reduce the number of tracks needed to do this alignment by knowing very precisely how the sensors on a single stave are located relative to each other . before installationeach of the staves was placed on an optical survey station and measured .the survey station , a nikon vm-150 , was used to measure each of the staves .we estimate the accuracy of these measurements to be about in the plane of the detector and out of plane .we combine this information along with the `` rough '' position of each detector inside the experiment and will be able to align the detector from tracks generated in this and other detectors ( see figure [ fig : survey ] ) . ,title="fig:",height=192 ] , title="fig:",height=192 ]fully functional staves were installed on a carbon fiber support structure arranged to have full azimuthal coverage . as seen in figure [ fig : full ] the silicon components are facing into the support structure which protects the silicon from damage during the full installation process .electrical connections were laid on the tube and connected to each of the detectors .we were able to run the detectors in place to ensure that all electrical connections were sound .data was taken to establish the baseline noise and pedestal data before final installation .finally , cooling lines were attached to the cooling tubes running through the staves and were tested with a helium leak checker .two staves were found to have leaks inside the cooling tube .these staves were not installed and the cause of the leaks is currently being investigated . 
Once the entire IST was assembled, the support structure was mated with a larger detector support system and inserted into the central region of STAR. The detector was then hooked up to electrical and cooling services and run in place again to ensure that all detectors were working properly.

The IST for STAR is made of 24 staves, each of which carries 6 silicon sensors read out by 36 APV chips. The detectors are designed to operate in a high-radiation environment with as little mass as possible to reduce the chance of interactions with particles. The staves were designed and manufactured at MIT and LBNL. Passive assembly as well as APV readout chip and silicon sensor attachment took place at MIT, while bonding, testing, and installation took place at Brookhaven National Laboratory. The finished detector was installed into STAR and is currently taking data with 95 percent of channels working properly.

The fabrication of the IST was split between MIT and the University of Illinois at Chicago. Zhenyu Ye at UIC coordinated with the Silicon Detector Lab at Fermi National Accelerator Laboratory to complete all steps in production after the passive attach. Their assembly process was very similar to the assembly process used at MIT but not identical; it is outside the scope of this paper to detail the two separate assembly processes.

G. Visser et al., "A readout system utilizing the APV25 ASIC for the forward GEM tracker in STAR," IEEE Real Time Conference Record, Berkeley, CA, 2012.
L. Jones, "APV25-S1 user guide version 2.2," Royal Academy London, 5 September 2001.
M. L. Brooks et al., "The PHENIX forward silicon vertex detector," arXiv:1311.3594v2 [physics.ins-det], 14 February 2014.
M. J. French et al., "Design and results from the APV25, a deep sub-micron CMOS front-end chip for the CMS tracker," Nuclear Instruments and Methods in Physics Research A 466 (2001) 359-365.
We present the design of a detector used as a particle tracking device in the STAR experiment at the RHIC collider at Brookhaven National Laboratory. The ``stave,'' 24 of which make up the completed detector, is a highly mechanically integrated design comprising 6 custom silicon sensors mounted on a kapton substrate. 4608 wire bonds connect these sensors to 36 analog front-end chips mounted on the same substrate; power and signal connectivity from the hybrid to the front-end chips is likewise provided by wire bonds. The entire circuit is mounted on a carbon fiber base co-cured to the kapton substrate. We present the unique design challenges for this detector and some novel techniques for overcoming them.

Keywords: electrical and mechanical integration; hybrid substrate; nuclear physics; carbon fiber reinforced polymer
consider the problem of transmitting two correlated gaussian sources over a gaussian broadcast channel with two receivers , each of which desires only to recover one of the sources . in , it was proven that analog ( uncoded ) transmission , the simplest possible scheme , is actually optimal when the signal - to - noise ratio ( snr ) is below a threshold for the case of matched source and channel bandwidth . to solve the problem for other cases , various hybrid digital / analog ( hda ) schemeshave been proposed in , and .in fact , the hda scheme in achieves optimal performance for matched bandwidth whenever pure analog transmission does not , thereby leading to a complete characterization of the achievable power - distortion tradeoff . for the bandwidth - mismatch case ,the hda schemes proposed in and comprise of different combinations of previous schemes using either superposition or dirty - paper coding . in all the aforementioned work ,authors also compared achieved performances with that of separate source - channel coding .since the channel is degraded , source coding boils down to sending a `` common '' message to both decoders and a `` refinement '' message to the decoder at the end of the better channel . in both of the two source coding schemes proposed in ,the first source is encoded as the common message , but one scheme encodes ( as the refinement message ) the second source independently , and the other after _ de - correlating _ it with the first source . in , on the other hand , the second source is encoded after it is de - correlated with the _ reconstruction _ of the first source .although this approach provably yields a better performance than the schemes in , it is still not optimal . in , it was shown that the optimal rate - distortion ( rd ) tradeoff in this source coding scenario is in fact achieved by a scheme called successive coding , whereby both common and refinement messages are generated by encoding both sources jointly , instead of using any kind of de - correlation .although successive coding is a special case of successive refinement in its general sense , _ computation _ of the rd tradeoff , even for gaussians , turned out to be non - trivial .a shannon - type lower bound derived for the problem was rigorously shown to be tight , yielding an analytical characterization of the rd tradeoff . in this paper, we investigate the performance of separate source and channel coding for any bandwidth compression / expansion ratio .as discussed in the previous paragraph , the source coding method to be used for optimal performance is successive coding .we first show that this separate coding scheme achieves the optimal power - distortion tradeoff when one receiver requires almost lossless recovery , and the other requires a small enough distortion . comparing with best - known schemes and outer bounds , we then show that this scheme is competitive in other cases as well .our results imply that with a ( sometimes marginal ) sacrifice of power - distortion performance , we can design separate source and channel codes , and thus enjoy the advantages such as simple extension to different bandwidth compression / expansion ratios . in sectionii , the problem is formally defined .our main results are proved in section iii and the separate coding scheme is compared with other separation - based schemes and hybrid schemes in section iv .as depicted in fig . 
[ fig : system ] , a pair of correlated gaussian sources are broadcast to two receivers , and receiver , , is only to reconstruct . without loss of generality, we assume the source sequences are generated in an i.i.d .fashion by , where \ ] ] and ] and .several separation - based schemes have been previously proposed , differing only in their source coding strategy . in the first separation - based scheme , termed scheme a in , sources and encoded as if they are independent , resulting in the distortion region given by in scheme b in , the second source is written as , where , and and are treated as two new independent sources .hence we obtain in the scheme introduced in , which we call scheme c , is quantized to and is then encoded conditioned on .the resultant distortion region becomes \left ( 1+\frac{\bar{\eta } p}{n_2 } \right ) ^ { -\kappa}\ ; .\label{eq : tiand2}\end{aligned}\ ] ] of the three , it is obvious that scheme c achieves the best performance .however , it is still not optimal as we will show in section iv .the optimal strategy is in fact what is called successive coding in , whereby the sources are encoded jointly at both the common and the refinement layers .the rd tradeoff for successive coding of gaussian sources with squared - error distortion was given in parametrically with respect to ] , and with , and is the unique root of in the interval ] .the proof is deferred to appendix a. in separate coding , the region of all achievable triplets can be determined using one of two methods .the conventional method fixes and searches for the lower envelope of all whose source rate region intersects with the capacity region given in .alternatively , we can fix and search for the minimum whose corresponding capacity region intersects with the source rate region given in lemma .we find this alternative both more convenient and more meaningful .more specifically , it is easier to compare schemes based on the minimum power they need to achieve the same distortion pair , and the ratio of minimum powers yields a single number as a quality measure . to be able to use this alternative , first we need to find out the minimum required power for any given source coding rate pair . for any source coding rate pair ,the minimal required power is given by for a gaussian broadcast channel where the better receiver is the second one , and , rates of common and private information , respectively , can be achieved if and only if there exists such that where .this , in turn , implies that is achievable if and only if there exists such that since the terms in the maximum exhibit opposite monotonicity with respect to with asymptotes at and , the minimum power is achieved when the two terms are equal , that is , when and has the form in ( [ eq : mismatchedp ] ) . by substituting ( [ eq : sourcerate1 ] ) and ( [ eq : sourcerate2 ] ) into ( [ eq : mismatchedp ] ) , we obtain the minimum power required for the separate coding scheme as a function of : ^ { 1/ \kappa } -1\right ) \\+ n_2\left [ \left ( \frac{1-\nu^2 \delta}{d_2 } \right ) ^ { 1/ \kappa } -1\right ] \left [ \frac{1-\rho^2}{d_1(1-\nu^2 \delta ) - ( \rho-\nu \delta)^2 } \right]^ { 1 / \kappa } \ ; . 
\label{eq : pnu}\end{gathered}\ ] ] for bandwidth - matched case , the minimum power of separate coding can actually be found analytically for any .we omit the details here .the following theorem is our first main result .separate source - channel coding achieves optimal power - distortion tradeoff when satisfies either of the following conditions 1 . and , 2 . and .we first find the minimum power the outer bound ( [ eq : ob1 ] ) and ( [ eq : ob2 ] ) requires .note that when , ( [ eq : ob2 ] ) will hold for any ] is equivalent with varying in $ ] , by showing is a monotonically decreasing function of .when , and also note when , respectively .we examine instead of , and the right hand side is a quadratic function of centered at and the maximum value is .( when , the function value is . )similarly , when , we have and in this case , when .we examine and thus have the right hand side is centered at and the maximum value is . . behroozi , f. alajaji , and t. linder , `` hybrid digital - analog joint source - channel coding for broadcasting correlated gaussian sources , '' _ proc .inf . theory ( isit 2009 ) _ ,seoul , korea , june 2009 .c. tian , s. diggavi , and s. shamai , `` the achievable distortion region of bivariate gaussian source on gaussian broadcast channel , '' _ proc .theory ( isit 2010 ) _ , austin , tx , june 2010 .
The problem of broadcasting a pair of correlated Gaussian sources using optimal separate source and channel codes is studied. Considerable performance gains over previously known separate source-channel schemes are observed. Although source-channel separation yields suboptimal performance in general, it is shown that the proposed scheme is very competitive for any bandwidth compression/expansion scenario. In particular, in the high channel SNR regime it can be shown to achieve the optimal power-distortion tradeoff.
the lattice boltzmann method (lbm) has become a popular numerical tool for flow simulations. it solves the discrete velocity boltzmann equation (dvbe) with a carefully chosen discrete velocity set. with the coupled discretization of velocity space and physical space, the numerical treatment of the convection term reduces to a very simple _streaming_ process, which provides the benefits of low numerical dissipation, easy implementation, and high parallel computing efficiency. another advantage of the lbm is that the simplified collision term is computed implicitly while implemented explicitly, which allows for a large time step even though the collision term causes stiffness at a small relaxation time. this advantage makes the lbm a potential solver for high reynolds number flows. however, the coupled discretization of velocity and physical spaces limits the lbm to uniform cartesian meshes, which hinders its application to practical engineering problems. some efforts have been made to extend the standard lbm to non-regular (non-uniform, unstructured) meshes, and a number of so-called off-lattice boltzmann (olb) methods have been developed by solving the dvbe using certain finite-difference, finite-volume, or finite-element schemes. these olb schemes differ from each other in the temporal and spatial discretizations. however, a straightforward implementation of cfd techniques usually leads to the loss of the advantages of the standard lbm, especially the low-dissipation property and the stability at large time steps. for example, in many of the schemes the time step is limited by the relaxation time if an accurate solution is to be obtained, even when the collision term is computed implicitly. this drawback makes these olb schemes computationally expensive when simulating high reynolds number flows. an alternative way to construct olb schemes is to use a time-splitting strategy in solving the dvbe, in which the dvbe is decomposed into a collision sub-equation followed by a pure advection sub-equation. the collision sub-equation is fully local and is discretized directly, leading to a collision step identical to that of the standard lbm; the collisionless advection sub-equation is then solved with certain numerical schemes on uniform or non-uniform meshes, leading to a general streaming step. specifically, the scheme proposed by bardow et al. (denoted by bkg), which combines the variable transformation technique for the collision term with the lax-wendroff scheme for the streaming step, overcomes the time step restriction imposed by the relaxation time. it was demonstrated that accurate and stable solutions can be obtained even when the time step is much larger than the relaxation time. the above olb schemes are developed in the lbm framework and are limited to continuum flows. recently, a finite volume kinetic approach on general meshes, i.e., the discrete unified gas kinetic scheme (dugks), was proposed for flows at all knudsen numbers. in the dugks the numerical flux is constructed based on the governing equation, i.e., the dvbe itself, instead of using interpolations. with such a treatment, the time step is not restricted by the relaxation time, and its superior accuracy and stability for high reynolds number continuum flows have been demonstrated.
since the bkg and dugks methods overcome the time step restriction through different approaches, the difference in their performance is still not clear, so in this work we present a comparative study of these two kinetic schemes for continuum flows, even though the dugks is not limited to such flows. we also investigate the link between the two schemes by comparing them in the same finite volume framework. the remaining part of this paper is organized as follows. sec. 2 introduces the dugks and bkg methods and discusses their relation, sec. 3 presents the comparison results, and a conclusion is given in sec. .

the governing equation for the olb schemes and the dugks method is the boltzmann equation with the bhatnagar-gross-krook collision operator, where is the distribution function (df) with particle velocity at position and time , is the relaxation time due to particle collisions, and is the maxwellian equilibrium distribution function. in this article, we consider the isothermal two-dimensional nine-velocity (d2q9) lattice model. the corresponding dvbe is , where and are the df with discrete velocity and the corresponding discrete equilibrium df, respectively. the d2q9 discrete velocity set is defined as
\[
\bm{\xi}_\alpha = \begin{cases} (0,0) & \text{for } \alpha=0,\\ \sqrt{3rt}\left(\cos[(\alpha-1)\pi/2], \sin[(\alpha-1)\pi/2]\right) & \text{for } \alpha=1,2,3,4,\\ \sqrt{3rt}\left(\cos[(2\alpha-9)\pi/4], \sin[(2\alpha-9)\pi/4]\right)\sqrt{2} & \text{for } \alpha=5,6,7,8, \end{cases}
\]
where is the gas constant and is the constant temperature. under the low mach number condition, the discrete equilibrium df can be approximated by its taylor expansion around zero particle velocity up to second order, , where is the lattice sound speed and the weights are . the fluid density and velocity are the moments of the df, . the shear viscosity of the fluid is related to the relaxation time by , which can be deduced from a chapman-enskog analysis. the conservation property of the collision term is maintained by its discrete velocity counterpart, i.e., .

the dugks employs a cell-centered finite volume (fv) discretization of the dvbe. the computational domain is first divided into small control volumes. for a clear illustration of the formulas, we denote the volume-averaged df with discrete velocity in control volume at time level by , i.e., . then, integrating eq. from time to time and applying the gauss theorem, we get , where is the numerical flux that flows into the control volume through its faces and is the time step size. note that the trapezoidal rule is used for the collision term. this implicit treatment of the collision term is crucial for the stability when the time step is much larger than the relaxation time. the implicitness can be removed in the actual implementation using the following variable transformation technique, which is also adopted by the standard lbm, . equation can then be rewritten in an explicit formulation, . in the implementation, we track the evolution of instead of the original df. due to the conservation property of the collision term, the macroscopic variables can be calculated from the transformed df, . the key merit of the dugks lies in its treatment of the advection term, i.e., the way the numerical flux is constructed. in the dugks, the midpoint rule is used for the time integration of the flux, , and the integration over the faces is computed from the df at the centers of the faces, which are obtained from the characteristic solution of the kinetic equation. suppose the center of a face is ; then, integrating eq.
along the characteristic line over a half time step from to , and applying the trapezoidal rule, we get .\label{eq_charraw}\]] again, the implicitness can be eliminated by introducing another two variable transformations, , and we can reformulate eq. in an explicit form, . for smooth flows, can be interpolated linearly from its neighboring cell centers. after getting , the original df can be recovered with the help of eq. . the macroscopic fluid variables and used by the collision term in eq. are calculated from . to ensure that the interpolation is stable, the time step is limited by the cfl condition , where is the cfl number and measures the size of the cell.

unlike the dugks, in which collision and particle transport are treated simultaneously, the bkg scheme is a splitting method for eq. , which treats the convection term and the collision term sequentially, i.e., . in the collision step, the collision term is integrated using the trapezoidal rule, .\label{eq:splic_c}\]] using the same notation as in the dugks, eq. can be rewritten in an explicit formulation, , with . it is noted that this treatment is identical to that of the standard lbm. then eq. is solved with the lax-wendroff scheme with the initial value or , where the subscripts denote the spatial indices. equation forms the evolution of the bkg scheme. in the original works, either finite elements (fe) or finite differences (fd) are employed to discretize the spatial gradients in eq. . in ref. [14], the central finite-difference scheme on a uniform mesh is used, i.e., the first and second order spatial derivatives are computed as [eq:central_schemes] , \end{aligned}\]] where the computational stencil for each node is illustrated in fig. [fig:fv_stencil]. it is noted that if we use the one-dimensional lax-wendroff scheme to solve eq. in each discrete velocity direction on a uniform cartesian grid, this characteristic-based scheme reduces to the lax-wendroff lbe scheme developed in . the lax-wendroff scheme eq. can also be expressed as \\ & \approx \tilde f^{+,n}_\alpha - \delta t \xi_{\alpha i } \frac{\partial}{\partial x_i}\left(\tilde f^{+,n}_\alpha(\bm{x}-\bm{\xi_\alpha}h)\right)\\ & \equiv \tilde f^{+,n}_\alpha - \delta t \xi_{\alpha i } \frac{\partial}{\partial x_i}\left(f^{n+1/2}_\alpha(\bm{x})\right ) .\end{split}\end{aligned}\]] this means that the bkg scheme can be reformulated as an fv scheme, , where with . to be more specific, we rewrite eq. in the finite-volume form as - \frac{\xi_{\alpha 2 } \delta t}{\delta x_2 } \left [ f^{n+1/2}_{\alpha , l , m+1/2 } - f^{n+1/2}_{\alpha , l , m-1/2}\right],\]] where and are the dfs at the face centers of cell at the half time step, which depend on the interpolation scheme. if the distribution function is assumed to be a piecewise-linear polynomial, we can obtain the distribution functions at the cell interfaces, e.g., - \frac{\xi_{\alpha 1 } \delta t}{2\delta x_1 } [ \tilde f^{+,n}_{\alpha , l , m } - \tilde f^{+,n}_{\alpha , l-1,m } ] \\ & - \frac{\xi_{\alpha 2 } \delta t}{8\delta x_2 } [ \tilde f^{+,n}_{\alpha , l-1,m+1 } + \tilde f^{+,n}_{\alpha , l , m+1 } -\tilde f^{+,n}_{\alpha , l-1,m-1 } - \tilde f^{+,n}_{\alpha , l , m-1 } ] \\ & \approx \tilde f^{+,n}_{\alpha , l-1/2,m } - \frac{\xi_{\alpha1}\delta t}{\delta x_1}\frac{\partial}{\partial x_1}\tilde f^{+,n}_{\alpha , l-1/2,m } - \frac{\xi_{\alpha2}\delta t}{\delta x_2}\frac{\partial}{\partial x_2}\tilde f^{+,n}_{\alpha , l-1/2,m}.
\end{split}\end{aligned}\]] one can immediately check the equivalence of eqs. and after calculating the remaining flux terms in a similar way to eq. and inserting eq. into eq. . we now analyze the differences between the dugks and the bkg scheme in the finite-volume formulation. this is achieved by analyzing the accuracy of the reconstructed distribution function at the cell interface center. firstly, it is noted that the exact solution of the dvbe at the cell interface center is . we can immediately see that if we approximate the integration of the collision term in eq. explicitly, i.e., assuming , we get eq. , which is the reconstructed distribution function in the bkg scheme. on the other hand, if we apply the trapezoidal rule to the quadrature, we get eq. , i.e., the reconstructed cell-interface distribution function in the dugks. so, in both the dugks and the bkg methods, the flux is determined from the local characteristic solution of the dvbe, and the convection and collision effects are considered simultaneously. we also note that, for those fv/fd schemes that use the simple central-difference or upwind schemes of traditional cfd methods, the corresponding cell-interface distribution functions are and , where the collision effect is totally ignored. the effects of the different treatments of the integration of the collision term in eq. can be analyzed using the chapman-enskog expansion method. by approximating the distribution function by its first-order chapman-enskog solution, , with , we have . therefore, for continuum flows where , the distribution function at the cell interface reconstructed in either the bkg or the dugks method is a second-order approximation of the exact one. furthermore, with the trapezoidal rule for the collision term, the dugks is expected to be more accurate than the bkg method, which uses a lower-order explicit rule. in regions with large velocity gradients, where the collision effect or is important, the bkg scheme may yield significant errors. more importantly, as the dugks employs an implicit treatment of the collision term in the evaluation of the numerical flux, it is expected to be more stable than the bkg scheme. finally, we remark that if the distribution functions at the cell interfaces are obtained by direct interpolation, i.e., neglecting the integral of the collision term in eq. , the leading error of the approximation is . as the integral of on the right hand side of eq.
contributes to the diffusive flux, the lack of the collision term is equivalent to introducing a numerical viscosity proportional to . this explains why many other fv-based lattice boltzmann schemes have to keep the time step much smaller than the collision time to obtain accurate results.

in this subsection, we briefly describe the implementation of the no-slip boundary condition for the bkg and dugks methods. the basic idea is to mimic the half-way bounce-back rule of the standard lbm by reversing the dfs at boundary faces at the middle time steps. [fig:boundary_condition] illustrates a boundary face located at a no-slip wall with velocity . both the incoming and the outgoing dfs at the boundary face at the middle time steps have to be provided to update the cell-centered dfs. we denote the incoming and outgoing dfs by and , respectively, where stands for in the dugks and in the bkg scheme. the ghost cell method is used to facilitate the implementation of the no-slip boundary condition. an extra layer of cells (ghost cells) is allocated outside the wall. the unknown dfs in the ghost cells are extrapolated linearly from the cell centers of the neighboring inner cells, . here, stands for in the dugks and in the bkg scheme. then we can compute the normally. after that, the incoming dfs are calculated in the same way as in the half-way bounce-back rule of the standard lbm, , where stands for an outgoing df direction and is its reverse direction.

in this section, we compare the dugks and the bkg scheme in terms of accuracy, stability and computational efficiency by simulating several two-dimensional flows. the first one is the unsteady taylor-green vortex flow, which is free from boundary effects and has an analytical solution; the second test case is the lid-driven cavity flow, which is used to evaluate accuracy and stability; and the last one is the laminar boundary layer flow problem, which is used to verify the dissipation properties of the dugks and the bkg methods. in all of our simulations, is set to and the cfl number is set to 0.5 unless stated otherwise.

this problem is a two-dimensional unsteady incompressible flow in a square domain with periodic conditions in both directions. the analytical solution is given by \exp(-16\pi^2\nu t),\end{aligned}\]] [eq:tvana] where is a constant indicating the kinetic energy of the initial flow field, is the shear viscosity, is the velocity, and is the pressure.
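since most inline symbols of eq. ([eq:tvana]) were lost in extraction, the short sketch below assumes the common form of the taylor-green vortex on a periodic unit square with wavenumber 2\pi, whose velocity decays as exp(-8\pi^2\nu t) and whose pressure decays as exp(-16\pi^2\nu t) (consistent with the surviving decay factor above); it also shows one common choice of relative l2 error of a computed velocity field against this reference, in the spirit of the accuracy test reported next. function and variable names are illustrative, not the paper's notation.

```python
import numpy as np

# hedged sketch: the standard taylor-green vortex on [0,1]x[0,1] with
# wavenumber 2*pi is assumed here; u0 is the initial velocity amplitude,
# nu the shear viscosity, rho0 a reference density.

def taylor_green(x, y, t, u0=1.0, nu=1e-3, rho0=1.0):
    k = 2.0 * np.pi
    decay = np.exp(-2.0 * k**2 * nu * t)            # = exp(-8 pi^2 nu t)
    u = -u0 * np.cos(k * x) * np.sin(k * y) * decay
    v =  u0 * np.sin(k * x) * np.cos(k * y) * decay
    p = -0.25 * rho0 * u0**2 * (np.cos(2*k*x) + np.cos(2*k*y)) * decay**2
    return u, v, p

def l2_velocity_error(u_num, v_num, u_ref, v_ref):
    # relative l2 norm of the velocity error
    num = np.sum((u_num - u_ref)**2 + (v_num - v_ref)**2)
    den = np.sum(u_ref**2 + v_ref**2)
    return np.sqrt(num / den)

# usage: evaluate the reference field at the cell centres of an n x n mesh
n = 64
xc = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xc, xc, indexing="ij")
u_ref, v_ref, _ = taylor_green(X, Y, t=0.1)
```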
the computation domain is and with . we set and . the corresponding reynolds number and mach number are and , respectively. the initial distribution function is computed from the chapman-enskog expansion at the navier-stokes order, \label{}\]] where the equilibrium distribution functions are evaluated from the initial analytical solution. we first evaluate the spatial accuracy of the dugks and bkg schemes by simulating the flow with various mesh resolutions ( ). as we are analyzing the spatial accuracy, the time step is set to a very small value ( ) to suppress the errors caused by the time step size. the -error of the velocity field is measured, , where and are the analytical and numerical solutions, respectively. the -errors of the two schemes at the half-life time are listed in table [tab:tv_nerr]. it can be seen that both methods are of second-order accuracy in space, but the errors of the dugks are smaller than those of the bkg scheme at the same mesh resolutions.

table [tab:tv_nerr]: -errors of the velocity field for the taylor-green vortex flow.

since both the dugks and bkg methods can admit a time step larger than the relaxation time, we now investigate their performance at large values of . we fix the mesh size ( ) and the relaxation time but change the time step. the -errors at are shown in fig. [fig:tv_dterr], from which we can see that the errors scale almost linearly with the time step size for both methods. in particular, the two methods still give reasonably accurate results even when is as large as 50, as shown in fig. [fig:tv_uprofile]. and again, the errors of the dugks are smaller than those of the bkg scheme in all cases. both methods blow up as , since the cfl number goes beyond 1 at this condition.

figure [fig:tv_dterr]: -errors using various on a mesh. figure [fig:tv_uprofile]: velocity profiles at on a mesh; the time step is .

here, we also discuss the computational efficiency of the dugks and the bkg scheme. when both schemes are implemented in the fv framework, their only difference is the computation of the numerical flux. the dugks introduces two sets of additional dfs, and macroscopic variables at the cell faces are required. so it can be expected that the computing cost of the dugks is higher than that of the bkg scheme. for example, on an intel xeon e5-2670 v3 cpu, the computation times for 10,000 steps of the bkg and the dugks schemes on a mesh are 10.5 s and 19.7 s, respectively, meaning the dugks is roughly twice as expensive as the bkg scheme.

the incompressible two-dimensional lid-driven cavity flow is a popular benchmark problem for numerical schemes. here, we use it to evaluate the accuracy and stability of the two schemes at different reynolds numbers. the flow domain is a square cavity with length . the top wall moves with a constant velocity , while the other walls are kept fixed. the reynolds number is defined as , with being the viscosity of the fluid.
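as a small, hedged illustration of how the fluid viscosity (and hence the relaxation time) is chosen for the cavity runs described next, assuming the standard isothermal bgk relation between viscosity and relaxation time mentioned in sec. 2 (shear viscosity equal to rt times the relaxation time) and purely illustrative values of the lid velocity and cavity length:

```python
# minimal sketch of the parameter choice for the cavity runs: the lid
# velocity and cavity length are fixed, and the viscosity is adjusted to
# reach the target reynolds number.  the relation nu = rt * tau (with the
# lattice sound speed squared equal to rt) is the standard bgk result; the
# concrete numbers below are illustrative, not the paper's values.

def cavity_parameters(reynolds, u_lid=0.1, length=1.0, rt=1.0/3.0):
    nu = u_lid * length / reynolds   # shear viscosity from re = u*l/nu
    tau = nu / rt                    # relaxation time from nu = rt * tau
    return nu, tau

for re in (1000, 5000, 10000):
    nu, tau = cavity_parameters(re)
    print(f"re={re:6d}  nu={nu:.3e}  tau={tau:.3e}")
```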
in the computation, we set , , and the viscosity of the fluid is adjusted to achieve different reynolds numbers. uniform cartesian meshes with grid numbers are used in our simulations. we first simulate the flow at and with different mesh resolutions to compare the accuracy and stability of the dugks and bkg methods. the steady-state velocity profiles along the vertical and horizontal center lines predicted by the two schemes are presented in figs. [fig:cavity_re1000]-[fig:cavity_re10000]. the benchmark solutions are also included for comparison. it should be noted that the grid numbers used in the bkg scheme are doubled from those in the dugks at each reynolds number, because the bkg computations are unstable on the coarsest meshes used in the dugks. from these results, we can clearly observe that the dugks gives more accurate results than the bkg scheme, especially at large reynolds numbers. furthermore, the results show that the dugks is insensitive to the mesh resolution, while the bkg scheme is rather sensitive. generally, much finer meshes should be used in the bkg scheme to obtain accurate results. specifically, with coarser meshes the horizontal velocity profiles in the boundary layer of the top wall depart severely from the benchmark solutions at high reynolds numbers (see figs. [fig:cavity_re5000] and [fig:cavity_re10000]), which was also observed in . contrary to the bkg scheme, the dugks gives surprisingly good results with the same meshes even at . as analyzed in sec. [sec:numerical], the only difference between the dugks and the bkg scheme is the treatment of the quadrature of the collision term in the reconstruction of the cell-interface distribution function, and the difference scales with the time step, which has been confirmed in the taylor-green vortex flow test. now we explore the effect of the time step on the solution of this steady flow for the bkg scheme. we simulate the flow at and using various cfl numbers with a fixed grid ( ). the calculated velocity profiles are shown in fig. [fig:cavity_cfl]. we can see that the errors decrease with decreasing cfl number, but even with cfl=0.1 the errors are still much larger than those of the dugks.

we also use the cavity flow to assess and compare the stability of the two schemes. generally, the stability of numerical schemes for the bgk equation is affected by the treatments of both the advection term and the collision term. the stability of an explicit discretization of the advection term is controlled by the cfl number, while the stability due to the collision term treatment depends on the ratio of and the collision time, i.e., . the maximum values of at various cfl numbers for a stable computation on the and meshes are measured and presented in fig. [fig:cavity_stab] with error ranges. it can be seen that there is a clear distinction between the bkg and the dugks methods. for the bkg scheme, the computation is unstable at moderately large even though , while for the dugks the stability is almost unaffected by the cfl number as long as . this observation conforms to the analysis in sec. [sec:numerical] that the numerical stability is also affected by the treatment of the collision term in the evaluation of the numerical flux. computing the collision term implicitly in both eq. and eq. makes the dugks a rather robust scheme.
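to make the role of the implicit collision treatment concrete, here is a minimal sketch of the transformed bgk collision update that both schemes share in spirit (the trapezoidal collision integral made explicit through the variable transformation of sec. 2), written for the standard d2q9 lattice; the variable names are placeholders and the snippet is an illustration, not the authors' code:

```python
import numpy as np

# standard d2q9 velocities (lattice units, c = 1) and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0                       # lattice sound speed squared (= rt)

def feq(rho, u):
    # second-order low-mach expansion of the maxwellian, as described in sec. 2
    cu = c @ u
    return rho * w * (1.0 + cu/cs2 + 0.5*(cu/cs2)**2 - 0.5*(u @ u)/cs2)

def collide(f_tilde, tau, dt):
    # trapezoidal (implicit) bgk collision made explicit by the transformation
    # f_tilde = f - 0.5*dt*omega; since collisions conserve mass and momentum,
    # the moments of f_tilde equal those of f, so feq can be built from f_tilde.
    rho = f_tilde.sum()
    u = (f_tilde @ c) / rho
    s = 2.0 * dt / (2.0 * tau + dt)   # effective relaxation factor
    return f_tilde + s * (feq(rho, u) - f_tilde)

# usage: one collision at a single cell with a time step much larger than tau
f0 = feq(1.0, np.array([0.05, 0.0]))
print(collide(f0, tau=1e-3, dt=5e-2))
```

the effective relaxation factor stays bounded below 2 for any time step, which is one way to see why neither scheme is restricted to time steps smaller than the collision time.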
figure [fig:cavity_stab]: maximum values of for stable computations of the cavity flow.

in the cavity flow, it is observed that the bkg scheme fails to capture the boundary layer near the top wall of the cavity accurately at large reynolds numbers. in this subsection, we use the laminar flow over a flat plate as a stand-alone case to check this phenomenon and thereby evaluate the dissipation characteristics of the bkg scheme and the dugks. the flow configuration of this problem is sketched in fig. [fig:bl_mesh]. a uniform flow with horizontal velocity flows past a flat plate of length . this steady problem has an analytical self-similar blasius solution. the reynolds number is defined as , where is the kinematic viscosity. in the simulations, we set , and . the boundary layer is very thin at such a high reynolds number, so non-uniform structured meshes stretched in the vertical direction are employed (fig. [fig:bl_mesh]). the cell size along the direction increases with a ratio , and the height of the first layer is . the grid number in the direction is adjusted according to to make sure the height of the computational domain is just beyond 50 . the cell size in the direction is refined at the leading edge of the plate, with , to account for the singularity of the flow behavior there. the increasing ratios of the cell size downstream and upstream of the leading edge are and , respectively. the total cell number in the direction is 120, with 80 cells distributed on the plate. a free-stream condition is applied at the left and top boundaries. an outflow boundary condition is applied at the right boundary, and a symmetry boundary condition is used on the section before the plate at the bottom boundary. the no-slip boundary condition is imposed at the bottom wall and is realized by the method described in sec. [sec:wallboundary].

we simulate the flow with different mesh resolutions by adjusting the parameter from to . the cfl number is fixed at 0.5. the velocity profiles at and predicted by the bkg and dugks methods, together with the blasius solutions, are shown in figs. [fig:bl_u]-[fig:bl_v]. the horizontal velocity is scaled by , and the vertical velocity is scaled by , where is the local reynolds number defined by .

from these results, we can observe that the boundary layer is captured accurately by the dugks with all three meshes; in particular, with the coarsest mesh ( ) there are only 4 cells in the boundary layer at . on the other hand, the bkg scheme cannot give satisfactory results even with the finest mesh ( ), as shown in fig. [fig:bl_u](a), fig. [fig:bl_u](b), fig. [fig:bl_v](a) and fig. [fig:bl_v](b). it is also observed that the results of the bkg scheme are quite sensitive to the mesh employed. these results suggest again that the dugks is more robust than the bkg scheme. as in the cavity flow case, we reduce the time step in the bkg simulation to examine the effect of the time step. the computation is carried out on the mesh with and the cfl number varies from to . the velocity profiles are presented in fig. [fig:bl_bkg_cfl]. it can be seen that the use of a small time step improves the accuracy, but the deviations from the blasius solution are still obvious even with cfl=0.01.

in this paper, the performance of two kinetic schemes, i.e., the bkg scheme and the dugks, is compared. both of them remove the time step restriction that is commonly seen in many off-lattice boltzmann schemes.
a theoretical analysis in the finite-volume framework demonstrates that the two methods differ only in the construction of the numerical flux. the bkg scheme treats the collision integral with a one-point quadrature when integrating the bgk equation along the characteristic line to evaluate the numerical flux, while the dugks computes it with the trapezoidal quadrature. consequently, the dugks is more accurate and stable than the bkg scheme. the numerical results of three test cases, including unsteady and steady flows, confirm that the dugks is more accurate and stable than the bkg scheme at the same computational settings, especially for high reynolds number flows. it is also observed that the dugks is stable as long as , while the stability of the bkg scheme degrades quickly as the cfl number goes beyond 0.5. we attribute this to the implicit treatment of the collision term in the dugks when evaluating the numerical flux. furthermore, the results show that the dugks is less sensitive to the mesh resolution than the bkg method. the numerical results also demonstrate that the bkg scheme is about one time faster than the dugks on the same mesh. however, it should be noted that the latter can achieve an accurate solution on a much coarser mesh, suggesting that it can be more efficient for flow computations than the bkg scheme. in summary, the theoretical analysis and numerical results demonstrate that the dugks can serve as an efficient method for simulating continuum flows, although it is not limited to such flow regimes.

the authors thank prof. kun xu for many helpful discussions, and thank dr. weidong li for providing the benchmark data of the laminar boundary layer. this study is financially supported by the national science foundation of china (grant no. 51125024) and the fundamental research funds for the central universities (grant no. 2014ts119).

a. bardow, i.v. karlin, a.a. gusev, general characteristic-based algorithm for off-lattice boltzmann simulations, europhys. lett., 75 (2006) 434.

guo, k. xu, r.j. wang, discrete unified gas kinetic scheme for all knudsen number flows: low-speed isothermal case, phys. rev. e, 88 (2013) 033305.

r. mei, w. shyy, on the finite difference-based lattice boltzmann method in curvilinear coordinates, j. comput. phys., 143 (1998) 426-448.

g. peng, h. xi, c. duncan, s.-h. chou, finite volume scheme for the lattice boltzmann method on unstructured meshes, phys. rev. e, 59 (1999) 4675-4682.

t. lee, c.-l. lin, a characteristic galerkin method for discrete boltzmann equation, j. comput. phys., 171 (2001) 336-356.

t. lee, c.-l. lin, an eulerian description of the streaming process in the lattice boltzmann equation, j. comput. phys., 185 (2003) 445-471.

guo, t.s. zhao, explicit finite-difference lattice boltzmann method for curvilinear coordinates, phys. rev. e, 67 (2003) 066709.

n. rossi, s. ubertini, g. bella, s. succi, unstructured lattice boltzmann method in three dimensions, int. j. numer. meth. fluids, 49 (2005) 619-633.

a. bardow, i.v. karlin, a.a. gusev, multispeed models in off-lattice boltzmann simulations, phys. rev. e, 77 (2008) 025701.

patil, k.n. lakshmisha, finite volume tvd formulation of lattice boltzmann simulation on unstructured mesh, j. comput. phys., 228 (2009) 5262-5279.

patil, k.n. lakshmisha, two-dimensional flow past circular cylinders using finite volume lattice boltzmann formulation, int. j. numer.
meth. fluids, 69 (2012) 1149-1164.

mcnamara, a.l. garcia, b.j. alder, stabilization of thermal lattice boltzmann models, j. stat. phys., 81 (1995) 395-408.

guo, c.g. zheng, t.s. zhao, a lattice bgk scheme with general propagation, j. sci. comput., 16 (2001) 569-585.

rao, l.a. schaefer, numerical stability of explicit off-lattice boltzmann schemes: a comparative study, j. comput. phys., 285 (2015) 251-264.

guo, r.j. wang, k. xu, discrete unified gas kinetic scheme for all knudsen number flows. ii. thermal compressible case, phys. rev. e, 91 (2015) 033313.

p. wang, l.h. zhu, z.l. guo, k. xu, a comparative study of lbe and dugks methods for nearly incompressible flows, commun. comput. phys., 17 (2015) 657-681.

zhu, z.l. guo, k. xu, discrete unified gas kinetic scheme on unstructured meshes, arxiv preprint arxiv:1503.07374, (2015).

bhatnagar, e.p. gross, m. krook, a model for collision processes in gases. i. small amplitude processes in charged and neutral one-component systems, phys. rev., 94 (1954) 511.

guo, c. shu, lattice boltzmann method and its applications in engineering, world scientific, singapore, 2013.

c. hirsch, numerical computation of internal and external flows: the fundamentals of computational fluid dynamics, second ed., butterworth-heinemann, burlington, 2007.

leveque, finite volume methods for hyperbolic problems, cambridge university press, cambridge, 2002.

kim, h. pitsch, i.d. boyd, accuracy of higher-order lattice boltzmann methods for microscale flows with finite knudsen numbers, j. comput. phys., 227 (2008) 8655-8671.

t. ohwada, on the construction of kinetic schemes, j. comput. phys., 177 (2002) 156-175.

s. chen, k. xu, a comparative study of an asymptotic preserving scheme and unified gas-kinetic scheme in continuum flow limit, j. comput. phys., 288 (2015) 52-65.

k. xu, z. li, dissipative mechanism in godunov type schemes, int. j. numer. meth. fluids, 37 (2001) 1-22.

k. xu, a gas-kinetic bgk scheme for the navier-stokes equations and its connection with artificial dissipation and godunov method, j. comput. phys., 171 (2004) 289-335.

u. ghia, k.n. ghia, c.t. shin, high-re solutions for incompressible flow using the navier-stokes equations and a multigrid method, j. comput. phys., 48 (1982) 387-411.
the general characteristic-based off-lattice boltzmann scheme (bkg) proposed by bardow et al. and the discrete unified gas kinetic scheme (dugks) are two methods that successfully overcome the time step restriction by the collision time, which is commonly seen in many other kinetic schemes. basically, the bkg scheme is a time-splitting scheme, while the dugks is an un-split finite volume scheme. in this work, we first perform a theoretical analysis of the two schemes in the finite volume framework by comparing their numerical flux evaluations. it is found that the effects of the collision term are considered in the reconstruction of the cell-interface distribution function in both schemes, which explains why they can overcome the time step restriction and can give accurate results even when the time step is much larger than the collision time. the difference between the two schemes lies in the treatment of the integral of the collision term: bardow's scheme uses the rectangular rule while the dugks uses the trapezoidal rule. the performance of the two schemes, i.e., accuracy, stability, and efficiency, is then compared by simulating several two-dimensional flows, including the unsteady taylor-green vortex flow, the steady lid-driven cavity flow, and the laminar boundary layer problem. it is observed that the dugks gives more accurate results than the bkg scheme. furthermore, the numerical stability of the bkg scheme decreases as the courant-friedrichs-lewy (cfl) number approaches 1, while the stability of the dugks is apparently not affected by the cfl number as long as . it is also observed that the bkg scheme is about one time faster than the dugks scheme with the same computational mesh and time step.
all of us have tried, at one time or another, to balance a rod on our index finger (that is, to solve the inverted pendulum problem). on the other hand it is much harder, especially if we close our eyes, to balance a double inverted pendulum. control theory makes it possible to do so, provided we have a good mathematical model. a control system is a dynamical system, evolving in time, on which we can act through a command or control function. a computer that allows a user to execute a series of commands, an ecosystem on which we can act by favouring one species or another, the nervous tissues that form a network controlled by the brain and transform stimuli coming from the outside into actions of the organism, a robot that must carry out a very precise task, a vehicle on which we act through the accelerator, brake and clutch pedals and which we drive with the help of a steering wheel, a satellite or a spacecraft: all of these are examples of control systems, which can be modelled and studied by the theory of control systems. control theory analyses the properties of such systems with the aim of ``steering'' them from a given initial state to a given final state, possibly respecting certain constraints. the origin of such systems can be very diverse: mechanical, electrical, biological, chemical, economic, etc. the objective may be to stabilise the system, making it insensitive to certain perturbations (the _stabilisation_ problem), or to determine the solutions that are optimal with respect to a given optimisation criterion (the _optimal control_ problem). to model control systems we may resort to differential, integral, functional, finite-difference or partial differential equations, deterministic or stochastic, etc. for this reason control theory draws on, and contributes to, numerous domains of mathematics ( , , ). the structure of a control system is represented by the interconnection of certain simpler elements that form sub-systems, through which _information_ flows.
the dynamics of a control system define the possible transformations of the system, which occur in time in a deterministic or random way. the examples already given show that the structure and the dynamics of a control system can have very different meanings. in particular, the concept of a control system can describe discrete, continuous or hybrid transformations or, more generally, transformations on a _time scale_ or _measure chain_. a control system is said to be _controllable_ if we can ``steer'' it (in finite time) from a given initial state to a prescribed final state. concerning the controllability problem, kalman proved in 1949 an important result that characterises the controllable linear systems of finite dimension (theorem [condkalman]). for nonlinear systems the mathematical problem of controllability is much harder and remains an active field of research to this day. once the controllability property is ensured, we may wish to go from an initial state to a final state while minimising or maximising a certain criterion. we then have an optimal control problem. for example, a driver making the lisboa-porto journey may want to travel in minimum time. in that case he chooses the route along the a1 motorway, and one consequence of such a choice is the payment of tolls. another optimal control problem is obtained if the minimisation criterion is the cost of the trip. the solution of that problem involves choosing national roads, which are free of charge but take much longer to reach the destination (according to the web site http://www.google.pt/maps, the motorway route takes 3h and the national road route takes 6h45m). an optimal control problem can be formulated as follows. consider a control system whose state at a given instant is represented by a vector . the controls are functions or parameters, usually subject to constraints, which act on the system in the form of external forces, thermal or electrical potentials, investment programmes, etc., and affect its dynamics. an equation is given, typically a system of differential equations, relating the variables and modelling the dynamics of the system. it is then necessary to use the available information and the characteristics of the problem to construct suitable controls that will make it possible to achieve a precise objective. for example, when we travel in our car we act according to the highway code (at least that is advisable) and carry out a travel plan in order to reach our destination. constraints are imposed on the trajectory or on the controls, and it is essential to take them into account. we fix a criterion that allows us to measure the quality of the chosen process; it usually takes the form of a functional depending on the state of the system and on the controls. beyond the previous conditions, we also seek to minimise (or maximise) this quantity. an example already given is that of moving from one point to another in minimum time. note that the shape of the optimal trajectories depends strongly on the optimisation criterion. for example, when parking our car it is easy to see that the trajectory followed differs according to whether we want to carry out the manoeuvre in minimum time (which is risky) or minimising the amount of fuel spent in the operation.
optimal control theory is of great importance in the aerospace domain, namely in guidance problems, aero-assisted orbit transfers, the development of recoverable satellite launchers (the financial aspect being very important here), and atmospheric re-entry problems, such as the famous _mars sample return_ project of the european space agency (esa), which consists of sending a spacecraft to the planet mars with the objective of bringing back martian samples (figure [espaco]). the calculus of variations was born in the seventeenth century with the contributions of bernoulli, fermat, leibniz and newton. some mathematicians, such as h.j. sussmann and j.c. willems, argue that the origin of optimal control coincides with the birth of the calculus of variations, in 1697, the date of publication of the solution of the brachistochrone problem by the mathematician johann bernoulli. others go even further, pointing out that newton's aerodynamic resistance problem, posed and solved by isaac newton in 1686 in his _principia mathematica_, is a genuine optimal control problem. in 1638 galileo studied the following problem: determine the curve along which a small sphere rolls under the action of gravity, without initial velocity and without friction, from a point to a point in minimum travel time (a minimum-time slide, see figure [pontoaeb]). this is the brachistochrone problem (from the greek _brakhistos_, ``the shortest'', and _chronos_, ``time''). galileo thought (wrongly) that the curve sought was an arc of a circle. he observed, however, correctly, that the straight line segment is not the path of shortest time. in 1696, jean bernoulli posed this problem as a challenge to the best mathematicians of his time. he himself found the solution, as did his brother jacques bernoulli, newton, leibniz and the marquis de l'hopital. the solution is an arc of a cycloid starting with a vertical tangent. skate ramps, as well as the fastest slides in water parks, have the shape of a cycloid (figure [cicloide]). optimal control theory emerged after the second world war, responding to practical engineering needs, namely in the fields of aeronautics and flight dynamics. the formalisation of this theory raised several new questions. for example, optimal control theory motivated the introduction of new concepts of generalised solutions in the theory of differential equations and gave rise to new existence results for trajectories. as a general rule, optimal control theory is considered to have appeared in the late fifties in the former soviet union, in 1956, with the formulation and proof of the pontryagin maximum principle by l.s. pontryagin (figure [pontryagin]) and his group of collaborators: v.g. boltyanskii, r.v. gamkrelidze and e.f. mishchenko. pontryagin and his colleagues introduced an aspect of primary importance: they generalised the theory of the calculus of variations to curves taking values in closed sets (with boundary). optimal control theory is closely linked to classical mechanics, in particular to the variational principles (fermat's principle, the euler-lagrange equations, etc.).
in fact, the pontryagin maximum principle is a generalisation of the euler-lagrange and weierstrass necessary conditions. some strong points of the new theory were the discovery of the dynamic programming method, the introduction of functional analysis into the theory of optimal systems, and the discovery of connections between the solutions of an optimal control problem and the results of lyapunov stability theory. later came the foundations of the theory of stochastic control and of filtering in dynamical systems, game theory, the control of partial differential equations, and hybrid control systems, to name some among the many areas of current research. optimal control theory is much simpler when the control system under consideration is linear. nonlinear optimal control will be addressed in section [sec:co:nl]. the linear theory is still, to this day, the most used and best known in engineering and its applications. let (we denote by the set of matrices with real entries); ; an interval of ; and a measurable function ( ) such that . the existence theorem for solutions of differential equations ensures the existence of a unique absolutely continuous map ( ) such that . this map depends on the control ; by changing the function we obtain another trajectory in (figure [trajectoria]). in this context two questions arise naturally: (i) given a point , does there exist a control such that the trajectory associated with that control joins to in finite time? (figure [probcontrolabilidade]) this is the _controllability problem_. (ii) once controllability (the previous question) is ensured, does there exist a control that _minimises the travel time from to_? (figure [probcontroloptimo]) we then have an optimal control problem (of minimum time). the theorems that follow answer these questions. the respective proofs are well known and can easily be found in the literature ( , , ). considering the linear control system , we begin by introducing a set of great importance: _the accessible set_.
the set of points accessible from in time , denoted , is defined by
\[
\left\{\, x_1 \in \r^n \;\middle|\; \exists\, u \in l^1([0,t],i),\ \exists\, x : \r \to \r^n \in ac \text{ with } x(0) = x_0,\ \forall s \in [0,t]\ \dot{x}(s) = a x(s) + b u(s),\ x(t) = x_1 \,\right\} .
\]
in words, is the set of endpoints of the solutions of at time , as the control is varied (figure [imgacceset]). [thm:1] let , compact and . then, for every , . the solution of is . we note that if , , that is, if we start from the origin, then the expression for simplifies: it is linear in . this observation leads us to the following proposition. suppose that and . then, * is a vector subspace of . moreover, * .
the set is the set of points accessible (in arbitrary time) from the origin. the set is a vector subspace of . the control system is said to be controllable if for every there exists a control such that the associated trajectory joins to in finite time (figure [controlabilidade]). more formally, the control system is said to be controllable if \[\forall x_0, x_1 , \ \exists t , \ \exists u : [0,t] \to i \, \in l^1 , \ \exists x : [0,t] \to \r^n \,\, | \,\, \begin{cases} \dot{x} = ax + bu \,, \\ x(0) = x_0 \,, \\ x(t) = x_1 \,. \end{cases}\] the following theorem gives us a necessary and sufficient condition for controllability, called the _ kalman condition _.
[condkalman] the system is controllable if and only if the matrix ( b , ab , \dots , a^{n-1} b ) has full rank (that is, rank equal to the dimension n of the state space).
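as a quick illustration of the kalman condition above, the following python sketch builds the controllability matrix ( b , ab , \dots , a^{n-1} b ) with numpy and checks whether its rank equals the state dimension; the matrices a and b used in the example are arbitrary choices for demonstration, not taken from the text.

```python
import numpy as np

def kalman_controllable(A, B, tol=1e-9):
    """Return True if (A, B) satisfies the Kalman rank condition."""
    A = np.atleast_2d(A)
    B = np.atleast_2d(B)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # next block: A^k B
    C = np.hstack(blocks)               # controllability matrix (B, AB, ..., A^{n-1}B)
    return np.linalg.matrix_rank(C, tol=tol) == n

# example: double integrator x1' = x2, x2' = u (controllable)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(kalman_controllable(A, B))  # True
```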
we begin by formalizing, with the help of the accessible set, the notion of minimum time. let . suppose that is accessible from , that is, suppose that there exists at least one trajectory joining to . among all the trajectories joining to , we would like to characterize the one that does so in minimum time (figure [tempominimo]). if is the minimum time, then for every , (indeed, if that were not the case, would be accessible from in a time smaller than , and would not be the minimum time). consequently, the value of is well defined since, by theorem [thm:1], varies continuously with , hence is closed in . in particular the infimum over is a minimum. the time is the first instant for which contains (figure [tempominimo2]).
on the other hand, we necessarily have : indeed, if belonged to the interior of , then for close to , would still belong to , since varies continuously with . this contradicts the fact that is the minimum time. these observations give a geometric view of the notion of minimum time and lead us to the following definition: let , i) ] is optimal if and only if , where is the inner product of and is a solution of the equation . the initial condition depends on . since it is not directly known, the use of theorem [principiomaxlinear] is mostly indirect. let us look at an example. consider a point mass attached to a spring, whose motion is restricted to an axis (figure [mola]). the point mass is pulled away from the origin by a force that we assume equal to , where is the length of the spring at rest. we apply to this point mass an external horizontal force . newton's second law tells us that the resulting applied force is directly proportional to the product of the inertial mass and the acceleration it acquires, that is, . the basic laws of physics also tell us that all forces are bounded. we impose the following _ restriction _ on the external force: . this means that the force can take values only in the _ closed _ interval ] , among measurable and bounded functions. given a map , we denote by the cost of a trajectory associated with and defined on ] such that the trajectory associated with , a solution of , satisfies . this leads us to define: let .
the input-output map in time of the control system initialized at is the map : , where is the set of admissible controls. in other words, the input-output map in time associates with a control the end point of the trajectory associated with . an important question in control theory is the study of this map, describing its image, its singularities, its regularity, and so on. the answer to these questions depends, of course, on the starting space and on the form of the system (on the function ). in full generality we have the following result ( , , ). consider the system where is `` smooth '' and ) ] such that the trajectory starting from is defined on ] . the control is said to be singular on ] if the fréchet differential of the input-output map at the point is not surjective. otherwise we say that it is regular. [propeaberta] let and be fixed. if is a regular control, then is an open map in a neighbourhood of . the accessible set in time for the system , denoted by , is the set of the end points at time of the solutions of the system starting from . in other words, it is the image of the input-output map in time . the system is said to be controllable if . arguments of the implicit function theorem type allow one to deduce _ local controllability _ results for the original system from the study of the controllability of the linearized system ( , , ). for example, from the controllability theorem in the linear case we deduce the following proposition. consider the control system where . let and . if , then the nonlinear system is locally controllable at . in general the controllability problem is difficult. different approaches are possible. some make use of analysis, others of geometry, others still of algebra. the controllability problem is linked, for example, to the question of knowing when a certain semigroup acts transitively. there are also techniques to show, in certain cases, that controllability is global. one of them, an important one, is the so-called `` enlargement technique '' ( ). besides a control problem, we also consider an optimization problem: among all the solutions of the system joining to , find a trajectory that minimizes (or maximizes) a certain _ cost _ function. such a trajectory, if it exists, is said to be _ optimal _ for that cost. the existence of optimal trajectories depends on the regularity of the system and of the cost. for a general existence statement, see , . it may also happen that an optimal control does not exist in the class of controls considered, but exists in a larger space. this question takes us to another important area: the study of the regularity of optimal trajectories. francis clarke and richard vinter made a very important contribution in this area, introducing the systematic study of the lipschitzian regularity of minimizers in linear optimal control. general results on the lipschitzian regularity of minimizing trajectories for nonlinear control systems can be found in . given an optimal control problem for which the conditions of existence and regularity of the optimal solution are guaranteed, how does one determine the optimal processes? the answer to this question is given by the celebrated _ pontryagin maximum principle _. for an in-depth study of necessary optimality conditions we suggest . we begin by showing that a singular trajectory can be parametrized as the projection of a solution of a hamiltonian system subject to a _ constraint equation _.
consider the _ hamiltonian _ of the system: , where denotes the usual scalar product of . [propcontsing] let be a singular control and the singular trajectory associated with that control on ] , such that the following equations are satisfied for almost every ] . if is not surjective, then there exists a row vector such that \[\langle \bar{p} , de_t(u) \cdot v \rangle = \bar{p} \int_0^t m(t)\, m^{-1}(s)\, b(s)\, v(s)\, ds = 0 \,.\] consequently, . let , ] ; the of proposition [propcontsing] is called the _ adjoint vector _ of the system. we now look for necessary optimality conditions. consider the system . the controls are defined on ] such that satisfies the hamiltonian system and the stationarity condition , where . theorem [teo:hestenes] has its genesis in the works of graves of 1933, having first been obtained by hestenes in 1950. it is a particular case of the pontryagin maximum principle, in which no restrictions on the values of the controls are considered ( , with ). writing , where is the dual variable of the cost and , we have that satisfies the system and , where . note that , that is, is constant on ] of . let us denote by the set of admissible controls whose associated trajectories join an initial point of to a final point of . for such a control we define the cost , where is of class . if the control is optimal on ] , there exists an absolutely continuous , called the adjoint vector, where is a constant that is negative or zero, such that the optimal trajectory associated with the control satisfies, at almost every point of ] , . if, moreover, and / or are manifolds of with tangent spaces at and at , then the adjoint vector satisfies the following transversality conditions: . in theorem [thm:pmp] the final time is free. if we impose a fixed final time equal to , that is, if we seek, starting from , to reach the target in time while minimizing the cost on ] (fixed-time problem), then the theorem remains true, except for the condition , which must be replaced by (with a constant not necessarily zero). the minimum-time problem corresponds to the case in which . if the target set is equal to the whole of (problem with free final endpoint), then the transversality condition at the final instant tells us that . the pontryagin maximum principle is a deep and important result of contemporary mathematics, with countless applications in physics, biology, management, economics, social sciences, engineering, etc. ( , , ). let us reconsider the (nonlinear) spring example, modelled by the control system , where we admit as controls all piecewise continuous functions such that . the goal is to take the spring from an arbitrary initial position to its equilibrium position _ in minimum time _. let us apply the pontryagin maximum principle to this problem. the hamiltonian has the form . if is an extremal, then . note that, since the adjoint vector must be nontrivial, cannot vanish on an interval (otherwise we would also have and, by the vanishing of the hamiltonian, we would also have ). on the other hand, the maximum condition gives us . in particular, the optimal controls are successively equal to , that is, the _ bang-bang _ principle holds ( , , ). concretely, we can state that . reversing time, our problem is equivalent to the minimum-time problem for the system . given the initial conditions and (initial position and velocity of the mass), the problem is easily solved.
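the following python sketch illustrates the bang-bang structure discussed above on a normalized mass-spring model with dynamics x1' = x2, x2' = -x1 + u and |u| <= 1; the model, the integration scheme and the initial adjoint guess are illustrative assumptions, not taken from the text. the state and adjoint equations of the maximum principle are integrated forward and the control is chosen at each step as u = sign(p2), the value that maximizes the hamiltonian.

```python
import numpy as np

def simulate_extremal(x0, p0, t_final=10.0, dt=1e-3):
    """Integrate state and adjoint of the normalized spring under the
    maximum-principle control u = sign(p2) (bang-bang)."""
    x1, x2 = x0
    p1, p2 = p0
    traj = []
    for _ in range(int(t_final / dt)):
        u = 1.0 if p2 >= 0 else -1.0          # maximizes H = p1*x2 + p2*(-x1 + u)
        # state equations: x1' = x2, x2' = -x1 + u
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1 + u)
        # adjoint equations: p1' = -dH/dx1 = p2, p2' = -dH/dx2 = -p1
        p1, p2 = p1 + dt * p2, p2 + dt * (-p1)
        traj.append((x1, x2, u))
    return np.array(traj)

# example: start away from equilibrium with an arbitrary adjoint guess
path = simulate_extremal(x0=(2.0, 0.0), p0=(0.0, 1.0))
switches = int(np.sum(np.abs(np.diff(path[:, 2])) > 0))  # number of control switches
print(f"final state ~ ({path[-1, 0]:.3f}, {path[-1, 1]:.3f}), switches = {switches}")
```

this is not a solution of the boundary-value problem of reaching the origin; it only shows how an extremal and its switching times can be generated once an initial adjoint vector is guessed.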
the interested reader can find in a resolution carried out with the computer algebra system maple. on the use of maple in the calculus of variations and optimal control, see . the mathematical theory of systems and control is taught at the authors' institutions, in the mathematics departments of the university of aveiro and of the university of orléans, france. in aveiro, within the master programme _ matemática e aplicações _, specialization in _ matemática empresarial e tecnológica _, and within the _ programa doutoral em matemática e aplicações _, the latter being a partnership between the mathematics departments of the university of aveiro and the university of minho; in orléans, within the `` automatic control '' option of the master programme passion. the first author was a master student in aveiro and is currently doing a phd in aveiro and orléans in the area of optimal control, with the financial support of fct, grant sfrh / bd/27272/2006. l. s. pontryagin, v. g. boltyanskii, r. v. gamkrelidze, e. f. mishchenko. _ the mathematical theory of optimal processes _, translated from the russian by k. n. trirogoff; edited by l. w. neustadt, interscience publishers john wiley & sons, inc., new york, 1962.
in this work we point out motivations, applications and relations of control theory with other areas of mathematics. we present a brief historical survey of optimal control, from its origins in the calculus of variations and in classical control theory up to the present day, giving special emphasis to the pontryagin maximum principle. * keywords: * optimal control, pontryagin maximum principle, applications of the mathematical theory of control systems.
since cloud computing has become increasingly accepted as one of the most promising computing paradigms in industry, providing cloud services also becomes an emerging business. an increasing number of providers have started to supply commercial cloud services with different terminologies, definitions, and goals. as such, evaluation of those cloud services would be crucial for many purposes ranging from cost-benefit analysis for cloud computing adoption to decision making for cloud provider selection. however, evaluation of commercial cloud services is different from and more challenging than that of other computing systems. there are three main reasons for this: * in contrast with traditional computing systems, the cloud is relatively chaotic. there is still a lack of a standard definition of cloud computing, which inevitably leads to market hype and also skepticism and confusion. as a result, it is hard to delimit the scope of cloud computing, not to mention to give a specific guideline to evaluate different commercial cloud services. consequently, although we have already learned rich lessons from the evaluation of traditional computing systems, it is still necessary to accumulate evaluation experiences in the cloud computing domain. * evaluation results could be invalid soon after the evaluation and then not reusable. cloud providers may continually upgrade their hardware and software infrastructures, and new commercial cloud services may gradually enter the market. hence, previous evaluation results can quickly become out of date as time goes by. for example, at the time of writing, google is moving its app engine service from a cpu usage model to an instance model; amazon is still acquiring additional sites for cloud data center expansion; while ibm just offered a public and commercial cloud. as a result, customers would have to continually re-design and repeat evaluations when employing commercial cloud services. * the back-ends (e.g. configurations of physical infrastructure) of commercial cloud services are not controllable from the perspective of customers. unlike consumer-owned computing systems, customers have little knowledge and control over the precise nature of cloud services even in the locked down environment. evaluations in the context of public cloud computing are then inevitably more challenging than those for systems where the customer is in direct control of all aspects. in fact, it is natural that the evaluation of uncontrollable systems would be more complex than that of controllable ones. therefore, particularly for commercial cloud services, it is necessary to find a way to facilitate evaluation, and to make existing evaluation efforts reusable and sustainable. this paper suggests an expert system for cloud evaluation to address the aforementioned issues. this expert system concentrates on processes and experiences rather than results of cloud services evaluation. when it comes to the general implementation process of cloud services evaluation, we can roughly draw six common steps following the systematic approach to performance evaluation of computer systems, as specified below and illustrated in figure [ fig>1 ] (general process of an evaluation implementation). 1. first of all, the requirement should be specified to clarify the evaluation purpose, which essentially drives the remaining steps of the evaluation implementation. 2.
based on the evaluation requirement, we can identify the relevant cloud service features to be evaluated. 3. to measure the relevant service features, suitable metrics should be determined. 4. according to the determined metrics, we can employ corresponding benchmarks that may already exist or have to be developed. 5. before implementing the evaluation experiment, the experimental environment should be constructed. the environment includes not only the cloud resources to be evaluated but also assistant resources involved in the experiment. 6. given all the aforementioned preparation, the evaluation experiment can be done with human intervention, which finally satisfies the evaluation requirement. through decomposing and analyzing individual evaluation experiments following the six steps, we have collected and arranged data of detailed evaluation processes. based on the primary evaluation data, general knowledge about evaluating commercial cloud services can be abstracted and summarized. after manually constructing the _ data / knowledge base _, we can design and implement an _ inference engine _ to realize knowledge and data reasoning respectively. as such, given particular enquiries, the proposed expert system is not only able to supply common evaluation suggestions directly, but also able to introduce similar experimental practices to users for reference. the remainder of this paper is organized as follows. section [ iii ] specifies the establishment of the _ data base _, _ knowledge base _, and _ inference engine _ in this expert system. section [ iv ] employs three samples to show different application cases of this expert system, which also gives our current work a conceptual validation. conclusions and some future work are discussed in section [ v ]. similar to general expert systems, the expert system proposed in this paper also comprises an _ interface _ with which users interact, an _ inference engine _ that performs knowledge / data reasoning, and a _ knowledge base _ that stores common and abstracted knowledge about evaluation of commercial cloud services. however, we did not employ a specific knowledge acquisition module for building up the _ knowledge base _ in this case. at the current stage, instead of obtaining knowledge by interviewing external experts, we extracted cloud evaluation knowledge only from the collected data of published experimental studies. moreover, for the convenience of acquiring experimental references, a _ data base _ is maintained in this expert system to store initially-analyzed details of existing evaluation experiments. the complete structure of this expert system is illustrated in figure [ fig>3 ] (structure of this expert system). considering that the _ interface _ of this expert system can be designed last, in the future, this paper only specifies how we are realizing the _ data / knowledge base _ and _ inference engine _. to collect and initially analyze existing evaluation practices, we employed the systematic literature review (slr) as the main approach. slr is the methodology applied for evidence-based software engineering (ebse), and has been widely accepted as a standard and systematic way to investigate specific research questions by identifying, assessing, and analyzing published primary studies. according to the guidelines of slr, an entire slr instance mainly requires three stages, namely preparation, implementation, and summarization.
after adjusting some steps, here we list a rough slr procedure suitable for this work : + * _ planning review : _ * * justify the necessity of carrying out this slr .* identify research questions for this slr . *develop slr protocol by defining search strategy , selection criteria , quality assessment standard , and data extraction schema for conducting review stage . * _ conducting review : _ * * exhaustively search relevant primary studies in the literature .* select relevant primary studies and assess their qualities for answering research questions . *extract useful data from the selected primary studies .* arrange and synthesize the initial results of our study into review notes .* _ reporting review : _ * * analyze and interpret the initial results together with review notes into interpretation notes . *finalize and polish the previous notes into an slr report . due to the limit of space ,the detailed slr process is not elaborated in this paper . with the pre - defined search strategy and rigorous selection criteria ,we have identified 46 relevant primary studies covering six commercial cloud providers from a set of popular digital publication databases ( the studies are listed online for reference : http://www.mendeley.com/groups/1104801/slr4cloud/papers/ ) .note that this work focused only on the commercial cloud services to make our effort closer to industry s needs .moreover , this study paid attention to infrastructure as a service ( iaas ) and platform as a service ( paas ) without concerning software as a service ( saas ) .since saas is not used to further build individual business applications , various saas implementations may comprise infinite and exclusive functionalities to be evaluated , which could make our slr out of control even if adopting extremely strict selection / exclusion criteria .after exhaustively identifying evaluation practices , independent evaluation experiments can be first isolated , and then be broken into atomic components by using the data extraction schema .for example , more than 500 evaluation metrics including duplications were finally extracted from the identified cloud services evaluation studies . the summarized metrics in turn can help facilitate metric selection in future evaluation work .in other words , every single isolated experiment is finally represented as a set of elements , which essentially facilitates knowledge mining among different elements across different evaluation steps .therefore , the conducting review stage in this slr also indicates the procedure of establishing the _ data base _ of the proposed expert system .considering that this expert system gives evaluation suggestions according to users inputs , the stored knowledge is suitable to be represented as rules each of which is composed of antecedent ( to meet input ) and consequent ( to be output ) . to efficiently obtain rule - based knowledge from the _ data base _, we adopted association - rule mining to inspire the method of knowledge mining . by analogy, we can find that the rule - based knowledge used in this expert system has similar characteristics to association rules . as we know, different association rules can express different regularities that underlie a dataset .in other words , association rules have freedom to use and predict any combination of items in a predefined set . 
as for the expert system, it is supposed that users can start from any particular evaluation step , or from any combination of detailed evaluation steps , to enquire about different evaluation experiences . when mining association rules in a dataset , we use predefined _ coverage _ to seek combinations of items that appear frequently , and then use _ accuracy _ to extract suitable rules among each of the identified item combinations .note that we name an attribute - value pair as an item in the dataset .the _ coverage _ of a particular item combination refers to the number of data instances comprising the combination , while the _ accuracy _ of a candidate rule is the proportion of the correctly - predicted data instances to the applied data instances .similarly , we followed the same rule - induction procedure to mine evaluation knowledge from the collected experiment data .however , compared with the quantitative and programmable process of rule mining , the knowledge mining in this case has to involve more human interventions .although some evaluation step details ( e.g. the cloud service features ) can be pre - standardized , most of the experiment data ( e.g. requirement or experimental manipulation ) would be not able to be specified within an exactly same schema from the beginning . as a result , we had to manually extract common knowledge through abstracting specific descriptions of the raw data .for the convenience of the discussion , we briefly demonstrate the process of knowledge mining only from two - item sets .suppose the focus is now on the scalability evaluation of cloud services .we can initially list all the scalability - related two - item combinations , gradually abstract their descriptions , and then rationally classify or unify them into fewer groups . at last, common knowledge can be sought within each of the combination groups .here we only list four straightforward pieces of evaluation experience identified in our work . * * if service feature = __ scalability__then experimental manipulation = _ varying cloud resource with the same amount of workload _ * ( extracted from the change of cloud resource , without distinguishing that the resource was varied in terms of type or amount ) . * * if service feature = _ _ vertical scalability__then experimental manipulation = _ different types of cloud resource _ * ( extracted from the experiments each of which covers different types of service instances ) . * * if service feature = _ _ horizontal scalability__then experimental manipulation = _ different amount of cloud resource _ * ( extracted by focusing on the amount of the same type of cloud resource , no matter the resource is cpu core or service instance ) . * * if service feature = _ _scalability__then metric = _ speedup over a baseline _ * ( extracted by summarizing scalability - related metrics like pipeline performance speedup , computation speedup , and throughput speedup ) . in this expert system ,the _ inference engine _ is associated with both _ knowledge base _ and _ data base_. as such , the _ inference engine _ can not only perform knowledge reasoning but also supply similar experimental cases . 
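before turning to the inference engine, the coverage / accuracy procedure described above can be made concrete with a small sketch. the following python code mines two-item rules from a toy list of experiment records; the record fields, the toy data and the thresholds are illustrative assumptions, not the actual data base of the expert system.

```python
from itertools import combinations
from collections import Counter

# toy experiment records: each record is a dict of evaluation-step attributes
records = [
    {"service feature": "scalability", "manipulation": "vary resource, same workload"},
    {"service feature": "scalability", "manipulation": "vary resource, same workload"},
    {"service feature": "scalability", "manipulation": "vary workload"},
    {"service feature": "variability", "manipulation": "repeat at different times"},
]

MIN_COVERAGE, MIN_ACCURACY = 2, 0.6

def mine_two_item_rules(records):
    """Yield rules (antecedent -> consequent) meeting coverage and accuracy thresholds."""
    item_counts = Counter((k, v) for r in records for (k, v) in r.items())
    pair_counts = Counter()
    for r in records:
        for a, b in combinations(sorted(r.items()), 2):
            pair_counts[(a, b)] += 1
    for (a, b), cov in pair_counts.items():
        if cov < MIN_COVERAGE:
            continue  # combination does not appear frequently enough
        for antecedent, consequent in ((a, b), (b, a)):
            accuracy = cov / item_counts[antecedent]
            if accuracy >= MIN_ACCURACY:
                yield antecedent, consequent, cov, accuracy

for ant, cons, cov, acc in mine_two_item_rules(records):
    print(f"if {ant[0]} = '{ant[1]}' then {cons[0]} = '{cons[1]}' (coverage={cov}, accuracy={acc:.2f})")
```

in the actual knowledge base the abstraction of the rule antecedents and consequents is done manually, as noted above; the sketch only reproduces the quantitative filtering step.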
1. _ first-order rule reasoning in knowledge base: _ to facilitate the knowledge reasoning process, we proposed to enrich the representations of knowledge by bridging between some concepts or item combinations. the bridges can be viewed as common-sense knowledge that supplements the aforementioned, extracted knowledge. for example, we added two new rules * if service feature = _ vertical scalability _ " then service feature = _ scalability _ " * and * if service feature = _ horizontal scalability _ " then service feature = _ scalability _ " * to the four previous samples. in fact, the two new rules are always true although they are not generated by knowledge mining. benefiting from the knowledge bridges, we can employ the algorithm of learning first-order rules to conveniently reveal underlying rule-based knowledge that does not visibly exist in the _ knowledge base _. in this case, for instance, the expert system can give suggestions for evaluating vertical scalability by using a visible rule * if service feature = _ vertical scalability _ " then experimental environment = _ different types of cloud resource _ " *, and also by resorting to the scalability-related knowledge, as shown below. * experimental manipulation = _ varying cloud resource with the same amount of workload _ ". * experimental environment = _ different types of cloud resource _ ". * metric = _ speedup over a baseline _ ". 2. _ case retrieving in data base: _ as mentioned previously, in addition to evaluation knowledge, this expert system is also able to retrieve experimental cases to users for analogy. in fact, as one of the basic human reasoning processes, analogy is used by almost every individual on a daily basis to solve new problems based upon past experiences. the process of analogy generally follows the procedure of case-based reasoning (cbr), while one general cbr procedure comprises a four-stage cycle, as shown in figure [ fig>4 ] (general cyclical case-based reasoning process). in the general cyclical cbr process, an initial problem is described as a new case. following the new case, we can retrieve a case from the previous cases. the retrieved case is combined with the new case through reuse into a solved case. the revise process is then used to test the solution for the new case. finally, useful experience is retained for future reuse, and the dataset of previous cases will be updated by a new learned case, or by modification of some existing cases. when it comes to case retrieving, the essential issue is how to identify rational and similar cases to the new one. we proposed three modes of case retrieving in this expert system, namely precise mode, heuristic mode, and fuzzy mode. as the name suggests, under the precise mode, the expert system identifies similar evaluation experiments exactly following users' enquiries. for example, suppose a user is interested in the evaluation experiments with respect to horizontal scalability of cloud services; the expert system will only retrieve the experiment data with service feature = _ horizontal scalability _ ". in the worst case of precise mode, there would be no experiment record directly meeting a user's enquiry. the user can then try the heuristic mode. the heuristic mode relies on the knowledge reasoning process discussed previously.
in detail, the expert system first explores the _ knowledge base _ to identify the rules with antecedents meeting the user s enquiry , and then retrieves experiment data that include those rules consequents . for example , when retrieving data with inquiry service feature = _ horizontal scalability _ " under the heuristic mode , the expert system will list evaluation experiments having experimental environment = _ different amount of cloud resource _ " ( according to the previous rule * if service feature = _ horizontal scalability _ " then experimental environment = _ different amount of cloud resource _ " * ) . in this case , suppose that even if the retrieved experiments focus on evaluating cost - benefit by using _ different amount of cloud resource _ " , they can still be used to inspire the evaluation of horizontal scalability . in the worst case of heuristic mode, the expert system could yet retrieve nothing due to lack of data , lack of knowledge , or invalid enquiry .then , the data retrieving can be switched to the fuzzy mode .ideally , the fuzzy mode relies on the uncertain reasoning in the _ knowledge base_. in the current stage , however , we only realize the fuzzy mode to allow the expert system to use sub - content of the enquiry information to explore for useful data . for example , suppose an invalid inquiry includes three elements : service feature = _ horizontal scalability _ " , experimental environment = _ different types of cloud resource _ " , and metric = _ speedup over a baseline _ " . under the fuzzy mode ,the expert system first removes one of the inquiry elements , and then uses both precise mode and heuristic mode methods to identify similar experimental cases . in this sample , consequently , users can still achieve useful experimental cases after removing the inquiry element experimental environment = _ different types of cloud resource _ " .note that , since the case retrieving here is based on incomplete enquiry information , the fuzzy mode does not necessarily guarantee that all the retrieved experiments are valuable for users .ideally , this expert system is supposed to deal with enquiries about any component in evaluation experiments .for example , given a particular metric , we can ask the expert system for candidate benchmarks supplying the metric ; or given particular experimental operations , we can ask the expert system for what evaluation requirement can be satisfied . therefore , in general practices of cloud services evaluation , the proposed expert system can be applied after planning an evaluation and before designing and implementing the evaluation , as illustrated in figure [ fig>5 ] . in the current stage of our work , we constrain the enquiry condition as : given a particular cloud service feature , we ask the expert system for suitable evaluation scenarios ( the combination of suitable experimental environment and experimental operations ) and evaluation metrics ( also relevant benchmarks ) . 
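the three retrieval modes described above can be summarized in a short sketch. the following python code is a simplified illustration under assumed data structures (a list of record dicts and a list of (antecedent, consequent) rules); it is not the implementation of the actual inference engine.

```python
def precise(records, query):
    """Precise mode: records containing every (field, value) pair of the query."""
    return [r for r in records if all(r.get(k) == v for k, v in query.items())]

def heuristic(records, rules, query):
    """Heuristic mode: follow rules whose antecedent matches the query,
    then retrieve records containing the rule consequents."""
    consequents = [cons for ant, cons in rules if query.get(ant[0]) == ant[1]]
    return [r for r in records
            if any(r.get(k) == v for k, v in consequents)]

def fuzzy(records, rules, query):
    """Fuzzy mode: drop one query element at a time and retry the other modes."""
    hits = []
    for dropped in query:
        sub = {k: v for k, v in query.items() if k != dropped}
        hits += precise(records, sub) + heuristic(records, rules, sub)
    return hits

# example usage with a hypothetical rule and record
rules = [(("service feature", "horizontal scalability"),
          ("experimental environment", "different amount of cloud resource"))]
records = [{"experimental environment": "different amount of cloud resource",
            "metric": "cost per request"}]
query = {"service feature": "horizontal scalability"}
print(precise(records, query))            # [] -> nothing matches directly
print(heuristic(records, rules, query))   # retrieves the cost-benefit experiment
```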
here we use three real examples to show the possible application cases of this expert system. the three cases can meanwhile be viewed as a conceptual validation of our current work. note that, to highlight the application flow, the expert system's working mechanism is simplified without elaborating the data / knowledge reasoning procedures. analytic modeling is a relatively light-weight evaluation technique, which employs approximate calculations to supply quick and rough analysis. suppose we decide to adopt analytic modeling to satisfy the requirement about how elastic a particular cloud platform is. according to the keywords in the requirement, we can find that the concerned cloud service feature is elasticity in this case. as such, the requirement is manually translated into " elasticity " as the input to the expert system. the output includes scenarios and metrics for evaluating elasticity, as demonstrated in figure [ fig>6 ]. the evaluation scenarios and metrics can be further translated into design parameters and variables by the evaluator while performing the modeling work. the suggested metrics like _ vm boosting latency _ are used to model the cloud platform, while the suggested scenarios like _ workloads rise and fall repeatedly _ are used to model the cloud-hosted workloads. the complete process of this application case is shown in figure [ fig>6 ]. real measurement is a relatively effort-intensive evaluation technique, which implements experiments on prototypes or actual computing systems to conduct more accurate analysis. in this case, suppose we plan to measure how variable the real cloud service performance is. similarly, the requirement here can be translated into " variability " as the input to the expert system, and the output suggests scenarios and metrics for evaluating variability (cf. figure [ fig>7 ]). unlike the previous case, however, the suggested scenarios like _ repeat experiment at different time _ are used to prepare and perform the experimental environment / operations, while the suggested metrics like _ standard deviation with average value _ are used to measure and display the experimental results. the application flow of this case is shown in figure [ fig>7 ]. this application case is essentially an extension of the previous two cases. in fact, suggesting usage strategies of cloud resources is out of the scope of applying this expert system. this expert system provides suggestions only for cloud services evaluation, without making any decision based on the evaluation result. nevertheless, since evaluation is the prerequisite of further decision making, this expert system can still be helpful in this case. suppose there is an evaluation requirement about choosing " alternative architecture for transaction processing in the cloud ". this requirement is a typical decision making about alternative strategies of utilizing cloud resources. given the particular cloud service features concerned in the predefined architectures, the expert system supplies suggestions for evaluating those service features; the evaluation suggestions can then be employed in each experiment for each of the architectures respectively; the architectures are finally judged through contrasting the evaluation results. in other words, this application case normally comprises a set of sibling experiments with the same evaluation suggestions, and the application flow is similar to figure [ fig>7 ].
as for the sample , this expert system will give suggestions of evaluation scenarios and metrics for the service features storage " and cost " .+ overall , this section has demonstrated three typical cases of applying the proposed expert system . to better distinguish between these application cases , here we highlight two points : * _ * direct vs. indirect help from the expert system . *_ when it comes to the application context of this expert system , cases 1 and 2 are in the pure evaluation context , whereas case 3 is in the decision making context . as previously mentioned , this expert system can facilitate but not directly suggest usage strategies of cloud resources .therefore , the expert system directly helps satisfy the evaluation requirement in the first two cases , while it indirectly helps satisfy the evaluation requirement in the third case . * _ * different usage purpose and sequence of the evaluation suggestions . * _ in application case 1 , the suggestion of evaluation metrics is used before the analytic modeling work . in fact , the suggested metrics are used to design the indicator tradeoffs of the simulated cloud environment . on the contrary , in application case 2 ,the suggestion of evaluation metrics is used to measure the cloud service indicators and display the result after real experiments .along with the booming cloud computing in industry , various commercial providers have started offering cloud services .thus , it is normal and vital to implement evaluations when deciding whether or not to employ a particular cloud service , or choosing among several candidate cloud services .however , given the rapidly - changing and customer - uncontrollable conditions , evaluation of commercial cloud services are inevitably more challenging than that of traditional computing systems . to facilitate evaluation work in the context of cloud computing in industry , we proposed to accumulate existing evaluation knowledge , and to establish an expert system for cloud services evaluation to make evaluation experiences conveniently reusable and sustainable .note that the proposed expert system does not work like an automated evaluation tool or benchmark involved in evaluation implementations , but gives evaluation suggestions or guidelines according to users enquiries . the most significant contribution of this work is to help practitioners implement cloud services evaluation along a systematic way .in fact , it is impossible to require everyone , especially cloud customers , to be equipped with rich knowledge and expertise on cloud services evaluation .we can find that the current evaluation work is relatively ad hoc in the cloud computing field .for example , evaluation techniques and benchmarks are selected randomly .based on the accumulated evaluation experiences , however , this proposed expert system can intelligently supply rational and comprehensive consultation to future evaluation practices , which has been conceptually validated by using three real application cases .this paper roughly introduces the structure and components of this expert system , and mainly specifies the study methodology we are following . 
the methodology then reveals and guides our current and future work , such as using the slr to collect and analyze existing evaluation practices , following the procedure of association - rule mining to extract evaluation knowledge , building the _ inference engine _ to conduct knowledge reasoning and data retrieving , and patching a well - designed _ interface _ to complete the expert system .the prototypes of different function parts will be gradually developed and integrated into an online system . finally , a data maintenance system will be also built up online to collect feedback , update data , and keep all the data versions with time stamps .the _ data / knowledge base _ of the expert system can eventually be updated regularly through the maintenance system .this project is supported by the commonwealth of australia under the australia - china science and research fund .nicta is funded by the australian government as represented by the department of broadband , communications and the digital economy and the australian research council through the ict centre of excellence program .c. binnig , d. kossmann , t. kraska , and s. loesing , how is the weather tomorrow ? towards a benchmark for the cloud , " _ proc .2nd int .workshop on testing database systems ( dbtest 2009 ) in conjunction with acm sigmod / podps int .management of data ( sigmod / pods 2009 ) _ , acm press , jun .2009 , pp .r. buyya , c. s. yeo , s. venugopal , j. broberg , and i. brandic , cloud computing and emerging it platforms : vision , hype , and reality for delivering computing as the 5th utility , " _ future gener . comp ._ , vol . 25 , no . 6 , jun . 2009 , pp . 599616 . c. evangelinos and c. n. hill , c.n. cloud computing for parallel scientific hpc applications : feasibility of running coupled atmosphere - ocean climate models on amazon s ec2 , " _ proc .1st workshop on cloud computing and its applications ( cca 08 ) _ , oct .2008 , pp .d. harris , watch out , world : ibm finally offers a real cloud , " _ gigaom - tech news , analysis and trends _ , available at http://gigaom.com/cloud/watch-out-world-ibm-finally-offers-a-real-cloud/ , apr .2011 .r. k. jain , _ the art of computer systems performance analysis : techniques for experimental design , measurement , simulation , and modeling_. new york , ny : wiley computer publishing , john wiley & sons , inc .1991 .d. kossmann , t. kraska , and s. loesing , an evaluation of alternative architectures for transaction processing in the cloud , " _ proc .2010 acm sigmod int .management of data ( sigmod 10 ) _ , acm press , jun .2010 , pp .579590 .j. li , m. humphrey , d. agarwal , k. jackson , c. van ingen , and y. ryu , escience in the cloud : a modis satellite data reprojection and reduction pipeline in the windows azure platform , " _ proc .23rd ieee int .parallel and distributed processing ( ipdps 2010 ) _ , ieee computer society , apr .2010 , pp .110 .w. lu , j. jackson , and r. barga , azureblast : a case study of developing science applications on the cloud , " _ proc .1st workshop on scientific cloud computing ( science cloud 2010 ) _ , acm press , jun .2010 , pp .413420 .r. prodan and s. ostermann , a survey and taxonomy of infrastructure as a service and web hosting cloud providers , " _ proc .10th ieee / acm int .grid computing ( grid 2009 ) _ , ieee computer society , oct .2009 , pp .1725 .w. sobel , s. subramanyam , a. sucharitakul , j. nguyen , h. wong , a. klepchukov , s. patil , a. fox , and d. 
patterson , cloudstone : multi - platform , multi - language benchmark and measurement tools for web 2.0 , " _ proc .1st workshop on cloud computing and its applications ( cca 08 ) _ , oct .2008 , pp .v. stantchev , performance evaluation of cloud computing offerings , " _ proc .3rd int .advanced engineering computing and applications in science ( advcomp 09 ) _ , ieee computer society , oct .2009 , pp . 187192
commercial cloud services have been increasingly supplied to customers in industry . to facilitate customers decision makings like cost - benefit analysis or cloud provider selection , evaluation of those cloud services are becoming more and more crucial . however , compared with evaluation of traditional computing systems , more challenges will inevitably appear when evaluating rapidly - changing and user - uncontrollable commercial cloud services . this paper proposes an expert system for cloud evaluation that addresses emerging evaluation challenges in the context of cloud computing . based on the knowledge and data accumulated by exploring the existing evaluation work , this expert system has been conceptually validated to be able to give suggestions and guidelines for implementing new evaluation experiments . as such , users can conveniently obtain evaluation experiences by using this expert system , which is essentially able to make existing efforts in cloud services evaluation reusable and sustainable . = 5 expert system ; cloud computing ; commercial cloud service ; cloud services evaluation ; evaluation experiences
quantum systems are usually described by a hilbert space , a state vector , and a hamiltonian .do these structures alone fully characterize a physical system ? without specifying more information , like a preferred choice of basis , it is difficult to make sense of the hamiltonian or the state .for example , consider the one - dimensional ising model , with hamiltonian the hamiltonian clearly describes a chain of locally coupled two - level systems .this interpretation is possible because the expression for the hamiltonian implicitly includes a partition of the total hilbert space into subsystems using a tensor product factorization , this choice of tensor product structure " ( tps ) allows one to write the hamiltonian simply in terms of local operators .however , if one does not specify a tps but instead writes the hamiltonian as a large matrix in some arbitrary basis , the system becomes difficult to interpret .is it the one - dimensional ising model , or is it a collection of interacting particles in three dimensions ?up to a change of basis , different hamiltonians are only distinguished by their energy spectra . moreover , the only canonical choice of basis is the energy eigenbasis .thus the hamiltonian and state vector alone do not yield an obvious physical description , at least without a choice of tps . we therefore ask , without a preferred choice of basis , is there a natural way to decompose the hilbert space into subsystems ( i.e. tensor factors ) , knowing only the hamiltonian ?in other words , do the energy eigenspaces and spectrum alone determine a natural choice of tps ?this is a question that has rarely been addressed in the literature , though it is discussed in a few papers such as .more commonly , it has been assumed that a preferred tps must be specified before any further progress can be made in describing the system . to even attempt finding a natural tps, one must first specify what constitutes a natural choice . here , we seek a choice of subsystems such that most pairs of subsystems do not directly interact .that is , we want the hamiltonian to act locally with respect to the chosen tps . the question of finding a natural tps is especially relevant when one considers dualities in quantum systems .for instance , consider the mapping under which the hamiltonian of the one - dimensional ising model becomes where and are boundary terms .the mapping demonstrates two different sets of variables , and , which define two different tps s .the hamiltonian acts locally with respect to both tps s , even though the and operators are non - locally related to each other .we say that the and descriptions are dual , " providing different local descriptions of the same hamiltonian .a simple argument in section [ sec : existence ] demonstrates that given a random hamiltonian , there is usually no choice of tps for which the hamiltonian is local .however , given a generic hamiltonian that is local in _ some _ tps , we can ask whether that is the unique tps for which the hamiltonian is local .in other words , given a hamiltonian with some local description , is that local description unique ?we present evidence that generic local hamiltonians have unique local descriptions : that is , dualities are the exception rather than the rule . 
as a result ,the spectrum is generically sufficient to uniquely determine a natural choice of tps , whenever such a choice exists .we formalize a version of this statement and then prove a weaker result .in particular , we show that by finding a single example of a hamiltonian with a unique local tps , one can then prove that almost all local hamiltonians have a unique local tps .( by local tps , " we mean a tps in which the hamiltonian is local . )this genericity result holds for -local hamiltonians on systems of any finite size , as well as for several other notions of locality . restricting to the class of translation - invariant , nearest - neighbor hamiltonians on a small number of qubits, we proceed to numerically find an example of such a hamiltonian with unique local tps .when combined with the numerical result , the analytic result mentioned above provides an effective proof that there exists a unique local tps for generic local hamiltonians within this restricted class .we speculate that this conclusion extends to generic local hamiltonians on systems of any finite size .all results presented are derived for models with a finite number of finite - dimensional subsystems .such models may be used to approximate regularized quantum field theories , although the results here are not rigorously extended to infinite - dimensional hilbert spaces .interesting subtleties may exist for infinite - dimensional systems , both due to the possibility of continuous spectra and also due to the breakdown of analyticity , familiar from the study of phase transitions .however , we speculate that results of a similar spirit would still hold in the large - system limit . the rest of the paper is organized as follows . in sections [ sec : tps ] and [ sec :locality ] , we formalize the notion of tps s and local hamiltonians . in section [ sec : main ] , we address the main question of this paper , providing analytic results .the proof of theorem [ thrm : constduals ] in section [ sec : constduals ] may be skipped without affecting one s understanding of the remaining sections .section [ sec : numerics ] explains the numerical examples needed to augment the analytic results . in section [ sec : gauge ] , we discuss generalizations of tps s , needed for fermions and gauge theories . finally , we comment on how our results frame discussions of quantum mechanics and quantum gravity .here we precisely define the notion of a tensor product structure , or tps .often , one considers a hilbert space with an explicit tensor factorization where the subsystems have hilbert spaces .we will usually imagine that the subsystems correspond to spatial lattice sites .( in few - body quantum mechanics , the subsystems might correspond to distinguishable particles , whereas in many - body physics or regularized quantum field theory , the subsystems might correspond to lattice sites , momentum modes , quasiparticle modes , or some other choice . )our first task is to define a tps on an abstract hilbert space that is not written as an explicit tensor product .this formalism will lend precision to the discussion of different tensor product structures on the same hilbert space , the topic at the heart of this paper .consider a map on a hilbert space which is an isomorphism ( unitary map ) the choice of isomorphism endows with a notion of locality : one can then speak of local operators , subsystems , entanglement , and so on within the hilbert space . 
for instance , we say the operator on is local to subsystem if is local to .similarly , the entanglement entropy of a state is defined as the entanglement entropy of .these notions will remain unchanged if is composed with a map that acts unitarily on each subsystem .we therefore define a tps as follows : + * definition ( tps ) : * a tps of hilbert space is an equivalence class of isomorphisms , where whenever may be written as a product of local unitaries and permutations of subsystems . + to avoid confusion , note that the usage of local " in the phraselocal unitary " is distinct from its usage in local hamiltonian . " local unitaries are products of unitaries acting on single tensor factors , while local hamiltonians are sums of operators acting on small subsets of tensor factors .another equivalent and useful way to define a tps involves observables rather than states . in short ,a tps naturally defines subalgebras of observables local to each subsystem , but we can turn this data around and use the subalgebras to define the tps .this perspective was developed by .let us collect the local observables as a set of mutually commuting subalgebras , where denotes the algebra of operators on , and denotes the algebra of operators of the form i.e. operators that act as the identity on all subsystems except . with this motivation, we can equivalently define a tps on as a collection of of subalgebras , , such that the following hold : 1 .the mutually commute , =0 ] .+ + we will be most interested in considering the same hilbert space and hamiltonian with different tps s , given by and .rather than talk about two different tps s for the same hamiltonian , we can often simplify the discussion by talking about two different hamiltonians on the space with fixed tps . both perspectives are equivalent .this observation will be important and bears repeating .+ * observation : * it is equivalent to consider either perspective : 1 . a hilbert space with fixed hamiltonian and varying choice of tps or , or 2 . a hilbert space with fixed tps and unitarily varying choice of hamiltonians or with the same spectrum .for some fixed hamiltonian , questions about the existence of a tps in which is local may then be translated into questions about the existence of local hamiltonians with the same spectrum as .the notion of duality can also be expressed in either of these perspectives : + * definition ( dual ) : * from the first perspective above , we say that two tps s are dual if the given hamiltonian is local in both tps s and if also the tps s are inequivalent with respect to that hamiltonian . from the second perspective, we say that two hamiltonians are dual if they are local , have the same spectrum , and can not be related by local unitaries , permutations of subsystems , and transposition .+ the results of sections [ sec : main ] and [ sec : numerics ] will largely be cast in the second perspective , i.e. as statements about the existence of different local hamiltonians with the same spectrum . 
however , the results may always be re - cast in the first perspective , as results about the existence of different local tps for the same hamiltonian .for instance , we can either say that generic local hamiltonians uniquely determine a local tps , or we can say that the spectrum of a generic local hamiltonian will uniquely determine the hamiltonian .given a hilbert space and hamiltonian , what qualitatively distinguishes different choices of tps ?most broadly , we might ask whether some tps s yield simpler , more meaningful , or more calculationally tractable descriptions of a system . more specifically , we are interested in tps s for which the hamiltonian appears local , in the sense that it only exhibits interactions among certain collections of subsystems. we should clarify what it means for a hamiltonian to include an interaction among a given collection of tensor factors .in general , one can write a hamiltonian on qudits ( i.e. , systems of local dimension ) as where the operators for form an orthogonal basis for single - qudit operators on site .this decomposition is unique , up to the choice of basis .the terms , for instance , are considered as interactions between qudits , , and .the space of operators decomposes into orthogonal sectors , with one sector for each combination of subsystems , and we say that contains an interaction among some subset of qudits if has a nonzero component in the corresponding sector .qualitatively , we say that a hamiltonian is local when relatively few combinations of subsystems are interacting .( note that we use local operator " to refer to an operator that is local to a single subsystem or collection of subsystems , while we use local hamiltonian " to refer to a sum of such operators . ) for instance , the ising model only exhibits nearest - neighbor interactions , as do lattice - regularized quantum field theories without higher derivatives .meanwhile , other models like spin glasses and matrix models exhibit interactions among all particles , but only in groups of fixed size , like two or four .likewise , non - relativistic electrons have only pairwise interactions , if each electron is treated as a subsystem .. ] to incorporate all these notions of locality , one can use a hypergraph .first , note that an ordinary graph can be thought of as a collection of vertices , along with a collection of edges , where each edge is written as a pair of vertices , for .we emphasize that each , called an edge , " is a two element subset of .a hypergraph is like an ordinary graph with a set of vertices , but the edges " may contain more than two vertices . for convenience , we subsequently refer to a hypergraph as a graph . given a fixed tps and hamiltonian , the associated interaction graph " has vertices corresponding to the subsystems and has ( hyper-)edges corresponding to every combination of subsystems that interact under the hamiltonian .we say that the hamiltonian is local with respect to some graph if its interaction graph is a subgraph . given a hamiltonian , different choices of tps give rise to different operators on , with different associated interaction graphs .we are interested in tps s which give rise to interaction graphs with edges connecting only a small number of sites . as one measure of sparsity or locality , we say that a graph is -local if it has edges joining at most vertices .- particle first quantized quantum system , -locality with respect to particle subsystems means that the particles only have -body interactions .] 
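for small qubit systems the interaction (hyper)graph can be read off by brute force; the following sketch (our own illustration, using the standard pauli-string expansion) expands a hamiltonian in pauli strings, records which subsets of sites carry non-zero terms, and reports the resulting notion of k-locality.

```python
import numpy as np
from itertools import product
from functools import reduce

# pauli basis for one qubit (the local dimension q = 2 case)
PAULIS = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(label):
    return reduce(np.kron, [PAULIS[c] for c in label])

def interaction_hypergraph(H, n, tol=1e-10):
    # expand H in the pauli-string basis and record every subset of sites
    # that carries a non-zero term; the result is the set of (hyper)edges
    d = 2**n
    edges = set()
    for label in product('IXYZ', repeat=n):
        coeff = np.trace(pauli_string(label) @ H) / d
        support = tuple(i for i, c in enumerate(label) if c != 'I')
        if abs(coeff) > tol and support:
            edges.add(support)
    return edges

# example: a 3-qubit hamiltonian with a ZZ coupling on sites (0, 1) and a field on site 2
H = pauli_string('ZZI') + 0.3 * pauli_string('IIX')
edges = interaction_hypergraph(H, 3)
print(edges)                                          # contains (0, 1) and (2,)
print('hamiltonian is %d-local in this basis' % max(len(e) for e in edges))   # 2-local
```

the same expansion is what underlies the decomposition discussed above; for larger systems one would of course exploit the sparsity of the decomposition rather than loop over all 4^n strings.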
likewise , we call a hamiltonian -local with respect to some tps if the associated graph is -local , and we will also refer to the tps as -local . while the study of generic -local hamiltonians is important ,- particle first quantized quantum system with -body interactions is -local with respect to particle subsystems . ] for example in the study of quantum circuits and black holes , generally we are interested in the stronger condition of geometric locality .this means that each site has edges connecting it to only a small number of other sites .for example , we are often interested in graphs which form a -dimensional lattice with only neighboring lattice sites interacting . such graphs are ubiquitous since they arise in any local spin system or lattice regularization of quantum field theory .we might even make further constraints such as requiring that the hamiltonian be translation - invariant with respect to the lattice .all of the analytic results in this paper will be valid for generic hamiltonians within any specified locality class , including all the classes discussed above . specifically , we prove results about the number of duals within a particular locality class of a generic hamiltonian in that same locality class .for example , we can prove results about the number of translation - invariant duals of a generic translation - invariant hamiltonian .it is harder to prove results about the number of -local duals of generic translation - invariant hamiltonians , since translation - invariant geometrically local hamiltonians are a measure zero subspace of the larger space of -local hamiltonians .however in section [ sec : numerics ] we show that some of our results can be extended to such cases .first we ask whether a generic hamiltonian has any -local tps .the answer is no , as will be demonstrated .we restrict our attention to a finite - dimensional hilbert space , , with hamiltonian .we ask whether there exists a tps with subsystems such that the hamiltonian is -local . for larger than , we will see that a -local tps exists only for a measure zero set of operators in .recall from the previous section that for a given hamiltonian , a choice of tps produces a hamiltonian on , up to local unitaries and permutations of subsystems .the operator then defines some associated interaction graph , up to relabeling of vertices .we call the tps -local if it gives rise to a -local interaction graph for . note that for any tps , and have the same spectrum .conversely , if there is some operator on with the same spectrum as , then there exists a tps such that .so has a -local tps if and only if there is a -local hamiltonian on with the same spectrum .the above observation allows a change of perspective , as suggested at the end of section [ sec : tps ] . rather than asking whether generic hamiltonians on an abstract hilbert space have some -local tps , we can equivalently ask whether generic hamiltonians on are isospectral to some -local hamiltonian .a simple dimension - counting argument yields the answer .the space of possible spectra of all hamiltonians is .meanwhile , examining eqn .( [ locham1 ] ) , we see that the space of -local hamiltonians on will have dimension for subsystems of local dimension . then the space of spectra of -local hamiltonians will also have dimension at most .for , the space of all spectra of -local hamiltonians will have positive codimension in the space of all possible spectra .so for any sufficiently large ( e.g. 
, for and ) , the set of hamiltonians that are isospectral to a -local hamiltonian has measure zero .in general , the results in this paper will apply to all hamiltonians in some specified subspace , excluding an exceptional set of measure zero . on the other hand , when asking questions of an approximate nature for instance , when asking whether a generic hamiltonian has a tps that is _ approximately _ local the relevant question is not quite does the exceptional set have measure zero ? " but rather what is the volume of an -neighborhood of the exceptional set ? "such questions are more difficult to tackle directly , requiring analysis to augment the linear algebra and algebraic geometry used in this paper .however , the exceptional sets in question not only have measure zero but also have a codimension that is exponential in the system size , perhaps suggesting that the desired results about -neighborhoods would hold .now we ask , given a hamiltonian with some -local tps , is the unique -local tps , up to equivalence in the sense of section [ sec : tps ] above ?we again follow the strategy of reformulating the question on the space , using the observation at the end of section [ sec : tps ] .recall that two -local hamitonians on are called dual if they are isospectral and are not related by local unitaries , permutations of subsystems , or transposition .now we can reformulate the question of whether a -local tps for a hamiltonian is generically unique .the question becomes , does a generic -local hamiltonian on have any duals ?in other words , can one generically recover a local hamiltonian from its spectrum ?posed in the latter terms , the question may be interesting for independent reasons .however , we are motivated by the original question , asking whether a -local tps for a hamiltonian is generically unique .one s initial intuition may suggest that a -local hamiltonian may indeed be recovered from its spectrum .this intuition is due to dimension counting : a -local hamiltonian is specified by a number of parameters polynomial in , while the number of eigenvalues is exponential in . in the previous section , this dimension counting was used to rigorously demonstrate that generic hamiltonians have no -local tps .however , the argument here is less immediate .while the spectrum has more parameters than the hamiltonian , this fact alone does not prevent generic hamiltonians from having duals .for instance , imagine that we slightly modified the question , instead defining two -local hamiltonians to be dual whenever they are not related by local unitaries or permutations , failing to include the possibility of transposition .then we would discover that all hamiltonians ( except real - symmetric ones ) have at least one dual , given by their transpose .thus it is not immediately obvious that most -local hamiltonians do not have duals .however , transposition , just like local unitaries and permutations , is a linear map on the space of hamiltonians , and it preserves the subspace of local hamiltonians . 
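to make the dimension counting explicit, the short sketch below compares a parameter count for k-local hamiltonians with the number of eigenvalues; the counting convention (the identity plus traceless single-site operators on each subset of at most k sites) is our own and is meant only to exhibit the polynomial-versus-exponential scaling.

```python
from math import comb

def dim_k_local(n, k, q=2):
    # real parameters in a k-local hamiltonian on n qudits of local dimension q:
    # one coefficient for the identity plus (q^2 - 1)^|T| coefficients of traceless
    # single-site operators for every subset T of at most k sites.  (this is one
    # standard way to count; conventions differ by overall constants.)
    return 1 + sum(comb(n, j) * (q**2 - 1)**j for j in range(1, k + 1))

for n in (4, 8, 12, 16):
    print(n, dim_k_local(n, k=2), 2**n)
# n =  4 :   67 parameters vs    16 eigenvalues
# n =  8 :  277 parameters vs   256 eigenvalues
# n = 12 :  631 parameters vs  4096 eigenvalues
# n = 16 : 1129 parameters vs 65536 eigenvalues
# the parameter count grows only polynomially in n while the number of eigenvalues
# grows exponentially, which is the codimension statement used in the argument above.
```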
a simple check , after dimension - counting , is to ask whether there are any _ other _ linear spectrum - preserving maps that preserve this subspace .it turns out that no such maps exist .all linear maps that preserve eigenvalues are generated by transposition and unitaries , and since we already know that taking the transpose preserves locality , we only have to worry about whether other unitaries preserve the subspace .the proof that the only such unitaries are 1-local or permutations is relatively simple .suppose there exists a unitary which preserves the space of -local hamiltonians and which is not generated by 1-local unitaries and permutations .then there must be some site whose local hermitian operators are mapped to hermitian operators that sometimes involve at least 2 sites , otherwise it would be generated by 1-local unitaries and permutations .since it preserves -locality , any -local operator must be mapped to at most sites .now consider two sets of sites , whose intersection is only site .each are mapped to at most sites , which must both contain the two sites to which is mapped .therefore any operator involving only those sites must be mapped to an operator on only sites . by dimension countingit is impossible for this to be true for an invertible map , which gives a contradiction .this proof generalizes to other notions of locality , e.g. the subspace of local lattice hamiltonians for some fixed lattice . as discussed above, we would like to show that generic -local hamiltonians do not have -local duals .however , the statements proven in this paper will be weaker statements . more specifically, our result will apply to any particular linear subspace of hamiltonians , for a hilbert space of fixed size .for instance , consider the subspace of all -local hamiltonians on qubits .the result then states : if there exists a single example of a -local hamiltonian on qubits without any duals , then almost all -local hamiltonians on qubits do not have -local duals .the analogous result applies to systems of qudits rather than qubits ( i.e. using -dimensional subsystems ). or , for instance , consider the linear subspace of all hamiltonians on spin chains with nearest - neighbor couplings .then the result states : if there exists a single example of a spin chain hamiltonian on spins without any duals that are also spin chains , then the same must hold for almost all spin chain hamiltonians on spins .of course , these results on their own do not guarantee that almost all -local hamiltonians do not have duals .however , as a proof of principle , we will numerically find an example of a translation - invariant spin chain hamiltonian on 10 spins that does not have any translation - invariant spin chain duals .combined with the above result , the numerical example effectively proves that almost all translation - invariant spin chain hamiltonians on 10 spins do not have any translation - invariant spin chain duals .if a family of examples could be generated analytically for systems of different sizes , perhaps using induction in the size of the system , then our result would imply rigorously that almost all -local hamiltonians do not have duals .we suspect that such examples exist , which would imply that the general result holds .now we begin to formalize the statement .consider a subspace of hamiltonians , .for instance , may be the subspace of -local hamiltonians on qubits .for a given local hamiltonian , we are interested in whether has a dual . 
by dual , we mean a hamiltonian with the same spectrum as , such that is not related by any combination of local unitary transformations , permutations of qubits , or the transpose operation .let be the subgroup of linear transformations on generated by local unitary operations , permutations of qubits , and the transpose operation .in addition , let the unitary group act on by conjugation .note that the orbit is the set of hamiltonians isospectral to .then the local duals of are precisely the points in that are not in .note that the statement that has no duals is the statement that this condition says that the only hamiltonians in isospectral to are those related by local unitary operations , permutations of qubits , and the transpose operation .equivalently , the condition states is uniquely determined by its spectrum , up to the previous operations .the situation of a hamiltonian with no duals is illustrated in figure 1 on the next page .[ fig : wacky ] and intersecting in the ambient space .the hamiltonian is depicted to have no duals .the orbit intersects in multiple disconnected components , appearing as circles in the diagram .these disconnected components together make up each disconnected component of contains hamiltonians related by local unitaries ( which are continuous transformations ) , and the sets are related to one another by permutations of qubits and transposition , which are discrete transformations .alternatively , if the intersection contained points not related to by local unitaries , permutations , or transposition , then would have duals .the figure is only intended as a schematic representation of the spaces involved.,title="fig : " ] as a first step , we can constrain the answer by counting the dimensions of the spaces involved .how big is ?for simplicity , consider the tangent space of at , given by the image of the linear map , taking ] or , where is the lie algebra of .( in particular , is equal to the space of 1-local hamiltonians . )then the same property holds for almost all matrices : if \in s\ ] ] for some hermitian matrix , then either = 0 ] , so by the exact same argument made below equation [ eq : loc2 ] , for almost all .plugging equations [ eq : loc3 ] and [ eq : loc4 ] into equations [ eq : loc5 ] and [ eq : loc1 ] , one finds that for almost all , and hence the above is an equality , with which implies because .finally , the above expression is precisely the desired condition of the theorem . to complete the proof of theorem [ thrm : finiteduals ] , it only remains to show that the number of duals is finite .suppose that for some then for any , \in s ] or .then there must exist a finite volume around the identity in within which implies or .however since the unitary group is compact , this can only be true for every if the set of hamiltonians in with the same eigenvalues as quotiented by the action of by conjugation is finite .however we have already shown that this exact result is true for almost all .it follows that for almost all , the set of hamiltonians in with the same eigenvalues as quotiented by the action of is finite , which we defined to be the number of duals . 
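the dimension counting of the spaces involved can be illustrated directly for small matrices; the sketch below (our own illustration, with a randomly drawn hermitian matrix) computes the rank of the real-linear map a -> i[a, h] on hermitian matrices, whose image is the tangent space to the isospectral orbit, and recovers the expected dimension d^2 - d for a nondegenerate h, the kernel being the d-dimensional commutant.

```python
import numpy as np

def hermitian_basis(d):
    # a real basis of the d^2-dimensional space of hermitian matrices
    basis = []
    for i in range(d):
        E = np.zeros((d, d), dtype=complex); E[i, i] = 1.0
        basis.append(E)
    for i in range(d):
        for j in range(i + 1, d):
            S = np.zeros((d, d), dtype=complex); S[i, j] = S[j, i] = 1.0
            A = np.zeros((d, d), dtype=complex); A[i, j] = 1j; A[j, i] = -1j
            basis.append(S); basis.append(A)
    return basis

rng = np.random.default_rng(3)
d = 8
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (G + G.conj().T) / 2            # a generic (nondegenerate) hermitian matrix

cols = []
for A in hermitian_basis(d):
    C = 1j * (A @ H - H @ A)        # again hermitian
    cols.append(np.concatenate([C.real.ravel(), C.imag.ravel()]))
M = np.array(cols).T                # matrix of the real-linear map  A -> i[A, H]

rank = np.linalg.matrix_rank(M, tol=1e-8)
print(rank, d**2 - d)               # prints 56 56 for a generic draw: the isospectral
                                    # orbit through a nondegenerate H has dimension d^2 - d
```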
to extend our result to non - infinitesimal unitary transformations , considering the whole orbit rather than just the tangent space at , we make use of more sophisticated mathematical tools than were necessary for the previous results .the proof consists mostly of classical algebraic geometry , though it makes use of some theorems phrased in the language of schemes .nevertheless the basic strategy , as well as the result itself , is highly analogous to the previous section .we show that almost all local hamiltonians have the same number of duals .that is , the number of duals per hamiltonian is almost everywhere constant over the space of local hamiltonians .the numerical results in section [ sec : numerics ] will augment the theorem below to show that the number of duals is generically zero ( rather than simply being constant ) , at least for certain small systems .one main difference from the style of the previous proof is that we must include non - hermitian local hamiltonians when searching for duals , rather than just ordinary hermitian hamiltonians . in other words, we must consider the orbit of under conjugation with and not just .similarly , we generalize the equivalence classes associated to a single tps to include the orbit under conjugation by elements of on each subsystem ( analogous to local unitaries ) , as well as the familiar permutation of the subsystems and transposition .this requirement is particularly important as it means we are working over an algebraically - closed field , the complex numbers . when performing associated numerics , the complexification adds a small amount of numerical difficulty , since we must search a space with twice the number of parameters .although the proof itself is somewhat technical , the outline is easy to understand .first we construct the space of orbits of local complex hamiltonians under conjugation by local operators on each subsystem , permutation of subsystems , and transposition .then we define a map from this space such that two orbits are mapped to the same point if and only if they have the same spectrum and are therefore dual . here, the complexification of the spaces becomes important .note that the number of distinct solutions to the complex algebraic equation is the same for almost all values of , although the analogous statement does not hold for real solutions .similarly , for a class of sufficiently well - behaved maps , the number of points in the fiber will be constant almost everywhere .[ thrm : constduals ] suppose that we have a complex subspace of matrices that is preserved by transposition , together with some reductive subgroup that preserves when acting by conjugation and is invariant under transposition .let be the subgroup of whose fundamental representation is generated by transposition and the action of by conjugation .suppose that almost all matrices in are diagonalizable and moreover that for almost all matrices in , the number of -orbits which are similar is finite ; we refer each such orbit as a complex dual . "then the number of complex duals is constant on a zariski open subset of . _proof_. 
we want to define a morphism of varieties for which the domain is the orbits of under and for which the fibers are the sets of duals .our starting point is the rational map , defined to be the projectivization of the map here can be taken to be the weighted projective space which is the quotient of by the action of the multiplicative group of nonzero complex numbers , taking note that we can identify the quotient of by the permutation group ( acting by permuting indices ) with itself via the map where are the symmetric polynomials of , which may be defined by matching coefficients of the formal power series the elementary symmetric polynomials of can then be identified via newton s identities with the power sum symmetric polynomials .this gives us an identification of with the map that associates to a matrix its projectivized set of eigenvalues with algebraic ( not geometric ) multiplicities for .we make use of projective rather than affine spaces in this construction simply because we later need to take advantage of the nicer properties of projective morphisms .we are assuming that is nonzero and the generic jordan normal form of a matrix in is diagonalizable , so a dense open subset in the projective variety , which is the closure in of the image of , has fibers that are the intersection of an orbit of with , a set of similar matrices .now we want to quotient by the action of .since is not compact , the topological quotient of by is not well behaved .instead we will show that a git ( geometric invariant theory ) quotient exists .a git quotient of a projective variety is well - defined for a linearized action of a reductive algebraic group .since was assumed to be reductive and is a finite extension of which acts linearly on we have a linearized action of on and we can construct a git quotient by . a git quotient of a projective variety with a linear action of gives a categorical quotient where are the semistable points of .a point is semistable if and only if there exists a homogeneous -invariant polynomial which is non - zero at . in our case , for any that is not nilpotent, is -equivariant and homogeneous and will be non - zero for some . since a git quotient is a categorical quotient on the category of algebraic varieties , any -invariant map will uniquely factor through the quotient . since , for all , is -invariant , the restriction of to the semistable points will uniquely factor through a map we shall refer to as .a git quotient is only a geometric quotient on an open subset known as the stable points , here used in the original mumford sense [definition 1.7 ] .however since the number of -orbits mapped to a given point in is generically finite , such generic orbits will be stable , because all the orbits in a -invariant open neighborhood given by the inverse image under of an open subset of will be closed , as the fibers of will all be closed and the fibers are finite disjoint unions of orbits .it follows that the open subset of stable points is non - empty and hence dense ( since is irreducible ) . 
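the identification used above of a matrix with its unordered spectrum, through the power-sum traces tr(h^k) and newton's identities, is easy to make explicit numerically; the sketch below (our own illustration, using a real symmetric matrix for simplicity) recovers the elementary symmetric polynomials of the eigenvalues from the traces alone.

```python
import numpy as np

def power_sums(H, d):
    # p_k = tr(H^k) for k = 1..d ; these are polynomial functions of the entries of H
    p, Hk = [], np.eye(len(H))
    for _ in range(d):
        Hk = Hk @ H
        p.append(np.trace(Hk))
    return p

def newton_elementary(p):
    # elementary symmetric polynomials e_1..e_d of the eigenvalues, obtained from the
    # power sums via newton's identities:  k e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i
    d = len(p)
    e = [1.0]
    for k in range(1, d + 1):
        s = sum((-1)**(i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]

rng = np.random.default_rng(4)
d = 5
G = rng.standard_normal((d, d))
H = (G + G.T) / 2

e_from_traces = newton_elementary(power_sums(H, d))
eigs = np.linalg.eigvalsh(H)
coeffs = np.poly(eigs)      # (x - l1)...(x - ld) = x^d - e1 x^(d-1) + e2 x^(d-2) - ...
e_from_spectrum = [(-1)**k * coeffs[k] for k in range(1, d + 1)]
print(np.allclose(e_from_traces, e_from_spectrum))   # True
```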
the next step will be to take the projective morphism and base change onto an open subscheme of the image of .this new morphism will still be projective since projectiveness is preserved under base changes .we make use of iii corollary 10.7 which states that if is a morphism of non - singular varieties over an algebraically closed field of characteristic , then there is a non - empty open subset such that is smooth .this corollary does not directly apply to since is not necessarily nonsingular , but since we have assumed that almost all the fibers are finite , the dimension of the image must be equal to the dimension of .this means that the image of the singular locus is not dense in the image of and hence the complement of the closure of in the closure of will be an non - empty open subset and have non - singular preimage in . then applying corollary 10.7 to the restriction of to , we learn that there exists an non - empty open subset such that the restriction of to is smooth . if a morphism is smooth , it is also flat ( iii theorem 10.2 ) .further , is a noetherian scheme since it is quasi - projective .this mean that the degree of the fiber , which for finite fibers is just the number of points in the fiber ( counting multiplicities ) is constant everywhere ( iii corollary 9.10 ) .now we take the intersection of with the stable points .the restriction of the morphism to this dense open subset will be a morphism from the geometric quotient by and hence the degree of the fibers will simply count the number of geometric orbits .we have therefore shown that a dense open subset of local complex hamiltonians have a constant number of complex duals , which completes the proof of theorem [ thrm : constduals ] .notice that the intersection of any nonempty zariski open with the real subspace of a complex vector space has complement of measure zero in the real vector space , since the zariski open is the complement of the solution space of a set of complex algebraic equations .when we restrict to the real subspace this becomes the complement of a set of real algebraic equations ( the real and imaginary parts of the original equations ) and all real algebraic equations have measure zero solution except for .if there exists a zariski open of local complex hamiltonians with no complex duals , then generic local ( real ) hamiltonians have no complex duals , and hence since hermitian duals are simply a subclass of complex duals , they also have no hermitian duals .we also need to show that the assumptions that we made for and apply for the particular case of a space of local complex hamiltonians with conjugations by local and permutations of tensor product factors . and are trivially invariant under transposition . to show that is generically diagonalizable we first note that in any zariski closed subspace of , matrices will have the generic jordan form for that space on a zariski open subspace of it .then exactly the same arguments as above , tell us that , generically , hermitian matrices in will have the generic jordan form for .since all hermitian matrices are diagonalizable , the generic jordan form for must be diagonalizable . to show that is reductive , we note that the connected component of is the direct product of copies of quotient by a subgroup of the center of the direct product group .it then follows that since is reductive , so is . 
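the contrast drawn here between complex and real solution counts is the familiar one from elementary algebraic equations; the toy computation below (ours, with the quadratic x^2 + c as a stand-in) shows the number of distinct complex solutions constant away from a single exceptional parameter value, while the number of real solutions takes different values on sets of parameters of positive measure.

```python
import numpy as np

# over the complex numbers x^2 + c = 0 has two distinct roots for every c except the
# single exceptional value c = 0, so the count of complex solutions is constant on a
# dense open set of parameters.  over the reals the count jumps from 2 to 0 on the
# whole half-line c > 0, which is why the argument is carried out over the complex
# numbers and only restricted to the real (hermitian) subspace at the very end.
for c in (-1.0, -0.3, 0.0, 0.5, 2.0):
    roots = np.roots([1, 0, c])
    n_complex = len(set(np.round(roots, 12)))
    n_real = int(np.sum(np.abs(roots.imag) < 1e-12))
    print(c, n_complex, n_real)
```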
finally we need to be able to confirm that any example we might construct ( either numerically or analytically ) with no complex duals , lies in the open subset of local hamiltonians in which the number of duals is constant .firstly , we note from our proof of theorem [ thrm : constduals ] that if satisfies the conditions in the complex version of theorem [ thrm : finiteduals ] then it lies in the open subspace of stable points of .we then simply need to show that is smooth at .however since the git quotient is locally just a geometric quotient , this will be true so long as the differential of is surjective on the tangent space of quotiented by the tangent space of the orbit of , which is again just a restatement of the conditions for theorem [ thrm : finiteduals ] .finally , again because the differential is surjective , the point in the fiber necessarily has trivial multiplicity . combining theorems [ thrm : finiteduals ] and [ thrm : constduals ] , we have therefore proved analytically that , if we have a single example ( subject to the conditions described above ) in some class of local hamiltonians which has no complex duals , then almost all hamiltonians in that class have a unique tps in which they are local .in the previous section we showed that for systems of a fixed size , if you can find a single example of a local hamiltonian with a unique local tps , then generic local hamiltonians of that size must also have a unique local tps . in this section ,we use numerics to demonstrate that such example hamiltonians " exist , at least for a small class of numerically tractable problems .these numerical examples , when combined with theorems [ thrm : finiteduals ] and [ thrm : constduals ] , amount to a proof of the following statements : 1 .almost all 2-local hamiltonians on 10 qubits have finitely many ( and possibly zero ) 2-local duals .almost all nearest - neighbor hamiltonians on 10-qubit spin chains have finitely many ( and possibly zero ) 2-local duals .the above statements are fully proven , if the associated numerical calculation is robust .we believe the numerical result that aids the proof ( analogous to numerically calculating that a certain quantity is nonzero ) is robust to finite - precision machine error , although we do not undertake a rigorous analysis of the error . on the other hand , the result belowis only verified in a probabilistic fashion , as elaborated later in the section . 1 . (_ probabilistically verified _ ) almost all translation - invariant , nearest - neighbor hamiltonians on 6-qubit spin chains have no translation - invariant duals .the numerical calculations behind these results are discussed below .first we focus on finding an example of a local hamiltonian with a finite number of duals , or equivalently , a local hamiltonian without infinitesimally nearby duals .that is , we want a hamiltonian that will satisfy the hypotheses of theorem [ thrm : finiteduals ] .the theorem applies within the context of a fixed hilbert space and a fixed subspace of local hamiltonians , such as the subspace of 2-local hamiltonians on 10 qubits ( =2 , =2 , =10 ) .a valid example hamiltonian " must have non - degenerate spectrum , and it must have the property listed in theorem [ thrm : finiteduals ] : for any such that \in s ] or .given an example hamiltonian , application of theorem [ thrm : finiteduals ] implies statement 1 above .we want to choose a particular hamiltonian and check numerically that the above criterion holds . 
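such a check can be carried out by brute force for very small systems. the sketch below is our own simplified version, restricted to the translation-invariant nearest-neighbour class on a five-qubit ring so that the linear algebra stays tiny, and it is not the more efficient method described next: it draws a random hamiltonian in the class, verifies that its spectrum is nondegenerate, and computes the kernel of the map sending a hermitian operator a to the component of i[a, h] lying outside the class, comparing the kernel dimension against the directions that are trivially present.

```python
import numpy as np
from itertools import product
from functools import reduce

P1 = {'I': np.eye(2, dtype=complex),
      'X': np.array([[0, 1], [1, 0]], dtype=complex),
      'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
      'Z': np.array([[1, 0], [0, -1]], dtype=complex)}

n, d = 5, 2**5
labels = [''.join(s) for s in product('IXYZ', repeat=n)]
paulis = np.array([reduce(np.kron, [P1[c] for c in l]) for l in labels])   # (4^n, d, d)

def ti_sum(pattern):
    # translation-invariant sum of a one- or two-site pauli pattern around the ring
    out = np.zeros((d, d), dtype=complex)
    for i in range(n):
        label = ['I'] * n
        for k, c in enumerate(pattern):
            label[(i + k) % n] = c
        out += reduce(np.kron, [P1[c] for c in label])
    return out

# the restricted class S: translation-invariant, nearest-neighbour hamiltonians
patterns = list('XYZ') + [a + b for a in 'XYZ' for b in 'XYZ']
S_basis = [np.eye(d, dtype=complex)] + [ti_sum(p) for p in patterns]       # 13 operators

rng = np.random.default_rng(7)
H = sum(rng.standard_normal() * B for B in S_basis[1:])   # a random member of the class

# (i) nondegenerate spectrum?
evals = np.linalg.eigvalsh(H)
print('nondegenerate spectrum:', float(np.min(np.diff(evals))) > 1e-8)

# (ii) kernel of the map  A -> component of i[A, H] outside S,  with A ranging over
# all hermitian operators, represented here by their real pauli coefficients
def pauli_coeffs(C):
    return np.real(np.einsum('aij,ji->a', paulis, C)) / d

S_vecs = np.array([pauli_coeffs(B) for B in S_basis]).T
Q = np.linalg.qr(S_vecs)[0]                       # orthonormal basis of S in coefficient space

cols = []
for A in paulis:
    c = pauli_coeffs(1j * (A @ H - H @ A))        # i[A, H] is hermitian
    cols.append(c - Q @ (Q.T @ c))                # project out the S component
M = np.array(cols).T
nullity = 4**n - np.linalg.matrix_rank(M, tol=1e-7)

# directions in the kernel for "trivial" reasons: hermitian operators commuting with H
# (2^n of them when the spectrum is nondegenerate) plus the three uniform one-site
# generators sum_i X_i, sum_i Y_i, sum_i Z_i, whose commutators with a translation-
# invariant nearest-neighbour H stay inside S.  this bookkeeping is our own; if the
# computed kernel is larger, this particular draw admits extra infinitesimal
# isospectral deformations and would not serve as an "example hamiltonian".
print('kernel dimension:', int(nullity), '  trivially expected:', 2**n + 3)
```

the count of trivial directions here is specific to the translation-invariant class and to our choice of a periodic ring; for the unrestricted 2-local class the corresponding bookkeeping is different, and the 10-qubit numerics described in this section rely on the more efficient formulation of the criterion discussed next.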
from the proof of theorem [ thrm : finiteduals ] , we see that the criterion is equivalent to asking that , provided that has nondegenerate spectrum .the connected component of is the group of local unitary operators , so .we will assume that does not commute with any local unitaries besides the identity , which is true for generic local hamiltonians , and which is easy to check for a particular .then , and the criterion becomes for a particular choice of , one could compute the rank of the operator directly .however , we will use a more efficient approach to check the above criterion . note that and furthermore , \, | \ , a \in { \textrm{herm}}({\mathcal{h}})\} ] for , , and .fermionic lattice theories do not directly fit this description , because fermionic operators at different sites anti - commute .one might therefore wonder in what sense fermionic theories local " : is commutation necessary for locality ?in fact , commutation relations _ are _ generally necessary to prevent signalling between distant locations .but physical theories with fermions are nonetheless local , because the hamiltonian contains terms with even products of nearby fermion operators , and these terms do commute with each other . by restricting the algebra of observables on the hilbert space to the subalgebra of physical " observables namely , even products of single - fermion operators we can then arrange the physical algebra into mutually commuting subalgebras associated with spatial regions .this general notion is captured by a net of observables , " the basic structure used in algebraic quantum field theory , and one can easily adapt the field - theoretic definition to discretized lattice systems .we equip a hilbert space with a set of spatial sites , like the points of a lattice .crucially , these sites do not correspond to tensor factors of the hilbert space ; they are just abstract labels .subsets are regions , " and we have * definition ( net of observables ) : * a net of observables on hilbert space is a subalgebra of physical " observables , along with a set of sites , and an assignment of a subalgebra to each region . the subalgebras must satisfy , along with 1 . for 2 . =0 $ ] for disjoint regions 3 . for disjoint regions finally , one might also require : 1 .the map is injective .this definition is similar to that used by . in the context of a net of observables ,a local hamiltonian would be one that may be written as a sum of terms in for small regions , perhaps where has the additional structure of a geometric lattice . as a simple example of a net of observables , consider a hilbert space with an ordinary tps .then the naturally associated net of observables would be defined by for . however , the purpose of defining a net of observables is that not all nets must be associated with explicit tensor factorizations of .for instance , given a fermionic theory on a lattice , we would define to be generated by products of even numbers of fermion creation and annihilation operators .then , i.e. only fermion - even operators are considered physical , and the hilbert space has no natural tps . like fermionic theories, gauge theories also lack an ordinary tps , at least when restricting to the physical " hilbert space .but , like fermionic theories , the local structure of gauge theories is suitably generalized by using a net of observables instead . 
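a small sketch (our own jordan-wigner construction on four sites, purely for illustration) of why the parity-even operators can be organised into mutually commuting subalgebras even though the underlying fermion operators anticommute:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_list(ops):
    return reduce(np.kron, ops)

def majoranas(n_sites):
    # 2 * n_sites majorana operators via the jordan-wigner transformation:
    #   gamma_{2j}   = Z ... Z X I ... I
    #   gamma_{2j+1} = Z ... Z Y I ... I
    gammas = []
    for j in range(n_sites):
        string = [Z] * j
        gammas.append(kron_list(string + [X] + [I2] * (n_sites - j - 1)))
        gammas.append(kron_list(string + [Y] + [I2] * (n_sites - j - 1)))
    return gammas

g = majoranas(4)

def anticomm(A, B): return A @ B + B @ A
def comm(A, B):     return A @ B - B @ A

# single majoranas on different sites anticommute rather than commute ...
print(np.allclose(anticomm(g[0], g[6]), 0))        # True
print(np.allclose(comm(g[0], g[6]), 0))            # False
# ... but parity-even ("physical") operators built from an even number of majoranas
# on disjoint sites do commute, which is what lets the even subalgebras of a
# fermionic lattice model satisfy the defining conditions of a net of observables.
O_site0 = 1j * g[0] @ g[1]      # supported on site 0
O_site3 = 1j * g[6] @ g[7]      # supported on site 3
print(np.allclose(comm(O_site0, O_site3), 0))      # True
```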
for simplicity , consider two - dimensional -lattice gauge theory .the full , unphysical " hilbert space is the tensor product of qubit degrees of freedom living on the edges of a square lattice .so the full hilbert space is endowed with a natural tps .however , the physical hilbert is the proper subspace gauge - invariant physical states , and the physical observables consist of gauge - invariant observables on , restricted to the gauge - invariant subspace . in general, a subspace of a space with an explicit tps will not inherit the tps in any natural way , so the physical hilbert space will not have a natural tps . on the other hand, we can construct a net of observables by defining to be the algebra of gauge - invariant observables , with the algebra of gauge - invariant observables local to .gauge theory does not have the property that for disjoint regions , which would be true for any theory with an ordinary tps , showing that an ordinary tps would not have sufficed to capture the local structure of the theory .we have seen that nets of observables provide a generalized notion of tps sufficient to capture the local structure of fermions and gauge theory .do the uniqueness results at the heart of this paper generalize to theories whose local structure is described by a net of observables , rather than a strict tps ? that is , given an abstract hilbert space and hamiltonian , we can ask whether there exists a net of observables on the hilbert space such that the hamiltonian is local . and then , given that such a net exists , we can ask whether it is unique .questions about nets are harder to tackle than the analogous questions about ordinary tps s .to see why , let us reconsider a nuance in the discussion of ordinary tps s that we have not addressed .given some local system of qubits , rather than simply asking whether the system has a dual using a different set of qubit degrees of freedom , we might ask whether there is a dual using qudits .that is , we may want to consider duals that use different types " of tps s . up to unitary equivalence , a tps on a hilbert space is just characterized by the list of dimensions of the subsystems , and these must multiply to the total dimension , so it is easy to characterize the types " of tps s : qubits , or qutrits , or some combination , etc .meanwhile , there are many more types of nets .indeed , on a given hilbert space , there are more possible nets of observables , even up to unitary equivalence .the net corresponding to fermions is different than the net corresponding to gauge theory , because they use different sorts of algebras , and one could construct nets that do not obviously correspond to bosons , fermions , or gauge theories .the harder question then becomes : given some hamiltonian that looks local using a given net , does have any duals that not only use different local degrees of freedom but also use a different type of net ? while the above questions are certainly difficult , the following observation suggests they may tractable .we already know the types of tps are easy to characterize , controlled by the dimension of the hilbert space . 
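concretely, the possible types of tps on a hilbert space of a given finite dimension are just the ways of writing that dimension as a product of factors greater than one; a short enumeration (ours, for illustration):

```python
def tensor_factorizations(d, smallest=2):
    # all multisets of integers >= 2 whose product is d, listed in non-decreasing
    # order; each one is a possible "type" of tps on a d-dimensional hilbert space
    # (all qubits, all qutrits, mixtures of factor sizes, ...)
    out = []
    f = smallest
    while f * f <= d:
        if d % f == 0:
            out += [[f] + rest for rest in tensor_factorizations(d // f, f)]
        f += 1
    out.append([d])      # the trivial "one big factor" case
    return out

for factors in tensor_factorizations(64):
    print(factors)
# prints 11 factorizations, from six qubits [2, 2, 2, 2, 2, 2] through mixed types
# like [2, 4, 8] down to the trivial single factor [64]
```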
given some fixed , the tps can not have too many subsystems , assuming each subsystem has dimension greater than one .the types of nets are similarly controlled .as long as one assumes that the algebra is non - trivial for any region larger than some fixed size , then condition ( 4 ) in the definition of a net ensures that will be exponential in the number of sites .so for given hilbert space , a net can not be constructed with more sites than about , offering some control on the types of nets allowed .none of the results in this paper directly apply to infinite - dimensional systems .there are three types of infinities to consider . first, a theory with non - hardcore bosons will have infinite - dimensional hilbert spaces at each lattice site .second , in the continuum limit , there are infinitely many lattice sites per fixed volume , associated with uv divergences .third , in the large system limit , there are infinitely many lattice sites at fixed spacing .the large - system limit alone may yield interesting complications when attempting to reproduce the finite - dimensional results .the discussion in section [ sec : main ] relies essentially on the well - behavedness of the map from a hamiltonian to its spectrum .however , in the large - system limit , the eigenvalues may vary non - analytically with respect to the hamiltonian , leading famously to phase transitions .another result of non - analyticity is that properties which are true generically for finite - size systems may not be true generic infinite - size systems .for instance ,finite - size local hamiltonians are generically non - degenerate ; that is , a random local perturbation of a degenerate local hamiltonian will break the degeneracy .but certain infinite - size lattice systems have topological order , with a ground state degeneracy that is robust to any local perturbation .in particular , there exist open neighborhoods in the space of infinite two - dimensional lattice hamiltonians such that all hamiltonians in the neighborhood have degenerate spectrum .therefore , one can not navely rule out the possibility that there exists a region of nonzero volume in the space of infinite - size local hamiltonians where the hamiltonians all have duals .we began by formally defining a tensor product structure ( tps ) on a hilbert space , allowing one to pose clear questions about the existence of tps s for which a hamiltonian is local .first we observed that for some fixed hamiltonian , questions about the existence of a tps in which is local may be translated into questions about the existence of local hamiltonians with the same spectrum as . with this perspective , we showed that almost all hamiltonians do _ not _ have any tps for which the hamiltonian is local .equivalently , generic hamiltonians are not isospectral to any local hamiltonian .on the other hand , physical systems are distinguished by the property that they _ are _ local in some tps , or at least approximately so .we therefore considered hamiltonians known to have some local tps and argued that the local tps is generically unique .equivalently , a generic local hamiltonian is uniquely determined by its spectrum .put a third way , generic local hamiltonians do not have duals . 
"the argument for this claim involves two parts : first , we proved that if there exists a single example of a local hamiltonian without any duals , then almost all local hamiltonians have no duals .second , we found numerical examples of local hamiltonians for small systems that do not have any duals , effectively proving that almost all hamiltonians on these systems do not have any duals .we speculated that these results may be extended to arbitrarily large finite - dimensional systems , with the possibility of interesting subtleties in the infinite - size limit .finally , we presented a generalization of a tps , suitable for fermions and gauge theories .further generalizations discussed below address situations where only a subspace of the full hilbert space is equipped with a tps , perhaps with relevance to the bulk side of holographic theories .in this paper we argue that given the spectrum of a hamiltonian that is local in some tps , then generically the local tps is uniquely determined .in fact , one can also determine the local tps by merely knowing the time evolution of a single generic state in the hilbert space , without otherwise knowing .the time evolution of a generic state has the form for each of the eigenstates with energy . taking the fourier transform of with respect to time, we can determine the spectrum and hence the local tps .however , even with a known tps , much remains to be understood about the unitary evolution of states .much research is dedicated to the subject of expressing the wavefunction as a sum of decoherent classical branches .this research generally assumes the existence of some underlying tps and it has been recognized that it would be preferable to have the tps emerge naturally in the same way as the branches ; our results suggest a way to do that . given both a low - energy state and a tps , recent work suggests one can construct a metric on the discrete sites of the tps .the distance assigned between sites is dictated by the mutual information between the subsystems , giving a geometry " that depends on the state .it would be possible to combine this approach with the work in this paper , determining both a tps _ and _ a notion of distance between subsystems , starting from just the spectrum .first one determines the most local tps , then finds the ground state of the hamiltonian with respect to that tps , and finally uses the mutual information of the ground state to define distances between subsystems . in this paper, we already associate a graph to the tps , based on which sites are directly interacting under the hamiltonian .the graph approximately describes the topological structure of the space , while the proposal of would assign lengths to the edges of the graph , upgrading the topological data to geometric data .however , when the hamiltonian is already known , it may be more natural to rely on dynamical notions of distance like the light - cone or butterfly velocity , rather than asking about the mutual information of a state at fixed time .recent progress has been made on the construction of universal quantum simulators .in particular , consider a finite lattice system in spatial dimensions , governed by local hamiltonian on hilbert space . 
then one can always construct a local , two - dimensional spin system with hamiltonian on hilbert space , such that the low energy subspace of reproduces the spectrum of with arbitrary precision .because the simulator requires many auxiliary degrees of freedom , the number of lattice sites used in will be larger than the number of sites present in the original system , so and the systems are not dual in the strict sense used above .however , one might consider the notion of a tps for a subspace of the full hilbert space .restricting attention to the low energy subspace of the device , one could in principle find the tps corresponding to the simulated system .this situation may be analogous to the ads / cft duality in which the tps of the bulk gravity theory only describes a subspace of the full hilbert space .this is further discussed in section [ sec : qg ] below . until now, we have avoided the question of why to prefer one tps over another .instead , we have simply asserted that we are interested in tps s for which dynamics appear local .if one treats the wavefunction and its hamiltonian without any preferred basis as the only fundamental data of a quantum system , then _ a priori _ all tps s are equally valid descriptions of the system ., the radical view would suggest that the tps of the simulated system has the same ontological status as the tps of the simulation device . ] because the world around us has local interactions , it is natural that we are interested in tps s with local dynamics .however , one might ask why our experience privileges a particular tps for the universe namely , the tps associated with spatial degrees of freedom ?one possible answer is that local interactions are an essential ingredient for localized observers .for contrast , consider some randomly chosen tps , in which interactions are non - local .a hypothetical observer localized " in this tps will quickly become delocalized , so perhaps observers in such a tps can not exist for extended periods . instead , only a tps with local dynamics can naturally describe localized observers , and their experience will privilege that local tps . 
the existence of localized observers may also require more than just a local tps .for example , even local interactions may be strongly coupled and chaotic , such that localized objects quickly become maximally entangled with their environment .one might expect that such dynamics do not allow localized observers , because such observers would quickly become delocalized despite having only local interactions .a measure of entanglement growth was considered in as a criterion for choosing a tps , though the author restricted the analysis to tps s related by bogoliubov transformations .when searching for a tps with slow entanglement growth , one must decide for which class of states to consider the entanglement .one possibility is to consider random product states , while another natural choice would be low - energy states .the computational complexity of a unitary operator the number of local quantum gates needed to contstruct the operator is an important notion in quantum information theory and features in discussions of quantum gravity .the complexity of an operator depends crucially on the choice of tps .given a fixed tps , random unitary operators will have complexity that is exponential in system size , as demonstrated by a dimension - counting argument .moreover , by an argument similar to that of section [ sec : existence ] , generic unitary operators will have no tps in which they have low complexity .however , if a hamiltonian is local in some tps , the time - evolution operator will have much smaller complexity in that tps , at least for times sub - exponential in the system size . because the locality of determines the growth rate of complexity of , at least for sufficiently small times , an alternate description of the local tps is the tps in which has minimal complexity at small times .the most well - known of the dualities in quantum gravity is the ads / cft correspondence between strongly coupled super yang - mills in ( the boundary theory ) and weakly coupled quantum gravity in ( the bulk theory ) .this duality is unlikely to satisfy the precise definition of duality used in this paper , even using the generalization of section [ sec : gauge ] . in particular ,the tps in the bulk is only defined for a subspace of states of the complete hilbert space .these are states associated with small perturbations of the geometry around a flat ads background .however , when the state contains a black hole , for example , it does not make sense to talk about the same approximately - local degrees of freedom that existed in flat space .the discrepancy is especially manifest in tensor network toy models of ads / cft , where the model of a black hole involves tearing out tensors from the network .this model completely removes some of the bulk lattice sites , and instead the ` correct ' tps for the subspace of states containing the black hole consists of the remaining bulk sites , together with new lattice sites at the boundary of the black hole . 
describing different subsets of states in the hilbert space with different tps s in a coherent way seems to require yet another generalization tensor product structures .the question of whether the boundary theory or bulk theory is more local `` is somewhat subtle .the bulk gravitational theory will necessarily have small non - local interactions , but it also has far fewer degrees of freedom at each lattice site '' than does the boundary theory , where there is a large matrix of operators associated to each site .the bulk tps has a much smaller algebra of local operators , since it divides the hilbert space up into much smaller subsystems .one might therefore describe the bulk tps as more local than the boundary tps when considering the low energy subspace , even though the hamiltonian is only approximately local with respect to the bulk tps . a toy model for ads / cft, the sachdev - ye - kitaev model , is particularly relevant to the discussions in this paper .the hamiltonian of the theory is comprised of majorana fermions with all - to - all -local coupling terms - local , it is completely geometrically non - local , with every site interacting with every other site . ] : the coefficients are sampled from i.i.d .random gaussians , describing an ensemble of hamiltonians .since this ensemble is the fermionic analog of a class of bosonic local hamiltonians considered in this paper , we might expect that generic syk hamiltonians would not have any local duals . on the other hand ,when one disorder - averages the syk hamiltonian and takes the expectation of observables over the probability distribution for , one can remarkably rewrite the theory in terms of degrees of freedom that include a type of einstein - dilaton gravity in dimensions . as a consequence , one can compute the spectrum of the bulk gravity theory by computing the spectrum of the majorana theory ( in a particular limit ) , which is comparatively easier to treat . in accordance with our intuition , it is likely that the complete description of the dynamics is not even approximately local at scales smaller than the 1 + 1d ads scale .nonetheless , it is interesting that this alternative description is able to exist at all , when the hamiltonian itself is generic within some class of local hamiltonians .there are many open questions about whether our results extend to the generalized notion of tps suitable for fermions and gauge theories discussed in section [ sec : gauge ] , as well as to infinite - dimensional systems or approximately local tps s . 
furthermore , while we have provided evidence that recovery of the tps from the spectrum of spin chains is generically possible in principle , we have not discussed practical measures to determine whether a local tps exists for a given spectrum or how to find it apart from numerically searching through possible tps s .it appears that finding the most local tps for a given spectrum is computationally impractical ( using classical computation ) for all but the smallest hilbert spaces , but it is possible that there may be very good heuristic algorithms .it would be interesting if there was , in contrast , an efficient quantum algorithm to find local tps s .we would like to thank daniel bump , dylan butson , benjamin lim , edward mazenc , xiao - liang qi , semon rezchikov , leonard susskind , arnav tripathy , ravi vakil , and michael walter for their valuable discussions and support .we are also especially grateful to patrick hayden and frances kirwan for their valuable insights and feedback , and for reviewing this manuscript .jc is supported by the fannie and john hertz foundation and the stanford graduate fellowship program .dr is supported by the stanford graduate fellowship program .maldacena , juan . the large n limit of superconformal field theories and supergravity . " _ aip conference proceedings conf-981170_. eds .ricardo e. gamboa saravi , horacio falomir , and fidel a. schaposnik .vol . 484 .1 . aip , 1999 .a. kitaev , a simple model of quantum holography . "+ http://online.kitp.ucsb.edu/online/entangled15/kitaev/ + http : //online.kitp.ucsb.edu / online / entangled15/kitaev2/ + talks at kitp , april 7 , 2015 and may 27 , 2015 .maldacena , juan , douglas stanford , and zhenbin yang . conformal symmetry and its breaking in two - dimensional nearly anti - de sitter space . "_ progress of theoretical and experimental physics _ 2016.12 ( 2016 ) : 12c104 .
essential to the description of a quantum system are its local degrees of freedom , which enable the interpretation of subsystems and dynamics in the hilbert space . while a choice of local tensor factorization of the hilbert space is often implicit in the writing of a hamiltonian or lagrangian , the identification of local tensor factors is not intrinsic to the hilbert space itself . instead , the only basis - invariant data of a hamiltonian is its spectrum , which does not manifestly determine the local structure . this ambiguity is highlighted by the existence of dualities , in which the same energy spectrum may describe two systems with very different local degrees of freedom . we argue that in fact , the energy spectrum alone almost always encodes a unique description of local degrees of freedom when such a description exists , allowing one to explicitly identify local subsystems and how they interact . in special cases , multiple dual local descriptions can be extracted from a given spectrum , but generically the local description is unique .
countries were selected by data availability . for each country we require availability of at least one aggregation level where the average population per territorial unit . this limit for chosen to include a large number of countries , that have a comparable level of data resolution .we use data from recent parliamentary elections in austria , canada , czech republic , finland , russia ( 2011 ) , spain and switzerland , the european parliament elections in poland and presidential elections in the france , romania , russia ( 2012 ) and uganda . herewe refer by `` unit '' to any incarnation of an administrative boundary ( such as districts , precincts , wards , municipals , provinces , etc . ) of a country on any aggregation level .if the voting results are available on different levels of aggregation , we refer to them by roman numbers , i.e. poland - i refers to the finest aggregation level for poland , poland - ii to the second finest , and so on . for each unit on each aggregation level for each countrywe have the data of the number of eligible persons to vote , valid votes and votes for the winning party / candidate .voting results were obtained from official election homepages of the respective countries , for more details see si tab.s1 .units with an electorate smaller than 100 are excluded from the analysis , to prevent extreme turnout and vote rates as artifacts from very small communities .we tested robustness of our findings with respect to the choice of a minimal electorate size and found that the results do not significantly change if the minimal size is set to 500 .the histograms for the 2d - vote - turnout distributions ( vtds ) for the winning parties , also referred to as `` fingerprints '' , are shown in fig.[figure1 ] . of the winning parties as rescaled distributions with zero - mean and unit - variance .large deviations from other countries can be seen for uganda and russia with the plain eye . for more detailed resultssee tab.s3.,width=328 ] it has been shown that by using an appropriate re - scaling of election data , the distributions of votes and turnouts approximately follow a gaussian .let be the number of votes for the winning party and the number of voters in any unit .a re - scaling function is given by the _ logarithmic vote rate _, . in units where ( due to errors in counting or fraud ) or is not defined , and the unit is omitted in our analysis .this is a conservative definition , since districts with extreme but feasible vote and turnout rates are neglected ( for instance , in russia 2012 there are 324 units with 100% vote and 100% turnout ) . to motivate our parametric model for the vtd , observe that the vtd for russia and uganda in fig.[figure1 ] are clearly bimodal , both in turnout and votes .one cluster is at intermediate levels of turnout and votes .note that it is smeared towards the upper right parts of the plot .the second peak is situated in the vicinity of the 100% turnout , 100% votes point .this suggests two modes of fraud mechanisms being present , _ incremental _ and _ extreme _ fraud .incremental fraud means that with a given rate ballots for one party are added to the urn and votes for other parties are taken away .this occurs within a fraction of units . 
in the election fingerprints in fig.[figure1 ] these units are those associated with the smearing to the upper right side .extreme fraud corresponds to reporting a complete turnout and almost all votes for a single party .this happens in a fraction of units .these form the second cluster near 100% turnout and votes for the winning party . ) . for switzerlandthe fair and fitted model are almost the same .the results for russia and uganda can be explained by the model assuming a large number of fraudulent units.,width=328 ] for simplicity we assume that within each unit turnout and voter preferences can be represented by a gaussian distribution with the mean and standard deviation taken from the actual sample , see si fig.s1 .this assumption of normality is not valid in general .for example the canadian election fingerprint of fig.[figure1 ] is clearly bimodal in vote preferences ( but not in turnout ) . in this case , the deviations from approximate gaussianity are due to a significant heterogeneity within the country . in the particular case of canadathis is known to be due to the mix of the anglo- and francophone population .normality of the observed vote and turnout distributions is discussed in the si , see tab.s2 .let be the number of valid votes in unit .the first step in the model is to compute the empirical turnout distribution , , and the empirical vote distribution , , over all units from the election data . to compute the _ model _ vtd the following protocolis then applied to each unit . * for each , take the electorate size from the election data .* model turnout and vote rates for are drawn from normal distributions .the mean of the model turnout ( vote ) distribution is estimated from the election data as the value that maximizes the empirical turnout ( vote ) distribution .the model variances are also estimated from the width of the empirical distributions , see si and fig.s1 for details . *_ incremental fraud_. with probability ballots are taken away from both the non - voters and the opposition and are added to the winning party s ballots .the fraction of ballots which are shifted to the winning party is again estimated from the actual election data . * _ extreme fraud_. with probability almost all ballots from the non - voters and the opposition are added to the winning party s ballots .the first step of the above protocol ensures that the actual electorate size numbers is represented in the model .the second step guarantees that the overall dispersion of vote and turnout preferences of the country s population are correctly represented in the model . given nonzero values for and , incremental and extreme fraudare then applied in the third and fourth step , respectively . for a complete specification of these fraud mechanisms ,see the si .values for and are reverse engineered from the election data in the following way .first , model vtds are generated according to the above scheme , for each combination of values , where and .we then compute the point - wise sum of the square difference of model and observed vote distributions for each pair and extract the pair giving the minimal difference .this procedure is repeated for 100 iterations , leading to 100 pairs of fraud parameters . in the followingwe report the average values of these and values , respectively , and their standard deviations . 
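a deliberately simplified sketch of this per - unit protocol is given below . the means and widths of the model turnout and vote distributions and the two fraud intensities are passed in as plain numbers rather than estimated from the empirical distributions as described above and in the si , so all parameter names and the exact form of the fraud shifts are illustrative assumptions .

```python
import numpy as np

def simulate_model_vtd(electorate, turnout_mode, turnout_width,
                       vote_mode, vote_width, f_i, f_e,
                       fraud_shift=0.5, rng=None):
    """draw model turnout and winner-vote fractions for every unit.

    f_i : probability of incremental fraud in a unit
    f_e : probability of extreme fraud in a unit
    fraud_shift : fraction of the remaining (non-winner) ballots moved to the
                  winner under incremental fraud (a simplification of the si rule)
    only the number of units is taken from `electorate` here; the full model
    additionally weights units by their electorate size.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_units = len(electorate)
    turnout = np.clip(rng.normal(turnout_mode, turnout_width, n_units), 0.0, 1.0)
    votes = np.clip(rng.normal(vote_mode, vote_width, n_units), 0.0, 1.0)

    u = rng.random(n_units)
    incremental = u < f_i
    extreme = (u >= f_i) & (u < f_i + f_e)

    # incremental fraud: move part of the non-winner ballots (non-voters and
    # opposition) to the winning party, which inflates both turnout and votes
    votes[incremental] += fraud_shift * (1.0 - votes[incremental])
    turnout[incremental] += fraud_shift * (1.0 - turnout[incremental])

    # extreme fraud: report (almost) complete turnout and votes for the winner
    votes[extreme] = rng.uniform(0.95, 1.0, extreme.sum())
    turnout[extreme] = rng.uniform(0.95, 1.0, extreme.sum())
    return turnout, votes
```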
for more details see si .fig.[figure1 ] shows 2-d histograms ( vtds ) for the number of units for a given fraction of voter turnout ( x - axis ) and for the percentage of votes for the winning party ( y - axis ) .results are shown for austria , canada , czech republic , finland , france , poland , romania , russia , spain , switzerland and uganda .for each of these countries the data is shown on the finest aggregation level , where .these figures can be interpreted as fingerprints of several processes and mechanisms leading to the overall election results . for russia and ugandathe shape of these fingerprints differ strongly from the other countries .in particular there is a large number of territorial units ( thousands ) with approximately 100% percent turnout and at the same time about 100 % of votes for the winning party . in fig.[sifigurecoll ]we show the distribution of for each country . roughly , to first order the data from different countries collapse to an approximate gaussian , as previously observed .clearly , the data for russia falls out out of line .skewness and kurtosis for the distributions of are listed for each data - set and aggregation level in tab.s3 .most strikingly , the kurtosis of the distributions for russia ( 2003 , 2007 , 2011 and 2012 ) exceed the kurtosis of each other country on the coarsest aggregation level by a factor of two to three .values for the skewness of the logarithmic vote rate distributions for russia are also persistently below the values for each other country .note that for the vast majority of the countries skewness and kurtosis for the distribution of are in the vicinity of 0 and 3 , respectively ( which are the values one would expect for normal distributions ) . however , the moments of the distributions do depend on the data aggregation level .fig.[momentaggregate ] shows skewness and kurtosis for the distributions of for each election on each aggregation level . by increasing the data resolution ,skewness and kurtosis for russia decrease and approach similar values as observed in the rest of the countries , see also si tab.s3 .these measures depend on the data resolution and thus can not be used as unambiguous signals for statistical anomalies . as will be shown ,the fraud parameters and do _ not _ significantly depend on the aggregation level or total sample size .estimation results for and are given in tab.s3 for all countries on each aggregation level .they are zero ( or almost zero ) in all of the cases except for russia and uganda . in the right column of fig.[figure2 ] we show the model results for russia ( 2011 and 2012 ) , uganda and switzerland for .the case where both fraud parameters are zero corresponds to the absence of incremental and extreme fraud mechanisms in the model and can be called the fair election case . in the middle column of fig.[figure2 ] we show results for the estimated values of and .the left column shows the actual vtd of the election .values of and significantly larger than zero indicate that the observed distributions may be affected by fraudulent actions . to describe the smearing from the main peak to the upper right corner which is observed for russia and uganda , an incremental fraud probability around needed for _ united russia _ in 2011 and in 2012 .this means fraud in about 64% of the units in 2011 and 39% in 2012 . 
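the moment comparison across aggregation levels described above can be reproduced with a few lines ; scipy s kurtosis with fisher=false returns 3 for a normal distribution , matching the convention used in the text . the mapping from aggregation - level labels to arrays of logarithmic vote rates is hypothetical .

```python
import numpy as np
from scipy.stats import skew, kurtosis

def moments_by_level(levels):
    """levels maps a label such as 'poland-i' to an array of logarithmic vote rates."""
    table = {}
    for label, nu in levels.items():
        nu = np.asarray(nu, dtype=float)
        table[label] = {
            "n_units": nu.size,
            "skewness": skew(nu),
            "kurtosis": kurtosis(nu, fisher=False),   # normal distribution -> 3
        }
    return table
```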
in the second peak close to 100% turnoutthere are roughly 3,000 units with 100% of votes for united russia in the 2011 data , representing an electorate of more than two million people .best fits yield for 2011 and for 2012 , i.e. two to three percent of all electoral units experience extreme fraud .a more detailed comparison of the model performance for the russian parliamentary elections of 2003 , 2007 , 2011 and 2012 is found in fig.s2 .fraud parameters for the uganda data in fig.[figure2 ] are found to be and .a best fit for the election data from switzerland gives .these results are drastically more robust to variations of the aggregation level of the data than the previously discussed distribution moments skewness and kurtosis , fig.[fraudaggregate ] and tab.s3 . even if we aggregate the russian data up to the coarsest level of federal subjects ( approximately 85 units , depending on the election ) , estimates are still at least two standard deviations above zero , estimates more than ten standard deviations .similar observations hold for uganda . for no other country , on no other aggregation level , such deviations are observed .the parametric model yields similar results for the same data on different levels of aggregation , as long as the values maximizing the empirical vote ( turnout ) distribution and the distribution width remains invariant .in other words , as long as units with similar vote ( turnout ) characteristics are aggregated to larger units , the overall shapes of the empirical distribution functions are preserved and the model estimates do not change significantly .note that more detailed assumptions about possible mechanisms leading to large heterogeneity in the data ( such as the _ qubcois _ in canada or voter mobilization in the helsinki region in finland , see si ) may have an effect on the estimate of .however , these can under no circumstances explain the mechanism of extreme fraud .results for elections in sweden , uk and usa , where voting results are only available on a much coarser resolution ( ) , are given in tab.s4 .another way to visualize the intensity of election irregularities is the cumulative number of votes as a function of the turnout , fig.[figure3 ] . for each turnout levelthe total number of votes from units with this or lower levels are shown .each curve corresponds to the respective election winner in a different country , with average electorate per unit of comparable order of magnitude .usually these cdfs level off and form a plateau from the party s maximal vote count on . 
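these cumulative curves can be computed by sorting the units by turnout and accumulating the winner s absolute vote counts ; a minimal sketch :

```python
import numpy as np

def cumulative_votes_vs_turnout(turnout, winner_votes):
    """total number of winner votes coming from units with turnout <= x, for increasing x."""
    turnout = np.asarray(turnout, dtype=float)
    winner_votes = np.asarray(winner_votes, dtype=float)
    order = np.argsort(turnout)
    x = turnout[order]                      # turnout levels in increasing order
    y = np.cumsum(winner_votes[order])      # cumulative winner votes up to that turnout
    return x, y
```

for a fair election the resulting curve levels off well before 100% turnout ; a persistent rise at the right end reproduces the boost phase discussed next .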
againthis is not the case for russia and uganda .both show a boost phase of increased extreme fraud toward the right end of the distribution ( red circles ) .russia never even shows a tendency to form a plateau .as long as the empirical vote distribution functions remain invariant under data aggregation as discussed above , the shape of these cdfs will be preserved too .note that fig.[figure3 ] .demonstrates that these effects are decisive for winning the 50% majority in russia 2011 .we demonstrate that it is not sufficient to discuss the approximate normality of turnout , vote or logarithmic vote rate distributions , to decide if election results may be corrupted or not .we show that these methods can lead to ambiguous signals , since results depend strongly on the aggregation level of the election data .we developed a model to estimate parameters quantifying to which extent the observed election results can be explained by ballot stuffing .the resulting parameter values are shown to be insensitive to the choice of the aggregation level .note that the error margins for values start to increase by decreasing below 100 , see fig.[fraudaggregate]d , whereas estimates stay robust even for very small .it is imperative to emphasize that the shape of the fingerprints in fig.[figure1 ] will deviate from pure 2-d gaussian distributions also as a result of non - fraudulent mechanisms , but due to heterogeneity in the population .the purpose of the parametric model is to quantify to which extent ballot stuffing and the mechanism of extreme fraud may have contributed to these deviations , or if their influence can be ruled out on the basis of the data . for the elections in russia and ugandathey can not be ruled out . as shown in fig.s2 , assumptions of their wide - spread occurrenceseven allow to reproduce the observed vote distributions to a good degree . in conclusionit can be said with almost certainty that an election does not represent the will of the people , if a substantial fraction ( ) of units reports a 100% turnout with almost all votes for a single party , and/or if any significant deviations from the sigmoid form in the cumulative distribution of votes versus turnout are observed .another indicator of systematic fraudulent or irregular voting behavior is an incremental fraud parameter which is significantly greater than zero on each aggregation level .should such signals be detected it is tempting to invoke g.b .shaw who held that `` [ d]emocracy is a form of government that substitutes election by the incompetent many for appointment by the corrupt few . ''we acknowledge helpful discussions and remarks by erich neuwirth and vadim nikulin .we thank christian borghesi for providing access to his election datasets and the anonymous referees for extremely valuable suggestions .diamond , m.f .plattner ( 2006 ) .electoral systems and democracy p.168 .johns hopkins university press , 2006 .f. lehoucq ( 2003 ) , electoral fraud : causes , types and consequences ._ annual review of political science _ * 6 * , p. 233alvarez , t.e . hall , s.d .hyde ( 2008 ) .election fraud : detecting and deterring electoral manipulations .brookings institution press , washington d.c . ,f. benford ( 1938 ) , the law of anomalous numbers . _proceedings of the american philosophical society _ * 78 * ( 4 ) , p. 551mebane ( 2006 ) , election forensics : vote counts and benford s law . summer meeting of the political methodology society , uc - davis .mebane , k. 
kalinin ( 2009 ) , comparative election fraud detection , apsa 2009 toronto meeting . c. breunig , a. goerres ( 2011 ) , searching for electoral irregularities in an established democracy : applying benford s law tests to bundestag elections in unified germany , _ electoral studies _ * 30 * ( 3 ) , p. 534 - 45 .f. cantu , s.m . saiegh ( 2011 ) , fraudulent democracy ?an analysis of argentina s infamous decade using supervised machine learning , _ polit ._ * 19 * ( 4 ) p. 409 - 33 .b. beber , a. scacco ( 2012 ) , what the numbers say : a digit - based test for election fraud , _ polit .* 20 * ( 2 ) p. 211 - 34 .deckert , m. myagkov , p.c .ordeshook ( 2011 ) , benford s law and the detection of election fraud , _ polit ._ * 19 * ( 3 ) p. 245mebane ( 2011 ) comment on `` benford s law and the detection of election fraud '' , _ polit .* 19 * ( 3 ) p. 269 - 72 .mebane , j.s .sekhon ( 2004 ) , robust estimation and outlier detection for overdispersed multinomial models of count data , _ ajps _ , * 48 * ( 2 ) , p. 392sukhovolsky , a.a .sobyanin ( 1994 ) , vyboryi referendom 12 dekabrya 1993 g. v. rossii : politicheskie itogi , perskeptivy , dostovemost rezultatov , moksva - arkhangelskoe .m. myagkov , p.c .ordeshook ( 2008 ) , russian election : an oxymoron of democracy .caltech / mit voting technology project , wp63 .m. myagkov , p.c .ordershook , d. shakin ( 2009 ) .the forensics of election fraud : russia and ukraine .cambridge university press ; ( 2005 ) fraud or fairytales : russia and ukraine s electoral experience , _ post soviet affairs _ * 21 * ( 2 ) , 91 - 131 .s. shpilkin ( 2009 ) , statistical investigation of the results of russian elections in 2007 - 2009 , _ troitskij variant . science _* 21 * ( 40 ) , 1 .s. shpilkin ( 2011 ) , mathematics of elections , _ troitskij variant . science _ * 25 * ( 94 ) , 2 . d. kobak , s. shpilkin , m. s. pshenichnikov ( 2012 ) , statistical anomalies in 2011 - 2012 russian elections revealed by 2d correlation analysis , _ arxiv preprint _ arxiv : 1205.0741 . c. castellano , s. fortunato , v. loreto ( 2009 ) , statistical physics of social dynamics , _ rev .phys . _ * 81 * , p. 591 - 646 .costa filho , m.p .almeida , j.e .moreira , j.s .andrade jr ( 2003 ) , brazilian elections : voting for a scaling democracy , _ physica a _ * 322 * 1 - 4 , p. 698 - 700 .lyra , u.m.s .costa , r.n .costa filho , j.s .andrade jr ( 2003 ) , generalized zipf s law in proportional voting processes , _ europhys .lett . _ * 62 * , 131 .mantovani , h.v .ribeiro , m.v .moro , s. picoli jr .mendes ( 2011 ) , scaling laws and universality in the choice of election candidates , _ europhys . lett . _* 96 * , 48001 .s. fortunato , c. castellano ( 2007 ) , scaling and universality in proportional elections , _ phys .lett . _ * 99 * , 138701 . c. borghesi , j.p .bouchaud ( 2010 ) , spatial correlations in vote statistics : a diffusive field model for decision - making , _ eur .j. b _ * 75 * , 395 - 404 . j. agnew ( 1996 ) , mapping politics : how context counts in electoral geography , _ political geography _ * 15 * ( 2 ) , 129 - 46 .descriptive statistics and official sources of the election results are shown in tab.s1 .the raw data will be made available for download at http://www.complex - systems.meduniwien.ac.at/. they report election results of parliamentary ( austria , canada , czech republic , finland , russia , spain and switzerland ) , european ( poland ) or presidential ( france , romania , russia , uganda ) elections on at least one aggregation level . 
in the rare circumstances where electoral districts report more valid ballots than registered voters , we work with a turnout of 100% .territorial units with an electorate less than hundred are omitted at each point of the analysis , to avoid extreme vote and turnout rates as spurious results due to small communities .the countries to include in this work have been chosen on the basis of data availability .a country is included , if the voting results are available in electronic form on an aggregation level where a number of vote eligible persons comprises one territorial unit .required data is the number of vote eligible persons , the number of valid votes and the number of votes for the winning party / candidate for each unit .a country is separated into electoral units , each having an electorate of people and in total valid votes .the fraction of valid votes for the winning party in unit is denoted .the average turnout over all units , , is given by with standard deviation , the mean fraction of votes for the winning party is with standard deviation .the mean values and are typically close to but not identical to the values which maximize the empirical distribution function of turnout and votes over all units .let be the number of votes where the empirical distribution function assumes its ( first local ) maximum ( rounded to entire percents ) , see fig .s[sifiguremeth ] .similarly is the turnout where the empirical distribution function of turnouts takes its ( first local ) maximum .the distributions for turnout and votes are extremely skewed to the right for uganda and russia which also inflates the standard deviations in these countries , see tab .s2 . to account for this a left - sided ( right - sided ) meandeviation ( ) from is introduced . can be regarded as the _ incremental fraud width _ , a measurable parameter quantifying how intense the vote stuffing is .this contributes to the smearing out of the main peaks in the election fingerprints , see fig.1 in the main text .the larger , the more inflated the vote results due to urn stuffing , in contrast to which quantifies the scatter of the voters actual preferences .they can be estimated from the data by similarly the _extreme fraud width _ can be estimated , i.e. the width of the peak around 100% votes .we found that describes all encountered vote distributions reasonably well .for a visualization of , and see fig .s[sifiguremeth ] .while and measure in how many units incremental and extreme fraud occur , and quantify how intense these activities are , if they occur . to get an estimate for the width of the distribution of turnouts over territorial unit which is free of possible fraudulent influences , the _ turnout distribution width _ is calculated from electoral districts which have both and , that is .incremental fraud is a combination of two processes : stuffing ballots for one party into the urn and re - casting or deliberately wrong - counting ballots from other parties ( e.g. erasing the cross ) .which one of these two processes dominates is quantified by the _ deliberate wrong counting parameter _for the wrong - counting process dominates , for the urn stuffing mechanism is prevalent . in the following a normal distributed random variable with mean and standard deviation .the model is specified by the following protocol , which is applied to each district . *pick a unit with electorate taken from the data . *the model turnout of unit , , is . * a fraction of people vote for the winning party . 
* with probability fraud takes place . in this casethe unit is assigned a fraud intensity .values for are only accepted if they lie in the range .this is the fraction of votes not cast , , which are added to the winning party .votes for the opposition are wrong counted for the winning party with a rate ( where is an exponent ) . to summarize , if incremental fraud takes place the winning party receives votes .* with probability extreme fraud takes place . in this case opposition votesare canceled and added to the winning party with probability ( i.e. the above with replacing ) .acceptable values for are again from the range . , , and are estimated from the election results . is the maximum of the distribution function . measures the distribution width of values to the left of , i.e. smaller than . the incremental fraud with measures the distribution width of values to the right of , i.e. larger than .the extreme fraud width is the width of the peak at 100% votes.,width=328 ] the parameters for incremental and extreme fraud , and , as well as the deliberate wrong counting parameter , are estimated by a goodness - of - fit test .let be the empirical distribution function of votes for the winning party ( the data is binned with one bin corresponding to one percent ) over all territorial units .the distribution function for the model units is calculated for each set of values where .we report values for the fraud parameters where the statistic assumes its minimum , averaged over 100 realizations over the parameter space , see tab.s3 for and .the extreme fraud parameter is zero ( within one standard deviation ) for almost all elections except russia ( 2003 , 2007 , 2011 and 2012 ) and uganda .for very small ( ) estimates for become less robust .these are also the only elections where the incremental fraud parameter is not close to zero .values for for the russian elections are ( 2003 ) , ( 2007 ) , ( 2011 ) , ( 2012 ) , and for uganda .results for from countries where is close to zero can not be detected in a robust way and are superfluous , since there are ( almost ) no deviations from the fair election case .special care is needed in the interpretation of and values in countries where election units contain several polling stations .it may be the case that extreme fraud takes only in a subset of the polling stations within a unit place . in that caseextreme fraud would be indistinguishable from the incremental fraud mechanism .it is hard to construct other plausible mechanisms leading to a large number of territorial units having 100% turnout and votes for a single party than urn stuffing .the case is not so clear for the smeared out main cluster . in some cases , namely canada and finland, this cluster also takes on a slightly different form .this effect clearly does not inflate the turnout as much as it is the case in russia and uganda , but it is nevertheless present . 
in canada the distribution of vote preferencesis bimodal , with one peak around 50% and one around 10% ( of the vote eligible population ) , but with similar turnout levels .this is a result of a large - scale heterogeneity in the data : english and french canada .votes are shown for the winning conservatives .looking at their results by province , they tallied 16.5% votes cast in quebec , but more than 40% in eight of the remaining twelve other provinces .as a consequence the logarithmic vote rate kurtosis becomes inflated .however , these statistical deviations are perfectly distinguishable from the traces of ballot stuffing , resulting in vanishing fraud parameters on all aggregation levels .another possible mechanism leading to irregularities in the voting results is successful voter mobilization .this may lead to a correlation between turnout and a party s votes .the finland elections , for example , where marked by radical campaigns by the true finns .they managed to mobilize evenly spread out across the country , with the exception of the helsinki region , where the winning national coalition party performed significantly better than in the rest of the country .
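returning to the estimation of the fraud parameters , the reverse engineering described above amounts to a grid search : simulate the model vote distribution for each ( incremental , extreme ) fraud pair and keep the pair minimising the point - wise squared difference to the observed distribution . the sketch below takes the simulator as a callable ( for instance the protocol sketch given earlier ) ; the one - percent binning and the grid are illustrative choices .

```python
import numpy as np

def fit_fraud_parameters(simulate, observed_votes, grid_fi, grid_fe, bins=100):
    """grid search for the (f_i, f_e) pair minimising the squared histogram difference.

    simulate(f_i, f_e) must return the per-unit winner-vote fractions of one
    model realisation; observed_votes holds the empirical per-unit fractions.
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    target, _ = np.histogram(np.asarray(observed_votes, dtype=float),
                             bins=edges, density=True)
    best = (None, None, np.inf)
    for f_i in grid_fi:
        for f_e in grid_fe:
            model_votes = simulate(f_i, f_e)
            h, _ = np.histogram(model_votes, bins=edges, density=True)
            err = np.sum((h - target) ** 2)   # point-wise squared difference
            if err < best[2]:
                best = (f_i, f_e, err)
    return best

# usage sketch: repeat the search over many stochastic realisations and report
# the mean and standard deviation of the recovered (f_i, f_e) pairs, as in the text.
```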
democratic societies are built around the principle of free and fair elections , that each citizen s vote should count equal . national elections can be regarded as large - scale social experiments , where people are grouped into usually large numbers of electoral districts and vote according to their preferences . the large number of samples implies certain statistical consequences for the polling results which can be used to identify election irregularities . using a suitable data collapse , we find that vote distributions of elections with alleged fraud show a kurtosis of hundred times more than normal elections on certain levels of data aggregation . as an example we show that reported irregularities in recent russian elections are indeed well explained by systematic ballot stuffing and develop a parametric model quantifying to which extent fraudulent mechanisms are present . we show that if specific statistical properties are present in an election , the results do not represent the will of the people . we formulate a parametric test detecting these statistical properties in election results . remarkably , this technique produces similar outcomes irrespective of the data resolution and thus allows for cross - country comparisons . free and fair elections are the cornerstone of every democratic society . a central characteristic of elections being free and fair is that each citizen s vote counts equal . however , already joseph stalin believed that `` it s not the people who vote that count ; it s the people who count the votes . '' how can it be distinguished whether an election outcome represents the will of the people or the will of the counters ? elections can be seen as large - scale social experiments . a country is segmented into a usually large number of electoral units . each unit represents a standardized experiment where each citizen articulates his / her political preference via a ballot . although elections are one of the central pillars of a fully functioning democratic process , relatively little is known about how election fraud impacts and corrupts the results of these standardized experiments . there is a plethora of ways of tampering with election outcomes , for instance the redrawing of district boundaries known as gerrymandering , or the barring of certain demographics from their right to vote . some practices of manipulating voting results leave traces which may be detected by statistical methods . recently , benford s law experienced a renaissance as a potential election fraud detection tool . in its original and naive formulation , benford s `` law '' is the observation that for many real world processes the logarithm of the first significant digit is uniformly distributed . deviations from this law may indicate that other , possibly fraudulent mechanisms are at work . for instance , suppose a significant number of reported vote counts in districts is completely made up and invented by someone preferring to pick numbers which are multiples of ten . the digit `` 0 '' would then occur much more often as the last digit in the vote counts when compared to uncorrupted numbers . voting results from russia , germany , argentina and nigeria have been tested for the presence of election fraud using variations of this idea of digit - based analysis . however , the validity of benford s law as a fraud detection method is subject to controversy . 
the problem is that one needs to firmly establish a baseline of what the _ expected _ distribution of digit occurrences for fair elections should be . only then it can be asserted if _ actual _ numbers are over- or underrepresented and thus suspicious . what is missing in this context is a theory that links specific fraud mechanisms to statistical anomalies . a different strategy for detecting signals of election fraud is to look at the distribution of vote and turnout numbers as in . this has been extensively done for the russian presidential and duma elections over the last 20 years . these works focus on the task of detecting two mechanisms , the stuffing of ballot boxes and the reporting of contrived numbers . it has been noted that these mechanisms are able to produce different features of vote and turnout distributions than those observed in fair elections . while for russian elections between 1996 and 2003 these features were `` only '' observed in a relatively small number of electoral units , they eventually spread and percolated through the entire russian federation from 2003 onwards . according to myagkov and ordeshook `` [ o]nly kremlin apologists and putin sycophants argue that russian elections meet the standards of good democratic practice '' . this point was further substantiated with election results from the 2011 duma and 2012 presidential elections . here it was also observed that ballot stuffing not only changes the shape of vote and turnout distributions , but also induces a high correlation between them . unusually high vote counts tend to _ co - occur _ with unusually high turnout numbers . several recent advances in the understanding of statistical regularities of voting results are due to the application of statistical physics concepts to quantitative social dynamics . in particular several approximate statistical laws of how vote and turnout are distributed have been identified , some of them are shown to be valid across several countries . it is tempting to think of deviations from these approximate statistical laws as potential indicators for election irregularities which are valid cross - nationally . however , the magnitude of these deviations may vary from country to country due to different numbers and sizes of electoral districts . any statistical technique quantifying election anomalies across countries should not depend on the size of the underlying sample nor its aggregation level , i.e. the size of the electoral units . as a consequence , a conclusive and robust signal for a fraudulent mechanism , e.g. ballot stuffing , must not disappear if the same dataset is studied on different aggregation levels . in this work we expand earlier work on statistical detection of election anomalies in two directions . first , we test for reported statistical features of voting results ( and deviations thereof ) in a cross - national setting , and discuss their dependence on the level of data aggregation . as the central point of this work we propose a parametric model to statistically quantify to which extent fraudulent processes , such as ballot stuffing , may have influenced the observed election results . remarkably , under the assumption of coherent geographic voting patterns , the parametric model results do not depend significantly on the aggregation level of the election data or the size of the data sample .
the study of large - scale ( complex ) networks , such as computer , biological and social networks , is a multidisciplinary field that combines ideas from mathematics , physics , biology , social sciences and other fields .a remarkable and widely discussed phenomena associated with such networks is the _ small world _ property .it is observed in many such networks , man - made or natural , that the typical distance between the nodes is surprisingly small .more formally , as a function of the number of nodes , , the average distance between a node pair typically scales at or below . in this work ,we study the load characteristics of small world networks .assuming one unit of demand between each node pair , we quantify as a function of , how the maximal nodal load scales , independently of how each unit of demand may be routed . in other words , we are interested in the smallest of such maximal nodal loads as a function of routing , which we refer to as _ congestion _ , that the network could experience .we show that in the planar small - world network congestion is almost quadratic in , which is as high as it can get , specifically .in contrast , for some non - planar small - world networks , congestion may be almost linear in , namely for arbitrarily small . since congestion in a network with nodes can not have scaling order less than or more than , we conclude that the small world property alone is not sufficient to predict the level of congestion _ a priori _ and additional characteristics may be needed to explain congestion features of complex networks .this has been argued in for the case of intrinsic hyperbolicity , which is a geometric feature above and beyond the small world property . additionally , we investigate what happens to congestion when we change the link metric that prescribes routing .that is , for a network with edge weight we change the metric by a factor thus assigning each edge a new weight .we explore the extent to which this change in the metric can change congestion in the network .we prove that if we allow the weights to get arbitrarily small or large , i.e. when for some edges , approach zero and for some others approach infinity , then considerable changes in congestion can occur . on the other hand ,if we require the weights to be bounded away from zero and infinity , i.e. when for all edges , then congestion can not change significantly .these observations quantify the degree to which remetrization in a small world network may be helpful in affecting congestion .as mentioned in the introduction , the small world property is ubiquitous in complex networks .formally we say that a graph has the small world property if its diameter is of the order where is the number of nodes in the graph .it has been shown that a surprising number of real - life , man - made or natural , networks have the small world property , see , for example , . 
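before turning to the formal statements , the load quantities just defined can be probed numerically : the sketch below measures the maximal nodal load when every node pair exchanges one unit of demand and routing is along shortest paths , which is the congestion of one particular routing and hence an upper bound on the minimum over all routings considered here . it uses networkx , and the example graph is arbitrary .

```python
import networkx as nx

def max_shortest_path_load(graph):
    """largest nodal load under geodesic routing with unit demand between all pairs."""
    # betweenness_centrality(endpoints=True, normalized=False) counts, for each node,
    # the shortest paths passing through it (including those it terminates); when
    # several shortest paths exist, the unit of demand is split equally among them.
    load = nx.betweenness_centrality(graph, normalized=False, endpoints=True)
    return max(load.values())

if __name__ == "__main__":
    g = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=1)   # a small-world example
    print("max nodal load:", max_shortest_path_load(g))
```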
to be more specific , assume is an infinite planar graph and let be an arbitrary fixed node in the graph that we shall designate as the root .let us assign a weight to each edge , thus for all edges , and let be the ball of center and radius .in other words a node belongs to if and only of .more generally , we can consider weighted graphs where each edge has a non - negative length .we will further assume that the sub - graphs have exponential growth , which is clearly equivalent to the small world property , as defined above .more precisely , there exist and such that for all .assume that for every and every pair of nodes in there is a unit of demand between each node pair .therefore , the total demand in is where . given a node in we denote by the total flow in routed through .the load , or congestion , , is the maximum of over all vertices in , which is typically a function of the routing .the next theorem shows that for a planar graph with exponential growth , there exist nodes with load for sufficiently large regardless of routing .[ fig1 ] .right : flows crossing the boundary separating two planar wedges each with nodes .note that ( red ) geodesic paths may cross the boundary more than once.,title="fig:",width=220 ] .right : flows crossing the boundary separating two planar wedges each with nodes .note that ( red ) geodesic paths may cross the boundary more than once.,title="fig:",width=172 ] let be an infinite planar graph with exponential growth .assume one unit of demand between every pair of nodes in .then for every there exists a node such that where .fix and let be the spanning tree of all geodesic ( shortest ) paths with the node as the origin , as shown in figure 1 , left . observe that since is small world , all node pairs have distance and thus each ray from in has length thus bounded .enumerate all the paths in in clockwise order , possible because of the planarity of .therefore , and by the small world property each ray .let , and in general for .it is clear that for all such that and because addition of each adds at most nodes to , and moreover by inequality ( [ eqq1 ] ) we know that let us consider the set .it is clear that all the paths between and have to intersect the path .the traffic between and is equal to .since which satisfies since this traffic has to pass through at some point then there exists at least one node in with load at least ( see figure 1 , right ) proving our claim .our first claim follows a corollary of the above theorem .let be an infinite planar graph with exponential growth .then for sufficiently large there exists a node such that where is a constant independent on .it turns out that the planarity property is essential for the existence of highly congested nodes proven above .we now show that in contrast , when is not planar , congestion can be made to approach .more explicitly , given there exists infinite graphs with exponential growth with uniformly bounded degree such that for every node and sufficiently large , . 
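a finite prototype of such a construction , which the lemma and the explicit argument below make precise , can be assembled directly : take a regular tree and additionally wire the nodes of each sphere around the root with a random regular graph , then measure the maximal nodal load under shortest - path routing . the depth , branching factor and sphere degree in the sketch are illustrative choices .

```python
import networkx as nx

def tree_with_random_sphere_graphs(depth=8, branching=2, sphere_degree=4, seed=0):
    """balanced tree plus a random regular graph on every sphere around the root."""
    g = nx.balanced_tree(r=branching, h=depth)
    dist = nx.single_source_shortest_path_length(g, 0)        # node 0 is the root
    for radius in range(1, depth + 1):
        sphere = [v for v, d in dist.items() if d == radius]
        if len(sphere) > sphere_degree:
            extra = nx.random_regular_graph(sphere_degree, len(sphere), seed=seed + radius)
            g.add_edges_from((sphere[a], sphere[b]) for a, b in extra.edges())
    return g

if __name__ == "__main__":
    g = tree_with_random_sphere_graphs()
    load = nx.betweenness_centrality(g, normalized=False, endpoints=True)
    n = g.number_of_nodes()
    print(n, max(load.values()) / n ** 2)      # compare the maximal load with n^2
```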
before providing sucha construction let us show the following lemma .let be an infinite graph and let be the ball of radius centered at as before .assume moreover that .then for every the following holds where .let and define .then it is clear that and moreover .therefore , where the inequality is coming from the fact that if then the geodesic path between a node in and a node in does not pass through .hence , we state the following result due to bollobas for completeness .[ boll ] given for sufficiently large a random graph with nodes has diameter at most where is a fixed constant depending on and independent on .now we are ready to show the construction of a small world graph with small congestion .let and consider a infinite -regular tree where the value of will be chosen later .denote by the root of and as before .let be the graph constructed by connecting all the nodes in the spheres by a -regular random graph for every .then and it is clear that using theorem [ boll ] we have that note that since we are not adding new nodes .it is not difficult to see that therefore , using the previous lemma we see that for every node therefore , and hence by taking sufficiently large depending on we see that for sufficiently large .in , it has been shown that -hyperbolicity implies the existence of a core , that is , a non - empty set of nodes whose load scale as under geodesic routing . in section [ sec2 ], we proved that planar graphs with exponential growth can not avoid congestion of order no matter how the routing is performed . on the other hand, we observed in section [ sec3 ] that exponential growth alone is not sufficient to guarantee the existence of such highly congested nodes .thus , unlike the small world property , -hyperbolicity is a sufficient condition for a network to have highly congested nodes .the reverse need not be true , however .it is not difficult to construct non - hyperbolic graphs in which load scales as .for instance , two square grids in two vertical planes separated by a single horizontal link joining their origins .it is even possible to construct small world graphs with load which are not hyperbolic .let be the 3-regular infinite tree and let .the graph is not gromov hyperbolic since it has as a sub - graph .yet , has traffic of order where is the root of .it is interesting to note that even tough the graph is not hyperbolic , it has as a sub - graph .examples of small world graphs with load of order appear to include hyperbolic sub - graphs .we do not know if this is always true but it seems likely since exponential growth implies existence of an exponentially growing tree sub - graph ( e.g. , its spanning tree ) .we next explore what happens to hyperbolicity when we apply remetrization . more specifically , assume a metric graph where each edge has an associated non - negative distance that satisfies the triangle inequality .we modify each edge distance by a factor so that the new length of the edge is .we also require that the coefficients are chosen in such a way that the new edge distances continue to satisfy the triangle inequality and thus constitute a metric . 
to determine if scaling of congestion persists after remetrization , let us start with a -hyperbolic graph and modify the edge metric according to the above scheme .does remetrization ensure another -hyperbolic graph ?we show below that this is not the case and thus remetrization can significantly affect the congestion scaling in the graph , unless the weights are bounded away from zero and infinity .we shall prove the result for regular hyperbolic grids embedded in and then appeal to the quasi - isometry of all -hyperbolic graphs with these reference graphs ( see ) to complete the proof . to simplify exposition ,we focus on dimension 2 only , since the argument carries through similarly for higher dimensions .let be a regular tessellation of the poincre disk with , . may be viewed as a ( hyperbolic ) grid where each node has ( the same ) degree and each face has ( the same ) sides , figure 2 depicts .note that in the case that the graph is a -regular tree and is thus -hyperbolic regardless of any metrization .let be the node at the center of the disk and let be the set of nodes in at distance from .note that the sub - graph induced by the set is a cycle with nodes .let us denote this graph also by .it is not difficult to see that there exists a sequence such that exponentially fast so that if we remetrize every edge in by the constant then the induced graph is not hyperbolic since it will be quasi - isometric to the euclidean grid .it was observed in and then proved in , that the nodes in have congestion of the order .therefore , the nodes in the new graph have a congestion of the order .we observe that the above construction used arbitrarily small weights .more precisely , given there are infinitely many weights in this construction such that .it is not hard to see that the same construction is possible with arbitrarily large weights instead of small weights .however , if we restrict these weights so that there exist positive constants and such that then the original and the remetrized graphs are indeed quasi - isometric .therefore , by a result of gromov , see , if one graph is hyperbolic so is the other and thus is unaffected by the said change of metric .lohsoonthorn , _ hyperbolic geometry of networks _ , ph.d .thesis , department of electrical engineering , university of southern california , 2003 .available at http://eudoxus.usc.edu/iw/mattfinalthesis main.pdf .
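the remetrization experiment can also be phrased computationally : rescale every edge weight by a multiplier and recompute the loads under weighted shortest - path routing . the sketch below contrasts multipliers confined to a band with unbounded ones ; the graph and the random multipliers are illustrative and do not reproduce the hyperbolic tessellation used in the argument above ( the triangle - inequality bookkeeping is also omitted ) .

```python
import random
import networkx as nx

def remetrize(graph, multipliers):
    """return a copy of the graph with edge weight w_e replaced by a_e * w_e."""
    h = graph.copy()
    for (u, v), a in zip(h.edges(), multipliers):
        h[u][v]["weight"] = a * h[u][v].get("weight", 1.0)
    return h

def max_weighted_load(graph):
    load = nx.betweenness_centrality(graph, normalized=False,
                                     endpoints=True, weight="weight")
    return max(load.values())

if __name__ == "__main__":
    random.seed(2)
    g = nx.random_regular_graph(3, 200, seed=2)
    m = g.number_of_edges()
    bounded = [random.uniform(0.5, 2.0) for _ in range(m)]       # multipliers in [c1, c2]
    unbounded = [10.0 ** random.uniform(-3, 3) for _ in range(m)]
    print("original :", max_weighted_load(g))
    print("bounded  :", max_weighted_load(remetrize(g, bounded)))
    print("unbounded:", max_weighted_load(remetrize(g, unbounded)))
```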
in this report we show that in a planar exponentially growing network consisting of nodes , congestion scales as independently of how flows may be routed . this is in contrast to the scaling of congestion in a flat polynomially growing network . we also show that without the planarity condition , congestion in a small world network could scale as low as , for arbitrarily small . these extreme results demonstrate that the small world property by itself can not provide guidance on the level of congestion in a network and other characteristics are needed for better resolution . finally , we investigate scaling of congestion under the geodesic flow , that is , when flows are routed on shortest paths based on a link metric . here we prove that if the link weights are scaled by arbitrarily small or large multipliers then considerable changes in congestion may occur . however , if we constrain the link - weight multipliers to be bounded away from both zero and infinity , then variations in congestion due to such remetrization are negligible .
since the discovery of synchronization in pendulum clocks by huygens , synchronous behavior has been widely observed not only in physical systems but also in biological ones such as pacemaker cells in the heart , chirps of crickets , and fetal - maternal heart rate synchronization .such synchronization phenomena have been studied theoretically in terms of nonlinear dynamics , particularly by exploiting oscillator models . for example , synchronization observed in fireflies can be modeled using nonlinear periodic oscillators and is described as _phase synchronization_. further , it has been indicated that the notion of phase synchronization can be extended to chaotic oscillators .this phenomenon is called _ chaotic phase synchronization _ ( cps ) .furthermore , synchronization phenomena in neural systems have also attracted considerable attention in recent years . at the macroscopic level of the brain activity, synchronous behavior has been observed in electroencephalograms , local field potentials , etc .these observations raise a possibility that such neural synchronization plays an important role in brain functions such as perception as well as even in dysfunctions such as parkinson s disease and epilepsy .in addition , at the level of a single neuron , it has been observed that specific spiking - bursting neurons in the cat visual cortex contribute to the synchronous activity evoked by visual stimulation ; further , in animal models of parkinson s disease , several types of bursting neurons are synchronized .moreover , two coupled neurons extracted from the central pattern generator of the stomatogastric ganglion in a lobster exhibit synchronization with irregular spiking - bursting behavior .hence , it is important to use mathematical models of neurons to examine the mechanism of neuronal synchronization with spiking - bursting behavior . as mathematical models that include such neural oscillations , the chay model and the hindmarsh - rose ( hr ) model been widely used .these models can generate both regular and chaotic bursting on the basis of _ slow - fast _ dynamics .the slow and fast dynamics correspond to slow oscillations surmounted by spikes and spikes within each burst , respectively .the former is related to a long time scale , and the latter , to a short one .phase synchronization in such neuronal models is different from that in ordinary chaotic systems such as the rssler system , owing to the fact that neuronal models typically exhibit multiple time scales .however , it is possible to quantitatively analyze the neuronal models by simplification , for example , by reducing the number of phase variables to 1 by a projection of an attractor ( a projection onto a delayed coordinate and/or a velocity space ) .recently , a method called _ localized sets _ technique has been proposed for detecting phase synchronization in neural networks , without explicitly defining the phase . in this paper , we focus on synchronization in periodically driven single bursting neuron models , which is simpler than that in a network of neurons . in previous studies , phase synchronization of such a neuron with a driving force has been considered both theoretically and experimentally . 
in these studies ,the period of the driving force was made close to that of the slow oscillation of a driven neuron .on the other hand , in this work , we adopt the chay model to investigate whether phase synchronization also occurs with the application of a force whose period is as short as that of the spikes .in particular , we focus on the effect of the slow mode ( slow oscillation ) on the synchronization of the fast mode ( spikes ) .it should be noted that this fast driven system may be significant from the viewpoint of neuroscience .in fact , fast oscillations with local field potentials have been observed in the hippocampus and are correlated with synchronous activity at the level of a single neuron . from intensive numerical simulations of our model , we find that the localized sets technique can be used to detect cps between the spikes and the periodic driving force , even in the case of multiple time scales .furthermore , we find two characteristic properties around the transition point to cps .first , the average time interval between successive phase slips exhibits a power - law scaling against the driving force strength .the scaling exponent undergoes an unsmooth change as the driving force strength is varied .second , an order parameter , which measures the degree of phase synchronization , shows a stepwise dependence on the driving force strength before the transition . that is , does not increase monotonically with but includes a plateau over a range of ( a step ) , where is almost constant .both of these characteristics are attributed to the effects of the slow mode on the fast mode and have not been observed in a system with a single time scale .this paper is organized as follows .section [ model ] explains the model and describes an analysis method for spiking - bursting oscillations .section [ result ] presents the results of this study .finally , section [ summary ] summarizes our results and discusses their neuroscientific significance with a view to future work .as an illustrative example of a bursting neuron model , we consider the model proposed by chay , which is a hodgkin - huxley - type conductance - based model expressed as follows : .\label{dcdt } \end{aligned}\ ] ] equation ( [ dvdt ] ) represents the dynamics of the membrane potential , where , , and are the reversal potentials for mixed na and ca ions , k ions , and the leakage current , respectively .the concentration of the intracellular ca ions divided by its dissociation constant from the receptor is denoted by .the maximal conductances divided by the membrane capacitance are denoted by , , , and , where subscripts ( i ) , ( k , v ) , ( k , c ) , and ( l ) refer to the voltage - sensitive mixed ion channel , voltage - sensitive k channel , ca-sensitive k channel , and leakage current , respectively .finally , and are the probabilities of activation and inactivation of the mixed channel , respectively . in eq .( [ dndt ] ) , the dynamical variable denotes the probability of opening the voltage - sensitive k-channel , where is the relaxation time ( in seconds ) , and is the steady - state value of . 
it should be noted that , on the basis of the formulation in , the variables , , and are described by where stands for , or with ,\\ \beta_m&=&4\exp[-(v+50)/18],\\ \alpha_h&=&0.07\exp(-0.05v-2.5),\\ \beta_h&=&1/[1+\exp(-0.1v-2)],\\ \alpha_{q}&=&0.01(20+v)/[1-\exp(-0.1v-2)],\\ \beta_{q}&=&0.125\exp[-(v+30)/80].\end{aligned}\ ] ] further , is defined as ^{-1}.\end{aligned}\ ] ] in eq .( [ dcdt ] ) , , , and are the efflux rate constant of the intracellular ca ions , a proportionality constant , and the reversal potential for ca ions , respectively . in this study , a sinusoidal driving force with amplitude and frequency is added in eq .( [ dvdt ] ) as follows : in the following sections , we fix the frequency at and vary the amplitude to investigate the response of the system .the values of the reversal potentials and the fixed parameters used in our simulation are listed in table [ parameter ] .the value of significantly influences the dynamics of the system , and a chaotically bursting behavior can be observed in the vicinity of .we use this value in our simulation .figure [ fig : fig_attractor.eps ] shows ( a ) the chaotic attractor , ( b ) the time series of , and ( c ) the average power spectrum of the time series for , where the time series includes two time scales one for the spikes within each burst and the other one for the bursts themselves . for discussions in the rest of this paper , we introduce the following terminology for the time series of the chay model : the fast mode describes the spiking oscillation in the dashed rectangles in fig . [fig : fig_attractor.eps](b ) , whereas the slow mode describes the oscillation in the lower envelope of between the dotted lines , where the dashed - dotted curve shows the slow oscillation .the variable dominates the slow dynamics with the time constant .in fact , the decrease in ( i.e. , hyperpolarization ) between the bursts shown in fig .[ fig : fig_attractor.eps](b ) corresponds to the decrease in shown in fig .[ fig : fig_attractor.eps](a ) . on the other hand , the increase in shown in fig .[ fig : fig_attractor.eps](a ) corresponds to the spiking of in fig .[ fig : fig_attractor.eps](b ) .hereafter , the period for hyperpolarization of is called the _ quiescent period _ , as indicated in fig .[ fig : fig_attractor.eps](b ) .it should be noted that the amplitude of takes small values in our simulation as compared with the change in voltage for a spike , such that the driving force is weak .a clear peak can be observed in the high - frequency part of the power spectrum ( fig .[ fig : fig_attractor.eps](c ) ) , corresponding to the fast spiking activity .additionally , a broadband peak for the slow oscillation is observed in the low - frequency part . in what follows ,let us investigate the system under force with a frequency close to that of the spiking mode ( i.e. , hz ) , which is the natural frequency of the fast dynamics of the system .the arrows indicate the direction of the trajectory .( b ) time series of for the chaotically bursting behavior described by eqs .( 1)(3 ) .the terms fast mode and slow mode describe the spiking oscillation and the oscillation in the lower envelope of between the dotted lines ( shown by the dashed - dotted curve ) , respectively .( c ) power spectrum for time series of .the spectrum is averaged over 100 time series with a length of , where s is the time step for the numerical integration . 
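a numerical sketch of the driven chay model is given below , to be read as a structural illustration only : the rate functions follow the hodgkin - huxley - type expressions partially quoted above , while the missing activation rate and all parameter values are filled in from the commonly cited chay - model literature rather than from table [ parameter ] , and the drive amplitude and frequency are placeholders .

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder parameter set (assumed, not the values of table [ parameter ]):
P = dict(V_I=100.0, V_K=-75.0, V_L=-40.0, V_C=100.0,     # reversal potentials (mV)
         g_I=1800.0, g_KV=1700.0, g_KC=11.0, g_L=7.0,    # conductances / capacitance (1/s)
         k_C=3.3 / 18.0, rho=0.27, lam_n=230.0)          # Ca kinetics and n relaxation

def rates(V):
    # hodgkin-huxley-type rate functions; alpha_m is taken from the standard chay model
    a_m = 0.1 * (25.0 + V) / (1.0 - np.exp(-0.1 * V - 2.5))
    b_m = 4.0 * np.exp(-(V + 50.0) / 18.0)
    a_h = 0.07 * np.exp(-0.05 * V - 2.5)
    b_h = 1.0 / (1.0 + np.exp(-0.1 * V - 2.0))
    a_n = 0.01 * (20.0 + V) / (1.0 - np.exp(-0.1 * V - 2.0))
    b_n = 0.125 * np.exp(-(V + 30.0) / 80.0)
    return (a_m / (a_m + b_m), a_h / (a_h + b_h),
            a_n / (a_n + b_n), 1.0 / (P["lam_n"] * (a_n + b_n)))

def chay_rhs(t, y, K=0.0, f=220.0):
    """driven chay model; K and f are placeholder drive amplitude and frequency (hz)."""
    V, n, C = y
    m_inf, h_inf, n_inf, tau_n = rates(V)
    dV = (P["g_I"] * m_inf ** 3 * h_inf * (P["V_I"] - V)
          + P["g_KV"] * n ** 4 * (P["V_K"] - V)
          + P["g_KC"] * C / (1.0 + C) * (P["V_K"] - V)
          + P["g_L"] * (P["V_L"] - V)
          + K * np.sin(2.0 * np.pi * f * t))
    dn = (n_inf - n) / tau_n
    dC = P["rho"] * (m_inf ** 3 * h_inf * (P["V_C"] - V) - P["k_C"] * C)
    return [dV, dn, dC]

# smoke test: integrate a few seconds of the unforced system (K = 0)
sol = solve_ivp(chay_rhs, (0.0, 5.0), [-40.0, 0.3, 0.5], args=(0.0, 220.0), max_step=1e-4)
```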
table [ parameter ] : parameters used in the numerical simulations .

next , in order to investigate phase synchronization between the spikes and the periodic driving force , we consider the phase of the driving force at when the spike occurs in the neuron ( see ) . figure [ fig : schematic_phasesets.eps ] illustrates the manner in which the phase variable of the driving force at , which is defined as the moment when exceeds a certain threshold , can be measured . once a sequence of is determined , we can assign points on a unit circle , where each is determined as the phase of the sinusoidal force at time . we assume that . we term these points the _ spiking time points _ ( stps ) . we can detect the cps between the driven system and the driving force as a localization of the stp distribution ; that is , there is an open interval on the unit circle where no spiking time point is detected . in , such a localization of the stps ( obtained from a sufficiently long time series ) is mathematically described in the following manner . let the distribution of the stps be included in a set on the unit circle ; is localized if there exist open sets on the circle such that . it should be noted that only one parameter is required to detect the cps by this algorithm . in this section , we mainly show the results for the periodic driving force with the frequency at hz . this value is close to the natural frequency of the fast mode . in this case , the quiescent periods of disappear when the forcing amplitude increases ; that is , exhibits spiking without quiescent periods . then , we find the cps between the single spikes and the driving force . moreover , we observe two characteristic phenomena around the transition point to the cps . one phenomenon shows a power - law scaling against , which is exhibited in the average time interval between successive phase slips . the scaling exponent undergoes an unsmooth change as is varied . the other phenomenon shows a stepwise behavior observed in kuramoto s order parameter for the driving force strength before the transition ( shown in fig . [ fig : worefracop.eps ] ) . in addition to the results of the case , we also show brief results for the periodic driving force with frequency hz . in this case , the quiescent periods of do not disappear ( shown in fig . [ fig : plk302o92.eps ] ) , even if the value of increases to its maximum value of for the case . as will be explained later in detail , a stepwise behavior can be observed as well by using another observation variable . all the characteristic phenomena are inherent to systems with multiple time scales .
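the stp construction and the localization test can be implemented directly : record the drive phase ( modulo 2 pi ) at every spike time and look for an empty arc on the unit circle . the spike threshold and the minimal gap width used below are illustrative choices , not the criteria used in the paper .

```python
import numpy as np

def spike_times(t, v, threshold=0.0):
    """times at which v crosses the threshold from below (linear interpolation)."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    idx = np.where((v[:-1] < threshold) & (v[1:] >= threshold))[0]
    frac = (threshold - v[idx]) / (v[idx + 1] - v[idx])
    return t[idx] + frac * (t[idx + 1] - t[idx])

def stp_phases(spike_t, f_drive):
    """phase of the sinusoidal drive at each spike time, on [0, 2*pi)."""
    return np.mod(2.0 * np.pi * f_drive * np.asarray(spike_t), 2.0 * np.pi)

def largest_empty_arc(phases):
    """width of the widest spike-free arc on the unit circle."""
    p = np.sort(phases)
    gaps = np.diff(p, append=p[0] + 2.0 * np.pi)
    return gaps.max()

def looks_localized(phases, min_gap=np.pi / 4):
    """crude localization test: localized if some arc wider than min_gap stays empty."""
    return largest_empty_arc(phases) > min_gap
```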
figure [ fig : noncpsk001o9.eps ] : time series ( red solid line ) for driving forces ( blue dotted line ) with different amplitudes : ( a ) , ( b ) , ( c ) . the corresponding stps appear on the unit circle in ( d ) , ( e ) , and ( f ) , respectively . the length of the time series for plotting the stps is 1000 s.

figures [ fig : noncpsk001o9.eps](a)(c ) show the time series of , together with the sinusoidal driving force . for a weak driving force with , as shown in fig . [ fig : noncpsk001o9.eps](a ) , there is no synchronization between the spikes and the force . on the other hand , when , as shown in fig . [ fig : noncpsk001o9.eps](b ) , the system exhibits phase synchronization , i.e. , a one - to - one correspondence between a single spike and one period of the driving force . additionally , there are fluctuations in the inter - spike intervals ( isis ) . therefore , this state is considered to be one that exhibits cps . in the case of a strong driving force with , as shown in fig . [ fig : noncpsk001o9.eps](c ) , classic phase locking ( cpl ) is observed with two periodically alternating isis . more precisely , each pair of successive spikes is observed at the same points in the period of the force . while the state shown in fig . [ fig : noncpsk001o9.eps](a ) does not exhibit phase synchronization in terms of stps , the states shown in figs . [ fig : noncpsk001o9.eps](b ) and 3(c ) exhibit phase synchronization . figures [ fig : noncpsk001o9.eps](d)3(f ) show the stps on the unit circle for a certain time interval . the length of the time series for plotting the stps is 1000 s. this length of the time series is sufficiently long to determine the localization of the distribution of the stps , as mentioned in the definition of the localization of the stps . in fig . [ fig : noncpsk001o9.eps](e ) , when , we can detect cps between the spikes and the force , because the stps are localized yet distributed on the unit circle . as shown in fig . [ fig : noncpsk001o9.eps](f ) , when , the cpl can be detected on the basis of the fact that all the stps are concentrated at two points on the unit circle . this means that each pair of successive spikes completely synchronizes with two periods of the force . however , no synchronization can be detected in fig . [ fig : noncpsk001o9.eps](d ) , when , because the stps are not localized . in other words , the entire circle is filled with stps .
for ( blue dashed line ), ( green dotted line ), ( red solid line ). ( b ) time series of ( red solid line ) with the periodic force ( blue dotted line ) for. phase slips are observed in the region indicated by the dashed arrows.

to confirm the cps, as shown in fig. [ fig : noncpsk001o9.eps ](b), from another perspective, let us define the phase difference between the spiking oscillation and the driving force and then observe its time evolution for different values of around the transition point. suppose that at each instance when the membrane potential exceeds the threshold value, the phase variable of the spiking oscillation,, increases by. the instantaneous phase variable of the external force,, is determined at the spiking time without taking it to be modulo, unlike in the case of the stps considered in section [ model ]. thus, the phase difference is defined as. it should be noted that phase synchronization between spikes and the external force can be defined as. figure [ fig : phaseslip.eps ](a) shows the time evolution of for,, and. for, the time evolution of shows an oscillation with a decreasing tendency. for, temporarily fluctuates within a bounded region ( plateau ) but sometimes exhibits a sudden phase slip. finally, for, always fluctuates within a bounded region ; that is, phase slip does not occur. it should be noted that there exists a transition point to cps near. therefore, we can confirm that the state shown in fig. [ fig : noncpsk001o9.eps ](b) represents cps in the sense of the conventional definition, as well, given that is beyond the transition point. it should also be noted that phase slips occur when the quiescent period of takes a relatively long time. these slips are indicated by the dashed arrows in fig. [ fig : phaseslip.eps ](b).

with respect to. cps and cpl represent chaotic phase synchronization and classic phase locking, respectively.

let us clarify the route to phase synchronization by the dynamics of for the driving force at the spike, which is an angle of the stps on the unit circle. the bifurcation diagram for with respect to is shown in fig. [ fig : worefractoriness.eps ]. for values of that are relatively small, can take any value in the range between and, which means that cps does not occur ( a non - cps state ). after the first transition at, the value of is confined within a localized region. first, we examine the system behavior around. figure [ fig : criseso9.eps ] shows the return plots in the space. the points shown in fig. [ fig : criseso9.eps ](b) ( ) are distributed within a limited region, whereas the points shown in fig. [ fig : criseso9.eps ](a) ( ) are distributed throughout the entire space. therefore, these plots imply that the system undergoes a boundary crisis. here, the crowding of points shown in fig. [ fig : criseso9.eps ](b) represents cps ( a cps state ) as in the case of the localization of the stps on the unit circle, implying that phase slip does not occur. moreover, the second transition at seems to be an interior crisis, which is indicated by the change between fig. [ fig : criseso9.eps ](c) ( ) after the crisis and fig. [ fig : criseso9.eps ](d) ( ) before the crisis.
more precisely , the two disconnected attracting sets shown in fig .[ fig : criseso9.eps](d ) are included in the single attractor shown in fig .[ fig : criseso9.eps](c ) .after the second transition , a typical sequence of inverse period - doubling bifurcations occurs . as a result , cpl ( a cpl state )is observed in the regime where the driven system fires periodically . in the space for ( a ) , ( b ) , ( c ) , and ( d ) .,height=245 ] around the phase transition point .( b ) log - log plot of the average time intervals with respect to the difference between the parameter and its critical value .the slopes represent the scaling exponents .the results are averaged over 100 and 1000 different realizations for ( a ) and ( b ) , respectively.,height=188 ] around the phase transition point .( b ) log - log plot of the average time intervals with respect to the difference between the parameter and its critical value .the slopes represent the scaling exponents .the results are averaged over 100 and 1000 different realizations for ( a ) and ( b ) , respectively.,height=188 ] we characterize the average time interval between two successive phase slips ( i.e. , the plateau length ) with respect to around the transition point , as shown in fig .[ fig : sliploglog.eps](a ) .figure [ fig : sliploglog.eps](b ) shows the log - log plot of in dependence on , where is the transition point to cps . here, is approximately determined by the point at which the distribution of the stps begins to become localized .we numerically find a power - law scaling , where the scaling constant suddenly changes from to .although the range of the scaling region is relatively short , this scaling behavior is clearly different from that observed for a system with a single time scale .the scaling behavior with a single time scale is related to type - i intermittency and is described by .in general , for a periodically driven chaotic system with a single time scale and a single rotation center , a simplified mapping model can explain the transition to cps between the system and the driving force .in the map model , the boundary between the synchronization state and the non - synchronization state is explained by a saddle - node bifurcation of unstable and stable periodic orbits of the map .however , in the present system , phase locking can not simply be related to a saddle - node bifurcation , because multiple time scales exist .that is , the spiking period is characterized by fast oscillations , related to the variables and , and the quiescent period is characterized by slow oscillations , related to the variable .hence , the dynamics of the system on the threshold , which corresponds to the map model , is affected by both the fast and the slow oscillations .therefore , the mechanism of a sudden change in the scaling law differs from the case of a single time scale but is still an open problem . as a function of .the dashed line indicates or the transition point to cps .( b ) enlargement of ( a ) for .it should be noted that denotes the step region before the transition to cps .the results are averaged over 1000 different realizations.,height=170 ] as a function of .the dashed line indicates or the transition point to cps .( b ) enlargement of ( a ) for .it should be noted that denotes the step region before the transition to cps .the results are averaged over 1000 different realizations.,height=170 ] the distribution of the stps is now used to detect phase synchronization . 
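before quantifying the degree of synchronization further, the scaling analysis just described can be reproduced along the following lines. this is a sketch under the assumption that the unwrapped phase difference has already been recorded on a time grid; the slip detector and the fitting routine are deliberately simple, and the critical amplitude is treated as a given number rather than determined here.

```python
import numpy as np

def slip_times(t, dphi):
    """times at which the unwrapped phase difference completes another 2*pi
    slip, detected as a change of floor(dphi / 2*pi)."""
    winding = np.floor(np.asarray(dphi) / (2.0 * np.pi))
    idx = np.nonzero(np.diff(winding))[0]
    return np.asarray(t)[idx + 1]

def mean_slip_interval(t, dphi):
    """average time between successive phase slips (infinite if fewer than two)."""
    s = slip_times(t, dphi)
    return np.mean(np.diff(s)) if s.size > 1 else np.inf

def scaling_exponent(K_values, tau_values, K_c):
    """slope of log <tau> versus log (K_c - K); its magnitude is the exponent
    of the power law discussed in the text."""
    x = np.log(K_c - np.asarray(K_values))
    y = np.log(np.asarray(tau_values))
    slope, _ = np.polyfit(x, y, 1)
    return slope
```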
in order to quantify the degree of phase synchronization , we employ kuramoto s order parameter defined as , where is the number of stps .the parameter satisfies .this corresponds to the amplitude of an average of stps .the more localized the distribution of the stps is , the greater is the value of .figure [ fig : worefracop.eps](a ) shows the order parameter as a function of , with averaged over 1000 different realizations for a time interval of 1000 s. the transition point to cps is denoted by the dashed line in fig .[ fig : worefracop.eps](a ) .we find that a stepwise behavior precedes the transition to cps , with the step region , as shown in fig .[ fig : worefracop.eps](b ) .this stepwise behavior indicates that there exists a region between the non - cps state and the cps state , where the degree of synchronization is not sensitive to .in addition , decreases just before the transition point . to the best of our knowledge ,such a stepwise behavior in the transition has not been observed for cps in coupled systems with a single time scale . in what follows, we will explain how the stepwise transition and the decrease in are related to the existence of both the slow and the fast dynamics in the present system .in particular , we will investigate the probability density distribution of at with , as indicated in fig .[ fig : worefracop.eps](b ) . figure [ fig : p_theta_o9_2.eps ] shows the shape of the probability density distributions of the stps on the unit circle , i.e. , the distributions of for and for .it should be noted that cps is not detected in either case ; that is , the points are well distributed around the circle .however , we can find a peak in the probability density distribution for , whereas the distribution for is almost uniform .the appearance of the peak , as shown in fig .[ fig : p_theta_o9_2.eps ] , and a shift in the peak in the step region , as explained in appendix [ mech ] , are two consecutive stages that constitute the entire stepwise phenomenon . in the first stage , with an increase in from zero to a value near , a peak in of the stps emerges near .then , in the second stage , the position of the peak shifts because of the effect of slow dynamics , i.e. , an increase in the number of short inter - burst intervals . during the shift of the peak , does not change very significantly because the value of varies in .thus , the step region can be observed .these two stages are investigated in detail in appendix [ mech ] . for ( red solid line ) and ( green dashed line).,height=170 ] in the above results , we have primarily investigated the phase synchronization between the spikes and each period of the driving force , wherein quiescent periods disappear for the values of after phase synchronization . on the other hand ,when the frequency of the driving force is hz , the quiescent periods do not disappear even if the amplitude increases to the same level as in the case hz .it should be noted that the cpl can be observed for a sufficiently large , e.g. , . for values of , phase synchronization can not be clearly observed between the spikes and the driving force .however , if we define at the time when approaches the minimum voltage in each specific quiescent period , where decreases under the threshold , we can detect cps in the sense of localization of the distribution of on the unit circle .it is important to note that the change in the order parameter ( derived from ) with respect to does not depend on the value of , sensitively . 
in other words ,the value of that is less than approximately causes a stepwise transition in the order parameter for hz using , as shown below .figure [ fig : wrefractoriness.eps](a ) shows the bifurcation diagram of depending on .we also observe a stepwise behavior before a transition to the cps in the variation of the order parameter computed from , as shown in fig .[ fig : wrefractoriness.eps](b ) .moreover , in the region of cps , fine tuning of yields cpl between a burst and periods of the force , where , etc .figure [ fig : plk302o92.eps ] shows cpl for . with respect to .( b ) the corresponding values of the order parameter .the inset shows a magnified image for .the results are averaged over 1000 different realizations.,height=170 ] with respect to .( b ) the corresponding values of the order parameter .the inset shows a magnified image for .the results are averaged over 1000 different realizations.,height=170 ]we have investigated cps in a spiking - bursting neuron model under periodic forcing with a small amplitude but with a frequency as high as that of the spikes .first , we observed cps between the spikes and the periodic force .this cps has been detected on the basis of the fact that a set of points , which are conditioned by the phases of the periodic force at each spiking time , was concentrated on a sector of the unit circle .in addition to cps , we observed two characteristic phenomena around the transition point to cps .one phenomenon involves a change in the power - law scaling for the average time intervals between phase slips , as shown in fig .[ fig : sliploglog.eps](b ) .this scaling behavior is different from that exhibited by the conventional system , i.e. , a chaotic system whose attractor has a single rotation center with only one characteristic time scale . in such a conventional system ,the scaling exponent for the transition to cps takes a unique value of .this might be because the phase synchronization in question can not be simply associated with a saddle - node bifurcation , owing to the interaction between the slow and the fast dynamics .the other phenomenon shows a stepwise behavior before the transition to cps , as shown in fig .[ fig : worefracop.eps](b ) .this phenomenon has been found by the observation that the degree of phase synchronization is not sensitive to the amplitude of the force just before the transition point .moreover , we found that a decrease in the degree of synchronization appears ( at in fig .[ fig : worefracop.eps](b ) ) even if the amplitude of the force is increased .the stepwise behavior and this decrease could be induced by the effect of slow dynamics ( see appendix [ mech ] ) . 
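before turning to the discussion, we note that the order parameter shown in these figures can be obtained from the stp phases in essentially one line; the averaging over many realizations that is done for the figures is left out of this sketch.

```python
import numpy as np

def kuramoto_order_parameter(theta):
    """r = |(1/n) sum_k exp(i * theta_k)| for the stp phases theta_k;
    r tends to 1 for a strongly localized stp distribution and to 0 for a
    uniform one."""
    return np.abs(np.mean(np.exp(1j * np.asarray(theta))))
```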
from the viewpoint of neuroscience, our system might be regarded as a simple model whose fast driving force corresponds to sharp wave - ripples .this phenomenon involves a very fast oscillation of local field potentials observed in the hippocampus in the brain .let us discuss , below , a possible interpretation of our results in terms of synchronization phenomena in real neuronal systems .first , the observed phenomena in our model , particularly cps with a fast driving force , can be interpreted as a consequence of the interaction between the ripples and the activity of a single neuron .in fact , it has been experimentally shown that such ripples synchronize in phase with the spikes of a single neuron .furthermore , it has been suggested that firing sequences accompanying ripples in the hippocampal network form a representation of stored information .the replay of the firing sequences during sleep mediates a consolidation of memory for the stored information in the hippocampus and the neocortex .in addition , some experimental results have indicated that such a memory replay is conducted on a shorter time scale than the actual experience , where the spatiotemporal structure of the firing sequences plays a key role in the stored information .thus , if our spiking - bursting system with a fast driving force can be regarded as a model for such biological neurons , our result would imply that the precision of the temporal structure of the spiking patterns might be enhanced by cps in real neuronal systems .it should be noticed that cps is not affected even if there exist small fluctuations in isis and that cps extends the detection of the temporal structures of the firing patterns in the short time scale of a spiking - bursting behavior .additionally , from another point of view , the slow oscillations along the lower envelope of the membrane potential can be considered as transitions between the up and down states in a cortical neuron .specifically , we focus on the up and down states during the slow - wave sleep . here , the neurons in the up state fire synchronously with higher frequency , whereas the activity of the neurons in the down state is relatively quiescent . from this perspective , the disappearance of the quiescent periods ( down states ) in the forced spiking in our simulation may be interpreted as a persistently depolarized up state observed for cortical neurons in the awake states . in fact , the prolonged down states during sleep are likely to occur owing to a decrease in the excitatory input . 
therefore ,if this input can be regarded as our sinusoidal driving force , cps may efficiently help a weak input to depolarize a down state into a persistent up one ._ reported that a neural network model can qualitatively reproduce the experimental results in using a time - discrete map model , which is simpler than our model .therefore , we may infer that our results on cps at high frequency in the simple spiking - busting neuron model provide some suggestions for neuroscience to understand the mechanisms of the abovementioned real neuronal activity , particularly in terms of nonlinear dynamics .the following topics might be considered from the viewpoint of extending this study .first , for the future study it would be important to confirm whether our findings can be observed in other slow - fast models such as the hindmarsh - rose model and other biophysical models .a comparison with the other models will provide further insight into slow - fast dynamics in the neuronal spiking - bursting activity .moreover , it should be important to clarify the general onset mechanism of the observed phenomena using map - based models , as was clarified in .it should also be important to investigate the response of coupled bursting systems to sinusoidal forcing in terms of the interaction between the slow and the fast dynamics .the authors are grateful to prof .m. tatsuno , prof .h. hata , prof .m. baptista , dr .k. morita , dr .s. kang , and dr .g. tanaka for their valuable suggestions .this work is partly supported by aihara complexity modelling project , erato , jst , and the ministry of education , science , sports and culture , grant - in - aid for scientific research no .21800089 and no .20246026 , and the aihara project , the first program from jsps , initiated by cstp , and grants of the german research foundation ( dfg ) in the research group for 868 computational modeling of behavioral , cognitive , and neural dynamics .in what follows , we explain how increases toward the step region in fig . [fig : worefracop.eps](b ) and how it does not increase in the step region .these phenomena can be explained on the basis of the changes in peaks in the probability distribution as follows .first , we investigate the quiescent periods in terms of the extent in the decrease in , i.e. , the extent of hyperpolarization , with an increase in . as shown in fig .[ fig : times_relax_o9.eps](a ) , we observe two types of hyperpolarizations in the quiescent periods , namely , shallow and deep hyperpolarizations , when .shallow hyperpolarizations are observed between and , whereas deep hyperpolarizations are observed below .therefore , the two types of hyperpolarizations are distinguished by two appropriate thresholds for , i.e. , and , as shown in fig .[ fig : times_relax_o9.eps](a ) .the threshold detects both the shallow and the deep hyperpolarizations , whereas the threshold detects only deep hyperpolarization .figure [ fig : times_relax_o9.eps](b ) shows the ratio of shallow and deep hyperpolarizations over all the hyperpolarizations detected by as a function of , when counted in the interval of s and summed over different realizations . with two thresholds , ( blue dashed line ) and ( green dashed - dotted line ) , for .( b ) the ratio of shallow ( green dashed - dotted line ) and deep ( blue dashed line ) hyperpolarizations with respect to . the results for ( b ) are averaged over 100 different realizations . 
also shown is ( red solid line ).

as shown in fig. [ fig : times_relax_o9.eps ](b), the number of deep hyperpolarizations ( blue dashed line ) decreases with an increase in. this implies that the number of longer isis corresponding to deep hyperpolarizations decreases, and this decrease is related to the emergence of a peak in, as explained below. let be the length of the isi between the spike and the spike. then, figs. [ fig : ts_thetaisik005o9.eps ](a)((c)) and (b)((d)) show the time series of and for ( ), respectively. in fig. [ fig : ts_thetaisik005o9.eps ](c), we observe gradually shifting envelopes for the oscillations of, as indicated by the dotted rectangles, whereas no such specific tendency is observed in fig. [ fig : ts_thetaisik005o9.eps ](a). the gradual shifts in the envelope occur during the time intervals other than those during which deep hyperpolarization is observed, which are indicated by the double - headed arrows in fig. [ fig : ts_thetaisik005o9.eps ](d). it should be noted that deep hyperpolarization corresponds to the oscillations of exceeding and that fig. [ fig : ts_thetaisik005o9.eps ](b) shows a greater number of deep hyperpolarizations than those shown in (d). these observations give rise to a change in the shape of in fig. [ fig : p_theta_o9_2.eps ]. that is, is uniformly distributed for, owing to the fluctuation of with a few gradually shifting envelopes, as shown in fig. [ fig : ts_thetaisik005o9.eps ](a). on the other hand, has a peak for, attributed to the gradually shifting envelopes, as shown in fig. [ fig : ts_thetaisik005o9.eps ](c). the region for the envelopes is around and corresponds to the peak of in fig. [ fig : p_theta_o9_2.eps ]. this appearance of the peak in is attributed to the decrease in the number of deep hyperpolarizations, as seen in figs. [ fig : ts_thetaisik005o9.eps ](b) and (d).

(a)((c)) and (b)((d)) for ( ). the regions indicated by the double - headed arrows in (d) correspond to the bursting periods other than the period of deep hyperpolarizations.

we explain how as a function of remains flat in. figure [ fig : p_isi_o9_2.eps ](a) shows the distribution of isis for, and, denoted by, and, respectively. in this figure, we can observe peaks located at. the peaks indicated by ( i ) in fig. [ fig : p_isi_o9_2.eps ](a) correspond to the inter - burst intervals during such a transient phase synchronization. in contrast, the two other peaks at small indicated by ( ii ) in fig. [ fig : p_isi_o9_2.eps ](a) correspond to the two isis for the three consecutive spikes in each burst. we also calculate the corresponding distributions of, as shown in fig. [ fig : p_isi_o9_2.eps ](b).
the peaks in the distribution of are related to the peaks in. the second and the third spikes in each burst during the transient phase synchronization correspond to the peaks in around and ( ( iii ) and ( iv ) in fig. [ fig : p_isi_o9_2.eps ](b) ), respectively. it should be noted that for, the distribution of is concentrated, with a large peak ( unimodal ) located near. then, for, that peak is weakened and other peaks ( multimodal ) appear near and. the shift in the peak in the distribution of from unimodal to multimodal does not significantly influence the amplitude of the average of the stps whose distribution on the unit circle reflects the shape of. therefore, the value of the order parameter does not increase in. however, as increases further, the peak located at becomes higher, while increases steeply, and the transition to cps occurs. finally, looking at figs. [ fig : p_isi_o9_2.eps ](a) and [ fig : p_isi_o9_2.eps ](b) in more detail, we find that the peaks in both and for are sharper than those for and. the sharpness of the peaks corresponds to the duration of the transient phase synchronization. in fact, the contribution to from the newly emerged peaks in around and is the greatest when. on the other hand, the contribution of the first concentrated peak around decreases at that value of. as a consequence, even though increases, which would normally cause the degree of synchronization to increase, the value of decreases near. a similar phenomenon has been reported as anomalous phase synchronization, whereby coupling among interacting oscillator systems increases the natural frequency disorder before synchronization. we take different realizations in order to smooth out the averaged curves in our numerical simulations, where the different realizations with respect to different initial conditions are statistically equivalent in their long - time averages owing to ergodicity. for different realizations, we choose random initial values of from the distribution, i.e., the normal distribution with average and variance, and fixed initial values of, and exclude transient processes for the averaging. by intensive simulations for in the region where the parameter is closer to the transition point, we are able to numerically observe another power - law scaling that is similar to the case of the transition to chaotic phase synchronization for a system that has a single rotation center, with only one characteristic time scale. however, further investigations are needed to confirm whether this scaling corresponds to the conventional case.
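a sketch of the bookkeeping used in this appendix is given below. the two voltage thresholds, the extraction of the quiescent-period minima and the histogram binning are all placeholders chosen for illustration, not values from the original study.

```python
import numpy as np

def classify_hyperpolarizations(quiescent_minima, theta_shallow, theta_deep):
    """count shallow (between the two thresholds) and deep (below the deeper
    threshold) hyperpolarizations among the quiescent-period minima of the
    membrane potential; theta_deep is assumed to lie below theta_shallow."""
    v = np.asarray(quiescent_minima)
    deep = v < theta_deep
    shallow = (v < theta_shallow) & ~deep
    return int(shallow.sum()), int(deep.sum())

def isi_density(spike_times, bins=200):
    """normalized histogram of the inter-spike intervals, i.e. a simple
    estimate of the isi distribution discussed above."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    return np.histogram(isi, bins=bins, density=True)
```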
we investigate the entrainment of a neuron model exhibiting a chaotic spiking - bursting behavior in response to a weak periodic force . this model exhibits two types of oscillations with different characteristic time scales , namely , long and short time scales . several types of phase synchronization are observed , such as phase locking between a single spike and one period of the force and phase locking between the period of slow oscillation underlying bursts and periods of the force . moreover , spiking - bursting oscillations with chaotic firing patterns can be synchronized with the periodic force . such a type of phase synchronization is detected from the position of a set of points on a unit circle , which is determined by the phase of the periodic force at each spiking time . we show that this detection method is effective for a system with multiple time scales . owing to the existence of both the short and the long time scales , two characteristic phenomena are found around the transition point to chaotic phase synchronization . one phenomenon shows that the average time interval between successive phase slips exhibits a power - law scaling against the driving force strength and that the scaling exponent has an unsmooth dependence on the changes in the driving force strength . the other phenomenon shows that kuramoto s order parameter before the transition exhibits stepwise behavior as a function of the driving force strength , contrary to the smooth transition in a model with a single time scale .
the temperature fluctuations in the cosmic microwave background ( cmb ) are gaussian to a high degree of accuracy .non - gaussianity , if any , enters at a highly subdominant level. it could be either primordially generated along with gaussian fluctuations by exotic inflationary models , and/or it could arise from secondary anisotropies , such as gravitational lensing , sunyaev - zeldovich ( sz ) , or sachs - wolfe ( sw ) effects . quantifying the degree and nature of non - gaussianity in the cmbconstrains specific inflationary models , as well as enhances our understanding of the secondary processes the cmb underwent beyond the surface of last scattering .interpretation of any such measurement is complicated by the fact that systematics and foreground contaminations might also produce non - gaussian signatures .given the nearly gaussian nature of the cmb , -point correlation functions , and their harmonic counterparts , polyspectra , are the most natural tools for the perturbative understanding of non - gaussianity .if it were generated by inflationary models admitting a term , the leading order effect would be a -point function .on the other hand some secondary anisotropies , such as lensing , are known to produce 4-point non - gaussianity at leading order .the skewness ( or integrated bispectrum ) was measured by and -point correlation function by .many alternative statistics have been used to investigate non - gaussianity in cmb .a partial list includes wavelet coefficients , minkowski functionals , phase correlations between spherical harmonic coefficients , multipole alignment statistics , statistics of hot and cold spots , higher criticism statistic of pixel values directly .most of these measurements are consistent with gaussianity , although some claim detections of non - gaussianity up to 3- level .these alternative statistics , albeit often easier to measure , typically depend on -point functions in a complex way , thus they can not pin - point as precisely the source of non - gaussianity . 
among the three - point statistics, there is a perceived complementarity between harmonic and real space methods. the bispectrum can be relatively easily calculated for a full sky map, although the present methods have a somewhat slow scaling. methods put forward so far use the `` pseudo - bispectrum '', ignoring the convolution with the complicated geometry induced by the galactic cut and cut - out holes. in contrast with harmonic space, the corresponding pixel space edge effect corrections are trivial, since the window function is diagonal. unfortunately, simple methods to measure three - point clustering exhibit a prohibitive scaling if the full configuration space is scanned. to remedy the situation, most previous measurements of the -point function only deal with an ad - hoc sub - set of triangular configurations. both of these papers covered the full configuration space on small scales ; the former paper also appears to have estimated most configurations on large scales, missing intermediate configurations with mixed scales. this work presents a novel method, which, at a given resolution, scans the full available configuration space for -point level statistics using realistic computational resources. we find that the resulting configuration space itself is overwhelming to such a degree that interpretation of the results also requires novel methods. we introduce the false discovery rate ( fdr ) technique as a tool to interpret three - point correlation function measurements. the next section introduces our algorithm to measure the -point correlation function, section 3 illustrates it with an application to the wmap first year data release, and section 4 introduces the fdr method and applies it to our results. we summarize and discuss our results in section 5. the three point correlation function ( e.g., * ? ? ?* ) is defined as a joint moment of three density fields at three spatial positions. for cmb studies denotes temperature fluctuations at position on the sky, and stands for ensemble average. if the underlying distribution is spatially isotropic, will only depend on the shape and size of a ( spherical ) triangle arising from the three positions. a number of characterizations of this triangle are possible and convenient. the most widely used are the sizes of its sides ( measured in radians ), or two sizes and the angle between them. this latter angle is measured on the spherical surface of the sky. one can use the ergodic principle of replacing ensemble averages with spatial averages to construct nearly optimal, edge corrected estimators with heuristic weights where we symbolically denoted a particular triangular configuration with ( any parametrization would suffice ), and if pixels, and otherwise. we also defined a multiplicative weight for each pixel : this is if a pixel is masked out, and it could take various convenient values depending on our noise weighting scheme if the pixel is inside the survey ; e.g., in the case of flat weights it is simply.
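to make the cost of a direct evaluation explicit, a brute-force version of this estimator can be written as below. pixel positions, temperatures and weights are assumed to be given as arrays, and bin_of_triangle is a schematic stand-in for the configuration binning (e.g. two sides and the angle between them); nothing in this sketch is taken from the actual pipeline.

```python
import numpy as np

def naive_three_point(vecs, temp, weight, bin_of_triangle, n_bins):
    """direct o(n^3) evaluation: for every pixel triplet, accumulate
    w_i w_j w_k t_i t_j t_k into the bin of its triangle and normalize by the
    summed weight of that bin (masked pixels simply carry weight zero)."""
    num = np.zeros(n_bins)
    den = np.zeros(n_bins)
    n = len(temp)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                b = bin_of_triangle(vecs[i], vecs[j], vecs[k])
                w = weight[i] * weight[j] * weight[k]
                num[b] += w * temp[i] * temp[j] * temp[k]
                den[b] += w
    return np.where(den > 0, num / den, 0.0)
```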
the naive realization of equation [ eq : estimator ] has a prohibitive scaling if one needs to scan through triplets of pixels and assign them to a particular bin. the summation can be restricted and thus made faster if one restricts the number of configurations and the resolution ( e.g., * ? ? ?* ; * ? ? ?* ; * ? ? ?* ), or it can be sped up by using tree data structures. neither of these methods is able to scan through all possible configurations in megapixel maps with a reasonable amount of computing resources. here we propose a new method which uses both hierarchical pixelization and fourier methods motivated by to scan through all the triangles simultaneously. note that comes closest to our aims, but their simple two - step approach is not systematic enough to cover all possible triangles at a given resolution, and it is not fast enough for massive monte carlo simulations. in the following we will choose a parametrization of the triangle using two of its sides, and the angle between them. we define the configuration space as a set of ( logarithmic ) bins for the sides, and linear bins for the angle in their full possible range, i.e., ( remember that the sides of the triangle on the sky are also measured in radians ). the given resolution is determined by the number of bins for, and the number of bins for. note that a particular triangle might appear more than once in this scheme, albeit with different resolutions. different triangular bins of the three - point function are strongly correlated anyway, and the correlation from duplicating triangles can be taken into account in the general statistical framework over correlated bins. given a triangular configuration, and a pixel, all other pixels which enter the summation in equation [ eq : estimator ] are located on two concentric rings of size and. as a consequence, the summation over fixed can be thought of as an unnormalized ( raw ) two - point correlation function between two rings. to obtain the three - point correlation function, one has to multiply this two - point correlation function with the value of the center pixel and finally sum over. calculating the two - point correlation function of rings can be fast if one repixellizes the map ( c.f. [ fig : rings ] ) into rings with sizes matching the binning scheme for, and uniform division in. such a repixellization, resulting in ring - pixels as shown in fig. [ fig : rings ], would take only steps even in a naive way ; the healpix hierarchical scheme allows it to be done in time. we use the following algorithm : let us start a recursive tree walk at the coarsest map, in the healpix scheme. for each pixel in this map, we determine, using its center, which ring - pixel it would belong to. if the size of the pixel is much smaller than this ring - pixel ( how much smaller is a parameter of our algorithm : in this paper we used the condition that the pixel has to be smaller than the bin width, which is also the approximate size of the ring - pixels ), we record it. if not, the algorithm splits the quad - tree and calls itself recursively for each of the four sub - pixels. this procedure ends at the latest when the highest resolution ( i.e., the one of the underlying map ) is reached. if the bins are chosen appropriately such that large ring - pixels are set up for large triangles, for many pixels it will finish earlier. as noted above, the map has to be regridded around each pixel into rings of ring - pixels. in total, this takes time.
calculating the two - point correlation function between rings can be sped up using fast fourier transform ( fft ) methods, such as those put forward in. the recipe is the following. first, fft every ring to obtain complex coefficients ; then calculate for every pair of rings the `` pseudo power spectrum '', where means complex conjugate. due to the u(1) symmetry of the ring, an inverse transform will give the ( raw ) two - point correlation function between the two rings ( c.f. * ? ? ?* ; * ? ? ?* ). if we have rings, each of them ring - pixels, each fft can be done in time, and there are cross correlations to be calculated for a full scan of configurations. all the above needs to be performed for each pixel as a center point. the total scaling ( including the initial regridding ) takes, where we took into account that the two opposite pixels can be handled in one go if a symmetric set of bins around are used for. while the above procedure to calculate raw ( unnormalized ) correlation functions appears somewhat complex, we have checked with direct calculation that it gives numerically the same result as calculating correlations on the rings in a naive way. in order to obtain normalized correlation functions, the same procedure has to be followed for the rings associated with the weights / masks. each configuration of the raw three - point function is divided by the mask / weight three - point function for the final result. for many realizations with the same mask, such as in the case of massive monte carlo simulations, the mask correlations need to be estimated only once, representing negligible cost. the above abstract scheme and calculation will be illustrated and further clarified with a practical application to wmap next. we demonstrate our method to calculate the three - point correlation function with an application to wmap. we downloaded the first year foreground cleaned maps from the lambda website. there are in total 8 maps for 8 differencing assemblies ( da ) in the q, v and w bands : q1, q2, v1, v2, w1, w2, w3, and w4, already in healpix format. following the two - point analysis of wmap, we only used cross correlations, i.e., three - point correlation functions calculated from three different das. we produced 100 ( gaussian ) simulations with synfast in the healpix package. the input power spectrum, also from the lambda website, was taken from the model using a scale - dependent ( running ) primordial spectral index which best fits the wmap, cbi and acbar cmb data, plus the 2df and lyman - alpha data. every simulation consists of 8 assembly maps as the data. these 8 maps were generated with the same random seed, representing the same primordial cmb, but 8 different beam transfer functions. then different simulated noise maps from the lambda website were added to the synfast output maps. since the non - gaussian signal is exceedingly small, and on the smallest scales the data are noise dominated, we degraded all maps ( simulations and data ) to after applying the kp2 mask. more precisely, we added up pixel and weight values for each map ; our two weighting schemes are presented in the next subsection. at the heart of our algorithm is the regridding of figure [ fig : rings ] which matches our binning of the triangles. we chose 19 rings for half of the sphere surface, and the same bins are repeated on the other half symmetrically around.
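as an aside, the fft step described above reduces, for one centre pixel and one pair of equal-length rings, to a circular cross-correlation; a numpy sketch is given below, with the regridding into rings assumed to have been done already.

```python
import numpy as np

def ring_cross_correlation(ring1, ring2):
    """raw two-point correlation between two rings of equal length m as a
    function of the angular lag: the 'pseudo power spectrum'
    fft(ring1) * conj(fft(ring2)) followed by an inverse transform."""
    f1, f2 = np.fft.fft(ring1), np.fft.fft(ring2)
    return np.fft.ifft(f1 * np.conj(f2)).real  # one entry per lag between the rings

def raw_three_point_contribution(center_value, ring1, ring2):
    """contribution of a single centre pixel to the unnormalized three-point
    function: the ring-ring correlation weighted by the centre pixel value."""
    return center_value * ring_cross_correlation(ring1, ring2)
```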
the 19 bins are chosen to be uniformly distributed in logarithm between and .the number 19 was chosen such that , which gives a logarithmic resolution of .every ring was divided to 20 ring - pixels in .this number renders the resulting ring - pixels fairly compact , and it is also convenient for our chosen implementation of fft ( fftw , * ? ? ?weight maps were constructed using the kp2 mask and the noise profile of the maps .we used two weighting schemes : flat weighting where is 1 or 0 depending on the mask , and ( inverse ) noise weighting ; for the latter we used the effective number of observations of the pixel .the weights need to be determined only up to multiplicative factor , as their overall normalization cancels from the algorithm .the average noise level for each da is used when combining over different cross correlations .the total number of triangular configurations in rings with possible values of ( angles large than count to ) is for autocorrelations .the same number is valid for a cross correlations ( which we will be exclusively doing ) of 3 das .we introduce the notation ( da1 , da2 , da3 ) , for the central pixel , the first ring and the second ring sampled from the three da in this order .in addition we restrict the `` first ring '' has no larger than that of the `` second ring '' .then the total number of cross correlations between the 8 da s is .effectively , each triangle is calculated six times for a given triplet of da s due to the possible 6 permutations .however , each sample has a different resolution , therefore we opt to keep all possibilities .the resulting correlations are taken into account when dealing with correlated bins in general . in total, there are about triangular configurations for each data or simulation set .note that the total number of triplets of ring - pixels examined is more , or .this is still a lot smaller than checking triplets of pixels naively in an map .these numbers suggest that our algorithm even without fft should take order of days , while the naive algorithm would need over 200 years of cpu . for a batch of simulations ( comprising of da maps ) ,the calculation of the full three - point function in all the configuration takes about 90 hours on an intel xeon 2.4ghz cpu .this means each cross - correlation takes only about minutes on average !about 10 hours are saved by batch processing 10 sets of simulations : to estimate in the data alone ( one set of 8 da files ) took 10 hours .clearly , the given resolution does not extract all the information from the data , as there are approximately distinct configurations of the bispectrum or three - point function .however , it surely must be redundant to extract more configurations than the amount of data .the ratio of data points vs. 
configurations is about for our chosen bins .it is unlikely that it were fruitful to push this number towards much smaller values , although the speed of our algorithm would allow higher resolution .figure [ fig : zeta ] shows a typical set of measurements for the ( w2,w3,w4 ) da cross - correlation .the results from wmap lie comfortably in the 68% range of results from gaussian simulations .similar results are found for the other da combinations , or when all the 24 possible w - only da results are averaged .although different combinations have different effective beam , full averaging is meaningful on large scales .we checked that averaging all 336 possible combinations is also consistent with gaussian .finally , repeating all the measurement with noise weighting produced no obvious departure from gaussianity either .at the same time , the scatter in the simulations , i.e. , the probability density function ( pdf ) of from 100 simulation , shows a slightly non - gaussian signature .for ( w2,w3,w4 ) , figure [ fig : hist ] shows a histogram derived from all simulated values normalized by their measured median and 68% levels .slight deviations from gaussian distribution are evident : student distribution of degree 3 fits better the overall distribution .this same distribution produces lower when applied to _ individual _ triangular configurations according to the inset , i.e. it is a ( marginally ) better fit than gaussian .we fully take this into account in our hypothesis testing which is described next .our goal is to test the null - hypothesis of gaussianity against our measurements by means of comparing the values measured from the data with the corresponding probability distribution function ( pdf ) determined from gaussian simulations .a crucial step in the traditional method appears to be computationally infeasible due to the large number of configurations : calculation of the ( pseudo ) inverse of an matrix for , the total number of our highly correlated configurations .moreover , as seen above , the underlying pdf marginally violates gaussian assumption , even for gaussian simulations . even if it were possible to calculate the inverse of the covariance matrix , and we were to accept the accuracy of the gaussianity in pdf of the individual bins , it is not possible to determine the underlying covariance matrix with sufficient accuracy .in fact , one would need ( e.g. , * ? ? ? * ) at least ( and likely much more than ) 2.6 million simulations for that purpose . shown that using simulations with uncorrelated noise might result in spurious detection of non - gaussianity .therefore we chose to use only the wmap supplied correlated noise simulations , of which 110 is available at present .it is straightforward to test the null - hypothesis with a single configuration : we can calculate a -value from the best fit student distribution from our simulations .the -value is defined as the probability of obtaining a value that is at least as extreme as the one measured from wmap . 
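for a single configuration this p-value can be computed, for instance, from a student-t fit to the simulated values; the snippet below is only meant to illustrate the definition and does not reproduce the exact fitting choices of the analysis.

```python
import numpy as np
from scipy import stats

def two_sided_p_value(sim_values, data_value):
    """fit a student-t distribution to the simulated values of one triangular
    configuration and return the probability of a value at least as extreme
    as the one measured in the data."""
    df, loc, scale = stats.t.fit(np.asarray(sim_values))
    z = abs(data_value - loc) / scale
    return 2.0 * stats.t.sf(z, df)
```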
for a threshold, the null - hypothesis is rejected at level if the -value of the datum is smaller than. a problem arises from combining 2.6 million tests when all the data are used. for instance, even if the hypothesis were true, about 260 bins would still be rejected at the level ( ignoring the correlations in the data ). fortunately a robust and simple method exists for massive hypothesis testing, which is insensitive to correlations between the tests, and makes no assumption on the gaussianity of the underlying error distribution : the method of false discovery rate ( fdr ). in astronomy, it has been successfully applied in the context of image processing and finding outliers by, which can be consulted for a more detailed introduction. the fdr method combines the same -value as defined above for individual tests using a threshold for rejecting the null hypothesis. this combination is insensitive to correlations and has more statistical power than a naive combination. our goal is to adapt this powerful method for hypothesis testing of three - point correlation function measurements with an overwhelming number of configurations. the fdr method gives a simple prescription for finding a threshold for rejection. in particular, the recipe suggests that we choose a threshold such that we control the rate of _ false rejections _ or fdr. the parameter, taking a similar role to the confidence interval in more traditional tests, is the maximum rate of fdr. if we fix an such that, the fdr procedure will guarantee in ensemble average. next we describe the recipe to control fdr ; more details can be found in. let denote the -values calculated from the measurements of configurations, _ sorted _ from smallest to largest. let where is a constant depending on the level of correlations between different configurations. for uncorrelated data ; while can be used for correlated data. note that technically one would have to adjust to the degree of correlations in the data. the suggested value for correlated data is extremely conservative, and should be considered as a strong upper limit. even using this conservative adjustment decreases the statistical power of the technique only logarithmically ; the final results are expected to be robust regardless of the degree of correlations. if configurations with are rejected, equation [ eq : fdr ] will hold, i.e., the fdr is controlled according to our preset parameter. the procedure is represented graphically in figure [ fig : fdr ] : is plotted against, superposed with the line through the origin of slope. all -values to the left of the last point at which falls below the line reject the null hypothesis. these might include some false discoveries, which are guaranteed to be a smaller fraction than in ensemble average. we have applied the fdr recipe to all of our individual cross - three point functions, as well as our full data set. since is a constant, initially we kept. for a fixed, the results can be subsequently reinterpreted in terms of any. for the da combination ( w2, w3, w4 ), there is no rejection for, i.e., allowing as high as 81% false rejections, not a single configuration rejected our gaussian null hypothesis. correlations might increase, but it must be. the true, when correlations are taken into account, can only be larger than our effective for. in other words, the data are fully consistent with gaussianity. as a sanity check, we repeated the fdr analysis in our simulations as well.
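as an illustration, the thresholding recipe above amounts to the following few lines; the conservative correction factor for correlated data is the harmonic-sum choice mentioned in the text, and the p-values are assumed to be already available.

```python
import numpy as np

def fdr_rejections(p_values, alpha, correlated=True):
    """indices of configurations rejected at false discovery rate alpha:
    sort the p-values, compare the j-th smallest with j * alpha / (c_n * n),
    and reject everything up to the last p-value that falls below this line."""
    p = np.asarray(p_values)
    n = p.size
    c_n = np.sum(1.0 / np.arange(1, n + 1)) if correlated else 1.0
    order = np.argsort(p)
    line = alpha * np.arange(1, n + 1) / (c_n * n)
    below = np.nonzero(p[order] <= line)[0]
    if below.size == 0:
        return np.array([], dtype=int)      # no rejections at this alpha
    return order[: below[-1] + 1]           # everything up to the last crossing
```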
by scanning through different values from 0 to 1, we find that 50 out of 100 simulations have rejections with. this means that the wmap measurements are fully consistent with gaussianity at a level better than 1- in the traditional sense. in summary, at the three - point level, scanning all configurations, we did not find any significant non - gaussianity which would be localized in pixel space triangular configurations. we performed the fdr analysis on all measurements individually, as well as on the combination of all those measurements with million configurations in total. none of these cases produced credible evidence for non - gaussianity and all of them were fully consistent with our null hypothesis at. we presented a new method to measure angular three - point correlation functions on spherical maps. we achieve an unprecedented scaling with a combination of hierarchical and fourier algorithms. the speed of our technique allows a systematic scan of the full available configuration space at a given resolution. such speed is especially useful for cross correlations and monte carlo simulations, where a vast number of configurations and measurements need to be performed. we have achieved a speed of about minutes per cross - correlation, when 336 cross - correlations have been estimated simultaneously in healpix maps using a single intel xeon 2.4ghz cpu. this is to be contrasted with a naive approach, which would have taken about 200 years per cross - correlation ; a million fold speed up. as a first application of our code we analyzed the first year wmap data along with 100 realistic simulations. we have calculated cross - correlations for about triangular configurations, or about triplets in total, in the maps corresponding to the das. the ratio of pixels / configurations is about 50 for each measurement. comparing our measurements with those from 100 gaussian simulations with realistic correlated noise, we found wmap to be comfortably within the 68% range for most configurations. any significant departure from gaussianity at the three - point level, even if localized in particular triangular configurations, would have shown up clearly in our full scan of the available configuration space. our main result is that there is no credible evidence of non - gaussianity at the three - point level at any of the triangular configurations we examined. as a consequence, if the tentative detection of non - gaussianity claimed in previous works holds up, it should correspond to either 4-point or higher order correlations, or to spatially localized features which break rotational invariance ( e.g., * ? ? ?* ; * ? ? ?* ). in contrast with our measurements, all previous studies of higher order statistics used autocorrelations. comparison of our errorbars with those of appears to show that this increases the errors by a factor of two ( see the discussions below ). in addition, many measurements used uncorrelated noise simulations.
according to the findings of , this might increase the likelihood of finding spurious non - gaussianity .analysis of our gaussian simulations revealed that there is a slight non - gaussianity in the error distribution of individual configurations .this is not surprising , since three - point correlation function is a non - linear construction of the gaussian random variables ( c.f .the error distribution is well fit by a student distribution with 3 degrees of freedom .to quantify any possible departure of the overall data set from gaussianity , we introduced a new technique , fdr , to interpret three - point statistics .this corresponds to an optimized multiple hypothesis testing , and it is insensitive to the unavoidable correlations in the data . all of our fdr tests , whether applied to any of the 336 cross correlations , or the combined data set , were fully consistent with gaussianity with better then 1- .this quantifies our previous assertion based on examination of the individual configurations under the assumption of statistical isotropy .the above model independent tests showed that there is no credible evidence of any non - gaussianity in the data .next we illustrate how our measurements yield constraints on specific non - gaussian models .we choose a simple phenomenological model corresponding to the quadratic expansion of the density field in terms of one parameter , , as put forward by : to obtain constraints on this parameter , we construct an estimator for where is our measurement in data or simulation maps .we calculated the two - point correlation function analytically , to avoid any bias from the non - linear construction ( c.f .we used the same best fit power spectrum as for the simulations , as well as taking into account beam and pixel window functions .since previous measurements already established the weakness of non - gaussianity , our gaussian simulations should be accurate enough to calculate the variance .applying the same estimator to our 100 simulations , we obtained error bars for estimated from each particular configuration .the simplicity of the phenomenological model lies in the fact that a constant value of is assumed .we do not attempt to combine our estimates optimally , instead we use simple considerations .the signal increases towards small scales in this model , while noise dominates on the smallest scales .since we already discarded the smallest scales when using , it is intuitively clear that most signal pertaining to this model will be concentrated in the small fraction of the triangles corresponding to small scales , in particular the skewness . to confirm this we generated and analyzed a set of non - gaussian simulations according to equation [ eq : nong ] , with equal to 1000 , 2000 , 3000 , 4000 , and 6000 .inspection of the configurations together with the errorbars from the gaussian simulations confirmed the above idea .therefore we decided to use the skewness , which corresponds to giving zero weight to all other configurations when combining our estimators . from thesewe obtain where the errorbar was estimated from the gaussian simulations . quotes similar constraints for a low quadrupole cdm model , but their errorbars are a factor of two larger for a model similar to the one we use . 
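for reference, the quadratic model referred to above is usually written in the form below; this is quoted as the standard parametrization, since the displayed equation did not survive the text extraction, with the field denoting the linear gaussian part and the dimensionless amplitude controlling the strength of the non-gaussianity.

```latex
\Phi(\hat n) \;=\; \Phi_{\rm L}(\hat n)
  \;+\; f_{\rm NL}\left[\,\Phi_{\rm L}^{2}(\hat n) - \langle \Phi_{\rm L}^{2}\rangle\,\right]
```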
the fact that we obtained a factor of two tighter constraints than suggests that using cross - correlations is superior to auto - correlations for three - point statistics of wmap. as a sanity check, we calculated the mean value of the skewness estimator for 100 gaussian simulations ; it yields about. on the other hand, we also demonstrate that we can recover from the non - gaussian simulations. all the simulations have the same underlying gaussian signal and noise ; the only difference is the value of. according to figure [ fig : fnlt ] the errors might be underestimated when, and/or there might be a small low bias, but it is clear from the figure that we could detect non - gaussianity if it were present. a suboptimal combination of estimates from all configurations weighted by their inverse variance yields about, a significantly weaker result, confirming the intuitive idea that most of the signal is concentrated on small scales. some of the results in this paper have been derived using the healpix package. we acknowledge the use of the legacy archive for microwave background data analysis ( lambda ). support for lambda is provided by the nasa office of space science. the authors were supported by nasa through aisr nag5 - 11996 and atp nasa nag5 - 12101, as well as by nsf grants ast02 - 06243, ast-0434413 and itr 1120201 - 128440.
we present a new method to estimate three - point correlations in cosmic microwave background maps . our fast fourier transform based implementation estimates three - point functions using all possible configurations ( triangles ) at a controlled resolution . the speed of the technique depends both on the resolution and the total number of pixels . the resulting scaling is substantially faster than naive methods with prohibitive scaling . as an initial application , we measure three - point correlation functions in the first year data release of the wilkinson anisotropy probe . we estimate 336 cross - correlations of any triplet of maps from the 8 differential assemblies , scanning altogether 2.6 million triangular configurations . estimating covariances from gaussian signal plus realistic noise simulations , we perform a null - hypothesis testing with regards to the gaussianity of the cosmic microwave background . our main result is that at the three - point level wmap is fully consistent with gaussianity . to quantify the level of possible deviations , we introduce false discovery rate analysis , a novel statistical technique to analyze for three - point measurements this confirms that the data are consistent with gaussianity at better than 1- level when jointly considering all configurations . we constrain a specific non - gaussian model using the quadratic approximation of weak non - gaussianities in terms of the parameter , for which we construct an estimator from the the three - point function . we find that using the skewness alone is more constraining than a heuristic suboptimal combination of all our results ; our best estimate is assuming a concordance model .
today s global economy is more interconnected and complex than ever , and seems out of any particular institution s control .the diversity of markets and traded products , the complexity of their structure and regulation , make it a daunting challenge to understand behaviours , predict trends or prevent systemic crises . the neo - classical approach , that aimed at explaining global behaviour in terms of perfectly rational actors , has largely failed .yet , persistent statistical regularities in empirical data suggest that a less ambitious goal of explaining economic phenomena as emergent statistical properties of a large interacting system may be possible , without requiring much from agents rationality ( see e.g. ) .one of the most robust empirical stylised fact , since the work of pareto , is the observation of a broad distribution of wealth which approximately follows a power law .such a power law distribution of wealth does not require sophisticated assumptions on the rationality of players , but it can be reproduced by a plethora of simple models ( see e.g. ) , in which it emerges as a typical behaviour i.e. as the behaviour that the system exhibits with very high probability within quite generic settings .the debate on inequality has a long history , dating back at least to the work of kutznets on the u - shaped relationship of inequality on development .much research has focused on the relation between inequality and growth ( see e.g. ) .inequality has also been suggested to be positively correlated with a number of indicators of social disfunction , from infant mortality and health to social mobility and crime .the subject has regained much interest recently , in view of the claim that levels of inequality have reached the same levels as in the beginning of the 20th century .saez and zucman corroborate these findings , studying the evolution of the distribution of wealth in the us economy over the last century , and they find an increasing concentration of wealth in the hands of the 0.01% of the richest .figure [ fig : data ] shows that the data in saez and zucman is consistent with a power law distribution , with a good agreement down to the 10% of the richest ( see caption ref . reports the fraction of wealth in the hands of the and richest individuals . if the fraction of individuals with wealth larger than is proportional to , the wealth share in the hands of the richest percent of the population satisfies ( for ) .hence is estimated from the slope of the relation between and , shown in the inset of fig .[ fig : data ] ( left ) for a few representative years .the error on is computed as three standard deviations in the least square fit . ] ) .the exponent has been steadily decreasing in the last 30 years , reaching the same levels it attained at the beginning of the 20th century ( in 1917 ) .of the wealth distribution ( left y - axis ) as a function of time .both time series refer to the us .the data on the money velocity is retrieved from , the data on the wealth distribution is taken from .inset : relation between the fraction of wealth owned by the percent wealthiest individuals , and for the years 1980 , 1990 , 2000 and 2010 ( see footnote [ foot : beta fit ] ) . 
right : mzm velocity of money ( mzmv , central y - axis ) as a function of , for the same data .liquidity , defined as the probability that a unit - money random exchange takes place , ( right y - axis ) as a function of , in the synthetic economy described by our model ( see eq .[ def : pavg ] and figure [ fig : k10_ps_beta ] for details on the numerical simulations).,scaledwidth=100.0% ] rather than focusing on the determinants of inequality , here we focus on a specific consequence of inequality , i.e. on its impact on liquidity .there are a number of reasons why this is relevant .first of all , the efficiency of a market economy essentially resides on its ability to allow agents to exchange goods .a direct measure of the efficiency is the number of possible exchanges that can be realised or equivalently the probability that a random exchange can take place .this probability quantifies the `` fluidity '' of exchanges and we shall call it _ liquidity _ in what follows .this is the primary measure of efficiency that we shall focus on .secondly , liquidity , as intended here , has been the primary concern of monetary polices such as quantitative easing aimed at contrasting deflation and the slowing down of the economy , in the aftermath of the 2008 financial crisis . a quantitative measure of liquidity is provided by the _ velocity of money _ , measured as the ratio between the nominal gross domestic product and the money stock and it quantifies how often a unit of currency changes hand within the economy .as figure [ fig : data ] shows , the velocity of money has been steadily declining in the last decades .this paper suggests that this decline and the increasing level of inequality are not a coincidence .rather the former is a consequence of the latter . without clear yardsticks marking levels of inequality that seriously hamper the functioning of an economy , the debate on inequality runs the risk of remaining at a qualitative or ideological level .our main finding is that , in the simplified setting of our model , there is a sharp threshold beyond which inequality becomes intolerable .more precisely , when the power law exponent of the wealth distribution approaches one from above , liquidity vanishes and the economy halts because all available ( liquid ) financial resources concentrate in the hands of few agents .this provides a precise , quantitative measure of when inequality becomes too much .our main goal in the present work is thus to isolate the relation between inequality and liquidity in the simplest possible model that allows us to draw sharp and robust conclusions .specifically , the model is based on a simplified trading dynamics in which agents with a pareto distributed wealth randomly trade goods of different prices .agents receive offers to buy goods and each such transaction is executed if it is compatible with the budget constraint of the buying agent .this reflects a situation where , at those prices , agents are indifferent between all feasible allocations .the model is in the spirit of random exchange models ( see e.g. 
) , but our emphasis is not on whether the equilibrium can be reached or not .in fact we show that the dynamics converges to a steady state , which corresponds to a maximally entropic state where all feasible allocations occur with the same probability .rather we focus on the allocation of cash in the resulting stationary state and on the liquidity of the economy , defined as the fraction of attempted exchanges that are successful .we remark that since the wealth distribution is fixed , the causal link between inequality and liquidity is clear in the simplified setting we consider . within our model ,the freezing of the economy occurs because when inequality in the wealth distribution increases , financial resources ( i.e. cash ) concentrate more and more in the hands of few agents ( the wealthiest ) , leaving the vast majority without the financial means to trade .this ultimately suppresses the probability of successful exchanges , i.e. liquidity ( see figure [ fig : data ] , right ) .this paper is organised as follows : we start by describing the model and its basic characteristics in section [ sec : modeldescription ] , providing a quick overview of the main results and features of the model in section [ sec : main_features ] . in section[ sec : solutionsnumerical_and_analytical ] we explain in more detail how these features can be understood by an approximated solution of the master equation governing the trading dynamics .details on the analytical derivations and monte carlo simulations are thoroughly presented in the appendices .we conclude with some remarks in section [ sec : con ] .the model consists of agents , each with wealth with .agents are allowed to trade among themselves objects .each object has a price .a given allocation of goods among the agents is described by an allocation matrix with entries if agent owns good and zero otherwise .agents can only own baskets of goods that they can afford , i.e. whose total value does not exceed their wealth .the wealth not invested in goods corresponds to the cash ( liquid capital ) that agent has available for trading .the inequality for all indicate that lending is not allowed .therefore the set of feasible allocations those for which for all is only a small fraction of the conceivable allocation matrices .starting from a feasible allocation matrix , we introduce a random trading dynamics in which a good is picked uniformly at random among all goods .its owner then attempts to sell it to another agent drawn uniformly at random among the other agents . 
if agent has enough cash to buy the product , that is if , the transaction is successful and his / her cash decreases by while the cash of the seller increases by .we do not allow objects to be divided .notice that the total capital of agents does not change over time , so and the prices are parameters of the model .the entries of the allocation matrix , and consequently the cash , are dynamical variables , which evolve over time according to this dynamics .this model belongs to the class of zero - intelligent agent - based models , in the sense that agents do not try to maximize any utility function .an interesting property of our dynamics is that the stochastic transition matrix is symmetric between any two feasible configurations and : .we note that any feasible allocation can be reached from any other feasible allocation by a sequence of trades .this implies that the dynamics satisfies the detailed balance condition , with a stationary distribution over the space of feasible configurations that is uniform : .alternative choices of dynamics which also fulfil these conditions are explored in appendix [ app : rules_enumeration ] .in particular , we focus on realisations where the wealth is drawn from a pareto distribution , for for each agent .we let vary to explore different levels of inequality , and compare different economies in which the ratio between the total wealth and the total value of all objects is kept fixed .we use so as to have feasible allocations .we consider cases where the objects are divided into a small number of classes with objects per class ( ) ; objects belonging to class have the same price .if is the number of object of class that agent owns , then takes the form .the main result of this model is that the flow of goods among agents becomes more and more congested as inequality increases until it halts completely when the pareto exponent tends to one from above .the origin of this behaviour can be understood in the simplest setting where , i.e. all goods have the same price ( we are going to omit the subscript in this case ) .figure [ fig : picturesque_richpoor_transition_beta ] shows the capital composition for all agents in the stationary state , where is the average number of goods owned by agent .the population of agents separates into two distinct classes : a _ class of cash - poor _ agents , who own an average number of goods that is very close to the maximum allowed by their wealth , and a _ cash - rich class _ , where agents have on average the same number of goods .these two classes are separated by a sharp crossover region .the inset of figure [ fig : picturesque_richpoor_transition_beta ] shows the cash distribution ( where represents the number of goods they are able to buy ) for some representative agents .while cash - poor agents have a cash distribution peaked at , the wealthiest agents have cash in abundance .agents , , and .points denote the average composition of capital for different agents obtained in monte carlo simulations .this is compared with the analytical solution obtained from the master equation ( green dashed line ) given by eq .( [ eq : me_solution_1good ] ) .the vertical dashed line at indicates the analytically predicted value of the crossover wealth that separates the two classes of agents .insets : cash distributions of the indicated agents . ]these two observations allow us to trace the origin of the arrest in the economy back to the shrinkage of the _ cash - rich class _ to a vanishingly small fraction of the population , as . 
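The trading rule just described is simple enough to simulate directly. Below is a minimal sketch for a single class of goods of unit price: wealths are drawn from a Pareto law, a feasible allocation is prepared by handing out goods at random among agents with spare budget, and each step picks a random good and a random prospective buyer, accepting the trade only when the buyer's cash covers the price. All parameter values and the initialization are illustrative choices rather than those used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(beta=1.5, n_agents=1000, pi=0.4, price=1.0, c_min=1.0, n_steps=500_000):
    # Pareto-distributed wealth: P(c > x) = (c_min / x)**beta for x >= c_min.
    wealth = c_min * (1.0 - rng.random(n_agents)) ** (-1.0 / beta)
    n_goods = int(pi * wealth.sum() / price)          # fixed ratio of goods value to total wealth
    capacity = np.floor(wealth / price).astype(int)   # most goods an agent can ever afford

    # Feasible initial allocation: distribute goods among agents with spare budget.
    goods_held = np.zeros(n_agents, dtype=int)
    for _ in range(n_goods):
        goods_held[rng.choice(np.flatnonzero(goods_held < capacity))] += 1

    owners = np.repeat(np.arange(n_agents), goods_held)   # owner of every individual good
    successes = 0
    for _ in range(n_steps):
        g = rng.integers(n_goods)                  # pick a good uniformly at random
        seller, buyer = owners[g], rng.integers(n_agents)
        if buyer == seller:
            continue
        if wealth[buyer] - goods_held[buyer] * price >= price:   # buyer has enough cash
            owners[g] = buyer
            goods_held[seller] -= 1
            goods_held[buyer] += 1
            successes += 1
    return successes / n_steps   # success rate over the whole run (transient not discarded)

print(simulate(beta=1.5), simulate(beta=1.1))   # liquidity drops as the exponent approaches one
```

Running this for decreasing values of the Pareto exponent shows the success rate collapsing as the exponent approaches one, which is the freezing discussed below.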
as we ll see in the next section , when is smaller than the fraction of agents belonging to this class vanishes as . in this regime ,not only the wealthiest few individuals own a finite fraction of the whole economy s wealth , as observed in ref . , but they also drain all the financial resources in the economy .these findings extend to more complex settings .figure [ fig : k10_ps_beta ] illustrates this for an economy with classes of goods ( see figure caption for details ) and different values of . in order to visualise the freezing of the flow of goods we introduce the success rate of transactions for goods belonging to class , denoted as .figure [ fig : k10_ps_beta ] shows that , as expected , for a fixed value of the pareto exponent the success rate increases as the goods become cheaper , as they are easier to trade .secondly it shows that trades of all classes of goods halt as tends to unity , that is when wealth inequality becomes too large , independently of their price . [ cols="^,^ " , ] the decrease of when inequality increases ( i.e. as decreases ) is a consequence of the concentration of cash in the hands of the wealthiest agents .this can be observed in the right panel of figure [ fig : k10_ps_beta ] , which shows the average cash of agents with a given wealth , for different values of .the freezing of the economy when decreases occurs because fewer and fewer agents can dispose of enough cash ( i.e. have ) to buy the different goods ( prices correspond to the dashed lines ) .note finally that quantifies liquidity in terms of goods . in order to have an equivalent measure in terms of cash that can be compared to the velocity of money , we average over all goods this quantifies the frequency with which a unit of cash changes hand in our model economy , as a result of a successful transaction .it s behaviour as a function of for the same parameters of the economy in figure [ fig : k10_ps_beta ] is shown in the right panel of figure [ fig : data ] .in order to shed light on the findings described above , in this section we describe how to derive them within an analytic approach .we start by dealing with the simpler case where all the goods in the system have the same price , ( i.e. ) . a formal approach to this problem consists in writing the complete master equation that describes the evolution of the probability to find the economy in a state where each agent has a definite number of goods .taking the sum over all values of for , one can derive the master equation for a single agent with wealth .the corresponding marginal distribution in the stationary state can be derived from the detailed balance condition where is the maximum number of goods which agent can buy with wealth and is the probability that a transaction where agent sells one good ( i.e. ) is successful .( [ eq : mastereq3_1good_1guy ] ) says that , in the stationary state , the probability that agent has objects and buys a new object is equal to the probability to find agent with objects , selling successfully one of them .the factor enforces the condition that agent can afford at most goods and it implies that for .exchanges are successful if the buyer does not already have a saturated budget . so the probability is also given by where the last relation holds because when the dependence on becomes negligible .this is important , because it implies that for large the variables can be considered as independent , i.e. , and the problem can be reduced to that of computing the marginals self - consistently . 
the solution of eq .( [ eq : mastereq3_1good_1guy ] ) can be written as a truncated poissonian with parameter \theta\left(m_i - z \right ) \label{eq : me_solution_1good}\end{aligned}\ ] ] with is a normalization factor that can be fixed by .finally , the value of or equivalently of can be found self - consistently , by solving eq .( [ eq : ps_1guy ] ) .notice that the most likely value of for an agent with is given by this provides a natural distinction between cash - poor agents those with that often can not afford to buy further objects , and cash - rich ones those with who typically have enough cash to buy further objects .this separation into two classes of agents was already pointed out in figure [ fig : picturesque_richpoor_transition_beta ] . in terms of wealth ,the poor are defined as those with whereas the rich ones have , where the threshold wealth is given by .notice that when , a condition that occurs when the economy is nearly frozen ( ) , the distribution is sharply peaked around so that its average is .then the separation between the two classes becomes rather sharp , as in figure [ fig : picturesque_richpoor_transition_beta ] . in this regime, we can also derive an estimate of in the limit , for .indeed , we have for , so a rough estimate of is given by .taking the average over agents , as in eq .( [ eq : ps_1guy ] ) , and assuming a distribution density of wealth for and for , one finds ( see appendix [ app : otherderivationofps_using_balanceinthemastereqfashion ] ) ^{1/(1 - \beta ) } , \label{eq : c1_correct } \\ p^{(\text{suc } ) } & = \frac{m}{n \lambda } \simeq \frac{\pi}{c } \frac{{\mathbb{e}\left [ c \right ] } } { c^{(1)}}. \label{eq : ps_correct}\end{aligned}\ ] ] here } = \beta/(\beta-1) ] diverges as , but also that within this approximation the threshold wealth diverges much faster , with an essential singularity .more precisely , we note that , so that is a number smaller than 1 ( yet positive ) . from eq .( [ eq : c1_correct ] ) , we have .therefore the liquidity vanishes as .for finite , this approximation breaks down when gets too close to or smaller than one .also , } ] have their budget saturated with goods of class and can not afford more expensive objects ( here , and ) .an estimate for the thresholds can be derived following the same arguments as for , by observing that when analysing the dynamics of goods of type , all agents in class are effectively frozen and can be neglected . combining this with the conservation of the total number of objects of each kind , we obtain a recurrence relation for .we refer the interested reader to the appendix [ app : recurrence ] for details on the derivation , and report here the result in the case of goods with , large enough , with and in the limit : ^{\frac{1}{1-\beta } } , \label{eq : ps_and_ck_generalcaseanymathcalm1 } \\p_k^{(\text{suc } ) } & = \frac{m_k}{n \lambda_k } \simeq \frac{\pi}{k c } \frac{{\mathbb{e}\left [ c \right ] } } { c^{(k)}}. \label{eq : ps_and_ck_generalcaseanymathcalm2}\end{aligned}\ ] ] in the limit of large inequality , close inspection . ] of eq .( [ eq : ps_and_ck_generalcaseanymathcalm1 ] ) shows that , which implies that all agents become cash - starved except for the wealthiest few . since } /c^{(k)} ] is the expected value of the wealth per agent .we also use the fact that we fill in the system a number of goods in such a way to have a fixed ratio . 
performing the integral on the r.h.s of eq .( [ eq : app_ps_analytic1 ] ) gives an equation for : } } { c^{(1 ) } } = { c^{(1)}}^{-\beta } \left ( \frac{1}{1-\beta }\right ) - \frac{\beta}{1 - \beta } \frac{1}{c^{(1)}},\ ] ] that simplifies into : ^{1/(1 - \beta)}.\ ] ] [ [ app : recurrence ] ] derivation of and in the large limit for several types of good .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ an analytic derivation for the and can be obtained also for the cases of several goods , but only in the limit in which prices are well separated ( i.e. ) and the total values of good of any class is approximately constant ( we use ) . in this limitwe expect to find a sharp separation of the population of agents into classes .this is because implies that the market is flooded with objects of the class , which constantly change hands and essentially follow the laws found in the single type of object case .on top of this dense gas of objects of class , we can consider objects of class as a perturbation ( they are picked times less often ! ) . on the time scale of the dynamics of objects of type ,the distribution of cash is such that all agents with a wealth less than have their budget saturated by objects of type and typically do not have enough cash to buy objects of type nor more expensive ones .likewise , there is a class of agents with that will manage to afford goods of types and , but will hardly ever hold goods more expensive that . in brief, the economy is segmented into classes , with class composed of all agents with who can afford objects of class up to , but who are excluded from markets for more expensive goods , because they rarely have enough cash to buy goods more expensive than .this structure into classes can be read off from figure [ fig : k10_ps_beta ] , where we present the average cash of agents , given their cash in a specific case ( see caption ) .the horizontal lines denote the prices of the different objects , and the intersections with the horizontal lines define the thresholds .agents that have just above are cash - filled in terms of object of class , but are cash - starved in terms of objects .the liquidities can be given by the following expression according to the previous discussion of segmentation of the system into classes , and using the same approximation for this threshold probability discussed in the case of 1 type of good , we assume then in this case now we have } } { c^{(k)}}\ ] ] with similar calculations to the ones showed for the previous case , one can easily get to the recurrence relation : ^{\frac{1}{1-\beta}}.\ ] ] iterating , we explicit this into : ^{\frac{1}{1-\beta}},\ ] ] a comparison between the analytical estimate and numerical simulations , presented in figure [ fig : summary_section_one_object ] , shows that this approximation provides an accurate description of the collective behaviour of the model . as a function of the pareto exponent .comparison between numerical simulations and analytical estimates for one class of goods ( left panel ) and two classes of goods ( right panel ) .the blue solid circles are the result of monte carlo simulations performed for agents and averaged over 5 realizations . 
herethe error bars indicate the min and max value of over all realizations ( we used the `` adjusted pareto '' law for the right panel , see appendix [ app : adjusted_pareto_capital_distribution ] ) .the red lines are the analytic estimates according to eq .( [ eq : ps_correct ] ) and eq .( [ eq : ps_and_ck_generalcaseanymathcalm1 ] ) for left and right panels , respectively .the green crossed lines correspond to numerically ( see appendix [ app : iterativemethoddetailexplained ] ) solving the analytical solution for a population composed of ( kind of ) agents ., title="fig:",scaledwidth=50.0% ] as a function of the pareto exponent .comparison between numerical simulations and analytical estimates for one class of goods ( left panel ) and two classes of goods ( right panel ) .the blue solid circles are the result of monte carlo simulations performed for agents and averaged over 5 realizations . herethe error bars indicate the min and max value of over all realizations ( we used the `` adjusted pareto '' law for the right panel , see appendix [ app : adjusted_pareto_capital_distribution ] ) .the red lines are the analytic estimates according to eq .( [ eq : ps_correct ] ) and eq .( [ eq : ps_and_ck_generalcaseanymathcalm1 ] ) for left and right panels , respectively .the green crossed lines correspond to numerically ( see appendix [ app : iterativemethoddetailexplained ] ) solving the analytical solution for a population composed of ( kind of ) agents ., title="fig:",scaledwidth=50.0% ] see also in fig .[ fig : gini_intro ] how the liquidity over - concentrates ( with respect to capital concentration ) .there , we compare the liquid and capital concentrations , measured via their gini coefficients , for various values of in the system of fig .[ fig : k10_ps_beta ] ( ) . of the cash distribution ( liquid capital ) in the stationary state of the model as a function of the gini of the wealth distribution .the dashed line indicates proportionality between cash and wealth , in which case the inequality in both is exactly the same .the wealth follows a pareto distribution with exponent that tunes the degree of inequality ( the higher is , the more egalitarian the distribution).,width=302 ] in particular , note that the limit is singular , as reaches one around , with smaller yielding also .this is an alternative way to see how the concentration of capital generates an over - concentration of liquidities .we perform our monte carlo simulations of the trading market for agents .prices generally start from and increase by a factor between each good class .the minimal wealth is .the ratio is fixed as indicated in captions , and most importantly is kept constant between different realizations .as the total wealth fluctuates , so does the total number of goods .there are no peculiar difficulties with the numerical method ( apart from the large fluctuations in the average wealth , addressed below ) .the only thing one has to be careful with is to ensure that the stationary state has been reached , i.e. that all observables have a stationary value , an indication that the ( peculiar ) initial condition has been completely forgotten .the codes for this monte carlo simulation are available online . for we have predictions for the regime , in which the average wealth is particularly fluctuating from realization to realization .because the value of } ] . 
this effect is well known and well documented for power laws , but we present a concrete example of it in figure [ fig : nonconvergencelargen ] to emphasize its intensity . for the sample size that is typically manageable in our simulations ,i.e. , the typical value for the average value of the wealth ( using e.g. ) is of the order of the half of its expected value : } /2 ] .we see that even for huge samples , the typical s are significantly smaller than the expected } ] , we select an agent at random and increase its wealth until we have exactly } ] , we select the wealthiest agent and decrease its wealth until we have exactly } ] .as can be seen in figure [ fig : nonconvergencelargen ] , the most common case is the first one .the corresponding adjustment is equivalent to re - drawing the wealth of a single agent until it is such that } ] .it is quite crucial to use this `` adjusted '' pareto law for the small s ( i.e. for ) .see figure [ fig : various_capital_distro ] to have an idea of what this modified distribution means : the only changes in the two sample shown would be in the values of the wealthiest agent .here we describe the algorithm used to converge to a self - consistent set of values for , i.e. solving eq .( [ eq : ps_kobjs ] ) for ( or more simply eq .( [ eq : ps_1guy ] ) in the case of a single type of goods ) .it can be generalized straightforwardly to , although it may become numerically extremely expensive ( see also our code , ) .the results ( green crosses ) presented in figure [ fig : summary_section_one_object ] were obtained using the method described here . for each agentthere is a constant to be determined self - consistently .this presents a technical difficulty , as for a true power - law distribution , each agent gets a different wealth and thus the number of constants to compute is . a way to tacklethis difficulty is to consider a staircase - like distribution of wealth , where agents are distributed in groups with homogeneous wealth and where the number of agents per group is , so that individual agents approximately follow a power law with exponent .see figure [ fig : various_capital_distro ] ( green crosses ) to have an idea of what this modified distribution means concretely .this kind of staircase distribution is not a true power - law , in particular because its maximum is always deterministic and finite .however , as we now have , we can numerically solve the equations and thus find the exact value of .of course , the value of found in this way perfectly matches with monte carlo results if and only if we use the exact same distribution of wealth and goods in the simulation .this is not surprising at all , and merely validates our iterative scheme .however , we note that staircase - like wealth distributions turn out to be very good approximations of true power laws , when the wealth levels are sufficiently refined and the number of classes sufficiently large . in particular , using with a base , it can be seen that for large enough , the average wealth converges to a value very close to the expected one } ] . * for each , if , set to .* for each , set to . *if the obtained is smaller than , set it to and divide by . *if the obtained is larger than , set it to . *loop until all the are smaller than the predefined allowed error and/or the is close enough to .typically , the that is sufficient to achieve a reasonable approximated convergence can be estimated by taking a quick look at how much the is close to the } $ ] . 
running this algorithm at different values of , one can directly probe the convergence : when increasing does not change the values of the more than the errorbar allowed , then one considers the method to have converged . in practice , it is fairly fast to converge for large s , and the number of operations exponentially explodes as is decreased towards .
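As a concrete illustration of the scheme above, the sketch below closes the self-consistency for a single class of goods: it assumes the truncated-Poisson form of the single-agent marginals, fixes the Poisson parameter by requiring that the expected number of goods held matches the number of goods in circulation, and reads off the liquidity as the average probability that a randomly chosen buyer is not saturated. The update rule described in the text acts on agent-level constants; here a single global parameter and a simple bisection are used for brevity, and all numerical values are placeholders.

```python
import numpy as np
from scipy.special import gammaln

def truncated_poisson_stats(mu, m):
    """Mean occupation and saturation probability of a Poisson(mu) truncated at m goods."""
    z = np.arange(0, m + 1)
    log_w = z * np.log(mu) - gammaln(z + 1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return (z * w).sum(), w[-1]

def solve_liquidity(wealth, price, n_goods, tol=1e-10):
    caps = np.floor(wealth / price).astype(int)
    lo, hi = 1e-12, 1e6                  # bracket for the Poisson parameter
    while hi - lo > tol * hi:
        mu = np.sqrt(lo * hi)
        mean_total = sum(truncated_poisson_stats(mu, m)[0] for m in caps)
        if mean_total < n_goods:         # conservation of the total number of goods
            lo = mu
        else:
            hi = mu
    mu = np.sqrt(lo * hi)
    saturated = np.array([truncated_poisson_stats(mu, m)[1] for m in caps])
    return (1.0 - saturated).mean()      # probability a random buyer can still afford the good

rng = np.random.default_rng(2)
beta, n_agents, pi = 1.5, 2000, 0.4
wealth = (1.0 - rng.random(n_agents)) ** (-1.0 / beta)
n_goods = int(pi * wealth.sum())
print("predicted liquidity:", solve_liquidity(wealth, price=1.0, n_goods=n_goods))
```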
inequality and its consequences are the subject of intense recent debate . using a simplified model of the economy , we address the relation between inequality and liquidity , the latter understood as the frequency of economic exchanges . assuming a pareto distribution of wealth for the agents , that is consistent with empirical findings , we find an inverse relation between wealth inequality and overall liquidity . we show that an increase in the inequality of wealth results in an even sharper concentration of the liquid financial resources . this leads to a congestion of the flow of goods and the arrest of the economy when the pareto exponent reaches one .
after extracting the names of proteins occurring in a particular life stage of _ caenorhabditis elegans _ , experimentally verified protein - protein interactions are collected from various bioinformatics repositories .next , six different ppi networks are constructed by treating each life stage as a layer of the multilayer ppi network ( fig .[ fig1 ] ) . for each layer , all the proteins are enlisted and these proteins are occurring for their functional or structural activities in that life stage .the proteins are the nodes and connections are assigned if a pair of proteins and has an interaction between them .thus , six different sub - networks for multilayer ppi network are obtained .the adjacency matrix of each layer of the multilayer network is denoted as and elements are defined as , all the adjacency matrices are symmetric ( _ i.e. , _ = ) where .the most basic structural parameter of a network is the degree of a node ( ) , which is defined as a number of edges connected to the node ( ) .the degree distribution , , is calculated which is the probability that a randomly chosen node has connections .the second parameter , the clustering coefficient , is the ratio of the number of interactions a neighbor of particular node is having and the possible number of connections the neighbors can have among themselves .further , the network diameter ( ) is defined as the longest of the shortest paths between all the pair of nodes in a network .another property of the network which turns out to be crucial in distinguishing the individual layer of the multilayer ppi network is the pearson degree - degree correlation ( ) , which measures the tendency of nodes with the similar numbers of edges to connect .it can be defined as , - [ \frac{1}{m } \sum_i \frac{1}{2}(j_i + k_i)^2 ] } { [ \frac{1}{m } \sum_{i } \frac{1}{2 } ( j_i^2 + k_i^2 ) ] - [ \frac{1}{m } \sum_i \frac{1}{2}(j_i + k_i)^2 ] } \label{assortativity}\ ] ] where and are the degrees of the nodes connected through the edge , and is the number of edges in the network .the value of being zero corresponds to a random network where as the negative(positive ) values correspond to dis(assortative ) networks .further , correlation between link betweenness centrality and overlap of the neighborhood of two connected nodes , is calculated .link betweenness centrality ( ) is defined for an undirected link as , where is the number of shortest paths between and that contain , and is the total number of shortest paths between and .the overlap of the neighborhood ( ) of two connected nodes and is defined as , where is the number of neighbors common to both nodes and . here and represent the degree of the and nodes .further , pearson correlation coefficient ( ) of and can be defined as , in particular , negative value of coefficient suggests the importance of weak ties , a concept borrowed from social sciences into the network analysis .the eigenvalues of the adjacency matrix are denoted by , i = such that . the duplicated nodes in a network can be identified from corresponding adjacency matrix in the following manner . when ( i ) two rows ( columns ) have exactly same entries , it is termed as the complete row ( column ) duplication_ i.e. 
_ , ( ii ) when a combination of rows ( columns ) have exactly same entries as another combination of rows ( columns ) then it is termed as the partial duplication of rows ( columns ) , for example .satisfying any one of the conditions ( i ) and ( ii ) lowers the rank of the matrix exactly by one .in addition , the rank is also lowered if ( iii ) there is an isolated node .all these conditions lead to the zero eigenvalues in the matrix spectra .since there is no isolated node in the network , ensured in the beginning itself by considering only the largest connected cluster for this analysis , conditions ( i ) and ( ii ) are the only conditions responsible for occurrence of the zero degeneracy .further , the von neumann entropy of the graph is calculated as , $ ] where is the combinatorial laplacian matrix and d is the diagonal matrix of the degrees re - scaled by .formally , has all the properties of a density matrix _ i.e. , _ it is positive semi - definite and .therefore , can be written as , the maximum entropy , a network of size can achieve , is .further , the network properties are compared between ppi layers and corresponding erds r ' enyi ( er ) random networks .this allows to estimate the probability that a random network with certain constraints has of belonging to a particular architecture , and thus assess the relative importance of different network architecture and help discern the mechanisms responsible for given real - world networks .c c c c c c c c c c c layer & & & & & & & & & & + blastula & 2876 & 22880 & 12 & 16 & 0.24 & 0.24 & 812(28.2 ) & 716(24.9 ) & -0.39 & 9.68 + gastrula & 2848 & 22802 & 12 & 16 & 0.25 & 0.24 & 791(27.7 ) & 692(24.3 ) & -0.39 & 9.73 + embryo & 3568 & 25741 & 12 & 14 & 0.23 & 0.27 & 1144(32.1 ) & 1025(28.7 ) & -0.37 & 9.18 + nematode & 4755 & 35708 & 12 & 15 & 0.33 & 0.16 & 2087(43.9 ) & 1928(40.5 ) & -0.30 & 9.99 + primeadult & 3112 & 24126 & 12 & 16 & 0.24 & 0.26 & 926(29.8 ) & 833(26.8 ) & -0.38 & 9.59 + lifecycle & 3415 & 25255 & 12 & 15 & 0.23 & 0.27 & 1057(30.9 ) & 937(27.4 ) & -0.36 & 9.33 + [ table1 ] first , the structural properties of ppi networks are analyzed for six developmental stages .the average degree , which gives a measure of the average connectivity of individual network , remains same for all the layers of the multilayer ppi network ( table [ table1 ] ) .this indicates that though there are differences in the number of nodes participating in each layer as well in the connections _i.e. , _ the average connectivity is conserved across all life stages .further , the network diameter indicates how much far are the two most distant nodes in a network .the diameter being small for all the layers suggests that all the nodes are in proximity and the graph is compact .the diameter , and thus the compactness , of a ppi network , can be interpreted as the overall easiness of the proteins to communicate or influence their reciprocal function .further , an intriguing observation of degree distribution is found which follows two distinct fitting scales in all the layers _i.e. 
, _ two power law in each layer ( fig .[ fig2 ] ) .many network studies have reported an absence of the perfect power law for the overall range of the degree .various real systems show power law in the central part of data only and deviation from it in the small or the large scale .furthermore , the value of first power law exponent is lower than the second one in all the layers .several models have been used to explain the origin of two power laws found in many systems .the models include , the geometric brownian motion model , the preferential attachment model and the generalized model of the creation of new links between old nodes which increases with evolution time .these models suggest that the evolution of a network is characterized by two parts _ i.e. , _( i ) a leading ingredient of a network and ( ii ) fluctuations within existing connections between nodes , being one of the reasons to lead the double power law nature of degree distribution . here, the power law nature of ppi layer implicates that robustness of a network is maintained not only by acquisition of new interactions by hub proteins but also by contribution of new or altered interactions within existing proteins for ease of pathway processes which might have arisen due to the presence of internal physiological and external environmental factors during the development of an organism . ''refers to the exponent of power law.,width=453,height=264 ] what follows that the individual network exhibits overall similar statistics for widely investigated structural properties _i.e. , _ smaller diameter , and larger average clustering coefficient than the corresponding random networks as well as existence of two power law but the crucial differences among them , are revealed through the analysis of the degree - degree correlation and spectral properties .all the network layers show overall positive degree - degree correlation ( table [ table1 ] ) . 
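The degree - degree correlation of eq. ( [ assortativity ] ) can be evaluated directly from an edge list; the sketch below does this with plain numpy and cross-checks against the estimator built into networkx. The small scale-free graph is only a placeholder for an actual PPI layer.

```python
import numpy as np
import networkx as nx

def degree_correlation(edges):
    """Pearson degree-degree correlation over edges, as in eq. (assortativity)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    j = np.array([deg[u] for u, v in edges], dtype=float)
    k = np.array([deg[v] for u, v in edges], dtype=float)
    sq_mean = np.mean(0.5 * (j + k)) ** 2
    num = np.mean(j * k) - sq_mean
    den = np.mean(0.5 * (j ** 2 + k ** 2)) - sq_mean
    return num / den

G = nx.barabasi_albert_graph(500, 3, seed=0)    # placeholder for one PPI layer
print(degree_correlation(list(G.edges())))
print(nx.degree_assortativity_coefficient(G))   # networkx's own estimator, for comparison
```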
though most of the biological networks exhibit disassortative nature , a positive degree - degree correlation is observed in many other biological networks .it is reported that assortative networks are strongly clustered and can have functional modules .a high value of here suggests the presence of functional modules as functional areas of ppi network , but there is lack of clear evidence to prove whether a functional area forms a functional module .the corresponding random networks have value close to zero ( table [ table2 ] ) .this is not surprising as the networks with the same average degree and size may still differ significantly in various network features since the nodes are randomly connected and the value of assortativity coefficient of a network is determined by degrees of interacting nodes .the assortativity observed in ppi networks implicates that overall interaction patterns of nodes having similar degree is conserved in all the layers .next , blastula and gastrula ppi layer have same value which is not surprising as a large number of proteins and , hence their interactions are common in these two layers .further , though all ppi layers show value close to each other , the value of nematode is lower than other ppi layers as well as the difference in the value from that of other layers is very high .it suggests that nematode ppi layer is not as assortative as other layers which implicates in the presence of less structural modules in nematode as of other layers .further , values close to zero of the random networks suggest that the network with random interactions tend to have less values .all these suggest that nematode ppi layer is more random than the other layers .c c c c c c c | c network & & & & & & & + blastula & 0.005 & 5 & 0 & 0 & -0.1 & 11.43 .02 & 11.49 + gastrula & 0.005 & 5 & 0 & 0 & -0.1 & 11.39 .04 & 11.48 + embryo & 0.005 & 4 & 0 & 0 & -0.1 & 11.39 .04 & 11.80 + nematode & 0.006 & 5 & 0 & 0 & -0.11 & 12.12 .04 & 12.21 + prime adult & 0.005 & 5 & 0 & 0 & -0.09 .04 & 11.34 .04 & 11.60 + life cycle & 0.006 & 5 & 0 & 0 & -0.13 & 11.81 .04 & 11.74 + [ table2 ] to get deeper insights into the organization of connections in ppi networks , these networks are further analyzed with the weak ties hypothesis . here, the links having low overlap in their end nodes are termed as the weak ties and links having high link betweeness centrality are the ones known to be stronger as they help in connecting in different modules .all the network layers exhibit the negative value of correlation coefficient ( table [ table1 ] ) which suggests the presence of weak ties in ppi network .it is suggested that the complete architecture of ppi network is composed of different biological pathways and metabolic cycles , and a protein involved in particular pathway plays role in regulating other pathways as well , termed as cross talk between pathways .the corresponding random networks exhibit negative value ( table [ table2 ] ) , but ppi layers have more negative value than the corresponding random networks .it implicates that the network with random interactions tends to have more value .therefore , the highest value in nematode suggests that nematode ppi layer is more random than the other layers .nevertheless , understanding of the evolution of embryogenesis in _ c. 
elegans_ is still fragmentary , more randomness shown by the less value of assortativity coefficient , and high value of coefficient than other layers may suggest that the presence of more randomness in this layer .it is reported that the processes of cellular diversification plays vital role in the larval nematode development , which may be the reason for more randomness in this network layer .so far the study is focused on various structural aspects of all the layers which have demonstrated distinguishable structural features of the nematode from other layers .further , the spectra of these networks are analyzed since spectra is known to be fingerprint of corresponding network .the network spectra not only provide insight to functional modules and randomness in the connection architecture , but also relate with the dynamical behavior of the system as a whole .it is observed that the spectra of all the ppi layers have high degeneracy at the zero eigenvalue .the occurrence of degeneracy at zero eigenvalue for ppi layers is not surprising here as many biological and technological networks are known to exhibit high degeneracy at zero eigenvalue .interestingly , the number of zero eigenvalue has direct relation with complete and partial duplication of nodes as discussed in material and methods section .the number of complete duplicates which contributes to the same number of zero eigenvalue ( ) , is listed in table [ table1 ] .the appearance of duplicate nodes in biological networks have been emphasized to be arising due to the gene duplication process as a consequence of evolution .the corresponding random networks do not exhibit degeneracy at zero eigenvalue , indicating that this count of duplicate nodes in ppi layers reside not only in the sheer number of proteins and interactions taking part in particular layer , but also in how individual ppi layer is evolved or designed to fulfill the cellular functions .an important observation is that despite overall similar spectral properties _i.e. _ degeneracy at zero eigenvalue as well as shape of eigenvalue distribution , the height of the peak at zero eigenvalue differs in all the layers .since size of the networks differ at different life stages , in order to take care of the impact of size on the occurrence of the zero degeneracy , the count of zero eigenvalues are normalized by dividing with and find that these normalized values also differ in ppi layers ( table [ table1 ] ) .what is important here is that the genome of an organism remains same in all the life stages , still there is occurrence of different count of duplicate nodes in ppi layers . 
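Both spectral quantities used here are easy to compute for a given layer: the degeneracy at the zero eigenvalue of the adjacency matrix (equal, by the argument above, to the number of complete and partial duplications once isolated nodes are excluded) and the von Neumann entropy. The sketch below assumes the density matrix is the combinatorial Laplacian rescaled by its trace (twice the number of edges), a standard convention; the example graph, the numerical tolerance and the natural-logarithm choice are illustrative.

```python
import numpy as np
import networkx as nx

def spectral_summary(G, tol=1e-8):
    A = nx.to_numpy_array(G)
    n_zero = int(np.sum(np.abs(np.linalg.eigvalsh(A)) < tol))   # degeneracy at zero

    L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
    rho = L / np.trace(L)                    # density matrix; trace(L) = 2 * number of edges
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > tol]
    entropy = float(-(lam * np.log(lam)).sum())   # S = -sum_i lambda_i ln(lambda_i)
    return n_zero, entropy

G = nx.erdos_renyi_graph(300, 0.02, seed=1)                       # placeholder for a PPI layer
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # keep the largest connected cluster
n_zero, S = spectral_summary(G)
print(n_zero, S, np.log(len(G) - 1))   # zero modes, entropy, and the ln(n-1) bound
```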
in ppi network ,gene duplication is understood as duplication of protein since the duplicated protein is the expressed product of the duplicated gene as well as it is the identical copy of the parent one .the duplicated protein initially shares common function as of the parent protein which results in the same interaction partners , later it functionally diversifies to acquire different interaction partners .since every protein contributes to the specific physiological and developmental process , and different physiological and developmental processes in each life stage would require different set of proteins , may result in different count of duplicate nodes .taken together , it suggests that there may be the role of specific biological responses at each developmental stage in the process of gene duplication .further , nematode ppi layer exhibits more number of zero eigenvalues than other layers .it is reported that nematode is a crucial stage for cellular diversification and organogenesis , as well as there are developmental transitions during continuous and interrupted larval nematode development , which might result in more duplicates in nematode than other life stages .furthermore , the von neumann entropy of all the layers are calculated and observed that there are different values in each ppi layer ( table [ table1 ] ) . to get more insights on this , entropy of each ppi layeris compared with the maximum entropy of that layer , also with of the corresponding random network .firstly , ppi layers as well as random networks display lesser entropy than corresponding maximum entropy which is quite intuitive , since any network with size can have maximum entropy .secondly , corresponding random networks display higher entropy than ppi layers .the networks with random interactions among nodes tend to show higher entropy .the comparison of entropy of each ppi layer with the maximum entropy as well as with the corresponding random network indicates presence of varying complexity in each ppi layer .it recites the similar notion of varying complexity present in each layer which is deduced earlier by structural features .it is potentially important since it may be in consequence with the specific developmental and evolutionary stimuli associated with each life stage .further , nematode has the highest value which suggests the presence of more complexity in this layer than all other ppi layers . taken together , more complexity in nematode than all other ppi layers and the least value and the highest suggest in more randomness in this ppi layer .as it is discussed earlier , the contribution of specific developmental factors at larval nematode development may result here in more complexity in ppi layer of nematode life stage than all other ppi layers .the proteome analysis of each layer of _ c. 
elegans _ multilayer ppi network exhibits the overall similarity in structural features , such as a smaller diameter and a larger average clustering coefficient than the corresponding random networks , as also observed for other biological networks . the degree distribution following a power law in each of the ppi layers is indicative of the robustness of the underlying system . although the widely studied structural properties exhibit similar statistics , the crucial differences between the network layers are revealed through the analysis of the degree - degree correlation and spectral properties , which also turn out to be of potential importance in understanding the varying complexity in each layer . the values of , and coefficients of each layer behave in a similar manner and are found to be comparable across the ppi layers of the underlying system . interestingly , the layer of the nematode life stage exhibits notable distinguishing properties compared to the other layers , which overall indicates that this layer is the most random among all the layers . further , each ppi layer exhibits a different degeneracy at the zero eigenvalue , which is related to node duplication , suggesting the role of specific biological responses at each developmental stage in the process of gene duplication . to summarize , an extent of varying complexity is observed in the organization of the ppi networks of the individual layers of the multilayer ppi network . this reflects the fact that biological complexity arises at several levels in the development of _ c. elegans _ , from the single - cell embryo to the multicellular , completely developed organism , where each life stage is associated with different physiological and molecular changes . the varying complexity observed in the life stages can further be used to understand and capture important developmental changes in an organism .
sj is grateful to department of science and technology ( dst ) , government of india and council of scientific and industrial research ( csir ) , government of india project grants emr/2014/000368 and 25(0205)/12/emr - ii for financial support , respectively . ps acknowledges dst for the inspire fellowship ( if150200 ) as well as the complex systems lab members for timely help and useful discussions .
molecular networks act as the backbone of cellular activities , providing an excellent opportunity to understand the developmental changes in an organism . while network data usually constitute only stationary network graphs , constructing multilayer ppi network may provide clues to the particular developmental role at each stage of life and may unravel the importance of these developmental changes . the developmental biology model of _ caenorhabditis elegans _ analyzed here provides a ripe platform to understand the patterns of evolution during life stages of an organism . in the present study , the widely studied network properties exhibit overall similar statistics for all the ppi layers . further , the analysis of the degree - degree correlation and spectral properties not only reveals crucial differences in each ppi layer but also indicates the presence of the varying complexity among them . the ppi layer of nematode life stage exhibits various network properties different to rest of the ppi layers , indicating the specific role of cellular diversity and developmental transitions at this stage . the framework presented here provides a direction to explore and understand developmental changes occurring in different life stages of an organism . * a multilayer ppi network analysis of different life stages in c. elegans * + pramod shinde , and sarika jalan + _ centre for biosciences and biomedical engineering , indian institute of technology indore , simrol , indore 452020 , india + _ complex systems lab , discipline of physics , indian institute of technology indore , simrol , indore 452020 , india + e - mail : pramodshinde119.com , .com _ _ recent developments in the quantitative analysis of complex networks have rapidly been translated to studies of different biological network organizations . developmental biology is the study of the molecular and cellular events that lead to the generation of a multicellular organism from a fertilized egg . although much is known about the morphological changes that take place during the development , there is a lesser understanding of the mechanisms by which these changes occur . due to lack of this knowledge , and because of the interest in understanding how something as complex as a living organism can develop from a single cell , developmental biology is one of the most active areas of biological research today . the intimidating complexity of cellular systems appears to be a major hurdle in understanding internal organization of molecular pathways and their development in large scale evolutionary biological networks . for instance , during the development phase of _ caenorhabditis elegans ( c. elegans ) _ from undeveloped embryo to completely developed nematode , it undergoes multiple physiological and physiochemical changes . many key discoveries , both in basic biology and medically relevant areas , were first made in the worm . since its introduction , _ c. elegans _ has been used to study a much larger variety of biological processes . together , these studies revealed a surprisingly strong conservation in molecular and cellular pathways between worms and mammals . indeed , subsequent comparison of the human and _ c. elegans _ genomes confirmed that the majority of human disease genes and disease pathways are present in _ c. elegans _ . 
here , the global architecture of protein - protein interaction ( ppi ) network of each life stage of this model system is considered and focus is given to understand the behavior of individual layer of this multilayer network . it is worth noticing that layers of a multilayer network can be tackled from different perspectives and might in principle be used to understand the developmental biological changes in different life stages of this organism . recently , following information from theoretical and statistical mechanics paradigms , several structural and spectral measures for randomness and complexity have been proposed for social , technological and biological networks . these measures have been shown to be extremely successful in quantifying the level of organization encoded in structural features of networks . the measures like degree - degree correlation and von neumann entropy allow us to capture differences and similarities between networks , which furthers our understanding of the information encoded in complex networks . this complexity resides not only in the sheer number of proteins and interactions taking part in particular layer of multilayer network , but also in how individual layer is evolved or designed to fulfill the cellular functions . to understand this , an extend of varying randomness and complexity is deduced using degree - degree correlation and spectral properties of each layer of the multilayer ppi network . further , the early nematode developmental stage is more complex among all the life stages of _ c. elegans_. this analysis provides a direction to understand and capture important developmental changes in an organism .
non - linear iterated maps are now known as a universal tool in numerous scientific domains , including for instance mechanics , hydrodynamics and economics . they often appear because the differential equations describing the dynamics of a system can be reduced to non - linear iterations , with the help of poincaré recurrence maps for instance . the resulting iterations combine a great mathematical simplicity , which makes them convenient for numerical simulations , with a large variety of interesting behaviors , providing generic information on the properties of the system . in particular , they are essential to characterize one of the routes to chaos , the cascade of period doublings . in musical acoustics , mcintyre _ et al . _ have given , in a celebrated article , a general frame for calculating the oscillations of musical instruments , based upon the coupling of a linear resonator and a non - linear excitator ( for reed instruments , the flow generated by a supply pressure in the mouth and modulated by a reed ) . in an appendix of their article they show that , within simplified models of self - sustained instruments , the equations of evolution can also be reduced to an iterated map with appropriate non - linear functions . for resonators with a simple shape such as a uniform string or a cylindrical tube , the basic idea is to choose variables that are amplitudes of the incoming and outgoing waves ( travelling waves ) , instead of the usual acoustic pressure and volume velocity in the case of reed instruments . if the inertia of the reed is ignored ( a good approximation in many cases ) , and if the losses in the resonator are independent of frequency , the model leads to simple iterations ; the normal oscillations correspond to the so - called helmholtz motion , a regime in which the various physical quantities vary in time by steps , as in square signals . square signals obviously are a poor approximation of actual musical signals , but this approach is sufficient when the main purpose is to study regimes of oscillation , not tone - color . in the case of clarinet - like systems , the idea was then expanded , giving rise to experimental observations of period doubling scenarios and to considerations on the relations between the stability of the regimes and the properties of the second iterate of the non - linear function ; see also and especially for a review of the properties of iterations in clarinet - like systems and a discussion of the various regimes ( see also ) . more recent work includes the study of oscillation regimes obtained in experiments , computer simulation as well as theory . the general form of the iteration function that is relevant for reed musical instruments is presented in section [ iteration ] . it is significantly different from the usual iteration parabola ( i.e.
the so - called logistic map ) .moreover , it will be discussed in more detail that the control parameters act in a rather specific way , translating the curve along an axis at rather than acting as an adjustable gain .the purpose of the present article is to study the iterative properties of functions having this type of behavior , and their effect on the oscillation regimes of reed musical instruments .we will study the specificities and the role of the higher order iterates of this class of functions , in particular in the regions of the so called `` periodicity windows '' , which take place beyond the threshold of chaos .these windows are known to contain interesting phenomena , for instance period tripling or a route to intermittence , which to our knowledge have not yet been studied in the context of reed musical instruments .moreover , the iterates give a direct representation of the zones of stability of the different regimes ( period doublings for instance ) , directly visible on the slope of the corresponding iterate . for numerical calculations , it is necessary to select a particular representation of the non - linear function , which in turn requires to choose a mathematical expression of the function giving the volume flow rate as a function of the pressure difference across the reed .a simple and realistic model of the quasi - static flow rate entering a clarinet mouthpiece was proposed in 1974 by wilson and beavers , and discussed in more detail in 1990 by hirschberg _this model provides a good agreement with experiments and leads to realistic predictions concerning the oscillations of a clarinet . using this mathematical representation of the flow rate, we will see that iterations lead to a variety of interesting phenomena .our purpose here is not to propose the most elaborate possible model of the clarinet , including all physical effects that may occur in real instruments .it is rather to present general ideas and mathematical solutions as illustration of the various class of phenomena that can take place , within the simplest possible formalism ; in a second step , one can always take this simple model as a starting point , to which perturbative corrections are subsequently added in order to include more specific details .we first introduce the model in [ model ] , and then discuss the properties of the iteration function in [ properties].the bifurcations curves are obtained in [ bifurcations ] and , in [ iterated ] , we discuss the iterated functions and their applications in terms of period tripling and intermittence .in particular we see how the graph of high order iterates give visible information on the regime of oscillation ( number of period doublings for instance ) or the appearance of a chaotic regime , while nothing special appears directly in the graph of the first iterate .two appendices are added at the end .we briefly recall the basic elements of the model , the non - linear characteristics of the excitator , and the origin of the iterations within a simplified treatment of the resonator . in a quasi static regime, the flow entering the resonant cavity is modelled with the help of an approximation of the bernoulli equation , as discussed e.g. 
in .we note the acoustic pressure inside the mouthpiece , assumed to be equal to the one at the output of the reed channel , the pressure inside the mouth of the player ; for small values of the difference : the reed remains close to its equilibrium position , and the conservation of energy implies that is proportional to , where is the sign of ( we ignore dissipative effects at the scale of the flow across the reed channel ) ; for larger values of this difference , the reed moves and , when the difference reaches the closure pressure , it completely blocks the flow .these two effects are included by assuming that if the flow is proportional to ] ; ] ) , ( see e.g. ) . ] and radiation occurs but , since losses remain a relatively small correction in musical instruments , using eq .( [ 4c ] ) is sufficient for our purposes .we now assume that all acoustical variables vanish until time , and then that the excitation pressure in the mouth suddenly takes a new constant value ; this corresponds to a heaviside step function for the control parameter . between time and time , according to ( [ 4c ] ) , the incoming amplitude remains zero , but the outgoing amplitude has to jump to value in order to fulfil eqs.([1 ] to [ 1ter ] ) . at time , the variable jumps to value , which immediately makes jump to a new value , in order to still fulfil eqs.([1 ] to [ 1ter]).this remains true until time , when jumps to value and to a value , etc . by recurrence ,one obtains a regime where all physical quantities remain constant in time intervals , in particular for the pressure and for the flow , with the recurrence relation : in what follows , it will be convenient to use as a natural time unit .we will then simply call time the time interval .notice that in order to get higher regimes ( with e.g. triple frequency ) , the previous choice of transient for needs to be modified ( see e.g. ) .now , by combining eqs.([1 ] to [ 1ter ] ) and [ 4b-1 ] ) , one can obtain a non - linear relation between and : which , combined with ( [ recurrence ] ) , provides the relation : with , by definition : the equation of evolution of the system are then equivalent to a simple mapping problem with an iteration function .the graph of this function is obtained by rotating the non - linear characteristics of fig .[ p - u - relation ] by ( in order to obtain ) , then applying a symmetry ( to include the change of sign of the variable ) and finally a horizontal rescaling by a factor ; the result is shown in fig . [ fonction - iteree].this provides a direct and convenient graphical construction of the evolution of the system ; fig .[ iteration ] shows how a characteristic point is transformed into its next iterate etc ... by the usual construction , at the intersection of a straight line with the iteration curve , i.e. by transferring the value of to the axis and reading the value of the function at this abscissa in order to obtain ] even if , obviously , very large values of the variables are not physically plausible . nevertheless , analyzing the different cases corresponding to eqs.([1 ] to [ 1ter ] ), one can show that the function has a maximum obtained for : \label{xmax}\ ] ] with value : where is defined by : \text { . }\label{5b}\ ] ] it can be shown that this maximum is unique for large value of ( ; for smaller values , a second maximum exists at a very large negative values of , i.e. for very large negative flow , but we will see below that such values of the flow can not be obtained after a few iterations. 
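The recurrence described above is easy to explore numerically without the closed-form iteration function derived in the appendix. The sketch below is an illustrative stand-in: it assumes the common dimensionless writing of the reed characteristic (pressures in units of the closure pressure, a blowing-pressure parameter `gamma`, an embouchure parameter `zeta`, and a loss parameter `lam` for the round trip), and it solves the implicit relation between incoming and outgoing waves with a bracketed root finder rather than with the analytic cubic solution.

```python
import numpy as np
from scipy.optimize import brentq

def reed_flow(p, gamma, zeta):
    """Quasi-static dimensionless flow u = F(p) through the reed channel.

    p and gamma are in units of the closure pressure; zeta is the embouchure
    parameter.  The reed blocks the flow when the pressure difference reaches
    the closure pressure (gamma - p >= 1).  Sign conventions may differ from
    the article; this is a common textbook form.
    """
    dp = gamma - p                       # pressure difference across the reed
    if dp >= 1.0:                        # beating reed: channel closed
        return 0.0
    return zeta * (1.0 - dp) * np.sign(dp) * np.sqrt(abs(dp))

def iterate_outgoing(x_prev, gamma, zeta, lam):
    """One step of the map on the outgoing-wave amplitude x.

    The incoming wave is y = -lam * x_prev (losses lumped into lam).  The new
    mouthpiece pressure p solves F(p) = p - 2*y, i.e. u = F(p) together with
    p = x + y and u = x - y; the new outgoing wave is then x = p - y.
    """
    y = -lam * x_prev
    g = lambda p: reed_flow(p, gamma, zeta) - (p - 2.0 * y)
    p = brentq(g, -5.0, 5.0)             # wide bracket; single root for this zeta
    return p - y

if __name__ == "__main__":
    gamma, zeta, lam = 0.42, 0.8, 0.95   # illustrative values only
    x = 0.0                              # all fields at rest before the pressure step
    for n in range(20):
        x = iterate_outgoing(x, gamma, zeta, lam)
        print(f"step {n:2d}: x = {x:+.4f}")
```

For the moderate value of ζ used here the implicit relation has a single root, so the bracketed solver is safe; sweeping γ with this helper reproduces qualitatively the static, two-state, period-doubled and chaotic behaviours discussed in the following sections, even though it is not the article's exact iteration function.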
therefore we focus our attention only on the maximum , which varies slowly as a function of because increases monotonically from 0 for to a small value ( for ) .the geometrical construction of fig . [ iteration ] shows that , after a single iteration , the characteristic point m necessarily falls at an abscissa .let us call the ordinate of the point on the iteration function with abscissa .the two vertical lines and , together with the two horizontal lines and , define a square in the plane , from which an iteration can not escape as soon as the iteration point has fallen inside it . , which means that the iteration curve crosses the left side of the square , as is the case in fig [ iteration ] . ]conversely , since every characteristic point has at least two antecedents , the iteration can bring a point that was outside the square to inside .in other words , the square determines a part of the curve which is invariant by action of the function . for usual initial conditions , such as ,the starting point already lies within the square , so that all points of the iteration keep this property .we have checked that , even if one starts with very large and unphysical pressure differences ( positive or negative ) , the iterations rapidly converge to the inside the square . in what follows ,we call it the iteration square .the net result is that , if we do not consider transients , we can consider that the function defines an application of the interval ] , with neither contact with the mouthpiece nor negative flow , as one could expect physically . [ [ schwarzian - derivative ] ] schwarzian derivative the schwarzian derivative of is equal to : ^{2}\text{,}\ ] ] where , and indicate the first , second and third derivatives of , respectively . if , it is zero ; if , using the change of variables given in appendix [ app1 ] , can be shown to be equal to : , \ ] ] where is a function of - see eqs .( [ a1 ] ) to ( [ a3 ] ) .therefore its sign does not depend on the loss parameter .after some calculations , the schwarzian derivative is found to be negative for all ] ; therefore the extrema of are at either the same abscissa or the same ordinate as those of ; * more generally , for , if , then , and it is at a maximum ( its first derivative vanishes and the second one is negative ) , and if , then , and it is at a minimum ( its first derivative vanishes and the second one is positive ) ; * the kink of the first iterate ( beating limit point ) is also visible on the iterates ; * a well known property of the schwarzian derivative is as follows : if the schwarzian derivative of is negative , the schwarzian derivatives of all iterates are negative as well . and , of order 1 , 2 , 4 , 8 and 16 .the convergence to the 2-state regime is visible.,width=377 ] figure [ iteree_16 ] shows the higher order iterates ( of order 4 , 8 and 16 ) in the same conditions as figure [ iteree_gamma=042 ] .we observe that the iterates become increasingly close together when their order increases , with smaller and smaller slopes at the fixed points corresponding to the 2-state regime .moreover , they resemble more and more a square function , constant in various domains of the variable .this was expected : in the limit of very large orders , whatever the variable is ( i.e. 
whatever the initial conditions of the iteration are ) one reaches a regime where only two values of the outgoing wave amplitude are possible ; these values then remain stable , meaning that the action of more iterations will not change them anymore .so , one can read directly that the limit cycle is a 2-state on the shape of , which has two values ; it would for instance have 4 in the limit cycle was a 4-state regime for these values of the parameters .for the clarity of the figure , we have shown only iterates with orders that are powers of , but it is of course easy to plot all iterates . for a 2-state regime ,even orders are sufficient to understand the essence of the phenomenon , since odd order iterates merely exchange the two fixed points and . in table 1, the existence of two different stable regimes for the same value of the parameters signals an inverse bifurcation ; figure [ iteree_16b ] shows an example of such a situation . for , both the static and 2-state regimes are then stable , depending on the initial conditions . for the static regime ,the curve coincides with the second diagonal , a case in which the fixed point is presumably stable ( the stability becomes intuitive when one notices that the tangents of the higher order curves lie within the angle of the two diagonals ) . for the 2-state regime, the state of positive pressure value corresponds to a beating reed . and , of order 1 , 2 , 3 , 4 , and 8 .the curves of and are almost perfectly superimposed . around ,the convergence to the static regime appears to be very slow . on the contrarythe convergence to the 2-state regime is rapid ., width=377 ] and , of order 1 , 2 , 4 , 8 and 16 ., width=377 ] finally fig.[iteree_16c ] shows another case of existence of two different regimes for the same value of the parameters .a 2-state regime can occur , as well as a 4-state regimes can occur .it appears that the second one is more probable than the first one , when initial conditions are varied .we now investigate some regimes occurring in a narrow range of excitation parameter .\(i ) we first examine a chaotic regime occurring just before a 6-state regime ( period tripling ) and the transition between the two regimes .figure [ iteree_12a ] shows the iterated functions of order 1 , 2 , 6 , and 12 .the 6th iterated function crosses the first diagonal at the same points than the first and the second iterates only , which means that no 6-state regime is expected .by contrast , the 12th iterate cuts the diagonal at more points , but with a very high slope , indicating that the corresponding fixed points can not be stable .this , combined with the fact that no convergence to a square function ( constant by domains ) , such as in figure [ iteree_16 ] , suggests an aperiodic behavior ; the time dependent signal shown in fig.[figtransit_04445 ] looks indeed chaotic ( nevertheless the flow always remains positive ) .the periodic / chaotic character of the signal can be distinguished by examining the time series , but a complementary method is the computation of an fft . for the signal of fig.[figtransit_04445 ] , the spectrum is more regular than the spectrum of a 6-state periodic regime .nevertheless the frequencies of the latter ( the `` normal '' frequency of the 2-state regime with the frequencies and ) remains visible in the spectrum of the first one , as it is often the case for signals corresponding to very close values of the parameter .a consequence is that these frequencies clearly appear when listening the sound . 
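The spectral check mentioned above is straightforward to implement with a discrete Fourier transform of the steady part of the series. The helper below is a minimal sketch: it accepts any 1-D array of successive pressures (for instance the values of p produced by the iteration sketch given earlier), discards a transient, and lists the strongest spectral lines; a periodic regime gives a few sharp lines while a chaotic one spreads energy over a broad band. The demonstration arrays are synthetic stand-ins, not signals from the model.

```python
import numpy as np

def dominant_lines(p_series, n_transient=200, n_lines=5):
    """Strongest spectral lines of a steady-state pressure series.

    p_series    : 1-D array of successive mouthpiece pressures p_n
    n_transient : number of initial samples discarded as transient
    n_lines     : number of peaks to report
    Frequencies are in cycles per iteration step (one round trip in the resonator).
    """
    p = np.asarray(p_series, dtype=float)[n_transient:]
    p = p - p.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(p.size, d=1.0)    # one sample per iteration step
    order = np.argsort(spectrum)[::-1][:n_lines]
    return sorted(zip(freqs[order], spectrum[order]))

if __name__ == "__main__":
    # Illustrative inputs: an ideal 2-state (square) signal and an irregular one
    square = np.tile([0.4, -0.4], 600)
    noisy = np.random.default_rng(0).uniform(-0.4, 0.4, 1200)
    for name, sig in [("2-state", square), ("irregular", noisy)]:
        print(name, [(round(f, 3), round(a, 1)) for f, a in dominant_lines(sig)])
```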
and , of order 1 , 2 , 6 and 12 .a convergence to an aperiodic regime is visible .the arrow indicates a region where is very close to the first diagonal , but does not yet cross it.,width=377 ] for , , ; the upper part shows the the pressure , the lowest part the values of the flow .the regime looks chaotic.,width=377 ] figure [ iteree_12b ] is similar to figure [ iteree_12a ] , but with a slightly larger value of ( instead of ) . in the region indicated by the arrow ,one notices that the 6th iterated function now cuts the first diagonal .they are 12 points of intersection ( plus 1 common point with the first iterate as well as two common points with the second iterate , all unstable ) ; the slope of the tangent shows that 6 of them are stable , so that one obtains a 6-state , periodic , regime .the variations of higher order iterates , e.g. , remain very fast ; the convergence to the limit cycle is then much slower than for fig .[ iteree_16 ] , except if the initial point is close to a limit point ( e.g. that shown by an arrow : it turns out that the 12th iterated function is very close to the 6th one ) . as a consequence ,the initial transient to the 6-state regime can be rather chaotic , as shown in fig .[ figtransit_04469 ] , but convergence to a periodic regime does occur later .this existence of periodic regimes above the threshold for chaos is called periodicity windows , which appears as a narrow whiter region in fig .[ fig - bif ] . a difference with the usual -state regimes ( when is below the chaotic range ) , for instance corresponding to fig .[ iteree_gamma=042 ] , is that one obtains intersections with the diagonal , stable or unstable ; by contrast , for the 6-state regime , they are 6 stable and 6 unstable points . and , of order 1 , 2 , 6 and 12 . a convergence to a 6-state regime is observed .the arrow indicates a region where cuts the first diagonal.,width=377 ] for , , the regime is periodic ( 6-state).,width=377 ] \(ii ) we now examine the transition between a 6-state regime and a 4-state regime through chaotic regimes or intermittency regimes . for , a 6-state regimeis obtained .[ iteree_6a ] shows the iterates of order 1 , 2 , 4 and 6 .the 4th and 6th iterates have common intersections with the first and second iterates , since both 4 and 6 are multiples of 2 .the 6th iterate intersects the first diagonal at 12 other points , while the 4th cuts the diagonal at 4 points only .these 4 points are unstable , thus no 4-state regime can exist . on the contrary , for the 6th iterate , half of the 12 points are stable ( i.e. with a small slope of the tangent line ) , so that one obtains a 6-state stable regime . and , of order 1 , 2 , 4 and 6 . a convergence to a 6-state regime is observed ., width=377 ] what happens for a higher value of namely corresponding to a 4-state regime is shown in fig .[ iteree_6b ] , with again the iterates of order 1 , 2 , 4 , 6 .the 4th iterate curve crosses the diagonal for the same number of points than previously , but the 4 points are now stable .the 6th order iterate does not intersect the diagonal , except at the common points with the two first iterates . and , of order 1 , 2 , 4 and 6 . a convergence to a 4-state regimeis observed ., width=377 ] between the two preceding values of the parameter , both chaotic and intermittent regimes can exist . 
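The regime changes and periodicity windows discussed here are most easily located by sweeping the excitation parameter and recording the attractor, i.e. by building an ordinary bifurcation diagram. The sketch below assumes the `iterate_outgoing` helper from the earlier illustrative model, so neither the parameter values nor the exact location of the windows should be expected to match the article's figures; it simply discards a transient and stores the remaining iterates for each value of γ.

```python
import numpy as np
# assumes iterate_outgoing(x, gamma, zeta, lam) from the earlier sketch is available

def bifurcation_data(zeta=0.8, lam=0.95, gammas=np.linspace(0.30, 0.50, 400),
                     n_transient=400, n_keep=100):
    """Attractor samples of the outgoing-wave amplitude for a sweep of gamma."""
    points = []
    for gamma in gammas:
        x = 0.0                                   # pressure step applied at t = 0
        for _ in range(n_transient):              # let the transient die out
            x = iterate_outgoing(x, gamma, zeta, lam)
        for _ in range(n_keep):                   # record the limit set
            x = iterate_outgoing(x, gamma, zeta, lam)
            points.append((gamma, x))
    return np.array(points)

# Optional plotting:
#   import matplotlib.pyplot as plt
#   pts = bifurcation_data()
#   plt.plot(pts[:, 0], pts[:, 1], ',k'); plt.xlabel('gamma'); plt.ylabel('x'); plt.show()
```

Periodic regimes then show up as a small number of branches, period-doubling cascades as successive branch splittings, and periodicity windows as narrow ranges of γ where the chaotic cloud collapses back onto a few branches.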
for , figure [ figtransit_inter2 ] shows intermittencies between a chaotic and a 6-state behaviors ( upper curve ) , and figure [ iteree_046623 ] shows that the 6th iterate is tangent to the first diagonal in 6 points , so that the resulting permanent regime can be interpreted as a kind of hesitation between two behaviors .the 4 intersections of the 4th iterate remain unstable .+ the lower curve in figure [ figtransit_inter2 ] shows another , more visible , example of intermittencies , obtained with slightly different values of the parameters , between a chaotic regime and a 4-state one ( actually it is a 8-state one , very close to a 4-state regime ) . for , ( upper curve ) : intermittencies between chaos and a 6-state regimeare observed .however the lower curve ( for , ) shows a more clear situation of intermittencies between chaos and a 4-state regime ., title="fig:",width=377 ] for , ( upper curve ) : intermittencies between chaos and a 6-state regime are observed .however the lower curve ( for , ) shows a more clear situation of intermittencies between chaos and a 4-state regime ., title="fig:",width=377 ] and , of order 1 , 2 , 4 and 6 , corresponding to intermittencies .the sixth iterate is tangent to the diagonal ., width=377 ]the study of the iteration model of the clarinet should not be limited to the first iterate : higher order iterates give interesting information on possible regimes of oscillation . in the limit of very high orders ,their shape gives a direct indication of the number of states involved in the limit regime , or of chaotic behavior .one can also predict an intermittent regime of the iterations , which takes place when an iterate is almost tangent to the first diagonal , so that the iterations are trapped for some time in a narrow channel .the phenomenon might be related to some kinds of multiphonic sounds produced by the instrument .it is true that this phenomenon takes place only in a rather narrow domain of parameters , but this is also the case of the period doubling cascade , which has been observed experimentally .one can therefore reasonably hope that the present calculations will be followed by experimental observations .this work was supported by the french national agency anr within the consonnes project .we thank also the conservatoire neuchtelois and the high school arc - engineering in neuchtel .finally we wish to thank sami karkar and christophe vergez for fruitful discussions .our purpose is to obtain an analytical expression of the iteration function from the basic model ( eqs .( [ 1 ] to [ 1ter ] , [ 4b-1 ] , [ recurrence ] ) ) , the following quantities can be defined : can be obtained from the knowledge of the function given by the solving of : for the non - beating reed case , the study of function leads to a direct analytical solution , as explained below , at least if ( otherwise it is a multi - valued function ) . finally , with the notation and , if is the heaviside function , the iteration function is obtained , as : for this case , both and are positive and smaller than unity , because writing , eq .( [ a2 ] ) is written as : .\ ] ] the study of function shows that it is monotonously increasing from to when increases from to therefore the equation has a unique solution when with this condition , it appears that the equation has three real solutions , and that the interesting solution ( located between and ) is the intermediate one . 
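When a closed-form expression is not needed, the root selection just described can also be done numerically: build the cubic's coefficients, compute all of its roots, and keep the intermediate real one. The helper below sketches only that selection step; the coefficients of the actual cubic for the non-beating case follow from the change of variables given in the appendix and are not reproduced here, so the example cubic is purely illustrative.

```python
import numpy as np

def intermediate_real_root(coeffs):
    """Middle real root of a cubic a3*z**3 + a2*z**2 + a1*z + a0 = 0.

    coeffs = (a3, a2, a1, a0).  Raises if fewer than three (near-)real roots
    are found, since the selection rule only makes sense in that case.
    """
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(np.imag(roots)) < 1e-8].real)
    if real.size < 3:
        raise ValueError("cubic does not have three real roots")
    return real[1]                      # the intermediate root

# Example with an arbitrary cubic whose roots are -1, 0.3 and 2 (illustrative only):
# (z + 1)(z - 0.3)(z - 2) = z**3 - 1.3*z**2 - 1.7*z + 0.6
print(intermediate_real_root([1.0, -1.3, -1.7, 0.6]))   # -> 0.3 (approximately)
```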
as a conclusion ,it is possible to use the classical formula for the solution of the cubic equation : + \frac{1}{3\zeta } \text { \ ; } \\ \psi & = & \frac{1}{\zeta ^{2}}\text { ; } \eta = \sqrt{3+\psi } \text { ; } \mu = % \frac{9}{2}(3y-1).\end{aligned}\ ] ] for this case , both and are negative writing , eq .( [ a3 ] ) is written as follows : .\ ] ] the study of the function shows that it is monotonously decreasing from when increases from therefore the equation has a unique real , positive solution when the two other solutions are either real and negative or complex conjugate , with a negative real part , because the sum of the three solutions is negative ( ) . as a conclusion , the solution can be written by using the following formulae : [ [ if - the - discriminant - is - positive ] ] if the discriminant is positive \text { \ ; } r=-\frac{\psi + \mu } { % 27\zeta } \text { } . \\ \sqrt{-x } & = & z = s_{1}-\frac{q}{s_{1}}-\frac{1}{3\zeta } \text { ; } s_{1}=\left [ r+\sqrt{discr}\right ] ^{1/3}.\end{aligned}\ ] ] [ [ if - the - discriminant - is - negative ] ] if the discriminant is negative -\frac{1}{% 3\zeta } \text { ; } \\\eta ^{\prime } & = & \sqrt{-3+\psi } .\end{aligned}\ ] ]the condition of existence of negative flow is given by .this is equivalent to the condition on the antecedents , , where is the larger antecedent of , such as because is decreasing for all ( see fig .[ fonction - iteree]) therefore the volume flow is negative at time . in order to determine the limit value , the following equations are to be used : being positive ( a reasonable hypothesis for the normal playing ) , the unknown needs to be larger than the quantity eliminating in the above equations implies the following equation , with : or with : an example of function is shown in fig . [ figfonctionh(x ) ] .it appears that no solutions exist if and two solutions exist if , i.e. if inequation ( [ 10h ] ) holds .the two solutions can be obtained analytically .however , for sake of simplicity , we give the exact solution for the larger one , , and an approximation for the smaller one , , obtained at the first order in : \text { , } \label{10a } \\ & & \text { with } \varepsilon = \frac{\lambda -1 + 2\lambda a_{\zeta } } { ( \lambda + 1)\zeta + \lambda -1}.\end{aligned}\ ] ] this error is found to be less than in comparison with the exact value .condition ( [ 10h ] ) can be shown to be necessary and sufficient .we do not give the entire proof , but it can be shown that another necessary condition for having two solutions is , or , but it is implied by condition ( [ 10h ] ) .[ limits ] shows that the first negative flow threshold is very close to the threshold , and slightly smaller . for a given , the limit value of such as corresponds to the equality between the beating reed threshold and the negative flow one . for a given negative flowis possible above a certain value of . for rather strong losses ,if , no negative flow can occur . for a cylindrical resonator , this implies that .collet , p. and eckmann , j.p ., `` properties of continuous maps of the interval to itself '' , _ mathematical problems in theoretical physics _, k. osterwalder ( ed . ) , springer - verlag , heidelberg , 1979 ; _ iterated maps on the interval as dynamical systems _ ,birkhuser , basel , 1980 .feigenbaum , j. 
, `` the universal metric properties of nonlinear transformations '' ._ journal of statistical physics , _ 21 , 1979 , 669 - 706 ; `` the metric universal properties of period doubling bifurcations and the spectrum for a route to turbulence '' , _ annals of the new york academy of science _ , 357 , 1980 , * * * * 330 - 336 .kergomard , j. , `` elementary considerations on reed - instrument oscillations '' . in _ mechanics of musical instruments _ , vol .* 335 * ( a. hirschberg/ j. kergomard/ g. weinreich , eds),of _ cism courses and lectures _ , pages 229290 .springer - verlag , wien , 1995 .lize , a. , doublement de priode dans les instruments anche simple de type clarinette , master degree thesis , paris 2004 , http://www.atiam.ircam.fr/archives/stages0304/lizee.pdf [ http://www.atiam.ircam.fr/archives/stages0304/lizee.pdf ] idogawa , t. , kobata , t. , komuro , k. and masakazu , i. , nonlinear vibrations in the air column of a clarinet artificially blown , _ journal of the acoustical society of america , _ 93 , 1993 , 540551 .kergomard , j. , dalmont , j.p . ,gilbert , j. and guillemain , ph . .`` period doubling on cylindrical reed instruments '' . _ in proceedings ot the joint congress cfa / daga04 , _ pages 113114 , strasbourg , 22th 25th march 2004 .dalmont , j .-p . , gilbert , j. , kergomard , j. and ollivier , s. `` an analytical prediction of the oscillation and extinction thresholds of a clarinet '' ._ journal of the acoustical society of america _ , 118 , 2005 , 32943305 .hirschberg , a. , van de laar , r. w. a. , marrou - maurires , j. p. , wijnands , a. p. j. , dane , h. j. , kruijswijk , s. g. and houtsma , a. j. m. a quasi - stationary model of air flow in the reed channel of single - reed woodwind instruments ._ acustica _ , 70 , 1990 , 146154 .dalmont , j .-, gilbert , j. and ollivier , s. .`` nonlinear characteristics of single - reed instruments : quasistatic volume flow and reed opening measurements '' , _ journal of the acoustical society of america , _ 114 , 2003 , 22532262 .dalmont , j .-p . and frapp , c. , `` oscillation and extinction thresholds of the clarinet : comparison of analytical results and experiments '' ._ journal of the acoustical society of america _ , 122 , 2007 , 11731179 .causs , r. , kergomard , j. , lurton , x. , `` input impedance of brass musical instruments - comparison between experiment and numerical model '' , _ journal of the acoustical society of america , _ 75 , 1984 , 241 - 254 .mayer - kress , g. and haken , h. , `` attractors of convex maps with positive schwarzian derivative in the presence of noise '' , _physica 10d _ , 1984 , 329 - 339 .parlitz , u. , englisch , v. , scheffczyk , c. and lauterborn , w. , `` bifurcation structure of bubble oscillators '' , _ journal of the acoustical society of america , _ 88 , 1990 , 1061 - 1077 .scheffczyk , c. , parlitz , u. , kurz , t. , knop , w. , lauterborn , w. , `` comparison of bifurcation structures of driven dissipative nonlinear oscillators'',_physical review a _ , 43 , 1991 , 6495 - 6502 .
The dynamical equations of clarinet-like systems are known to be reducible, within reasonable approximations, to a non-linear iterated map. This leads to time oscillations that are represented by square signals, analogous to the Raman regime for string instruments. In this article, we study in more detail the properties of the corresponding non-linear iterations, with emphasis on the geometrical constructions that can be used to classify the various solutions (for instance with or without reed beating), as well as on the periodicity windows that occur within the chaotic region. In particular, we find a regime where period tripling occurs and examine the conditions for intermittency. We also show that, while direct observation of the iteration function does not reveal much about the oscillation regime of the instrument, the graph of the high-order iterates directly gives visible information on that regime (number of period doublings, chaotic behaviour, etc.). Keywords: bifurcations, iterated maps, reed musical instruments, clarinet, acoustics.
students must learn effective problem solving strategies in order to develop expertise in physics . specifically , they must be able to solve problems beyond those that can be solved using a plug - and - chug approach . research shows that converting a problem from the initial verbal representation to other suitable representations such as diagrammatic , tabular , graphical or algebraic can make further analysis of the problem easier . similarly , using analogies or considering limiting cases are also useful strategies for solving problems . many traditional courses do not explicitly teach students effective problem solving heuristics .rather , they may implicitly reward inferior problem solving strategies that many students engage in .instructors may implicitly assume that students appreciate the importance of initial qualitative analysis , planning , evaluation , and reflection phases of problem solving and that these phases are as important as the implementation phase . consequently , they may not explicitly discuss and model these strategies while solving problems in class .recitation is usually taught by the teaching assistants ( tas ) who present homework solutions on the blackboard while students copy them in their notebooks . without guidance , most textbook problems do not help students monitor their learning , reflect upon the problem solving process and pay attention to their knowledge structure. quantitative and conceptual problem solving both can enhance problem solving and reasoning skills , but only if students engage in effective problem solving strategies rather than treating the task purely as a mathematical chore or guess - work . without guidance, many introductory physics students do not perceive problem solving as an opportunity for learning to interpret the concepts involved and to draw meaningful inferences from them .instead , they solve problems using superficial clues and cues , and apply concepts at random without concern for their applicability . with explicit training, these same problem solving tasks can be turned into learning experiences that help students organize new knowledge coherently and hierarchically .the abstract nature of the laws of physics and the chain of reasoning required to draw meaningful inferences make it even more important to teach students effective problem solving strategies explicitly .reflection is an integral component of effective problem solving . while experts in a particular field reflect and exploit problem solving as an opportunity for organizing and extending their knowledge , students often need feedback and support to learn how to use problem solving as an opportunity for learning . there are diverse strategies that can be employed to help students reflect upon problem solving .one approach that has been found to be useful is self - explanation " or explaining what one is learning explicitly to oneself . chi et al . found that , while reading science texts , students who constantly explained to themselves what they were reading and made an effort to connect the material read to their prior knowledge performed better on problem solving on related topics given to them after the reading . inspired by the usefulness of self - explanation , yerushalmi et al . investigated how students may benefit from being explicitly asked to diagnose mistakes in their own quizzes with different levels of scaffolding support . 
they found that students benefited from diagnosing their own mistakes .the level of scaffolding needed to identify the mistakes and correct them depended on the difficulty of the problems .another activity that may help students learn effective problem solving strategies while simultaneously learning physics content is reflection with peers . in this approach ,students reflect not only on their own solution to problems , but reflect upon their peers solutions as well .integration of peer interaction ( pi ) with lectures has been popularized in the physics community by mazur from harvard university . in mazur s pi approach , the instructor poses conceptual problems in the form of multiple - choice questions to students periodically during the lecture . the focal point of the pi method is the discussion among students , which is based on conceptual questions ; the lecture component is limited and intended to supplement the self - directed learning .the conceptual multiple choice questions give students an opportunity to think about the physics concepts and principles covered in the lecture and discuss their answers and reasoning with peers .the instructor polls the class after peer interaction to obtain the fraction of students with the correct answer .on one hand , students learn about the level of understanding that is desired by the instructor by discussing with each other the concrete questions posed .the feedback obtained by the instructor is also invaluable because the instructor learns about the fraction of the class that has understood the concepts at the desired level . this pi strategy keeps students alert during lectures and helps them monitor their learning , because not only do students have to answer the questions , they must explain their answers to their peers . the method keeps students actively engaged in the learning process and lets them take advantage of each others strengths .it helps both the low and high performing students , because explaining and discussing concepts with peers helps students organize and solidify concepts in their minds . heller et al .have shown that group problem solving is especially valuable both for learning physics and for developing effective problem solving strategies . they have developed many `` context - rich '' problems that are close to everyday situations and are more challenging and stimulating than the standard textbook problems .these problems require careful thought and the use of many problem representations . working with peers in heterogeneous groups with students with high, low and medium performance is particularly beneficial for learning from the context - rich " problems and students are typically assigned the rotating roles of manager , time keeper and skeptic by the instructor . our prior research has shown that , even with minimal guidance from the instructors , students can benefit from peer interaction . in our study , those who worked with peers not only outperformed an equivalent group of students who worked alone on the same task , but collaboration with a peer led to co - construction " of knowledge in of the cases . co - construction of knowledge occurs when neither student who engaged in the peer collaboration was able to answer the questions before the collaboration , but both were able to answer them after working with a peer on a post - test given individually to each person . 
here, we describe a study in which algebra - based introductory physics students in the peer reflection group ( pr group ) were provided guidance and support to reflect upon problem solving with peers and undergraduate and graduate teaching assistants in the recitation class . on the other hand, other recitation classes were run in a traditional manner with the ta answering students homework questions and then giving a quiz at the end of each recitation class .our assessment method was novel in that it involved counting the number of problems in which students drew diagrams or did scratchworks on scratch books when there was no partial credit for these activities because the questions were in the multiple - choice format .we find that the pr group drew more diagrams than the traditional group ( statistically significant ) even when there was no external reward for drawing them .the peer - reflection process which was sustained throughout the semester requires students to evaluate their solutions and those of their peers and involves high level of mental processing . the reflection process with peers can also help students monitor their learning . we also find that there is a positive correlation between the number of diagrams drawn and the final exam performance . in particular , students who drew diagrams for more problems performed better than others regardless of whether they belonged to the traditional group or the pr group .the investigation involved an introductory algebra - based physics course mostly taken by students with interest in health related professions .the course had 200 students and was broken into two sections both of which met on tuesdays and thursdays and were taught by the same professor who had taught both sections of the course before .a class poll at the beginning of the course indicated that more than of the students had taken at least one physics course in high school , and perhaps more surprisingly , more than of the students had taken at least one calculus course ( although the college physics course in which they were enrolled was an algebra - based course ) .the daytime section taught during the day was the traditional group and had 107 students whereas the evening section called the peer reflection " group or pr group had 93 students .the lectures , all homework assignments , the midterm exams and the final exam were identical for the daytime and evening sections of the course .moreover , the instructor emphasized effective problem solving strategies , e.g. , performing a conceptual analysis of the problem and planning of the solution before implementing the plan and importance of evaluating the solution throughout the semester in both the traditional and peer - reflection groups . each week, students in both groups were supposed to turn in answers to the assigned homework problems ( based upon the material covered in the previous week ) using an online homework system for some course credit .in addition , students in both groups were supposed to submit a paper copy of the homework problems which had the details of the problem solving approach at the end of the recitation class to the ta for some course credit . while the online homework solution was graded for correctness , the ta only graded the paper copies of the submitted homework for completeness on a three point scale ( full score , half score or zero ) .the weighting of each component of the course , e.g. 
, midterm exams , final exam , class participation , homework and the scores allocated for the recitation were the same for both classes .also , as noted earlier , all components of the course were identical for both groups except the recitations which were conducted very differently for the pr and traditional groups .although the total course weighting assigned to the recitations was the same for both groups ( since all the other components of the course had the same weighting for both groups ) , the scoring of the recitations was different for the two groups .students were given credit for attending recitation in both groups .attendance was taken in the recitations using clickers for both the traditional group and the pr group .the traditional group recitations were traditional in which the ta would solve selected assigned homework problems on the blackboard and field questions from students about their homework before assigning a quiz in the last 20 minutes of the recitation class .the recitation quiz problems given to the traditional group were similar to the homework problems selected for peer reflection " in the pr group recitations ( but the quiz problems were not identical to the homework problems to discourage students in the traditional group from memorizing the answers to homework in preparation for the quiz ) .students in the pr group reflected on three homework problems in each recitation class but no recitation quiz was given to the students in this group at the end of the recitation classes , unlike the traditional group , primarily due to the time constraints .the recitation scores for the pr group students were assigned based mostly on the recitation attendance except students obtained bonus points for helping select the best " student solution as described below .since the recitation scoring was done differently for the traditional and pr groups , the two groups were curved separately so that the top of the students in each group obtained a and b grades in view of the departmental policy .as noted earlier , both recitation sections for the evening section ( 93 students total ) together formed the pr group . the pr group intervention was based upon a field - tested cognitive apprenticeship model of learning involving modeling , coaching , and fading to help students learn effective problem solving heuristics . in this approach , modeling " means that the ta demonstrates and exemplifies the effective problem solving skills that the students should learn . coaching " means providing students opportunity to practice problem solving skills with appropriate guidance so that they learn the desired skills . fading " means decreasing the support and feedback gradually with a focus on helping students develop self - reliance . 
the specific strategy used by the students in the pr group involved reflection upon problem solving with their peers in the recitations , while the ta and the undergraduate teaching assistants ( utas ) exemplified the effective problem solving heuristics .the utas were chosen from those undergraduate students who had earned an grade in an equivalent introductory physics course previously .the utas had to attend all the lectures in the semester in which they were utas for a course and they communicated with the ta each week ( and periodically with the course instructor ) to determine the plan for the recitations .we note that , for effective implementation of the pr method , two utas were present in each recitation class along with the ta .these utas helped the ta in demonstrating and helping students to learn effective problem solving heuristics . in our intervention ,each of the three recitation sections in the traditional group had about 35 - 37 students .the two recitations for the pr group had more than 40 students each ( since the pr group was the evening section of the course , it was logistically not possible to break this group into three recitations ) . at the beginning of each pr recitation , students were asked to form nine teams of three to six students chosen at random by the ta ( these teams were generated by a computer program each week ) .the ta projected the names of the team members on the screen so that they could sit together at the beginning of each recitation class .three homework questions were chosen for a particular recitation .the recitations for the two sections were coordinated by the tas so that the recitation quiz problems given to the traditional group were based upon the homework problems selected for peer reflection " in the pr group recitations .each of the three competitions " was carefully timed to take approximately 15 minutes , in order for the entire exercise to fit into the allotted fifty - minute time slot .after each question was announced to the class , each of the nine teams were given three minutes to identify the best " solution by comparing and discussing among the group members .if a group had difficulty coming up with a winner " , the ta / uta would intervene and facilitate the process .the winning students were asked to come to the front of the room , where they were assembled into three second - round groups .the process was repeated , producing three finalists .these students handed in their homework solutions to the tas , after which the ta / uta evaluation process began .a qualitative sketch of the team structures at various stages of the competition is shown in figure 1 .the three finalists solutions were projected one at a time on a screen using a web cam and computer projector .each of the three panelists ( the ta and two utas ) gave their critique of the solutions , citing what each of the finalists had done well and what could be done to further enhance the problem solving methodology in each case .in essence , the ta and utas were judges " similar to the judges in the television show american idol " and gave their critique " of each finalist s problem solving performance .after each solution had been critiqued by each of the panelists , the students , using the clickers , voted on the best " solution .the ta and utas did not participate in the voting process . 
in order to encourage each team in the pr group to select the student with the most effective problem solving strategy as the winner for each problem , all students from the teams whose member advanced to the final round to win " the competition " were given course credit ( bonus points ) .in particular , each of these team members ( consolation prize winners ) earned one third of the course credit given to the student whose solution was declared to be the winner " .this reward system made the discussions very lively and the teams generally made good effort to advance the most effective solution to the next stage .figure 1 shows one possible team configuration at various stages of pr activities when there are 27 students in the recitation class initially . due to lack of space ,each of the initial teams ( round 1 ) in figure 1 is shown with 3 members whereas in reality this round on average consisted of five team members .in each team , the student with the dark border in figure 1 is the winner of that round " and advances to the next stage .all the students who participated at any stage in helping select the winner " ( those shown in gray ) were the consolation prize winners and obtained one third of the course credit that was awarded to the winner for that problem .while we video - taped a portion of the recitation class discussions when students reflected with peers , a good account of the effectiveness and intensity of the team discussions came from the ta and utas who generally walked around from team to team listening to the discussions but not interrupting the team members involved in the discussions unless facilitation was necessary for breaking a gridlock . the course credit and the opportunity to have the finalists solutions voted on by the whole class encouraged students to argue passionately about the aspects of their solutions that displayed effective problem solving strategies .students were constantly arguing about why drawing a diagram , explicitly thinking about the knowns and target variables , and explicitly justifying the physics principles that would be useful before writing down the equations are effective problem solving strategies . furthermore , the american idol " style recitation allowed the tas to discuss and convey to students in much more detail what solution styles were preferred and why .students were often shown what kinds of solutions were easier to read and understand , and which were more amenable to error - checking .great emphasis was placed on consistent use of notation , setting up problems through the use of symbols to define physical quantities , and the importance of clear diagrams in constructing solutions . 
at the end of the semester ,all of the students were given a final exam consisting of 40 multiple choice questions , 20 of which were primarily conceptual in nature and 20 of which were primarily quantitative ( students had to solve a numerical or symbolic problem for a target quantity ) .although the final exam was all multiple - choice , a novel assessment method was used .while students knew that the only thing that counted for their grade was whether they chose the correct option for each multiple - choice question , each student was given an exam notebook which he / she could use for scratchworks .we hypothesized that even if the final exam questions were in the multiple - choice format , students who value effective problem solving strategies will take the time to draw more diagrams and do more scratchworks even if there was no course credit for such activities . with the assumption that the students will write on the exam booklet andwrite down relevant concepts only if they think it is helpful for problem solving , multiple - choice exam can be a novel tool for assessment. it allowed us to observe students problem solving strategies in a more native " form closer to what they really think is helpful for problem solving instead of what the professor wants them to write down or filling the page when a free - response question is assigned with irrelevant equations and concepts with the hope of getting partial credit for the effort .we decided to divide the students work in the notebooks and exam - books into two categories : diagrams and scratchworks .the scratchworks included everything written apart from the diagrams such as equations , sentences , and texts .both authors of this paper agreed on how to differentiate between diagrams and scratchworks .instead of using subjectivity in deciding how good " the diagrams or scratchworks for each student for each of the 40 questions were , we only counted the number of problems with diagrams drawn and scratchworks done by each student .for example , if a student drew diagrams for 7 questions out of 40 questions and did scratchworks for 10 questions out of 40 questions , we counted it as 7 diagrams and 10 scratchworks .we hypothesized that the pr intervention in the recitation may be beneficial in helping students learn effective problem solving strategies because students solved the homework problems , discussed them with their peers to determine the top three student solutions and then were provided expert feedback from the utas and ta about those solutions .chi et al . have theorized that students often fail to learn from the expert " solutions even when they realize that their own approaches are deficient because they may not take the time to reflect upon how the expert model is different from their own and how they can repair their own knowledge structure and improve their problem solving strategies .chi et al .therefore emphasize that simply realizing that the problem solving approach of the instructor is superior than theirs may not be enough to help students learn effective strategies .they noted that the feedback and support by instructors to help students understand why the expert solution is better after the students realize that they are better is a critical component of learning effective approaches to problem solving . 
in the approach adopted in the pr group recitations , the utas and ta provided valuable expert " feedback to help students make these connections right after the students had thought about effective approaches to problem solving themselves .in fact , since high performing utas are only a year ahead of the students , their feedback may be at the right level and even more valuable to the students . such reasoning was our motivation for exploring the effect of the pr intervention .our goal was to examine both inter - group effects and group - independent effects .inter - group effects refer to the investigation of the differences between the traditional group and the pr group . for example, we investigated whether there was a statistical difference in the average number of problems for which students drew diagrams and wrote scratchworks in the pr group and the traditional group .we also examined group - independent effects , findings that hold for students in both the traditional group and the pr group .one issue we examined was whether students who drew more diagrams , despite knowing that there was no partial credit for these tasks , were more likely to perform better in the final exam .we also investigated the correlation between the number of problems with diagrams or scratchworks and the final exam performance when quantitative and conceptual questions were considered separately .we also explored whether students were more likely to draw diagrams on quantitative or conceptual questions on the final exam .although no pretest was given to students , there is some evidence that , over the years , the evening section of the course is somewhat weaker and does not perform as well overall as the daytime section of the course historically .the difference between the daytime and evening sections of the course could partly be due to the fact that some students in the evening section work full - time and take classes simultaneously .for example , the same professor had also taught both sections of the course one year before the peer reflection activities were introduced in evening recitations and thus all recitations for both sections of the course were taught traditionally that year .thus , we first compare the averages of the daytime and evening sections before and after the peer reflection activities were instituted in the evening recitation classes .table 1 compares the difference in the averages between the daytime and evening classes the year prior to the introduction of peer reflection ( fall 2006 ) and the year in which peer reflection was implemented in the evening recitation classes ( fall 2007 ) . in table 1 ,the p - values given are the results of t - tests performed between the daytime and evening classes . 
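The section-to-section comparisons reported in Table 1 are standard independent two-sample t-tests. The snippet below is a minimal sketch with synthetic score arrays standing in for the daytime and evening sections; the real data are not reproduced here, and the article does not state whether a pooled-variance or a Welch-type test was used, so the default pooled form is assumed.

```python
import numpy as np
from scipy import stats

# Illustrative arrays only; the real analysis used the two sections' exam scores.
rng = np.random.default_rng(42)
daytime = rng.normal(loc=62.0, scale=12.0, size=107)   # hypothetical midterm scores
evening = rng.normal(loc=58.0, scale=12.0, size=93)

# Independent two-sample t-test between the sections (pooled-variance form assumed)
t_stat, p_value = stats.ttest_ind(daytime, evening)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```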
statistically significant difference ( at the level of ) between groups only exists between the average midterm scores for the year in which peer reflection was implemented .the evening section scored lower on average than the daytime section on the final exam but the difference is not statistically significant ( p=0.112 for 2006 and p=0.875 for 2007 ) , as indicated in table 1 .we note that while the midterm questions differed from year to year ( since the midterms were returned to students and there was a possibility that the students would share them ) , the final exam , which was not returned to students , was almost the same both years ( except the instructor had changed a couple of questions on the final exam from 2006 to 2007 ) .we also note that the final exam scores are lower than the midterm scores because there was no partial credit for answering multiple - choice questions on the final exam .the final exam which was comprehensive had 40 multiple - choice questions , half of which were quantitative and half were conceptual .there was no partial credit given for drawing the diagrams or doing the scratchworks .one issue we investigated is whether the students considered the diagrams or the scratchworks to be beneficial and used them while solving problems , even though students knew that no partial credit was given for showing work .as noted earlier , our assessment method involved counting the number of problems with diagrams and scratchworks .we counted any comprehensible work done on the exam notebook other than a diagram as a scratchwork . in this sense , quantifying the amount of scratchwork does not distinguish between a short and a long scratchwork for a given question . if a student wrote anything other than a diagram , e.g. , equations , the known variables and target quantities , an attempt to solve for unknown etc ., it was considered scratchwork for that problem .similarly , there was a diversity in the quality of diagrams the students drew for the same problem .some students drew elaborate diagrams which were well labeled while others drew rough sketches .regardless of the quality of the diagrams , any problem in which a diagram was drawn was counted .we find that the pr group on average drew more diagrams than the traditional group .table 2 compares the average number of problems with diagrams or scratchworks by the traditional group and the pr group on the final exam .it shows that the pr group had more problems with diagrams than the traditional group ( statistically significant ) .in particular , the traditional group averaged 7.0 problems with diagrams per student for the whole exam ( 40 problems ) , 4.3 problems with diagrams for the quantitative questions per student and 2.7 problems with diagrams for the conceptual questions per student . on the other hand ,the pr group averaged 8.6 problems with diagrams per student , 5.1 problems with diagrams for the quantitative questions per student and 3.5 diagrams for the conceptual questions per student .figures 2 and 3 display the histograms of the total number of problems with diagrams on the final exam ( with only multiple - choice questions ) vs. the percentage of students for the traditional and pr groups respectively .figures 2 shows that some students in the pr group drew many more diagrams than the average number of diagrams drawn on the exam .the histograms of the total number of quantitative or conceptual questions with diagrams vs. 
the percentage of students for the traditional and pr groups can be found here. we also find that students drew more diagrams for the quantitative questions than the conceptual questions .tables 2 shows that , regardless of whether they belonged to the traditional group or the pr group , students were more likely to draw diagrams for the quantitative questions than for the conceptual questions .the comparison of the number of problems with diagrams for quantitative vs. conceptual problems for each of the traditional ( n=107 ) and pr ( n=93 ) groups displayed in table 2 gives statistically significant difference at the level of .it was not clear _ a priori _ that students will draw more diagrams for the quantitative questions than for the conceptual questions and we hypothesize that this trend may change depending on the expertise of the individuals and the type of questions asked .table 2 also shows that , while there was no statistical difference between the two groups in terms of the total number of problems with scratchworks performed , the students were far more likely to do scratchworks for the quantitative questions than for the conceptual questions .figures 4 and 5 display the histograms of the total number of problems with scratchworks on the final exam vs. the percentage of students for the traditional and pr groups respectively .the histograms of the total number of quantitative or conceptual questions with scratchworks vs. the percentage of students for the traditional and pr groups can be found here. we note that the sample sizes are approximately equal and not too small , so the t - test will not be much affected even if the distributions are skewed . the comparison of the number of problems with scratchworks for quantitative vs. conceptual problems for each of the traditional and pr groups displayed in table 2 gives statistically significant difference at the level of .this difference between the quantitative and the conceptual questions makes sense since problems which are primarily quantitative require calculations to arrive at an answer .we also find a positive correlation between the final exam score and the number of problems with diagrams .table 3 investigates whether the final exam score is correlated with the number of problems with diagrams or scratchworks for the traditional group and the pr group separately ( r is the correlation coefficient ) .the null hypothesis in each case is that there is no correlation between the final exam score and the variable considered , e.g. , the total number of problems with diagrams drawn . table 3 shows that , for both the traditional group and the pr group , the students who had more problems with diagrams and scratchworks were statistically ( significantly ) more likely to perform well on the final exam . 
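The correlation coefficients r reported in Table 3, together with the test of the null hypothesis of zero correlation, correspond to an ordinary Pearson correlation. The snippet below is a minimal sketch on synthetic stand-in data, with one array of per-student diagram counts and one of final-exam scores; the numbers are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_students = 93                                        # e.g. the size of the PR group
diagram_counts = rng.poisson(lam=8.6, size=n_students)           # stand-in counts
final_scores = 40 + 2.0 * diagram_counts + rng.normal(0, 12, n_students)

# Pearson r between the number of problems with diagrams and the final exam score,
# with the p-value for the null hypothesis of zero correlation
r, p = stats.pearsonr(diagram_counts, final_scores)
print(f"r = {r:.2f}, p = {p:.4f}")
```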
this correlation holds regardless of whether we look at all questions or the quantitative questions only ( labeled diagram ( q ) and scratch(q ) for the quantitative diagrams and scratchworks respectively ) or conceptual questions only ( labeled diagram(c ) and scratch(c ) for the conceptual diagrams and scratch work respectively ) .there is no prior research related to physics learning at any level that we know of that shows a positive correlation between the exam performance and the number of problems with diagrams drawn when answering multiple - choice questions where there is no partial credit for drawing diagrams .it is evident from table 3 that for both conceptual and quantitative questions , diagrams are positively correlated with final exam performance .in particular , we investigated the correlations between the number of problems with diagrams drawn ( or the amount of scratchwork ) and the final exam scores on the quantitative questions alone and the conceptual questions alone .table 3 shows that within each group ( traditional or pr ) , the correlation between the number of diagrams drawn and the final exam score is virtually identical for the quantitative and conceptual questions ( r = 0.19 for quantitative vs. r=0.20 for conceptual in the traditional group , r=0.36 for quantitative vs. r=0.36 for conceptual in the peer reflection group ) .we find that the diagrams drawn by the pr group explain more of the final exam performance .in particular , the comparison of the traditional group and the pr group in table 3 shows that for each case shown in the different rows , the correlation between the number of diagrams or amount of scratchwork and the final exam score is stronger for the pr group than for the traditional group .for example , the correlation coefficient for the number of problems with diagrams drawn vs. the final exam score is higher for the pr group compared to the traditional group ( r=0.40 vs. r=0.24 ) .the pr group also showed a stronger correlation than the traditional group even when the quantitative and conceptual questions were considered separately for the correlation between the number of problems with diagrams drawn and the final exam scores .similarly , the correlation coefficient for the number of scratchworks vs. the final exam score is higher for the pr group compared to the traditional group ( r=0.53 vs. 
r=0.39 ) .we also find that scratchworks explain more of the performance for quantitative questions than conceptual questions .in particular , table 3 also shows that , within each group ( traditional and pr ) , the correlation between the amount of scratchwork and the final exam score was stronger for the quantitative questions than for the conceptual questions .while correlation does not imply causality , the stronger correlation may be due to the fact that students do not necessarily have to perform algebraic manipulations for conceptual questions but it may be a pre - requisite for identifying the correct answer for a quantitative question .further examination of the quantitative - only correlations between the scratchwork and the final exam scores ( r = 0.42 for the traditional group , r = 0.59 for the pr group ) shows that the correlations are stronger for the pr group than the traditional group .in this study , the recitation classes for an introductory physics course primarily for the health - science majors were broken into a traditional group and a peer reflection " or pr group .we investigated whether students in the pr group use better problem - solving strategies such as drawing diagrams than students in the traditional group and also whether there are differences in the performance of the two groups .we also explored whether students who perform well are the ones who are more likely to draw diagrams or write scratchworks even when there is no partial credit for these activities . in the pr group recitation classes, students reflected about their problem solving in the homework with peers each week .appropriate guidance and support provided opportunities for learning effective problem solving heuristics to the students in the pr group . in particular , students in the pr group reflected in small teams on selected problems from the homework and discussed why solutions of some students employed better problem solving strategies than others .the ta and utas in the pr group recitations demonstrated effective approaches to problem solving and coached students so that they learn those skills .each small team in the pr group discussed which student s homework solutions employed the most effective problem solving heuristics and selected a winner " .then , three teams combined into a larger team and repeated the process of determining the winning " solution .typically , once three finalists " were identified in this manner , the ta and utas put each finalist s solution on a projector and discussed what they perceived to be good problem - solving strategies used in each solution and what can be improved . 
finally , each student used clickers to vote on the best " overall solution with regard to the problem solving strategies used .there was a reward system related to course credit that encouraged students to be involved in selecting the solution with the best problem solving strategy in each round .students in the traditional group had traditional recitation classes in which they asked the ta questions about the homework before taking a quiz at the end of each recitation class .each problem selected for peer reflection " was adapted into a quiz problem for the traditional group .the assessment of the effectiveness of the intervention was novel .the final exam had 40 multiple - choice questions , half of which were quantitative and half were conceptual .for the multiple - choice questions , students technically do not have to show their work to arrive at the answer .however , students may use effective approaches to problem solving such as drawing a diagram or writing down their plan if they believe it may help them answer a question correctly .although students knew that there was no partial credit for drawing diagrams or writing scratchworks , we compared the average number of problems with diagrams or scratchworks in the traditional and experimental groups .our hypothesis was that students who value effective problem solving strategies will have more problems in which diagrams are drawn or scratchworks are written despite the fact that there is no partial credit for these activities .in fact , the fact that there was no partial credit for the diagrams or the scratchworks helped eliminate the possibility of students drawing the diagrams or writing scratchworks for the sole purpose of getting partial credit for the effort displayed ( even if it is meaningless from the perspective of relevant physics content ) .we note that to help understand the statistical differences between the pr and traditional sections , we chose to quantify the number of diagrams drawn in the multiple - choice final examination .it should be stressed that the pr recitations emphasized a wide variety of preferred problem - solving strategies ( not just drawing diagrams ) . diagrams ( and scratchworks ) were chosen simply because they were more straightforward to quantify than other strategies .we hypothesize that the use of diagrams represents a good marker " for overall improvement in problem solving strategies .our findings can be broadly classified into inter - group and group - independent categories .the inter - group findings that show the difference between the traditional group and pr group can be summarized as follows : * on the multiple - choice final exam where there was no partial credit for drawing diagrams , the pr group drew diagrams in more problems ( statistically significant ) than the traditional group . 
* the diagrams drawn by the pr group explain more of the final exam performance than those by the traditional group .findings that are independent of group ( which are true even when the traditional group and pr group are not separated and all students are considered together ) can be summarized as follows : * there is a statistically significant positive correlation between how often students wrote scratchworks or drew diagrams and how well they performed on the final exam regardless of whether they were in the traditional group or the pr group .in particular , those who performed well in the multiple - choice final exam ( in which there was no partial credit for showing work ) were much more likely to draw diagrams than the other students . while one may assume that high - performing students will draw more diagrams even when there is no partial credit for it , no prior research that we know of has explicitly demonstrated a correlation between the number of genuinely drawn " diagrams and student performance at any level of physics instruction . * the correlations between the number of problems with diagrams drawn and the final exam scores on the quantitative questions alone and the conceptual questions alone are comparable and positive . * students in both groups were more likely to draw diagrams or write scratchworks for quantitative problems than for the conceptual questions .while more scratchworks are expected on quantitative problems , it is not clear _ a priori _ that more diagrams will be drawn for the quantitative problems than for the conceptual questions .we hypothesize that this trend may depend upon the expertise of the individuals , explicit training in effective problem solving strategies and the difficulty of the problems .we note that the students in the traditional group were given weekly recitation quizzes in the last 20 minutes of the recitation class based upon that week s homework .there were no recitation quizzes in the pr group due to the time constraints .it is sometimes argued by faculty members that the recitation quizzes are essential to keep students engaged in the learning process during the recitations .however , this study shows that the pr group was not adversely affected by not having the weekly quizzes that the traditional group had and instead having the peer reflection activities .the mental engagement of students in the pr group throughout the recitation class may have more than compensated for the lack of quizzes .the students in the pr group were evaluating their peer s work along with their own which requires high level of mental processing . they were comparing problem solving strategies such as how to do a conceptual analysis and planning of the solution , why drawing two separate diagrams may be better in certain cases ( e.g. , before and after a collision ) than combining the information into one diagram , how to define and use symbols consistently etc .in addition , after the active engagement with peers , students got an opportunity to learn from the ta and utas about their critique of each winning " solution highlighting the strengths and weaknesses .we also note that the pr recitations do not require any additional class time or effort on the part of the instructor . according to chi , students are likely to improve their approaches to problem solving and learn meaningfully from an intervention if both of the following happen : i ) students compare two artifacts , e.g. 
, the expert solution and their own solution and realize their omissions , and ii ) they receive guidance to understand why the expert solution is better and how they can improve upon their own approaches .the pr approach uses such a two tier approach in which students first identify that other student s solution may be better than their own and then are guided by the utas / ta to reflect upon the various aspects of the winning " solutions .we are very grateful to dr .louis pingel from the school of education and dr .allan r. sampson and huan xu from the department of statistics at the university of pittsburgh for their help in data analysis .we thank f. reif , r. glaser , r. p. devaty and j. levy for helpful discussions .we thank j. levy for his enthusiastic support of helping us implement the peer reflection " activities in his class .we thank the national science foundation for financial support .j. larkin , understanding , problem representations , and skill in physics " in s. f. chipman , j. w. segal and r. glaser ( eds . ) , thinking and learning skills ( lawrence erl . , hillsdale , nj ) , * 2 * , pp . 141 - 159 ( 1985 ) .j. kaput , representation and problem solving : methodological issues related to modeling " , e. a. silver ( ed . ) , in teaching and learning mathematical problem solving : multiple research perspectives , lawrence erl . ,hillsale , nj , pp . 381 - 398 , 1985 .a. schoenfeld , `` learning to think mathematically : problem solving , metacognition , and sense - making in mathematics , '' in _ handbook for research on mathematics teaching and learning _, edited by d. grouws ( mcmillan , ny , 1992 ) , chap .334370 .a. schoenfeld , `` teaching mathematical thinking and problem solving , '' in _ toward the thinking curriculum : current cognitive research _, edited by l. b. resnick and b. l. klopfer ( ascd , washington , dc , 1989 ) , pp .83103 .r. dufresne , j. mestre , t. thaden - koch , w. gerace , and w. leonard , `` knowledge representation and coordination in the transfer process , '' in _ transfer of learning from a modern multidisciplinary perspective _ , edited by j. p. mestre ( information age publishing , greenwich , ct , 2005 ) , pp .155215 . c. singh , assessing student expertise in introductory physics with isomorphic problems , part ii : examining the effect of some potential factors on problem solving and transfer " , phys .res . * 4 * , 010105(1 - 10 ) , ( 2008 ) .e. yerushalmi , c. singh , and b. eylon , physics learning in the context of scaffolded diagnostic tasks ( i ) : the experimental setup " , in _ proceedings of the phys .conference , greensboro , nc , _ edited by l. hsu , l. mccullough , and c. henderson , aip conf .951 , 27 - 30 , ( 2007 ) .e. mazur , understanding or memorization : are we teaching the right thing " in conference on the introductory physics course on the occasion of the retirement of robert resnick , ed .j. wilson , 113 - 124 , wiley , ny , ( 1997 ) . for example , see page 291 in the chapter on t - test assumptions and robustness " .g. glass and k. hopkins , statistical methods in psychology and education " 3rd edition , needham heights , ma , allyn and bacon , 1996 .r. j. marzano , r. s. brandt , c. s. hughes , b. f. jones , b. z. presseisen , s. c. rankin , and c. suhor , _ dimensions of thinking : a framework for curriculum and instruction _ , alexandria , va : association for supervision and curriculum development , ( 1988 ) .a. collins , j. s. brown , and s. e. 
newman , _ cognitive apprenticeship : teaching the crafts of reading , writing and mathematics _ , in l. b. resnick ( ed . ) , knowing , learning , and instruction : essays in honor of robert glaser , hillsdale , nj : lawrence erlbaum ., 453 - 494 , 1989 .figure 1 : illustration of the team structure at the three stages of peer - reflection activities . before the students voted in the third round using the clickers , the ta and utas critiqued each of the three solutions at the final stage .due to lack of space , only 3 team members per team are shown in round 1 but there were on average five members in each group in this round ( as generated by a computer program each week ) .the consolation prize winners in gray obtained 1/3rd of the course credit awarded to the winner ..means and p - values for comparisons of the daytime and evening classes during the year before peer reflection was introduced ( fall 2006 ) and during the year in which it was introduced ( fall 2007 ) .the following were the number of students in each group : fall 2006 daytime n=124 , evening n=100 , fall 2007 daytime n=107 , evening n=93 . [ cols="^,^,^,^ " , ]
we describe a study in which introductory physics students engage in reflection with peers about problem solving . the recitations for an introductory physics course with 200 students were broken into the " peer reflection " ( pr ) group and the traditional group . each week in recitation , students in the pr group reflected in small teams on selected problems from the homework and discussed why solutions of some students employed better problem solving strategies than others . the graduate and undergraduate teaching assistants ( tas ) in the pr group recitations provided guidance and coaching to help students learn effective problem solving heuristics . in the recitations for the traditional group , students had the opportunity to ask the graduate ta questions about the homework before they took a weekly quiz . the traditional group recitation quiz questions were similar to the homework questions selected for " peer reflection " in the pr group recitations . as one measure of the impact of this intervention , we investigated how likely students were to draw diagrams to help with problem solving . on the final exam with only multiple - choice questions , the pr group drew diagrams on more problems than the traditional group , even when there was no external reward for doing so . since there was no partial credit for drawing the diagrams on the scratch books , students did not draw diagrams simply to get credit for the effort shown and must value the use of diagrams for solving problems if they drew them . we also find that , regardless of whether the students belonged to the traditional or pr groups , those who drew more diagrams for the multiple - choice questions outperformed those who did not draw them .
diseases spread over networks . the spreading dynamics are closely related to the structure of networks . for this reason network epidemiology has turned into one of the most vibrant subdisciplines of complex network studies . a topic of great practical importance within network epidemiology is the vaccination problem : how should a population be vaccinated to most efficiently prevent a disease from turning into an epidemic ? for economic reasons it is often not possible to vaccinate the whole population . some vaccines have severe side effects and for this reason one may also want to keep the number of vaccinated individuals low . so if cheap vaccines , free of side effects , do not exist , then having an efficient vaccination strategy is essential for saving both money and lives . if all ties within the population are known , then the target persons for vaccination can be identified using sophisticated global strategies ( cf . ) ; but this is hardly possible for nation - wide ( or larger ) vaccination campaigns . in a seminal paper cohen _ et al . _ suggested a vaccination strategy that only requires a person to estimate which other persons he , or she , gets close enough to for the disease to spread to , i.e. , to name the `` neighbors '' in the network over which the disease spreads . for networks with a skewed distribution of degree ( number of neighbors ) the strategy to vaccinate a neighbor of a randomly chosen person is much more efficient than random vaccination . in this work we assume that each individual knows a little bit more about his , or her , neighborhood than just the names of the neighbors : we also assume that an individual can guess the degree of the neighbors and the ties from one neighbor to another . this assumption is not very unrealistic , as people are believed to have a good understanding of their social surroundings ( this is , for example , part of the explanation for the `` navigability '' of social networks ) . finding the optimal set of vaccinees is closely related to the attack vulnerability problem . the major difference is the dynamic system that is confined to the network : disease spreading for the vaccination problem and information flow for the attack vulnerability problem . to be able to protect the network efficiently one needs to know the worst case attacking scenario . large scale network attacks are , presumably , based on local ( rather than global ) network information . so , a grave scenario would be if the network were attacked with the same strategy that is most efficient for vaccination . we will use the vaccination problem as the framework for our discussion , but the results apply to network attack as well . in our discussion we will use two measures of network structure : the _ clustering coefficient _ of the network , defined as the ratio of triangles to connected triples normalized to the interval [ 0 , 1 ] , and the _ assortative mixing coefficient _ , measuring the correlation between the degrees of adjacent vertices . the strategies are tested on two real - world networks ( the arxiv and prison networks of the table below ) and on three model networks : 1 . the model of holme and kim ( hk ) , generating networks with a power - law degree distribution and a parameter controlling the clustering . we will use and giving the maximal clustering for the given and . 2 . the networked seceder model , modeling social networks with a community structure and exponentially decaying degree distributions . briefly , it works by sequentially updating the vertices by , for each vertex , rewiring all its edges to the neighborhood of a peripheral vertex .
with a probability an edge is instead rewired to a random vertex ( so this probability controls the degree of community structure ) . we use the parameter values , and iterations on an erdős - rényi network . 3 . the watts - strogatz ( ws ) model generates networks with exponentially decaying degree distributions and tunable clustering . the ws model starts from the vertices on a circular topology with edges between vertices separated by 1 to steps on the circle . then one goes through the edges and rewires one side of them to randomly selected vertices with a probability . we use and . .statistics of the networks . note that the arxiv , prison and seceder model networks are not connected ; the largest connected components contain , and nodes respectively . now we turn to the definition of the strategies . we assume a fraction of the population is to be vaccinated . as a reference we consider random vaccination ( rnd , equivalent to site percolation ) . we use the above mentioned _ neighbor vaccination _ ( rnb ) to vaccinate the neighbor of randomly chosen vertices , and the trivial improvement if knowledge about the neighbors ' degrees is included : pick a vertex at random and vaccinate one ( randomly chosen ) of its highest - degree neighbors ( we call it deg ) . to avoid overvaccination of a neighborhood one can consider vaccinating the neighbor of a vertex with a maximal number of edges out of that vertex 's neighborhood ( out ) . for all strategies except rnd we also consider `` chained '' versions where one , instead of vaccinating a neighbor of a randomly chosen vertex , vaccinates a neighbor of the vertex vaccinated in the previous time step ( if all neighbors are vaccinated a neighbor of a random vertex is chosen instead ) . for the acronyms of the chained versions a suffix `` c '' is added . the results of this paper are presented in three sections : first we study how the number of vertices in the largest connected subgraph depends on the fraction of vaccinated vertices . then we show that the conclusions from also hold for dynamical simulations of disease spreading . to interpret the results we also investigate , for a fixed , as a function of the clustering and assortative mixing coefficients . as a static efficiency measure we consider the size of the average largest connected component of susceptible ( non - vaccinated ) vertices , . we average over runs of the vaccination procedures . the model networks are also averaged over network realizations . ( smaller or larger and does not make any qualitative difference . ) in fig . [ fig : s1 ] we display as a function of . for all except the ws model network the deg and out ( chained and unchained versions ) form the most efficient set of strategies . within this group the order of efficiency varies : for the arxiv network the out strategy is more than twice as efficient as any other for .
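for concreteness , the neighborhood - based selection rules defined above can be sketched with the networkx library as follows ( our illustration ; the function names and the arbitrary tie - breaking are ours , and the graph is assumed to have no isolated vertices ) :

```python
import random
import networkx as nx  # graphs below are plain networkx graphs

def deg_choice(g, v, vaccinated):
    """deg rule: the unvaccinated neighbor of v with the highest degree."""
    candidates = [u for u in g.neighbors(v) if u not in vaccinated]
    return max(candidates, key=g.degree) if candidates else None

def out_choice(g, v, vaccinated):
    """out rule: the unvaccinated neighbor of v with most edges out of v's neighborhood."""
    nbhd = set(g.neighbors(v)) | {v}
    candidates = [u for u in g.neighbors(v) if u not in vaccinated]
    links_out = lambda u: sum(1 for w in g.neighbors(u) if w not in nbhd)
    return max(candidates, key=links_out) if candidates else None

def vaccinate(g, f, choose, chained=False):
    """vaccinate a fraction f of the vertices; chained=True reuses the last vaccinee as v."""
    vaccinated, target = set(), int(f * g.number_of_nodes())
    v = random.choice(list(g.nodes))
    while len(vaccinated) < target:
        u = choose(g, v, vaccinated)
        if u is None:                       # neighborhood exhausted: restart from a random vertex
            v = random.choice(list(g.nodes))
            continue
        vaccinated.add(u)
        v = u if chained else random.choice(list(g.nodes))
    return vaccinated
```

for instance , ` vaccinate(g, 0.05, out_choice, chained=True) ` corresponds to the outc strategy , and ` chained=False ` to out .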
for the hk and seceder model networks the chained strategies are considerably more efficient than the unchained ones . we note that the difference between the chained and unchained versions of out and deg is bigger than between out and deg ( or outc and degc ) . out does converge to deg in the limit of vanishing clustering , but all networks we test have rather high clustering . another interesting observation is that even if the degree distribution is narrow , such as for the seceder model of fig . [ fig : s1](e ) , the more elaborate strategies are much more efficient than random vaccination . this is especially clear for higher fractions of vaccinated vertices , which suggests that the structural change of the network of susceptible vertices during the vaccination procedure is an important factor for the overall efficiency . for the ws model network the chained algorithms perform worse than random vaccination . this is in contrast to all other networks . we conclude that epidemiology related results regarding the ws model networks should be cautiously generalized to real - world systems . static measures of vaccination efficiency are potential over - simplifications : there is a chance that the interplay between disease dynamics and the underlying network structure plays a significant role . to motivate the use of this static measure we also investigate the sis and sir models on vaccinated networks . in the sis model a vertex goes from `` susceptible '' ( s ) to `` infected '' ( i ) and back to s. the sir model is just the same , except that an infected vertex goes to the `` removed '' ( r ) state and remains there . the probability to go from s to i ( per contact ) is zero for vaccinated vertices and for the rest . the i state lasts time steps . we use synchronous updating and one randomly chosen initially infected person . the disease dynamics are averaged times for all runs of the vaccination schemes . in fig . [ fig : dyn](a ) we plot , for the arxiv network , the average number of individuals that at least once have been infected during an outbreak , i.e. , until there are no i - vertices left , or ( for sis ) until the system has reached an endemic state ( defined in the simulations as when there are no susceptible vertices that have not had the disease at least once ) . other networks and simulation parameters give qualitatively similar results . qualitatively , the large picture from the calculations remains : the chained and unchained deg and out strategies are very efficient , and the chained versions are more efficient than the unchained . a difference is that the unchained rnb also performs rather well . quantitatively , the differences between the strategies are huge ; this is a result of the threshold behaviors of the sis and sir models . the conclusion of fig . [ fig : dyn ] ( and similar plots for other networks ) is that the order of the strategies ' efficiencies is largely the same as concluded from the -curves . but if high resolution is required , the measurement of network fragility has to be specific for the studied system .
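the sis / sir dynamics described above amount to a simple synchronous update loop ; a minimal sir sketch ( ours ; the infection probability and infectious period below are placeholders , not the values used in the paper ) is :

```python
import random

def sir_outbreak(g, vaccinated, p_infect=0.05, t_infectious=3):
    """synchronous sir run on graph g; returns how many vertices were ever infected."""
    state = {v: 0 for v in g.nodes}            # 0 susceptible, >0 infectious steps left, -1 removed
    seed = random.choice([v for v in g.nodes if v not in vaccinated])
    state[seed] = t_infectious
    ever_infected = {seed}
    while any(t > 0 for t in state.values()):
        newly = set()
        for v, t in state.items():
            if t > 0:                          # infectious vertex tries to infect its contacts
                for u in g.neighbors(v):
                    if state[u] == 0 and u not in vaccinated and random.random() < p_infect:
                        newly.add(u)
        for v in state:                        # advance the infection clocks synchronously
            if state[v] > 0:
                state[v] -= 1
                if state[v] == 0:
                    state[v] = -1              # removed
        for u in newly:
            state[u] = t_infectious
            ever_infected.add(u)
    return len(ever_infected)
```

averaging the returned outbreak size over many seeds and vaccination runs gives curves analogous to fig . [ fig : dyn ] ; the sis variant only differs in sending recovered vertices back to the susceptible state .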
to gain some insight into how the network structure governs the relative efficiencies of the strategies , we measure the static efficiency for varying assortative mixing and clustering coefficients . the results hold for other small values of the vaccinated fraction . we keep the size and degree sequence fixed to the values of the arxiv network . to perform this sampling we rewire pairs of edges , swapping their endpoints ( unless this would introduce a self - edge or multiple edges ) . to ensure that the rewiring realizations are independent we start with rewiring pairs of edges . then we go through pairs of edges randomly and execute only the changes that bring the current clustering or assortativity closer to its target value . when the values are within of their target values the iteration is halted . the results seen in fig . [ fig : rew ] show that , just as before , the out and deg strategies , chained or unchained , are the most efficient throughout the parameter space . the unchained versions are most efficient for . an explanation is that , for high assortativity , the chained versions will effectively only vaccinate the high - degree vertices ( which are grouped together for very high assortativity ) and leave chains of low - degree vertices unvaccinated . the clustering dependence plotted in fig . [ fig : rew](b ) shows that the unchained versions outperform the chained versions for . this is possibly a result of the chains , for combinatorial reasons , getting stuck in one part of the network . it is not an effect of biased degree - degree correlations , since fig . [ fig : rew](b ) remains essentially unaltered if the rewiring procedure is conditioned to a fixed assortativity . we note that the structure of the original arxiv network differs from that of the rewired networks . for example , at of fig . [ fig : s1](a ) out is 22% more efficient than outc , but in fig . [ fig : rew ] the out and outc curves differ very little . for the rnb strategy the chained version is better than the unchained throughout the range of and values . to summarize , we have investigated strategies for vaccination and network attack that are based only on the kind of neighborhood information that humans arguably possess and utilize . both static and dynamical measures of efficiency are studied . for most networks , regardless of the number of vaccinated vertices , the most efficient strategies are to choose a vertex and vaccinate its neighbor with the highest degree ( deg ) , or its neighbor with most links out of the vertex 's neighborhood ( out ) . the vertex can be picked either as the most recently vaccinated vertex ( chained selection ) or at random ( unchained selection ) . for real - world networks the chained versions tend to outperform the unchained ones , whereas this situation is reversed for the three types of model networks we study . we investigate the relative efficiency of chained and unchained strategies further by sampling random networks with a fixed degree sequence and varying assortative mixing and clustering coefficients . we find that the unchained strategies are preferable for networks with a very high clustering or strong positive assortative mixing ( larger values than seen in real - world networks ) . in ref . the authors propose the strategy to vaccinate a random neighbor of a randomly selected vertex . this strategy ( rnb ) requires less information about the neighborhood than deg and out do . thus the practical procedure gets simpler : one only has to ask a person `` name a person you meet regularly '' rather than `` name the acquaintance of yours who regularly meets the most people you are not acquainted with '' ( for out ) . ( `` meet regularly '' should here be replaced with some phrase signifying a high risk of infection transfer for the pathogen in question . )
on the other hand , if the information about the neighborhoods is incomplete , deg and out will , effectively , be reduced to rnb ( and thus not perform worse than rnb ) . to epitomize , choosing the people to vaccinate in the right way will save a tremendous amount of vaccine and spare many side - effect cases . the best strategy can only be selected by considering both the structure of the network the pathogen spreads over , and the disease dynamics . if nothing of this is known , the outc strategy is our recommendation : it is better than , or not much worse than , the best strategy in most cases . together with degc , outc is most efficient for low clustering and assortative mixing coefficients , which is the region of parameter space relevant for sexually transmitted diseases , the most interesting case for network - based vaccination schemes ( due to the well - definedness of sexual networks ) . the author is grateful for comments from m. rosvall and acknowledges support from the swedish research council through contract no . 2002 - 4135 .
we study how a fraction of a population should be vaccinated to most efficiently stop epidemics . we argue that only local information ( about the neighborhood of specific vertices ) is usable in practice , and hence we consider only local vaccination strategies . the efficiency of the vaccination strategies is investigated with both static and dynamical measures . among other things we find that the most efficient strategy for many real - world situations is to iteratively vaccinate the neighbor of the previous vaccinee that has most links out of the neighborhood .
the differential emission measure ( dem ) diagnostic technique offers crucial information about the thermal structuring of the solar and stellar atmospheres , providing a measure of the temperature distribution of the plasma along the line of sight ( los ) .however , to derive the dem from a set of observations is a complex task , due to the inverse nature of the problem , and the understanding of its robustness and accuracy is still relevant today ( e.g. * ? ? ?* ; * ? ? ?spectrometers are by nature better suited to dem analysis than broad band imagers .but , because these latter generally offer a higher signal to noise ratio over a larger field of view ( fov ) , dem codes have nevertheless been applied to the three coronal bands of the extreme - ultraviolet imaging telescope ( eit ) or the transition region and coronal explorer ( trace , * ? ?however , these instruments were shown not to constrain the dem enough to reach conclusive results . in recent years , the multiplication of passbands in instruments such as the x - ray telescope ( xrt ) on _ hinode _ and the atmospheric imaging assembly ( aia ) telescope has brought new prospects to reliably estimate the dem simultaneously over a large fov .case studies of the properties of the inversion using these instruments have been published by e.g. , or .building on these results , the central objective of the work presented in this series of papers is to provide a systematic characterization of the dem reconstruction problem to assess both its accuracy and robustness . using our technique ,the capabilities of a given instrument can be evaluated , and new tools facilitating the dem interpretation are presented .we illustrate our methodology in the specific case of the six coronal bands of aia , but the same principle can be applied to any set of broad band or spectroscopic measurements . initially introduced for element abundance measurements ,then further developed by , e.g. , and , the dem formalism has been extensively used in the past decades , on most types of coronal structures , such as polar coronal holes , polar plumes ( e.g. * ? ? ?* ) , streamers ( e.g. * ? ? ?* ) , prominences ( e.g. * ? ?? * ; * ? ? ?* ) quiet sun ( e.g. * ? ? ?* ; * ? ? ?* ) , bright points or active regions ( e.g. * ? ? ?the thermal structuring of the stellar coronae is also investigated using the dem analysis ( e.g. * ? ? ?in particular , the dem is one of the tools commonly used to study the thermal stability of the coronal structures just mentioned , and to diagnose the energy source balancing the observed radiative losses .for example , it can help to discriminate between steady or impulsive heating models predicting different loop thermal structures ( see e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?one of the approaches is to establish the cross field thermal structure of resolved loops which is then compared to the dem simulated for unresolved multi - stranded and monolitic loops , impulsively or steadily heated . butreliably inferring the dem from observations has proved to be a genuine challenge .the fundamental limitations in the dem inversion have been discussed by , e.g. , , including measurement noises , systematic errors , the width and shape of the contribution functions , and the associated consequences of multiple solutions and limited temperature resolution .many dem inversion algorithms have been proposed to cope with these limitations , each with its own strengths and weaknesses ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? 
?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . in parallel to these developments ,authors have been attentive early on to estimate the accuracy of the inversions ( e.g. * ? ? ?* ) , eventually comparing several algorithms ( e.g. * ? ? ?. due to the intrinsic underconstraint of inverse problems and to the inevitable presence of random and systematic measurement errors , multiple physical solutions consistent with the observations exist , even if mathematical uniqueness and stability can be ensured via , e.g. , regularization .it is nevertheless possible to quantify the amount of knowledge , or ignorance , on the physical parameter of interest by rigorously defining levels of confidence in the possible solutions or classes of solutions that can explain the observations within the uncertainties .this is a desirable feature for any inversion scheme if it is to be able , for example , to discriminate or even to define , isothermality and multithermality .in this perspective , we developed a technique to systematically explore the whole space of solutions , in order to determine their respective probabilities and quantify the robustness of the inversion with respect to plasma parameters , random and systematic errors .we used data simulated with simple dem forms to systematically scan a wide range of plasma conditions , from isothermal to broadly multithermal , and several inversion hypotheses . comparing the dem solutions to the input of the simulations ,it is possible to quantify the quality of the inversion . following this strategy ,we are able to completely characterize the statistical properties of the inversion for several parametric dem distributions .we argue that even though the specifics may vary , the main conclusions concerning the existence of multiple solutions and the ability to distinguish isothermality from multithermality also apply to more generic forms of dem distributions . in this first paper, we focus on the response of aia to isothermal plasmas . the properties of the isothermal inversion thus observed will serve as building blocks for the interpretation of the more complex dem solutions studied in the second paper ( hereafter paper ii ) .section [ sec_2 ] describes the general methodology and the practical implementation in the case of aia , including the data simulation , the inversion scheme , the sources of random and systematic errors , and the different dem distribution models considered .results for isothermal plasmas are presented and discussed in section [ sec : iso_response ] .a summary introducing the treatment of more generic dem forms is given in conclusion .under the assumption that the observed plasma is optically thin , integration along the line of sight ( los ) of collisional emission lines and continua produces in the spectral band of an instrument an intensity where , the response of the instrument to a unit volume of plasma of electron number density and temperature , is given by the first term of the right member accounts for each spectral line of each ionic species of abundance , and the second term represents the contribution of the continua . is the spectral sensitivity of the band of the instrument .the respective contribution functions and of the lines and continua contain the physics of the radiation emission processes ( e.g. * ? ? ?* ) and can be computed using the relevant atomic data . 
as long as one considers total line intensities , equations ( [ eq_1 ] ) and ( [ eq_2 ] ) are generic and apply to imaging telescopes as well as to spectrometers .summarizing the original reasoning of , since the function is generally weakly dependent on the density and is peaked with temperature , gives a measure of where the integration is now limited to the portions of the los where the temperature is such that significant emission is produced .if measurements are available at several wavebands , it is possible to plot as a function of the bands peak temperatures .generalizing this logic into a differential form , and assuming that the element abundances are constant , equation ( [ eq_1 ] ) can be reformulated as where is the dem , that provides a measure of the amount of emitting plasma as a function of temperature ) .the dem can also be defined in linear scale as .there is a factor between the two conventions . ] .as demonstrated by , is the mean square electron density over the regions of the los at temperature , weighted by the inverse of the temperature gradients in these regions . the total emission measure ( em )is obtained by integrating the dem over the temperature solving the dem integral equation ( [ eq_3 ] ) implies reversing the image acquisition , los integration and photon emission processes to derive the distribution of temperature in the solar corona from observed spectral line intensities. we will now investigate the properties of this inversion .let us consider a plasma characterized by a dem .the corresponding intensities observed in spectral bands are noted . in order to solve the dem inverse problem - estimating from the observations -one uses a criterion that defines the distance between the data and the theoretical intensities computed using equations ( [ eq_2 ] ) and ( [ eq_3 ] ) for any dem . by definitionthe dem solution of the inversion is the one that minimizes this criterion : since the are affected by measurement noises and the by systematic errors in the calibration and atomic physics , the inversion can yield different solutions of probabilities for a given dem of the plasma .bayes theorem then gives which is the conditional probability that the plasma has a dem knowing the result of the inversion . is the total probability of obtaining whatever . in the bayesian framework, is called the _prior_. it is uniformly distributed if there is no _ a priori _ information on the dem of the plasma .conversely , _ a priori _ knowledge or assumptions on the plasma are represented by a varying .for example , zero probabilities can be assigned to non physical solutions . contains all the information that can be obtained from a given set of measurements on the real dem of the plasma and as such , it is a desirable quantity to evaluate . indeed ,if the dem is to be used to discriminate between physical models , as it is for example the case in the coronal heating debate , finding a solution that minimizes the criterion is necessary , but it is not sufficient .it is also crucial to be able to determine if other solutions are consistent with the uncertainties , what are their respective probabilities , and how much they differ from each other . in principle , and without _ a priori _ on the plasma , and thus can be estimated for any minimization scheme using monte - carlo simulations . 
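the monte - carlo estimate described in the next paragraph reduces , schematically , to the following sketch ( ours ; ` simulate_bands ` stands for the forward model plus noise and systematics , ` invert ` for the criterion minimization , and the binning is arbitrary ) :

```python
import numpy as np

def solution_probability(dem_params, simulate_bands, invert, n_mc=5000, bins=10):
    """empirical distribution of the inverted parameters for one 'true' plasma dem.

    simulate_bands(dem_params) -> noisy band intensities for one realization of the
    photon/read noise and of the systematic (calibration + atomic physics) errors.
    invert(intensities)        -> best-fit dem parameters (argmin of the criterion).
    """
    solutions = np.array([invert(simulate_bands(dem_params)) for _ in range(n_mc)])
    hist, edges = np.histogramdd(solutions, bins=bins, density=True)
    return hist, edges   # approximates p(solution | plasma dem); scanning dem_params gives the full maps
```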
for each ,the observed are simulated using equations ( [ eq_2 ] ) and ( [ eq_3 ] ) and adding photon and instrumental noises .systematic errors are incorporated to the and the resulting criterion is minimized . is then evaluated from the solutions corresponding to realizations of the random variables .but since several can potentially yield the same , the derivation of from equation ( [ eq_6 ] ) requires to know , the probability to obtain whatever .this is generally not possible , for it requires the exploration of an infinite number of plasma dems .this is why dem inversion research often focuses on the minimization part of the problem , being supposed to be well behaved because of the proper choice of _ prior _ and the multiplication of passbands or spectral lines .however , can be computed if the dem of the plasma can be described by a limited number of parameters . in this case, one can scan the whole parameter space and use the monte - carlo simulations to estimate for all possible .the possibility that multiple yield an identical inversion solution being now taken into account , one can determine and thus derive from equation ( [ eq_6 ] ) .this limitation of the complexity of the dems that can be considered corresponds to adopting a non - uniform _ prior _ , while probabilistic treatments were justly developed with the opposite objective of relaxing such non - physical assumptions ( e.g. the mcmc method of * ? ? ?but rather than the development of a generic dem inversion method , our objective is to study the behaviour of in controlled experiments . andif the parameterization is properly chosen , the can still represent a variety of plasma conditions , from isothermal to broadly multithermal .in addition , we did not make any assumption on the number and properties of the spectral bands , nor on the definition of the criterion nor on the algorithm used to minimize it .the method described to compute can therefore be used to characterize any inversion scheme in the range of physical conditions covered by the chosen distributions .devising an efficient way to locate the absolute minimum of the criterion is not trivial .for example , without further assumption , its definition alone does not guarantee that it has a single minimum , so that iterative algorithms may converge to different local minima depending on the initial guess solution .furthermore , if the value of the minimum itself is a measure of the goodness of fit , it does not provide information on the robustness of the solution .how well the solution is constrained is instead related to the topography of the minimum and its surroundings ; the minimum may be deep or shallow and wide or narrow with respect to the different parameters describing the dem curve .the number of dems resulting in significantly different sets of intensities within the dynamic range of an instrument is potentially extremely large .however , a systematic mapping of the criterion aimed at revealing its minima and their topography is possible if the search is restricted to a subclass of all possible dem forms . indeed ,if the dem is fully determined by a limited number of parameters , one can regularly sample the parameter space and compute once and for all the corresponding theoretical intensities .the criterion , i.e. 
the distance between the and the measured , is thus computable as a function of the dem parameters for any given set of observations .it is then trivial to find its absolute minimum and the corresponding dem solution , or to visualize it as a function of the dem parameters .the procedure used to compute is summarized in figure [ fig : method ] .the parametric dem forms are described in section [ sec : dem_models ] .the intensities observed in bands are the sum of average intensities and random perturbations due to photon shot noise and measurement errors the are equal to the theoretical intensities in the case of a hypothetically perfect knowledge of the instrument calibration and atomic physics . in practice however , the are affected by systematic errors since there is no way of knowing whether the intensities that can be computed from equations ( [ eq_2 ] ) and ( [ eq_3 ] ) for any dem are overestimated or underestimated , we identify them , in which case we obtain the by adding systematic errors .the only difference between the two conventions is the sign of .the criterion and therefore the results are identical in both cases . ] to the reference theoretical intensities .the distributions of random and systematic errors are discussed in section [ sec : uncertainties ] .the detail of the calculation of the is given in section [ sec : theoretical_intensities ] .from these , we can either simulate observations by adding measurement noises ( equation ( [ eq : iobs ] ) ) , or obtain various estimates of the by adding perturbations representing the systematics ( equation ( [ eq : ith ] ) ) .the criterion and the corresponding minimization scheme are described in section [ sec : criterion ] . for any plasma dem , monte - carlo realizations of the noises and systematics yield several estimates , from which we compute .finally , is obtained after scanning all possible plasma dems ( section [ sec : monte - carlo ] ) .ensuing the discussions of sections [ sub_sec_2_2 ] and [ sub_sec_2_3 ] , the and are both constrained to belong to one of the three following classes of dem distributions defined by two or three parameters : * isothermal where the dem is reduced to a dirac function centred on the temperature . is the total emission measure defined by equation ( [ eq_4 ] ) .* gaussian in the plasma is here predominantly distributed around a central temperature with a width . * top hat in the plasmais uniformly distributed over a width around .there is no reason for the solar plasma to follow one of these distributions , nor are they the only possible choices .but even though they are simple enough to allow a detailed analysis of the properties of the dem inversion , they can nonetheless represent a variety of plasma conditions .the conclusions drawn can therefore help understand the behaviour of more generic dem forms .furthermore , since the class of solution dems does not have to be the same as that of the plasma dems , it is possible to investigate the impact of a wrong assumption on the shape of the dem .for example , one can compute for isothermal solutions while the plasma dem is multithermal ( see paper ii ) .equations ( [ eq_2 ] ) and ( [ eq_3 ] ) are used to compute the reference theoretical intensities for any dem .they are then used to form both simulated observations and various estimates of the theoretical intensities with equations ( [ eq : iobs ] ) and ( [ eq : ith ] ) . 
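since the reference intensities are the band responses convolved with the assumed dem , pre - computing them on the parameter grid is straightforward ; a sketch for a dem that is gaussian in log t ( our illustration with placeholder names , assuming the band responses are tabulated on a regular log t grid ; the paper 's own normalization conventions may differ ) :

```python
import numpy as np

def gaussian_dem_intensities(logt, responses, logt_c, sigma, em):
    """theoretical band intensities for a gaussian dem.

    logt      : regular grid of log10 temperature
    responses : array (n_bands, n_logt) of per-unit-emission-measure band responses
    logt_c    : dem centre, sigma : dem width, em : total emission measure
    """
    shape = np.exp(-((logt - logt_c) ** 2) / (2.0 * sigma ** 2))
    dem = em * shape / np.trapz(shape, logt)      # normalize so the dem integrates to em
    dlogt = logt[1] - logt[0]
    return responses @ dem * dlogt                # discretized integral of r_b(t) * dem(t)
```

looping this over the ( centre , width , emission measure ) grid yields the pre - computed intensity cubes described below .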
from equations ( [ eq : dem_iso ] ) , ( [ eq : dem_gauss ] ) and ( [ eq : dem_hat ] ) , we derive the expressions of these reference intensities as a function of the parameters , and for the three types of dem distributions . *isothermal * gaussian * top hat we note that in all cases , the reference theoretical intensities are equal to the convolution product of the instrument response function by the chosen dem . the are pre - computed for all possible combinations of parameters , , and .the appropriate range and resolution to be used for each parameter can be determined from plausible plasma properties and taking into account the instrument characteristics .the responses of the six aia coronal bands are computed using equation ( [ eq_2 ] ) .the contribution functions are obtained using the version 7.0 of the chianti atomic database .we used the chianti ionization balance and the extended coronal abundances .the summation is extended over the 5 nm to 50 nm spectral range for all bands .the instrument sensitivity is obtained as a function of wavelength in units of by calling the function ` aia_get_response ` provided in the aia branch of the interactive data language ( idl ) _ solar software _( ssw ) package with the ` /dn ` , ` /area ` and ` /full ` keywords .this function implements the aia pre - flight calibration as described in .since photon shot noise must be taken into account in the error budget ( section [ sec : uncertainties ] ) , the must be computed for given exposure times and not per second .we used the standard aia exposures of 2 s for the 17.1 nm and 19.3 nm bands , and 2.9 s for the others .the contribution functions are computed using chianti from to in steps of , oversampling the chianti grid by a factor 10 using cubic spline interpolations .the emission measure varies over a wide range from to in steps of .the dem width varies linearly in 80 steps from to .this choice of sampling leads to pre - computing groups of 6 aia intensities , which represents easily manageable data cubes .uncertainties due to random and systematic errors are at the heart of the problem of the dem inversion .the two affect the observations and their interpretation in different manners ( see e.g. * ? ? ?observations are mostly affected by random errors caused by both poisson photon shot noise and nearly gaussian detection noises like thermal and read noise .these noises vary randomly from pixel to pixel and from exposure to exposure . on the other hand ,the errors made on the calibration and atomic physics systematically skew the interpretation of all observed intensities by the same amount and in the same direction .it is possible to realistically simulate in the the statistical properties of the noises affecting the data .the reference intensities have units of digital numbers ( dn ) .the number of electrons collected in each pixel over the exposure time is obtained by multiplying these values by the gains ( in ) of the detectors analog to digital converters listed in ssw .the number of detected photons is then obtained by dividing the result by the quantum yield of the detector , _ i.e. 
_ the number of photoelectrons produced per interacting photon where 3.65 is the energy in ev required to create an electron hole pair , is the elementary charge , is the speed of light in vacuum and is planck s constant .note that in this calculation we assume that all interacting photons have the same wavelength .however , since the full width at half maximum of the aia bands is comprised between 0.2 and 1.0 nm , the error made is only a few . ] .these photon intensities are then perturbed by poisson noise and converted back to photoelectrons .22 rms of gaussian ccd read noise are finally added before conversion to dn .determining the statistical properties of the systematic errors is more challenging .the tabulated calibration and atomic physics provides a single estimate of the instrument response , but systematics nonetheless have a probability distribution .indeed , the calibration is the result of laboratory measurements themselves affected by random and systematic errors .if we could recalibrate the instrument a number of times in different facilities we would obtain a distribution of instrumental sensitivities , the adopted calibration corresponding to one of them .likewise , different atomic physics codes will give different estimates of the contribution functions , the chianti output being one of them .it is however difficult to characterize these two probability distributions .they are generally implicitly assumed to be gaussian and the adopted values to be the most probable .but the distributions may in fact be uniform , or asymmetric , or biased , _etc_. the calibration involves a complex chain of measurements , the uncertainties of which are difficult to track and estimate .after independent radiometric calibrations , comparable euv instruments on soho were found to agree only within about 25% .subsequent comparisons could not resolve the discrepancies nor identify their origin in random errors or biases in the individual calibrations .we can only say that the adopted calibration of every soho instrument introduces a systematic error in the data analysis but without being able to tell how much and in what direction .it is likely that inter - calibration between aia and other instruments would run into similar limitations .errors in the contribution functions are a major contributor to the uncertainties ( e.g. * ? ? ?* ; * ? ? ?* ) . since the properties of the known atomic transitions are derived either from measurements or modelling , they are not infinitely accurate .missing transitions lead to underestimated contributions functions , as it is the case for the 9.4 nm channel of aia ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?the abundances are affected by about 10% uncertainties , not taking into account possible local enhancements of high fip elements .these imply that , at least in some cases , the abundances are not constant along the line of sight , as assumed in the dem analysis. the plasma may not be in ionization balance , in which case the chianti calculations of transition rates are not valid .the response functions of aia are also not independent from the electron number density , which is one of the assumptions made in deriving the dem expression from equations ( [ eq_1 ] ) to ( [ eq_3 ] ) . when using spectrometers , the spectral lines are chosen so that this hypothesis is effectively verified .we plot in figure [ fig : g - vs - n ] the normalized maximum of versus electron number density . in the aia field of view, can vary from about in coronal holes at ( e.g. * ? ? 
?* ) to about in dense coronal loops ( e.g. * ? ? ?* ) . in this range ,only the 9.4 nm band ( solid line ) is completely independent on the density .the response function of all other bands decreases as the density increases , the variation reaching about 35% for the 17.1 nm band ( short dashed line ) .since the contribution functions have to be computed for a constant electron number density ( we chose ) , they are respectively under or over - estimated if the observed structures are more or less dense .the impact can be mitigated if one has independent knowledge of the range of densities on the los , but it nonetheless represents an additional source of uncertainty compared to using density insensitive spectral lines .finally , these various sources of uncertainties do not affect all spectral bands by the same amount . rigorously estimating the properties of the probability distributions of the systematic errors would thus require a detailed analysis of the calibration process and of the atomic physics data and models that is beyond the scope of this paper .in these conditions , we make the simplifying assumption that all systematics are gaussian distributed and unbiased . according to , uncertainties on the pre - flight instrument calibration are of the order of 25% .this is thus interpreted as a gaussian probability distribution centred on the tabulated values with a 25% standard deviation .likewise , we used 25% uncertainty on the atomic physics for all bands , typical of the estimates found in the literature .calibration and atomic physics uncertainties were added quadratically for a net 35% uncertainty on the response functions .the are thus obtained by adding gaussian random perturbations to the . sinceinstrumental noises and systematic errors are assumed to be gaussian distributed , we use a least square criterion normalized to the total standard deviation of the uncertainties in each band . is obtained by summing quadratically the standard deviations of the four individual contributions : photon noise , read noise , calibration and atomic physics ( section [ sec : uncertainties ] ) .the value of the minimum of corresponding to the solution is noted from equations ( [ eq : ith ] ) and ( [ eq : iobs ] ) we get if the family of solutions ( dirac , gaussian or top hat ) is identical to that of the plasma dem , then in the absence of noise and the solution given by equation ( [ eq_5 ] ) is strictly equal to .however , in the presence of random and systematic errors or if the assumed dem form differs from that of the observed plasma , is not likely to be zero and the corresponding may be different from , for random fluctuations of and can compensate a difference between and . as discussed in section [ sec : fit_test ] , properly interpreting the value of provides a means of testing the pertinence of a given dem model .folding equation ( [ eq : ith_iso ] ) , ( [ eq : ith_gauss ] ) or ( [ eq : ith_hat ] ) into equation ( [ eq : criterion ] ) , we obtain the expression of for the corresponding dem distributions .given a set of observed intensities and a dem model , the criterion can therefore be easily computed for all possible combinations of the parameters , , and using the tabulated as described in section [ sec : theoretical_intensities ] .finding its minimum and thus the solution is simplified to the location of the minimum of the matrix .this minimization scheme is not fast compared to , e.g. 
, iterative gradient algorithms , but it ensures that the absolute minimum of the criterion is found whatever its topography .furthermore , this operation can be efficiently implemented on the graphics processing units ( gpu ) of modern graphics cards by using their cuda capability .we implemented a scheme in which each gpu core is in charge of computing an element of the matrix , with all gpu cores running in parallel .the search of the minimum of is also performed by the gpu , thus reducing the transfers between gpu to cpu to the values of and .restricting and to belong to one of the dem classes described in section [ sec : dem_models ] , and are evaluated from monte - carlo simulations . for every combination of the two or three parameters defining ( the ranges and resolutions being given in [ sec : dem_models ] ) , 5000 independent realizations of the random and systematic errors are obtained . for each of the corresponding sets of six simulated aia intensities , the inversion code returns the values of the parameters defining ( equation ( [ eq_5 ] ) ) corresponding to the absolute minimum of the criterion ( equation ( [ eq : criterion ] ) ) . from the resulting 5000 estimate the conditional probability with a resolution defined by the sampling of the parameters .integration over gives and using bayes theorem we obtain .in order to understand the fundamental properties of the dem inversion of the aia data , we first applied the method to investigate the behaviour of the isothermal solutions to simulations of isothermal plasmas . the electron temperatures and emission measures of the plasmas are noted and respectively .the corresponding inverted quantities are noted and .the probabilities and are stored in matrices of dimension 4 . to maximize the clarity of the results , and since the thermal content of the plasma is the main object of dem analysis , we reduce the number of dimensions by fixing the emission measure of the simulated plasmas to be .furthermore , the probabilities are always presented whatever the emission measure by integrating them over , even though is of course solved for in the inversion process .the chosen is typical of non flaring active regions ( e.g. * ? ? ?figure [ fig : valid_bands_iso ] shows as a function of and the number of aia bands in which a plasma produces more than 1 dn ( detection threshold ) and less than 11000 dn ( saturation ) .the left panel is for isothermal plasmas , the right panel for gaussian dems with . at the chosen , andsince we did not implement the detector saturation in our simulations , we always have exploitable signal in all six aia coronal bands , except below a few k. conversely , solar structures outside the white areas produce signal only in some of the six bands , unless spatial or temporal summation is used .therefore , the results presented in the following sections correspond to optimum conditions outside of which the combination of higher noise and possible lower number of valid bands will always lead to weaker constraints on the dem .we first present inversion results using only three bands as an illustration of the situation encountered with previous euv imaging telescopes like eit , trace or euvi .the 17.1 nm and 19.5 nm coronal passbands of eit and trace have direct equivalents in aia , but the 28.4 nm band does not .after comparison of its isothermal response ( see , e.g. , figure 9 of * ? ? ? 
* ) with those of aia ( figure [ fig : aia_iso_response ] ) , we chose the 21.1 nm band as its closest aia counterpart .the three bands configuration is also similar to having six bands and a low emission measure plasma . indeed , at and k, values typical of coronal loops , only three of the six aia coronal bands produce more than 1 dn ( see figure [ fig : valid_bands_iso ] ) , the others providing only upper limit constraints to the dem .panel ( a ) of figure [ fig:3bands ] shows a map of the probability and . ]it is worth noting that , as explained in section [ sub_sec_2_2 ] , and thus could be evaluated only because the limitation to simple parameterized dem forms allowed the computation of .the plot of ( and thus the horizontal structures in panel ( a ) ) shows that some temperature solutions are more probable than others for any plasma temperature . in the case of real observations ,this can be misinterpreted as the ubiquitous presence of plasma at the most likely temperatures .this caveat was already analysed by in the case of the 19.5 to 17.3 nm trace band ratio and we will discuss it further in paper ii for multithermal plasmas . both probability maps exhibit a diagonal from which several branches bifurcate .below k and above k the diagonal disappears because since the bands have little sensitivity in these regions , the signal is dominated by noises and the inversion output is thus independent from the temperature . the general symmetry with respect to the diagonal reflects the equality .the diagonal is formed by the solutions that are close to the input , while the branches correspond to significant deviations from the input . in ,these branches imply that two or more solutions can be found for a same plasma temperature .conversely , reading horizontally the image , a given temperature solution can be coherent with two or more plasma temperatures .the ( b ) and ( c ) plots give the probability of the solutions for two plasma temperatures . at , the solution may be k or k. at the typical coronal temperature k , the inversion can yield k but also k or k. it is thus possible to incorrectly conclude to the presence of cool or hot coronal plasma while observing an average million degree corona .this ambiguity has far reaching implications since the detection of hot plasma is one of the possible signatures of nano - flares ( e.g. * ? ? ? * ; * ? ? ? * ) .since by definition they correspond to the absolute minimum of the criterion , all solutions are fully consistent with the data given the uncertainties .one or more of the multiple solutions can be rejected only based on additional independent _ a priori _ information .for example , the high temperature solution corresponds to an emission measure of ( right panel of figure [ fig : criterion ] ) , which is extremely high considering the present knowledge of the corona .if no such information is available however , both low and high temperature solutions can still be correctly interpreted as also compatible with a k plasma with the aid of the and probability profiles ( f ) and ( g ) .the reason for the formation of these branches is illustrated by figure [ fig : criterion ] . 
on both panels , the background image is the value of the criterion for a k plasma as a function of and .the absolute minimum of the criterion , the arguments of which are the inverted parameters and ( equation ( [ eq_5 ] ) ) , corresponds to the darkest shade of grey and is marked by a white plus sign .the criterion is the sum of three components , one per waveband ( equations ( [ eq : criterion ] ) and ( [ eq : criterion_dev ] ) ) .the three superimposed curves are the loci emission measure curves for each band , i.e. the location of the ( , ) pairs for which the theoretical intensities equal the measured ones . below the loci curves ,the criterion is almost flat because at lower emission measures the are much smaller than the constant .conversely , the criterion is dominated by the at high emission measures .the darkest shades of gray and thus the minimum of the criterion are located between these two regions .the two panels correspond to two independent realizations of the random and systematic errors .for each draw , the loci curves are randomly shifted along the axis around their average position . in the absence of errors , the three loci curves would cross in a single point at the plasma temperature , giving a criterion strictly equal to zero . in the left panel , with random and systematic errorsincluded , they do not intersect at a single point but the non - zero absolute minimum of , where they are the closest together , is around . however , the criterion has two other local minima , around k and around k , where two or three of the loci curves also bundle up . in the right panel , a different random draw shifts the curves closest together around the high temperature local minimum that thus becomes the new absolute minimum . for this plasma , the inversion thus yields solutions randomly located around the several local minima with respective probabilities given by the profile of figure [ fig:3bands](c ) .when scanning the plasma temperatures , the positions of the minima vary , thus building the branches in the probability maps . in addition , depending on their location the minima can be more or less extended along one or the other axes , which results in a varying dispersion around the most probable solutions .systematic errors are simulated with random variables while they are in fact identical for all measurements .thus , the computed does not give the probability of solutions for the practical estimates of the calibration and atomic physics .in reality the output of the inversion is biased towards one or the other of the multiple solutions , but we do not know whether the calibration and atomic physics are under or over - estimated . therefore , in order to deduce the probability that the plasma has a temperature from an inverted temperature , we must account for the probabilities of the systematics as defined in section [ sec : uncertainties ] .the randomization samples their distribution , which ensures that the estimated are the probabilities relevant to interpret .figure [ fig:6bands ] is the same as figure [ fig:3bands ] , but now including the six aia coronal bands in the analysis .some secondary solutions persist at low probabilities but compared to the three bands case , most of the solutions are now concentrated on the diagonal .this illustrates that the robustness of the inversion process increases with the number of bands or spectral lines .comparison with figure [ fig:3bands ] quantifies the improvement brought by aia over previous instruments . 
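the construction just described is straightforward to reproduce numerically . as a minimal sketch , the python fragment below evaluates the least - squares criterion of equation ( [ eq : criterion ] ) on a ( temperature , emission measure ) grid for an isothermal model , locates its absolute minimum and extracts the loci emission measure curves ; the gaussian - shaped response functions , the 35% uncertainty level , the omission of photon and read noise , and the grid resolutions are placeholders standing in for the tabulated aia quantities , not the values used in this paper .

```python
import numpy as np

# hypothetical isothermal response functions r_b(t) for six bands, tabulated on a log t grid
logt = np.linspace(5.0, 7.5, 251)
n_bands = 6
centers = np.linspace(5.8, 6.8, n_bands)
resp = np.exp(-0.5 * ((logt[None, :] - centers[:, None]) / 0.15) ** 2)   # (n_bands, n_t)

logem = np.linspace(25.0, 31.0, 241)
em = 10.0 ** logem

# simulated observed intensities for an isothermal plasma at (logt0, logem0)
rng = np.random.default_rng(0)
logt0, logem0 = 6.0, 29.0
i_true = 10.0 ** logem0 * resp[:, np.argmin(np.abs(logt - logt0))]
sigma = 0.35 * i_true                       # net 35% uncertainty (calibration + atomic physics only)
i_obs = i_true + sigma * rng.standard_normal(n_bands)

# least-squares criterion chi2(em, t) = sum_b ((i_obs_b - em * r_b(t)) / sigma_b)^2
i_th = em[:, None, None] * resp[None, :, :]                               # (n_em, n_bands, n_t)
chi2 = (((i_obs[None, :, None] - i_th) / sigma[None, :, None]) ** 2).sum(axis=1)

# the absolute minimum gives the isothermal solution (t_inv, em_inv)
iem, it = np.unravel_index(np.argmin(chi2), chi2.shape)
print("inverted:", logt[it], logem[iem], "chi2_min =", chi2[iem, it])

# loci emission measure curves: em_b(t) such that em_b(t) * r_b(t) = i_obs_b
loci = i_obs[:, None] / resp                                              # (n_bands, n_t)
```

scanning many independent draws of the perturbations with such a fragment reproduces the jumps of the absolute minimum between the local minima , i.e. the branches discussed above .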
neglecting the low probability solutions ,if independent _ a priori _ knowledge justifies the isothermal hypothesis , the six aia bands thus provide an unambiguous determination of the plasma temperature .the temperature resolution of the inversion can be estimated from the width at half maximum of the diagonal .it varies over the temperature range between 0.03 and 0.11 .it is of course be modified if we assumed different uncertainties on the calibration and atomic physics than the ones chosen in section [ sec : uncertainties ] .we tested the sensitivity of the temperature resolution to the level of uncertainties , from 10 to 55% .the higher the uncertainties , the lower the temperature resolution and the more probable the secondary solutions . for an estimated temperature of 1 mk, the temperature resolution of the inversion varies between 0.02 for 10% error and 0.08 for 55% error . in the worst case , for 55% errors , the temperature resolution decreases to for the temperature interval between 0.5 and 0.9 mk . at 1 mk ,the resolution is proportional to the uncertainty level with a coefficient of 0.15 ( ) .since by definition our method always finds the absolute minimum of the least square criterion of equation ( [ eq : criterion ] ) , the derived temperature resolution is an intrinsic property of the data and not of the inversion scheme .it is the result of the combination of the random and systematic errors and the shapes the contribution functions .its value is directly comparable with the findings of .these authors showed that the temperature resolution of the mcmc code of applied to isothermal plasmas is 0.05 log .their tests were made on simulated observations of a k plasma in 45 isolated spectral lines with 20% random errors .assuming that the mcmc method does converge towards solutions consistent with the limitations of the data , the fact that the temperature resolution is comparable for 6 aia bands and 45 spectral lines suggests that , in the isothermal case , it is driven by the uncertainties level rather than the number of observables .this conclusion is consistent with the isothermal limit of figure 6 of .the probability maps presented in the above sections are valid for a given hypothesis on the plasma dem distribution , but they would be useless without a test of its validity .the pertinence of the dem model chosen to interpret the observations can be assessed by analyzing the distribution of the sum of squared residuals defined by equations ( [ eq : criterion ] ) and ( [ eq : chi2 ] ) . if applying our inversion scheme to real data , we could compare the resulting residuals to the distribution derived from simulations for a given dem model and thus quantify the probability that the data is consistent with the working hypothesis ( e.g. 
isothermal or gaussian ) .the solid line histogram of figure [ fig : chi2_iso ] shows the distribution of values corresponding to the plots of figure [ fig:6bands ] .the distribution is close to a degree 4 distribution ( solid curve ) although not a perfect match , with a peak shifted to the left and an enhanced wing .the most probable value of the squared residuals is and % of them are comprised between 0 and 15 .whatever the actual plasma dem , any inversion made with the isothermal hypothesis and yielding a value in this range can thus be considered consistent with an isothermal plasma given the uncertainties .this isothermality test is similar to that recommended by , identifying our to their and our maximum acceptable to their .this does not imply however that isothermality is the only nor the best possible interpretation of the data , for different dems can produce similar values .the discrimination between dem models will be discussed in paper ii .the properties of the empirical distribution of squared residuals can be explained as follows . since we simulated observations of a purely isothermal plasma, an isothermal model can always represent the data .without errors , there would always be one unique couple ( , ) corresponding to six intensities perfectly matching the six aia observations , thus giving zero residuals . with errors ,if we forced the solution ( , ) to be the input ( , ) , the summed squared residuals resulting from a number of random draws should have by definition the probability density function ( pdf ) of a degree six distribution ( dotted curve of figure [ fig : chi2_iso ] ) , for we have six independent values of and we normalized the residuals to the standard deviation of the uncertainties .but since we solve for two parameters ( , ) by performing a least squares minimization at each realization of the errors , the solution is not exactly the input ( , ) and we should expect a pdf with two degrees of freedom less ( dashed curve ) . instead of being a pure degree 4 , the observed distributionis slightly shifted toward a degree 3 because of two factors .first , the errors are a combination of poisson photon noise and gaussian read noise , while the distribution is defined for standard normal random variables .second , as discussed below , the six residuals are not completely independent .figure [ fig : aia_iso_response ] shows the response functions of the aia bands to isothermal plasmas with electron temperatures from to k for a constant electron number density of . for each band ,the thick curve is the total response , and the labeled thin curves are the partial responses for the ions that contribute the most for at least one temperature . the fraction of the total response not accounted for by those dominant ions is shown below each main plot .ionization stages common to several bands are found across the whole range of temperatures .dominates the response at k in the 17.1 nm , 19.3 nm and 21.1 nm bands . around 1 mk ,is found in the 17.1 nm , 19.3 nm and 21.1 nm responses , and contributes to the 94 nm , 21.1 nm and 33.5 nm bands . 
at 2 mk , is common to the 21.1 nm and 33.5 nm bands .this is consistent with the analysis of the aia bands by .because of this redundancy , the response functions tend to have similar shapes in the regions of overlap , resulting in a correlation between the residuals .by restricting the solutions to functional forms described by a limited number of parameters , we obtained a complete statistical characterization of the dem inversion . even though they are not expected to accurately describe real coronal properties, these simple dem distributions can nonetheless model a wide range of plasma conditions .the results presented in this series of papers can thus be fruitfully used to demonstrate many important properties and guide the interpretation of the output of generic dem inversion codes .we illustrated the method by applying it to the six coronal bands of the aia telescope .in this first paper , we limited ourselves to isothermal plasmas and isothermal solutions .the case presented in section [ sec:3bands ] demonstrates the existence of multiple solutions if the number of bands is limited either by design of the instrument or by lack of signal .however , since our method provides the respective probabilities of the multiple solutions , it is possible to properly interpret the solutions as compatible with several plasma temperatures . even if some of these properties have been illustrated in case studies , we provide here a systematic analysis of a wide range of plasma parameters .the computed distribution of squared residuals can be used to test the coherence of real aia data with the isothermal hypothesis .this type of analysis can also be help to determine the optimum data acquisition parameters for aia ( e.g. spatial binning and exposure time ) ensuring that no secondary solution is present . in section [ sec:6bands ], we showed that , with enough signal , the six aia coronal bands provide a robust reconstruction of isothermal plasmas with a temperature resolution comprised between 0.03 and 0.11 .the comparison with the three bands case gives a quantification of the improvement brought by the new generation of instruments .the same method can be applied to other instruments with different response functions and different numbers of bands or spectral lines .this naturally requires the computation of the corresponding probability matrices and distribution of residuals .the temperature resolution , and more generally the details of the probability matrices presented in sections [ sec:3bands ] and [ sec:6bands ] , depend on the amplitude and distribution of the random and systematic errors .we found the resolution to be proportional to the uncertainty level ( at 1 mk , ) .we simulated plasmas with high emission measures typical of active regions . depending on the temperature , either the photon noise or the uncertainties on the calibration and atomic physics dominate .the illustrated properties of the inversion , from the multiplicity of solutions to the temperature resolution , are thus driven by both random systematic errors . while the photon noise can be reduced by increasing the exposure time or binning the data , reducing the systematicsrequires better atomic data and photometric calibration , which is not trivial .s.p . acknowledges the support from the belgian federal science policy office through the international cooperation programmes and the esa - prodex programme and the support of the institut dastrophysique spatiale ( ias ) .f.a . 
acknowledges the support of the royal observatory of belgium .the authors would like to thank j. klimchuk for fruitful discussions and comments .aschwanden , m. j. , & nightingale , r. w. 2005 , , 633 , 499 aschwanden , m. j. , & boerner , p. 2011, , 732 , 81 asplund , m. , grevesse , n. , sauval , a. j. , & scott , p. 2009, , 47 , 4816 boerner , p. , edwards , c. , lemen , j. , et al .2012 , sol .phys . , 275(1 - 2 ) , 41 brown , j. c. , dwivedi , b. n. , sweet , p. a. , & almleaky , y. m. 1991 , , 249 , 277 brosius , j. w. , rabin , d. m. , thomas , r. j. , & landi , e. 2008 , , 677 , 781 cargill , p. j. 1994 , , 422 , 381 craig , i. j. d. , & brown , j. c. 1976 , , 49 , 239 craig , i. j. d. , & brown , j. c. 1986 , inverse problems in astronomy : a guide to inversion strategies for remotely sensed data , research supported by serc .bristol , england and boston , ma , adam hilger , ltd .del zanna , g. , & mason , h. e. 2003 , , 406 , 1089 del zanna , g. , bromage , b. j. i. , & mason , h. e. 2003 , , 398 , 743 delaboudinire , j .- p . ,artzner , g. e. , brunaud , j. , et al .1995 , , 162 , 291 dere , k. p. 1978 , 70 , 439 dere , k. p. , landi , e. , mason , h. e. , monsignori fossi , b. c. , & young , p. r. 1997 , , 125 , 149 dere , k. p. , landi , e. , young , p. r. , et al .2009 , , 498(3 ) , 915 fludra , a. , & sylwester , j. 1986 , , 105 , 323 foster , a. r. , & testa , p. 2011, , 740(2 ) , l52 guhathakurta , m. , fludra , a. , gibson , s. e. , biesecker , d. , & fisher , r. r. 1999 , , 104(a5 ) , 9801 golub , l. , deluca , e. , austin , g. et al .2007 , , 243 , 63 goryaev , f. f. , parenti , s. , urnov , a. m. , et al .2010 , , 523 , a44 hahn , m. , landi , e. , & savin , d. w. 2011 , , 736 , 101 handy , b. n. , acton , l. w. , kankelborg , c. c. , et al . 1999 , , 187 , 229 hannah , i. g. , & kontar , e. p. 2012, , 539 , a146 huber , m. c. e. , pauhluhn , a. , & von steiger ,r. 2002 , esa sp , 508 , 213 jefferies , j. t. , orrall , f. q. , & zirker , j. b. 1972 , , 22 , 307 jordan , c. 1976 , royal society of london philosophical transactions series a , 281 , 391 judge , p. g. , hubeny , v. , & brown , j. c. 1997 , , 475 , 275 kashyap , v. , & drake , j. j. 1998 , , 503 , 450 klimchuk , j. a. 2006 , , 234 , 41 landi , e. , & landini , m. 1997 , , 327 , 1230 landi , e. , & landini , m. 1998 , , 340 , 265 landi , e. , & klimchuk , j. a. 2010 , , 723 , 320 landi , e. , reale , f. , & testa , p. 2012, , 538 , a111 lang , j. , mcwhirter , r. w. p. , & mason , h. e. 1990 , , 129 , 31 lemen , j. r. , title , a. m. , akin , d. j. , et al .2012 , , 275(1 - 2 ) , 17 mason , h. e. , & monsignori fossi , b. c. 1994 , , 6 , 123 martinez - sykora , j. , de pontieu , b. , testa , p. , & hansteen , v. 2011 , , 743(1 ) , 23 metropolis , n. , & ulam , s. 1949 , journal of the american statistical association , 44(247 ) , 335 mcintosh , s. w. 2000 , , 533 , 1043 odwyer , b. , del zanna , g. , mason , h. e. , weber , m. a. , & tripathi , d. 2010 , , 521 , a21 odwyer , b. , del zanna , g. , badnell , n. r. , mason , h. e. , & storey , p. j. 2011 , , 537 , a22 parenti , s. , bromage , b. j. i. , poletto , g. , et al .2000 , , 363 , 800 parenti , s. , & vial , j .- c .2007 , , 469 , 1109 pottasch , s. r. 1963 , , 137 , 945 pottasch , s. r. 1964 , , 3 , 816 reale , f. 2002 , , 580(1 ) , 566 reale , f. , testa , p. , klimchuk , j. , & parenti , s. 2009 , , 698 , 756 reale , f. 2010 , living rev . in sol .sanz - forcada , j. , brickhouse , n. s. , & dupree , a. k. 2003 , , 145 , 147 schmelz , j. t. 
, nasraoui , k. , rightmire , l. a. , et al .2009 , , 691 , 503 susino , r. , lanzafame , a. c. , lanza , a. f. , & spadaro , d. 2010 , , 709 , 499 taylor , j. 1997 , published by university science books , 648 broadway , suite 902 , new york , ny 10012 testa , p. , de pontieu , b. , martinez - sykora , j. , hansteen , v. , & carlsson , m. 2012 , arxiv:1208.4286 warren , h. p. , & brooks , d. h. 2009 , , 700 , 762 warren , h. p. , brooks , d. h. & winebarger , a. r. 2011 , , 734(2 ) , 90 weber , m. a. , deluca , e. e. , golub , l. , & sette , a. l. 2004 , in iau symposium , vol .223 , multi - wavelength investigations of solar activity , ed .a. v. stepanov , e. e. benevolenskaya , & a. g. kosovichev , 321 weber , m. a. , schmelz , j. t. , deluca , e. e. , & roames j. k. 2005 , , 635 , l101 winebarger , a. r. , schmelz , j. t. , warren , h. p. , saar , s. h. , & kashyap , v. l. 2011 , , 740 , 2 wiik , j. e. , dere , k. , & schmieder , b. 1993 , , 273 , 267 withbroe , g. l. 1975 , , 45 , 301 young , p. r. 2005 , , 439 , 361
dem analysis is a major diagnostic tool for stellar atmospheres, but both its derivation and its interpretation are notably difficult because of random and systematic errors and the inverse nature of the problem. we use simulations with simple thermal distributions to investigate the inversion properties of sdo / aia observations of the solar corona. this allows a systematic exploration of the parameter space and, using a statistical approach, the computation of the respective probabilities of all the dems compatible with the uncertainties. following this methodology, several important properties of the dem inversion, including new limitations, can be derived and presented in a compact fashion. in this first paper we describe the formalism and focus on isothermal plasmas, as building blocks for understanding the more complex dems studied in the second paper. the behaviour of the inversion of aia data being thus quantified, we provide new tools to properly interpret the dem. we quantify the improvement of the isothermal inversion with six aia bands compared to previous euv imagers. the maximum temperature resolution of aia is found to be 0.03, and we derive a rigorous test to quantify the compatibility of observations with the isothermal hypothesis. however, we demonstrate limitations in the ability of aia alone to distinguish different physical conditions.
entanglement of formation ( eof) and relative entropy of entanglement ( ree) are two major entanglement monotones for bipartite systems . for pure states the eof is defined as a von neumann entropy of its subsystem . on the contrary , ree is defined as minimum value of the relative entropy with separable states ; where is a set of separable states , it is called `` distance entanglement measure '' .another example of the distance entanglement measure is a geometric entanglement measure defined as , where is a maximal overlap of a given state with the nearest product state . ] .it was shown in ref. that is a upper bound of the distillable entanglement .the separable state , which yields a minimum value of the relative entropy is called the closest separable state ( css ) of .surprising fact , at least for us , is that although definitions of eof and ree are completely different , they are exactly same for all pure states .this fact may indicate that they are related to each other although the exact connection is not revealed yet .the main purpose of this paper is to explore the veiled connection between eof and ree . for mixed states eof is defined via a convex - roof method ; where the minimum is taken over all possible pure - state decompositions with and .the ensemble that gives the minimum value in eq.([two3 ] ) is called the optimal decomposition of the mixed state .thus , the main task for analytic calculation of eof is derivation of an optimal decomposition of the given mixture .few years ago , the procedure for construction of the optimal decomposition was derived in the two - qubit system , the simplest bipartite system , by making use of the time - reversal operation of spin-1/2 particles appropriately . in these referencesthe relation is used , where is a binary entropy function and is called the concurrence .this procedure , usually called wootters procedure , was re - examined in ref. in terms of antilinearity .introduction of antilinearity in quantum information theory makes it possible to derive concurrence - based entanglement monotones for tripartite and multipartite systems . due to the discovery of the closed formula for eof in the two - qubit system ,eof is recently applied not only to quantum information theory but also to many scientific fields such as life science . while eof is used in various areas of science, ree is not because of its calculational difficulty . in order to obtain reeanalytically for given mixed state one should derive its css , but still we do nt know how to derive css even in the two - qubit system except very rare cases . in ref. for bell - diagonal , generalized vedral - plenio , and generalized horodecki states were derived analytically through pure geometric arguments .due to the notorious difficulty some people try to solve the ree problem conversely .let be a two - qubit boundary states in the convex set of the separable states . in ref. authors derived entangled states , whose css are .this converse procedure is extended to the qudit system and is generalized as convex optimization problems .however , as emphasized in ref. still it is difficult to find a css of given entangled state although the converse procedure may provide some useful information on the css . 
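the closed two - qubit formula mentioned above is compact enough to state as code . the following python sketch computes the wootters concurrence and the corresponding eof for an arbitrary two - qubit density matrix ; the bell - diagonal example at the end uses purely illustrative eigenvalues , and the fragment is a generic implementation of the closed formula , not of the optimal decompositions constructed in the following sections .

```python
import numpy as np

def concurrence(rho):
    """wootters concurrence of a two-qubit density matrix (4x4, basis |00>,|01>,|10>,|11>)."""
    sy = np.array([[0.0, -1j], [1j, 0.0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                      # spin-flipped ("time-reversed") state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def binary_entropy(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def eof(rho):
    """entanglement of formation via e = h((1 + sqrt(1 - c^2)) / 2)."""
    c = concurrence(rho)
    return binary_entropy(0.5 * (1.0 + np.sqrt(1.0 - c * c)))

# illustrative example: a bell-diagonal state with eigenvalues (0.6, 0.2, 0.15, 0.05)
bell = np.array([[1, 0, 0, 1],      # |phi+>
                 [1, 0, 0, -1],     # |phi->
                 [0, 1, 1, 0],      # |psi+>
                 [0, 1, -1, 0]]) / np.sqrt(2)
weights = [0.6, 0.2, 0.15, 0.05]
rho = sum(w * np.outer(b, b.conj()) for w, b in zip(weights, bell))
print(concurrence(rho), eof(rho))   # concurrence 2*0.6 - 1 = 0.2 and the corresponding eof
```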
in this paperwe will try to find a css for given entangled two - qubit state without relying on the converse procedure .as commented , eof and ree are identical for bipartite pure states although they are defined differently .this means that they are somehow related to each other .if this connection is unveiled , probably we can find css for arbitrary two - qubit mixed states because we already know how to compute eof through wootters procedure . to explore this issueis original motivation of this paper .we will show in the following that ree of many mixed symmetric states can be analytically obtained from eof if one follows the following procedure : 1 . for entangled two - qubit state be an optimal decomposition for calculation of eof .since are pure states , it is possible to obtain their css .thus , it is straight to derive a separable mixture .if is a boundary state in the convex set of separable states , the procedure is terminated with .if is not a boundary state , we consider . by requiringthat is a boundary state , one can fix , _ say _ .then we identify .this procedure is schematically represented in fig. 1 . in order to examine the validity of the procedure we have to apply the procedure to the mixed stateswhose ree are already known .thus , we will choose the bell - diagonal , generalized vedral - plenio and generalized horodecki states , whose ree were computed in ref. through different methods .also , we will apply the procedure to the less symmetric mixed states such as vedral - plenio - type and horodecki - type states whose ree were computed in ref. by making use of the the converse procedure introduced in ref. .the paper is organized as follows . in sectionii we show that the procedure generates the correct css for bell - diagonal states . in sectioniii and section iv we show that the procedure generates the correct css for generalized vedral - plenio and generalized horodecki states , respectively . in sectionv we consider two less symmetric states , vedral - plenio - type and horodecki - type states .it is shown that while the procedure generates a correct css for the former , it does not give a correct one for the latter .in section vi a brief conclusion is given .in appendix we prove that eof and ree are identical for all pure states by making use of the schmidt decomposition .the schmidt bases derived in this appendix are used in the main body of this paper .in this section we will show that the procedure mentioned above solves the ree problem of the bell - diagonal states : where , and the css and ree of were obtained in many literatures through various different methods .if , for convenience , , the css and ree of are now , we will show that the procedure we suggested also yields the same result .following wootters procedure , one can show that the optimal decomposition of for , is a separable state . ]is where and all have the same concurrence and , hence , the same ( defined in eq . 
([ def-2 - 2 ] ) ) as the schmidt bases of can be explicitly derived by following the procedure of appendix a and the result is \nonumber \\ & & { \lvert 1_a \rangle } = \frac{-1}{n_- } \left [ \left(\sqrt{1 - \lambda_3 } - \sqrt{\lambda_4}\right ) { \lvert 0 \rangle } - \left(\sqrt{\lambda_1 } - i \sqrt{\lambda_2}\right ) { \lvert 1 \rangle } \right ] \\\nonumber & & { \lvert 0_b \rangle } = \frac{1}{n_+ } \left [ \left(\sqrt{\lambda_1 } + i \sqrt{\lambda_2}\right ) { \lvert 0 \rangle } + \left(\sqrt{1 - \lambda_3 } + \sqrt{\lambda_4}\right ) { \lvert 1 \rangle } \right ] \nonumber \\ & & { \lvert 1_b \rangle } = \frac{1}{n_- } \left [ \left(\sqrt{\lambda_1 } + i \sqrt{\lambda_2}\right ) { \lvert 0 \rangle } - \left(\sqrt{1 - \lambda_3 } - \sqrt{\lambda_4}\right ) { \lvert 1 \rangle } \right ] , \nonumber\end{aligned}\ ] ] where the normalization constants are thus the css of , say , can be straightforwardly computed by making use of eq .( [ result-2 ] ) ; where similarly , one can derive the schmidt bases for other and the corresponding css .then , one can show that the separable state with for all is this is a boundary state in the convex set of the separable states , because the minimal eigenvalue of its partial transposition , _ say _ , is zero .thus , the procedure mentioned in the introduction is terminated with identifying .in fact , it is easy to show that is exactly the same with in eq .( [ bd-3 ] ) .thus , the procedure we suggested correctly derives the css of the bell - diagonal states .in this section we will derive the css of the generalized vedral - plenio ( gvp ) state defined as by following the procedure mentioned above .in fact the css and ree of the gvp were explicitly derived in ref. using a geometric argument , which are where .\ ] ] now , we define and $ ] .we also define the unnormalized states , where are eigenstates of ; \\\nonumber & & { \lvert \lambda_{- } \rangle } = \frac{1}{n } \left[\lambda_1 { \lvert 01 \rangle } - \left ( \sqrt{\lambda_1 ^ 2 + \left(\lambda_2 - \lambda_3\right)^2 } + ( \lambda_2 - \lambda_3 )\right ) { \lvert 10 \rangle } \right].\end{aligned}\ ] ] in eq . ( [ vp-5 ] ) is a normalization constant given by then , following ref. , the optimal decomposition of for eof is , where and \left ( { \lvert v_+ \rangle } + i { \lvert v_- \rangle } \right ) \\\nonumber & & { \lvert \psi_2^{vp } \rangle } = \frac{-i}{\omega } \left [ 2b + i \left\{\sqrt{(a - c)^2 + 4 b^2 } - ( a - c ) \right\ } \right ] \left ( { \lvert v_+ \rangle } - i { \lvert v_- \rangle } \right).\end{aligned}\ ] ] following appendix a one can derive the css for directly .then , one can realize that and have the same css , which is identical with .thus , the procedure also gives a correct css for the gvp states .in this section we will show that the procedure also generates the correct css for the generalized horodecki states with and , becomes a separable state . ] .the css and ree of were derived in ref. using a geometrical argument and the results are following ref. one can straightforwardly construct the optimal decomposition of for eof , which is , where and in order to treat as an unified manner let us consider . then , defined in eq .( [ def-2 - 2 ] ) is where .since is independent of , this fact indicates that of are equal to eq.([gh-4 ] ) for all . 
following appendix a , it is straightforward to show that the schmidt bases of are } } { \lvert 0 \rangle } + \sqrt{\frac{r - \left(\sqrt{\lambda_2 } - \sqrt{\lambda_3 } \right)}{2 r } } e^{-i \theta } { \lvert 1 \rangle } \nonumber \\ & & { \lvert 1_a \rangle } = -\sqrt{\frac{\lambda_1}{r \left[r + \left(\sqrt{\lambda_2 } - \sqrt{\lambda_3 } \right ) \right ] } } { \lvert 0 \rangle } + \sqrt{\frac{r + \left(\sqrt{\lambda_2 } - \sqrt{\lambda_3 } \right)}{2 r } } e^{-i \theta } { \lvert 1 \rangle } \\\nonumber & & \hspace{3.0 cm } { \lvert 0_b \rangle } = e^{i \theta } { \lvert 0_a \rangle } \hspace{1.0 cm } { \lvert 1_b \rangle } = -e^{i \theta } { \lvert 1_a \rangle}.\end{aligned}\ ] ] then the css of is where \\ \nonumber & & { \cal b } = \frac{\sqrt{2 \lambda_1}}{4 r^2 } \left[2 \sqrt{\lambda_3 } + \left(\sqrt{\lambda_2 } + \sqrt{\lambda_3 } \right)\left(\lambda_1 - 2 \sqrt{\lambda_2 \lambda_3}\right ) \right].\end{aligned}\ ] ] thus , the css of can be obtained by letting , , , respectively. then , with reduces however , is not a boundary state in the convex set of the separable states , because the minimum eigenvalue of is positive .thus , we define the condition that the minimum eigenvalue of is zero fixes as inserting eq.([gh-10 ] ) into , one can show that reduces to .thus , our procedure gives a correct css for the generalized horodecki states .in the previous sections we have shown that the procedure generates the correct css and ree for various symmetric states such as bell - diagonal , gvp , and generalized horodecki states . in this section we will apply the procedure to the less symmetric states .the first quantum state we consider is where and .of course , if , , and , reduces to in eq .( [ vp-1 ] ) .thus , we call as vedral - plenio - type state . in order to apply the procedure to introduce \hspace{1.0 cm } \lambda_2 = \frac{1}{2 } \left[\left ( a_2 + a_3\right ) - r \right ] \\\nonumber & & { \lvert \lambda_1 \rangle } = \cos \theta { \lvert 01 \rangle } + \sin \theta { \lvert 10 \rangle } \hspace{1.0 cm } { \lvert \lambda_2 \rangle } = \sin \theta { \lvert 01\rangle } - \cos \theta { \lvert 10 \rangle}.\end{aligned}\ ] ] applying ref. 
, it is possible to derive the optimal decomposition of for eof ; , where \hspace{1.0 cm } p_2 = \frac{1}{2 } \left[1 - \frac{a_2 - a_3}{\sqrt{1 - 4 d^2 } } \right]\ ] ] and \\ \nonumber & & { \lvert w_2 \rangle } = \frac{1}{{\mathcal y}_- } \left [ \left(\sqrt{\xi_+ \eta_- } - \sqrt{\xi_- \eta_+ } \right ) \sqrt{\lambda_1 } { \lvert \lambda_1 \rangle } - \left(\sqrt{\xi_+ \eta_+ } + \sqrt{\xi_- \eta_- } \right ) \sqrt{\lambda_2 } { \lvert \lambda_2 \rangle } \right].\end{aligned}\ ] ] in eq .( [ vpt-4 ] ) , , and are .\end{aligned}\ ] ] following appendix a , one can derive the css and of and after long and tedious calculation .the final results are ^ 2 { \lvert 01 \rangle } { \langle 01 \lvert } \nonumber \\ & + & \left[\frac{\sin \theta \sqrt{\lambda_1 } \left(\sqrt{\xi_+ \eta_+ } + \sqrt{\xi_- \eta_- } \right ) - \cos \theta \sqrt{\lambda_2 } \left(\sqrt{\xi_+ \eta_- } - \sqrt{\xi_- \eta_+ } \right ) } { { \mathcal y}_+}\right]^2 { \lvert 10 \rangle } { \langle 10 \lvert } \\\nonumber \sigma_2&= & \left[\frac{\cos \theta \sqrt{\lambda_1 } \left(\sqrt{\xi_+ \eta_- } - \sqrt{\xi_- \eta_+ } \right ) - \sin \theta \sqrt{\lambda_2 } \left(\sqrt{\xi_+ \eta_+ } + \sqrt{\xi_- \eta_- } \right ) } { { \mathcal y}_-}\right]^2 { \lvert 01 \rangle } { \langle 01 \lvert } \\ \nonumber & + & \left[\frac{\sin \theta \sqrt{\lambda_1 } \left(\sqrt{\xi_+ \eta_- } - \sqrt{\xi_- \eta_+ } \right ) + \cos \theta \sqrt{\lambda_2 } \left(\sqrt{\xi_+ \eta_+ } + \sqrt{\xi_- \eta_- } \right ) } { { \mathcal y}_-}\right]^2 { \lvert 10 \rangle } { \langle 10 \lvert}.\end{aligned}\ ] ] then , simply reduces to this is manifestly boundary state in the convex set of separable states .thus , the procedure states that is a css of .this is exactly the same with theorem of ref. .the second less symmetric quantum state we consider is where and .if , , and , reduces to in eq .( [ gh-1 ] ) .thus , we call as horodecki - type state . applying ref., one can derive the optimal decomposition of for eof as , where for all and in order to consider all together , we define for the schmidt bases are \nonumber \\ & & { \lvert 1_a \rangle } = \frac{1}{2 { \mathcal z}_- } \bigg [ \sqrt{2 } \left ( \sqrt{a - d } \sqrt{1 + { \mathcal c } } - \sqrt{a + d } \sqrt{1 - { \mathcal c } } \right ) { \lvert 0 \rangle } \\ \nonumber & & \hspace{2.0 cm } + e^{-i \theta } \left\ { \left(\sqrt{a_1 } + \sqrt{a_4 } \right ) \sqrt{1 + { \mathcal c } } + \left(\sqrt{a_1 } - \sqrt{a_4 } \right ) \sqrt{1 - { \mathcal c } } \right\ } { \lvert 1 \rangle } \bigg ] \\\nonumber & & { \lvert 0_b \rangle } = \frac{1}{2 { \mathcal z}_+ } \bigg [ \sqrt{2 } e^{i \theta}\left\ { \sqrt{a + d } \left(\sqrt{a_1 } + \sqrt{a_4 } \right ) + \sqrt{a - d } \left(\sqrt{a_1 } - \sqrt{a_4 } \right ) \right\ } { \lvert 0 \rangle } \\ \nonumber & & \hspace{4.0 cm } + \left\ { - \left(a_1 - a_4 \right ) + 2 \sqrt{a^2 - d^2 } + \sqrt{1 - { \mathcal c}^2 } \right\ } { \lvert 1 \rangle}\bigg ] \\\nonumber & & { \lvert 1_b \rangle } = \frac{1}{2 { \mathcal z}_- } \bigg [ \sqrt{2 } e^{i \theta}\left\ { \sqrt{a + d } \left(\sqrt{a_1 } + \sqrt{a_4 } \right ) + \sqrt{a - d } \left(\sqrt{a_1 } - \sqrt{a_4 } \right ) \right\ } { \lvert 0 \rangle } \\\nonumber & & \hspace{4.0 cm } + \left\ { - \left(a_1 - a_4 \right ) + 2 \sqrt{a^2 - d^2 } - \sqrt{1 - { \mathcal c}^2 } \right\ } { \lvert 1 \rangle}\bigg],\end{aligned}\ ] ] where and .\ ] ] thus , the css is similarly , it is straightforward to derive the css of . 
then , one can show \nonumber \\ & & \hspace{2.0 cm } = \left ( \begin{array}{cccc } a_1 & 0 & 0 & 0 \\ 0 & a & d & 0 \\ 0 & d & a & 0 \\ 0 & 0 & 0 & a_4 \end{array } \right)\end{aligned}\ ] ] where \nonumber \\ & & a_4 = \frac{1}{4 ( 1 - { \mathcal c}^2 ) } \bigg [ ( 1 + { \mathcal c } ) \left(\sqrt{a_1 } + \sqrt{a_4 } \right)^2 + ( 1 - { \mathcal c } ) \left(\sqrt{a_1 } - \sqrt{a_4 } \right)^2 \nonumber \\ & & \hspace{9.0 cm } - 2 ( 1 - { \mathcal c}^2 ) \left(a_1 - a_4 \right ) \bigg ] \\ \nonumber & & a = \frac{1}{2 ( 1 - { \mathcal c}^2 ) } \left [ ( 1 + { \mathcal c } ) \left(a - d \right ) + ( 1 - { \mathcal c } ) \left(a + d \right ) \right ]\\ \nonumber & & d = \frac{2 a \sqrt{a_1 a_4 } + d \left ( a_1 + a_4 \right)}{1 - { \mathcal c}^2}.\end{aligned}\ ] ] one can show that if , , and , reduces to eq .( [ gh-8 ] ) .since is not a boundary state in the set of separable states , we define then , the css condition of is \left[x \left(a_4 - a_4 \right ) + a_4 \right ] = \left[x ( d - d ) + d \right]^2.\ ] ] in the horodecki state limit eq .( [ ht-10 ] ) gives a solution ( [ gh-10 ] ) .using and where the solution of , _ say _ , can be obtained by solving the quadratic equation ( [ ht-10 ] ) .inserting in eq .( [ ht-9 ] ) , one can compute explicitly , which is a candidate of css for . the css of was derived in the theorem of ref. by using the converse procedure introduced in ref. .the explicit form of the css is where \\ \nonumber & & r_4 = \frac{1}{f } \left[2a_4 ( a_2 + a_4 ) ( a_1 + a_2 + a_4 ) + d^2 ( a_1 - a_4 ) + \delta \right ] \\\nonumber & & r = \frac{1}{f } \left [ 2(a_1 + a_2 ) ( a_2 + a_4 )( a_1 + a_2 + a_4 ) - d^2 ( a_1 + 2 a_2 + a_4 ) - \delta \right]\end{aligned}\ ] ] and . in eq . ([ th-2 - 3 ] ) and are our candidate does not coincide with the correct css .thus , the procedure does not give a correct ree for , although it gives correct ree for bell - diagonal , gvp , generalized horodecki , and vedral - plenio - type states .in this paper we examine the possibility for deriving the closed formula for ree in two - qubit system without relying on the converse procedure discussed in ref. .since ree and eof are identical for all pure states in spite of their different definitions , we think they should have some connection somehow . in this contextwe suggest a procedure , where ree can be computed from eof .the procedure gives correct ree for many symmetric states such as bell - diagonal , gvp , and generalized horodecki states .it also generates a correct ree for less symmetric states such as .however , the procedure failed to produce a correct ree for the less symmetric states .this means our procedure is still incomplete for deriving the closed formula of ree .we think still the connection between eof and ree is not fully revealed .if this connection is sufficiently understood in the future , probably the closed formula for ree can be derived .we hope to explore this issue in the future . * acknowledgement * : this research was supported by the basic science research program through the national research foundation of korea(nrf ) funded by the ministry of education , science and technology(2011 - 0011971 ) .99 c. h. bennett , d. p. divincenzo , j. a. smokin and w. k. wootters , _ mixed - state entanglement and quantum error correction _ , phys .rev . * a 54 * ( 1996 ) 3824 [ quant - ph/9604024 ] .v. vedral , m. b. plenio , m. a. rippin and p. l. knight , _ quantifying entanglement _* 78 * ( 1997 ) 2275 [ quant - ph/9702027 ] . v. vedral and m. b. 
plenio , _ entanglement measures and purification procedures _ , phys. rev . * a 57 * ( 1998 ) 1619 [ quant - ph/9707035 ] .a. shimony , _ degree of entanglement _ , in d. m. greenberg and a. zeilinger ( eds . ) , fundamental problems in quantum theory : a conference held in honor of j. a. wheeler , ann . n. y. acad .* 755 * ( 1995 ) 675 ; h. barnum and n. linden , _ monotones and invariants for multi - particle quantum states _ , j. phys .a : math . gen . * 34 * , ( 2001 ) 6787 [ quant - ph/0103155 ] ; t .- c . wei and p. m. goldbart , _ geometric measure of entanglement and application to bipartite and multipartite quantum states _ , phys . rev . * a 68 * ( 2003 ) 042307 [ quant - ph/0307219 ] .a. uhlmann , _ fidelity and concurrence of conjugate states _ ,phys . rev . * a 62 * ( 2000 ) 032307 [ quant - ph/9909060 ] .s. hill and w. k. wootters , _ entanglement of a pair of quantum bits _ ,* 78 * ( 1997 ) 5022 [ quant - ph/9703041 .w. k. wootters , _ entanglement of formation of an arbitrary state of two qubits _ , phys .lett . * 80 * ( 1998 ) 2245 [ quant - ph/9709029 ] . v. coffman , j. kundu and w. k. wootters , _distributed entanglement _ ,phys . rev . * a 61 * ( 2000 ) 052306 [ quant - ph/9907047 ] .a. osterloh and j. siewert , _constructing -qubit entanglement monotones from antilinear operators _ , phys . rev . * a 72 * ( 2005 ) 012337 [ quant - ph/0410102 ] ; d. .dokovi and a. osterloh , _ on polynomial invariants of several qubits _, j. math .* 50 * ( 2009 ) 033509 [ arxiv:0804.1661 ( quant - ph ) ] .m. sarovar , a. ishizaki , g. r. fleming , k. b. whaley , _ quantum entanglement in photosynthetic light harvesting complexes _ , nature physics , * 6*(2010 ) 462 [ arxiv:0905.3787 ( quant - ph ) ] and references therein .o. krueger and r. f. werner , _ some open problems in quantum information theory _ , quant - ph/0504166 .r. horodecki and m. horodecki , _ information - theoretic aspects of inseparability of mixed states _ , phys. rev . * a 54 * , ( 1996 ) 1838 [ quant - ph/9607007 ] .h. kim , m. r. hwang , e. jung and d. k. park , _ difficulties in analytic computation for relative entropy of entanglement _ , phys .* a 81 * ( 2010 ) 052325 [ arxiv:1002.4695 ( quant - ph ) ] .d. k. park , relative entropy of entanglement for two - qubit state with -directional bloch vectors , int .* 8 * ( 2010 ) 869 [ arxiv:1005.4777 ( quant - ph ) ] . m. horodecki , p. horodecki , and r. horodecki , in _ quantum information : an introduction to basic theoretical concepts and experiments _ , edited by g. alber _( springer , berlin , 2001 ) , p. 151 .a. miranowicz and s. ishizaka , _ closed formula for the relative entropy of entanglement _ , phys . rev .* a78 * ( 2008 ) 032310 [ arxiv:0805.3134 ( quant - ph ) ] .s. friedland and g gour , _ closed formula for the relative entropy of entanglement in all dimensions _ , j. math .* 52 * ( 2011 ) 052201 [ arxiv:1007.4544 ( quant - ph ) ] .m. w. girard , g. gour , and s. friedland , _ on convex optimization problems in quantum information theory _ , arxiv:1402.0034 ( quant - ph ) . in this sectionwe will show that ree and eof are identical for two - qubit pure states .this fact was already proven in theorem of ref. 
.we will prove this again more directly , because explicit schmidt bases are used in the main body of the paper .let us consider a general two - qubit pure state with .then , its concurrence is .now , we define where \hspace{1.0 cm } { \cal n}_{\pm}^2 = |\alpha_1^ * \alpha_2 + \alpha_3^ * \alpha_4|^2 + |\lambda_{\pm } - ( |\alpha_1|^2 + |\alpha_3|^2)|^2.\ ] ] now , we consider matrix , whose components are then schmidt bases for each party are defined as where using eq .( [ schmidt-1 ] ) , one can show straightforwardly that reduces to .thus , its css are simply expressed in terms of the schmidt bases as applying eq .( [ ree-1 - 1 ] ) , one can show easily , which is exactly the same with eof .
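the equality proven in this appendix is easy to check numerically . the sketch below draws a random two - qubit pure state , obtains the schmidt coefficients and schmidt bases from a singular value decomposition of the coefficient matrix , builds the separable state expressed in the schmidt bases as above , and verifies that the resulting relative entropy coincides with the subsystem von neumann entropy ; the random - state construction and the numerical tolerance are illustrative choices .

```python
import numpy as np

rng = np.random.default_rng(1)

# random two-qubit pure state written through its coefficient matrix: |psi> = sum_ij a[i,j] |i>|j>
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
a /= np.linalg.norm(a)
psi = a.reshape(4)

# schmidt decomposition: the singular values are the schmidt coefficients mu_i,
# the singular vectors give the schmidt bases |i_a>, |i_b>
ua, mu, vbh = np.linalg.svd(a)
p = mu ** 2

# eof of a pure state: von neumann entropy of the reduced density matrix
eof = -np.sum(p * np.log2(p))

# separable state built from the schmidt bases: sigma = sum_i mu_i^2 |i_a i_b><i_a i_b|
kets = [np.kron(ua[:, i], vbh[i, :]) for i in range(2)]
sigma = sum(pi * np.outer(k, k.conj()) for pi, k in zip(p, kets))

# relative entropy s(rho||sigma) for the pure state rho = |psi><psi|:
# tr(rho log rho) = 0, so s = -<psi| log2(sigma) |psi>, evaluated on the support of sigma
w, v = np.linalg.eigh(sigma)
overlap2 = np.abs(v.conj().T @ psi) ** 2
support = w > 1e-12
ree = -np.sum(overlap2[support] * np.log2(w[support]))

print(eof, ree)   # the two numbers coincide, illustrating eof = ree for pure states
```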
it is well known that entanglement of formation ( eof ) and relative entropy of entanglement ( ree ) are exactly identical for all two - qubit pure states even though their definitions are completely different. we think this fact implies a veiled connection between eof and ree. in this context we suggest a procedure that enables us to compute ree from eof without relying on the converse procedure. it is shown that the procedure yields the correct ree for many symmetric mixed states such as the bell - diagonal, generalized vedral - plenio, and generalized horodecki states. it also gives the correct ree for the less symmetric vedral - plenio - type state. however, the procedure does not provide the correct ree for arbitrary mixed states.
physicists have recently shown that network analysis is a powerful tool to study the statistical properties of complex biological , technological and social systems of diverse kinds .many networks exhibit a scale - free degree distribution in which the probability that a vertex is connected to other vertices falls as a power .this property is not sufficient to completely describe natural networks because such systems also exhibit degree correlations the degrees of the vertices at the end points of any given edge are not independent .it is not surprising that natural systems depend on properties that do not appear explicitly in degree distributions . in particular , protein interaction networks depend on the availability of sufficient binding free energy to cause interactions to occur ( links between vertices to exist ) .caldarelli _ et al ._ and sderberg proposed models in which vertices are characterized by a fitness parameter assigned according to a chosen probability distribution .then , pairs of vertices are independently joined by an undirected edge with a probability depending on the fitnesses of the end points . generalized these models as a class of models with hidden variables and presented a detailed formalism showing how to compute network properties using the conditional probability ( propagator ) that a vertex with a given value of a hidden variable is connected to other vertices .this formalism , valid for any markovian ( binary ) network , provides the generating function for the propagator , but not the propagator itself .the purpose of this paper is twofold .we first use a mean field approximation to derive a general analytic formula for the propagator , therefore finding a general approximate solution to to the inversion problem .this enables one to compute network properties without the use of a simulation procedure , thereby simplifying the computational procedure and potentially broadening the ability of scientists from all fields to use network theory .the validity of the method is assessed by comparing the results of using our approximation with published results .we then use this method to compute clustering coefficients of a specific hidden variable model for protein - protein interaction networks ( pin ) from several organisms developed by us that previously had obtained degree distributions in agreement with measured data .we show that two models with the same degree distribution have very different clustering coefficients .we outline this in more detail .[ sec : formalism ] reviews the hidden variable formalism and our approximate solution to the inversion problem . we distinguish between sparse ( which have been solved in ref . ) and non - sparse networks which are solved here .the next section [ sec : models ] studies the models of refs . and .our averaging procedure is found to work well for most situations .our own model is presented in [ sec : pin ] .we present an analytic result for the average connection probability and extend the results of to computing the clustering coefficients .the final section [ sec : summary ] is reserved for a brief summary and discussion .we present the formalism for hidden variable models . 
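as a concrete reference point for the formalism that follows , the fragment below generates one realization of such a hidden - variable ( fitness ) network : a variable is drawn for every vertex from a chosen distribution and each pair of vertices is linked independently with a probability depending only on the two variables . the exponential distribution , the fermi - function linking rule and the parameter values are illustrative placeholders , not the fitted values discussed later .

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2000                              # number of vertices (illustrative)
beta, dg0, temp = 0.5, 10.0, 1.0      # placeholder parameters of the distribution and link rule

# hidden variable for every vertex: exponential density on [-1, infinity)
g = rng.exponential(1.0 / beta, size=n) - 1.0

# linking probability depending only on the two hidden variables (fermi function here)
gg = g[:, None] + g[None, :]
p_link = 1.0 / (1.0 + np.exp((dg0 - gg) / temp))

# join each pair independently with probability p_link(g_i, g_j)
upper = np.triu(rng.random((n, n)) < p_link, k=1)
adj = upper | upper.T

degree = adj.sum(axis=1)
print("mean degree:", degree.mean(), "  max degree:", degree.max())
```

degree distributions and clustering coefficients measured on such realizations are the quantities that the analytic propagator constructed below is meant to reproduce .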
the probability that a node has a hidden continuous variable is given by , normalized so that its integral over its domain is unity .this function is chosen to be an exponential in and a gaussian in .the connection probability for two nodes of is defined to be .this is taken as a step function in , and a fermi function in .the two functions and can be chosen in a wide variety of ways to capture the properties of a given network .reference presents the probability generating function , , that determines in terms of the generating function for the propagator , , as g_0(z)= dg(g ) _ 0(z , g),[bog1]where _ 0(z , g)= ndg ( g)(1-(1-z)p(g , g ) ) .[ gbog]the propagator giving the conditional probability that a vertex of hidden variable is connected to other vertices is given implicitly by _0(z , g)=_k=0^z^kg_0(k , g ) . [ g0 kg ]knowledge of determines the conditional probability that a node of degree is connected to a node of degree , ( as well as ) , and those two functions completely define a markovian network .once is the determined , all of the properties of the given network are determined .the most well - known example is the degree distribution : p_k=_0^dg _ ( g)g_0(k , g ) .it would seem that determining from eq .( [ gbog ] ) is a simple technical matter , but this is not the case .the purpose of the present section is to provide a simple , analytic and accurate method to determine .we obtain from eq .( [ gbog ] ) by using the tautology p(g , g)=|p(g ) + ( p(g , g)-|p(g)[exp ] ) in eq . ( [ gbog ] ) , choosing so as to eliminate the effects of the second term , and then treating the remaining higher powers of as an expansion parameter . using eq .( [ exp ] ) in eq .( [ gbog ] ) yields & & _ 0(z , g)= _ 0(z , g)=(1-(1-z)|p(g))^n- n(1-z)dg(g)(|p(g)-p(g , g))1-(1-z)|p(g ) & & -n_n=2^dg(g)(p(g , g)-|p(g))1-(1-z)|p(g))^n .[ gbog01 ] in analogy with the mean - field ( hartree ) approximation of atomic and nuclear physics , we find that the second term of eq .( [ gbog01 ] ) vanishes if we choose to be the average of over : |p(g)=dg(g)p(g , g).[pave]with eq .( [ pave ] ) the effects of the term of first order in vanish .we therefore obtain the result : _ 0(z , g)=(1-(1-z)|p(g))^n - n_n=2^dg(g)(p(g , g)-|p(g))1-(1-z)|p(g))^n , [ gbog1 ] with the putative term with vanishing by virtue of eq .( [ pave ] ) .we treat the first term of eq .( [ gbog1 ] ) as the leading order ( ) term and regard the remainder as a correction .the validity of this approach can be checked by comparison with simulations , or ( in certain cases ) with analytic results .numerical results for the pin of current interest indicate that the corrections to the _ lo _ terms induce errors in of no more than a few percent and that the approximation becomes more accurate for large values of . therefore we use the _ lo _ approximation . using exponentiation and the binomial theorem in the first term of eq .( [ gbog1 ] ) leads to the result ^(lo)_0(k , g)= ( cn + k ) ( 1-|p(g))^n - k|p(g)^k,[glo]which is of the form of a random binomial distribution in which the connection probability depends on the hidden variable .( [ glo ] ) is our central new general result that can be used for any hidden variable network .this binomial distribution has both the normal gaussian and poisson distributions as limiting cases . ref . explained the difference between sparse and nonsparse networks .sparse networks have a well - defined thermodynamic limit for the average degree , while this quantity diverges as the network size approaches infinity . 
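equation ( [ glo ] ) , combined with eq . ( [ pave ] ) , reduces the degree distribution of eq . ( [ ours ] ) to one - dimensional quadratures . the sketch below evaluates the mean - field average connection probability and then the degree distribution p_k for an exponential hidden - variable density and a fermi - function connection probability ; the network size , the parameter values and the grid used for the quadrature are numerical choices made for illustration only .

```python
import numpy as np
from scipy.stats import binom

nn = 2000                            # network size
beta, dg0, temp = 0.5, 10.0, 1.0     # illustrative model parameters

# hidden-variable density rho(g) proportional to exp(-beta*g) on [-1, infinity),
# truncated and discretized for the quadrature
g = np.linspace(-1.0, 40.0, 2001)
dg = g[1] - g[0]
rho = beta * np.exp(-beta * (g + 1.0))
rho /= rho.sum() * dg                # renormalize after truncation

# connection probability (fermi function) and its mean-field average, eq. [pave]
p_conn = 1.0 / (1.0 + np.exp((dg0 - (g[:, None] + g[None, :])) / temp))
pbar = (p_conn * rho[None, :]).sum(axis=1) * dg

# lo propagator: a binomial in k with success probability pbar(g), eq. [glo];
# degree distribution p_k = integral of rho(g) * g0_lo(k, g), eq. [ours]
k = np.arange(0, 200)
g0_lo = binom.pmf(k[:, None], nn, pbar[None, :])          # shape (len(k), len(g))
p_k = (rho[None, :] * g0_lo).sum(axis=1) * dg

print("mean degree:", (k * p_k).sum())
```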
defines criteria for sparseness by pointing out the relevance of of eq .( [ pave ] ) in determining whether or not a network is sparse . given this quantitythe average degree is k = dg ( g)|p(g)= dgdg ( g)p(g , g)(g).if the is independent of the only way to obtain a non - divergent value is for the connection probability to scale as : p^sparse(g , g)=c(g , g)n , .[sparse ] under the specific assumption that eq .( [ sparse ] ) holds , ref . finds a very interesting result . in our notation , this amounts to using eq .( [ sparse ] ) in eq .( [ gbog ] ) and taking the limit that approaches infinity . then g_0^sparse(z , g)=(z-1)dg(g)c(g , g ) .this shows that the poisson limit of eq .( [ glo ] ) is obtained for the very special case of sparse networks in which the connection probability scales as .none of the models of interest here are sparse , so it is our present result ( [ glo ] ) that is widely applicable . turning to the use of the use of the propagator, we obtain the degree distribution as p_k = dg ( g ) g_0(k , g)dg ( g ) g_0^(lo)(k , g ) .[ ours]this expression can be thought of as averaging a binomial distribution over the hidden variable and is a natural generalization of classical graph theory .a similar expression for has been obtained , in the poisson limit , in ref . . in that work, is presented as an integral of the poisson distribution for multiplied by the `` representation '' of a density matrix .comparing eq .( [ glo ] ) with the result ( 3 ) of shows that our propagator is proportional to the representation , essentially our . shows , how under certain assumptions , to use to determine the representation .our method allows underlying network properties , denoted by and , to predict various network properties . the clustering coefficient which measures transitivity :if vertex is connected to vertex and vertex to vertex , there is an increased probability that vertices and are connected . in graph theory , the clustering coefficient is the ratio of the number of triangles to the number of pairs , computed for nodes of degree . shows that & & c(k)= dg ( g)g_0(k , g)c(g ) [ eq:21 ] + & & c(g)= dg dg(g)p(g , g)|p(g ) p(g , g)(g)p(g,g)|p(g).[cofg ] our calculations replace by of eq .( [ glo ] ) .one way to verify the _ lo _ approximation is to show that it reproduces analytic results for previously published models .we consider the models of and in this section . in both of these models is taken as a step function ( the 0 temperature limit of our model ) : p(g , g)= ( g+g-).[sharp]the two models differ in their choice of , but the use of eq .( [ sharp ] ) allows one to obtain compact general expressions for the generating functions and .we present these first and discuss specific details of the individual models in separate sub - sections . the use of eq .( [ sharp ] ) in eq .( [ gbog ] ) yields _ 0(z , g)= n(z)=n|p(g)(z ) , [ gbogsharp ] so that _ 0(z , g)=z^n|p(g).[gsharp ] it is interesting to observe that eq .( [ gbog1 ] ) reduces to the above result .this is because powers of for eq .( [ sharp ] ) , so that the integration appearing in eq .( [ gbog1 ] ) leads to an expression that is a function of then the use of the binomial theorem allows the second term of eq . ( [ gbog1 ] ) to be expressed as a summable power series in which ultimately leads to the result eq .( [ gsharp ] ) . 
if we follow and treat as a continuous variable ( which requires large values of ) we find _ 0(k , g)=(k - n|p(g ) ) = , , [ gdelta ] + where is the solution of the equation k = n|p(g).[gn]note that for , can take on any value greater than .the result eq .( [ gdelta ] ) is the same as eq.(34 ) of , but written in a more compact form . the use of eq .( [ gdelta ] ) in eq .( [ ours ] ) and eq .( [ eq:21 ] ) yields the results p_k=(g_n(k))n|p(g_n(k ) ) + this model is defined by using , but we generalize to take the form _ ( g)=(-g).ref . works out this model using their green s function formalism .our purpose here is to compare the results of our averaging approximation with their results . for this modelthe average interaction probability is given by |p(g)=_0^dg ( g+g-)= ( g- ) + ( -g).[pbarss]then our approximation eq .( [ ours ] ) for the degree distribution is given by p_k= ( cn + k ) _ 0^dg ( 1- ) ^n - kdefine the integration variable }$ ] so that & & p_k= ( cn + k ) e^-_t_0 ^ 1 dtt^2t^k(1-t)^n - k , t_0e^- + & & p_k>1= ( cn + k ) e^-((n+1-k)(k-1)(n)-b_t_0(k-1,n+1-k)),[pkif ] + & & p_k=1=n e^-(1-t_0)^nn_2f_1(1,n;n+1,1-t_0)where is the confluent hypergeometric function and is the incomplete beta function ( and with the beta function ) : b_z(a , b)_0^z dt t^a-1 ( 1-t)^b-1,b_1(a , b)=b(a , b).consider the case 1<k,10,(the latter is typical of our biological model ) so that the second term of eq. ( [ pkif ] ) can be neglected .evaluating the remaining gamma functions gives p_k = e^-nk(k-1).[cutus]ref . computes the degree distribution for this model in analytic manner , using the approximation eq .( [ gdelta ] ) in which is treated as a continuous variable and therefore `` is expected to perform poorly for small values of '' .the result of is p_k^bps = e^-nk^2 + e^-(k- n)[cutum]which corresponds to agreement ( for ) within the stated domain of accuracy of ref .the confluence of eq .( [ cutus ] ) and eq .( [ cutum ] ) provides a verification of the accuracy of the averaging approximation . the results for seem to disagree ,so we examine this more closely . use eq .( [ gsharp ] ) directly to obtain the generating function as .one obtains a result for all values of ( ) such that .using this generating function yields the result p_k = n = dg(g ) ( g-).the specific value of the integral depends on the choice of , but the result is a finite number for any choice of that satisfies the normalization condition that its integral over its domain is unity .thus we believe that the correct result of using the propagator ( eq(34 ) of in their eq(11 ) ) is p_k^bps = e^-nk^2[cutum1]instead of eq .( [ cutum ] ) , which is in agreement with our result .our approximation works very well in reproducing the computed clustering coefficient of .in particular , we evaluate of eq .( [ cofg ] ) to find that ( 2g-+1)).numerical evaluation of this approximate expression accurately reproduces the result of fig . 3 of ref .thus our mean field approximation is accurate for both our model and the model of ref .our principal application is to the the pin of ref .this model is based on the concept of free energy of association . 
for a given pair of proteins the association free energy ( in units of )is assumed to deviate from an average value a number contributed by both proteins additively as .this is a unique approximation to first - order in and .thermodynamics and the assumption that the interaction probability is independent of concentration allows us to write p(g , g)=1/ ( 1+e^-g - g),[pdef]which reduces to a step function in the zero temperature limit , but otherwise provides a smooth function . increasing the value of weakens the strength of interactions , and previous results showed the existence of an evolutionary trend to weaker interactions in more complex organisms .the probability that a protein has a value of is given by the probability distribution _( g)=ee^-g , -1g+,[rhodef]where the positive real value of governs the fluctuations of .we previously chose the species - dependent values of and so as to reproduce measured degree distributions obtained using the yeast two - hybrid method ( y2h ) that reports binary results for protein - protein binding under a controlled setting .those parameters are displayed in table i. the impact of the parameters and are explained in ref . and displayed in fig .3 of that reference . increasing the value of increases the causes a more rapid decrease of slope of increases in magnitude . increasing the value of decreases the magnitude of without altering the slope much for values of greater than about 10 .the ability to vary both the slope and magnitude of gives this model flexibility that allows us to describe the available degree distributions for different species .[ table1 ] .parameters obtained in ref . [ cols="^,^,^,^",options="header " , ] we obtain an analytic form for the for eq . ( [ pave ] ) of this model .given eq .( [ rhodef ] ) and eq .( [ pdef ] ) we find an analytic result : |p(g,)=_2f_1(1,;+1;-),[analpbar]where is the confluent hypergeometric function .the special case yields a closed form expression : obtained in contrast with the result of the sharp cutoff model eq . ( [ pbarss ] ) .this shown in fig . [fig : pbar ] ..95 cm ( 10,10)(5,0 ) ( color online ) average connection probability , .solid ( red ) : result of eq .( [ special ] ) ; dashed ( blue ) ( containing the step function ) result of eq .( [ pbarss ] ) .the approach to unity is smooth for eq .( [ special]).,title="fig:",width=10,height=377 ] it is useful to define the variable > 0 , and note that an integral representation _2f_1(n,;+1;-)=_0 ^ 1dtt^-1(1+t)^-n,[alg]is convenient for numerical evaluations .knowledge of the propagator eq .( [ glo ] ) allows us to compute the clustering coefficients of diverse species .the resulting degree distributions of ( shown for the sake of completeness ) and the newly computed clustering coefficients for yeast _ s. cerevisiae _ , worm _ c. elegans _ and fruit fly_ d. melanogaster _ are shown in fig .the parameters and are those of , so the calculations of the clustering coefficients represent an independent major new prediction of our model .results of numerical simulations and our analytic procedure are presented .the excellent agreement between the two methods verifies the approximation .more importantly , the agreement between our calculations and the measured clustering coefficients is generally very good , so our model survives a very significant test .this bolsters the notion that the properties of a pin are determined by a distribution of free energy . 
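the analytic degree distribution and clustering coefficient can also be cross-checked against direct simulation of the hidden-variable ensemble , in the spirit of the numerical simulations referred to below . the sketch that follows is one such monte carlo generator ; the exponential free-energy density and the fermi connection probability follow eqs. ( [ rhodef ] ) and ( [ pdef ] ) in spirit , but the parameter values are placeholders rather than the fitted values of table i , and the exact argument of the fermi function used here ( in units of ) should be checked against the definitions above .

```python
import numpy as np
import networkx as nx
from collections import defaultdict

rng = np.random.default_rng(0)

def simulate_pin(n=2000, lam=0.5, delta=12.0):
    """One Monte Carlo realization of a hidden-variable protein network (sketch)."""
    g = -1.0 + rng.exponential(1.0 / lam, size=n)        # free energies, support [-1, inf)
    net = nx.Graph()
    net.add_nodes_from(range(n))
    for i in range(n - 1):
        # Fermi connection probability for pairs (i, j) with j > i
        pij = 1.0 / (1.0 + np.exp(delta - g[i] - g[i + 1:]))
        hits = np.nonzero(rng.random(n - i - 1) < pij)[0] + i + 1
        net.add_edges_from((i, int(j)) for j in hits)
    return net

def empirical_pk_ck(net):
    """Empirical degree distribution P_k and clustering coefficient C(k)."""
    clu = nx.clustering(net)
    groups = defaultdict(list)
    for node, k in net.degree():
        groups[k].append(clu[node])
    n = net.number_of_nodes()
    pk = {k: len(v) / n for k, v in groups.items()}
    ck = {k: float(np.mean(v)) for k, v in groups.items() if k > 1}
    return pk, ck

net = simulate_pin()
pk, ck = empirical_pk_ck(net)
print("mean degree:", 2 * net.number_of_edges() / net.number_of_nodes())
```

averaging such realizations over many seeds gives the simulated points that the analytic curves are compared against .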
the clustering coefficient for yeast drops rapidly for large values of ( where statistics are poor ) , a feature not contained in our model .it is worthwhile to compare our model with that of .that work chooses a gaussian form of , based on hydrophobicity , a step function form of , and is applied only to yeast .we found that of is scale free only for a narrow range of parameters , and we could not reproduce the data for diverse species using that model ..95 cm ( 10,10)(2,-1 . )( color online ) degree distributions and clustering coefficients of diverse species .degree distributions : the solid ( red ) curves are derived from the theory .the black dots are the results of experimental data as referenced in the text .the small ( blue ) circles are the results of a numerical simulation using the procedure of . clustering coefficients : the solid ( red ) curves are derived from the theory .the small ( blue ) dots are the results of a numerical simulation using the procedure of and the heavy ( black ) dots represent the measured data ., title="fig:",width=10,height=566 ] the human interactome is of special interest .[ hdegc]a shows the human degree distributions computed with two sets of parameters , one from ref . ( table i ) and the other using values of shown in the caption .the degree distributions are essentially identical , so only one curve can be shown .each is approximately of a power law form and each describes the measured degree distribution very well .calculations of degree correlations allows one to distinguish the two parameter sets .figure [ hdegc]b shows that the cluster coefficients differ by a factor of two .we find that decreases substantially as increases .the increase in reduces the allowed spread in the value of and reduces the value of integrand of eq .( [ eq:21 ] ) .it is interesting to note that the two existing measurements of the human differ by a factor of about an order of magnitude with the measurements of ref. obtaining much smaller values than those of .the results of are closer to our computed results for .in contrast with the results for other species , our lie significantly above the data . however ,the two data sets disagree substantially ( by a factor of as much as 100 for certain values of ) and both show a clustering coefficient that is generally significantly smaller than that of the other species . 
several possibilities may account for the discrepancies between these two measurements of in humans and also for the differences between our model predictions and the experimental results .i ) the human studies sample a limited subset of links of the complete network and this could bias the results .ii ) the human protein subsets used in the two studies differ .iii ) the human interactome is truly less connected than that of other species .this demonstrates the importance of measuring degree correlations to determine the underlying properties of the network .the current model and these considerations suggest the need for better design of future pin studies that will not only include other species , but also comparisons between the pins of different organs of a given species .furthermore , comparisons between normal and malignant tissues could also be very fruitful ..9 cm ( color online ) human degree distribution ; the solid ( red ) curve is obtained using both set a and set b .the black dots represent the experimental data .the data set is that of , but nearly identical data is obtained from .human cluster coefficient : the solid ( red ) curve is computed using set a and the dashed ( green ) using set b . measured human clustering coefficients are from triangles ( blue ) and heavy dots ( pink ) ., title="fig:",width=9 ]in summary , this work provides a method to obtain the properties of hidden variable network models . the use of the approximation eq .( [ pave ] ) , used to obtain the propagator eq .( [ glo ] ) , provides an excellent numerical approximation to exact results for the models considered here . if necessary, the method can be systematically improved through the calculation of higher order corrections .our principal example is the pin of ref . . not only does the use of eq .( [ glo ] ) provide an accurate numerical result , but the model correctly predicts the clustering coefficients of most species . for the human interactome ,two different parameter sets yield nearly the same degree distribution but very different clustering coefficients , showing the importance of measuring degree correlations to determine the underlying nature of the network .99 s. h. strogatz , nature ( london ) * 410*,268 ( 2001 ) .r. albert and a .- l.barabsi , rev .mod . phys .* 74*,47 ( 2002 ) , siam rev .* 45 * , 167 ( 2003 ) .pastor - satorras , a. vzquez , and a. vespignani , phys .lett . * 87 * , 258701 ( 2001 ) .b. alberts _ et al ._ , _ the cell _ , ( garland science , new york 2002 ). g. caldarelli , a. capocci , p.delosrios , and m. a. muoz , phys .* 89 * , 258702 ( 2002 ) .b. sderberg , phys .e * 66 * , 066121 ( 2002 ) .m. bogu and r. pastor - satorras , phys .e * 68 * , 036112 ( 2003 ) .yi y. shi , g.a .miller , h. qian , and k. bomsztyk , proc .sci . * 103 * , 11527 ( 2006 ) .deeds , o. ashenberg , and e.i .shakhonovich , proc .sci . * 103 * , 311 ( 2006 ) .m. abramowitz and i. a. stegun , _ handbook of mathematical functions _ , ( dover , new york 1970 ) . s. abe and s. thurner , phys .e * 72 * , 036102 ( 2005 ) ; s. abe and s. thurner , int . j. modc * 17 * , 1303 ( 2006 ) .s. fields and s. song , nature * 340 * , 245 ( 1989 ) .networks / resources / protein / bo.dat.qz s. li , _ et al ._ , science * 303 * , 540 ( 2004 ) .l. giot _ et al ._ , science * 302 * , 1727 ( 2003 ) .the value of is a testable result of our model , even though experimentalists do not measure this quantity .the predicted number of proteins with no interactions is , where the value of is given in table i. 
the experimentalists conventionally normalize their distributions as , so we multiply our computed by a factor of so that the computed sum is unity . rual , _ et al . _ , nature * 437 * , 1173 ( 2005 ) . u. stelzl , _ et al . _ , cell * 122 * , 957 ( 2005 ) .

the properties of certain networks are determined by hidden variables that are not explicitly measured . the conditional probability ( propagator ) that a vertex with a given value of the hidden variable is connected to k other vertices determines all measurable properties . we study hidden variable models and find an averaging approximation that enables us to obtain a general analytical result for the propagator . analytic results showing the validity of the approximation are obtained . we apply hidden variable models to protein - protein interaction networks ( pins ) in which the hidden variable is the association free energy , determined by distributions that depend on biochemistry and evolution . we compute degree distributions as well as clustering coefficients of several pins of different species ; good agreement with measured data is obtained . for the human interactome , two different parameter sets give the same degree distributions , but the computed clustering coefficients differ by a factor of about two . this shows that degree distributions are not sufficient to determine the properties of pins .
abstractive summarization has gained popularity due to its ability of generating new sentences to convey the important information from text documents .an abstractive summarizer should present the summarized information in a coherent form that is easily readable and grammatically correct .readability or linguistic quality is an important indicator of the quality of a summary . several text - to - text ( t2 t ) generation techniques that aim to generate novel text from textual inputhave been developed . however , to the best of our knowledge , none of the above methods explicitly model the role of linguistic quality and only aim at maximizing information content of the summaries . in this work ,we address readability by assigning a log probability score from a language model as an indicator of linguistic quality . more specifically, we build a novel optimization model for summarization that jointly maximizes information content and readability .extractive summarizers often lose a lot of information from the input as they only `` extract '' a few important sentences from the documents to create the final summary .we prevent information loss by aggregating information from multiple sentences .we generate clusters of similar sentences from a collection of documents .multi - sentence compression ( msc ) can be used to fuse information from sentences in a cluster .however , msc might generate sentences that convey similar information from two different clusters .by contrast , our integer linear programming ( ilp ) based approach prevents redundant information from being included in the summary using a inter - sentence redundancy constraint .consequently , our experiments reveal that our method generates more informative and readable summaries than msc .our proposed approach to abstractive summarization consists of the following two steps : ( 1 ) aligning similar sentences from multiple - documents and ( 2 ) generating the most informative and linguistically well - formed sentence from each cluster , and then appending them together . in multi - document summarization ,all documents are not equally important ; some documents contain more information on the main topics in the document set .our first step estimates the importance of a document in the whole dataset using _ lexrank _ , _ pairwise cosine similarity _ and _ overall document collection similarity_. each sentence from the most important document are initialized into separate clusters .thereafter , each sentence from the other documents are assigned to the cluster that has the highest similarity with the sentence . in the generation step ,we first generate a word - graph structure from the sentences in each cluster and construct shortest paths from the graph between the start and end nodes .we formulate a novel integer linear programming ( ilp ) problem that maximizes the information content and linguistic quality of the generated summary .our ilp problem represents each of the shortest paths as a binary variable .the coefficients of each variable in the objective function is obtained by combining the information score of the path and the linguistic quality score .we introduce several constraints into our ilp model .we ensure that only one sentence is generated from each cluster .second , we avoid redundant sentences that carry the same or similar information from different clusters .the solution to the optimization problem decides the paths that would be included in the final abstractive summary . 
on the duc2004 and duc2005 datasets, we demonstrate the effectiveness of our proposed method .our proposed method outperforms not only some popular baselines but also the state - of - the - art extractive summarization systems .rouge scores obtained by our system outperforms the best extractive summarizer on both the datasets .our method also outperforms an abstractive summarizer based on multi - sentence compression when measured by rouge-2 , rouge - l and rouge - su4 scores .further , manual evaluation by human judges shows that our technique produces summaries with acceptable linguistic quality and high informativeness .several researchers have developed abstractive summarizers .genest and lapalme used natural - language - generation ( nlg ) systems .however , nlg requires a lot of manual effort in terms of defining schemas as well as using deeper natural language analysis .wang and cardie and oya _ et al . _ induced templates from the training set in their meeting summarization tasks .such induction of templates , however , is not very effective in news summarization because of the variability in topics . unlike these methods, our method does not induce any templates but generates summaries in an unsupervised manner by combining information from several sentences on the same topic .berg - kirkpatrick _ et al . _used an ilp formulation that jointly extracts and compresses sentences to generate summaries .however , their method is supervised and requires significant manual effort to define features for subtree deletions , which is required to compress sentences .graph - based techniques have also been very popular in summarization .et al . _ employed a graph - based approach to generate concise abstractive summaries from highly redundant opinions . compared with their opinionated texts such as product reviews ,the target documents in multi - document summarization do not contain such high level of redundancy .more recently , mehdad _ et al . _ proposed a supervised approach for meeting summarization , in which they generate an entailment graph of sentences .the nodes in the graph are the linked sentences and edges are the entailment relations between nodes ; such relations help to identify non - redundant and informative sentences .their fusion approach used msc , which generates an informative sentence by combining several sentences in a word - graph structure .however , filippova s method produces low linguistic quality as the ranking of generated sentences is based on edge weights calculated only using word collocations . by contrast , our method selects sentences by jointly maximizing informativeness and readability and generates informative , well - formed and readable summaries .figure [ approach - general ] shows our proposed abstractive summarization approach , which consists of the following two steps : * _ sentence clustering _ , * _ summary sentence generation_. given a document set , that consists of documents ( , , , ... , ) , our approach first generates clusters ( , , , ... 
, ) of similar sentences , and then use the individual clusters to create word - graphs .a maximum of one novel sentence is generated from each word - graph with the goal of maximizing information content and linguistic quality of the entire summary .the sentence clustering step ( s1 ) has two important components : the first ( s1 - 1 ) identifies the most important document in before the final cluster generation step ( s1 - 2 ) that generates clusters of similar sentences .we experiment with several techniques to identify , and then align sentences from other documents to the sentences in .it proves to be a simple , yet effective technique for generating clusters containing similar information .our approach is inspired by the findings of wan that showed how the incorporation of document impact can improve the performance of summarization . in ( s2 ) , we create a directed word - graph structure from the sentences in each cluster . in the word - graph ,the nodes represent the words and the edges define the adjacency relations in the sentences . from the word - graph ,multiple paths between the start and the end nodes can be extracted .we extract shortest paths from each cluster , and finally retain the paths that maximize information content and linguistic quality using an ilp based approach .we impose constraints on the maximum number of sentences that are generated from each cluster and also impose constraints to avoid redundancies such that similar information from different clusters are not included in the summary .information content is measured using textrank , which scores sentences based on the presence of keywords .we measure linguistic quality using a 3-gram language model that assigns confidence values to sequences of words in the sentences . in this section ,we describe both steps s1 and s2 .we initialize clusters of sentences using each sentence from the most important document , , in a document set .our intuition behind this approach is that consists of the most important content relevant across all the documents in . in other words ,the document that is most close to the central content of the collection is the most informative .we propose several techniques to identify .* lexrank ( ) : * lexrank constructs a graph of sentences where the edge weights are obtained by the inter - sentence cosine similarities . while the original lexrank constructs a graph of sentences ,we construct a graph of documents to compute document importance .equation ( [ lpr - equation2 ] ) shows how lexrank scores are computed using weighted links between the nodes ( documents ) .this equation measures the salience of a node in the graph , which is the importance of the document in the entire document collection .let be the centrality of node .lexrank is then defined as follows : }\frac{\textrm{idf - modified - cosine}(u , v)}{\sum_{z\in adj[v]}\textrm{idf - modified - cosine}(z , v)}p(v), ] and are the set of nodes that are adjacent to and the total number of nodes in the graph , respectively .the damping factor is usually set to 0.85 , and we set to this value in our implementation . is determined as the document that has the highest lexrank score in once the above equation converges .* pairwise cosine similarity ( ) : * this method computes the average cosine similarity between the target document and the other documents in the dataset .the average similarity is calculated using the following formula : where denotes the number of documents in the document set . 
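the document-level lexrank of equation ( [ lpr - equation2 ] ) amounts to a damped power iteration on the matrix of idf-modified cosine similarities between documents . a minimal sketch follows ; the similarity matrix `sim` is assumed to be precomputed ( for example with a tf-idf vectorizer ) , and the damping factor is 0.85 as above .

```python
import numpy as np

def lexrank(sim, d=0.85, tol=1e-8, max_iter=1000):
    """Damped power iteration for document-level LexRank (eq. lpr-equation2)."""
    n = sim.shape[0]
    w = np.array(sim, dtype=float)
    np.fill_diagonal(w, 0.0)
    col = w.sum(axis=0)
    col[col == 0.0] = 1.0                  # guard against isolated documents
    t = w / col                            # t[u, v] = sim(u, v) / sum_z sim(z, v)
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_new = (1.0 - d) / n + d * t.dot(p)
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p

# the most important document D_imp is the highest-scoring node:
# d_imp = int(np.argmax(lexrank(sim)))
```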
*overall document collection similarity ( ) : * this method computes the cosine similarity between the target document and the whole document set .we create the whole document set by concatenating text from all the documents in .this method is defined as follows : in , , and mentioned above , we select the document with the highest score as the most important one in the dataset .next , we generate the clusters by aligning sentences and re - ordering them based on original positions of the sentences in the documents .the sentences from each of the other documents ( ) in are assigned to the clusters one - by - one based on cosine similarity measure .our approach computes pairwise cosine similarity of each sentence in to all the sentences in .for example , a sentence in , has the highest similarity with , a sentence in .then , we assign to cluster , in which was initially assigned . some sentences in might not be similar to any of the sentences in .hence , we only align sentences when the similarity .further , we only retain clusters that have at least sentences , assuming that a content is relevant only if it exists in half of the documents in .* cluster ordering : * we implement two _ cluster ordering _ techniques that reorder clusters based on the original position of the sentences in the documents . 1 ._ majority ordering ( mo ) : _ given two clusters , and , the set of common documents from which the sentences are assigned to the two clusters are identified .if and have sentences and ( ) , respectively , where is the common document , then precedes . the final order is determined based on overall precedence of the sentences of one cluster over the others ._ average position ordering ( apo ) : _ the sentences in any cluster are each assigned a normalized score . for example , the normalized score of is computed as the ratio of the original position of the sentence and the total number of sentences in ( here , belongs to document ) . when ordering two clusters , the cluster that has the lower score obtained by averaging the normalized scores of all the sentences is ranked higher than the others . in order to generate a one - sentence representation from a cluster of redundant sentences , we use multi - sentence compression .we generate multiple sentences from a cluster using a word - graph .suppose that a cluster contains sentences , . 
a directed graph is created by adding sentences from to the graph in an iterative fashion .each sentence is connected to dummy _ start _ and _ end _ nodes to mark the beginning and ending of the sentences .the vertices or nodes are the words along with the parts - of - speech ( pos ) tags .we connect adjacent words in the sentences with directed edges .once the first sentence is added , words from the following sentences are mapped onto a node in the graph provided that they have exactly the same word form and the same pos tag .the sequence of rules used for the word - graph construction is as follows : the context of the words are taken into consideration if multiple mappings are possible , and the word is mapped to that node that has the highest directed context .we also add punctuations to the graph .figure [ fig : word - graph - generation ] shows a simple example of the word - graph generation technique .we do not show pos and punctuations in the figure for clarity .consider the following two sentences as an illustration of our generation approach : as shown in the examples above , the two sentences contain similar information , but they are syntactically different .the solid directed arrows connect the nodes in eg.1 , whereas the dotted arrows join the nodes in eg.2 .we can obtain several shortest paths between the start and end nodes . in figure[ fig : word - graph - generation ] , we highlight one such path using gray rectangles .several other paths are possible , for example : the original input sentences from the cluster are also valid paths between the _ start _ and _ end _ nodes .to ensure pure abstractive summarization , we remove such paths that are same or very similar ( cosine similarity 0.8 ) to any of the original sentences in the cluster .similar to filippova s word - graph construction , we set the minimum path length ( in words ) to eight to avoid incomplete sentences .finally , we retain a maximum of 200 randomly selected paths from each cluster to reduce computational overload of the ilp based approach .our aim is to select the best path from all available paths . from 200 paths in each cluster ,we choose at most one path that maximizes information content and linguistic quality together .let be each path in a cluster , namely , , where the total number of shortest paths is equal to = $ ] where refers to the maximum number of paths that can be generated from a cluster .we argue that the shortest paths that we select in the final summary should be informative as well as linguistically readable .hence , we introduce two factors _ informativeness _ ( ) and _ linguistic quality _( ) . [ cols="<,^,^,<,^,^ " , ] * informativeness : * in principle, we can use any existing method that computes the importance of a sentence to define _ informativeness_. in our model , we use textrank scores to generate an importance value of a sentence within a cluster .textrank creates a graph of words from the sentences .the score of each node in the graph is calculated as shown in equation ( [ lpr - textrank ] ) : where represents the words , denotes the adjacent nodes of and is the damping factor set to 0.85 .the computation converges to return final word importance scores .the informativeness score of a path ( ) is obtained by adding the importance scores of the individual words in the path . 
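a compact way to prototype the generation step is to build the word graph with networkx and score candidate paths with textrank word weights . the sketch below is a deliberate simplification of the construction described above : words are merged on ( word , pos ) identity only , the context-based disambiguation and punctuation handling are omitted , and candidate paths are enumerated as shortest simple paths rather than the 200 randomly retained paths .

```python
import itertools
import networkx as nx

START, END = ("<start>", ""), ("<end>", "")

def build_word_graph(tagged_sentences):
    """Word graph over (word, POS) nodes; edges follow adjacency in each sentence."""
    g = nx.DiGraph()
    for sent in tagged_sentences:                 # sent: list of (word, pos) pairs
        nodes = [START] + [(w.lower(), t) for w, t in sent] + [END]
        for a, b in zip(nodes, nodes[1:]):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

def candidate_paths(g, k=200, min_len=8):
    """Up to k shortest simple paths between the start and end nodes."""
    gen = nx.shortest_simple_paths(g, START, END)
    paths = []
    for path in itertools.islice(gen, k):
        words = [w for w, _ in path[1:-1]]
        if len(words) >= min_len:
            paths.append(words)
    return paths

def informativeness(path, word_scores):
    """Path score = sum of TextRank word importances (eq. lpr-textrank)."""
    return sum(word_scores.get(w, 0.0) for w in path)

# word_scores can come from a damped PageRank on a word co-occurrence graph, e.g.
# word_scores = nx.pagerank(word_cooccurrence_graph, alpha=0.85)
```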
*linguistic quality : * in order to compute _ linguistic quality _, we use a language model .more specifically , we use a 3-gram ( trigram ) language model that assigns probabilities to sequence of words .suppose that a path contains a sequence of words .the score assigned to each path is defined as follows : where is defined as : as can be seen from equation ( [ eqn : ll ] ) , we obtain the conditional probability of different sets of 3-grams in the sentence .the scores are combined and averaged by , the number of conditional probabilities computed .the scores are negative ; with higher magnitude implying lower readability .therefore , in equation ( [ eqn : ll2 ] ) , we take the reciprocal of the logarithmic value with smoothing to compute . in our experiments, we used a 3-gram model that is trained on the english gigaword corpus[multiblock footnote omitted ] . to select the best paths from the clusters , we combine informativeness and linguistic quality in an optimization framework .we maximize the following objective function : each represents a binary variable , that can take 0 or 1 , depending on whether the path is selected in the final summary or not .in addition , , the number of tokens in a path , is also taken into consideration and the term assigns more weight to shorter paths so that the system can favor shorter informative sentences .we introduce several constraints to solve the problem .first , we ensure that a maximum of one path is selected from each cluster using equation ( [ eqn : maxone ] ) .we introduce equation ( [ eqn : sim ] ) so that we can prevent similar information ( cosine similarity 0.5 ) from being selected from different clusters . in figure[ fig : word - graph - generation ] , this constraint ensures that only one of the several possible paths mentioned in the example is included in the final summary as they contain redundant information .we evaluated our approach on the duc 2004 and 2005 datasets on multi - document summarization .we use rouge ( recall - oriented understudy of gisting evaluation ) for automatic evaluation of summaries ( compared against human - written model summaries ) as it has been proven effective in measuring qualities of summaries and correlates well to human judgments .we proposed three document importance measures and two different sentence ordering techniques as described in section [ sec : method ] .hence , we have six different systems in total . to the best of our knowledge ,no publicly available abstractive summarizers have been used on the duc dataset .therefore , we compare our system to msc that generates a sentence from a collection of similar sentences using only syntactical information from the source sentences . in msc ,the input is a pre - defined cluster of similar sentences .therefore , we compare our ilp based technique with msc using the same set of input clusters obtained by our system . table [ table : comprouge ] shows the following rouge scores for our evaluation : the summaries generated by the baselines and the state - of - the - art extractive summarizers on the duc 2004 data were collected from .rouge-2 and rouge - su4 scores have been found to be highly correlated with human judgments .therefore , we computed rouge-2 and rouge - su4 scores of the other systems on the duc2004 summaries directly using rouge . however , the system - generated summaries ( baselines and state - of - the - arts ) were not available for the duc 2005 dataset .hence , we used rouge scores of the various systems as reported in . 
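the path-selection step can be prototyped with an off-the-shelf mixed-integer solver . the sketch below uses pulp ( an assumption ; any ilp library would do ) and assumes the combined coefficient for a path is its informativeness times its linguistic-quality score divided by its token count , which matches the description above in spirit even if the exact functional form may differ . here `paths[c]` holds the candidate paths of cluster c , `score[c][i]` their combined coefficients , and `similar` the set of cross-cluster path pairs whose cosine similarity exceeds 0.5 .

```python
import pulp

def select_paths(paths, score, similar):
    """ILP: pick at most one path per cluster, maximizing the combined score,
    while forbidding cross-cluster pairs that carry redundant information."""
    prob = pulp.LpProblem("abstractive_summary", pulp.LpMaximize)
    x = {(c, i): pulp.LpVariable(f"x_{c}_{i}", cat="Binary")
         for c in paths for i in range(len(paths[c]))}

    # objective: combined informativeness / linguistic-quality coefficients
    prob += pulp.lpSum(score[c][i] * x[c, i] for (c, i) in x)

    # at most one path per cluster
    for c in paths:
        prob += pulp.lpSum(x[c, i] for i in range(len(paths[c]))) <= 1

    # redundancy: similar paths from different clusters cannot both be chosen
    for (c1, i1), (c2, i2) in similar:
        prob += x[c1, i1] + x[c2, i2] <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(c, i) for (c, i) in x if x[c, i].value() == 1]
```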
according to table [table : comprouge ] , all of the rouge scores obtained by our systems outperform all the baselines on both datasets . hereafter , we refer to the best performing system as * ilpsumm*. we perform paired t - test and observe that ilpsumm shows statistical significance compared to all the baselines .the summarization method using measure as the most informative document and ranked by majority ordering ( mo ) outperforms all of the other techniques .the document that has the highest similarity to the total content captures the central idea of the documents .the clustering scheme that works best with msc is + apo .ilpsumm also outperforms the msc - based method , i.e. , our approach can generate more informative summaries by globally maximizing content selection from multiple clusters of sentences . in summary , content selection of our proposed abstractive systems work at par with the best extractive systems .* discussion : * our proposed system identifies the most important document , which is a general human strategy for summarization .the majority ordering strategy prioritizes clusters that contain sentences which should be mentioned earlier in a summary .other systems tackle redundancy as a final step ; however , we integrate linguistic quality and informativeness to select the best sentences in the summary using our ilp based approach .we performed the rest of our experiments only on the duc 2004 dataset as it has been widely used for multi - document summarization .we also determine readability of the generated summaries by obtaining ratings from human judges . following liu and liu ,we ask 10 evaluators to rate 10 sets of four summaries on two different factors _ informativeness _ and _ linguistic quality_. the ratings range from 1 ( lowest ) to 5 ( highest ) .all the evaluators have a good command of english and seven of them are native speakers .evaluators were asked to rate the summaries based on informativeness ( the amount of information conveyed ) and linguistic quality ( readability of the summary ) .we randomized the sets of summaries to avoid any bias .0.52l|cc|c * type * & * inf * & * lq * & * avg.ll * + human written & 4.42 & 4.35 & -129.02 + extractive ( dpp ) & 3.90 & 3.81 & -142.70 + abstractive ( msc ) & 3.78 & 2.83 & -210.02 + abstractive ( ilpsumm ) & 4.10 & 3.63 & -180.76 + the four summaries provided to the evaluators are human - written summary ( one summary collected randomly from four model - summaries per cluster ) , extractive summary ( dpp ) , abstractive summary generated using msc ( msc ) and abstractive summary generated using our ilp based method ( ilpsumm ) .we asked each evaluator to complete 10 such tasks , each containing four summaries as explained above .we normalize ratings of different evaluators to the same scale .table [ tab : humaneval ] shows the results obtained by manual evaluation .according to the judges , the linguistic quality of * ilpsumm * ( 3.63 ) is significantly better than that of * msc * ( 2.83 ) .further , our summaries ( ilpsumm ) are more informative than dpp ( 3.90 ) and msc ( 3.78 ) .dpp is extractive in nature , hence linguistically , the sentences are generally more readable . 
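for readers who wish to reproduce an automatic evaluation of this kind , the open-source `rouge-score` package offers a convenient stand-in for the official rouge toolkit used in the literature ( it reports rouge-1 , rouge-2 and rouge-l but not rouge-su4 ) ; using it here is our assumption , not the setup of the original evaluations .

```python
from rouge_score import rouge_scorer

def evaluate(system_summary, reference_summaries):
    """Average ROUGE-1/2/L F-scores of a system summary against several references."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
    for ref in reference_summaries:
        scores = scorer.score(ref, system_summary)     # score(target, prediction)
        for name in totals:
            totals[name] += scores[name].fmeasure
    return {name: v / len(reference_summaries) for name, v in totals.items()}
```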
to obtain a coarse estimate of grammaticality, we also compute the confidence scores of the summaries using the stanford dependency parser .a language model assigns probabilities to sequence of words ; in contrast , the confidence score of a parser assigns probabilities to grammatical relations .the values ( the lower the magnitude , the better ) are shown in the column _ avg.ll_. _avg.ll _ obtained by ilpsumm ( -180.76 ) is better than that obtained by msc ( -210.02 ) , indicating that the language model based linguistic quality estimation helps generate more readable summaries than the msc method .table [ tab : samplesumm ] shows a comparison of summaries from the different systems using the duc 2004 dataset . as can be seen , the linguistic quality of the abstractive summaries ( ilpsumm ) is acceptable , and the content is well - formed and informative . our ilp framework can combine information from various sentences and present a fairly well - formed readable summary .p0.45 ' '' '' * abstractive summary ( ilpsumm ) : * hun sen s cambodian people s party won 64 of the 122 parliamentary seats in july .opposition ally sam rainsy charged that hun sen s party has rejected allegations of intimidation and fraud .hun sen and ranariddh are to form working groups this week to divide remaining government posts . buta deal reached between hun sen and his chief rival , prince norodom ranariddh s ally , sam rainsy . +* extractive summary ( dpp ) : * ranariddh and sam rainsy have charged that hun sen s victory in the elections was achieved through widespread fraud .hun sen said his current government would remain in power as long as the opposition refused to form a new one .cambodian leader hun sen , who heads the cpp , has offered to share the legislature s top job with the royalist funcinpec party of prince norodom ranariddh in order to break the impasse . + * human - written summary : * cambodian prime minister hun sen rejects demands of 2 opposition parties for talks in beijing after failing to win a 2/3 majority in recent elections .sihanouk refuses to host talks in beijing .opposition parties ask the asian development bank to stop loans to hun sen s government .ccp defends hun sen to the us senate .funcinpec refuses to share the presidency .hun sen and ranariddh eventually form a coalition at summit convened by sihanouk . ' '' '' ' '' '' * abstractive summary ( ilpsumm ) : * lebanese foreign minister kamal kharrazi made the mediation offer sunday , in a telephone conversation with his syrian counterpart , farouk al - sharaa .egyptian president hosni mubarak met here sunday with syrian president hafez assad to show lebanon s support for syria and turkey . in a show of force on friday, turkish troops were deployed this week on the turkish - syrian border to eradicate krudish rebel bases .+ * extractive summary ( dpp ) : * egyptian president hosni mubarak met here sunday with syrian president hafez assad to try to defuse growing tension between syria and turkey .the talks in damascus came as turkey has massed forces near the border with syria after threatening to eradicate kurdish rebel bases in the neighboring country .egypt already has launched a mediation effort to try to prevent a military confrontation over turkish allegations that syria is harboring turkish kurdish rebels . +* human - written summary : * tensions between syria and turkey increased as turkey sent 10,000 troops to its border with syria . 
the dispute comes amid accusations by turkey that syria helping kurdish rebels based in syria .kurdish rebels have been conducting cross border raids into turkey in an effort to gain kurdish autonomy in the region . ' '' '' * error analysis : * there is still room for improvement in the linguistic quality of the generated summaries .we analyzed the summaries that were given lower ratings than the other options on the basis of linguistic quality .consider the following sentence in a system generated summary , which received low scores from human judges : as can be seen , the phrase `` killed 270 people killed '' is not coherent .the language model fails to identify such cases as the 3-gram sequences of _ killed 270 people _ and _ 270 people killed _ are both grammatically coherent .in addition to a language model , we can also use a dependency parser to assign lower weights to paths that have redundant dependencies on the same nodes . consider the following example : + the last phrase `` a government formed '' , is grammatically incoherent in the context of the sentence .linguistically correct modifications could be _ a government being formed _ or _ a government formation_. in future work , we plan to address such issues of grammaticality using dependency parses of sentences rather than just adjacency relations when building the word - graph .we have proposed an approach to generate abstractive summaries from a document collection .we capture the redundant information using a simple yet effective clustering technique .we proposed a novel ilp based technique to select the best shortest paths in a word - graph to maximize information content and linguistic quality of a summary .experimental results on the duc 2004 and 2005 datasets show that our proposed approach outperforms all the baselines and the state - of - the - art extractive summarizers . based on human judgments ,our abstractive summaries are linguistically preferable than the baseline abstractive summarization technique . in future work, we plan to use paraphrasing techniques to further enhance quality of the generated summaries .we also plan to address phrase level redundancies to improve coherence .this material is based upon work supported by the national science foundation under grant no . 0845487 .
abstractive summarization is an ideal form of summarization since it can synthesize information from multiple documents to create concise informative summaries . in this work , we aim at developing an abstractive summarizer . first , our proposed approach identifies the most important document in the multi - document set . the sentences in the most important document are aligned to sentences in other documents to generate clusters of similar sentences . second , we generate -shortest paths from the sentences in each cluster using a word - graph structure . finally , we select sentences from the set of shortest paths generated from all the clusters employing a novel integer linear programming ( ilp ) model with the objective of maximizing information content and readability of the final summary . our ilp model represents the shortest paths as binary variables and considers the length of the path , information score and linguistic quality score in the objective function . experimental results on the duc 2004 and 2005 multi - document summarization datasets show that our proposed approach outperforms all the baselines and state - of - the - art extractive summarizers as measured by the rouge scores . our method also outperforms a recent abstractive summarization technique . in manual evaluation , our approach also achieves promising results on informativeness and readability .
the 22-yr cycle of solar activity is a magnetic cycle which consists of two 11-years sunspot cycles .the 22-yr cycles begin on even sunspot cycles according to the zrich numbering ( ) and manifest themselves in reversal of polarity of sunspots ( hale s law ) . from one 11-yr cycle to another, the polarity of the preceding ( ) and following ( ) sunspots reverses .this reversal corresponds to the changing of the toroidal magnetic field ( -component ) . during even - numbered cycles ( the direction of the magnetic vector coincides with the direction of solar rotation ) in the northern ( ) hemisphere and in the southern ( ) hemispherethis relation is reversed during odd - numbered cycles . in parallel with this , the poloidal magnetic field , or background field ( -component ) , shows a 22-yr periodicity ( ) , and other studies ( ) have suggested that the low - latitude and polar fields similarly show a 22-yr periodicity .furthermore , these studies have hinted at a shorter term periodicity of about 2 yr ( high - frequency component ) .in addition , during the periods of the three - fold polar magnetic field reversals which occur during some sunspot cycle maxima , the temporal separation of the zones of alternated polarity of the magnetic field on charts is approximately equal to 1.52.5 years ( , ) .similar periodicities were found in variations of radio flux on ( ) , flares and sunspot areas ( ) . the high - frequency , 2-year , component is substantially weaker than the main 22-year cycle and its intensity varies with time .the two year component more clearly appeared in northern hemisphere in cycle 20 and in southern hemisphere in cycle 21 ; its value was smaller in cycle 22 ( ) .however , the biennial cycle represents a challenge to the solar dynamo theories which usually explain only the main cycle . in this letter, i present a new evidence of the two components of the solar cycle , and an attempt to explain the results in term of parker s dynamo theory ( ) .using kitt peak magnetograph data we show how synoptic ( longitude independent ) magnetic field patterns evolve through cycles 21 and 22 . each of the synoptic maps is represented by the values of as a function sine latitude and carrington longitude .the carrington coordinate system is a reference frame rotating rigidly at the rate which corresponds to the synodic rotation rate of sunspots at a latitude of about .the observed line - of - sight components ( ) averaging over all longitudes for each carrington rotation is represented in figure 1b .the time step in the time set of values corresponds to carrington rotation period ( days ) .the relative sunspot number is shown in figure 1a .component ( figure 1c ) , was calculated from observational data assuming that the true average field direction is radial and .the radial component will be easily found at all latitudes besides near the poles : where is colatitude . 
to separate the high - frequency component in the data , we apply a difference filter by computing for different time intervals .this is a reasonable procedure if the poloidal magnetic field consists of two components and is represented as two dynamo waves : where is the amplitude of the low - frequency radial component of the magnetic field ; is ratio between amplitudes of low - frequency and high - frequency components ; is the frequency of the hale s solar cycle ; is the frequency of biennial cycle ; is a phase .the expression for the can be written as when , as it took place in cycle 20 , then the low - frequency term dominates in the expression for ( equation [ field ] ) and the high - frequency term prevails in the expression for ( equation [ deriv ] ) .thus , presuming the existence of the double magnetic cycle , one finds that the low - frequency term prevails in the -component , while the high - frequency term dominates in .contour plots of as a function (or ) and time , _ t _ , measured in carrington rotation are represented in figure 1d for . during the cycles 21 and 22 the zones of increasing and decreasing strength of the surface magnetic field appear in both the and hemispheres ( see figure 1d ) .the width of these zones is approximately 2 years .this result coincides with investigations done earlier based on wilcox and mount wilson synoptic maps .corresponding power spectra for show that the component with period years is dominant in both hemispheres ( , ) .these results are the basis of our suggestion of a two component of solar cycle .the possible explanation of the double magnetic cycle is that the magnetic fields are generated by parker s dynamo acting in convective zone .the low - frequency component is generated at the base of the convective zone due to large scale radial shear ; is the angular velocity .the high - frequency component may be generated in subsurface regions due to latitudinal shear or due to radial shear .the recent investigations of solar interior rotation show a significant radial gradient of angular velocity exists in subsurface of the convective zone together with the latitudinal gradient of the angular velocity ( ) .for simplicity we use only latitudinal shear for generation of the high - frequency component .cartesian coordinates are employed , with denoting the radial , the azimuthal and the latitudinal coordinate .we consider axisymmetrical solutions ( ) . at the base of the convection zone turbulenceis suppressed by strong magnetic field ( ) and , therefore , diffusivity in the first layer will could be less then in the second layer .the axisymmetrical mean magnetic field is decomposed into toroidal and poloidal parts .the following set of equations describe evolution of the magnetic field in thin layers at the levels and : where is the toroidal magnetic field and is the azimuthal vector potential which gives poloidal field , is diffusivity and is a kinetic helicity , is the variable part of kinetic helicity .the first and the second equations in the both systems ( [ x1 ] ) and ( [ x2 ] ) are a generation of parker s equations for large - scale radial shear and large - scale latitudinal shear correspondingly .we have used the third equation in the both systems according to kleeorin and ruzmaikin ( 1982 ) in a simple form , for feedback of the magnetic field on the helicity . 
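the effect of the difference filter introduced above is easy to verify on a synthetic signal built from eq. ( [ field ] ) . in the sketch below the amplitude ratio and the two periods ( 22 yr and 2 yr ) are placeholders chosen only to illustrate that the low-frequency term dominates the field itself while the high-frequency term dominates its time derivative , as stated earlier in this section .

```python
import numpy as np

years = np.arange(0.0, 44.0, 27.2753 / 365.25)        # one sample per Carrington rotation
omega_l = 2.0 * np.pi / 22.0                           # Hale-cycle frequency
omega_h = 2.0 * np.pi / 2.0                            # quasi-biennial frequency
b0, b_ratio, phi = 1.0, 0.1, 0.3                       # placeholder amplitudes and phase

# eq. (field): two superposed dynamo waves
b_r = b0 * (np.sin(omega_l * years + phi) + b_ratio * np.sin(omega_h * years))

# difference filter ~ d(b_r)/dt over one-rotation steps
db_r = np.diff(b_r) / np.diff(years)

# relative weight of the high-frequency term in the signal and in its derivative
print("amplitude ratio in b_r      :", b_ratio)
print("amplitude ratio in d(b_r)/dt:", b_ratio * omega_h / omega_l)   # ~1.1 here
```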
in these equations and are parameters of the feedback .these two non - linear differential equation systems describe the evolution of two independent sources of the magnetic field . in the frame of this modelit is difficult to explain observed variations of the high - frequency component .therefore , we assume that the erupted low - frequency magnetic field can influence the physical conditions in the region where high - frequency component operates , through modifying the helicity there .this allows the high - frequency component to vary with time . in this case , the equation for becomes , where the parameter captures the feedback of the low - frequency component of the magnetic field on the helicity in regions where the high - frequency component is found . following weiss et al .( 1984 ) the solutions of these equations were found as and , i m for level and the same expressions for level where is replaced by . are complex functions as , .it is convenient to present the system in dimensionless units .since there will be an underlying periodicity in all calculations , we use those periods as relative units of time . in relative units , , , and , where is some mean value of the kinetic helicity .therefore , and are represented as dynamo numbers in our model .therefore , set of the partial differential equations reduced to the following equations were then solved numerically . we have investigated solutions of these two non - linear systems as functions of the parameter .results for three principal cases , , and are represented in figure 3 , where , , , , , , which are in the reasonable range for the sun . if we have two independent sources of the magnetic field which together generate low - frequency and high - frequency signatures ( fig . 2 a , b ) .for a weak interaction between these sources ( ) we still get a low and high frequencies response .. 2 c , d ) .moreover , the high- frequency component becomes more regular , and consequently is decreased when increases . with further increase of the feedback of the low - frequency magnetic field on the helicity near the top surfacethe high - frequency component disappears and only a low - frequency mode regime is established ( fig .2 e , f ) .therefore , in the case of weak interaction between two sources of magnetic field it is possible to obtain a stable double magnetic cycle .our model simulates the double magnetic cycle and temporal variations of the biennial cycles from one 11-year cycle to another . according to the magnetograph data ( )the 2-year component more clearly appeared in the northern hemisphere in cycle 20 and in the southern hemisphere in cycle 21 . in cycle 22it was present in both hemispheres but was of lower amplitude . 
in our model ,a reduced high - frequency component occurs when the erupted magnetic field of the main ( hale s ) cycle imposes the helicity in the regions of generation of the high - frequency component .the high - frequency component is more pronounced when the effect of erupting magnetic fields of the main cycle on helicity in this region is small .thus , this simple model provides a qualitative explanation of the double magnetic cycle .the next step to a quantitative model is in development of a 2d model in spherical geometry , since this simple model gives only a qualitative picture of the two - components of the solar magnetic cycle .i am grateful to dr .j.w.harvey for providing us the kitt peak magnetograph data and to drs .p.a.gilman , j.t.hoeksema , a.g.kosovichev and p.h.scherrer for useful discussions and russian federal programme `` astronomy '' , grant 1.5.3.4 . 99 akioka , m. , kubota , j. , suzuki , m. et al . 1987 , , * 112 * , 313 belmont , a.d . , darff , d.c . , ultad , m.s .j. atmos.sci._ , * 23 * , 314 benevolenskaya , e.e .1991 , in _ the sun and cool stars : activity , magnetism , dynamos _ , tuominen , i. , moss , d. and rdiger , g. ( eds . ) , springer - verlag , 234 benevolenskaya , e.e .1995 , , * 161 * , 1 benevolenskaya , e.e .1996 , , * 167 * , 47 gnevyshev , m.n . , and ohl , a.i .1948 , , * 25 * , 18 howard , r. , and labonte , b.j . 1981 , , * 74 * , 131 kleeorin , n.i . , and ruzmaikin , a.a .1982 _ magnitnaya gidrodinamika _ , no 2 , 17 parker , e.n . 1979 , _ cosmic magnetic fields _ , oxford university press parker , e.n .1993 , , * 408 * , 707 schou , j. , et al ., 1998 , , in press stenflo , j.o .1994 , in _ solar surface magnetism _ , rutten , r.j . and schrijver , c.j, nato advanced research workshop , kluwer waldmeier , m. 1973 , , * 28 * , 389 weiss , n.o ., cattaneo , f. , jones , c.a .1984 , _ geophys ._ , * 30 * , 305
it has been argued that the solar magnetic cycle consists of two main periodic components : a low - frequency component ( hale s 22-year cycle ) and a high - frequency component ( quasi - biennial cycle ) . the existence of the double magnetic cycle on the sun is confirmed using stanford , mount wilson and kitt peak magnetograph data from 1976 to 1996 ( solar cycles 21 and 22 ) . in the frame of parker s dynamo theory a model of the double magnetic cycle is presented . this model is based on the idea of two dynamo sources separated in space . the first source of the dynamo action is located near the bottom of the convection zone , and the second operates near the top . the model is formulated in terms of two coupled systems of non - linear differential equations . it is demonstrated that in the case of weak interaction between the two dynamo sources the basic features of the double magnetic cycle , such as the existence of the two components and the observed temporal variations of the high - frequency component , can be reproduced .

ebola virus disease ( evd ) is caused by a genus of the family _ filoviridae _ called _ ebolavirus_. the first recorded outbreak took place in sudan in 1976 with the longest most severe outbreak taking place in west africa during 2014 - 2015 .studies have estimated disease growth rates and explored the impact of interventions aimed at reducing the final epidemic size . despite these efforts , research that improves and increases our understanding of evd and the environments where it thrives is still needed . +this chapter is organized as follows : section 2 reviews past modeling work ; section three introduces a single patch model , its associated basic reproduction number , and the final size relationship ; section four introduces a two - patch model that accounts for the time spent by residents of patch on patch ; section 5 includes selected simulations that highlight the possible implications of policies that forcefully restrict movement ( _ cordons sanitaires_);and , section 6 collects our thoughts on the relationship between movement , health disparities , and risk .chowell et _ al . _ estimated the basic reproduction numbers for the 1995 outbreak in the democratic republic of congo and the 2000 outbreak in uganda .model analysis showed that control measures ( education , contact tracing , quarantine ) if implemented within a reasonable window in time could be effective .legrand et _ al . _ built on the work in through the addition of hospitalized and dead ( in funeral homes ) classes within a study that focused on the relative importance of control measures and the timing of their implementation .lekone and finkenstdt made use of an stochastic framework in estimating the mean incubation period , mean infectious period , transmission rate and the basic reproduction number , using data from the 1995 outbreak .their results turned out to be in close agreement with those in but the estimates had larger confidence intervals .the 2014 outbreak is the deadliest in the history of the virus and naturally , questions remain .chowell et _ al_. in recently introduced a mathematical model aimed at addressing the impact of early detection ( via sophisticated technologies ) of pre - symptomatic individuals on the transmission dynamics of the ebola virus in west africa .patterson - lomba et _ al_. in explored the potential negative effects that restrictive intervention measures may have had in guinea , sierra leone , and liberia .their analysis made use of the available data on ebola virus disease cases up to september 8 , 2014 .the focus on was on the dynamics of the``effective reproduction number '' , a measure of the changing rate of epidemic growth , as the population of susceptible individuals gets depleted . appeared to be increasing for liberia and guinea , in the initial stages of the outbreak in densely populated cities , that is , during the period of time when strict quarantine measures were imposed in several areas in west africa .their report concluded , in part , that the imposition of enforced quarantine measures in densely populated communities in west africa , may have accelerated the spread of the disease . 
in , the authors showed that the estimated growth rates of evd cases were growing exponentially at the national level. they also observed that the growth rates exhibited polynomial growth at the district level over three or more generations of the disease. it has been suggested that behavioral changes, or the successful implementation of control measures, or high levels of clustering, or all of them, may have been responsible for the polynomial growth. a recent review of mathematical models of past and current evd outbreaks can be found in and references therein. inspired by these results, we proceed to analyze the effectiveness of forced local restrictions on movement on the dynamics of evd. we study the dynamics of evd within scenarios that resemble evd transmission dynamics within locally interconnected communities in west africa. _cordons sanitaires_, or ``sanitary barriers'', are designed to prevent the movement, in and out, of people and goods from particular areas. the effectiveness of the use of _cordons sanitaires_ has been controversial. this policy was last implemented nearly one hundred years ago. in desperate attempts to control disease, public health officials in ebola-stricken countries decided to use this medieval control strategy in the evd hot-zone, that is, the region of confluence of guinea, liberia and sierra leone. in this chapter, a framework that allows, in the simplest possible setting, the possibility of assessing the potential impact of the use of a _cordon sanitaire_ during an evd outbreak is introduced and ``tested''. the population of interest is subdivided into susceptible ( ), latent ( ), infectious ( ), dead ( ) and recovered ( ) individuals. the total population (including the dead) is therefore . the susceptible population is reduced by the process of infection, which occurs, at the rate , via effective ``contacts'' between a susceptible and either an infectious individual ( ) or a dead body ( ). evd-induced dead bodies have the highest viral load, that is, they are more infectious than individuals in the infectious stage ( ); and so, it is assumed that . the latent population increases at the rate . however, since some latent individuals may recover without developing an infection, it is assumed that exposed individuals develop symptoms at the rate or recover at the rate . the population of infectious individuals increases at the rate and decreases at the rate . further, individuals leaving the infectious stage at rate die at the rate or recover at the rate . the class includes recovered or removed individuals from the system (dead and buried). by definition, the -class increases with the arrival of previously infected individuals, and so it grows at the rate . a flow diagram of the model is given in fig. [flow1], and the definitions of the parameters, including the values used in the simulations, are collected in table [tab:par] (variables and parameters of the contagion model). the mathematical model built from fig. [flow1], which describes the evd dynamics, is given by the following nonlinear system of differential equations: . the total population is constant and the set is compact and positively invariant, that is, solutions behave as expected biologically. hence model ([ebolaasym]) is well-posed.
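to make the structure of the model concrete, the following python sketch integrates one plausible reading of it (susceptible, latent, infectious, unburied dead, removed), with transmission from both the infectious and the dead classes, and checks the result against the classical final-size relation of the kind derived in the next paragraph. the compartment structure follows the description above, but all parameter names and numerical values are illustrative assumptions, since the chapter's symbols and parameter table did not survive extraction, and the expression used for is simply the one implied by this particular structure.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# illustrative parameters (assumed, not the chapter's calibrated values)
beta_I, beta_D = 0.30, 0.60   # transmission from infectious individuals and from dead bodies
kappa, alpha = 1 / 9.0, 0.05  # latent -> infectious progression; latent -> removed without symptoms
gamma = 1 / 6.0               # rate of leaving the infectious stage
f_d = 0.60                    # fraction of those leaving the infectious stage who die
nu = 1 / 2.0                  # burial rate (dead -> removed)
N = 10_000.0

def rhs(t, y):
    S, E, I, D, R = y
    lam = (beta_I * I + beta_D * D) / N   # force of infection from I and D
    return [-lam * S,
            lam * S - (kappa + alpha) * E,
            kappa * E - gamma * I,
            f_d * gamma * I - nu * D,
            (1 - f_d) * gamma * I + alpha * E + nu * D]

sol = solve_ivp(rhs, (0, 600), [N - 1, 0, 1, 0, 0], max_step=0.5)
final_size_ode = N - sol.y[0, -1]

# basic reproduction number implied by this structure, and the classical final-size relation
R0 = kappa / (kappa + alpha) * (beta_I / gamma + f_d * beta_D / nu)
s_inf = brentq(lambda s: np.log(s) - R0 * (s - 1.0), 1e-12, 1.0 - 1e-9)
print(f"R0 = {R0:.2f}, final size from ODE = {final_size_ode:.0f}, "
      f"from final-size relation = {N * (1 - s_inf):.0f}")
```

with these assumed rates the two estimates of the final size agree closely, which is the internal consistency one expects from a final-size relation of this type.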
following the next generation operator approach (see , and ), we find that the basic reproductive number is given by , that is, is given by the sum of the secondary cases of infection produced by infected and dead individuals during their infection period. the final epidemic size relation, which includes the dead (to simplify the mathematics), is given by . the work of eubank et al. , sara del valle et al. , and chowell et al. analyzes heterogeneous environments. castillo-chavez and song , for example, highlight the importance of epidemiological frameworks that follow a lagrangian perspective, that is, models that keep track of each individual (or at least its place of residence or group membership) at all times. figure [lagmov] gives a schematic representation of the lagrangian dispersal between two patches. bichara et al. use a general susceptible-infectious-susceptible (sis) model involving n patches, given by the following system of nonlinear equations: , where , and denote the per-capita birth, natural death and recovery rates, respectively. infection is modeled as follows: = \underbrace{\beta_j}_{\textbf{the risk of infection in patch }} \times \underbrace{p_{ij}s_i}_{\textbf{susceptibles from patch who are currently in patch }}, where the last term accounts for the _effective_ infection proportion in patch at time . the model reduces to a single n-dimensional system with a basic reproduction number that is a function of the risk vector and of the residence-times matrix , where denotes the proportion of time that an -resident spends visiting patch . in , it is shown that when is irreducible (patches are strongly connected), the disease-free state is globally asymptotically stable (g.a.s.) if , while whenever , there exists a unique interior equilibrium which is g.a.s. the patch-specific basic reproduction number is given by , where is the _local_ basic reproduction number when the patches are isolated. this patch-specific basic reproduction number determines the dynamics of the disease at the patch level, that is, whether the disease persists in patch . moreover, if for all and whenever , it has been shown that the disease dies out from patch if . the authors in also considered a multi-patch sir single-outbreak model and deduced the final epidemic size. the sir single-outbreak model considered in is the following: , where , and denote the populations of susceptible, infected and recovered (immune) individuals in patch , respectively. the parameter is the recovery rate in patch and , for . in this chapter we make use of this modeling framework, but with a slightly different formulation, to test under what conditions the movement of individuals from high-risk areas to nearby low-risk areas due to the use of a _cordon sanitaire_ is effective in reducing _overall_ transmission; we do so by considering a two-patch single-outbreak model that captures the dynamics of ebola in a two-patch setting. it is assumed that the community of interest is composed of two adjacent geographic regions facing highly distinct levels of evd infection. the levels of risk account for differences in population density, availability of medical services and isolation facilities, and the need to travel to a lower-risk area to work. so, we let denote the population in patch one (high risk) and the population in patch two (low risk).
the classes , , , represent respectively , the susceptible , exposed , infectious and recovered sub - populations in patch ( ) .the class represents the number of disease induced deaths in patch .the dispersal of individuals is captured via a lagrangian approach defined in terms of residence times , a concept developed for communicable diseases for patch setting and applied to vector - borne diseases to an arbitrary number of host groups and vector patches in .+ we model the new cases of infection per unit of time as follows : * the density of infected individuals mingling in patch 1 at time t , who are only capable of infecting susceptible individuals currently in patch 1 at time , that is , the _ effective _ infectious proportion in patch 1 is given by where denotes the proportion of time residents from patch 1 spend in patch 1 and the proportion of time that residents from patch 2 spend in patch 1 .* the number of new infections within members of patch 1 , in patch 1 is therefore given by * the number of new cases of infection within members of patch 1 , in patch 2 per unit of time is therefore where denotes the proportion of time that residents from patch 1 spend in patch 2 and the proportion of time that residents from patch 2 spend in patch 2 ; given by the effective density of infected individuals in patch 1 while the _ effective _ density of infected individuals in patch 2 is given by further , since , and then we see that the sum of ( * ) and ( * * ) gives the density of infected individuals in both patches , namely , as expected .if we further assume that infection by dead bodies occurs only at the local level ( bodies are not moved ) then , by following the same rationale as in model [ ebolaasym ] , we arrive at the following model : the difference , in the formulation of the infection term , from the one considered in is the _ effective _ density of infected . here ,the _ effective _ density of infected in patch 1 , for example , is whereas in , it is focusing on the changes on and and making use of the next generation approach we arrive at the basic reproductive number for the entire system , namely , we see , for example , that whenever the residents of patch ( ) live in communities where travel is not possible , that is , when or , then the populations decouple and , consequently , we have that where for ; that is , basic reproduction number of patch , , if isolated . we keep track of the dead to make the mathematics simple .that is , to assuming that the population within each patch is constant . andso , from the model , we get that with initial conditions we use the above model to find an `` approximate '' final size relationship .we make use of the notation for and for .we see that our analysis results guarantee that if is a positive decreasing function then . since , then and since .if we now consider that then it follows that .similarly , it can be shown that focusing on the first two equations of system ( [ ebopatchyfin ] ) , we arrive at consequently , since , we have that and therefore using the equation for , we find that similarly , we can deduce the analogous relationships for patch 2 , namely that , from the equation for susceptible populations in patch 1 , we have that and , therefore that , for the second patch , we have that rewriting the expressions of and in terms of , , and , we arrive at the following two - patch `` approximate '' ( since we are counting the dead ) , the final size relation . 
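before writing out the final size relation (which follows immediately after this sketch), it may help to see the two-patch lagrangian model in executable form. the sketch below implements the structure just described, with residence-time-weighted standard incidence between patches and purely local transmission from dead bodies, and compares a cordoned scenario with a one-way mobility scenario; for brevity the recovery of latent individuals without developing symptoms is omitted, and all rates, population sizes and residence times are illustrative assumptions rather than the calibrated values used later in the chapter.

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, gamma, f_d, nu = 1 / 9.0, 1 / 6.0, 0.6, 1 / 2.0
beta = np.array([0.35, 0.12])     # patch-specific transmission from the infectious class (assumed)
beta_D = np.array([0.50, 0.20])   # patch-specific transmission from dead bodies, local only (assumed)
N = np.array([10_000.0, 10_000.0])

def make_rhs(p):
    # p[i, j] = proportion of time a resident of patch i spends in patch j
    def rhs(t, y):
        S, E, I, D = y[0:2], y[2:4], y[4:6], y[6:8]
        eff_I = (p.T @ I) / (p.T @ N)               # effective infectious proportion present in each patch
        lam = p @ (beta * eff_I) + beta_D * D / N   # mobility-mediated infection + local dead-body infection
        return np.concatenate([-lam * S,
                               lam * S - kappa * E,
                               kappa * E - gamma * I,
                               f_d * gamma * I - nu * D])
    return rhs

y0 = np.concatenate([N - np.array([1.0, 0.0]), [0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
scenarios = {"cordoned (p12 = 0)": np.array([[1.0, 0.0], [0.0, 1.0]]),
             "one-way (p12 = 0.2)": np.array([[0.8, 0.2], [0.0, 1.0]])}
for name, p in scenarios.items():
    sol = solve_ivp(make_rhs(p), (0, 800), y0, max_step=0.5)
    print(f"{name:22s} final sizes per patch:", np.round(N - sol.y[0:2, -1]))
```

whether mobility helps or hurts in this sketch depends entirely on the assumed risk contrast between the patches, which is exactly the question examined with the calibrated model in the simulations below.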
more precisely, the final size relation reads as follows: with , we have that , or, in vector notation, , where . furthermore, we note that also appears in the next generation matrix used to compute . further, we also have that . note that the vector in ([fsematrix]) is given by , representing the proportion of people in patches one and two able to transmit ebola, including transmission from the handling of dead bodies. since , , we conclude that the matrix and the next generation matrix have the same eigenvalues, a result also found in . the basic model parameters used in the simulations are taken directly from the literature. we consider two patches and, for simplicity, it is assumed that they house the same number of individuals, namely . however, implicitly, it is assumed that the density is considerably higher in the high-risk area. we assume that an outbreak starts in the high-risk patch 1, with . it propagates into the low-risk patch 2, defined by . the difference between and or provides a rough measure of the capacity to transmit, treat and control ebola within connected two-patch systems. the initial conditions are set as . the local basic reproductive numbers for each patch under isolation are and . we chose to report on three different mobility scenarios: one-way movement, symmetric mobility and asymmetric mobility. for the first case, only residents from patch 1 travel, that is, and . given that patch 1 is facing an epidemic, it is reasonable to assume that people in patch 2 prefer to avoid traveling to patch 1, and so it is reasonable to assume that . in the second scenario, mobility is allowed in both directions in a symmetric way, that is, residents of patch 1 spend the same proportion of time in patch 2 that individuals from patch 2 spend in patch 1, i.e. . the third scenario assumes that mobility is asymmetric, and so we make use, in this case, of the relation . simulations show that when only individuals from patch 1 are allowed to travel, the prevalence and final size are lower than under a cordon sanitaire. figure [owinfected] shows the levels of patch prevalence when and . for low values of , prevalence decreases in patch 1 but remains high in both patches, which, as expected, has a direct impact on the final size of the outbreak. in figure [owfinal], simulations show that the total final size is only greater than in the cordoned case when , possibly the result of the assumption that and . however, we see, under the assumption of higher body-disposal rates in patch 2, that the total final size under may turn out to be smaller than in the cordoned case. that is, it is conceivable that a safer patch 2 may emerge as a result of a better health care infrastructure and efficient protocols in the handling of dead bodies. finally, figure [owfinal] shows that mobility can produce the opposite effect, that is, reduce the total final epidemic size, given that (for the parameters used) the residence times are greater than but smaller than . simulations under symmetric mobility show that prevalence and final size are severely affected when compared to the cordoned case. figure [sinfected] shows that the prevalence in patch 1 exhibits the same behavior as in the one-way scenario. however, in this case the prevalence in patch 1 decreases at a slower rate due to the secondary infections produced by individuals traveling from patch 2.
on the other hand, the prevalence in patch 2 is much larger than in the one-way scenario, the result of secondary infections generated by individuals traveling from patch 2 to patch 1. we saw that the final size in patch 1 decreases as residence time increases, while the final size in patch 2 increases. that is, the total final size curve may turn out to be greater than in the cordoned case for almost all residence times. as seen in figure [sfinal], allowing symmetric travel would negatively affect the total final size (almost always). in order to clarify the effects of residence times and mobility on the total final size, we analyze its behavior under one-way and symmetric mobility (figure [maxonew]). figure [maxonew](a) shows, for one-way mobility, the existence of a residence-time interval over which the total final size is reduced below that generated under the cordoned case, namely for residence times between and . in particular, the best-case scenario takes place when , that is, when the final size reaches its minimum. figure [maxonew](b) shows that under symmetric mobility, the total final size increases for almost all residence times. therefore traveling under these initial conditions has a deleterious effect on the overall population for almost all residence times. it is important to notice that reductions in the total final size are related not only to residence times and mobility type but also to the prevailing infection rates. in figure [betas], simulations show the existence of an interval of residence times for which the total final size is less than the final size under the cordoned case when . simulations (see figure [ro]) show that mobility is always beneficial, that is, it reduces the global . however, mobility on its own is not enough to reduce below the threshold (less than ). bringing would require reducing the local risk, that is, getting a lower . a west-africa-calibrated two-patch model of the transmission dynamics of evd is used to show that the use of _cordons sanitaires_ does not always lead to the best possible global scenario, and neither does allowing indiscriminate mobility. mobility may reduce the total epidemic size as long as the low-risk patch 2 is ``safe enough''; otherwise mobility would produce a detrimental effect. having an infection rate in patch 2 guarantees (under our simulations) the existence of non-trivial residence times that reduce the total final size under one-way mobility. the global basic reproductive number may be brought below one by mobility whenever the transmission rate in patch 2 is low enough. finally, the choice of a non-zero , that is, of a positive recovery rate for asymptomatic individuals who do not develop infection, brings the reproduction number below one much faster for one-way mobility than the case of , for a wide range of residence times. , _ a final size relation for epidemic models _ , math . eng . , 4 ( 2007 ) , pp . 159-175 . , _ symptomless infection with ebola virus _ , the lancet , 355 ( 2000 ) , pp . 2178-2179 . , _ ebola control : effect of asymptomatic infection and acquired immunity _ , the lancet , 384 ( 2014 ) . , _ vector - borne diseases models with residence times - a lagrangian perspective _ , arxiv preprint arxiv:1509.08894 , ( 2015 ) .
, _ sis and sir epidemic models under virtual dispersal _ , the bulletin of mathematical biology , doi : 10.1007/s11538 - 015 - 0113 - 5 , ( 2015 ) ., _ some simple epidemic models _ , math .eng . , 3 ( 2006 ) .height 2pt depth -1.6pt width 23pt , _ age of infection and final epidemic size _ , math .eng . , 5 ( 2008 ) , pp .681690 . , _ mathematical models in population biology and epidemiology _ , vol .40 of texts in applied mathematics , springer - verlag , new york , 2001 . ,_ age of infection epidemic models with heterogeneous mixing _ , j biol dyn , 3 ( 2009 ) , pp . 32430 ., _ encyclopedia of pestilence , pandemics , and plagues : am _ , vol . 1 , abc - clio , 2008 . , _ molecular evolution of viruses of the family filoviridae based on 97 whole - genome sequences _ , j virol , 87 ( 2013 ) , pp .260816 . ,_ an epidemic model with virtual mass transportation : the case of smallpox _ , bioterrorism : mathematical modeling applications in homeland security , 28 ( 2003 ) , p. 173 . ,_ modelling the effect of early detection of ebola _ , the lancet infectious diseases , 15 ( 2015 ) , pp. 148149 . , _ the basic reproductive number of ebola and the effects of public health measures : the cases of congo and uganda _ , j. theoret .biol . , 229 ( 2004 ) , pp .119126 . , _ scaling laws for the movement of people between locations in a large city _, physical review e , 68 ( 2003 ) , p. 066102 ., _ transmission dynamics and control of ebola virus disease ( evd ) : a review _ , bmc medicine , 12 ( 2014 ) , p. 196 . , _ the western africa ebola virus disease epidemic exhibits both global exponential and local polynomial growth rates _ , plos currents , 7 ( 2014 ) ., _ risk behavior - based model of the cubic growth of acquired immunodeficiency syndrome in the united states _ ,proceedings of the national academy of sciences , 86 ( 1989 ) , pp . 47934797 . , _ on the definition and the computation of the basic reproduction ratio in models for infectious diseases in heterogeneous populations _ , j. math .. , 28 ( 1990 ) , pp .365382 . ,_ nyt : using a tactic unseen in a century , countries cordon off ebola - racked areas _ , august 12 , 2014 . , _ modelling disease outbreaks in realistic urban social networks _ , nature , 429 ( 2004 ) , pp .180184 . ,_ guidance for safe handling of human remains of ebola patients in us hospitals and mortuaries _ , 2014 . ,_ sars control and psychological effects of quarantine , toronto , canada _ , emerg infect dis , 10 ( 2004 ) , pp . 12061212 . , _ low seroprevalence of igg antibodies to ebola virus in an epidemic zone : ogoou - ivindo region , northeastern gabon , 1995 _ , j infect dis , 191 ( 2005 ) , pp. 964968 . , _ the mathematics of infectious diseases _ , siam rev . ,42 ( 2000 ) , pp .599653 ( electronic ) ., _ epidemiological dynamics of ebola outbreaks _ , elife , 3 ( 2014 ) . , _ a contribution to the mathematical theory of epidemics _ , proc .r. soc . , a115 ( 1927 ) , pp .700721 . , _ a three - scale network model for the early growth dynamics of 2014 west africa ebola epidemic _ , plos currents : outbreaks , ( 2014 ) ., _ understanding the dynamics of ebola epidemics _ , epidemiol infect , 135 ( 2007 ) , pp .61021 . , _ statistical inference in a stochastic epidemic seir model with control intervention : ebola as a case study _ , biometrics , 62 ( 2006 ) , pp .11701177 . 
, _ human asymptomatic ebola infection and strong inflammatory response _ , the lancet , 355 ( 2000 ) , pp .221015 ., _ early transmission dynamics of ebola virus disease ( evd ) , west africa , march to august 2014 _ , euro survei , 36 ( 2014 ) . , _ strategies for containing ebola in west africa _ , science , 346 ( 2014 ) , pp. 991995 . , _ an introduction to ebola : the virus and the disease _ , j infect dis , 179 ( suppl 1 ) ( 1999 ) , pp .ix xvi . , _ spatial dynamics of pandemic influenza in a massive artificial society _ , journal of artificial societies and social simulation , 10 ( 2007 ) , p. 9 . , _emerging disease dynamics : the case of ebola _ , april 2014 . , _ temporal variations in the effective reproduction number of the 2014 west africa ebola outbreak _ , plos currents : outbreaks , 1 ( 2014 ) . , _ reproduction numbers and sub - threshold endemic equilibria for compartmental models of disease transmission _ , math .biosci . , 180 ( 2002 ) , pp ., _ ebola virus disease _ , april 2015 .the total population of system ( [ ebolaasym ] ) is constant , we can consider only the system we suppose . by summing the first two equations of ( [ ebola2 ] ), we have : .this implies that .similarly by adding the first three and first four equations , we will have and .+ by integrating the first 2 equations , we have . hence
we formulate a two-patch mathematical model for ebola virus disease dynamics in order to evaluate the effectiveness of _cordons sanitaires_, mandatory movement restrictions between communities, while exploring their role in disease dynamics and final epidemic size. simulations show that severe restrictions in movement between high- and low-risk areas of closely linked communities may have a deleterious impact on the overall levels of infection in the total population. * keywords : * ebola virus disease, asymptomatic, final size relation, residence times.
since the classic works by frank zerilli in early s on the particle falling in a schwarzschild geometry , a lot of research and study has been performed on this fundamental problem .one of the earliest computational calculations was made by press and his co - workers , which is now known as drpp calculation on the radiation emitted when a particle starting from rest at infinity falls into a non - spinning black hole .the collision of two black holes is , in principle , one of the most efficient mechanisms for the generation of the gravitational waves . in recent yearsthe extreme mass ratio limit of the binary system has been a special focus of research in gravitational physics .extreme - mass - ratio inspirals ( emris ) are one of the main sources of the gravitational waves for the gravitational wave detectors , such as laser interferometer space antenna ( lisa ) .emris are binary systems composed of a stellar compact object ( sco ) with a mass , in the range of inspiralling into a massive black hole ( mbh ) with a mass , in the range of located at the galactic center .thus , the mass ratios involved are . during the slow inspiral phase the system is driven by the emission of gravitational radiation ,the general features of which are now well understood .press showed that there is always an intermediate stage where the ringdown is dominated by a set of oscillating and exponentially decaying solutions , quasinormal modes ( qnms ) whose spectrum depends only on the mass of the black hole and the multipole - moment index of the initial perturbation .this regime is followed by a power - law _ tail _decay due to backscattering .for the emri , the small companion black hole is modeled as a point particle , and the problem can be framed by using the black hole perturbation theory . moreover , as the first approximation , the point particle follows the geodesics in the space - time of the central black hole .the frequency - domain approach to this problem has achieved many remarkable results .specifically the accurate determination of the energy flux of gravitational waves was obtained in the frequency - domain .however , the frequency - domain approach can take long computational time and lose accuracy for non - periodic orbits ( for example , parabolic orbits , orbits with high eccentricity or decaying orbits ) .the time - domain approach seems better suited for such orbits . for the time - domain approach , the finite - difference ( fd )method is one of the most popular numerical methods .the fd time - domain methods , however , suffer from the relatively poor accuracy at the moment unless a very high computational resolution is used .the main reason is the point particle approximation , i.e. the approximation of the singular source terms .various approaches to this issue have been attempted , including the regularizing the dirac -function using a narrow gaussian distribution and also using more advanced discrete -model .another approach of the emri problem is to use the spectral ( sp ) method . in our previous work, we used the spectral method to solve the inhomogeneous zerilli equation in time - domain and obtained good results .but the proper power - law decay was not observed . in early timethe solution agrees with the established solution but in very late time the solution is contaminated by the small - scale oscillations .these oscillations are likely due to the artificial truncation of the computational domain . 
in this work, we continue our previous research with the spectral method in order to obtain the proper power - law decay . for this, we developed the multi - domain hybrid method .the multi - domain method hybridizes the spectral method and the high - order finite - difference method . the spectral domain is also split into many sub - domains , each of which is also a spectral domain .the main advantage of the multi - domain method is that the computational costs can be significantly reduced by reducing the order of the interpolating polynomial in each sub - domain and the parallelization becomes robust .a fundamental reason for considering the multi - domain method is also to reduce the boundary effects due to the artificial truncation of the computational domain for obtaining the proper late time decay profile of the gravitational waveforms . in order to obtain the proper power - law decay, the outer boundary needs to be placed afar , in general .however , having a large size of the computational domain increases the computational costs significantly . in this work ,we add the finite - difference domain as the boundary domain .the spectral method is a global method and it is highly sensitive to the boundary effects . to prevent the `` fast '' propagation of these boundary effects , we use a local method instead as the boundary domain , such as the finite - difference domain . by doing this, we obtain the proper power - law decay while having the computational costs reduced and also exploiting the accuracy of the spectral method . to patch each sub - domain with others ,we derive the accurate and stable patching conditions . for the spectral and finite - difference sub - domains ,we show that the resolution across the interface needs to be closely uniform .otherwise , the cfl condition becomes strict . for the singular source term, we use both the gaussian -function method and the discrete -function method . for the gaussian method, we change the shape of the gaussian profile to mimic the -function . for the discrete -function, we generalize the discrete -function developed by sundararajan et al . into the one on the non - uniform grid .we provide numerical results that show the efficiency and robustness of the proposed hybrid method . using the hybrid methodwe could obtain the proper power - law decay with the gaussian approximation model .we use various shapes of the gaussian profile and found that the result is insensitive to the shape .that is , even a broad profile , which results in a smooth solution , yields the power - law decay successfully . with the smooth solution, the spectral method does not need to use the filter operation , which increases the computational efficiency further .we also obtain the power - law decay with the discrete -function model , but the computed slope was not accurate , which may imply that the discrete -function model yields correct results only on uniform grids .this paper is organized as follows . in section 2 ,we briefly describe the finite - difference and spectral methods . for the finite - difference method, we used the - order method . for the spectral method we use the chebyshev spectral collocation method based on the gauss - lobatto collocation points . in section 3we describe the discrete -function on non - uniform grids .section 4 explains the zerilli equation briefly . 
in section 5, we describe the proposed hybrid method in details .we derive the stable and accurate interface patching conditions .boundary conditions are described in section 6 . in section 7, we discuss the stability of the hybrid method . in section 8 ,numerical results are provided . in section 9 , a brief summary and future work are explained .in this work we consider both the 2nd and 4th - order finite - difference method . the 2nd - order finite - difference method for the spatial derivativesare well known and we omit those formulae .instead we briefly explain the 4th - order finite - difference method . for the 4th - order method we will derive the formula when the grid is non - uniform .this is because , in the spectral domain , we use the gauss - lobato collocation points , which are not evenly distributed and we need the finite - difference formula for the boundary conditions in the spectral domain .also we define the modified flux at the sp - sp interface , which also requires the finite - difference formulae .details of these are described in section ._ 4th - order finite - difference method : uniform grids ._ let be the grid spacing in the finite - difference domain and let be .the standard -order derivatives are given by centered difference : off - centered ( 1 - point ) difference : off - centered ( 2 - points ) difference : at the left and right boundaries , we used the 2nd - order difference method . : similarly : the centered 2nd - order derivative is given by for the spectral method , we use the chebyshev spectral collocation method based on the gauss - lobatto collocation points .the chebyshev spectral collocation method seeks a solution in the chebyshev polynomial space by the chebyshev polynomials as where is the chebyshev polynomial of degree and the corresponding expansion coefficient .the commonly used collocation points are the gauss - lobatto quadrature points given by these collocation points belong to ] is the original computational domain .the expansion coefficients are given by where if , and otherwise .we also use the spectral filtering method to minimize the possible non - physical high frequency modes .the oscillations with the spectral method possibly found near the local jump discontinuity and also generated due to inconsistent initial conditions propagate through the whole domain .our filtered approximation is given by where based on the exponential filter is the filter function according to .our filter matrix is given by where is the order of filteration and is a constant .the filtered solution at is given by -function on a uniform grid has been derived in . in sp - fd approach, the singular source term is always located inside the spectral domain . since in the spectral domainthe grid is non - uniform , we need to redefine the discrete -function on the non - uniform grid . 
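before moving on to the discrete -function on the non-uniform grid, it may help to have the spectral pieces just described in executable form. the short python sketch below builds the chebyshev differentiation matrix on the gauss-lobatto points and the exponential filter factors; it follows the standard collocation construction (e.g. trefethen's cheb routine) rather than the code used in this work, and the filter order and strength are assumed values.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix on the n+1 Gauss-Lobatto points x_j = cos(pi*j/n),
    using the standard formula with the negative-sum trick for the diagonal."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = np.subtract.outer(x, x)
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def exponential_filter(n, p=8, alpha=36.0):
    """Exponential filter factors sigma_k = exp(-alpha*(k/n)**p) applied to Chebyshev coefficients."""
    k = np.arange(n + 1)
    return np.exp(-alpha * (k / n) ** p)

D, x = cheb(16)
u = np.exp(-x ** 2)
print("max error of D @ u vs analytic derivative:", np.max(np.abs(D @ u - (-2 * x * u))))
print("filter factor on the highest mode:", exponential_filter(16)[-1])
```

the square of this matrix, suitably restricted and rescaled to each sub-domain, is the operator that enters the stability discussion of section 7.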
we define the discrete -function, which exists at , , by using the following relation . equating the coefficients of from both sides yields . for the first derivative of the -function we have , and equating the coefficients of from both sides yields
\[
\left\{
\begin{array}{ll}
\frac{(\alpha - x_{k})(\alpha - x_{k+2})}{(x_{k+3}-x_{k+2})(x_{k+3}-x_{k+1})}\left[\frac{\alpha - x_{k+1}}{(x_{k+4}-x_{k+2})(x_{k+3}-x_{k})(x_{k+3}-x_{k+2})} - \frac{\alpha - x_{k+3}}{(x_{k+2}-x_{k})(x_{k+1}-x_{k})(x_{k+2}-x_{k+1})}\right] & \mbox{at } x_{k+2},\\[6pt]
\frac{(\alpha - x_{k})(\alpha - x_{k+1})(\alpha - x_{k+3})}{(x_{k+3}-x_{k+1})(x_{k+2}-x_{k})(x_{k+2}-x_{k+1})(x_{k+3}-x_{k+2})(x_{k+4}-x_{k+3})} & \mbox{at } x_{k+3},\\[6pt]
-\frac{(\alpha - x_{k})(\alpha - x_{k+1})(\alpha - x_{k+2})}{(x_{k+5}-x_{k+4})(x_{k+4}-x_{k+2})(x_{k+3}-x_{k})(x_{k+3}-x_{k+1})(x_{k+3}-x_{k+2})} & \mbox{at } x_{k+4}.
\end{array}
\right.
\]
the gaussian -function is defined by , where the shape of the -function depends on the full width at half maximum . if is small, is narrow, and if takes a higher value, then is broader. the value of depends on time and determines the position of the -function. the lowest-order perturbation theory of the initial schwarzschild black hole spacetime leads to the inhomogeneous zerilli equation with even parity. such an equation describes the gravitational wave in dimensions and is given by the following second-order wave equation , where is the tortoise coordinate, the potential term and the source term. for details see [3,10]. the tortoise coordinate is given by , where is the physical coordinate and . one can convert eq. (9) into a system of equations. in some previous works, the zerilli equation was solved in the time domain, and in all cases the authors converted the -order pde into a system of -order equations in space and time. we understand that some instabilities arose, and to suppress those instabilities they introduced an auxiliary field which converted the equation into a coupled set of -order equations in space and time. in this work we do not convert the -order pde into a system of equations. instead we control the instability efficiently by introducing new interface conditions at the sp-sp and sp-fd interfaces. we also correct the numerical flux accordingly. the hybrid method is basically a domain decomposition method. we use the finite-difference and spectral methods for the hybrid method. the one-dimensional single domain is decomposed into multiple sub-domains, and each sub-domain is handled by either the finite-difference method or the spectral method. let ] to ]. similarly, for the 4th-order method, and . for the sp-sp hybrid process, and at the interface , , and . we can write the central flux at the interface as follows: . if we use the central flux directly, then , where is the spectral differentiation matrix, and \frac{(\delta t)^2}{2}\left[ d^2\psi^j_{end} + d^2\phi_{1}^{j} \right]. if the function is not , then .
for the fd - sp domainthe resolutions are usually different , i.e and .but we can expand and around the and -th point respectively .for the boundary conditions at the boundaries of the whole domain , we use the simple outflow condition based on the assumption that the potential term at is negligible .that is , we use for the first - order derivative in the outflow equations , we use the 2nd - order finite - difference method , eq .( [ left2ndorder ] ) for the fd domain and eq .( [ right2ndorder2 ] ) for the sp domain .0.2 in ( 1,1 ) ( 1.25,1.25)(-1,-1)0.2 ( 0.75,1 ) ( 0.95,1 ) ( 1.1,1 ) ( 1.1,1 ) ( 1.25,1 ) ( 1.45,1 ) ( 1.27,1.25) ( -0.1,1)(5,0)2.3 ( 0,0.5) ( -0.1,0.3)diagram 3 : sp - sp patching .grids at the sp and sp interface .[ diagram3 ] for stability analysis we consider sp - sp domain for example .this analysis is for two domains .let us consider the two spectral domains denoted by sp1 and sp2 .let be the solution in sp1 and be the solution in sp2 of the zerrilli equation without the potential and singular source terms .then we have , \nonumber \\ \psi^{n+1}_0 = 2\psi^{n}_0-\psi^{n-1}_0+\frac{(\delta t)^2}{2h^2}[\phi^{n}_{n-1}-2\psi^{n}_{0}+\psi^{n}_{1 } ] , \nonumber \\ \psi^{n+1}_{j } = 2\psi^{n}_{j}-\psi^{n-1}_{j}+(\delta t)^2\tilde{d_{2}^2}\psi^{n}_{j } , \ : j = 1,\cdots , n-1.\nonumber\end{aligned}\ ] ] we assume that there is no boundary effect . also we consider the equal length intervals and so where is the original chebyshev differential matrix and is the length of the spectral sub - domains . is the sub - matrix of without the last and first rows and the first column and is the sub - matrix of without last row , first row and first column .+ collecting all the above equations in matrix form yields , = 2\left[\begin{array}{c } \phi^{n}_{1 } \\\phi^{n}_{2 } \\ \vdots \\\phi^{n}_{n}\\\psi^{n}_{0}\\ \vdots \\\psi^{n}_{n-1 } \end{array}\right ] - \left[\begin{array}{c } \phi^{n-1}_{1 } \\ \phi^{n-1}_{2 } \\ \vdots \\ \phi^{n-1}_{n}\\\psi^{n-1}_{0}\\ \vdots \\ \psi^{n-1}_{n-1 } \end{array}\right]\ ] ] \left[\begin{array}{c } \phi^{n}_{1 } \\ \phi^{n}_{2 }\\ \vdots \\ \phi^{n}_{n}\\\psi^{n}_{0}\\ \vdots \\\psi^{n}_{n-1 } \end{array}\right],\ ] ] where is the null matrix .the previous matrix equation can be written in compact form or eqs and can be written in matrix form =\left[\begin{array}{cc}2i_{2n}+\tilde{d } & -i_{2n } \\i_{2n } & 0 \end{array}\right ] \left[\begin{array}{c}w^{n}\\w^{n-1}\end{array}\right].\ ] ] let .\ ] ] for the matrix stability , we have the spectral radius of less than .i.e .+ + to check the stability we consider two spectral domains ] .for these two domains and must be equal .let be the minimum grid spacing in the left domain and similarly for the right domain .here we have .we found the following results for (number of grid points in each domain ) and (time step ) for which . 
0.5 in ( 1,1 )( 1.25,1.25)(-1,-1)0.2 ( 0.75,1 ) ( 0.95,1 ) ( 1.1,1 ) ( 1.1,1 ) ( 1.25,1 ) ( 1.4,1 ) ( 1.27,1.25) ( 1.15,0.9) ( 1.30,0.9) ( 0.82,0.9) ( 1.0,0.9) ( -0.1,1)(5,0)2.3 ( 0,0.5) ( -0.1,0.3)diagram 4 : sp - fd patching .grids at the sp and fd interface .[ diagram4 ] at the sp - fd interface the grid resolutions between the adjacent sub - domains across the domain interface are different .so the grid distribution is non - uniform .the stable interface conditions derived for the uniform grid system are not enough and we need some conditions with which the spatial non - uniformity can be addressed properly [ see 26 and references therein ] .let be the number of grid points in each sp sub - domain and be the number of grid points in the fd domain .also assume be the length of each sp sub - domain and be the length of the fd sub - domain .then for stability we must have consider one spectral domain ( sp1 ) and one fd - domain ( fd1 ) .let be the solution in sp1 and be the solution in fd1 of the simple wave equation without any potential term .let number of grid points in sp1 and fd1 be and respectively which satisfy eqn ( 25 ) .we assume that there is no boundary effect .then from spectral collocation and order fd - method , we have , \nonumber \\ \psi^{n+1}_0 = 2\psi^{n}_0-\psi^{n-1}_0+\frac{(\delta t)^2}{2h^2}[\phi^{n}_{n_{1}-1}-2\psi^{n}_{0}+\psi^{n}_{1 } ] , \nonumber \\ \psi^{n+1}_{j } = 2\psi^{n}_{j}-\psi^{n-1}_{j}+(\delta t)^2\tilde{d_{2}}\psi^{n}_{j } , \ : j = 1,\cdots , n_{2}-1.\nonumber\end{aligned}\ ] ] where where is the original chebyshev matrix , is the sub - matrix of without the first and last rows and the last column and is the - order fd - differentiation matrix , which is written by .\ ] ] + and is the sub - matrix of without the first and last rows and the first column . collecting all the above equations in matrix form yields , = 2\left[\begin{array}{c } \phi^{n}_{1 } \\\phi^{n}_{2 } \\ \vdots \\\phi^{n}_{n_{1}}\\\psi^{n}_{0}\\ \vdots \\\psi^{n}_{n_{2}-1 } \end{array}\right ] - \left[\begin{array}{c } \phi^{n-1}_{1 } \\ \phi^{n-1}_{2 } \\ \vdots \\\phi^{n-1}_{n_{1}}\\\psi^{n-1}_{0}\\ \vdots \\ \psi^{n-1}_{n_{2}-1 } \end{array}\right]\ ] ] \left[\begin{array}{c } \phi^{n}_{1 } \\ \phi^{n}_{2 } \\ \vdots \\ \phi^{n}_{n_{1}}\\\psi^{n}_{0}\\ \vdots \\\psi^{n}_{n_{2}-1 } \end{array}\right],\ ] ] where is the null matrix , and is the null matrix .the previous matrix equation can be written in the following compact form or eqs and can be written in matrix form =\left[\begin{array}{cc}2\tilde{i}+\tilde{d } & -\tilde{i } \\\tilde{i } & 0 \end{array}\right ] \left[\begin{array}{c}w^{n}\\w^{n-1}\end{array}\right].\ ] ] let .\ ] ] where is the unit matrix of order .for the matrix stability , we must have the spectral radius of less than .i.e .+ + to check the stability we consider the sp - domain ] .we choose and in such a way that they must satisfy eq .we found the following results for (number of grid points in sp - domain ) , ( number of grid points in fd - domain ) and (time step ) for which .some examples are given in the following table : the table implies that the more uniformity of the grid spacing across the sp and fd interface is achieved the larger value of time step could be used for stability . for our numerical experiments in the next section ,we choose the geometric parameters for the sub - domains so that the grid resolution is uniform in the small neighborhood across the sp - fd interface . 
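the matrix stability criterion used above can be probed numerically without assembling the full two-domain system. the sketch below forms the companion (amplification) matrix of the leapfrog update on a single chebyshev sub-domain with homogeneous dirichlet data and reports its spectral radius for several time steps; the interface coupling and the fd block are left out, so this is only a simplified stand-in for the sp-sp and sp-fd analyses, and the sub-domain length and polynomial order are assumed values.

```python
import numpy as np

def cheb(n):
    # Chebyshev differentiation matrix on the Gauss-Lobatto points (standard construction)
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = np.subtract.outer(x, x)
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def spectral_radius_leapfrog(n, dt, length=2.0):
    """Spectral radius of the companion matrix for the leapfrog update
    w^{n+1} = 2 w^n - w^{n-1} + dt^2 * A w^n on one Chebyshev sub-domain,
    with Dirichlet data imposed by dropping the boundary rows and columns."""
    D, _ = cheb(n)
    D = D * (2.0 / length)                  # rescale from [-1, 1] to the physical sub-domain
    A = (D @ D)[1:-1, 1:-1]                 # interior second-derivative operator
    m = A.shape[0]
    I = np.eye(m)
    G = np.block([[2 * I + dt ** 2 * A, -I],
                  [I, np.zeros((m, m))]])
    return max(abs(np.linalg.eigvals(G)))

for dt in (1e-3, 5e-3, 1e-2, 2e-2):
    print(f"dt = {dt:.3f}  ->  spectral radius = {spectral_radius_leapfrog(24, dt):.4f}")
```

for small time steps the spectral radius stays at one to rounding error, while past a threshold set by the largest eigenvalue of the restricted second-derivative matrix it grows rapidly, mirroring the cfl-type restriction discussed above.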
note that for the numerical experiments in this paper, we choose the geometric setting so that we have about . for the numerical experiments, the computational domain is split into two main sub-domains, i.e. the sp and fd domains. the -function is located in the sp sub-domain for all time (see diagram 5). the sp sub-domain is also divided into multiple smaller sp sub-domains and each sub-domain communicates with its adjacent sub-domains by the interface condition described in section 5. the last sp sub-domain communicates with the boundary fd sub-domain. diagram 5: sp-sp-...-sp-fd sub-domains (the sp sub-domains are followed by the fd boundary sub-domain). [diagram5] since the spectral method is a global method, boundary effects spread instantly throughout the domain. thus, the sp method suffers from boundary effects unless the range of the domain is large enough. however, the multiple sp sub-domains with the fd boundary domain help avoid the unphysical oscillations and reduce the computational cost. to determine the number of sub-domains and the order of the interpolating polynomial in each sub-domain, two aspects are considered. the first aspect is the resolution for the singular source term and the second is the grid uniformity across the sp and fd domain interface. as explained in the previous section, non-uniformity of the grid resolution near the sp-fd interface makes the cfl condition strict. considering these aspects, we truncate the domain, with the sp region covering and the fd sub-domain covering . the multi-domain sp method reduces the computational time significantly by reducing the size of the system but still achieves the desired accuracy. the typical setup in this work is that the number of sub-domains in the sp domain is and each sub-domain has interpolating order . the time step is relatively large, such as , and the gaussian -function has . the waveform is collected at various values of . first we consider the case with the gaussian model. the first result is in figure 1. the waveform is collected at . the left figure in the top panel shows the ringdown profile, which starts around the time . the right figure in the top panel shows the power-law decay. the two figures in the lower panel show the two distinct phases. for times up to approximately , the solution decays exponentially and is oscillatory. this phase is the quasi-normal-mode (qnm) ringing phase. after this phase the power-law decay starts. according to the seminal work by richard price, the observer measures the late-time perturbation field to drop off as an inverse power law of time, specifically as . in the case of a schwarzschild black hole, , where is the multipole moment of the perturbation field. in the context of our present work, . the right figure in the lower panel shows the power-law decay on logarithmic scales in both the horizontal and vertical axes. in this phase, the slope of the decay profile is about . the slopes of the power-law decay for various and different resolutions are almost the same, namely . this agrees well with the theoretical result .
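a slope like the ones quoted above can be measured with a straight-line fit in the log-log plane. as a self-contained illustration, the python sketch below applies such a fit to a synthetic waveform made of an exponentially damped oscillation followed by a power-law tail with exponent -7 (the standard price value 2l+3 for l = 2); the ringing frequency, damping rate, tail amplitude and fit window are arbitrary choices for illustration only and are not taken from this paper's data.

```python
import numpy as np

def tail_slope(t, psi, t_start):
    """Least-squares slope of log|psi| versus log t over the window t > t_start,
    i.e. the exponent p of an assumed power-law tail |psi| ~ t**p."""
    mask = (t > t_start) & (np.abs(psi) > 0)
    p, _ = np.polyfit(np.log(t[mask]), np.log(np.abs(psi[mask])), 1)
    return p

# synthetic waveform: damped ringing (qnm-like phase) followed by a t**-7 tail
t = np.linspace(1.0, 600.0, 20_000)
psi = np.exp(-0.2 * t) * np.cos(0.5 * t) + 1.0e6 * t ** -7.0
print("fitted late-time slope:", round(tail_slope(t, psi, t_start=120.0), 2))
```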
in figure 2, we use the gaussian -function with . the waveform is collected at . the top panel shows the ringdown profile and the lower panel shows the qnm and the power-law decay. a similar qnm phase and power-law decay are observed. also, we find that the same number of oscillations is observed in the qnm regime as in figure 1. in figure 3, we use the gaussian model with . the waveform is collected at . note that a higher value of is used for this figure and the gaussian -function is much smoother than in the previous two cases. we found, however, that a similar solution with the expected qnm profile and power-law decay profile was obtained. in figure 4, we use the gaussian model with . the waveform was collected at . with this high value of , the gaussian -function is even smoother. this high level of smoothness of the singular source term made it unnecessary to use the spectral filter for the solution. for the previous figures, we used the spectral filter to regularize the solution due to the possible gibbs oscillations. without the filter operation, the computational time was reduced. this suggests that the desired decaying profile can be obtained by choosing the parameters in a clever way. for example, in figure 5, we show the polynomial order that we used for the figures with different values of to obtain the desired qnm and power-law decay. the parameter values in the figure are , , , . we did not attempt this in our current research, but it would be an interesting study to investigate the optimal configuration of these parameters. (figure 5: polynomial order in each sp sub-domain with the value of for the gaussian model.) we repeated the above experiment with the discrete -function. by definition, the discrete -function is localized and its shape changes with time because the spectral grid spacing is non-uniform. the qnm and the power-law decay profiles are depicted in fig. , but the slope of the power-law tail does not match that obtained with the gaussian model. for example, in fig. 6 we considered the case that , and the interface between the sp and fd domains is at . we use sp sub-domains with . the waveform was collected at . the decay rate is slower than expected. the estimated slope is , while the slope with the gaussian model was close to . although we increased the resolution, , no significant improvement was observed. fig. 7 shows the qnm and the power-law tail with the discrete -function. here we used a very far outer boundary. the slope is still about . it seems that the discrete -function is good for the fd domain with the uniform grid but it is not ideal for the non-uniform grid of the sp domain.
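one way to see why the two source models can behave differently on the non-uniform spectral grid is to compare their discrete action on a smooth test function. the sketch below builds a gaussian regularisation with a prescribed full width at half maximum and a moment-matched point source assembled from local lagrange interpolation weights; the latter is a generic construction in the spirit of the coefficient matching of section 3, not a transcription of this paper's formulas, and the grid size, source location and widths are illustrative choices.

```python
import numpy as np

def gaussian_delta(x, alpha, fwhm):
    """Gaussian regularisation of delta(x - alpha), normalised to unit discrete integral."""
    g = np.exp(-4.0 * np.log(2.0) * (x - alpha) ** 2 / fwhm ** 2)
    return g / np.sum(g * np.gradient(x))

def discrete_delta(x, alpha, width=4):
    """Moment-matched point source on a (possibly non-uniform) grid: the returned vector s
    satisfies sum_j s_j * f(x_j) * w_j ~ f(alpha) for smooth f, with w_j local cell widths.
    alpha is assumed to lie away from the ends of the grid."""
    k = np.searchsorted(x, alpha)
    idx = np.arange(k - width // 2, k + width // 2)
    # Lagrange interpolation weights L_j(alpha) on the local stencil
    L = np.array([np.prod([(alpha - x[m]) / (x[j] - x[m]) for m in idx if m != j]) for j in idx])
    s = np.zeros_like(x)
    s[idx] = L / np.gradient(x)[idx]
    return s

# non-uniform Gauss-Lobatto grid on [-1, 1] and a smooth test function
x = np.cos(np.pi * np.arange(33) / 32)[::-1]
f = np.sin(3.0 * x)
alpha = 0.123
w = np.gradient(x)

print("exact value f(alpha)        :", np.sin(3.0 * alpha))
print("moment-matched point source :", np.sum(discrete_delta(x, alpha) * f * w))
for fwhm in (0.05, 0.2, 0.5):
    print(f"gaussian source, fwhm = {fwhm:4.2f}:", np.sum(gaussian_delta(x, alpha, fwhm) * f * w))
```

this comparison only concerns the pointwise action of the source on a smooth function; as the experiments above show, the late-time decay itself turned out to be rather insensitive to the gaussian width.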
for the stable and accurate approximation, we derived the interface conditions between the spectral and spectral domains and the spectral and finite - difference domains .the main approach introduced in this work is the use of the finite - difference domain as the boundary domain . without the finite - difference domain as the boundary domain ,the multi - domain composed of only the spectral sub - domains does not yield the proper power - law decay profile unless the range of the computational domain is very large . using the multi - domain approach with the finite - difference boundary domain method ,we could obtain the proper power - law decay profile with a relatively small computational cost .that is , the cfl condition is much relaxed and the location of the outer boundary of the computational domain is not afar .numerical results show that the hybrid method obtains a proper quasi - normal and power - law decay with the gaussian -function approximation .interestingly , even with a large value of , the proper power - law decay was observed . with the large value of ,the spectral filtering operation was not necessary , so the computational time was much reduced . the hybrid method with the discrete -function approximation ,however , did not yield the proper power - law decay .the current study only considered the multi - domain spectral and finite - difference method with the -function residing in the spectral domain .we will investigate the optimal configuration of the multi - domain computational domain in our future research .99 f. j. zerilli , effective potential for even - parity regge - wheeler gravitational perturbation equations , _ phys ._ 24 ( 1970 ) 737 - 738. f. j. zerilli , gravitational field of a particle falling in a schwarzschild geometry , _ phys .2(1970 ) 2141 - 2160. j. l. barton , d. j. lazar , d. j. kennefick , g. khanna , l. m. burko , computational efficiency of frequency and time domain calculations of extreme mass ratio binaries : equatorial orbits , _ phys .d _ 78 ( 2008 ) 064042 .p. a. sundararajan , g. khanna , s. a. hughes , towards adiabatic waveforms for inspiral into kerr black holes : a new model of the source for the time domain perturbation equation , _ phys .d _ 76 ( 2007 ) , 104005(1)-104005(20 ) .p. a. sundararajan , g. khanna , s. a. hughes and s. drasco , towards adiabatic waveforms for inspiral into kerr black holes : ii .dynamical sources and generic orbits , _ phys .d _ 78 ( 2008 ) , 024022(1)-024022(13 ) .
a hybrid method is developed based on the spectral and finite - difference methods for solving the inhomogeneous zerilli equation in time - domain . the developed hybrid method decomposes the domain into the spectral and finite - difference domains . the singular source term is located in the spectral domain while the solution in the region without the singular term is approximated by the higher - order finite - difference method . the spectral domain is also split into multi - domains and the finite - difference domain is placed as the boundary domain . due to the global nature of the spectral method , a multi - domain method composed of the spectral domains only does not yield the proper power - law decay unless the range of the computational domain is large . the finite - difference domain helps reduce boundary effects due to the truncation of the computational domain . the multi - domain approach with the finite - difference boundary domain method reduces the computational costs significantly and also yields the proper power - law decay . stable and accurate interface conditions between the finite - difference and spectral domains and the spectral and spectral domains are derived . for the singular source term , we use both the gaussian model with various values of full width at half maximum and a localized discrete -function . the discrete -function was generalized to adopt the gauss - lobatto collocation points of the spectral domain . the gravitational waveforms are measured . numerical results show that the developed hybrid method accurately yields the quasi - normal modes and the power - law decay profile . the numerical results also show that the power - law decay profile is less sensitive to the shape of the regularized -function for the gaussian model than expected . the gaussian model also yields better results than the localized discrete -function .
weak value amplification (wva) is a concept that has been used under a great variety of experimental conditions to reveal tiny changes of a variable of interest. in all those cases, a priori sensitivity limits were not due to the quantum nature of the light used (_photon statistics_), but instead to the insufficient resolution of the detection system, what might generally be termed _technical noise_. wva was a feasible choice to go beyond this limitation. in spite of this extensive evidence, its interpretation ``has historically been a subject of confusion''. for instance, while some authors show that weak-value-amplification techniques (which only use a small fraction of the photons) ``compare favorably with standard techniques (which use all of them)'', others claim that wva ``does not offer any fundamental metrological advantage'', or that wva ``does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection''. however, these conclusions are criticized by others based on the idea that ``the assumptions in their statistical analysis are irrelevant for realistic experimental situations''. the problem might reside in the assumptions that each analysis considers relevant. here we make use of some simple, but fundamental, results from quantum estimation theory to show that there are two sides to consider when analyzing in which sense wva can be useful. on the one hand, the technique generally makes use of linear-optics unitary operations. therefore, it cannot modify the statistics of the photons involved. basic quantum estimation theory states that the post-selection of an appropriate output state, the basic element in wva, cannot be better than the use of the input state. moreover, wva uses some selected, appropriate but partial, information about the quantum state, which cannot be better than considering the full state. indeed, due to the unitary nature of the operations involved, any transformation of the input state should be just as good as performing no transformation at all. in other words, when considering only the quantum nature of the light used, wva cannot enhance the precision of measurements. on the other hand, a more general analysis that goes beyond only considering the quantum nature of the light shows that wva can be useful when certain technical limitations are considered. in this sense, it might increase the ultimate resolution of the detection system by effectively lowering the value of the smallest quantity that can be detected. in most scenarios, although not always, the detected signal is severely depleted, due to the quasi-orthogonality of the input and output states selected.
however, in many applications, limitations are not related to the low intensity of the signal, but to the smallest change that the detector can measure, irrespective of the intensity level of the signal. a potential advantage of our approach is that we make use of the concept of trace distance, a clear and direct measure of the degree of distinguishability of two quantum states. indeed, the trace distance gives us the minimum probability of error in distinguishing two quantum states that can be achieved with the best detection system one can imagine. measuring tiny quantities is essentially equivalent to distinguishing between nearly parallel quantum states. therefore we offer a very basic and physical understanding of how wva works, based on the idea of how wva transforms very close quantum states, which can be useful to the general physics reader. here we use an approach slightly different from what most other analyses of wva do, where the tool used to estimate its usefulness is usually the fisher information. contrary to how we use the trace distance here, to set a sensitivity bound considering only how the quantum state changes for different values of the variable of interest, the fisher information requires knowing the probability distribution of possible experimental outcomes for a given value of the variable of interest. therefore, it can look for sensitivity bounds for measurements by including _technical characteristics_ of specific detection schemes. a brief comparison between both approaches will be made towards the end of this paper. one word of caution will be useful here. the concept of weak value amplification is presented for the most part in the framework of quantum mechanics, where it was born. it can be readily understood in terms of constructive and destructive interference between probability amplitudes. interference is a fundamental concept in any theory based on waves, such as classical electromagnetism. therefore, the concept of weak value amplification can also be described, in many scenarios, in terms of interference of classical waves. indeed, most of the experimental implementations of the concept, since its first demonstration in 1991, belong to this type and can be understood without resorting to a quantum theory formalism. for the sake of example, we consider a specific weak amplification scheme, depicted in fig. 1, which has been recently demonstrated experimentally.
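to make the role of the trace distance concrete, the short python sketch below evaluates it, together with the corresponding helstrom bound on the minimum error probability (for equal prior probabilities), for two pure states that differ only by the small spectral phase imprinted by a tiny delay; the pulse width, frequency grid and delay are illustrative values and are not meant to reproduce the experiment analysed below.

```python
import numpy as np

def trace_distance(psi1, psi2):
    """Trace distance between two pure states: sqrt(1 - |<psi1|psi2>|^2)."""
    overlap = np.vdot(psi1, psi2)
    return np.sqrt(max(0.0, 1.0 - abs(overlap) ** 2))

def min_error_probability(psi1, psi2):
    """Helstrom bound: smallest probability of confusing the two states at equal priors."""
    return 0.5 * (1.0 - trace_distance(psi1, psi2))

# two nearly parallel states: the same pulse with and without a tiny delay tau
omega = np.linspace(-5.0, 5.0, 2001)       # frequency detuning grid (arbitrary units)
sigma, tau = 1.0, 0.02
psi0 = np.exp(-omega ** 2 / (4 * sigma ** 2))
psi0 /= np.linalg.norm(psi0)
psi_tau = psi0 * np.exp(1j * omega * tau)  # a small delay is a linear spectral phase
print("trace distance           :", trace_distance(psi0, psi_tau))
print("minimum error probability:", min_error_probability(psi0, psi_tau))
```

with these distinguishability measures in hand, we return to the scheme of fig. 1.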
As an example, we consider a specific weak value amplification scheme, depicted in Fig. 1, which has been recently demonstrated experimentally. It aims at measuring very small temporal delays, or correspondingly tiny phase changes, with the help of optical pulses of much longer duration. We consider this specific case because it contains the main ingredients of a typical WVA scheme, explained below, and because it allows us to derive analytical expressions for all the quantities involved, which facilitates the analysis of the main results. Moreover, the scheme makes use of linear optical elements only and also works with large-bandwidth partially coherent light.

In general, a WVA scheme requires three main ingredients: a) two subsystems (here two degrees of freedom: the polarisation and the spectrum of an optical pulse) that are weakly coupled (here by a polarisation-dependent temporal delay introduced with the help of a Michelson interferometer); b) the _pre-selection_ of the input state of both subsystems; and c) the _post-selection_ of the state of one of the subsystems (the state of polarisation) and the measurement of the state of the remaining subsystem (the spectrum of the pulse). With appropriate _pre-_ and _post-selection_ of the polarisation of the output light, tiny changes of the temporal delay can cause anomalously large changes of the spectrum, rendering very small temporal delays detectable in principle.

[Figure 1: weak value amplification scheme aimed at detecting extremely small temporal delays; the full caption is given at the end of the paper.]

Let us be more specific about how all these ingredients are realized in the scheme depicted in Fig. 1.
An input coherent laser beam (mean photon number $N$) shows circular polarisation and a Gaussian shape with temporal width $T$ (full width at half maximum); the corresponding normalized temporal and spectral shapes are transform-limited Gaussians. The input beam is divided into the two arms of a Michelson interferometer with the help of a polarising beam splitter (PBS$_1$). The light beams with orthogonal polarisations traversing each arm of the interferometer are delayed by $\tau_0$ and $\tau_0+\tau$, respectively, which constitutes the weak coupling between the two degrees of freedom. After recombination of the two orthogonal signals in the same PBS, the combination of a liquid-crystal variable retarder (LCVR) and a second polarising beam splitter (PBS$_2$) performs the post-selection of the polarisation of the output state, projecting the incoming signal onto two orthogonal polarisation states. The amplitudes of the signals in the two output ports read (not normalized)
\begin{eqnarray}
\phi_H(\omega) &=& \frac{\psi(\omega)}{2}\,\exp\left[i(\omega_0+\omega)\tau_0\right]\left\{1+\exp\left[i(\omega_0+\omega)\tau-i\gamma\right]\right\}, \label{projections1}\\
\phi_V(\omega) &=& \frac{\psi(\omega)}{2}\,\exp\left[i(\omega_0+\omega)\tau_0\right]\left\{1-\exp\left[i(\omega_0+\omega)\tau-i\gamma\right]\right\}, \label{projections2}
\end{eqnarray}
where $\gamma$ is the retardance introduced by the LCVR, $\omega_0$ is the central frequency of the laser pulse, $\omega$ is the angular frequency deviation from the central frequency and $\psi(\omega)$ is the spectral shape of the input coherent laser signal.

[Figure 2: spectrum measured at the output for two post-selection settings, and shift of the centroid of the output spectrum as a function of the post-selection angle; the full caption is given at the end of the paper.]

After the projection performed at PBS$_2$, the WVA scheme distinguishes different states, corresponding to different values of the temporal delay, by measuring the spectrum of the outgoing signal in the selected output port. The spectra obtained with and without an attosecond-scale temporal delay, for two different polarisation projections, are shown in Figs. 2(a) and 2(b). To characterize the different modes one can measure, for instance, the centroid of the spectrum. Fig. 2(c) shows the centroid shift of the output signal as a function of the post-selection angle. The differential power between both signals (with and without the temporal delay) can be computed in a similar way. When there is no polarisation-dependent time delay ($\tau=0$), the centroid of the spectrum of the output signal is the same as the centroid of the input laser beam, i.e., there is no shift of the centroid. However, the presence of a small $\tau$ can produce a large and measurable shift of the centroid of the spectrum of the signal.

Detecting the presence ($\tau\neq 0$) or absence ($\tau=0$) of a temporal delay between the two coherent, orthogonally polarised beams after recombination in PBS$_1$, but before traversing PBS$_2$, is equivalent to detecting which of two quantum states is the output quantum state describing the coherent pulse leaving PBS$_1$. The spectral shape (mode function) of each polarisation component is the input spectrum $\psi(\omega)$ multiplied by the phase acquired in the corresponding arm of the interferometer, with $\omega_0$ and $\omega$ defined as above.
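To make the mechanism concrete, the following minimal numerical sketch evaluates the dark-port amplitude of eq. (projections2) for a transform-limited Gaussian pulse and computes the centroid shift of the output spectrum for several post-selection settings. The pulse duration (1 ps) and the delay (1 as) follow the values quoted in the figure captions, while the central wavelength (800 nm), the Gaussian spectral amplitude and the choice $\tau_0=0$ are assumptions of this sketch, not quantities fixed by the text.

\begin{verbatim}
import numpy as np

# Illustrative parameters: T and tau follow the figure captions,
# the central wavelength is an assumption of this sketch.
T, tau, lam0 = 1e-12, 1e-18, 800e-9     # pulse FWHM [s], delay [s], wavelength [m]
w0 = 2*np.pi*3e8/lam0                   # central angular frequency [rad/s]
dw = 2*np.sqrt(np.log(2))/T             # spectral width of the field for FWHM T

w   = np.linspace(-8*dw, 8*dw, 4001)    # detuning from w0 [rad/s]
psi = np.exp(-w**2/(2*dw**2))           # assumed Gaussian spectral amplitude

def dark_port_spectrum(gamma):
    """|phi_V(w)|^2 of eq. (projections2), with the common delay tau_0 set to 0."""
    amp = 0.5*psi*(1.0 - np.exp(1j*((w0 + w)*tau - gamma)))
    return np.abs(amp)**2

def centroid_shift(gamma):
    """Centroid of the dark-port spectrum relative to the input centroid (at w = 0)."""
    S = dark_port_spectrum(gamma)
    return np.trapz(w*S, w)/np.trapz(S, w)

# Scan the post-selection phase around the value gamma = w0*tau
for delta in (0.0, dw*tau, 10*dw*tau):
    shift = centroid_shift(w0*tau + delta)
    print(f"gamma - w0*tau = {delta:9.2e} rad  ->  centroid shift = {shift:9.2e} rad/s")
\end{verbatim}

Even for a delay of 1 as, the centroid of the dark-port spectrum in this sketch moves by a sizeable fraction of the pulse bandwidth when the post-selection phase is tuned close to $\omega_0\tau$, which is the anomalous amplification described above; at the same time, the power in that port is strongly reduced, a point taken up below.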
The minimum probability of error that can be made when distinguishing between two quantum states is related to the trace distance between the states. For two pure states $|\Psi_1\rangle$ and $|\Psi_2\rangle$, the (minimum) probability of error is $P_{\mathrm{err}}=\frac{1}{2}\left[1-\sqrt{1-\left|\langle\Psi_1|\Psi_2\rangle\right|^{2}}\right]$, which approaches $1/2$ for nearly identical states. On the contrary, distinguishing two quantum states with a low probability of error ($P_{\mathrm{err}}\to 0$) requires $\left|\langle\Psi_1|\Psi_2\rangle\right|\to 0$, i.e., the two states should be close to orthogonal.

The coherent broadband states considered here can be generally described as single-mode quantum states, where the mode is the corresponding spectral shape of the light pulse. Let us consider two single-mode coherent beams, with mode functions $\phi_1(\omega)$ and $\phi_2(\omega)$ and mean photon numbers $N_1$ and $N_2$ in the respective modes. The mode functions are assumed to be normalized, i.e., $\int d\omega\,|\phi_{1,2}(\omega)|^{2}=1$. The overlap between the two coherent states [eq. ([overlap1])] is then governed by the mode overlap
\begin{equation}
\rho=\int d\omega\,\phi_1(\omega)\left[\phi_2(\omega)\right]^{*}.
\end{equation}
In order to obtain eq. ([overlap1]) we have made use of the identity $\langle 0|\,\hat a_1^{\,n}\,(\hat a_2^{\dagger})^{m}\,|0\rangle=n!\,\rho^{\,n}\,\delta_{nm}$, where $\hat a_1$ and $\hat a_2^{\dagger}$ are the annihilation and creation operators associated with the two modes. Applied to the signal after PBS$_1$, but before PBS$_2$, this yields the minimum probability of error for detecting the temporal delay [eq. ([input_result])].

After post-selection at PBS$_2$, the distinguishability per detected photon is largest when the post-selected modes are close to orthogonal, but in that same limit the number of photons that survive the post-selection is smallest. Both effects indeed compensate, as they should, since WVA implements unitary transformations, and the trace distance between quantum states is preserved under unitary transformations. The quantum overlap between the post-selected states leads to eq. ([output_result]), which is the same result [see eq. ([input_result])] obtained for the signal after PBS$_1$, but before PBS$_2$.

We can also see the previous results from a slightly different perspective, making use of the Cramér-Rao inequality. The WVA scheme considered throughout can be thought of as a way of estimating the value of a single parameter, the delay $\tau$, with the help of a light pulse in a coherent state. Since the quantum state is pure, the minimum variance that any unbiased estimation of the parameter can show, the quantum Cramér-Rao inequality, reads
\begin{equation}
(\Delta\tau)^{2}\ge\left[4\left(\langle\partial_\tau\Psi|\partial_\tau\Psi\rangle-\left|\langle\Psi|\partial_\tau\Psi\rangle\right|^{2}\right)\right]^{-1}. \label{cramer1}
\end{equation}
Making use of eq. ([state1]), one obtains the explicit form of this bound for the present scheme, set by the rms bandwidth in angular frequency of the pulse, in the regime that applies in all cases of interest.

The Cramér-Rao inequality is a fundamental limit that sets a bound on the minimum variance that any measurement can achieve. It is unchanged by unitary transformations and only depends on the quantum state considered. Inspection of eqs. ([input_result]) and ([output_result]) indicates that a measurement after projection onto any basis, the core element of the weak amplification scheme, provides no fundamental metrological advantage. Notice that this result implies that the only relevant factor limiting the sensitivity of detection is the quantum nature of the light used (a _coherent state_ in our case). To obtain this result, we are implicitly assuming that a) we have full access to all relevant characteristics of the output signals; and b) detectors are ideal and can detect any change, as small as it might be, provided enough signal power is used. If this is the case, weak value amplification provides no enhancement of the sensitivity. However, this can be far from the truth in many realistic experimental situations. In the laboratory, the quantum nature of light is an important factor, but not the only one, limiting the capacity to measure tiny changes of variables of interest. On the one hand, most of the time we detect only certain characteristics of the output signals, probably the most relevant ones, but this is still partial information about the quantum state.
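As an illustration of these relations, the following sketch evaluates the minimum probability of error for telling $\tau=0$ from $\tau=1$ as with the light that leaves PBS$_1$, using the Gaussian-pulse parameters of the previous sketch. It relies on the standard overlap of two equal-amplitude coherent states, $|\langle\Psi_1|\Psi_2\rangle|^{2}=\exp\left[-2N(1-\mathrm{Re}\,\rho)\right]$, and on a single-mode simplification in which all $N$ photons acquire the delay; both are assumptions of the sketch, not expressions taken from the elided equations of the text.

\begin{verbatim}
import numpy as np

# Same illustrative Gaussian-pulse model as before
T, tau, lam0 = 1e-12, 1e-18, 800e-9
w0 = 2*np.pi*3e8/lam0
dw = 2*np.sqrt(np.log(2))/T
w  = np.linspace(-8*dw, 8*dw, 4001)
psi = np.exp(-w**2/(2*dw**2))
psi = psi/np.sqrt(np.trapz(np.abs(psi)**2, w))   # normalised mode function

# Mode overlap rho between the undelayed and the delayed spectral modes
phi1 = psi
phi2 = psi*np.exp(1j*(w0 + w)*tau)
rho  = np.trapz(phi1*np.conj(phi2), w)

def p_err(N):
    """Helstrom bound for two equally likely coherent states with mean photon number N."""
    overlap2 = np.exp(-2*N*(1 - np.real(rho)))   # |<Psi_1|Psi_2>|^2 (assumed form)
    return 0.5*(1 - np.sqrt(1 - overlap2))

for N in (1e4, 1e5, 1e6):
    print(f"N = {N:8.0e} photons  ->  minimum probability of error = {p_err(N):.3f}")
\end{verbatim}

In this example $1-\mathrm{Re}\,\rho$ is dominated by the carrier phase $\omega_0\tau$, and the error probability drops below the percent level only for mean photon numbers of order $10^{6}$; since post-selection is part of a unitary, linear-optics transformation, it cannot lower this number, which is the content of the compensation argument above.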
On the other hand, detectors are not ideal, and noteworthy limitations to their performance can appear. To name a few: they might no longer work properly above a certain input photon number; the electronics and the signal processing of the data can limit the resolution beyond what is allowed by the specific quantum nature of the light; and conditions in the laboratory can change randomly, effectively reducing the sensitivity achievable in the experiment. Surely, all of these are _technical_ rather than _fundamental_ limitations, but in many situations the ultimate limit might be _technical_ rather than _fundamental_. In this scenario, we show that weak value amplification can be a _valuable_ and _easy_ option to overcome all of these technical limitations, as has been demonstrated in numerous experiments.

As anticipated, the comparison with the Fisher-information approach can be made explicit: if $x$ is the variable that we measure, the Fisher information can be written as $F(\tau)=\int dx\,p(x|\tau)\left[\partial_\tau\ln p(x|\tau)\right]^{2}$, which requires the probability distribution $p(x|\tau)$ of the experimental outcomes for a given value of the delay and therefore naturally incorporates the characteristics of the specific detection scheme.

Viza, G. I. et al. Weak-values technique for velocity measurements. _Opt. Lett._ *38*, 2949-2952 (2013).
Combes, J., Ferrie, C., Zhang, J. & Caves, C. M. Quantum limits on postselected, probabilistic quantum metrology. _Phys. Rev. A_ *89*, 052117 (2014).

Figure 1. Weak value amplification scheme aimed at detecting extremely small temporal delays. The input pulse polarisation state is selected to be left-circular by using a polariser, a quarter-wave plate (QWP) and a half-wave plate (HWP). A first polarising beam splitter (PBS$_1$) splits the input into two orthogonal linear polarisations that propagate along different arms of the interferometer. An additional QWP is introduced in each arm to rotate the beam polarisation, so as to allow the recombination of both beams, delayed by a temporal delay $\tau$, into a single beam by the same PBS. After PBS$_1$, the output polarisation state is selected with a liquid crystal variable retarder (LCVR) followed by a second polarising beam splitter (PBS$_2$). The variable retarder is used to set the post-selection parameter $\gamma$ experimentally. Finally, the spectrum of each output beam is measured using an optical spectrum analyzer (OSA). The polarisation labels in the figure correspond to two sets of orthogonal polarisations. Figure drawn by one of the authors (Luis-Jose Salazar-Serrano).

Figure 2. Spectrum measured at the output. (a) and (b): spectral shape of the mode functions without (solid blue line) and with (dashed green line) the attosecond-scale temporal delay, for two values of the post-selection angle; in (a) the angle fulfils the condition that yields the minimum mode overlap, in (b) a different angle is used. (c) Shift of the centroid of the spectrum of the output pulse after projection onto the selected polarisation state at PBS$_2$, as a function of the post-selection angle, for three values of the temporal delay (green solid, dotted red and dashed blue lines). Label I corresponds to the mode shown in (b); label II corresponds to the post-selection angle where the condition is fulfilled [mode shown in (a)], which yields the minimum mode overlap between the states with and without the delay.

Figure 3. Mode overlap and insertion loss as a function of the post-selection angle. Mode overlap of the mode functions corresponding to the quantum states with and without the temporal delay, as a function of the post-selection angle (solid blue line); the insertion loss is indicated by the dotted green line. The minimum mode overlap, and maximum insertion loss, corresponds to the post-selection angle that fulfils the same condition as in Fig. 2.
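To illustrate how this definition is applied to the present scheme, the sketch below estimates $F(\tau)$ for the dark-port spectral measurement of the previous sketches by finite differences of the outcome distribution $p(\omega|\tau)$. It gives the information per detected photon only; the strong reduction of the number of post-selected photons, i.e. the insertion loss of Fig. 3, must be accounted for separately. All numerical values are the illustrative ones assumed earlier.

\begin{verbatim}
import numpy as np

# Same illustrative Gaussian-pulse model as in the previous sketches
T, lam0 = 1e-12, 800e-9
w0 = 2*np.pi*3e8/lam0
dw = 2*np.sqrt(np.log(2))/T
w  = np.linspace(-8*dw, 8*dw, 4001)
psi = np.exp(-w**2/(2*dw**2))

def dark_port_pdf(tau, gamma):
    """Normalised outcome distribution p(w|tau) of the dark-port spectral measurement."""
    S = np.abs(0.5*psi*(1.0 - np.exp(1j*((w0 + w)*tau - gamma))))**2
    return S/np.trapz(S, w)

def fisher_per_photon(tau, gamma, h=1e-20):
    """Classical Fisher information F(tau), per detected photon, by central differences."""
    p  = dark_port_pdf(tau, gamma)
    dp = (dark_port_pdf(tau + h, gamma) - dark_port_pdf(tau - h, gamma))/(2.0*h)
    return np.trapz(dp**2/np.maximum(p, 1e-300), w)

tau = 1e-18
for delta in (dw*tau, 10*dw*tau):
    F = fisher_per_photon(tau, w0*tau + delta)
    print(f"gamma - w0*tau = {delta:8.2e} rad  ->  F = {F:8.2e} s^-2,"
          f"  1/sqrt(F) = {1/np.sqrt(F):8.2e} s")
\end{verbatim}

In this example the information carried by each detected photon is very large, but only a minute fraction of the input photons (of order $10^{-12}$ for the smaller offset shown) reaches the dark port, which is precisely the trade-off between amplification and insertion loss identified above.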
Figure 4. Reduction of the probability of error using a weak value amplification scheme. (a) Minimum probability of error as a function of the photon number that leaves the interferometer; two representative operating points are highlighted. (b) Number of photons after projection onto the selected polarisation state, as a function of the post-selection angle, for a fixed input photon number; the dot marks the operating point highlighted in (a). Pulse width: $T=1$ ps; temporal delay: $\tau=1$ as.
Weak value amplification (WVA) is a concept that has been extensively used in a myriad of applications with the aim of rendering measurable tiny changes of a variable of interest. In spite of this, there is still an ongoing debate about its _true_ nature and about whether it is really needed for achieving high sensitivity. Here we aim at resolving the puzzle, using some basic concepts from quantum estimation theory, and highlight what the use of the WVA concept can offer and what it cannot. While WVA cannot be used to go beyond the fundamental sensitivity limits that arise from considering the full quantum nature of the states involved, it can nevertheless enhance the sensitivity of _real_ detection schemes that are limited by many other factors apart from the quantum nature of the states, i.e. by _technical noise_. Importantly, it can do so in a straightforward and easily accessible manner.