finite temperature instantons calorons have a rich structure if one allows the polyakov loop xmath1 in the periodic gauge xmath2 to be non trivial at spatial infinity specifying the holonomy it implies the spontaneous breakdown of gauge symmetry for a charge one xmath3 caloron the location of the xmath4 constituent monopoles can be identified through i points where two eigenvalues of the polyakov loop coincide which is where the xmath5 symmetry is partially restored to xmath6 ii the centers of mass of the spherical lumps iii the dirac monopoles or rather dyons due to self duality as the sources of the abelian field lines extrapolated back to the cores if well separated and localised all these coincide xcite herewe study the case of two constituents coming close together for xmath7 with an example for xmath0 the eigenvalues of xmath8 can be ordered by a constant gauge transformation xmath9 3 mm ww 3mm1 nn111 with xmath10 the constituent monopoles have masses xmath11 where xmath12 using the classical scale invariance to put the extent of the euclidean time direction to one xmath13 in the same way we can bring xmath14 to this form by a local gauge function xmath15 we note that xmath16 unique up to a residual abelian gauge rotation and xmath17 will be smooth except where two or more eigenvalues coincide the ordering shows there are xmath4 different types of singularities called defects xcite for each of the neighbouring eigenvalues to coincide the first xmath18 are associated with the basic monopoles as part of the inequivalent xmath19 subgroups related to the generators of the cartan subgroup the xmath20 defect arises when the first and the last eigenvalue still neighbours on the circle coincide its magnetic charge ensures charge neutrality of the caloron the special status xcite of this defect also follows from the so called taubes winding xcite supporting the non zero topological charge xcite to analyse the lump structure when two constituents coincide we recall the simple formula for the xmath3 action density xcite 6mmf2x22 6mmmrmym ym1 0rm1 cmsm smcm with xmath21 the center of mass location of the xmath22 constituent monopole we defined xmath23 xmath24 xmath25 as well as xmath26 xmath27 we are interested in the case where the problem of two coinciding constituents in xmath3 is mapped to the xmath28 caloron for thiswe restrict to the case where xmath29 for some xmath30 which for xmath0 is always the case when two constituents coincide since now xmath31 one easily verifies that xmath32 describing a single constituent monopole with properly combined mass reducing eq 2 to the action density for the xmath28 caloron with xmath33 constituents the topological charge can be reduced to surface integrals near the singularities with the use of xmath34 where xmath35 if one assumes all defects are pointlike this can be used to show that for each of the xmath4 types the net number of defects has to equal the topological charge the type being selected by the branch of the logarithm associated with the xmath4 elements in the center xcite one might expect the defects to merge when the constituent monopoles do a triple degeneracy of eigenvalues for xmath0 implies the polyakov loop takes a value in the center yet this can be shown not to occur for the xmath0 caloron with unequal masses we therefore seem to have at least one more defect than the number of constituents when xmath36 we will study in detail a generic example in xmath0 with xmath37 we denote by xmath38 the position associated with the xmath22 
constituent where two eigenvalues of the polyakov loop coincide in the gauge where xmath39 see eq 1 we established numerically xcite that p1pz1ei3 ei3e2i3 p2pz2e2i1 ei1ei1 p3pz3ei2 e2i2ei2this is for any choice of holonomy and constituent locations with the proviso they are well separated ie their cores do not overlap in which case to a good approximation xmath40 herewe take xmath41 xmath42 and xmath43 the limit of coinciding constituents is achieved by xmath44 with this geometryit is simplest to follow for changing xmath45 the location where two eigenvalues coincide in very good approximation as long as the first two constituents remain well separated from the third constituent carrying the taubes winding xmath46 will be constant in xmath45 and the xmath0 gauge field xcite of the first two constituents will be constant in time in the periodic gauge thus xmath47 for xmath48 greatly simplifying the calculations when the cores of the two approaching constituents start to overlap xmath49 and xmath50 are no longer diagonal but still block diagonal mixing the lower xmath51 components at xmath52 they are diagonal again but xmath50 will be no longer in the fundamental weyl chamber a weyl reflection maps it back while for xmath53 a more general gauge rotation back to the cartan subgroup is required to do so see fig 1 at xmath52 each xmath54 and xmath55 lies on the dashed line which is a direct consequence of the reduction to an xmath19 caloron to illustrate this more clearly we give the expressions for xmath54 which we believe to hold for any non degenerate choice of the xmath56 when xmath57 p1pz1e2i2 e2i2e4i2 p2pz2ei2 e2i2ei2 p3pz3ei2 e2i2ei2these can be factorised as xmath58 where xmath59 describes an overall xmath60 factor in terms of xmath61 xmath62 and xmath63the xmath19 embedding in xmath0 becomes obvious it leads for xmath64 to the trivial and for xmath65 to the non trivial element of the center of xmath19 appropriate for the latter carrying the taubes winding on the other hand xmath66 corresponds to xmath67 which for the xmath19 caloron is not related to coinciding eigenvalues for xmath44 fig 2 shows that xmath68 gets stuck at a finite distance 0131419 from xmath69 the xmath19 embedding determines the caloron solution for xmath70 with constituent locations xmath71 and xmath72 and masses xmath73 and xmath74 the best proof for the spurious nature of the defect is to calculate its location purely in terms of this xmath19 caloron by demanding the xmath19 polyakov loop to equal xmath75 for this we can use the analytic expression xcite of the xmath19 polyakov loop along the xmath76axis the location of the spurious defect xmath77 is found by solving xmath78 for our example xmath79 indeed verifies this equation with the xmath19 embedded result at hand we find that only for xmath80 the defects merge to form a triple degeneracy using xmath81 this is so for coinciding constituent monopoles of equal mass for unequal massesthe defect is always spurious but it tends to stay within reach of the non abelian core of the coinciding constituent monopoles except when the mass difference approaches its extremal values xmath82 see fig 2 bottom at these extremal valuesone of the xmath0 constituents becomes massless and delocalised which we excluded for xmath53 however the limit xmath44 is singular due to the global decomposition into xmath83 at xmath52 gauge rotations xmath84 in the global xmath19 subgroup do not affect xmath59 and therefore any xmath85 gives rise to the same accidental degeneracy in particular 
solving xmath86, corresponding to the Weyl reflection xmath87, yields xmath88 for xmath89 (the isolated point in Fig. 2, top). Indeed, xmath90 traces out a nearly spherical shell where two eigenvalues of xmath91 coincide; note that for xmath80 this shell collapses to a single point xmath92. A perturbation tends to remove this accidental degeneracy. Abelian projected monopoles are not always what they seem to be; even though required by topology, topology cannot be localised, no matter how tempting this may seem for smooth fields. I am grateful to Andreas Wipf for his provocative question that led to this work. I thank Jan Smit and especially Chris Ford for discussions.
References:
T. C. Kraan and P. van Baal, Nucl. Phys. B 533 (1998) 627, hep-th/9805168.
P. van Baal, in: Lattice Fermions and Structure of the Vacuum, eds. V. Mitrjushkin and G. Schierholz (Kluwer, Dordrecht, 2000) p. 269, hep-th/9912035.
C. Ford, T. Tok and A. Wipf, Nucl. Phys. B 548 (1999) 585, hep-th/9809209; Phys. Lett. B 456 (1999) 155, hep-th/9811248.
T. C. Kraan and P. van Baal, Phys. Lett. B 435 (1998) 389, hep-th/9806034.
M. N. Chernodub, T. C. Kraan and P. van Baal, Nucl. Phys. B (Proc. Suppl.) 83 (2000) 556, hep-lat/9907001.
C. Taubes, in: Progress in Gauge Field Theory, eds. G. 't Hooft et al. (Plenum Press, New York, 1984) p. 563.
M. García Pérez, A. González-Arroyo, A. Montero and P. van Baal, JHEP 9906 (1999) 001, hep-lat/9903022.
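For readers who want to reproduce the identification used in this paper (constituents and defects are located where two eigenvalues of the Polyakov loop coincide), the following is a minimal numerical sketch, not the authors' code. It assumes a user-supplied function polyakov_loop(x) returning the loop as a 3x3 complex matrix at a spatial point; the toy_loop example, the grid and the tolerance tol are invented purely for illustration.

```python
import numpy as np

def eigenphases(P):
    """Sorted eigenvalue phases of a (unitary) matrix P, in (-pi, pi]."""
    return np.sort(np.angle(np.linalg.eigvals(P)))

def smallest_phase_gap(P):
    """Smallest gap between neighbouring eigenvalue phases on the unit circle.
    A value near zero signals two (nearly) coinciding eigenvalues, i.e. a defect candidate."""
    ph = eigenphases(P)
    gaps = np.diff(np.append(ph, ph[0] + 2.0 * np.pi))  # include the wrap-around gap
    return gaps.min()

def scan_for_defects(polyakov_loop, grid, tol=1e-3):
    """Return the grid points where two eigenvalues of P(x) coincide within tol.
    polyakov_loop(x) is assumed to return the loop as an n x n complex array."""
    return [np.asarray(x) for x in grid
            if smallest_phase_gap(polyakov_loop(np.asarray(x))) < tol]

if __name__ == "__main__":
    # Toy SU(3)-like example: a diagonal "loop" whose first two eigenvalues
    # coincide on the plane x[0] = 0 (purely for illustration).
    def toy_loop(x):
        mu = np.array([-0.25 + 0.1 * x[0], -0.25 - 0.1 * x[0], 0.5])
        return np.diag(np.exp(2j * np.pi * mu))

    grid = [(x0, 0.0, 0.0) for x0 in np.linspace(-1.0, 1.0, 21)]
    print(scan_for_defects(toy_loop, grid, tol=1e-6))
```

On a real configuration one would replace toy_loop by the path-ordered exponential measured on the lattice or evaluated from the analytic caloron gauge field; the gap criterion itself is insensitive to that choice.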
We analyse what happens when two constituent monopoles of the xmath0 caloron merge. The constituents are identified through degenerate eigenvalues of the Polyakov loop, i.e. through the singularities, or defects, of its Abelian projection. It follows that there are defects that are not directly related to the actual constituent monopoles.
introduction; puzzle; example; resolution; lesson; acknowledgements
the first two exact solution of einstein s field equations were obtained by schwarzschild 1 soon after einstein introduced general relativity gr the first solution describes the geometry of the space time exterior to a prefect fluid sphere in hydrostatic equilibrium while the other known as interior schwarzschild solution corresponds to the interior geometry of a fluid sphere of constant homogeneous energy density xmath0 the importance of these two solutions in gr is well known the exterior solution at a given point depends only upon the total mass of the gravitating body and the radial distance as measured from the centre of the spherical symmetry and not upon the type of the density distribution considered inside the mass however we will focus on this point of crucial importance later on in the present paper on the other hand the interior schwarzschild solution provides two very important features towards obtaining configurations in hydrostatic equilibrium compatible with gr namely i it gives an absolute upper limit on compaction parameter xmath1 mass to size ratio of the entire configuration in geometrized units xmath2 for any static and spherical solution provided the density decreases monotonically outwards from the centre in hydrostatic equilibrium 2 and ii for an assigned value of the compaction parameter xmath3 the minimum central pressure xmath4 corresponds to the homogeneous density solution see eg 3 regarding these conditions it should be noted that the condition i tells us that the values higher than the limiting maximum value of xmath5 can not be attained by any static solution but what kinds of density variations are possible for a mass to be in the state of hydrostatic equilibrium the answer to this important question could be provided by an appropriate analysis of the condition ii and the necessary conditions put forward by exterior schwarzschild solution despite the non linear differential equations various exact solutions for static and spherically symmetric metric are available in the literature 4 tolman 5 obtained five different types of exact solutions for static cases namely type iii which corresponds to the constant density solution obtained earlier by schwarzschild 1 type iv type v type vi and type vii the solution independently obtained by adler 6 adams and cohen 7 and kuchowicz 8 buchdahl s solution 9 for vanishing surface density the gaseous model the solution obtained by vaidya and tikekar 10 which is also obtained independently by durgapal and bannerji 11 the class of exact solutions discussed by durgapal 12 and also durgapal and fuloria 13 solution knutsen 14 examined various physical properties of the solutions mentioned in references 6 8 10 11 and 13 in great detail and found that these solutions correspond to nice physical properties and also remain stable against small radial pulsations upto certain values of xmath3 tolman s v and vi solutions are not considered physically viable as they correspond to singular solutions infinite values of central density that is the metric coefficient xmath6 at xmath7 and pressure for all permissible values of xmath3 except tolman s v and vi solutions all other solutions mentioned above are known as regular solutions finite positive density at the origin that is the metric coefficient xmath9 at xmath10 which decreases monotonically outwards which can be further divided into two categories i regular solutions corresponding to a vanishing density at the surface together with pressure like tolman s vii solution mehra 15 
durgapal and rawat 16 and negi and durgapal 17 18 and buchdahl s gaseous solution 9 and ii regular solutions correspond to a non vanishing density at the surface like tolman s iii and iv solutions 5 and the solutions discussed in the ref6 8 and 10 13 respectively the stability analysis of tolman s vii solution with vanishing surface density has been undertaken in detail by negi and durgapal 17 18 and they have shown that this solution also corresponds to stable ultra compact objects ucos which are entities of physical interest this solution also shows nice physical properties such as pressure and energy density are positive and finite everywhere their respective gradients are negative the ratio of pressure to density and their respective gradients decrease outwards etc the other solution which falls in this category and shows nice physical properties is the buchdahl s solution 9 however knutsen 19 has shown that this solution turned out to be unstable under small radial pulsations all these solutions with finite as well as vanishing surface density discussed above in fact fulfill the criterion i that is the equilibrium configurations pertaining to these solutions always correspond to a value of compaction parameter xmath3 which is always less than the schwarzschild limit i e xmath11 but this condition alone does not provide a necessary condition for hydrostatic equilibrium nobody has discussed until now whether these solutions also fulfill the condition ii which is necessary to satisfy by any static and spherical configuration in the state of hydrostatic equilibrium recently by using the condition ii we have connected the compaction parameter xmath3 of any static and spherical configuration with the corresponding ratio of central pressure to central energy density xmath12 and worked out an important criterion which concludes that for a given value of xmath13 the maximum value of compaction parameter xmath14 should always correspond to the homogeneous density sphere 20 an examination of this criterion on some well known exact solutions and equations of state eoss indicated that this criterion in fact is fulfilled only by those configurations which correspond to a vanishing density at the surface together with pressure 20 or by the singular solutions with non vanishing surface density section 5 of the present study this result has motivated us to investigate in detail the various exact solutions available in the literature and disclose the reason s behind non fulfillment of the said criterion by various regular analytic solutions and eoss corresponding to a non vanishing finite density at the surface of the configuration in this connection in the present paper we have examined various exact solutions available in the literature in detail it is seen that tolman s vii solution with vanishing surface density 15 17 18 buchdahl s gaseous solution 9 and tolman s v and vi singular solutions pertain to a value of xmath3 which always turns out to be less than the value xmath15 of the homogeneous density sphere for all assigned values of xmath13 on the other hand the solutions having a finite non zero surface density that is the pressure vanishes at the finite surface density do not show consistency with the structure of the general relativity as they correspond to a value of xmath3 which turns out to be greater than xmath15 for all assigned values of xmath13 and thus violate the criterion obtained in 20 one may ask for example what could be in fact the reasons behind non fulfillment of the criterion 
obtained in 20 by various exact solutions corresponding to a finite non zero density at the surface we have been able to pin point which is discussed under section 3 of the present study the main reason namely the actual total mass xmath16 which appears in the exterior schwarzschild solution in fact can not be attained by the configurations corresponding to a regular density variation with non vanishing surface density the metric inside a static and spherically symmetric mass distribution corresponds to xmath17 where xmath18 and xmath19 are functions of xmath20 alone the resulting field equations for the metric governed by eq1 yield in the following form xmath21 1r2 8pi t11 8pi p elambda nur 1r2 1r2 8pi t22 8pi t33 8pi p elambda nu2 nu24nulambda4nu lambda2r endaligned where the primes represent differentiation with respect to xmath20 the speed of light xmath22 and the universal gravitation constant xmath23 are chosen as unity that is we are using the geometrized units xmath24 and xmath0 represent respectively the pressure and energy density inside the perfect fluid sphere related with the non vanishing components of the energy momentum tensor xmath25 0 1 2 and 3 respectively eqs2 4 represent second order coupled differential equations which can be reduced to the first order coupled differential equations by eliminating xmath26 from eq 3 with the help of eqs 2 and 4 in the well known form namely tov equations tolman 5 oppenheimer volkoff 21 governing hydrostatic equilibrium in general relativity xmath27rr 2mr xmath28 and xmath29 where prime denotes differentiation with respect to xmath20 and xmath30 is defined as the mass energy contained within the radius xmath31 that is xmath32 the equation connecting metric parameter xmath19 with xmath30 is given by xmath33 1 8pi rint0r er2 dr the three field equations or tov equations mentioned above involve four variables namely xmath34 and xmath19 thus in order to obtain a solution of these equations one more equation is needed which may be assumed as a relation between xmath24 and xmath0 eos or can be regarded as an algebraic relation connecting one of the four variables with the radial coordinate xmath20 or an algebraic relation between the parameters for obtaining an exact solution the later approach is employed notice that eq9 yields the metric coefficient xmath35 for the assumed energy density xmath0 as a function of radial distance xmath31 once the metric coefficient xmath35 or mass xmath30 is defined for assumed energy density by using eqs9 or 8 the pressure xmath24 and the metric coefficient xmath36 can be obtained by solving eqs5 and 6 respectively which yield two constants of integration these constants should be obtained from the following boundary conditions in order to have a proper solution of the field equations b1 in order to maintain hydrostatic equilibrium throughout the configuration the pressure must vanish at the surface of the configuration that is xmath37 where xmath38 is the radius of the configuration b2 the consequence of eq10 ensures the continuity of the metric parameter xmath39 belonging to the interior solution with the corresponding expression for well known exterior schwarzschild solution at the surface of the fluid configuration that is xmath40 where xmath41 is the total mass of the configuration however the exterior schwarzschild solution guarantees that xmath42 which means that the matching of the metric parameter xmath43 is also ensured at the surface of the configuration together with xmath39 that is xmath44 
irrespective of the condition that the surface density xmath45 is vanishing with pressure or not that is xmath46 together with eq 10 or xmath47 where xmath48 is the compaction parameter of the configuration defined earlier and xmath16 is defined as eq 8 xmath49 thus the analytic solution for the fluid sphere can be explored in terms of the only free parameter xmath48 by normalizing the metric coefficient xmath43 yielding from eq11 at the surface of the configuration that is xmath50 at xmath51 after obtaining the integration constants by using eqs10 that is xmath52 at xmath53 and 11 xmath54 at xmath51 respectively however at this place we recall the well known property of the exterior schwarzschild solution which follows directly from the definition of the mass xmath55 appears in this solution namely at a given point outside the spherical distribution of mass xmath55 it depends only upon xmath55 and not upon the type of the density variation considered inside the radius xmath56 of this sphere it follows therefore that the dependence of mass xmath55 upon the type of the density distribution plays an important role in order to fulfill the requirement set up by exterior schwarzschild solution the relation xmath57 immediately tells us that for an assigned value of the compaction parameter xmath3 the mass xmath55 depends only upon the radius xmath56 of the configuration which may either depend upon the surface density or upon the central density or upon both of them depending upon the type of the density variation considered inside the mass generating sphere we argue that this dependence should occur in such a manner that the definition of mass xmath55 is not violated we infer this definition as the type independence property of the mass xmath55 which may be defined in this manner the mass xmath55 which appears in the exterior schwarzschild solution should either depend upon the surface density or upon the central density and in any case not upon both of them so that from an exterior observer s point of view the type of the density variation assigned for the mass should remain unidentified we may explain the type independence property of mass xmath55 mentioned above in the following manner the mass xmath55 is called the coordinate mass that is the mass as measured by some external observer and from this observer s point of view if we are measuring a sphere of mass xmath55 we can not know by any means the way in which the matter is distributed from the centre to the surface of this sphere that is if we are measuring xmath55 with the help of non vanishing surface density obviously by calculating the coordinate radius xmath56 from the expression connecting the surface density and the compaction parameter and by using the relation xmath58 we can not measure it by any means from the knowledge of the central density because if we can not know by any means the way in which the matter is distributed from the centre to the surface of the configuration then how can we know about the central density and this is possible only when there exist no relation connecting the mass xmath55 and the central density that is the mass xmath55 should be independent of the central density meaning thereby that the surface density should be independent of the central density for configurations corresponding to a non vanishing surface density however if we are measuring the mass xmath55 by using the expression for central density in the similar manner as in the previous case by calculating the radius xmath56 from the 
expression of central density and using the relation xmath59 we can not calculate it by any means from the knowledge of the surface density in view of the type independence property of the mass xmath55 and this is possible only when there exist no relation connecting the mass xmath55 and the surface density meaning thereby that the central density should be independent of the surface density from the above explanation of type independence property of mass xmath16 it is evident that the actual total mass xmath16 which appears in the exterior schwarzschild solution should either depend upon the surface density or depend upon the central density of the configuration and in any case not upon both of them however the dependence of mass xmath16 upon both of the densities surface as well as central is a common feature observed among all regular solutions having a non vanishing density at the surface of the configuration see for example eqs21 25 29 and 33 respectively belonging to the solutions of this category which are discussed under sub sections a d of section 5 of the present study thus it is evident that the surface density of such solutions is dependent upon the central density and vice versa that is the total mass xmath55 depends upon both of the densities meaning thereby that the type of the density distribution considered inside the sphere of mass xmath55 is known to an external observer which is the violation of the definition of mass xmath55 defined as the type independence property of mass xmath16 above such structures therefore do not correspond to the actual total mass xmath16 required by the exterior schwarzschild solution to ensure the condition of hydrostatic equilibrium this also explains the reason behind non fulfillment of the compatibility criterion by them which is discussed under section 5 of the present study however it is interesting to note here that there could exist only one solution in this regard for which the mass xmath16 depends upon both but the same value of surface and centre density and for regular density distribution the structure would be governed by the homogeneous constant density throughout the configuration that is the homogeneous density solution note that the requirement type independence of the mass would be obviously fulfilled by the regular structures corresponding to a vanishing density at the surface together with pressure because the mass xmath16 will depend only upon the central density surface density is always zero for these structures see for example eqs37 and 41 discussed under sub section e and f for buchdahl s gaseous model and tolman s vii solution having a vanishing density at the surface respectively furthermore the demand of type independence of mass xmath55 is also satisfied by the singular solutions having a non vanishing density at the surface because such structures correspond to an infinite value of central density and consequently the mass xmath16 will depend only upon the surface density see for example eqs46 and 50 discussed under sub section g and h for tolman s v and vi solutions respectively both types of these structures are also found to be consistent with the compatibility criterion as discussed under section 5 of the present study the discussion regarding various types of density distributions considered above is true for any single analytic solution or equation of state comprises the whole configuration at this place we are not intended to claim that the construction of a regular structure with non vanishing surface 
density is impossible it is quite possible provided we consider a two density structure in such a manner that the mass xmath16 of the configuration turns out to be independent of the central density so that the property type independence of the mass xmath16 is satisfied examples of such two density models are also available in the literature see eg ref 22 but in the different context however it should be noted here that the fulfillment of type independence condition by the mass xmath16 for any two density model will represent only a necessary condition for hydrostatic equilibrium unless the compatibility criterion 20 is satisfied by them which also assure a sufficient and necessary condition for any structure in hydrostatic equilibrium this issue is addressed in the next section of the present study the above discussion can be summarized in other words as although the exterior schwarzschild solution itself does not depend upon the type of the density distribution or eos considered inside a fluid sphere in the state of hydrostatic equilibrium however it puts the important condition that only two types of the density variations are possible inside the configuration in order to fulfill the condition of hydrostatic equilibrium 1 the surface density of the configuration should be independent of the central density and 2 the central density of the configuration should be independent of the surface density obviously the condition 1 will be satisfied by the configurations pertaining to an infinite value of the central density that is the singular solutions andor by the two density or multiple density distributions corresponding to a surface density which turns out to be independent of the central density because the regular configurations governed by a single exact solution or eos pertaining to this category are not possible whereas the condition 2 will be fulfilled by the configurations corresponding to a surface density which vanishes together with pressure the configurations in this category will include the density variation governed by a single exact solution or eos as well as the two density or multiple density distributions however the point to be emphasized here is that a two density distribution in any of the two categories mentioned here will fulfill only a necessary condition for hydrostatic equilibrium unless the compatibility criterion 20 is satisfied by them which also assure a necessary and sufficient condition for any structure in the state of hydrostatic equilibrium as mentioned above the criterion obtained in 20 can be summarized in the following manner for an assigned value of the ratio of central pressure to central energy density xmath60 the compaction parameter of homogeneous density distribution xmath14 should always be larger than or equal to the compaction parameter xmath61 of any static and spherical solution compatible with the structure of general relativity that is xmath62 in the light of eq 15 let us assign the same value xmath55 for the total mass corresponding to various static configurations in hydrostatic equilibrium if we denote the density of the homogeneous sphere by xmath63 we can write xmath64 where xmath65 denotes the radius of the homogeneous density sphere if xmath66 represents the radius of any other regular sphere for the same mass xmath55 the average density xmath67 of this configuration would correspond to xmath68 eq 15 indicates that xmath69 by the use of eqs 16 and 17 we find that xmath70 that is for an assign value of xmath13 the average energy 
density of any static spherical configuration xmath67 should always be less than or equal to the density xmath63 of the homogeneous density sphere for the same mass xmath55 although the regular configurations with finite non vanishing surface densities represented by a single density variation can not exist because for such configurations the necessary condition set up by exterior schwarzschild solution can not be satisfied however we can construct regular configurations composed of core envelope models corresponding to a finite central with vanishing and non vanishing surface densities such that the necessary conditions imposed by the schwarzschild s exterior solution at the surface of the configuration are appropriately satisfied however it should be noted that the necessary conditions satisfied by such core envelope models at the surface may not always turn out to be sufficient for describing the state of hydrostatic equilibrium because for an assigned value of xmath13 the average density of such configurations may not always turn out to be less than or equal to the density of the homogeneous density sphere for the same mass as indicated by eqs16 and 17 respectively it would depend upon the types of the density variations considered for the core and envelope regions and the the matching conditions at the core envelope boundary thus it follows that the criterion obtained in 20 is able to provide a necessary and sufficient condition for any regular configuration to be consistent with the state of hydrostatic equilibrium the future study of such core envelope models see for example the models described in 22 and 23 based upon the criterion obtained in 20 could be interesting regarding two density structures of neutron stars and other stellar objects compatible with the structure of gr we have considered the following exact solutions expressed in units of compaction parameter xmath71 mass to size ratio in geometrized units and radial coordinate measured in units of configuration size xmath72 for convenience the other parameters which will appear in these solutions are defined at the relevant places in these equations xmath24 andxmath0 represent respectively the pressure and energy density inside the configuration the surface density is denoted by xmath73 and the central pressure and central energy density are denoted by xmath74 and xmath75 respectively the regular exact solutions which pertain to a non vanishing value of the surface density are given under the sub sections a d while those correspond to a vanishing value of the surface density are described under the sub sections e and f respectively sub sections g and h represent the singular solutions having non vanishing values of the surface densities a tolmans iv solution xmath76 xmath77 by the use of eq20 we can obtain the relation connecting central and surface densities in the following form xmath78 eq21 shows that the surface density is dependent upon the central density and vice versa by using eqs 19 and 20 we obtain xmath79 this solution finds application it can be seen from eq 19 for the values of xmath80 b adler 6 adams and cohen 7 and kuchowicz 8 solution xmath81 xmath82 where xmath83 and xmath84 eq24 gives the relation connecting central and surface densities of the configuration in the following form xmath85 thus the surface density depends upon the central density and vice versa equations 23 and 24 give xmath86 it is seen from eq23 that this solution finds application for values of xmath87 c vaidya and tikekar 10 and 
durgapal and bannerji 11 solution xmath88 xmath89 where the variable xmath90 and the constants xmath91 and xmath92 are given by xmath93 and xmath94 by the use of eq28 we find that the surface and central densities are connected by the following relation xmath95 it is evident from eq29 that the surface density is dependent upon the central density and vice versa by the use of eqs27 and 28 we obtain xmath96 this solution finds application for the values of xmath97 as shown by eq27 d durgapal and fuloria 13 solution xmath98 bigr xmath99 where xmath92 is a constant and xmath100 the variables xmath101 and xmath102 are given by xmath103 xmath104 where the arbitrary constant xmath105 is given by xmath106 and xmath107 eq32 gives the relation connecting central and surface densities as xmath108 eq33 indicates that the surface density is dependent upon the central density and vice versa by the use of equations 31 and 32 we get xmath109 bigr where xmath110 and xmath111 are given by xmath112 xmath113 and xmath114 as seen from eq31 this solution is applicable for the values of xmath115 e buchdahl s gaseous solution 9 xmath116 xmath117 where xmath118 n 21 u and xmath119 0 leq z leq pi eq36 shows that the surface density vanishes together with pressure thus the central density will become independent of the surface density given by the equation xmath120 by using equations 35 and 36 we obtain xmath121 it is evident from eq35 that this solution is applicable for the values of xmath87 f tolman s vii solution with vanishing surface density xmath122 xmath123 where xmath75 is the central energy density given by xmath124 and xmath125 xmath126 xmath12712 na2 u5 3x xmath1283u12 xmath12912 xmath130 by using eqs 39 and 40 we get xmath131 where xmath132 is given by xmath133 it follows from eq40 that the surface density is always zero hence the central density is always independent of the surface density eq39 indicates that his solution is applicable for the values of xmath134 g tolman s v solution xmath135 and xmath136yq bigr 2n 1 n2y2 where xmath137 is given by xmath138 and xmath139 is defined as xmath140 eq44 shows that the central density is always infinite for xmath141 together with central pressure eq43 however their ratio xmath142 is finite at all points inside the configuration and at the centre yields in the following form xmath143 the consequence of the infinite central density is that the surface density will become independent of the central density given by the equation xmath144 it is evident from eq45 that this solution is applicable for a value of xmath145 i e for a value of xmath87 h tolman s vi solution xmath147 xmath148 eqs47 and 48 indicate that the central pressure and central density are always infinite however their ratio xmath142 is finite at all points inside the structure and at the centre reduces into the following form xmath149 and the surface density obviously independent of the central density would be given by the equation xmath150 where xmath139 is defined as xmath151 eq49 indicates that this solution is applicable for a value of xmath152 let us denote the compaction parameter of the homogeneous density configuration by xmath15 and for the exact solutions corresponding to the sub sections a d by xmath153 and xmath154 respectively the compaction parameters of the exact solutions described under sub section e and f are denoted by xmath155 and xmath156 respectively and those discussed under sub sections g and h are denoted by xmath157 and xmath158 respectively solving these analytic 
solutions for various assigned values of the ratio of central pressure to central energy density xmath159 we obtain the corresponding values of the compaction parameters as shown in table 1 and table 2 respectively it is seen that for each and every assigned value of xmath13 the values represented by xmath153 and xmath154 respectively table 1 turn out to be higher than xmath15 that is xmath153 and xmath160 while those represented by xmath161 and xmath158 respectively table 2 correspond to a value which always remains less than xmath15 that is xmath162 and xmath163 thus we conclude that the configurations defined by xmath153 and xmath154 respectively do not show compatibility with the structure of general relativity while those defined by xmath161 and xmath158 respectively show compatibility with the structure of general relativity however this type of characteristics that is the value of compaction parameter larger than the value of xmath15 for some or all assigned values of xmath13 can be seen for any regular exact solution having a finite non vanishing surface density because such exact solutions having finite central densities with non vanishing surface densities can not possess the actual mass xmath55 required to fulfill the boundary conditions at the surface on the other hand the value of compaction parameter for a regular solution with vanishing surface density and a singular solution with non vanishing surface density will always remain less than the value of xmath15 for all assigned values of xmath13 because such solutions naturally fulfill the definition of the actual mass xmath55 required for the hydrostatic equilibrium therefore it is evident that the findings based upon the compatibility criterion carried out in this section are fully consistent with the definition of the mass xmath55 defined as the type independence property under section 3 of the present study we have investigated the criterion obtained in the reference 20 which states for an assigned value of the ratio of central pressure to central energy density xmath165 the compaction parameter xmath166 of any static and spherically symmetric solution should always be less than or equal to the compaction parameter xmath15 of the homogeneous density distribution we conclude that this criterion is fully consistent with the reasoning discussed under section 3 which states that in order to fulfill the requirement set up by exterior schwarzschild solution that is to ensure the condition of hydrostatic equilibrium the total mass xmath55 of the configuration should depend either upon the surface density that is independent of the central density or upon the central density that is independent of the surface density and in any case not upon both of them an examination based upon this criterion show that among various exact solutions of the field equations available in the literature the regular solutions corresponding to a vanishing surface density together with pressure namely i tolman s vii solution with vanishing surface density and ii buchdahl s gaseous solution and the singular solutions with non vanishing surface density namely tolman s v and vi solutions are compatible with the structure of general relativity the only regular solution with finite non vanishing surface density which could exist in this regard is described by constant homogeneous density distribution this criterion provides a necessary and sufficient condition for any static spherical configuration to be compatible with the structure of general relativity 
and may be used to construct core-envelope models of stellar objects like neutron stars, with vanishing and non-vanishing surface densities, such that for an assigned value of the central pressure to central energy density the average density of the configuration always remains less than or equal to the density of the homogeneous sphere of the same mass. This criterion could provide a convenient and reliable tool for testing equations of state (EOSs) for dense nuclear matter and models of relativistic star clusters, and may find application in the investigation of new analytic solutions and EOSs.
References:
R. C. Tolman, Phys. Rev. 55, 364 (1939).
R. J. Adler, J. Math. Phys. 15, 727 (1974).
R. C. Adams and J. M. Cohen, Astrophys. J. 198, 507 (1975).
B. Kuchowicz, Astrophys. Space Sci. 33, L13 (1975).
H. A. Buchdahl, Astrophys. J. 147, 310 (1967).
P. C. Vaidya and R. Tikekar, J. Astrophys. Astron. 3, 325 (1982).
M. C. Durgapal and R. Bannerji, Phys. Rev. D 27, 328 (1983); erratum: D 28, 2695.
M. C. Durgapal, 15, 2637 (1982).
M. C. Durgapal and R. S. Fuloria, Gen. Rel. Grav. 17, 671 (1985).
H. Knutsen, Astrophys. Space Sci. 140, 385 (1988).
H. Knutsen, Mon. Not. R. Astron. Soc. 232, 163 (1988).
H. Knutsen, Astrophys. Space Sci. 162, 315 (1989).
A. L. Mehra, J. Aust. Math. Soc. 6, 153 (1966).
M. C. Durgapal and P. S. Rawat, Mon. Not. R. Astron. Soc. 192, 659 (1980).
P. S. Negi and M. C. Durgapal, Astrophys. Space Sci. 245, 97 (1996).
P. S. Negi and M. C. Durgapal, Gen. Rel. Grav. 31, 13 (1999).
H. Knutsen, Gen. Rel. Grav. 20, 317 (1988).
P. S. Negi and M. C. Durgapal, Gravitation & Cosmology 7, 37 (2001); astro-ph/0312516.
J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
P. S. Negi, A. K. Pande and M. C. Durgapal, Class. Quantum Grav. 6, 1141 (1989).
P. S. Negi, A. K. Pande and M. C. Durgapal, Gen. Rel. Grav. 22, 735 (1990).
P. S. Negi and M. C. Durgapal, Gravitation & Cosmology 5, 191 (1999).
P. S. Negi and M. C. Durgapal, Astron. Astrophys. 353, 641 (2000).
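As a rough illustration of how the compatibility criterion of ref. 20 can be checked in practice, here is a minimal sketch under stated assumptions: it uses the textbook central-pressure relation of the homogeneous (interior Schwarzschild) sphere, p_c/E_c = (1 - sqrt(1-2u))/(3 sqrt(1-2u) - 1), and inverts it in closed form, u_h(sigma) = [1 - ((1+sigma)/(1+3sigma))^2]/2, which tends to the limiting value 4/9 as sigma grows without bound. The trial compaction values in the example are made up and are not taken from Tables 1 and 2.

```python
import numpy as np

def sigma_homogeneous(u):
    """Central p/E of the homogeneous-density (interior Schwarzschild) sphere of
    compaction u = M/R in geometrized units; diverges at the Buchdahl value u = 4/9."""
    y = np.sqrt(1.0 - 2.0 * u)
    return (1.0 - y) / (3.0 * y - 1.0)

def u_homogeneous(sigma):
    """Closed-form inverse of sigma_homogeneous: compaction u_h for a given p_c/E_c."""
    y = (1.0 + sigma) / (1.0 + 3.0 * sigma)
    return 0.5 * (1.0 - y * y)

def satisfies_criterion(u_trial, sigma):
    """Criterion of ref. 20: for the same central p/E ratio, any static spherical
    solution must have compaction u_trial <= u_h(sigma) of the homogeneous sphere."""
    return u_trial <= u_homogeneous(sigma)

if __name__ == "__main__":
    sigma = 0.3                                   # assumed central p/E ratio
    u_h = u_homogeneous(sigma)
    print(f"u_h({sigma}) = {u_h:.4f}")            # about 0.266 for this sigma
    print(f"consistency: sigma(u_h) = {sigma_homogeneous(u_h):.4f}")
    print(satisfies_criterion(0.25, sigma))       # hypothetical trial compaction: passes
    print(satisfies_criterion(0.40, sigma))       # hypothetical trial compaction: fails
```

For an actual exact solution one would compute sigma = p_c/E_c and u from that solution's own expressions and feed them to satisfies_criterion; the tabulated comparisons of Section 5 amount to exactly this test.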
We examine various well-known exact solutions available in the literature in order to investigate the recent criterion obtained in ref. 20, which should be fulfilled by any static and spherically symmetric solution in the state of hydrostatic equilibrium. It is seen that this criterion is fulfilled only by (i) the regular solutions having a vanishing surface density together with the pressure, and (ii) the singular solutions corresponding to a non-vanishing density at the surface of the configuration. On the other hand, the regular solutions corresponding to a non-vanishing surface density do not fulfill this criterion. Based upon this investigation, we point out that the exterior Schwarzschild solution itself provides necessary conditions on the types of density distributions that may be considered inside the mass in order to obtain exact solutions, or equations of state, compatible with the structure of general relativity. The regular solutions with finite central and non-zero surface densities which do not fulfill the criterion of ref. 20 in fact cannot meet the requirement on the actual mass set up by the exterior Schwarzschild solution; the only regular solution possible in this regard is the uniform (homogeneous) density distribution. The criterion of ref. 20 provides a necessary and sufficient condition for any static and spherical configuration, including core-envelope models, to be compatible with the structure of general relativity. It may therefore find application in constructing appropriate core-envelope models of stellar objects like neutron stars, and may be used to test various equations of state for dense nuclear matter and models of relativistic stellar structures like star clusters.
PACS Nos.: 0420jd, 0440dg, 9760jd. P. S. Negi, Department of Physics, Kumaun University, Nainital 263 002, India.
introduction; field equations and exact solutions; boundary conditions: the valid and invalid assumptions for mass distribution; criterion for static spherical configurations to be consistent with the structure of general relativity; examination of the compatibility criterion for various well known exact solutions available in the literature; results and conclusions
a great deal of progress was recently achieved in our understanding of the multifragmentation phenomenon xcite when an exact analytical solution of a simplified version of the statistical multifragmentation model smm xcite was found in refs an invention of a new powerful mathematical method xcite the laplace fourier transform allowed us not only to solve this version of smm analytically for finite volumes xcite but to find the surface partition and surface entropy of large clusters for a variety of statistical ensembles xcite it was shown xcite that for finite volumes the analysis of the grand canonical partition gcp of the simplified smm is reduced to the analysis of the simple poles of the corresponding isobaric partition obtained as a laplace fourier transform of the gcp this method opens a principally new possibility to study the nuclear liquid gas phase transition directly from the partition of finite system and without taking its thermodynamic limit exactly solvable models with phase transitions play a special role in the statistical physics they are the benchmarks of our understanding of critical phenomena that occur in more complicated substances they are our theoretical laboratories where we can study the most fundamental problems of critical phenomena which can not be studied elsewhere note that these questions in principle can not be clarified either within the widely used mean filed approach or numerically despite this success the application of the exact solution xcite to the description of experimental data is limited because this solution corresponds to an infinite system volume therefore from a practical point of view it is necessary to extend the formalism for finite volumes such an extension is also necessary because despite a general success in the understanding the nuclear multifragmentation there is a lack of a systematic and rigorous theoretical approach to study the phase transition phenomena in finite systems for instance even the best formulation of the statistical mechanics and thermodynamics of finite systems by hill xcite is not rigorous while discussing the phase transitions exactly solvable models of phase transitions applied to finite systems may provide us with the first principle results unspoiled by the additional simplifying assumptions herewe present a finite volume extension of the smm to have a more realistic model for finite volumes we would like to account for the finite size and geometrical shape of the largest fragments when they are comparable with the system volume for this we will abandon the arbitrary size of largest fragment and consider the constrained smm csmm in which the largest fragment size is explicitly related to the volume xmath0 of the system a similar model but with the fixed size of the largest fragment was recently analyzed in ref xcite in this workwe will solve the csmm analytically at finite volumes using a new powerful method consider how the first order phase transition develops from the singularities of the smm isobaric partition xcite in thermodynamic limit study the finite volume analogs of phases and discuss the finite size effects for large fragments the system states in the smm are specified by the multiplicity sets xmath1 xmath2 of xmath3nucleon fragments the partition function of a single fragment with xmath3 nucleons is xcite xmath4 where xmath5 xmath6 is the total number of nucleons in the system xmath0 and xmath7 are respectively the volume and the temperature of the system xmath8 is the nucleon mass the first two 
factors on the right hand side rhs of the single fragment partition originate from the non relativistic thermal motion and the last factor xmath9 represents the intrinsic partition function of the xmath3nucleon fragment therefore the function xmath10 is a phase space density of the k nucleon fragment for nucleon we take xmath11 4 internal spin isospin states and for fragments with xmath12 we use the expression motivated by the liquid drop model see details in xmath13 with fragment free energy xmath14k sigma t k23 tau 32 tln k with xmath15 heremev is the bulk binding energy per nucleon xmath17 is the contribution of the excited states taken in the fermi gas approximation xmath18 mev xmath19 is the temperature dependent surface tension parameterized in the following relation xmath2054 with xmath21 mev and xmath22 mev xmath23 at xmath24 the last contribution in eq one involves the famous fisher s term with dimensionless parameter xmath25 the canonical partition function cpf of nuclear fragments in the smm has the following form xmath26nknk biggr textstyle deltaasumk knk in eq two the nuclear fragments are treated as point like objects however these fragments have non zero proper volumes and they should not overlap in the coordinate space in the excluded volume van der waals approximation this is achieved by substituting the total volume xmath0 in eq two by the free available volume xmath27 where xmath28 xmath29 xmath30 is the normal nuclear density therefore the corrected cpf becomes xmath31 the smm defined by eq two was studied numerically in refs this is a simplified version of the smm eg the symmetry and coulomb contributions are neglected however its investigation appears to be of principal importance for studies of the liquid gas phase transition the calculation of xmath32 is difficult due to the constraint xmath33 this difficulty can be partly avoided by evaluating the grand canonical partition gcp xmath34 where xmath35 denotes a chemical potential the calculation of xmath36 is still rather difficult the summation over xmath1 sets in xmath37 can not be performed analytically because of additional xmath6dependence in the free volume xmath38 and the restriction xmath39 this problem was resolved xcite by the laplace transformation method to the so called isobaric ensemble xcite in this workwe would like to consider a more strict constraint xmath40 where the size of the largest fragment xmath41 can not exceed the total volume of the system the parameter xmath42 is introduced for convenience the case xmath43 is also included in our treatment a similar restriction should be also applied to the upper limit of the product in all partitions xmath44 xmath32 and xmath45 introduced above how to deal with the real values of xmath46 see later then the model with this constraint the csmm can not be solved by the laplace transform method because the volume integrals can not be evaluated due to a complicated functional xmath0dependence however the csmm can be solved analytically with the help of the following identity xmath47 which is based on the fourier representation of the dirac xmath48function the representation four allows us to decouple the additional volume dependence and reduce it to the exponential one which can be dealt by the usual laplace transformation in the following sequence of steps xmath49 thetavprime nonumber hspace01cmint0inftyhspace02cmdvprime intlimitsinftyinfty d xi intlimitsinftyinfty fracd eta2 pi textstyle e i eta vprime xi lambda vprime vprime cal fxi lambda i eta textstyle e 
vprime cal fxi lambda i eta endaligned after changing the integration variable xmath50 the constraint of xmath51function has disappeared then all xmath52 were summed independently leading to the exponential function now the integration over xmath53 in eq five can be straightforwardly done resulting in xmath54 where the function xmath55 is defined as follows xmath56 endaligned as usual in order to find the gcp by the inverse laplace transformation it is necessary to study the structure of singularities of the isobaric partition seven the isobaric partition seven of the csmm is of course more complicated than its smm analog xcite because for finite volumes the structure of singularities in the csmm is much richer than in the smm and they match in the limit xmath57 only to see thislet us first make the inverse laplace transform xmath581 endaligned where the contour xmath59integral is reduced to the sum over the residues of all singular points xmath60 with xmath61 since this contour in the complex xmath59plane obeys the inequality xmath62 now both remaining integrations in eight can be done and the gcp becomes xmath631 ie the double integral in eight simply reduces to the substitution xmath64 in the sum over singularities this is a remarkable result which can be formulated as the following the simple poles in eight are defined by the equation xmath65 in contrast to the usual smm xcite the singularities xmath66 are i are volume dependent functions if xmath46 is not constant and ii they can have a non zero imaginary part but in this case there exist pairs of complex conjugate roots of ten because the gcp is real introducing the real xmath67 and imaginary xmath68 parts of xmath69 we can rewrite eq ten as a system of coupled transcendental equations xmath70 where we have introduced the set of the effective chemical potentials xmath71 with xmath72 and the reduced distributions xmath73 and xmath74 for convenience consider the real root xmath75 first for xmath76 the real root xmath77 exists for any xmath7 and xmath35 comparing xmath77 with the expression for vapor pressure of the analytical smm solution xciteshows that xmath78 is a constrained grand canonical pressure of the gas as usual for finite volumes the total mechanical pressure xcite as we will see in section v differs from xmath78 equation twelve shows that for xmath79 the inequality xmath80 never become the equality for all xmath3values simultaneously then from eq eleven one obtains xmath81 xmath82 where the second inequality thirteen immediately follows from the first one in other words the gas singularity is always the rightmost one this fact plays a decisive role in the thermodynamic limit xmath57 the interpretation of the complex roots xmath83 is less straightforward according to eq nine the gcp is a superposition of the states of different free energies xmath84 strictly speaking xmath84 has a meaning of the change of free energy but we will use the traditional term for it for xmath81 the free energies are complex therefore xmath85 is the density of free energy the real part of the free energy density xmath86 defines the significance of the state s contribution to the partition due to thirteen the largest contribution always comes from the gaseous state and has the smallest real part of free energy density as usual the states which do not have the smallest value of the real part of free energy i e xmath87 are thermodynamically metastable for infinite volumethey should not contribute unless they are infinitesimally close to xmath88 but 
for finite volumes their contribution to the gcp may be important as one sees from eleven and twelve the states of different free energies have different values of the effective chemical potential xmath89 which is not the case for infinite volume xcite where there exists a single value for the effective chemical potential thus for finite xmath0 the states which contribute to the gcp nine are not in a true chemical equilibrium the meaning of the imaginary part of the free energy density becomes clear from eleven and twelve as one can see from eleven the imaginary part xmath90 effectively changes the number of degrees of freedom of each xmath3nucleon fragment xmath91 contribution to the free energy density xmath87 it is clear that the change of the effective number of degrees of freedom can occur virtually only and if xmath83 state is accompanied by some kind of equilibration process both of these statements become clear if we recall that the statistical operator in statistical mechanics and the quantum mechanical convolution operator are related by the wick rotation xcite in other words the inverse temperature can be considered as an imaginary time therefore depending on the sign the quantity xmath92 that appears in the trigonometric functions of the equations eleven and twelve in front of the imaginary time xmath93 can be regarded as the inverse decay formation time xmath94 of the metastable state which corresponds to the pole xmath83 for more details see next sections as will be shown further for xmath95the inverse chemical potential can be considered as a characteristic equilibration time as well this interpretation of xmath94 naturally explains the thermodynamic metastability of all states except the gaseous one the metastable states can exist in the system only virtually because of their finite decay formation time whereas the gaseous state is stable because it has an infinite decay formation time for xmath96 mev and xmath97 the lhs straight line and rhs of eq twelve all dashed curves are shown as the function of dimensionless parameter xmath98 for the three values of the largest fragment size xmath46 the intersection point at xmath99 corresponds to a real root of eq ten each tangent point with the straight line generates two complex roots of ten width325height226 it is instructive to treat the effective chemical potential xmath100 as an independent variable instead of xmath35 in contrast to the infinite xmath0 where the upper limit xmath101 defines the liquid phase singularity of the isobaric partition and gives the pressure of a liquid phase xmath102 xcite for finite volumes and finite xmath46 the effective chemical potential can be complex with either sign for its real part and its value defines the number and position of the imaginary roots xmath103 in the complex plane positive and negative values of the effective chemical potential for finite systems were considered xcite within the fisher droplet model but to our knowledge its complex values have never been discussed from the definition of the effective chemical potential xmath104it is evident that its complex values for finite systems exist only because of the excluded volume interaction which is not taken into account in the fisher droplet model xcite as it is seen from fig 1 the rhs of eq twelve is the amplitude and frequency modulated sine like function of dimensionless parameter xmath105 therefore depending on xmath7 and xmath106 values there may exist no complex roots xmath107 a finite number of them or an infinite number 
in fig 1 we showed a special case which corresponds to exactly three roots of eq ten for each value of xmath46 the real root xmath108 and two complex conjugate roots xmath109 since the rhs of twelve is a monotonically increasing function of xmath106 when the former is positive it is possible to map the xmath110 plane into regions of a fixed number of roots of eq ten each curve in fig 2 divides the xmath110 plane into three parts for xmath106 values below the curve there is only one real root gaseous phase for points on the curve there exist three roots and above the curve there are five or more roots of eq ten for constant values of xmath111 the number of terms in the rhs of twelve does not depend on the volume and consequently in the thermodynamic limit xmath57 only the farthest right simple pole in the complex xmath59 plane survives out of a finite number of simple poles according to the inequality thirteen the real root xmath112 is the farthest right singularity of the isobaric partition six however there is a possibility that the real parts of other roots xmath113 become infinitesimally close to xmath77 when there is an infinite number of terms which contribute to the gcp nine fig 2 shows the region of one real root of eq ten below each curve three complex roots at the curve and five or more roots above the curve for three values of xmath46 and the same parameters as in fig 1 let us show now that even for an infinite number of simple poles in nine only the real root xmath112 survives in the limit xmath57 for this purpose consider the limit xmath114 in this limit the distance between the imaginary parts of the nearest roots remains finite even for infinite volume indeed for xmath115 the leading contribution to the rhs of twelve corresponds to the harmonic with xmath116 and consequently an exponentially large amplitude of this term can only be compensated by a vanishing value of xmath117 ie xmath118 with xmath119 hereafter we will analyze only the branch xmath120 and therefore the corresponding decay formation time xmath1211 is volume independent keeping the leading term on the rhs of twelve and solving for xmath122 one finds xmath123 where in the last step we used eq eleven and the condition xmath119 since for xmath57 all negative values of xmath67 can not contribute to the gcp nine it is sufficient to analyze even values of xmath124 which according to eq sixteen generate xmath125 since the inequality thirteen can not be broken the only possibility for the xmath126 pole to contribute to the partition nine corresponds to the case xmath127 for some finite xmath124 assuming this we find xmath128 for the same value of xmath35 substituting these results into equation eleven one gets xmath129 ll r0 the inequality seventeen follows from the equation for xmath77 and the fact that even for equal leading terms in the sums above with xmath130 and even xmath124 the difference between xmath77 and xmath67 is large due to the next to leading term xmath131 which is proportional to xmath132 thus we arrive at a contradiction with our assumption xmath133 and consequently it can not be true therefore for large volumes the real root xmath112 always gives the main contribution to the gcp nine and this is the only root that survives in the limit xmath57 thus we showed that the model with the fixed size of the largest fragment has no phase transition because there is a single singularity of the isobaric partition six which exists in the thermodynamic limit
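the counting of roots that underlies the curves of fig 2 can be mimicked numerically in the same schematic setting as the previous sketch here the effective chemical potential like parameter nu is scanned and for each value one counts how many distinct roots a family of newton starts converges to again every number is a placeholder and the stand in equation is not the actual eq ten

import numpy as np

# self-contained toy version of the root counting behind fig 2; all numbers are placeholders
T, b, K = 6.0, 1.0, 20
k = np.arange(1, K + 1)
phi = k ** (-2.5)

def f(lam, nu):
    return np.sum(phi * np.exp((nu - T * b * lam) * k / T))

def newton(lam, nu, steps=300, h=1e-7):
    for _ in range(steps):
        g = lam - f(lam, nu)
        dg = 1.0 - (f(lam + h, nu) - f(lam - h, nu)) / (2 * h)
        lam = lam - g / dg
    return lam

def count_roots(nu, tol=1e-6):
    roots = []
    for im in np.linspace(0.0, 3.0, 40):               # newton starts spread off the real axis
        r = newton(0.3 + 1j * im, nu)
        if not np.isfinite(r) or abs(r - f(r, nu)) > 1e-8:
            continue                                    # discard non converged starts
        if all(abs(r - s) > tol and abs(r - np.conj(s)) > tol for s in roots):
            roots.append(r)
    # the real root counts once, every genuinely complex root stands for a conjugate pair
    n_complex = sum(1 for r in roots if abs(r.imag) > tol)
    return 1 + 2 * n_complex

for nu in (-4.0, -2.0, 0.0):
    print(nu, count_roots(nu))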
if xmath46 monotonically grows with the volume the situation is different in this case for positive values of xmath134 the leading exponent in the rhs of twelve also corresponds to the largest fragment ie to xmath135 therefore we can apply the same arguments which were used above for the case xmath136 and similarly derive equations fourteen to sixteen for xmath68 and xmath67 from xmath137 it follows that when xmath0 increases the number of simple poles in eight also increases and the imaginary part of the poles closest to the real xmath59 axis becomes very small ie xmath138 for xmath139 and consequently the associated decay formation time xmath1401 grows with the volume of the system due to xmath141 the inequality seventeen can not be established for the poles with xmath139 therefore in contrast to the previous case for large xmath46 the simple poles with xmath139 will be infinitesimally close to the real axis of the complex xmath59 plane from eq sixteen it follows that xmath142 for xmath143 and xmath144 thus we proved that for infinite volume the infinite number of simple poles moves toward the real xmath59 axis to the vicinity of the liquid phase singularity xmath145 of the isobaric partition xcite and generates an essential singularity of function xmath146 in seven irrespective of the sign of the chemical potential xmath35 as we showed above the states with xmath147 become stable because they acquire an infinitely large decay formation time xmath94 in the limit xmath57 therefore these states should be identified as a liquid phase for finite volumes as well such a conclusion can be easily understood if we recall that the partial pressure xmath148 of eq eighteen corresponds to a single fragment of the largest possible size now it is clear that each curve in fig 2 is the finite volume analog of the phase boundary xmath149 for a given value of xmath46 below the phase boundary there exists a gaseous phase but at and above each curve there are states which can be identified with a finite volume analog of the mixed phase and finally at xmath150 there exists a liquid phase when there is no phase transition ie xmath151 the structure of simple poles is similar but first the line which separates the gaseous states from the metastable states does not change with the volume and second as shown above the metastable states will never become stable therefore a systematic study of the volume dependence of free energy or pressure for very large xmath0 along with the formation and decay times may be of crucial importance for experimental studies of the nuclear liquid gas phase transition the above results demonstrate that in contrast to hill s expectations xcite the finite volume analog of the mixed phase does not consist just of two pure phases the mixed phase for finite volumes consists of a stable gaseous phase and the set of metastable states which differ by the free energy moreover the difference between the free energies of these states is not surface like as hill assumed in his treatment xcite but volume like furthermore according to eqs eleven and twelve each of these states consists of the same fragments but with different weights as seen above for the case xmath150 some fragments that belong to the states in which the largest fragment is dominant may in principle have negative weights ie a negative effective number of degrees of freedom in the expression eleven for xmath152 this can be understood easily because higher concentrations of large fragments can be achieved at the expense of smaller fragments which is reflected in the corresponding change of the real part of the free energy xmath153
therefore the actual structure of the mixed phase at finite volumes is more complicated than was expected in earlier works a similar situation occurs for real values of xmath46 in this case all sums in eqs ten to thirteen should be expressed via the euler maclaurin formula xmath154 xmath155 here xmath156 are the bernoulli numbers a standard form of this formula is recalled after this paragraph the representation fourteen allows one to study the effect of finite volume fv on the gcp nine the above results are valid for any xmath46 dependence however the linear one ie xmath157 with xmath158 is the most natural with the help of the parameter xmath42 it is possible to describe a difference between the geometrical shape of the volume under consideration and that of the largest fragment for instance by fixing xmath159 it is possible to account for the fact that the largest spherical fragment can not completely fill a cube with side equal to its diameter while there is enough space available for small fragments due to the xmath160 dependence in the csmm there are two different ways in which the finite volume affects the thermodynamical functions for finite xmath0 and xmath161 there is always a finite number of simple poles in nine but their number and positions in the complex xmath59 plane depend on xmath0 to see this let us study the mechanical pressure which corresponds to the gcp nine xmath162 \frac{b}{2}\,\frac{\partial \lambda_n}{\partial V}\sum_{k=1}^{K(V)} \tilde\phi_k(T)\, k^2\, e^{\frac{\nu_n k}{T}} + \tilde\phi_{K(V)}(T)\, e^{\frac{\nu_n K(V)}{T}}\, K(V)\left(1 + \alpha\,\frac{\nu_n}{T}\left(\frac{1}{2} + \alpha\right)\right) + O\!\left(K(V)\right) where we give the main term for each xmath163 and the leading fv corrections explicitly for xmath164 whereas xmath165 accumulates the higher order corrections due to the euler maclaurin eq fourteen in evaluating fifteen we used an explicit representation of the derivative xmath166 which can be found from eqs ten and fourteen the first term in the rhs of fifteen describes the constrained grand canonical cgc complex pressure generated by the simple pole xmath167 due to its free energy density xmath168 weighted with the probability xmath169 whereas the second and third terms appear due to the volume dependence of xmath46 note that the usage of natural values for xmath46 instead of the fv corrections would generate artificial delta function terms in fifteen for the volume derivatives now it is clear that in the case xmath151 the corrections to the main term will not appear and the number of poles and their positions will be defined by the values of xmath7 and xmath35 only as one can see from fifteen for finite volumes the corrections can give a non negligible contribution to the pressure because in this case xmath170 can be positive the real parts of the partial cgc pressures xmath171 may have either sign therefore if the fv corrections to the pressure fifteen are small then according to thirteen the positive cgc pressures xmath172 are mechanically metastable and the negative ones xmath173 are mechanically unstable compared to the gas pressure xmath78 the fv corrections should be accounted for to find the mechanically meta and unstable states in the general case however it is clear that the contribution of the states with xmath173 into the partition and its derivatives is exponentially small even for finite volumes as we showed earlier in this section when xmath0 increases the number of simple poles in eight also increases and the imaginary part of the poles closest to the real xmath59 axis becomes very small
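for reference a standard form of the euler maclaurin expansion invoked above is the following generic textbook identity for a smooth function f and bernoulli numbers B_{2j} it is stated here only as background and is not the specific representation fourteen of the csmm

\sum_{n=a}^{b} f(n) = \int_a^b f(x)\,dx + \frac{f(a)+f(b)}{2} + \sum_{j=1}^{p} \frac{B_{2j}}{(2j)!}\left( f^{(2j-1)}(b) - f^{(2j-1)}(a) \right) + R_p

where the remainder R_p can be bounded in terms of an integral of the 2p th derivative of f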
therefore for infinite volume the infinite number of simple poles moves toward the real xmath59 axis to the vicinity of the liquid phase singularity xmath174 and thus generates an essential singularity of function xmath146 in seven in this case the contribution of any pole remote from the real xmath59 axis to the gcp vanishes then it can be shown that the fv corrections in fifteen become negligible because of the inequality xmath175 and consequently the reduced distribution of the largest fragment xmath176 and the derivatives xmath166 vanish for all xmath7 values and we obtain the usual smm solution xcite its thermodynamics as we discussed is governed by the farthest right singularity in the complex xmath59 plane in this work we discussed a powerful mathematical method which allowed us to solve analytically the csmm at finite volumes it is shown that for finite volumes the gcp function can be identically rewritten in terms of the simple poles of the isobaric partition six the real pole xmath112 always exists and the quantity xmath177 is the cgc pressure of the gaseous phase the complex roots xmath126 appear as pairs of complex conjugate solutions of equation ten as we discussed their most straightforward interpretation is as follows xmath178 has the meaning of the free energy density whereas xmath179 depending on its sign gives the inverse decay formation time of such a state the gaseous state is always stable because its decay formation time is infinite and because it has the smallest value of free energy the complex poles describe the metastable states for xmath180 and mechanically unstable states for xmath181 we studied the volume dependence of the simple poles and found a dramatic difference in their behavior in the cases with and without a phase transition for the former this representation also allows one to define the finite volume analogs of phases unambiguously and to establish the finite volume analog of the xmath149 phase diagram see fig 2 at finite volumes the gaseous phase exists if there is a single simple pole the mixed phase corresponds to three or more simple poles whereas the liquid is represented by an infinite number of simple poles at the highest possible particle density or xmath95 as we showed for given xmath7 and xmath35 the states of the mixed phase which have different xmath182 are not in a true chemical equilibrium for finite volumes this feature can not be obtained within the fisher droplet model due to the lack of hard core repulsion between fragments this fact also demonstrates clearly that in contrast to hill s expectations xcite the mixed phase is not just a composition of two states which are the pure phases as we showed the mixed phase is a superposition of three or more collective states and each of them is characterized by its own value of xmath183 because of that the difference between the free energies of these states is not surface like as hill argued xcite but volume like for the case with phase transition ie for xmath184 we analyzed what happens in the thermodynamic limit when xmath0 grows the number of simple poles in eight also increases and the imaginary part of the poles closest to the real xmath59 axis vanishes for infinite volume the infinite number of simple poles moves toward the real xmath59 axis and forms an essential singularity of function xmath146 in seven which defines the liquid phase singularity xmath174 thus we showed how the phase transition develops in the thermodynamic limit also we analyzed the finite volume corrections to the mechanical pressure fifteen the corrections
of a similar kind should appear in the entropy particle number and energy density because of the xmath7 and xmath35 dependence of xmath163 due to ten xcite therefore these corrections should be taken into account when analyzing the experimental yields of fragments then the phase diagram of the nuclear liquid gas phase transition can be recovered from the experiments on finite systems nuclei with more confidence a detailed analysis of the isobaric partition singularities in the xmath185 plane allowed us to define the finite volume analogs of phases and study the behavior of these singularities in the limit xmath57 such an analysis opens a possibility to study rigorously the nuclear liquid gas phase transition directly from the finite volume partition this may help to extract the phase diagram of the nuclear liquid gas phase transition from the experiments on finite systems nuclei with more confidence s das gupta, a majumder, s pratt and a mekjian, arXiv:nucl-th/9903007 (1999); k a bugaev, m i gorenstein, i n mishustin and w greiner, phys rev c 62 (2000) 044320, arXiv:nucl-th/0007062; k a bugaev, m i gorenstein, i n mishustin and w greiner, phys lett b 498 (2001) 144, arXiv:nucl-th/0103075; p t reuter and k a bugaev, phys lett b 517 (2001) 233; k a bugaev, arXiv:nucl-th/0406033 (2004); k a bugaev, l phair and j b elliott, arXiv:nucl-th/0406034 (2004); k a bugaev and j b elliott, arXiv:nucl-th/0501080 (2005)
we discuss an exact analytical solution of a simplified version of the statistical multifragmentation model with the restriction that the largest fragment size can not exceed the finite volume of the system a complete analysis of the isobaric partition singularities of this model is done for finite volumes it is shown that the real part of any simple pole of the isobaric partition defines the free energy of the corresponding state whereas its imaginary part depending on the sign defines the inverse decay formation time of this state the developed formalism allows us for the first time to exactly define the finite volume analogs of gaseous liquid and mixed phases of this model from the first principles of statistical mechanics and demonstrate the pitfalls of earlier works the finite size effects for large fragments and the role of metastable and unstable states are discussed pacs numbers 25.70.pq 21.65.+f 24.10.pa
introduction laplace-fourier transformation isobaric partition singularities no phase transition case finite volume analogs of phases conclusions
we present a detailed analysis of the regularity and decay properties of linear scalar waves near the cauchy horizon of cosmological black hole spacetimes concretely we study charged and non rotating reissner nordstrm de sitter as well as uncharged and rotating kerr de sitter black hole spacetimes for which the cosmological constant xmath0 is positive see figure figintropenrose for their penrose diagrams these spacetimes in the region of interest for us have the topology xmath1 where xmath2 is an interval and are equipped with a lorentzian metric xmath3 of signature xmath4 the spacetimes have three horizons located at different values of the radial coordinate xmath5 namely the cauchy horizon at xmath6 the event horizon at xmath7 and the cosmological horizon at xmath8 with xmath9 in order to measure decay we use a time function xmath10 which is equivalent to the boyer lindquist coordinate xmath11 away from the cosmological event and cauchy horizons ie xmath10 differs from xmath11 by a smooth function of the radial coordinate xmath5 and xmath10 is equivalent to the eddington finkelstein coordinate xmath12 near the cauchy and cosmological horizons and to the eddington finkelstein coordinate xmath13 near the event horizon we consider the cauchy problem for the linear wave equation with cauchy data posed on a surface xmath14 as indicated in figure figintropenrose which shows a slice of the kerr de sitter spacetime with angular momentum xmath15 indicated are the cauchy horizon xmath16 the event horizon xmath17 and the cosmological horizon xmath18 as well as future timelike infinity xmath19 the coordinates xmath20 are eddington finkelstein coordinates the right panel of that figure shows the same penrose diagram where the region enclosed by the dashed lines is the domain of dependence of the cauchy surface xmath14 and the dotted lines are two level sets of the function xmath10 the smaller one of these corresponds to a larger value of xmath10 the study of asymptotics and decay for linear scalar and non scalar wave equations in a neighborhood of the exterior region xmath21 of such spacetimes has a long history methods of scattering theory have proven very useful in this context see xcite and references therein we point out that near the black hole exterior reissner nordstrm de sitter space can be studied using exactly the same methods as schwarzschild de sitter space see xcite for a different approach using vector field commutators there is also a substantial amount of literature on the case xmath22 of the asymptotically flat reissner nordstrm and kerr spacetimes we refer the reader to xcite and references therein the purpose of the present work is to show how a uniform analysis of linear waves up to the cauchy horizon can be accomplished using methods from scattering theory and microlocal analysis our main result is thmintromain let xmath3 be a non degenerate reissner nordstrm de sitter metric with non zero charge xmath23 or a non degenerate kerr de sitter metric with small non zero angular momentum xmath24 with spacetime dimension xmath25 then there exists xmath26 only depending on the parameters of the spacetime such that the following holds if xmath12 is the solution of the cauchy problem xmath27 with smooth initial data then there exists xmath28 such that xmath12 has a partial asymptotic expansion xmath29 where xmath30 and xmath31 uniformly in xmath32 the same bound with a different constant xmath33 holds for derivatives of xmath34 along any finite number of stationary vector fields which are tangent to the cauchy horizon moreover xmath12 is continuous up
to the cauchy horizon more precisely xmath34 as well as all such derivatives of xmath34 lie in the weighted spacetime sobolev space xmath35 in xmath36 where xmath37 is the surface gravity of the cauchy horizon for the massive klein gordon equation xmath38 xmath39 small the same result holds true without the constant term xmath40 here the spacetime sobolev space xmath41 for xmath42 consists of functions which remain in xmath43 under the application of up to xmath44 stationary vector fields for general xmath45 xmath41 is defined using duality and interpolation the final part of theorem thmintromain in particular implies that xmath34 lies in xmath46 near the cauchy horizon on any surface of fixed xmath10 after introducing the reissner de sitter and kerr de sitter metrics at the beginning of secrnds and seckds we will prove theorem thmintromain in subsecrndsconormal and subseckdsres see theorems thmrndspartialasympconormal and thmkdspartialasympconormal our analysis carries over directly to non scalar wave equations as well as we discuss for differential forms in subsecrndsbundles however we do not obtain uniform boundedness near the cauchy horizon in this case furthermore a substantial number of ideas in the present paper can be adapted to the study of asymptotically flat xmath22 spacetimes corresponding boundedness regularity and polynomial decay results on reissner nordstrm and kerr spacetimes will be discussed in the forthcoming paper xcite let us also mention that a minor extension of our arguments yield analogous boundedness decay and regularity results for the cauchy problem with a two ended cauchy surface xmath14 up to the bifurcation sphere xmath47 see figure figintrobifurcation for solutions of the cauchy problem with initial data posed on xmath14 our methods imply boundedness and precise regularity results as well as asymptotics and decay towards xmath19 in the causal past of xmath47 theorem thmintromain is the first result known to the authors establishing asymptotics and regularity near the cauchy horizon of rotating black holes however we point out that dafermos and luk have recently announced the xmath48 stability of the cauchy horizon of the kerr spacetime for einstein s vacuum equations xcite in the case of xmath22 and in spherical symmetry reissner nordstrm franzen xcite proved the uniform boundedness of waves in the black hole interior and xmath49 regularity up to xmath16 while luk and oh xcite showed that linear waves generically do not lie in xmath50 at xmath16 there is also ongoing work by franzen on the analogue of her result for kerr spacetimes xcite gajic xcite based on previous work by aretakis xcite showed that for extremal reissner nordstrm spacetimes waves do lie in xmath50 we do not present a microlocal study of the event horizon of extremal black holes here however we remark that our analysis reveals certain high regularity phenomena at the cauchy horizon of near extremal black holes which we will discuss below closely related to this the study of costa giro natrio and silva xcite of the nonlinear einstein scalar field system in spherical symmetry shows that close to extremality rather weak assumptions on initial data on a null hypersurface transversal to the event horizon guarantee xmath50 regularity of the metric at xmath16 however they assume exact reissner nordstrm de sitter data on the event horizon while in the present work we link non trivial decay rates of waves along the event horizon to the regularity of waves at xmath16 compare this also with the 
discussions in subsecrndshighreg and remark rmkrndshighreg one could combine the treatment of reissner nordstrm de sitter and kerr de sitter spacetimes by studying the more general kerr newman de sitter family of charged and rotating black hole spacetimes discovered by carter xcite which can be analyzed in a way that is entirely analogous to the kerr de sitter case however in order to prevent cumbersome algebraic manipulations from obstructing the flow of our analysis we give all details for reissner nordstrm de sitter black holes where the algebra is straightforward and where moreover mode stability can easily be shown to hold for subextremal spacetimes we then indicate rather briefly the mostly algebraic changes for kerr de sitter black holes and leave the similar general case of kerr newman de sitter black holes to the reader in fact our analysis is stable under suitable perturbations and one can thus obtain results entirely analogous to theorem thmintromain for kerr newman de sitter metrics with small non zero angular momentum xmath24 and small charge xmath23 depending on xmath24 or for small charge xmath23 and small non zero angular momentum xmath24 depending on xmath23 by perturbative arguments indeed in these two cases the kerr newman de sitter metric is a small stationary perturbation of the kerr de sitter resp reissner nordstrm de sitter metric with the same structure at xmath16 in the statement of theorem thmintromain we point out that the amount of regularity of the remainder term xmath34 at the cauchy horizon is directly linked to the amount xmath51 of exponential decay of xmath34 the more decay the higher the regularity this can intuitively be understood in terms of the blue shift effect xcite the more a priori decay xmath34 has along the cauchy horizon approaching xmath19 the less energy can accumulate at the horizon the precise microlocal statement capturing this is a radial point estimate at the intersection of xmath16 with the boundary at infinity of a compactification of the spacetime at xmath52 which we will discuss in subsecintrogeometry now xmath51 can be any real number less than the spectral gap xmath53 of the operator xmath54 which is the infimum of xmath55 over all non zero resonances or quasi normal modes xmath56 the resonance at xmath57 gives rise to the constant xmath40 term we refer to xcite and xcite for the discussion of resonances for black hole spacetimes due to the presence of a trapped region in the black hole spacetimes considered here xmath53 is bounded from above by a quantity xmath58 associated with the null geodesic dynamics near the trapped set as proved by dyatlov xcite in the present context following breakthrough work by wunsch and zworski xcite and by nonnenmacher and zworski xcite below resp above any line xmath59 xmath60 there are infinitely resp finitely many resonances in principle however one expects that there indeed exists a non zero number of resonances above this line and correspondingly the expansion can be refined to take these into account in fact one can obtain a full resonance expansion due to the complete integrability of the null geodesic flow near the trapped set see xcite since for the mode solution corresponding to a resonance at xmath61 xmath62 we obtain the regularity xmath63 at xmath16 shallow resonances ie those with small xmath64 give the dominant contribution to the solution xmath12 both in terms of decay and regularity at xmath16 the authors are not aware of any rigorous results on shallow resonances so we shall only
discuss this briefly in remark rmkrndshighreg taking into account insights from numerical results these suggest the existence of resonant states with imaginary parts roughly equal to xmath65 and xmath66 and hence the relative sizes of the surface gravities play a crucial role in determining the regularity at xmath16 whether resonant states are in fact no better than xmath67 and the existence of shallow resonances which if true would yield a linear instability result for cosmological black hole spacetimes with cauchy horizons analogous to xcite will be studied in future work once these questions have been addressed one can conclude that the lack of say xmath68 regularity at xmath16 is caused precisely by shallow quasinormal modes thus somewhat surprisingly the mechanism for the linear instability of the cauchy horizon of cosmological spacetimes is more subtle than for asymptotically flat spacetimes in that the presence of a cosmological horizon which ultimately allows for a resonance expansion of linear waves xmath12 leads to a much more precise structure of xmath12 at xmath16 with the regularity of xmath12 directly tied to quasinormal modes of the black hole exterior the interest in understanding the behavior of waves near the cauchy horizon has its roots in penrose s strong cosmic censorship conjecture which asserts that maximally globally hyperbolic developments for the einstein maxwell or einstein vacuum equations depending on whether one considers charged or uncharged solutions with generic initial data and a complete initial surface andor under further conditions are inextendible as suitably regular lorentzian manifolds in particular the smooth even analytic extendability of the reissner nordstrmde sitter and kerrde sitter solutions past their cauchy horizons is conjectured to be an unstable phenomenon it turns out that the question what should be meant by suitable regularity is very subtle we refer to works by christodoulou xcite dafermos xcite and costa giro natrio and silva xcite in the spherically symmetric setting for positive and negative results for various notions of regularity there is also work in progress by dafermos and luk on the xmath48 stability of the kerr cauchy horizon assuming a quantitative version of the non linear stability of the exterior region we refer to these works as well as to the excellent introductions of xcite for a discussion of heuristic arguments and numerical experiments which sparked this line of investigation here however we only consider linear equations motivated by similar studies in the asymptotically flat case by dafermos xcite see footnote 11 franzen xcite sbierski xcite and luk and oh xcite the main insight of the present paper is that a uniform analysis up to xmath16 can be achieved using by now standard methods of scattering theory and geometric microlocal analysis in the spirit of recent works by vasy xcite baskin vasy and wunsch xcite and xcite the core of the precise estimates of theorem thmintromain are microlocal propagation results at generalized radial sets as we will discuss in subsecintrostrategy from this geometric microlocal perspective however ie taking into account merely the phase space properties of the operator xmath54 it is both unnatural and technically inconvenient to view the cauchy horizon as a boundary after all the metric xmath3 is a non degenerate lorentzian metric up to xmath16 and beyond thus the most subtle step in our analysis is the formulation of a suitable extended problem in a neighborhood of xmath69 which 
reduces to the equation of interest namely the wave equation in xmath32 the penrose diagram is rather singular at future timelike infinity xmath19 yet all relevant phenomena in particular trapping and redblue shift effects should be thought of as taking place there as we will see shortly therefore we work instead with a compactification of the region of interest the domain of dependence of xmath14 in figure figintropenrose in which the horizons as well as the trapped region remain separated and the metric remains smooth as xmath70 concretely using the coordinate xmath10 employed in theorem thmintromain the radial variable xmath5 and the spherical variable xmath71 we consider a region xmath72 ie we add the ideal boundary at future infinity xmath73 to the spacetime and equip xmath74 with the obvious smooth structure in which xmath75 vanishes simply and non degenerately at xmath76 it is tempting and useful for purposes of intuition to think of xmath74 as being a submanifold of the blow up of the compactification suggested by the penrose diagram adding an ideal sphere at infinity at xmath19 at xmath19 however the details are somewhat subtle see xcite due to the stationary nature of the metric xmath3 the nullgeodesic flow should be studied in a version of phase space which has a built in uniformity as xmath70 a clean way of describing this uses the language of b geometry and b analysis we refer the reader to melrose xcite for a detailed introduction and xcite and xcite for brief overviews we recall the most important features here on xmath74 the metric xmath3 is a non degenerate lorentzian b metric ie a linear combination with smooth on xmath74 coefficients of xmath77 where xmath78 are coordinates in xmath79 in fact the coefficients are independent of xmath75 then xmath3 is a section of the symmetric second tensor power of a natural vector bundle on xmath74 the b cotangent bundle xmath80 which is spanned by the sections xmath81 we stress that xmath82 is a smooth non degenerate section of xmath80 up to and including the boundary xmath73 likewise the dual metric xmath83 is a section of the second symmetric tensor power of the b tangent bundle xmath84 which is the dual bundle of xmath80 and thus spanned by xmath85 the dual metric function which we also denote by xmath86 by a slight abuse of notation associates to xmath87 the squared length xmath88 over xmath89 the b cotangent bundle is naturally isomorphic to the standard cotangent bundle the geodesic flow lifted to the cotangent bundle is generated by the hamilton vector field xmath90 which extends to a smooth vector field xmath91 tangent to xmath92 now xmath93 is homogeneous of degree xmath94 with respect to dilations in the fiber and it is often convenient to rescale it by multiplication with a homogeneous degree xmath95 function xmath96 obtaining the homogeneous degree xmath97 vector field xmath98 as such it extends smoothly to a vector field on the radial or projective compactification xmath99 of xmath80 which is a ball bundle over xmath74 with fiber over xmath100 given by the union of xmath101 with the sphere at fiber infinity xmath102 the b cosphere bundle xmath103 is then conveniently viewed as the boundary xmath104 of the compactified b cotangent bundle at fiber infinity the projection to the base xmath74 of integral curves of xmath93 or xmath105 with null initial direction ie starting at a point in xmath106 yields reparameterizations of null geodesics on xmath107 this is clear in the interior of xmath74 and the important observation is 
that this gives a well defined notion of null geodesics or null bicharacteristics at the boundary at infinity xmath79 we remark that the characteristic set xmath108 has two components the union of the future null cones xmath109 and of the past null cones xmath110 the red shift or blue shift effect manifests itself in a special structure of the xmath105 flow near the b conormal bundles xmath111 of the horizons xmath112 xmath113 here xmath114 for a boundary submanifold xmath115 and xmath116 is the annihilator of the space of all vectors in xmath117 tangent to xmath118 xmath119 is naturally isomorphic to the conormal bundle of xmath118 in xmath79 indeed in the case of the reissner nordstrm de sitter metric xmath120 more precisely its boundary at fiber infinity xmath121 is a saddle point for the xmath105 flow with a stable or unstable manifold depending on which of the two components xmath122 one is working on contained in xmath123 and an unstable or stable manifold transversal to xmath124 in the kerr de sitter case xmath105 does not vanish everywhere on xmath125 but rather is non zero and tangent to it so there are non trivial dynamics within xmath125 but the dynamics in the directions normal to xmath125 still has the same saddle point structure see figure figintroradial in which one curve is a radial null geodesic and xmath47 is the projection of a non radial geodesic the right panel of that figure shows the compactification of the spacetime at future infinity together with the same two null geodesics the null geodesic flow extended to the b cotangent bundle over the boundary has saddle points at the b conormal bundles of the intersection of the horizons with the boundary at infinity xmath79 in order to take full advantage of the saddle point structure of the null geodesic flow near the cauchy horizon one would like to set up an initial value problem or equivalently a forced forward problem xmath126 with vanishing initial data but non trivial right hand side xmath127 on a domain which extends a bit past xmath16 because of the finite speed of propagation for the wave equation one is free to modify the problem beyond xmath16 in whichever way is technically most convenient waves in the region of interest xmath128 are unaffected by the choice of extension a natural idea then is to simply add a boundary xmath129 xmath130 which one could use to cap the problem off beyond xmath16 now xmath131 is timelike hence to obtain a well posed problem one needs to impose boundary conditions there while perfectly feasible the resulting analysis is technically rather involved as it necessitates studying the reflection of singularities at xmath131 quantitatively in a uniform manner as xmath132 near xmath131 one does not need the precise microlocal control as in xcite however a technically much easier modification involves the use of a complex absorbing potential xmath133 in the spirit of xcite here xmath133 is a second order b pseudodifferential operator on xmath74 which is elliptic in a large subset of xmath134 near xmath73 without b language one can take xmath133 for large xmath10 to be a time translation invariant properly supported psdo on xmath74 one then considers the operator xmath135 the point is that a suitable choice of the sign of xmath133 on the two components xmath136 of the characteristic set leads to an absorption of high frequencies along the future directed null geodesic flow over the support of xmath133 which allows one to control a solution xmath12 of xmath137 in terms of the right hand side xmath127 there however since we are forced to work on
a domain with boundary in order to study the forward problem the pseudodifferential complex absorption does not make sense near the relevant boundary component which is the extension of the left boundary in figure figintropenrose past xmath6 a doubling construction as in xcite on the other hand ie doubling the spacetime across the timelike surface xmath131 say amounts to gluing an artificial exterior region to our spacetime with one of its horizons identified with the original cauchy horizon this in particular creates another trapped region which we can however easily hide using a complex absorbing potential we then cap off the thus extended spacetime beyond the cosmological horizon of the artificial exterior region located at xmath138 by a spacelike hypersurface xmath139 at xmath140 xmath141 at which the analysis is straightforward xcite see figure figintroextended in the spherically symmetric setting one could also replace the region xmath134 beyond the cauchy horizon by a static de sitter type space thus not generating any further trapping or horizons and obviating the need for complex absorption but for kerr de sitter this gluing procedure is less straightforward to implement hence we use the above doubling type procedure already for reissner nordstrm the construction of the extension is detailed in subsecrndsmfd it creates an artificial horizon xmath142 and caps off beyond xmath142 using a spacelike hypersurface xmath139 complicated dynamics in the extended region are hidden by a complex absorbing potential xmath133 supported in the shaded region of figure figintroextended we thus study the forcing problem xmath143 with xmath127 and xmath12 supported in the future of the cauchy surface xmath14 in xmath144 and in the future of xmath139 in xmath145 the natural function spaces are weighted b sobolev spaces xmath146 where the spacetime sobolev space xmath147 measures regularity relative to xmath43 with respect to stationary vector fields as defined after the statement of theorem thmintromain more invariantly xmath148 for integer xmath44 consists of xmath43 functions which remain in xmath43 upon applying up to xmath44 b vector fields the space xmath149 of b vector fields consists of all smooth vector fields on xmath74 which are tangent to xmath76 and is equal to the space of smooth sections of the b tangent bundle xmath84 the forcing problem is now an equation on a compact space xmath150 which degenerates at the boundary the operator xmath151 is a b differential operator ie a sum of products of b vector fields and xmath152 is a b psdo note that this point of view is much more precise than merely stating that it is an equation on the noncompact space xmath153 thus the analysis of the operator xmath154 consists of two parts firstly the regularity analysis in which one obtains precise regularity estimates for xmath12 using microlocal elliptic regularity propagation of singularities and radial point results see subsecrndsregularity which relies on the precise global structure of the null geodesic flow discussed in subsecrndsflow and secondly the asymptotic analysis of subsecrndsfredholm and subsecrndsasymp which relies on the analysis of the mellin transformed in xmath75 equivalently fourier transformed in xmath155 operator family xmath156 its high energy estimates as xmath157 and the structure of poles of xmath158 which are known as resonances or quasi normal modes this last part in which we use the shallow resonances to deduce asymptotic expansions of waves is the only low frequency part of the analysis the regularity one obtains for xmath12
solving with say smooth compactly supported in xmath153 forcing xmath127 is determined by the behavior of the null geodesic flow near the trapping and near the horizons xmath120 xmath113 near the trapping we use the aforementioned results xcite while near xmath120 we use radial point estimates originating in work by melrose xcite and proved in the context relevant for us in xcite we recall these in subsecrndsregularity concretely equation combines a forward problem for the wave equation near the black hole exterior region xmath159 with a backward problem near the artificial exterior region xmath160 with hyperbolic propagation in the region between these two called no shift region in xcite near xmath7 and xmath8 then and by propagation estimates in any region xmath161 xmath60 the radial point estimate encapsulating the red shift effect yields smoothness of xmath12 relative to a b sobolev space with weight xmath162 ie allowing for exponential growth in which case trapping is not an issue while near xmath6 one is solving the equation away from the boundary xmath79 at infinity and hence the radial point estimate encapsulating the blue shift effect there yields an amount of regularity which is bounded from above by xmath163 where xmath37 is the surface gravity of xmath16 in the extended region xmath134 the regularity analysis is very simple since the complex absorption xmath133 makes the problem elliptic at the trapping there and at xmath142 and one then only needs to use real principal type propagation together with standard energy estimates combined with the analysis of xmath156 which relies on the same dynamical and geometric properties of the extended problem as the b analysis we deduce in subsecrndsfredholm that xmath154 is fredholm on suitable weighted b sobolev spaces and in fact solvable for any right hand side xmath127 if one modifies xmath127 in the unphysical region xmath134 in order to capture the high resp low regularity near xmath164 resp xmath165 these spaces have variable orders of differentiability depending on the location in xmath74 such spaces were used already by unterberger xcite and in a context closely related to the present paper in xcite we present results adapted to our needs in appendix secvariable in subsecrndsasymp then we show how the properties of the meromorphic family xmath158 yield a partial asymptotic expansion of xmath12 as in using more refined regularity statements at xmath166 we show in subsecrndsconormal that the terms in this expansion are in fact conormal to xmath6 ie they do not become more singular upon applying vector fields tangent to the cauchy horizon we stress that the analysis is conceptually very simple and close to the analysis in xcite in that it relies on tools in microlocal analysis and scattering theory which have been frequently used in recent years as a side note we point out that one could have analyzed xmath167 in xmath32 only by proving very precise estimates for the operator xmath167 which is a hyperbolic wave type operator in xmath32 near xmath6 while this would have removed the necessity to construct and analyze an extended problem the mechanism underlying our regularity and decay estimates namely the radial point estimate at the cauchy horizon would not have been apparent from this moreover the radial point estimate is very robust it works for kerr de sitter spaces just as it does for the spherically symmetric reissner nordstrm de sitter solutions a more interesting modification of our argument relies on the observation that it 
is not necessary for us to incorporate the exterior region in our global analysis since this has already been studied in detail before instead one could start assuming asymptotics for a wave xmath12 in the exterior region and then relate xmath12 to a solution of a global extended problem for which one has good regularity results and deduce them for xmath12 by restriction such a strategy is in particular appealing in the study of spacetimes with vanishing cosmological constant using the analytic framework of the present paper since the precise structure of the resolvent xmath168 has not been analyzed so far whereas boundedness and decay for scalar waves on the exterior regions of reissner nordstrm and kerr spacetimes are known by other methods see the references at the beginning of secintro we discuss this in the forthcoming xcite in the remaining parts of secrnds we analyze the essential spectral gap for near extremal black holes in subsecrndshighreg we find that for any desired level of regularity one can choose near extremal parameters of the black hole such that solutions xmath12 to with xmath127 in a finite codimensional space achieve this level of regularity at xmath16 however as explained in the discussion of theorem thmintromain it is very likely that shallow resonances cause the codimension to increase as the desired regularity increases lastly in subsecrndsbundles we indicate the simple changes to our analysis needed to accommodate wave equations on natural tensor bundles in seckds then we show how kerr de sitter spacetimes fit directly into our framework we analyze the flow on a suitable compactification and extension constructed in subseckdsmfd in subseckdsflow and deduce results completely analogous to the reissner nordstrm de sitter case in subseckdsres we are very grateful to jonathan luk and maciej zworski for many helpful discussions we would also like to thank sung jin oh for many helpful discussions and suggestions for reading parts of the manuscript and for pointing out a result in xcite which led to the discussion in remark rmkrndshighreg thanks also to elmar schrohe for very useful discussions leading to appendix secsuppext we are grateful for the hospitality of the erwin schrdinger institute in vienna where part of this work was carried out we gratefully acknowledge support by avs national science foundation grants dms1068742 and dms1361432 ph is a miller fellow and thanks the miller institute at the university of california berkeley for support we focus on the case of xmath169 spacetime dimensions the analysis in more than xmath169 dimensions is completely analogous in the domain of outer communications of the 4dimensional reissner nordstrm de sitter black hole given by xmath170 with xmath171 described below the metric takes the form xmath172 here xmath173 and xmath174 are the mass and the charge of the black hole and xmath175 with xmath176 the cosmological constant setting xmath177 this reduces to the schwarzschild de sitter metric we assume that the spacetime is non degenerate defrndsnondegenerate we say that the reissner de sitter spacetime with parameters xmath178 is non degenerate if xmath179 has xmath180 simple positive roots xmath181 since xmath182 when xmath183 we see that xmath184 the roots of xmath179 are called cauchy horizon xmath165 event horizon xmath185 and cosmological horizon xmath186 with the cauchy horizon being a feature of charged or rotating see seckds solutions of einstein s field equations to give a concrete example of a non degenerate 
spacetime let us check the non degeneracy condition for black holes with small charge and compute the location of the cauchy horizon for fixed xmath187 let xmath188 so xmath189 for xmath190 the function xmath191 has a root at xmath192 since xmath193 is negative for xmath194 and for large xmath195 but positive for large xmath196 the function xmath197 has two simple positive roots if and only if xmath198 where xmath199 is the unique positive critical point of xmath200 but xmath201 if and only if xmath202 we then have lemmarndsnondegenerate suppose xmath203 satisfy the non degeneracy condition and denote the three non negative roots of xmath191 by xmath204 then for small xmath174 the function xmath179 has three positive roots xmath205 xmath113 with xmath206 depending smoothly on xmath23 and xmath207 the existence of the functions xmath205 follows from the implicit function theorem taking into account the simplicity of the roots xmath208 of xmath191 let us write xmath209 these are smooth functions of xmath210 differentiating xmath211 with respect to xmath210 gives xmath212 hence xmath213 which yields the analogous expansion for xmath214 we now discuss the extension of the metric beyond the event and cosmological horizon as well as beyond the cauchy horizon the purpose of the present section is to define the manifold on which our analysis of linear waves will take place see proposition proprndsmfd for the final result we begin by describing the extension of the metric beyond the event and the cosmological horizon thereby repeating the arguments of xcite see figure figrndsext23 which shows the event horizon xmath17 and the cauchy horizon xmath16 we first study a region xmath215 bounded by an initial cauchy hypersurface xmath14 and two final cauchy hypersurfaces xmath216 and xmath217 the right panel of that figure shows the same region compactified at infinity xmath218 in the penrose diagram with the artificial hypersurfaces put in write xmath219 so xmath220 we denote by xmath221 a smooth function such that xmath222 xmath141 small with xmath223 smooth near xmath112 to be specified momentarily thus xmath224 as xmath225 and xmath226 we then put xmath227 and compute xmath228 which is a non degenerate lorentzian metric up to xmath229 with dual metric xmath230 we can choose xmath223 so as to make xmath231 timelike ie xmath232 indeed choosing xmath233 which undoes the coordinate change up to an additive constant accomplishes this trivially in xmath164 away from xmath234 however we need xmath223 to be smooth at xmath234 as well now xmath231 is timelike in xmath235 if and only if xmath236 which holds for any xmath237 therefore we can choose xmath238 smooth near xmath185 with xmath239 for xmath240 and xmath241 smooth near xmath186 with xmath242 for xmath243 and thus a function xmath244 such that in the new coordinate system xmath245 the metric xmath3 extends smoothly to xmath229 and xmath231 is timelike for xmath246 and furthermore we can arrange that xmath247 in xmath248 by possibly changing xmath249 by an additive constant extending xmath223 smoothly beyond xmath250 in an arbitrary manner the expression makes sense for xmath251 as well as for xmath252 we first notice that we can choose the extension xmath223 such that xmath231 is timelike also for xmath253 indeed for such xmath5 we have xmath254 and the timelike condition becomes xmath255 which is satisfied as long as xmath256 there in particular we can take xmath257 for xmath258 and xmath259 for xmath260 in which case we get xmath261 for xmath258 with xmath262 and for xmath263 with xmath264
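the root structure and the surface gravities quoted later can be checked numerically in a minimal sketch it assumes the standard reissner nordstrm de sitter form mu(r) = 1 - 2m/r + q^2/r^2 - lambda r^2 with lambda one third of the cosmological constant which is presumably what the earlier formula for xmath175 denotes the parameter values below are illustrative placeholders and the surface gravity formula kappa_j = |mu'(r_j)|/2 is the standard one for static metrics assumed to agree with the normalization used in the text

import numpy as np

# illustrative reissner nordstrom de sitter parameters (placeholders, not values from the text);
# assumed form: mu(r) = 1 - 2m/r + q**2/r**2 - lam*r**2 with lam = Lambda/3
m, q, lam = 1.0, 0.3, 0.02

# the horizon radii are the positive zeros of r**2 * mu(r) = -lam*r**4 + r**2 - 2m*r + q**2
coeffs = [-lam, 0.0, 1.0, -2.0 * m, q ** 2]
roots = np.roots(coeffs)
r_h = np.sort([r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0])
r1, r2, r3 = r_h                     # cauchy, event and cosmological horizon radii

def mu_prime(r):
    return 2 * m / r ** 2 - 2 * q ** 2 / r ** 3 - 2 * lam * r

# surface gravities kappa_j = |mu'(r_j)| / 2 (standard static-metric formula,
# assumed to match the normalization of the surface gravity used in the text)
kappa = [abs(mu_prime(r)) / 2 for r in r_h]
print("horizons:", r_h, "surface gravities:", kappa)
# small charge expansion: the cauchy horizon should sit near q**2/(2m)
print("r1 =", r1, "vs q^2/(2m) =", q ** 2 / (2 * m))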
we define xmath265 beyond xmath185 and xmath186 by the same formula using the extensions of xmath238 and xmath241 just described in particular xmath266 in xmath267 we define a time orientation in xmath268 by declaring xmath231 to be future timelike we introduce spacelike hypersurfaces in the thus extended spacetime as indicated in figure figrndsext23 namely xmath269 and xmath270 rmkrndshypersurfacenotation here and below the subscript i initial resp f final indicates that outward pointing timelike vectors are past resp future oriented the number in the subscript denotes the horizon near which the surface is located notice here that indeed xmath271 and xmath272 at xmath217 so xmath231 and xmath273 have opposite timelike character there while likewise xmath274 and xmath275 at xmath216 the tilde indicates that xmath216 will eventually be disposed of we only define it here to make the construction of the extended spacetime clearer the region xmath215 is now defined as xmath276 bounded by three final cauchy hypersurfaces xmath277 xmath278 and xmath216 a partial extension beyond the cauchy horizon is bounded by the final hypersurface xmath279 and a timelike hypersurface xmath131 figure figrndsext12 shows this partial extension and on the right the same region compactified at infinity with the artificial hypersurfaces put in next we further extend the metric beyond the coordinate singularity of xmath3 at xmath6 when written in the coordinates at xmath6 see figure figrndsext12 let xmath280 where now xmath281 with xmath282 for xmath283 and xmath284 smooth down to xmath6 thus by adjusting xmath285 by an additive constant we may arrange xmath286 for xmath287 notice that formally xmath288 and xmath289 in xmath290 thus xmath291 after extending xmath284 smoothly into xmath292 this expression is of the form with xmath293 xmath294 and xmath223 replaced by xmath295 xmath296 and xmath284 respectively in particular by the same calculation as above xmath297 is timelike provided xmath298 or xmath299 in xmath254 while in xmath235 any xmath300 works however since we need xmath282 for xmath5 near xmath185 where xmath254 requiring xmath297 to be timelike would force xmath301 as xmath302 which is incompatible with xmath284 being smooth down to xmath6 in view of the penrose diagram of the spacetime in figure figrndsext12 it is clear that this must happen since we can not make the level sets of xmath295 which coincide with the level sets of xmath293 ie with parts of xmath14 near xmath7 both remain spacelike and cross the cauchy horizon in the indicated manner thus we merely require xmath298 for xmath303 making xmath297 timelike there but losing the timelike character of xmath297 in a subset of the transition region xmath304 moreover similarly to the choices of xmath238 and xmath241 above we take xmath305 in xmath306 and xmath307 in xmath308 using the coordinates xmath309 we thus have xmath310 we further define xmath311 thus xmath277 intersects xmath14 at xmath312 xmath313 we choose xmath314 as follows we calculate the squared norm of the conormal of xmath277 using as xmath315 which is positive in xmath316 provided xmath60 xmath317 since xmath254 in this region therefore choosing xmath314 so that it verifies these inequalities xmath277 is spacelike put xmath318 so xmath319 at xmath320 and define xmath321 we note that xmath278 is indeed spacelike as xmath271 there and xmath279 is spacelike by construction of xmath295 the surface xmath131 is timelike hence the subscript putting xmath322 finishes the definition of all objects in figure figrndsext12
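the various timelike conditions imposed on the differentials of the modified time functions above all follow from one short computation recorded here for convenience it assumes the static form g = \mu\,dt^2 - \mu^{-1}\,dr^2 - r^2\,d\omega^2 and a generic coordinate change t_0 = t - f(r) where f stands in for the functions of xmath223 or xmath284 type chosen above

g = \mu\,dt_0^2 + 2\mu f'(r)\,dt_0\,dr + \bigl(\mu f'(r)^2 - \mu^{-1}\bigr)\,dr^2 - r^2\,d\omega^2, \qquad g^{-1}(dt_0,\,dt_0) = \mu^{-1} - \mu f'(r)^2

so dt_0 is timelike precisely when \mu^{-1} - \mu f'^2 > 0 that is |f'| < \mu^{-1} where \mu > 0 and |f'| > |\mu|^{-1} where \mu < 0 moreover the determinant of the dt_0 dr block equals -1 identically so the extended metric is smooth and non degenerate across \mu = 0 as soon as \mu f' and \mu f'^2 - \mu^{-1} extend smoothly there for instance for f'(r) = s\,(c(r) - \mu^{-1}) with c smooth and s = \pm 1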
in order to justify the subscripts f we compute a smooth choice of time orientation first of all xmath297 is future timelike by choice in xmath268 furthermore in xmath323 we have xmath271 so xmath273 is timelike in xmath323 we then calculate xmath324 in xmath325 so xmath326 and xmath297 are in the same causal cone there in particular xmath326 is future timelike in xmath290 which justifies the notation xmath278 furthermore xmath297 is timelike for xmath327 with xmath328 in xmath306 using the form of the metric with xmath305 there hence xmath326 and xmath329 are in the same causal cone here thus xmath297 is past timelike in xmath327 justifying the notation xmath279 see also figure figrndsradial below lastly for xmath277 we compute xmath330 by our choice of xmath314 hence the future timelike 1form xmath326 is indeed outward pointing at xmath277 we remark that from the perspective of xmath331 the surface xmath216 is initial but we keep the subscript f for consistency with the notation used in the discussion of xmath332 the relevant region bounded by the final cauchy hypersurface xmath333 and two initial hypersurfaces xmath139 and xmath278 is shown in figure figrndsext01 the artificial extension in the region behind the cauchy horizon removes the curvature singularity and generates an artificial horizon xmath142 and the right panel of that figure shows the same region compactified at infinity with the artificial hypersurfaces put in one can now analyze linear waves on the spacetime xmath334 if one uses the reflection of singularities at xmath131 we will describe the null geodesic flow in subsecrndsflow however we proceed as explained in secintro and add an artificial exterior region to the region xmath335 see figure figrndsext01 we first note that the form of the metric in xmath336 is xmath337 thus of the same form as before define a function xmath338 such that xmath339 so xmath340 on xmath341 and xmath342 on xmath343 see figure figrndsmustar one can in fact drop the last assumption on xmath344 as we will do in the kerr de sitter discussion for simplicity but in the present situation this assumption allows for the nice interpretation of the appended region as a past or backwards version of the exterior region of a black hole figure figrndsmustar shows the relevant function solid in the region xmath134 beyond the cauchy horizon glued to a smooth function xmath344 dashed where different from xmath179 notice that xmath344 has the same qualitative properties near xmath345 as near xmath164 we extend the metric to xmath346 by defining xmath347 we then extend xmath3 beyond xmath138 as in put xmath348 with xmath349 xmath350 when xmath351 xmath352 where we set xmath353 further let xmath354 for xmath355 so xmath356 in xmath357 up to redefining xmath358 by an additive constant then in xmath359 coordinates the metric xmath3 takes the form near xmath360 with xmath293 replaced by xmath361 and xmath362 hence xmath3 extends across xmath138 as a non degenerate stationary lorentzian metric and we can choose xmath363 to be smooth across xmath138 so that xmath364 is timelike in xmath365 and such that moreover xmath366 in xmath367 thus ensuring the form of the metric replacing xmath293 and xmath294 by xmath361 and xmath95 respectively we can glue the functions xmath361 and xmath295 together by defining the smooth function xmath368 in xmath369 to be equal to xmath361 in xmath365 and equal to xmath295 in xmath370 define xmath371 note here that xmath364 is past timelike in xmath365 lastly we put xmath372 note that in the region xmath332 we have produced an artificial horizon xmath142 at xmath138 again the notation xmath278 is incorrect from the perspective of xmath332 but is consistent with the
notation used in the discussion of xmath331 let us summarize our construction proprndsmfd fix parameters xmath178 of a reissner de sitter spacetime which is non degenerate in the sense of definition defrndsnondegenerate let xmath344 be a smooth function on xmath373 satisfying where xmath179 is given by for xmath141 small define the manifold xmath374 and equip xmath89 with a smooth stationary non degenerate lorentzian metric xmath3 which has the form xmath375cupr2delta r3delta beginsplit labeleqrndsmetrictransition g mudt2 2sj1mu cjdtdr2cjmu cj2dr2r2domega2 qquadqquad r rjleq 2delta textnormal or rinr1 2delta r2 2delta j1 endsplit beginsplit labeleqrndsmetricbeyond g mudt2 4sjdtdr 3mu1dr2r2domega2 qquadqquad rinrj2delta rjdelta j02textnormal or rinrjdelta rj2delta j13 endsplit endaligned in xmath376 where xmath377 then the region xmath378 is isometric to a region in the reissner nordstrm de sitter spacetime with parameters xmath379 with xmath21 isometric to the exterior domain bounded by the event horizon xmath17 at xmath7 and the cosmological horizon xmath18 at xmath8 xmath380 isometric to the black hole region bounded by the future cauchy horizon xmath16 at xmath6 and the event horizon and xmath381 isometric to a region beyond the future cauchy horizon see figure figrndsextfull furthermore xmath89 is time orientable one can choose the smooth functions xmath382 such that xmath383 and xmath384 the hypersurfaces xmath385 are spacelike provided xmath60 is sufficiently small here xmath386 they bound a domain xmath153 which is a submanifold of xmath89 with corners recall remark rmkrndshypersurfacenotation for our conventions in naming the hypersurfaces xmath89 and xmath153 possess natural partial compactifications xmath74 and xmath150 respectively obtained by introducing xmath387 and adding to them their ideal boundary at infinity xmath73 the metric xmath3 is a non degenerate lorentzian b metric on xmath74 and xmath150 adding xmath73 to xmath89 means defining xmath388 where xmath389 is identified with the point xmath390 and we define the smooth structure on xmath74 by declaring xmath75 to be a smooth boundary defining function the extensions described above amount to a direct construction of a manifold xmath391rtimesmathbbs2omega where we obtained the function xmath368 by gluing xmath361 and xmath295 in xmath308 and similarly xmath295 and xmath293 in xmath325 we then extend the metric xmath3 non degenerately to a stationary metric in xmath392 and xmath393 thus obtaining a metric xmath3 on xmath89 with the listed properties which is the diagram of reissner nordstrm de sitter in a neighborhood of the exterior domain and of the black hole region as well as near the cauchy horizon further beyond the cauchy horizon we glue in an artificial exterior region eliminating the singularity at xmath194 right the compactification of xmath153 to a manifold with corners xmath150 the smooth structure of xmath150 is the one induced by the embedding of xmath150 into the plane cross xmath394 as displayed here we define the regions xmath395 and xmath215 as in and respectively as submanifolds of xmath153 with corners their boundary hypersurfaces are hypersurfaces within xmath153 we denote the closures of these domains and hypersurfaces in xmath150 by the same names but dropping the superscript xmath396 furthermore we write xmath397 for the ideal boundaries at infinity one reason for constructing the compactification xmath150 step by step is that the null geodesic dynamics almost decouple in the subdomains xmath398 
xmath399 and xmath400; see Figures figrndsext01, figrndsext12 and figrndsext23. We denote by xmath83 the dual metric of xmath3. We recall that we can glue xmath401 in xmath398, xmath326 in xmath402 and xmath403 in xmath400 together using a non-negative partition of unity and obtain a 1-form xmath404 which is everywhere future timelike in xmath150. Thus, the characteristic set of xmath54, xmath405 with xmath406 the dual metric function, globally splits into two connected components, xmath407. Indeed, if xmath408, then xmath409, which is spacelike, so xmath410 shows that xmath411; thus xmath110, resp. xmath109, is the union of the past, resp. future, causal cones. We note that xmath108 and xmath136 are smooth codimension 1 submanifolds of xmath412 in view of the Lorentzian nature of the dual metric xmath83; moreover, xmath136 is transversal to xmath413, in fact the differentials xmath414 and xmath415 (xmath75 lifted to a function on xmath80) are linearly independent everywhere in xmath412. We begin by analyzing the null geodesic flow in the b-cotangent bundle near the horizons. We will see that the Hamilton vector field xmath93 has critical points where the horizons intersect the ideal boundary xmath416 of xmath150; more precisely, xmath93 is radial there. In order to simplify the calculations of the behavior of xmath93 nearby, we observe that the smooth structure of the compactification xmath150, which is determined by the function xmath387, is unaffected by the choice of the functions xmath223 in Proposition proprndsmfd, since changing xmath223 merely multiplies xmath75 by a positive function that only depends on xmath5, hence is smooth on our initial compactification xmath150. Now the intersections xmath417 are smooth boundary submanifolds of xmath74, and we define xmath418, which is well defined given merely the smooth structure on xmath150. The point of our observation then is that we can study the Hamilton flow near xmath120 using any choice of xmath223. Thus, introducing xmath419 with xmath420 near xmath250, we find from that xmath421. Let xmath422; then, with xmath423 and writing b-covectors as xmath424, the dual metric function xmath425 near xmath120 is then given by xmath426; correspondingly, the Hamilton vector field is xmath427. To study the xmath93-flow in the radially compactified b-cotangent bundle near xmath125, we introduce rescaled coordinates xmath428. We then compute the rescaled Hamilton vector field in xmath429 to be xmath430. Writing xmath431 in a local coordinate chart on xmath394, we have xmath432, thus xmath433 at xmath434; in particular, xmath435 have opposite signs by definition of xmath294, and the quantity which will control regularity and decay thresholds at the radial set xmath120 is the quotient xmath436; see Definition defrndsorderfunctions and the proof of Proposition proprndsglobalreg for their role. We remark that the reciprocal xmath437 is equal to the surface gravity of the horizon at xmath112, see e.g. xcite. We proceed to verify that xmath438 is a source/sink for the xmath105-flow within xmath124 by constructing a quadratic defining function xmath439 of xmath125 within xmath440 for which xmath441, modulo terms which vanish cubically at xmath120; note that xmath442 has the same relative sign. Now xmath125 is defined within xmath443 by the vanishing of xmath444 and xmath445, and we have xmath446, likewise for xmath445; therefore xmath447 satisfies. One can in fact easily diagonalize the linearization of xmath105 at its critical set xmath125 by observing that xmath448 modulo quadratically vanishing terms.
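To make the source/sink structure of the radial points at a horizon concrete, the following is a minimal symbolic sketch, not the computation carried out above: it uses a toy one-dimensional dual metric function p = mu(r) xi^2 with a simple zero of mu at the horizon radius r0, computes the Hamilton vector field, rescales at fiber infinity, and reads off the source/sink character from the sign of mu'(r0). The names mu, kappa2, rho are chosen only for this illustration and are not the notation used in the text.

```python
import sympy as sp

# Toy model of the radial-point structure at a horizon r = r0:
# dual metric function p(r, xi) = mu(r) * xi^2, with mu(r0) = 0 and mu'(r0) = kappa2 != 0.
r, xi, r0, kappa2 = sp.symbols('r xi r0 kappa2', real=True)
mu = kappa2 * (r - r0)            # linearization of mu near its simple zero
p = mu * xi**2

# Hamilton vector field H_p = (dp/dxi) d/dr - (dp/dr) d/dxi (components):
Hp_r  = sp.diff(p, xi)            # coefficient of d/dr:  2*kappa2*(r - r0)*xi
Hp_xi = -sp.diff(p, r)            # coefficient of d/dxi: -kappa2*xi**2
print(Hp_r, Hp_xi)                # over r = r0 the d/dr part vanishes: H_p is radial there

# Pass to the fiber-radial compactification rho = 1/xi and rescale by xi^{-1}:
rho = sp.symbols('rho', positive=True)
V_r   = sp.simplify((Hp_r / xi).subs(xi, 1/rho))               # rescaled d/dr component
V_rho = sp.simplify((-rho**2 * Hp_xi / xi).subs(xi, 1/rho))    # d/drho component of xi^{-1} H_p
print(V_r, V_rho)                 # 2*kappa2*(r - r0),  kappa2*rho

# Linearization at the critical point (r, rho) = (r0, 0) at fiber infinity:
J = sp.Matrix([[sp.diff(V_r, r),   sp.diff(V_r, rho)],
               [sp.diff(V_rho, r), sp.diff(V_rho, rho)]]).subs({r: r0, rho: 0})
print(J.eigenvals())              # {2*kappa2: 1, kappa2: 1}: both eigenvalues share the sign
                                  # of kappa2, so the critical point is a source or a sink.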
Further studying the flow at xmath112, we note that xmath273 is null there, and, writing xmath449, a covector xmath450 is in the orthocomplement of xmath273 if and only if xmath451, using the form of the metric, which then implies xmath452 in view of xmath453. Since xmath454, we deduce that xmath455 at xmath456, where we let xmath457; we note that this set is invariant under the Hamilton flow. More precisely, we have xmath458, so for xmath264, i.e. at xmath8, xmath273 is in the same causal cone as xmath459, hence in the future null cone. Thus, letting xmath460 and taking xmath461, we find that xmath462 lies in the same causal cone as xmath273; but xmath462 is not orthogonal to xmath273, hence we obtain xmath463, more generally xmath464. It follows that forward null bicharacteristics in xmath110 can only cross xmath8 in the inward direction (xmath5 decreasing), while those in xmath109 can only cross in the outward direction (xmath5 increasing). At xmath138 there is a sign switch both in the definition of xmath136 (because there xmath459 is past timelike) and in xmath465, so the same statement holds there. At xmath7 there is a single sign switch in the calculation because of xmath466, and at xmath6 there is a single sign switch because of the definition of xmath136 there; so forward null bicharacteristics in xmath110 can only cross xmath6 or xmath7 in the inward direction (xmath5 decreasing), and forward bicharacteristics in xmath109 only in the outward direction (xmath5 increasing). Next, we locate the radial sets xmath120 within the two components of the characteristic set, i.e. we determine the components xmath467 of the radial sets. The calculations verifying the initial/final character of the artificial hypersurfaces, appearing in the arguments of the previous section, show that xmath468 at xmath165 and xmath186, while xmath469 at xmath360 and xmath185; so, since xmath109, resp. xmath110, is the union of the future, resp. past, null cones, we have xmath470. In view of, and taking into account that xmath471 differs from xmath75 by an xmath5-dependent factor while xmath472 at xmath120, we thus have xmath473. We connect this with Figure figrndsextfull: namely, if we let xmath460, then xmath474 is the unstable manifold at xmath475 for xmath352 and the stable manifold at xmath475 for xmath476, and the other way around for xmath477; in view of, xmath475 is a sink for the xmath105-flow within xmath478 for xmath352, while it is a source for xmath476, with sink/source switched for the xmath479 sign; see Figure figrndsradial (of the characteristic set and the behavior of two null geodesics; the arrows on the horizons are future timelike; in xmath110, all arrows are reversed). We next shift our attention to the two domains of outer communications, xmath480 in xmath398 and xmath21 in xmath400, where we study the behavior of the radius function along the flow. Using the form of the metric, at a point xmath481 we thus have xmath482, so xmath472 necessitates xmath483, hence xmath484, and thus we get xmath485. Now, for xmath159, xmath486 vanishes at the radius xmath487 of the photon sphere, and xmath488 for xmath489; likewise, for xmath160, by construction we have xmath490 only at xmath491, and xmath492 for xmath493. Therefore, if xmath472, then xmath494 unless xmath495, in which case xmath462 lies in the trapped set xmath496. Restricting to bicharacteristics within xmath497, which is invariant under the xmath93-flow since xmath498 there, and defining xmath499, we can conclude that all critical points of xmath500 along null geodesics in xmath501 or xmath346 are strict local minima: indeed, if xmath502 at xmath462, then either
xmath495 in which case xmath503 unless xmath472 hence xmath504 or xmath472 in which case xmath505 unless xmath495 hence again xmath504 as in xcite this implies that within xmath79 forward null bicharacteristics in xmath501 resp xmath346 either tend to xmath506 resp xmath507 or they reach xmath7 or xmath8 resp xmath138 or xmath6 in finite time while backward null bicharacteristics either tend to xmath508 resp xmath509 or they reach xmath7 or xmath8 resp xmath138 or xmath6 in finite time for this argument we make use of the source sink dynamics at xmath510 further they can not tend to xmath511 resp xmath512 in both the forward and backward direction while remaining in xmath346 resp xmath501 unless they are trapped ie contained in xmath511 resp xmath512 since otherwise xmath513 would attain a local maximum along them lastly bicharacteristics reaching a horizon xmath112 in finite time in fact cross the horizon by our earlier observation the trapping at xmath514 is in fact xmath5normally hyperbolic for every xmath5 xcite next in xmath515 we recall that xmath273 is future resp past timelike in xmath516 and xmath517 resp xmath518 therefore if xmath453 lies in one of these three regions xmath454 implies xmath519 this is consistent with and the paragraph following it in order to describe the global structure of the null bicharacteristic flow we define the connected components of the trapped set in the exterior domain of the spacetime xmath520 then xmath521 have stable unstable manifolds xmath522 with the convention that xmath523 while xmath524 is transversal to xmath478 concretely xmath525 is the union of forward trapped bicharacteristics ie bicharacteristics which tend to xmath526 in the forward direction while xmath527 is the union of backward trapped bicharacteristics tending to xmath526 in the backward direction further xmath528 is the union of backward trapped bicharacteristics and xmath529 the union of forward trapped bicharacteristics tending to xmath530 see figure figrndsflow of the characteristic set and in the region xmath531 of the reissner de sitter spacetime the picture for xmath110 is analogous with the direction of the arrows reversed and xmath532 replaced by xmath533 the structure of the flow in the neighborhood xmath398 of the artificial exterior region is the same as that in the neighborhood xmath400 of the exterior domain except the time orientation and thus the two components of the characteristic set are reversed write xmath534 a denote by xmath535 the forward and backward trapped sets with the same sign convention as for xmath522 above we note that backward resp forward trapped null bicharacteristics in xmath536 resp xmath537 may be forward resp backward trapped in the artificial exterior region ie they may lie in xmath538 resp xmath539 but this is the only additional trapping present in our setup to statethis succinctly we write xmath540 then proprndsflow the null bicharacteristic flow in xmath541 has the following properties 1 itrndsflowbdy let xmath542 be a null bicharacteristic at infinity xmath543 where xmath544 then in the backward direction xmath542 either crosses xmath139 in finite time or tends to xmath545 while in the forward direction xmath542 either crosses xmath217 in finite time or tends to xmath546 the curve xmath542 can tend to xmath526 in at most one direction and likewise for xmath547 itrndsflowint let xmath542 be a null bicharacteristic in xmath548 then in the backward direction xmath542 either crosses xmath549 in finite time or tends to xmath550 while in 
the forward direction xmath542 either crosses xmath551 in finite time or tends to xmath552 itrndsflowhyp in both cases in the region where xmath518 xmath553 is strictly decreasing resp increasing in the forward resp backward direction in xmath109 while in the regions where xmath516 or xmath517 xmath553 is strictly increasing resp decreasing in the forward resp backward direction in xmath109 itrndsflowradialtrapped xmath510 xmath554 as well as xmath521 and xmath555 are invariant under the flow for null bicharacteristics in xmath110 the analogous statements hold with backward and forward reversed and xmath479 and xmath556 switched herexmath139 etc is a shorthand notation for xmath557 statement itrndsflowhyp follows from and itrndsflowradialtrapped holds by the definition of the radial and trapped sets to prove the backward part of itrndsflowbdy note that if xmath516 on xmath542 then xmath542 crosses xmath139 by if xmath138 on xmath542 then xmath542 crosses into xmath516 since xmath558 if xmath542 remains in xmath559 in the backward direction it either tends to xmath547 or it crosses xmath6 since it can not tend to xmath560 because of the sink nature of this set once xmath542 crosses into xmath32 it must tend to xmath7 by itrndsflowhyp and hence either tend to the source xmath561 or cross into xmath562 in xmath562 xmath542 must tend to xmath552 as it can not cross xmath7 or xmath8 into xmath563 or xmath517 in the backward direction the analogous statement for xmath110 now in the forward direction is immediate since reflecting xmath542 pointwise across the origin in the b cotangent bundle but keeping the affine parameter the same gives a bijection between backward bicharacteristics in xmath109 and forward bicharacteristics in xmath110 the forward part of itrndsflowbdy is completely analogous it remains to prove itrndsflowint note that xmath564 at xmath565 thus in xmath327 where xmath566 is future timelike xmath75 is strictly decreasing in the backward direction along bicharacteristics xmath567 hence the arguments for part itrndsflowbdy show that xmath542 crosses xmath139 or tends to xmath550 if it lies in xmath568 otherwise it crosses into xmath32 in the backward direction in the latter case recall that in xmath380 xmath553 is monotonically increasing in the backward direction we claim that xmath542 can not cross xmath277 with the defining function xmath569 of xmath277 we arranged for xmath570 to be past timelike so xmath571 for xmath572 ie xmath127 is increasing in the backward direction along the xmath93integral curve xmath542 near xmath277 which proves our claim this now implies that xmath542 enters xmath268 in the backward direction from which point on xmath75 is strictly increasing hence xmath542 either crosses xmath14 in xmath573 or it crosses into xmath562 in the latter case it in fact crosses xmath14 by the arguments proving itrndsflowbdy the forward part is proved in a similar fashion forward solutions to the wave equation xmath126 in the domain of dependence of xmath14 ie in xmath574 are not affected by any modifications of the operator xmath54 outside ie in xmath145 as indicated in secintro we are therefore free to place complex absorbing operators at xmath512 and xmath575 which obviate the need for delicate estimates at normally hyperbolic trapping see the proof of proposition proprndsglobalreg and for a treatment of regularity issues at the artificial horizon related to xmath576 in see also definition defrndsorderfunctions concretely let xmath577 be a small neighborhood of 
xmath578, with xmath579 the projection, so that xmath580 in the notation of Proposition proprndsmfd; thus xmath577 stays away from xmath581. Choose xmath152 with Schwartz kernel supported in xmath582 and real principal symbol satisfying xmath583, with the inequality strict at xmath584; thus xmath133 is elliptic at xmath585. We then study the operator xmath586; the convention for the sign of xmath54 is such that xmath587. We will use weighted variable order b-Sobolev spaces with weight xmath588 and the order given by a function xmath589; in fact, the regularity will vary only in the base, not in the fibers of the b-cotangent bundle. We refer the reader to Appendix A and Appendix secvariable for details on variable order spaces. We define the function space xmath590 as the space of restrictions to xmath150 of elements of xmath591 which are supported in the causal future of xmath592; thus, distributions in xmath593 are supported distributions at xmath592 and extendible distributions at xmath551 and at xmath76, see Appendix B. (In fact, on manifolds with corners there are some subtleties concerning such mixed supported/extendible spaces and their duals, which we discuss in Appendix secsuppext.) The supported character at the initial surfaces, encoding vanishing Cauchy data, is the reason for the subscript fw (forward). The norm on xmath593 is the quotient norm induced by the restriction map which takes elements of xmath594 with the stated support property to their restriction to xmath150. Dually, we also consider the space xmath595, consisting of restrictions to xmath150 of distributions in xmath594 which are supported in the causal past of xmath551. Concretely, for the analysis of xmath154, we will work on slightly growing function spaces, i.e. allowing exponential growth of solutions in xmath368; we will obtain precise asymptotics, in particular boundedness, in the next section. Thus, let us fix a weight xmath596. The Sobolev regularity is dictated by the radial sets xmath597 and xmath598, as captured by the following definition. defrndsorderfunctions Let xmath588. Then a smooth function xmath599 is called a forward order function for the weight xmath51 if xmath600, with xmath576 defined in; here xmath601 is any small number. The function xmath602 is called a backward order function for the weight xmath51 if xmath603. Backward order functions will be used for the analysis of the dual problem. rmkrndsbeta1computation If xmath604 and xmath162, a forward order function xmath602 can still be taken constant, and thus one can work on fixed order Sobolev spaces in Proposition proprndsglobalreg below; this is the case for small charges xmath174. Indeed, a straightforward computation in the variable xmath605 using Lemma lemmarndsnondegenerate shows that xmath606. Note that xmath602 is a forward order function for the weight xmath51 if and only if xmath607 is a backward order function for the weight xmath608.
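As an illustration of how such a spatially varying order function can be arranged in practice, here is a small numerical sketch. The interpolation profile, the threshold form 1/2 + alpha*beta1 at the Cauchy-horizon radial set, and all names (r1, beta1, alpha, s_far) are assumptions made for this sketch only; the binding requirements are the inequalities in the definition above.

```python
import numpy as np

def forward_order_function(r, r1, beta1, alpha, s_far=8.0, width=0.1):
    """Smooth interpolation between a large order s_far away from r = r1 and a
    value strictly below the illustrative threshold 1/2 + alpha*beta1 at r = r1."""
    s_near = 0.5 + alpha * beta1 - 0.05                                # below threshold
    step = 0.5 * (1.0 + np.tanh((r - r1 - width) / (0.25 * width)))    # smooth step in r
    return s_near + (s_far - s_near) * step

# sanity checks for one (hypothetical) choice of parameters
r1, beta1, alpha = 1.0, 2.0, 0.05
r = np.linspace(r1, r1 + 1.0, 201)
s = forward_order_function(r, r1, beta1, alpha)
assert s[0] < 0.5 + alpha * beta1        # below threshold at the Cauchy horizon
assert abs(s[-1] - 8.0) < 1e-6           # fixed high order away from r = r1
assert np.all(np.diff(s) >= 0)           # order is only lowered when approaching r1
```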
The lower, resp. upper, bounds on the order functions at the radial sets are forced by the propagation estimate, Proposition 21, which we will use at the radial sets: one can propagate high regularity from xmath609 into the radial set and into the boundary (red shift effect), while there is an upper limit on the regularity one can propagate out of the radial set and the boundary into the interior xmath609 of the spacetime (blue shift effect). The definition of order functions here reflects the precise relationship of the a priori decay or growth rate xmath51 and the regularity xmath602, i.e. the strength of the red or blue shift effect, depending on a priori decay or growth along the horizon. We recall the radial point propagation result in a qualitative form; the quantitative version of this, yielding estimates, follows from the proof of this result, or can be recovered from the qualitative statement using the closed graph theorem. proprndsradialrecall (Proposition 21) Suppose xmath154 is as above, let xmath588 and let xmath113. If xmath610, xmath611, and if xmath612, then xmath510, and thus a neighborhood of xmath510, is disjoint from xmath613, provided xmath614, xmath615, and, in a neighborhood of xmath510, xmath616 is disjoint from xmath613. On the other hand, if xmath617, and if xmath612, then xmath510, and thus a neighborhood of xmath510, is disjoint from xmath613, provided xmath614 and a punctured neighborhood of xmath510, with xmath510 removed, in xmath440 is disjoint from xmath613. We then have: proprndsglobalreg Suppose xmath162 and xmath602 is a forward order function for the weight xmath51; let xmath618 be a forward order function for the weight xmath51 with xmath619. Then xmath620. We also have the dual estimate xmath621 for backward order functions xmath622 and xmath623 for the weight xmath608 with xmath624. Both estimates hold in the sense that if the quantities on the right hand side are finite, then so is the left hand side, and the inequality is valid. The arguments are very similar to the ones used in xcite; the proof relies on standard energy estimates near the artificial hypersurfaces, various microlocal propagation estimates, and crucially relies on the description of the null bicharacteristic flow given in Proposition proprndsflow. Let xmath625 be such that xmath626. First of all, we can extend xmath127 to xmath627, with xmath628 supported in xmath629, xmath630 still, and xmath631 near xmath14. We can then use the unique solvability of the forward problem for the wave equation xmath632 to obtain an estimate for xmath12 there: indeed, using an approximation argument, approximating xmath628 by smooth functions xmath633, and using the propagation of singularities, propagating xmath634-regularity from xmath635, where the forward solution xmath636 of xmath637 vanishes (which can be done on this regularity scale, uniformly in xmath314), we obtain an estimate xmath638. Since xmath12 agrees with xmath639 in the domain of dependence of xmath14, the same argument shows that we can control the xmath640-norm of xmath12 in a neighborhood of xmath139, say in xmath641, in terms of xmath642. Then, in xmath643, we use the propagation of singularities (forwards in xmath109, backwards in xmath110) to obtain local xmath634-regularity away from the boundary at infinity xmath73. At the radial sets xmath644 and xmath598, the radial point estimate, Proposition proprndsradialrecall, allows us, using the a priori xmath645-regularity of xmath12, to propagate xmath646-regularity into xmath647; propagation within xmath648 then shows that we have xmath646-control on xmath12 on xmath649. Since xmath162, we can then use Theorem 32 to control xmath12 in xmath646 microlocally at xmath511 and propagate this control along xmath650 near xmath217. The microlocal propagation of singularities only gives local control away from xmath217, but we can get uniform regularity up to xmath217 by standard energy estimates, using a cutoff near xmath217 and the propagation of singularities for an extended problem, solving the forward wave equation with forcing xmath628 cut off near xmath217, plus an error term coming from the cutoff; see Proposition 213 and the similar discussion around below. In the present proof, we thus obtain an estimate for the
xmath646norm of xmath12 in xmath268 next we propagate regularity in xmath380 using part itrndsflowhyp of proposition proprndsflow and our assumption xmath651 the only technical issue is now at xmath277 where the microlocal propagation only gives local regularity away from xmath277 this will be resolved shortly focusing on the remaining region xmath652 we start with the control on xmath12 near xmath139 which we propagate forwards in xmath109 and backwards in xmath110 either up to xmath333 or into the complex absorption hiding xmath585 see xcite for the propagation of singularities with complex absorption moreover at the elliptic set of the complex absorbing operator xmath133 we get xmath653control on xmath12 and we can propagate xmath646estimates from there the result is that we get xmath646estimates of xmath12 in a punctured neighborhood of xmath166 within xmath648 thus the low regularity part of proposition proprndsradialrecall applies we can then propagate regularity from a neighborhood xmath166 along xmath654 this gives us local regularity away from xmath655 where the microlocal propagation results do not directly give uniform estimates in order to obtain uniform regularity up to xmath655 we use the aforementioned cutoff argument for an extended problem near xmath655 choose xmath656 such that xmath657 for xmath658 xmath659 and such that xmath660 if xmath641 or xmath240 or xmath661 see figure figrndscutoff for an illustration in particular xmath6620 by the support properties of xmath133 therefore we have xmath663uquad uchi u note that we have uniform xmath664control on xmath665u by the support properties of xmath666 extend xmath667 beyond xmath655 to xmath668 with support in xmath669 so that the global norm of xmath670 is bounded by a fixed constant times the quotient norm of xmath667 the solution of the equation xmath671 with support of xmath672 in xmath673 is unique it is simply the forward solution taking into account the time orientation in the artificial exterior region but then the local regularity estimates for xmath672 for the extended problem which follow from the propagation of singularities using the approximation argument sketched above give by restriction uniform regularity of xmath12 up to xmath655 the cutoff xmath674 is supported in and below the shaded region the shaded region itself containing xmath675 is where we have already established xmath676bounds for xmath12 putting all these estimates together we obtain an estimate for xmath677 in terms of xmath678 the proof of the dual estimate is completely analogous we now obtain initial regularity that we can then propagate as above by solving the backward problem for xmath54 near xmath655 and xmath217 the estimates in proposition proprndsglobalreg do not yet yield the fredholm property of xmath154 as explained in xcite we therefore study the mellin transformed normal operator family xmath156 see xcite which in the present dilation invariant in xmath75 or translation invariant in xmath368 setting is simply obtained by conjugating xmath154 by the mellin transform in xmath75 or equivalently the fourier transform in xmath679 ie xmath680 acting on functions on the boundary at infinity xmath681 concretely we need to show that xmath156 is invertible between suitable function spaces on xmath682 for a weight xmath162 since this will allow us to improve the xmath683 error term in by a space with an improved weight so xmath593 injects compactly into it an analogous procedure for the dual problem gives the full fredholm property for 
xmath154 see xcite and below for details for any finite value of xmath61 we can analyze the operator xmath684 xmath685 using standard microlocal analysis and energy estimates near xmath686 and xmath687 the natural function spaces are variable order sobolev spaces xmath688 which we define to be the restrictions to xmath544 of elements of xmath689 with support in xmath629 and dually on xmath690 the restrictions to xmath416 of elements of xmath689 with support in xmath691 obtaining fredholm mapping properties between suitable function spaces however in order to obtain useful estimates for our global b problem we need uniform estimates for xmath156 as xmath157 in strips of bounded xmath64 on function spaces which are related to the variable order b sobolev spaces on which we analyze xmath154 thus let xmath692 xmath693 and consider the semiclassical rescaling xcite xmath694 we refer to xcite for details on the relationship between the b operator xmath154 and its semiclassical rescaling in particular we recall that the hamilton vector field of the semiclassical principal symbol of xmath695 for xmath696 is naturally identified with the hamilton vector field of the b principal symbol of xmath154 restricted to xmath697 where we use the coordinates in the b cotangent bundle forany sobolev order function xmath698 and a weight xmath588 the mellin transform in xmath75 gives an isomorphism xmath699 where xmath700 for xmath701 is a semiclassical variable order sobolev space with a non constant weighting in xmath702 see appendix secvariable for definitions and properties of such spaces the analysis of xmath695 xmath703 acting on xmath704type spaces is now straightforward given the properties of the hamilton flow of xmath154 indeed in view of the supported extendible nature of the b spaces xmath705 and xmath706 into account we are led to define the corresponding semiclassical space xmath707 to be the space of restrictions to xmath416 of elements of xmath704 with support in xmath629 resp then in the region where xmath602 is not constant recall that this is a subset of xmath709 xmath695 is a semiclassical real principal type operator as follows from and hence the only microlocal estimates we need there are elliptic regularity and the real principal type propagation for variable order semiclassical sobolev spaces these estimates are proved in propositions propvariablesclelliptic and propvariablesclpropagation the more delicate estimates take place in standard semiclassical function spaces these are the radial point estimates near xmath112 in the present context proved in xcite and the semiclassical estimates of wunsch zworski xcite and dyatlov xcite microlocalized in xcite at the normally hyperbolic trapping near the artificial hypersurfaces xmath710 intersected with xmath76 the operator xmath695 is a semiclassical wave operator and we use standard energy estimates there similar to the proof of proposition proprndsglobalreg but keeping track of powers of xmath702 see xcite for details we thus obtain proprndssemiclassical let xmath711 then for xmath712 and xmath713 we have the estimate xmath714 with a uniform constant xmath33 here xmath602 and xmath619 are forward order functions for all weights in xmath715 see definition defrndsorderfunctions for the dual problem we similarly have xmath716 where xmath602 and xmath619 are backward order functions for all weights in xmath717 notice here that if xmath602 were constant the estimate would read xmath718 which is the usual hyperbolic loss of one derivative and one 
power of xmath702 the estimate is conceptually the same but in addition takes care of the variable orders trapping causes no additional losses here since xmath719 rmkrndssemiclassicaldual we have xmath720 and xmath721 the change of sign in xmath722 when going from to the dual estimate is analogous to the change of sign in the weight xmath51 in proposition proprndsglobalreg for future reference we note that we still have high energy estimates for xmath723 in strips including and extending below the real line the only delicate part is the estimate at the normally hyperbolic trapping more precisely at the semiclassical trapped set xmath724 which can be naturally identified with the intersection of the trapped set xmath511 with xmath725 for xmath696 thus let xmath726 be the minimal expansion rate at the semiclassical trapped set in the normal direction as in xcite or xcite let us then write xmath727 for some real number xmath58 in the kerr de sitter case discussed later xmath728 is a smooth function on xmath724 see subsecrndshighreg in particular and for the ingredients for the calculation of xmath728 in a limiting case therefore if xmath729 then xmath730 the reason for the xmath731 appearing here is the following for the xmath556 case note that for xmath732 corresponding to semiclassical analysis in xmath733 which near the trapped set xmath511 intersects the forward light cone xmath109 non trivially we propagate regularity forwards along the hamilton flow while in the xmath479 case corresponding to propagation in the backward light cone xmath110 we propagate backwards along the flow using xcite see also the discussion in xcite we conclude proprndssemiclassical2 using the above notation the uniform estimates and hold with xmath734 replaced by xmath602 on the right hand sides provided xmath735 where xmath736 the effect of replacing xmath734 by xmath602 is that this adds an additional xmath737 to the right hand side ie we get a weaker estimate which in the presence of trapping can not be avoided by xcite the strengthening of the norm in the regularity sense is unnecessary but does not affect our arguments later we return to the case xmath719 if we define the space xmath738 then the estimates in proposition proprndssemiclassical imply that the map xmath739 is fredholm for xmath740 with high energy estimates as xmath157 moreover for small xmath712 the error terms on the right hand sides of and can be absorbed into the left hand sides hence in this case we obtain the invertibility of the map this implies that xmath741 is invertible for xmath740 xmath742 since therefore there are only finitely many resonances poles of xmath743 in xmath744 for any xmath711 we may therefore pick a weight xmath162 such that there are no resonances on the line xmath682 which in view of implies the estimate xmath745 where xmath746 is the manifold on which the dilation invariant operator xmath747 naturally lives here xmath602 is a forward order function for the weight xmath51 and the subscript fw on the b sobolev spaces denotes distributions with supported character at xmath748 and extendible at xmath749 we point out that the choice xmath75 of boundary defining function and the choice of xmath75dilation orbits fixes an isomorphism of a collar neighborhood of xmath416 in xmath150 with a neighborhood of xmath750 in xmath751 and the two xmath646norms on functions supported in this neighborhood given by the restriction of the xmath752norm and the restriction of the xmath753norm respectively are equivalent equipped with we can 
now improve proposition proprndsglobalreg to obtain the fredholm property of xmath154 first we let xmath602 be a forward order function for the weight xmath51 but with the more stringent requirement xmath754 and we require that the forward order function xmath755 satisfies xmath756 using with xmath602 replaced by xmath755 and a cutoff xmath656 identically xmath94 near xmath416 and supported in a small collar neighborhood of xmath416 the estimate then implies as in xcite xmath757uhmathrmbmathrmfwmathsfs0 1alpha qquadqquad chimathcalpnmathcalp uhmathrmbmathrmfwmathsfs0 1alphaendaligned noting that xmath758intaumathrmdiffmathrmb1 the second to last term can be estimated by xmath759 while the last term can be estimated by xmath760 thus we obtain xmath761 where the inclusion xmath762 is now compact this estimate implies that xmath763 is finite dimensional and xmath764 is closed the dual estimate is xmath765 where now xmath766 is a backward order function for the weight xmath608 and the backward order function xmath622 satisfies the more stringent bound xmath767 note that xmath158 not having a pole on the line xmath682 is equivalent to xmath768 not having a pole on the line xmath769 since xmath770 we wish to take xmath771 with xmath602 as in the estimate so if we require in addition to that xmath772 the estimates and for xmath771 imply by a standard functional analytic argument see eg proof of theorem 2617 that xmath773 is fredholm where xmath774 and the range of xmath154 is the annihilator of the kernel of xmath775 acting on xmath776 we can strengthen the regularity at the cauchy horizon by dropping cf xcite thmrndsfredholm suppose xmath162 is such that xmath154 has no resonances on the line xmath682 let xmath602 be a forward order function for the weight xmath51 and assume holds then the map xmath154 defined in is fredholm as a map with range equal to the annihilator of xmath777 let xmath778 be an order function satisfying both and so by the above discussion xmath779 is fredholm since xmath780 we a forteriori get the finite dimensionality of xmath781 on the other hand if xmath782 annihilates xmath777 it also annihilates xmath783 hence we can find xmath784 solving xmath785 the propagation of singularities proposition proprndsglobalreg implies xmath677 and the proof is complete to obtain a better result we need to study the structure of resonances notice that for the purpose of dealing with a single resonance one can simplify notation by working with the space xmath786 see rather than xmath787 since the semiclassical high energy parameter is irrelevant then lemmarndsresonancesupp 1 itrndsresonancesupp every resonant state xmath788 corresponding to a resonance xmath61 with xmath789 is supported in the artificial exterior region xmath790 more precisely every element in the range of the singular part of the laurent series expansion of xmath158 at such a resonance xmath61 is supported in xmath790 in fact this holds more generally for any xmath56 which is not a resonance of the forward problem for the wave equation in a neighborhood xmath400 of the black hole exterior itrndsresonancesupp0res if xmath791 denotes the restriction of distributions on xmath416 to xmath32 then the only pole of xmath792 with xmath793 is at xmath57 has rank xmath94 and the space of resonant states consists of constant functions since xmath12 has supported character at xmath794 we obtain xmath795 in xmath516 since xmath12 solves the wave equation xmath796 there on the other hand the forward problem for the wave equation in 
the neighborhood xmath400 of the black hole exterior does not have any resonances with positive imaginary part this is well known for schwarzschild de sitter spacetimes xcite and for slowly rotating kerr de sitter spacetimes either by direct computation xcite or by a perturbation argument xcite for the convenience of the reader we recall the argument for the schwarzschild de sitter case which applies without change in the present setting as well a simple integration by parts argument see eg xcite or xcite shows that xmath12 must vanish in xmath21 now the propagation of singularities at radial points implies that xmath12 is smooth at xmath7 and xmath8 where the a priori regularity exceeds the threshold value and hence in xmath517 xmath12 is a solution to the homogeneous wave equation on an asymptotically de sitter space which decays rapidly at the conformal boundary which is xmath8 hence must vanish identically in xmath517 see footnote 58 for details the same argument applies in xmath380 yielding xmath795 there therefore xmath797 as claimed an iterative argument similar to proof of lemma 83 yields the more precise result the more general statement follows along the same lines and is in fact much easier to prove since it does not entail a mode stability statement suppose xmath61 is not a resonance of the forward wave equation on xmath400 then a resonant state xmath798 must vanish in xmath400 and we obtain xmath797 as before likewise for the more precise result this proves itrndsresonancesupp for the proof of itrndsresonancesupp0res it remains to study the resonance at xmath97 since the only xmath400 resonance in the closed upper half plane is xmath97 note that an element in the range of the most singular laurent coefficient of xmath792 at xmath57 lies in xmath799 but elements in xmath799 which vanish near xmath6 vanish identically in xmath32 and hence are annihilated by xmath791 while elements which are not identically xmath97 near xmath6 are not identically xmath97 in xmath562 as well but the only non trivial elements of xmath799 which are smooth at xmath185 and xmath186 are constant in xmath21 and since xmath800 in xmath32 we deduce by unique continuation that xmath801 indeed consists of constant functions but then the order of the pole of xmath792 at xmath57 equals the order of the xmath97resonance of the forward problem for xmath54 in xmath400 which is known to be equal to xmath94 see the references above the xmath94dimensionality of xmath801 then implies that the rank of the pole of xmath802 at xmath97 indeed equals xmath94 since we are dealing with an extended global problem here involving pseudodifferential complex absorption solvability is not automatic but it holds in the region of interest xmath32 to show this we first need lemmarndssolvability recall the definition of the set xmath803 where the complex absorption is placed from under the assumptions of theorem thmrndsfredholm in particular xmath162 there exists a linear map xmath804 such that for all xmath805 the function xmath806 lies in the range of the map xmath154 in by theorem thmrndsfredholm the statement of the lemma is equivalent to xmath807 let xmath808 xmath809 we claim that xmath810 implies xmath811 in other words elements of xmath812 are uniquely determined by their restriction to xmath813 to see this note that xmath814 on xmath813 implies that in fact xmath13 solves the homogeneous wave equation xmath815 thus we conclude by the supported character of xmath13 at xmath655 and xmath217 that xmath13 in fact vanishes in 
xmath563 and xmath517 so xmath816 using the high energy estimates a contour shifting argument see lemma 35 and the fact that resonances of xmath154 with xmath789 have support disjoint from xmath817 by lemma lemmarndsresonancesupp itrndsresonancesupp we conclude that in fact xmath818 ie xmath13 vanishes to infinite order at future infinity but then radial point estimates and the simple version of propagation of singularities at the normally hyperbolic trapping since we are considering the backwards problem on decaying spaces see theorem 32 estimate 310 imply that in fact xmath819 nowthe energy estimate in lemma 215 applies to xmath13 and yields xmath820 for xmath821 hence xmath814 as claimed therefore if xmath822 forms a basis of xmath812 then the restrictions xmath823 are linearly independent elements of xmath824 and hence one can find xmath825 with xmath826 the map xmath827 then satisfies all requirements we can then conclude under the assumptions of theorem thmrndsfredholm all elements in the kernel of xmath154 in are supported in the artificial exterior domain xmath790 moreover for all xmath805 with support in xmath32 there exists xmath828 such that xmath137 in xmath32 if xmath677 lies in xmath763 then the supported character of xmath12 at xmath592 together with uniqueness for the wave equation in xmath516 and xmath32 implies that xmath12 vanishes identically there giving the first statement for the second statement we use lemma lemmarndssolvability and solve the equation xmath829 which gives the desired xmath12 in particular solutions of the equation xmath137 exist and are unique in xmath32 which we of course already knew from standard hyperbolic theory in the region on our side xmath32 of the cauchy horizon the point is that we now understand the regularity of xmath12 up to the cauchy horizon we can refine this result substantially for better behaved forcing terms eg for xmath830 with support in xmath32 we will discuss this in the next two sections the only resonance of the forward problem in xmath400 in xmath793 is a simple resonance at xmath57 with resonant states equal to constants see the references given in the proof of lemma lemmarndsresonancesupp and there exists xmath26 such that xmath97 is the only resonance in xmath831 this does not mean that the global problem for xmath154 does not have other resonances in this half space in the notation of proposition proprndssemiclassical2 we may assume xmath832 so that we have high energy estimates in xmath831 proprndspartialasymp let xmath26 be as above suppose xmath12 is the forward solution of xmath833 then xmath12 has a partial asymptotic expansion xmath834 with xmath30 and xmath657 near xmath73 xmath660 away from xmath73 and xmath34 is smooth in xmath32 while xmath835 near xmath6 let xmath836 and let xmath602 be a forward order function for the weight xmath837 using lemma lemmarndssolvability we may assume that xmath838 is solvable with xmath839 by modifying xmath127 in xmath134 if necessary in fact by the propagation of singularities theorem thmrndsfredholm we may take xmath602 to be arbitrarily large in compact subsets of xmath32 then a standard contour shifting argument using the high energy estimates for xmath156 in xmath831 see lemma 35 or theorem 221 implies that xmath840 has an asymptotic expansion as xmath132 xmath841 where the xmath842 are the resonances of xmath154 in xmath843 the xmath844 are their multiplicities and the xmath845 are resonant states corresponding to the resonance xmath842 lastly xmath846 is the remainder 
term of the expansion even though xmath154 is dilation invariant near xmath73 this argument requires a bit of care due to the extendible nature of xmath12 at xmath847 one needs to consider the cutoff equation xmath848u computing the inverse mellin transform of xmath849 generates the expansion by a contour shifting argument see lemma 31 now xmath154 annihilates the partial expansion so xmath850 on the set where xmath657 by the propagation of singularities proposition proprndsglobalreg we can improve the regularity of xmath34 on this set to xmath835 thus we have shown regularity in the region where xmath657 ie where we did not cut off however considering on an enlarged domain and running the argument there with the cutoff xmath674 supported in the enlarged domain and identically xmath94 on xmath150 we obtain the full regularity result upon restricting to xmath150 now by lemma lemmarndsresonancesupp itrndsresonancesupp all resonant states of xmath154 which are not resonant states of the forward problem in xmath400 must in fact vanish in xmath32 and by part itrndsresonancesupp0res of lemma lemmarndsresonancesupp the only term in that survives upon restriction to xmath32 is the constant term thus we obtain a partial expansion with a remainder which decays exponentially in xmath368 in an xmath43 sense we will improve this in particular to xmath851 decay in the next section suppose xmath12 solves hence it has an expansion for any killing vector field xmath79 we then have xmath852 now if xmath639 solves the global problem xmath853 using the extension operator xmath854 from lemma lemmarndssolvability then xmath855 in xmath32 by the uniqueness for the cauchy problem in this region but by proposition proprndspartialasymp xmath639 has an expansion like with constant term vanishing because xmath79 annihilates the constant term in the expansion of xmath12 and therefore xmath639 lies in space xmath856 near the cauchy horizon xmath857 as well this argument can be iterated and we obtain xmath858 any number xmath859 of vector fields xmath860 which are equal to xmath861 or rotation vector fields on the xmath394factor of the spacetime which are independent of xmath862 these vector fields are all tangent to the cauchy horizon we obtain for any small open interval xmath2 containing xmath165 that xmath863 a posteriori by sobolev embedding this gives corrndsboundedness using the notation of proposition proprndspartialasymp the solution xmath12 of has an asymptotic expansion xmath864 with xmath30 and there exists a constant xmath28 such that xmath865 in particular xmath12 is uniformly bounded in xmath32 and extends continuously to xmath16 translated back to xmath866 the estimate on the remainder states that for scalar waves one has exponentially fast pointwise decay to a constant this corollary recovers franzen s boundedness result xcite for linear scalar waves on the reissner nordstrm spacetime near the cauchy horizon in the cosmological setting the above argument is unsatisfactory in two ways firstly they are not robust and in particular do not quite apply in the kerr de sitter setting discussed in seckds however see remark rmkkdsrescarter which shows that using a hidden symmetry of kerr de sitter space related to the completely integrable nature of the geodesic equation one can still conclude boundedness in this case secondly the regularity statement is somewhat unnatural from a pde perspective thus we now give a more robust microlocal proof of the conormality of xmath34 ie iterative regularity under 
application of vector fields tangent to xmath6 which relies on the propagation of conormal regularity at the radial set xmath166 see proposition propbconormal first however we study conormal regularity properties of xmath156 for fixed xmath61 in particular giving results for individual resonant states from now on we work locally near xmath6 and microlocally near xmath867 and all pseudodifferential operators we consider implicitly have wavefront set localized near xmath868 as in subsecrndsflow we use the function xmath869 instead of xmath75 where xmath419 xmath870 near xmath6 hence the dual metric function xmath83 is given by since xmath471 is a smooth non zero multiple of xmath75 this is inconsequential from the point of view of regularity and it even is semiclassically harmless for xmath703 denote the conjugation of xmath154 by the mellin transform in xmath471 by xmath871 with xmath61 the mellin dual variable to xmath75 we first study standard non semiclassical conormality using techniques developed in xcite and used in a context closely related to ours in xcite we note that the standard principal symbol of xmath872 is given by xmath873 then lemmarndsmodule the xmath874module xmath875 is closed under commutators moreover we can choose finitely many generators of xmath876 over xmath874 denoted xmath877 xmath878 and xmath879 with xmath880 elliptic such that for all xmath881 we have xmath882 sumell0n cjellaellquad cjellinpsi1x where xmath883 for xmath884 since xmath166 is lagrangian and thus in particular coisotropic the first statement follows from the symbol calculus further is a symbolic statement as well since xmath885inpsi2x and the summand xmath886 is a freely specifiable first order term so we merely need to find symbols xmath887 homogeneous of degree xmath94 with xmath888 such that xmath889 with xmath890 for xmath884 note that this is clear for xmath891 since in this case xmath892 we then let xmath893 and we take xmath894 to be linear in the fibers and such that they span the linear functions in xmath895 over xmath896 we extend xmath897 to linear functions on xmath898 by taking them to be constant in xmath5 and xmath899 thus these xmath900 are symbols of differential operators in the spherical variables we then compute xmath901 which is of the desired form since xmath902 vanishes quadratically at xmath166 moreover for xmath903 one readily sees that xmath904 vanishes quadratically at xmath166 as well finishing the proof in the lagrangiansetting this is a general statement as shown by haber and vasy see lemma 21 equation 61 the positive commutator argument yielding the low regularity estimate at generalized radial sets see proposition 24 can now be improved to yield iterative regularity under the module xmath876 indeed we can follow the proof of proposition 44 which is for a generalized radial source sink in the b setting whereas we work on a manifold without boundary here so the weights in the reference can be dropped or xcite very closely we leave the details to the reader in order to compress the notation for products of module derivatives we denote xmath905 in the notation of the lemma and then use multiindex notation xmath906 the final result reverting back to xmath156 is the following recall that xmath907 is a source and xmath908 is a sink for the hamilton flow within xmath898 lemmarndsmoduleestimate let xmath909 be a vector of generators of the module xmath876 as above suppose xmath910 let xmath911 be such that xmath912 and xmath83 are elliptic at xmath907 resp xmath908 and all 
forward resp backward null bicharacteristics from xmath913 resp xmath914 reach xmath915 while remaining in xmath916 then xmath917 in particular corrndsresonantstateconormal if xmath12 is a resonant state of xmath154 ie xmath796 then xmath12 is conormal to xmath6 relative to xmath918 ie for any number of vector fields xmath919 on xmath79 which are tangent to xmath6 we have xmath920 indeed by the propagation of singularities xmath12 is smooth away from xmath921 and then lemma lemmarndsmoduleestimate implies the stated conormality property we now turn to the conormal regularity estimate in the spacetime b setting let us define xmath922 using the stationary xmath75invariant extensions of the vector field generators of the module xmath876 defined in lemma lemmarndsmodule together with xmath923 one finds that the module xmath924 is generated over xmath925 by xmath926 xmath927 and xmath928 with xmath929 elliptic satisfying xmath930sumell0n1 cjellaellquad cjellinpsimathrmb1m with xmath931 for xmath932 the proof of proposition 44 then carries over to the saddle point setting of proposition proprndsradialrecall and gives in the below threshold case which is the relevant one at the cauchy horizon propbconormal suppose xmath154 is as above and let xmath588 xmath933 if xmath934 and if xmath612 then xmath921 and thus a neighborhood of xmath921 is disjoint from xmath935 for all xmath936 provided xmath937 for xmath936 and provided a punctured neighborhood of xmath921 with xmath921 removed in xmath440 is disjoint from xmath938 thus if xmath939 is conormal to xmath654 ie remains in xmath940 microlocally under iterative applications of elements of xmath924 this in particular holds if xmath941 then xmath12 is conormal relative to xmath942 provided xmath12 lies in xmath943 in a punctured neighborhood of xmath166 using proposition propbconormal at the radial set xmath166in the part of the proof of proposition proprndspartialasymp where the regularity of xmath34 is established we obtain thmrndspartialasympconormal let xmath26 be as in proposition proprndspartialasymp and suppose xmath12 is the forward solution of xmath944 then xmath12 has a partial asymptotic expansion xmath864 where xmath657 near xmath73 xmath660 away from xmath73 and with xmath30 and xmath945 for all xmath859 and all vector fields xmath946 which are tangent to the cauchy horizon xmath6 here xmath947 is given by the same result holds true without the constant term xmath40 for the forward solution of the massive klein gordon equation xmath948 xmath39 small for the massive klein gordon equation the only change in the analysis is that the simple resonance at xmath97 moves into the lower half plane see eg the perturbation computation in lemma 35 this leads to the constant term xmath40 which was caused by the resonance at xmath97 being absent this implies the estimate and thus yields corollary corrndsboundedness as well the amount of decay xmath51 and thus the amount of regularity we obtain in theorem thmrndspartialasympconormal is directly linked to the size of the spectral gap ie the size of the resonance free strip below the real axis as explained in subsecrndsasymp due to the work of s barreto zworski xcite in the spherically symmetric case and general results by dyatlov xcite at xmath5normally hyperbolic trapping for every xmath5 the size of the essential spectral gap is given in terms of dynamical quantities associated to the trapping see proposition proprndssemiclassical2 we recall that the essential spectral gap is the supremum of all 
xmath949 such that there are only finitely many resonances above the line xmath950; thus, the essential spectral gap only concerns the high energy regime, i.e. it does not give any information about low energy resonances. In this section we compute the size of the essential spectral gap in some limiting cases. The possibly remaining finitely many resonances between xmath97 and the resonances caused by the trapping will be studied separately in future work; we give some indications of the expected results in Remark rmkrndshighreg. In order to calculate the relevant dynamical quantities at the trapped set, we compute the linearization of the flow in the xmath951 variables at the trapped set xmath511: we have xmath952 modulo functions vanishing quadratically at xmath511, and, in the same sense, xmath953, which in view of xmath954 (see also) gives xmath955. Therefore, the expansion rate of the flow in the normal direction at xmath511 is equal to xmath956. To find the size of the essential spectral gap for the forward problem of xmath54, we need to compute the size of the imaginary part of the subprincipal symbol of the semiclassical rescaling of xmath957 at the semiclassical trapped set. Put xmath958, xmath693; then xmath959, with xmath960, xmath588. We thus obtain xmath961. The essential spectral gap thus has size at least xmath51 provided xmath962, so xmath963. We compute the quantity on the right for near-extremal Reissner–Nordström–de Sitter black holes with very small cosmological constant. First, using the radius of the photon sphere for the Reissner–Nordström black hole with xmath22, xmath964, and the radius of the Cauchy horizon, xmath965, we obtain xmath966 for the size of the essential spectral gap for resonances caused by the trapping in the case xmath22. For xmath177 one finds xmath967, which agrees with equation 03 for xmath22; in the extremal case xmath968, we find xmath969. Furthermore, we have xmath970, thus xmath971; therefore xmath972, which blows up as xmath973. This corresponds to the fact that the surface gravity of extremal black holes vanishes. Given xmath45, we can thus choose xmath60 small enough so that xmath974, and then, taking xmath176 to be small, the same relation holds for the xmath0-dependent quantities xmath728 and xmath947. Since there are only finitely many resonances in any strip xmath975, we conclude by Theorem thmrndspartialasympconormal, taking xmath976 close to xmath728, that for forcing terms xmath127 which are orthogonal to a finite dimensional space of dual resonant states corresponding to resonances in xmath977, the solution xmath12 has regularity xmath942 at the Cauchy horizon. Put differently: for near-extremal Reissner–Nordström–de Sitter black holes with very small cosmological constant xmath176, waves with initial data in a finite codimensional space within the space of smooth functions achieve any fixed order of regularity at the Cauchy horizon, in particular better than xmath50.
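For the near-extremal limit just discussed, the following numerical sketch collects the standard Lambda = 0 Reissner–Nordström quantities that enter: the horizon radii r_± = M ± sqrt(M²−Q²), the photon sphere radius r_P = (3M + sqrt(9M²−8Q²))/2, and the Cauchy horizon surface gravity κ₋ = (r₊−r₋)/(2r₋²). It only illustrates that κ₋ → 0 as Q → M while the photon sphere stays away from the horizons, which is the mechanism behind the improved regularity statement above; the variable names are ours and the code is not a computation from the text.

```python
import numpy as np

def rn_quantities(M, Q):
    """Standard Lambda = 0 Reissner-Nordstrom quantities (illustrative only)."""
    d = np.sqrt(M**2 - Q**2)
    r_minus, r_plus = M - d, M + d                         # Cauchy / event horizon radii
    r_photon = 0.5 * (3*M + np.sqrt(9*M**2 - 8*Q**2))      # photon sphere radius
    kappa_minus = (r_plus - r_minus) / (2.0 * r_minus**2)  # Cauchy horizon surface gravity
    return r_minus, r_plus, r_photon, kappa_minus

M = 1.0
for Q in (0.5, 0.9, 0.99, 0.999):
    r1, r2, rP, k1 = rn_quantities(M, Q)
    # kappa_minus -> 0 in the extremal limit Q -> M while r_photon -> 2M stays away
    # from the horizons, so a regularity threshold of the form
    # 1/2 + (spectral gap)/kappa_minus grows without bound, as discussed above.
    print(f"Q = {Q:6.3f}: r- = {r1:.4f}, r+ = {r2:.4f}, r_P = {rP:.4f}, kappa- = {k1:.4f}")
```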
cosmological constant xmath176 can be mapped to xmath988 via xmath989 xmath990 where xmath991 is the surface gravity of the cosmological horizon and xmath992 resp xmath993 are static coordinates on xmath988 resp xmath987 under this map the metric on xmath988 is pulled back to a constant multiple of the metric on xmath987 thus waves on xmath987 decay to constants with the speed xmath994 which corresponds to a resonance at xmath984 our analysis is consistent with the numerical results assuming the existence of these resonances we expect linear waves in this case to be generically no smoother than xmath995 at the cauchy horizon which highlights the importance of the relative sizes of the surface gravities for understanding the regularity at the cauchy horizon for near extremal black holes where xmath996 this gives xmath997 thus the local energy measured by an observer crossing the cauchy horizon is of the order xmath998 which diverges in view of xmath999 this agrees with equation 9 we point out however that the waves are still in xmath50 if xmath1000 which is satisfied for near extremal black holes this is analogous to sbierski s criterion xcite for ensuring the finite energy of waves at the cauchy horizon of linear waves with fast decay along the event horizon the rigorous study of resonances associated with the event and cosmological horizons will be subject of future work the analysis presented in the previous sections goes through with only minor modifications if we consider the wave equation on natural vector bundles for definiteness we focus on the wave equation more precisely the hodge dalembertian on differential xmath1001forms xmath1002 in this case mode stability and asymptotic expansions up to decaying remainder terms in the region xmath215 a neighborhood of the black hole exterior region were proved in xcite the previous arguments apply to xmath1003 the only difference is that the threshold regularity at the radial points at the horizons shifts at the event horizon and the cosmological horizon this is inconsequential as we may work in spaces of arbitrary high regularity there at the cauchy horizon however one has fixing a time independent positive definite inner product on the fibers of the xmath1001form bundle with respect to which one computes adjoints xmath1004 at xmath921 with xmath1005 and xmath1006 and endomorphism on the xmath1001form bundle and one can compute that the lowest eigenvalue of xmath1006 which is self adjoint with respect to the chosen inner product is equal to xmath1007 but then the regularity one can propagate into xmath921 for xmath1008 xmath588 solving xmath1009 xmath667 compactly supported and smooth is xmath1010 as follows from proposition 21 and footnote 5 thus in the partial asymptotic expansion in theorem thmrndspartialasympconormal which has a different leading order term now coming from stationary xmath1001form solutions of the wave equation we can only establish conormal regularity of the remainder term xmath34 at the cauchy horizon relative to the space xmath1010 which for small xmath26 gives sobolev regularity xmath1011 for small xmath60 assuming that the leading order term is smooth at the cauchy horizon which is the case for example for 2forms see theorem 43 we therefore conclude that as soon as we consider xmath1001forms xmath12 with xmath1012 our methods do not yield uniform boundedness of xmath12 up to the cauchy horizon however we remark that the conormality does imply uniform bounds as xmath302 of the form xmath1013 xmath60 small a finer 
analysis would likely yield more precise results in particular boundedness for certain components of xmath12 and as in the scalar setting a converse result namely showing that such a blow up does happen is much more subtle we do not pursue these issues in the present work we recall from xcite the form of the kerr de sitter metric with parameters xmath176 cosmological constant xmath173 black hole mass and xmath24 angular momentum are denoted xmath1014 in xcite while our xmath1015 are denoted xmath1016 there xmath1017 where xmath1018 in order to guarantee the existence of a cauchy horizon we need to assume xmath15 analogous to definition defrndsnondegenerate we make a non degeneracy assumption defkdsnondegenerate we say that the kerr de sitter spacetime with parameters xmath1019 is non degenerate if xmath1020 has xmath180 simple positive roots xmath181 one easily checks that xmath1021 and again xmath6 in the analytic extension of the spacetime is called the cauchy horizon xmath7 the event horizon and xmath8 the cosmological horizon we consider a simple case in which non degeneracy can be checked immediately lemmakdsnondegenerate suppose xmath1022 and denote the three non negative roots of xmath1023 by xmath204 then for small xmath15 xmath1020 has three positive roots xmath1024 xmath113 with xmath206 depending smoothly on xmath1025 and xmath1026 we recall that the condition ensures the existence of the roots xmath208 as stated one then computes for xmath1027 that xmath1028 giving the first statement in order to state unconditional results later on we in fact from now on assume to be in the setting of this lemma ie we consider slowly rotating kerr de sitter black holes see remark rmkkdsresgeneral for further details as in subsecrndsmfd we discuss the smooth extension of the metric xmath3 across the horizons and construct the manifold on which the linear analysis will take place all steps required for this construction are slightly more complicated algebraically but otherwise very similar to the ones in the reissner nordstrm de sitter setting so we shall be brief thus with xmath1029 we will take xmath1030 for xmath5 near xmath250 where xmath1031 using xmath1032 and xmath1033 one computes xmath1034 using eg the frame xmath1035 xmath1036 xmath1037 and xmath1038 one finds the volume density to be xmath1039 moreover the form of the dual metric is xmath1040 this is a non degenerate lorentzian metric apart from the usual singularity of the spherical coordinates xmath1041 which indeed is merely a coordinate singularity as shown by a change of coordinates xcite see also remark rmkkdsflowvalidcoord below as in the reissner nordstrm de sitter case one can start by choosing the functions xmath238 and xmath241 so that xmath1042 for xmath1043 and xmath1044 for xmath1045 so that xmath368 in is well defined in a neighborhood of xmath164 and moreover one can choose xmath238 and xmath241 so that xmath1046 is timelike in xmath1047 indeed this is satisfied provided xmath1048 we note that in xmath1049 we can take xmath223 to be large and negative and then at xmath313 we obtain xmath1050 therefore xmath326 is future timelike for xmath518 near xmath165 then more precisely in xmath1051 we can arrange for xmath1052 to be timelike again and since xmath1053 has the opposite sign we find that xmath1054 ie xmath1052 is future timelike there in order to cap off the problem in xmath134 we again modify xmath1020 to a smooth function xmath1055 since we can hide all the possibly complicated structure of the extension when 
xmath1056 using complex absorption we simply choose xmath1055 such that xmath1057 see also the discussion following we can then extend the metric xmath3 past xmath360 by defining xmath1058 near xmath138 as in with xmath1020 replaced by xmath1055 and with xmath1059 we can then arrange xmath1052 to be future timelike in xmath652 and xmath273 is future timelike at xmath1060 by a computation analogous to we can now define spacelike hypersurfaces xmath1061 exactly as in bounding a domain with corners xmath153 inside xmath1062 and we will analyze the wave equation modified in xmath134 on the compactified region xmath1063 we further let xmath685 xmath544 since it simplifies a number of computations below we will study the null geodesic flow of xmath1064 ie the flow of xmath1065 within the characteristic set xmath1066 where xmath83 denotes the dual metric function by pasting xmath1052 in xmath327 xmath326 in xmath1067 and xmath1046 in xmath1068 together using a non negative partition of unity we can construct a smooth globally future timelike covector field xmath1069 on xmath150 and use it to split the characteristic set into components xmath136 as in since the global dynamics of the null geodesic flow in a neighborhood xmath1068 of the exterior region are well known with saddle points of the flow generalized radial sets at xmath1070 where we define xmath1071 and a normally hyperbolically trapped set xmath511 as in parts of the discussion in subsecrndsflow it is computationally convenient to work with xmath1072 instead of xmath368 near xmath112 where xmath1073 ie effectively putting xmath1074 let xmath422 and write b covectors as xmath1075 then the dual metric function reads xmath1076 rmkkdsflowvalidcoord valid coordinates near the poles xmath1077 are xmath1078 and writing xmath1079 one finds xmath1080 and xmath1081 thus to see the smoothness of xmath1082 near the poles one merely needs to rewrite xmath1083 as xmath1084 and notice that xmath1085 is smooth as is xmath1086 since this is simply the dual metric function on xmath394 in spherical coordinates we study the rescaled hamilton flow near xmath120 using the coordinates and introducing xmath1087 xmath1088 xmath1089 xmath1090 as the fiber variables similarly to thus xmath1091 and we find that at xmath434 where xmath1092 xmath1093 and thus the quantity controlling the threshold regularity at xmath120 is xmath1094 furthermore if we put xmath1095 then xmath1096 so the quadratic defining function xmath1097 of xmath120 within xmath1098 satisfies xmath1099 as in the reissner de sitter case this implies that xmath120 is a source or sink within xmath1100 with a stable or unstable manifold xmath1101 transversal to the boundary for xmath1102written as one can check that xmath1103 if and only if xmath1104 ie if and only if xmath1105 in xmath1106 the quantity xmath1107 therefore has a sign which is the same as in the discussion around depending on the component of the characteristic set thus null geodesics in a fixed component xmath136 of the characteristic set can only cross xmath112 in one direction furthermore in the regions where xmath1108 and thus xmath273 is timelike we have xmath1109 see also since we will place complex absorption immediately beyond xmath6 ie in xmath1110 for xmath1111 very small it remains to check that at finite values of xmath368 in this region all null geodesics escape either to xmath73 or to xmath333 but this follows from the timelike nature of xmath1046 there which gives that xmath1112 is non zero in fact bounded away from zero 
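as a concrete companion to the non degeneracy condition and the horizon quantities used above, the following minimal python sketch finds the horizon radii and surface gravities numerically. it assumes the boyer lindquist form of the radial function, delta_r(r) = (r^2 + a^2)(1 - lambda r^2/3) - 2 m r + q^2 (reissner nordström de sitter for a = 0, kerr de sitter for q = 0), and the textbook surface gravity formula; for a != 0 the overall time normalization factor involving 1 + lambda a^2/3 is omitted, and all parameter values are illustrative rather than taken from the text.

import numpy as np

# hedged sketch: horizon radii and surface gravities for the assumed radial
# function Delta_r(r) = (r^2 + a^2)(1 - Lambda r^2 / 3) - 2 M r + Q^2
Lambda, M, Q, a = 0.01, 1.0, 0.999, 0.0   # illustrative near-extremal values

# Delta_r as a polynomial in r:
# -(Lambda/3) r^4 + (1 - Lambda a^2/3) r^2 - 2 M r + (a^2 + Q^2)
coeffs = [-Lambda / 3.0, 0.0, 1.0 - Lambda * a**2 / 3.0, -2.0 * M, a**2 + Q**2]
roots = np.roots(coeffs)

# non degeneracy in the sense used above: three simple positive roots,
# i.e. the cauchy, event and cosmological horizon radii
horizons = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print("horizon radii r1 < r2 < r3:", horizons)

# surface gravities kappa_j = |Delta_r'(r_j)| / (2 (r_j^2 + a^2)); the
# time normalization factor for a != 0 is deliberately dropped here
dcoeffs = np.polyder(coeffs)
for r in horizons:
    kappa = abs(np.polyval(dcoeffs, r)) / (2.0 * (r**2 + a**2))
    print("r = %.4f   kappa = %.5f" % (r, kappa))

ratios of such surface gravities, together with the expansion rate at the photon sphere, are the kind of quantities entering the gap and threshold computations above; the sketch is only meant to make the non degeneracy assumption and the near extremal limit easy to explore numerically.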
to summarize the global behavior of the null geodesic flow in xmath531 is the same as that of the reissner nordstrm de sitter solution see figure figrndsflow we point out that the existence of an ergoregion is irrelevant for our analysis its manifestation is merely that null geodesics tending to say the event horizon xmath7 in the backward direction may have a segment in xmath562 before possibly crossing the event horizon into xmath563 see also figure 8 we use a complex absorbing operator xmath152 as in subsecrndsregularity with xmath1113 on xmath136 and which is elliptic in xmath1114 xmath1110 where xmath1111 is chosen sufficiently small to ensure that the dynamics near the generalized radial set xmath166 control the dynamics in xmath1115 that is null geodesics near either tend to xmath166 or enter the elliptic region of xmath133 ie xmath1116 in finite time unless they cross xmath1117 ie xmath333 or xmath6 the analysis in subsecrndsregularitysubsecrndsconormal now goes through mutatis mutandis for completeness we note that the threshold quantity xmath947 see for small xmath24 is given by xmath1118 in fact to prove conormal regularity we can use the same module generators as those constructed in the proof of lemma lemmarndsmodule and the b version see the discussion around proposition propbconormal goes through without changes as well rmkkdsrescarter there exists a second order carter operator xmath1119 with principal symbol given by xmath1120 in that commutes with xmath1064 concretely in the coordinates used in which are valid near xmath6 xmath1121 since xmath1122 and xmath1123 commute with xmath1064 and since moreover the sum of the first two terms of xmath1124 is an elliptic operator on xmath394 we conclude commuting xmath1124 through the equation xmath1125 xmath531 that xmath12 is smooth in xmath10 and the angular variables thus we can deduce conormal regularity apart from iterative regularity under application of xmath1126 for xmath12 using such commutation arguments as well note however that the existence of such the hidden symmetry xmath1124 is closely linked to the complete integrability of the geodesic flow on kerr de sitter space while the microlocal argument proving conormality applies in much more general situations and different contexts see eg xcite we content ourselves with stating the analogues of theorem thmrndspartialasympconormal and corollary corrndsboundedness in the kerr de sitter setting thmkdspartialasympconormal suppose the angular momentum xmath15 is very small such that there exists xmath26 with the property that the forward problem for the wave equation in the neighborhood xmath1068 of the domain of outer communications has no resonances in xmath1127 other than the simple resonance at xmath57 let xmath12 be the forward solution of xmath944 then xmath12 has a partial asymptotic expansion xmath864 with xmath30 and xmath657 near xmath73 xmath660 away from xmath73 and xmath945 for all xmath859 and all vector fields xmath946 which are tangent to the cauchy horizon xmath6 here xmath947 is given by in particular there exists a constant xmath28 such that xmath865 and xmath12 is uniformly bounded in xmath32 again the same result holds without the constant term xmath40 for solutions of the massive klein gordon equation xmath948 xmath39 small rmkkdsresgeneral our arguments go through for general non degenerate kerr de sitter spacetimes assuming the resolvent family xmath168 admits a meromorphic continuation to the complex plane with polynomially lossy high energy estimates 
in a strip below the real line and the only resonance quasi normal mode in xmath793 is a simple resonance at xmath97 mode stability apart from the mode stability these conditions hold for a large range of spacetime parameters xcite while the mode stability has only been proved for small xmath24 for the kerr family of black holes mode stability is known see xcite without the mode stability assumption we still obtain a resonance expansion for linear waves up to the cauchy horizon but boundedness does not follow due to the potential existence of resonances in xmath789 or higher order resonances on the real line if such resonances should indeed exist then boundedness would in fact be false for generic forcing terms or initial data if on the other hand one assumes that the wave xmath12 decays to a constant at some exponential rate xmath26 in the black hole exterior region the conclusion of theorem thmkdspartialasympconormal still holds the analysis in secrnds and seckds relies on the propagation of singularities in b sobolev spaces of variable order in fact we only use microlocal elliptic regularity and real principal type propagation on such spaces we recall some aspects of appendix a needed in the sequel and refer the reader to xcite for the proofs of elliptic regularity and real principal type propagation in this setting since all arguments presented there are purely symbolic they go through in the b setting with purely notational changes moreover we remark that adding constant weights to the variable order b spaces does not affect any of the arguments we use sobolev orders which vary only in the base not in the fiber in order to introduce the relevant notation we consider the model case xmath1128 of a manifold with boundary and an order function xmath1129 constant outside a compact set recalling the symbol class xmath1130 we then define xmath1131 now xmath1132 for xmath1133 provided xmath1134 due to derivatives falling on xmath1135 producing logarithmic terms therefore we can quantize symbols in xmath1136 we denote the class of quantizations of such symbols by xmath1137 we will only work with xmath1138 xmath1139 in which case one can in particular transfer this space of operators to a manifold with boundary and obtain a b pseudodifferential calculus see xcite for the analogous case of manifolds without boundary thus if xmath1140 and xmath1141 for two order functions xmath1142 then xmath1143 where xmath61 denotes the principal symbol in the respective classes of operators the principal symbol of an element in xmath1144 is well defined in xmath1145 furthermore we have xmath1146inpsimathrmb1deltadeltamathsfsmathsfs1 2delta quad sigmaia bhsigmaasigmab for the purposes of the analysis in subsecrndsfredholm we need to describe the relation of variable order b sobolev spaces to semiclassical function spaces via the mellin transform we work locally in xmath1147 and the variable order function is xmath1148 with xmath602 constant outside a compact set fixing a real number xmath1149 and an elliptic dilation invariant operator xmath1150 xmath1139 the norm on xmath1151 is given by xmath1152 and all choices of xmath1153 and xmath909 give equivalent norms this follows from elliptic regularity since the xmath1154part of the norm is irrelevant in a certain sense it is only there to take care of a possible kernel of xmath909 we focus on the seminorm xmath1155 we concretely take xmath909 to be the left quantization of xmath1156 writing b1forms as xmath1157 denote the mellin transform of xmath12 in xmath75 by 
xmath1158 and the fourier transform of xmath1159 in xmath1160 by xmath1161 is the fourier transform of xmath12 in xmath1162 where xmath1163 then by plancherel xmath1164 where xmath1165 using xmath1166 we can rewrite this integral as xmath1167 this suggests defvariablescl for xmath1168 constant outside a compact set define the semiclassical sobolev space xmath1169 xmath712 by the norm xmath1170 where xmath1171 is a real number the particular choice of the value of xmath1153 is irrelevant see remark rmkvariablesclnorm where we also give a better invariant version of definition defvariablescl thus xmath1172 as a space but the semiclassical space captures the behavior of the norm as xmath1173 we remark that the space xmath1174 becomes weaker as one increases xmath1175 or decreases xmath602 rmkvariablesclconstorders if xmath1176 and xmath1177 are constants we can use the equivalent norm xmath1178 using and taking the xmath1179term in into account we thus have an equivalence of norms xmath1180 the semiclassical analogues of the symbol spaces which are adapted to working with the spaces xmath1174 are defined by xmath1181 with xmath1182 independent of xmath702 in our application differentiation in xmath1160 or xmath899 will in fact at most produce a logarithmic loss ie will produce a factor of xmath1183 or xmath1184 for us the main example of an element in xmath1185 is the symbol xmath1186 quantizations of symbols in xmath1185 are denoted xmath1187 and for xmath1188 and xmath1189 we have xmath1190 and xmath1191 in psih1deltadeltamathsfsmathsfs1 2deltamathsfwmathsfw2delta with principal symbols given by the product resp the poisson bracket of the respective symbols here the principal symbol of an element of xmath1187 is well defined in xmath1192 rmkvariablesclnorm using elliptic regularity in the calculus xmath1193 we see that given xmath1194 xmath1195 we have xmath1196 if and only if xmath1197 where xmath1188 is a fixed elliptic operator ie we have an equivalence of norms xmath1198 we next discuss microlocal regularity results for variable order operators general references for such results in the constant order semiclassical setting are xcite and xcite working on a compact manifold xmath79 without boundary now xmath1199 suppose we are given a semiclassical psdo semiclassical elliptic regularity takes the following quantitative form on variable order spaces propvariablesclelliptic if xmath1201 are such that xmath1202 the semiclassical elliptic set of xmath1203 and xmath83 is elliptic on xmath1204 then xmath1205 for any fixed xmath1153 this follows from the usual symbolic construction of a microlocal inverse of xmath1203 near xmath1204 the semiclassical real principal type propagation of singularities requires a hamilton derivative condition on the orders xmath1206 of the function space let xmath1200 with real valued semiclassical principal symbol xmath1207 ie xmath1208 is a classical symbol which we assume for simplicity to be xmath702independent let xmath1209 be the rescaled hamilton vector field with xmath1210 is homogeneous of degree xmath95 in the fibers of xmath898 away from the zero section thus xmath1211 is homogeneous of degree xmath97 modulo vector fields vanishing at fiber infinity and can thus be viewed as a smooth vector field on the radially compactified cotangent bundle xmath1212 at fiber infinity xmath1213 the xmath1211 flow is simply the rescaled hamilton flow of the homogeneous principal part of xmath1208 while at finite points xmath1214 xmath1211 is proportional to the 
semiclassical hamilton vector field propvariablesclpropagation under these assumptions let xmath1215 be order functions and let xmath1216 be open suppose xmath1217 and xmath1218 in xmath1219 suppose xmath1220 are such that xmath83 is elliptic on xmath1221 and all backward null bicharacteristics of xmath1203 from xmath1222 enter xmath1223 while remaining in xmath1224 then xmath1225 for any fixed xmath1153 for xmath1226 this gives the usual estimate of xmath13 in xmath1227 in terms of xmath1228 in xmath1229 losing xmath94 derivative and xmath94 power of xmath702 relative to the elliptic setting the proof is almost the same as that of proposition a1 so we shall be brief since the result states nothing about critical points of the hamilton flow we may assume xmath1230 on xmath1219 at xmath1231 this means that xmath1211 is non radial let xmath1232 let us first prove the propagation at fiber infinity introduce coordinates xmath1233 on xmath1231 xmath1234 centered at xmath51 such that xmath1235 and suppose xmath1236 and the neighborhood xmath1237 of xmath1238 are such that xmath1239q1timesoverlineusubset u suppose we have a priori xmath1174regularity in xmath1240q1timesoverlineu ie xmath1241 is elliptic there we use a commutant omitting the necessary regularization in the weight xmath96 for brevity xmath1242 where xmath1243 xmath1244 for xmath1245 xmath1246 for xmath1247 with xmath1248 large and xmath1249 near xmath1250 xmath1251 near xmath1252 moreover xmath1253 xmath1254 we then compute xmath1255 now xmath1256 giving rise to the main good term while the xmath1257 term which has the opposite sign is supported where one has a priori regularity the term on the second line can be absorbed into the first by making xmath1258 large since xmath1259 can then be dominated by a small multiple of xmath1260 while the last two terms have the same sign as the main term by our assumptions on xmath602 and xmath1175 a positive commutator computation a standard regularization argument and absorbing the contribution of the imaginary part of xmath1203 by making xmath1258 larger if necessary gives the desired result for the propagation within xmath898 a similar argument applies we use local coordinates xmath1233 in xmath898 with xmath1261 now centered at xmath51 so that xmath1235 and the differentiability order xmath602 becomes irrelevant now as we are away from fiber infinity thus we can use the commutant xmath1262 with xmath674 exactly as above and xmath1263 localizing near xmath97 the positive commutator argument then proceeds as usual returning to we observe that for xmath1264 we can apply this proposition to xmath1265 under the single condition xmath1217 which is the same condition as for real principal type propagation for xmath12 in b sobolev spaces as it should finally we point out that completely analogous results hold for weighted b sobolev spaces xmath646 and their semiclassical analogues the only necessary modification is that now we have to restrict the mellin dual variable to xmath75 called xmath61 here to xmath682 since the mellin transform in xmath75 induces an isometric isomorphism xmath1266we briefly recall supported and extendible distributions on manifolds with boundary following appendix b the model case is xmath1267 and we consider sobolev spaces with regularity xmath45 for notational brevity we omit the factor xmath1268 thus we let xmath1269 called xmath41 space with supported xmath1270 resp extendible xmath1271 character at the boundary xmath1272 the hilbert norm on the supported spaceis 
defined by restriction from xmath41 while the hilbert norm on the extendible space comes from the isomorphism xmath1273bullet since the supported space on the right hand side is a closed subspace of xmath1274 we immediately get an isometric extension map xmath1275 which identifies xmath1276 with the orthogonal complement of xmath1277bullet in xmath1274 thus xmath1278 the dual spaces relative to xmath43 are given by xmath1279 we now discuss the case of codimension xmath1280 corners which is all we need for our application treating the case of higher codimension corners requires purely notational changes we work locally on xmath1281 xmath1282 xmath1283 consider the domain xmath1284 which is a submanifold of with corners of xmath1285 again since the xmath1286 variables will carry through our arguments below we simplify notation by dropping them ie by letting xmath1287 let xmath45 there are two natural ways to define a space xmath1288 of distributions in xmath41 with extendible character at xmath1272 and supported character at xmath1289 which give rise to two a priori different norms and dual spaces namely xmath1290times0inftybulletbullet hs0inftytimes0inftybullet2 uin hs0inftytimesmathbbr colon operatornamesuppusubset0inftytimes0infty endsplit we equip the first space with the quotient topology and the second space with the subspace topology the first space is the space of restrictions to xmath1291 of distributions with support in xmath1292 while the second space is the space of extendible distributions in the half space xmath1293 which have support in xmath1291 see figure figsuppextdef middle left choice xmath94 right choice xmath1280 the supports of elements of the spaces that xmath1294 resp xmath1295 are quotients resp subspaces of are shaded the xmath97 indicates the vanishing condition in the definition of xmath1295 as in the case of manifolds with boundary discussed above both spaces come equipped with isometric by the definition of their norms extension operators xmath1296 with xmath1297 and xmath1298 contained in the space xmath1299 of distributions in xmath1300 with support in xmath1301 see figure figsuppextmaps we can thus also describe the second variant of xmath1302 equivalently as the quotient xmath1303timesmathbbrbullet furthermore the dual spaces are isometric to xmath1304bulletbulletendaligned ie dualizing switches choices xmath94 and xmath1280 for the definition of the mixed supported and extendible spaces since xmath1307 is an isomorphism if and only if the dual map xmath1310 is an isomorphism it suffices to consider the case xmath1311 in view of the characterizations and of the two versions of xmath1302 as quotients equipped with the quotient norm it suffices to prove the existence of a bounded linear map xmath1312 with xmath1313 the idea is to use the fact that for integer xmath1314 xmath1315spaces of extendible distributions are intrinsically defined thus for xmath1316 the restriction xmath1317 to the lower half plane is an element of xmath1318 with support in xmath1319timesinfty0 but then we can use an extension map xmath1320 defined using reflections and rescalings see xcite which in addition preserves the property of being supported in xmath1321 we can then define the map xmath1322 on xmath41 by xmath1323 for all integer xmath1324 by interpolation the same map in fact works for all xmath1325 since xmath1001 can be chosen arbitrarily this proves the existence of a map for any fixed real xmath44
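stepping back to the mellin transform identity on which the variable order spaces above are built, the plancherel relation used there is easy to check numerically after the substitution x = e^t, under which the mellin transform with respect to the measure dx/x becomes an ordinary fourier transform in t. the sketch below is a minimal illustration of this correspondence; the normalization conventions (signs and factors of 2 pi) are assumptions made for the sketch and need not match the ones used in the text.

import numpy as np

# hedged check of the mellin/fourier correspondence: with x = e^t the mellin
# transform of u on (0, infinity) with respect to dx/x is the fourier
# transform of u(e^t) in t, and plancherel reads
#   int_0^infty |u(x)|^2 dx/x = (1/(2 pi)) int |(M u)(sigma)|^2 dsigma
t = np.linspace(-20.0, 20.0, 2**16, endpoint=False)
dt = t[1] - t[0]
u = np.exp(-t**2)                      # a bump in t = log(x)

lhs = np.sum(np.abs(u)**2) * dt        # squared L^2 norm with respect to dx/x = dt

U = np.fft.fft(u) * dt                 # discrete approximation of the transform
dsigma = 2.0 * np.pi / (t.size * dt)   # spacing of the dual variable sigma
rhs = np.sum(np.abs(U)**2) * dsigma / (2.0 * np.pi)

print(lhs, rhs)                        # the two numbers agree up to rounding

for the weighted spaces one simply shifts the contour, restricting the dual variable to a horizontal line as described at the end of the discussion above.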
we show that linear scalar waves are bounded and continuous up to the cauchy horizon of reissner nordström de sitter and kerr de sitter spacetimes and in fact decay exponentially fast to a constant along the cauchy horizon we obtain our results by modifying the spacetime beyond the cauchy horizon in a suitable manner which puts the wave equation into a framework in which a number of standard as well as more recent microlocal regularity and scattering theory results apply in particular the conormal regularity of waves at the cauchy horizon which yields the boundedness statement is a consequence of radial point estimates which are microlocal manifestations of the blue shift and red shift effects
introduction reissner nordström de sitter space kerr de sitter space variable order b-sobolev spaces supported and extendible function spaces on manifolds with corners
electronically tuned microwave oscillators are key components used in a wide variety of microwave communications systems xcite the phase of the output signal exhibits fluctuations in time about the steady state oscillations giving rise to phase noise a very important characteristic that influences the overall performance especially at higher microwave frequencies in order to understand the oscillator phase behaviour a statistical model for a non linear oscillating circuit has to be developed and presently no accurate theoretical model for phase noise characterization is available because of the particularly difficult nature of this problem this is due to the hybrid nature of non linear microwave oscillator circuits where distributed elements pertaining usually to the associated feeding or resonator circuits and non linear elements pertaining usually to the amplifying circuit have to be dealt with simultaneously xcite the main aim of this report is to establish a theoretical framework for dealing with the noise sources and non linearities present in these oscillators introduce a new methodology to calculate the resonance frequency and evaluate the time responses waveforms for various voltages and currents in the circuit without or with the noise present once this is established the phase noise spectrum is determined and afterwards the validity range of the model is experimentally gauged with the use of different types of microwave oscillators xcite this report is organised in the following way section ii covers the theoretical analysis for the oscillating circuit reviews noise source models and earlier approaches section iii presents results of the theoretical analysis and highlights the determination of the resonance frequency for some oscillator circuits without noise in section iv phase noise spectra are determined for several oscillator circuits and section v contains the experimental results the appendix contains circuit diagrams and corresponding state equations for several non linear oscillator circuits in standard microwave analysis it is difficult to deal with distributed elements in the time domain and difficult to deal with non linear elements in the frequency domain non linear microwave oscillator circuits have simultaneously non linear elements in the amplifying part and distributed elements in the resonating part non linearity is needed since it is well known that only non linear circuits have stable oscillations before we tackle in detail the determination of the phase noise let us describe the standard procedure for dealing with the determination of resonance frequency of non linear oscillator circuits the first step is to develop a circuit model for the oscillator device and the tuning elements the equivalent circuit should contain inherently noiseless elements and noise sources that can be added at will in various parts of the circuit this separation is useful for pinpointing later on the precise noise source location and its origin xcite the resulting circuit is described by a set of coupled non linear differential equations that have to be written in a way such that a linear sub circuit usually the resonating part is coupled to another non linear sub circuit usually the oscillating part the second step is the determination of the periodic response of the non linear circuit the third step entails performing small signal ac analysis linearization procedure around the operating point the result of the ac analysis is a system matrix which is ill conditioned since a large discrepancy of
frequencies are present simultaneously one has a factor of one million in going from khz to ghz frequencies the eigenvalues of this matrix have to be calculated with extra care due to the sensitivity of the matrix elements to any numerical roundoff xcite we differ from the above analysis by integrating the state equations directly with standard non standard runge kutta methods adapted to the non stiff stiff system of ordinary differential equations the resonance frequency is evaluated directly from the waveforms and the noise is included at various points in the circuit as johnson or shot noise this allows us to deal exclusively with time domain methods for the noiseless noisy non linear elements as well as the distributed elements the latter are dealt with through an equivalence to lumped elements at a particular frequency as far as point 3 is concerned the linearization procedure method is valid only for small signal analysis whereas in this situation we are dealing with the large signal case previously several methods have been developed in order to find the periodic response the most well established methods are the harmonic balance and the piecewise harmonic balance methods xcite schwab xcite has combined the time domain for the non linear amplifier part with the frequency domain for the linear resonating part methods and transformed the system of equations into a boundary value problem that yields the periodic response of the system for illustration and validation of the method we solve 6 different oscillator circuits the appendix contains the circuit diagrams and the corresponding state equations the standard van der pol oscillator the amplitude controlled van der pol oscillator the clapp oscillator the colpitts oscillator model i oscillator model ii oscillator we display the time responses waveforms for various voltages and currents in the attached figures for each of the six oscillators all oscillators reach periodic steady state almost instantly except the amplitude controlled van der pol acvdp and the colpitts circuits for instance we need typically several thousand time steps to drive the acvdp circuit into the oscillatory steady state whereas several hundred thousand steps are required for the colpitts circuit typically the rest of the circuits studied reached the periodic steady state in only less a couple of hundred steps once the oscillating frequency is obtained device noise is turned on and its effect on the oscillator phase noise is evaluated all the above analysis is performed with time domain simulation techniques finally fourier analysis is applied to the waveform obtained in order to extract the power spectrum as a function of frequency very long simulation times on the order of several hundred thousand cycles are needed since one expects inverse power law dependencies on the frequency xcite we use a special stochastic time integration method namely the 2s2o2 g runge kutta method developed by klauder and peterson and we calculate the psd power spectral density from the time series obtained it is worth mentioning that our methodology is valid for any type of oscillator circuit and for any type of noise additive white as it is in johnson noise of resistors mutiplicative and colored or xmath0 with xmath1 arbitrary as it is for shot noise stemming from junctions or imperfections inside the device in addition the approach we develop is independent of the magnitude of the noise regardless of the noise intensity we evaluate the time response and later on the power spectrum 
without performing any perturbative development whatsoever recently kärtner xcite developed a perturbative approach to evaluate the power spectrum without having to integrate the state equations his approach is valid for weak noise only and is based on an analytical expression for the power spectrum nevertheless one needs to evaluate numerically one fourier coefficient xmath2 on which the spectrum depends microwave oscillators are realised using a very wide variety of circuit configurations and resonators we plan to design fabricate and test microstrip oscillators with gaas mesfet devices with coupled lines and ring resonators xcite the measured phase noise of these oscillators will be compared with the theoretical prediction from the above analysis we also plan to apply the above analysis to the experimental phase results obtained from various electronically tuned oscillators that have been already published in the literature xcite acknowledgments the author would like to thank f x kärtner and w anzill for sending several papers reports and a thesis that were crucial for the present investigation thanks also to so faried who made several circuit drawings and s kumar for suggesting two additional circuits model i and ii to test the software v güngerich f zinkler w anzill and p russer noise calculations and experimental results of varactor tunable oscillators with significantly reduced phase noise ieee transactions on microwave theory and techniques mtt43 278 1995 s heinen j kunisch and i wolff a unified framework for computer aided noise analysis of linear and non linear microwave circuits ieee transactions on microwave theory and techniques mtt39 2170 1991 in addition we have xmath15 state space equations of clapp oscillator xmath16
$$\begin{aligned} \frac{dv_{CTE}}{dt} &= \frac{1}{C_{TE}}\left(i_P - \frac{v_{CTE}-v_{CE}}{R_E}\right)\\ \frac{dv_{CTA}}{dt} &= \frac{1}{C_{TA}}\left(i_P - \frac{v_{CTA}-v_{CA}}{R_A}\right)\\ \frac{dv_{CA}}{dt} &= \frac{1}{C_{A}}\left(i_Q + \frac{v_{CTA}-v_{CA}}{R_A} - \frac{v_{CA}}{R_L}\right)\\ \frac{di_P}{dt} &= j_P \end{aligned}$$
state space equations of colpitts oscillator xmath22
$$\begin{aligned} \frac{dv_2}{dt} &= \frac{1}{C_2}\left(i_2 - i_B + \frac{v_0-v_2-v_3}{R_2}\right)\\ \frac{dv_3}{dt} &= \frac{1}{C_3}\left(i_B + \frac{v_0-v_2-v_3}{R_2}\right)\\ \frac{dv_4}{dt} &= \frac{v_1-v_4}{R_L C_4} \end{aligned}$$
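complementing the state equations above, the following minimal python sketch illustrates the time domain workflow described in this report: integrate the noisy oscillator equations, discard the transient, and extract the power spectrum by fourier analysis. it uses the standard van der pol oscillator with an additive white noise source and a plain euler maruyama step rather than the 2s2o2 g runge kutta method mentioned in the text; all parameter values are illustrative and not taken from the report.

import numpy as np

# hedged sketch: noisy van der pol oscillator integrated in the time domain,
# followed by an fft based estimate of the power spectral density.
# model: x'' - eps (1 - x^2) x' + w0^2 x = sqrt(2 D) * white noise
eps, w0, D = 0.2, 2.0 * np.pi, 1e-3
dt, nsteps, ntrans = 1e-3, 1 << 20, 1 << 16

rng = np.random.default_rng(0)
x, v = 1.0, 0.0
xs = np.empty(nsteps)
for n in range(nsteps):
    a = eps * (1.0 - x * x) * v - w0**2 * x      # deterministic force
    dW = rng.normal(0.0, np.sqrt(dt))            # brownian increment
    x, v = x + v * dt, v + a * dt + np.sqrt(2.0 * D) * dW
    xs[n] = x

xs = xs[ntrans:]                                 # drop the initial transient
win = np.hanning(xs.size)
X = np.fft.rfft(xs * win)
freqs = np.fft.rfftfreq(xs.size, d=dt)
psd = np.abs(X)**2 * dt / np.sum(win**2)         # one sided psd estimate
print("estimated oscillation frequency:", freqs[1:][np.argmax(psd[1:])])

averaging such periodograms over many noise realizations and examining the skirts of the spectral line is then one way to estimate the phase noise spectrum discussed above.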
we have developed a new methodology and a time domain software package for the estimation of the oscillation frequency and the phase noise spectrum of non linear noisy microwave circuits based on the direct integration of the system of stochastic differential equations representing the circuit our theoretical evaluations can be used in order to make detailed comparisons with the experimental measurements of phase noise spectra in selected oscillating circuits
introduction theoretical analysis time responses of non-linear oscillators and resonance frequency determination phase noise spectrum evaluation experimental verification
loop quantum gravity had never been considered a candidate of the unification of matter and gravity until a remarkable series of discoveries emerged recently first markopoulou and kribsxcite discovered that loop quantum gravity and many related theories of dynamical quantum geometry have emergent excitations which carry conserved quantum numbers not associated with geometry or the gravitational field around the same time bilson thompsonxcite found that a composite or preon model of the quarks leptons and vector bosons could be coded in the possible ways that three ribbons can be braided and twisted this suggested that the particles of the standard model could be discovered amidst the emergent braid states and their conserved quantum numbers associated with those of the standard model one realization of this was then given in xcite for a particular class of dynamic quantum geometry models based on 3valent quantum spin networks obtained by gluing trinions together these are coded in the knotting and braiding of the edges of the spin network they are degrees of freedom because of the basic result that quantum gravity or the quantization of any diffeomorphism invariant gauge theory has a basis of states given by embeddings up to diffeomorphisms of a set of labeled graphs in a spatial manifold indeed the role of the braiding of the edges of the graphs had been a mystery for many years however spin foam models in xmath0 dimensions involve embedded 4valent spin networksxcite it is then natural to ask if there are conservation laws associated with braids in 4valent spin networks besides quantum gravity with a positive cosmological cons tantxcite and quantum deformation of quantum gravityxcite suggest the framing of embedded spin networks in this paperwe extend the investigation of the braid excitations from the 3valent case to the 4valent case we study framed 4valent spin networks embedded in 3d due to the complexity of embedded 4valent spin networks to deal with the braid excitations of them we need a consistent and convenient mathematical formalism in this paper which is the first of a series of papers on the subject we first propose a new notation of the embedded framed 4valent spin networks and define what we mean by braids then discuss equivalence moves with the help of our notation which relate all diffeomorphic embedded 4valent graphs and form the graphical calculus of the kinematics of these graphs and at the end present a classification of the braids these results are key to our subsequent papers we focus on 3strand braids which are the simplest non trivial and interesting braid excitations living on embedded 4valent spin networks firstly we fix the notation namely a tube sphere notation we work in the category of framed graphs in particular the two dimentional projections representing embedded framed 4valent spin networks up to diffeomorphisms there is a single diffeomorphism class of nodes we therefore represent nodes by rigid 2spheres and edges by tubes such a node can be considered locally dual to a tetrahedron as shown in fig notationa if the spin nets are not framed we simply reduce tubes to lines but still keep spheres as nodes to fully characterize the embedding of a spin net in a 3manifold we assume that not only the nodes are rigid ie they can only be rotated or translated but also the positions on the node where the edges are attached are fixed this requirement andthe local duality ensures the non degeneracy of the nodes ie no more than two edges of a node are co planar for the 
convenience of calculation we simplify the tube sphere notation in fig notationa to fig notationb in which 1 the sphere is replaced by a solid circle 2 the two tubes in the front xmath1 and xmath2 in a are replaced by a solid line piercing through the circle in b and 3 the two tubes in the back xmath3 and xmath4 in a are substituted by xmath3 and xmath4 in b with a dashed line connecting them through the circle there is no loss of generality in taking this simplified notation because one can always arrange a node in the two states like fig notationb c by diffeomorphism before taking a projection due to the local duality between a node and a tetrahedron and the fact that all the four edges of a node are on an equal footing if we choose one of the four edges of a node at a time the other three edges are still on an equal footing in respect to a rotation symmetry with the specially chosen edge as the rotation axis eg the edge xmath3 in fig notationb c this rotation symmetry will be discussed in detail in the next section there could exist twists on embedded tubes eg the xmath5twist on the edge xmath3 with respect to the solid red dot shown in fig notationtwista note that we put twists in the unit of xmath5 for two reasons the first reason is that the possible states by which a node may be represented in a projection can be taken into each other by xmath5 rotations around one of the edges of the node this will become clear in section subsecrot by the local duality of a node to a tetrahedron these correspond to the xmath5 rotations that relate the different ways that two tetrahedra may be glued together on a triangular face these rotations create twists in the edges and as a result of the restriction on projections of nodes we impose set the twists in a projection of an edge of a spin network in units of xmath5 the other reason is that the least twist distinguishable from zero of a piece of tube in a projection is xmath5 and all higher twists distinguishable from each other in the projection must then be multiples of xmath5 because an edge is always between two nodes and a rotation of a node creates annihilates twists on its edges one usually needs to specify the fixed point on an edge with respect to which a twist is counted as shown in fig notationtwista in this manner the 1 unit of twist in fig notationtwista is obviously equivalent to that in notationtwistb which is the same amount of twist in the opposite direction on the other side of the fixed point interestingly both twists in fig notationtwista and b are right handed twists if one point his her right thumb to the node on the same sides of the fixed point as that of the twists therefore we can unambiguously assign the same value to them namely xmath6 unit of xmath5 this provides a way of simplifying the notation of twist ie we can simply label an edge with a left right handed twist a negative positive integer for example fig notationtwista and b can be replaced by notationtwistc without ambiguity recalling the rotation axis mentioned before one can assign states to a node with respect to its rotation axis if the rotation axis is an edge in the back we say the node is in state xmath7 or is simply called a xmath7node eg fig notationb with edge xmath3 as the rotation axis otherwise if the rotation axis is an edge in the back the node is called a xmath8node or in the state xmath8 eg fig notationc with edge xmath4 the results of this paper will refer to the case of framed spin networks defined above however unframed graphs are used in loop 
quantum gravity and it is useful to have results then for that case as well the particular notation of unframed graphs is obtained from the framed case discussed here by dropping information about twists of the edges which thus represent curves rather than tubes but keeping the nodes as rigid spheres locally dual to tetrahedra this is necessary so that the evolution moves are well defined for unframed embedded graphs which will be explained in the second of this series of papers in the rest of this paperwe refer always to the framed case results for the unframed case will be understood from those for the framed case by neglecting the twists of the edges unless we explicitly describe them equipped with the notation defined above we are interested in a type of topological structures as sub structures of embedded 4valent spin networks namely 3strand braids which are defined as follows defbraida 3strand braid or a braid for simplicity is a sub spinnet of an embedded 4valent spin network which is a three dimensional object formed by two nodes with three common edges now named strands the two nodes are called end nodes each of which has one and only one free edge called an external edge the two dimensional projections of these braids denoted in our notation are called braid diagrams a typical example of which is shown in fig braid the following conditions should be satisfied 1 if braids are arranged horizontally then the left right external edge of a braid can always be the left right most edge of the left right end node and always stretches to the left right which has no tangles with the strands for the left part of the braid diagram in fig notbraida as an example 2 what is captured between the two end nodes eg the region between the two dashed lines in fig braid should meet the definition of braid in the ordinary braid theory for the braid diagram in fig notbraidb as an example 3 the three strands of a braid are never tangled with any other edge of the spin net as illustrated in right side of the braid diagram in fig notbraida for example p we would like to emphasize that the braids defined above are 3d structures each of which has many diffeomorphic embeddings that are represented by their 2d projections in our notation as a result the 2d projections of many braids ie their braid diagrams which appear to be different are actually equivalent to each other in the sense of diffeomorphism the precise set of equivalence relations will be the topic of the next section bearing this in mind in the rest of the paper we are not going to distinguish braids from their braid diagrams unless an ambiguity arises these kinds of braids are different from the braids in the context of ordinary braid theory since the two end nodes of such a braid are topologically significant to the state of the braid these braid are stable under a certain stability condition regarding the evolution of spin nets which will be brought up in the companion paper however in this paper we focus only on the intrinsic properties of these braids or in other words the pure topological properties of the braids up to diffeomorphism ie without dynamic evolution to do so we need to first describe the non dynamical operations that can be applied to the embedded 4valent spin networks we can assign a number to a crossing according to its chirality viz xmath6 for a right handed crossing xmath9 for a left handed crossing and xmath10 otherwise assignment shows this assignment p such a scheme of assignment will become useful in the subsequent 
discussions as aforementioned the tube diagrams of an embedded spin network belong to different equivalence classes it is therefore obligatory to characterize these equivalence classes by equivalence relations to do so one needs to find the full set of local moves operating on the nodes and edges which do nt change the diffeomorphism class of the embedding of a diagram in the discussion below we work in the framed case in the unframed case one just ignores the twists an obvious set of equivalence moves consists of the usual three reidemeister moves framed or unframed whose details are not repeated here these moves will be applied without further notice more importantly there are two kinds of equivalence moves that can be peculiarly defined on an embedded 4valent spin net under which two diagrams in particular two braids that are related by a sequence of equivalence moves are thought to be equivalent the first kind composes of translation moves the second type of equivalence moves are rotations defined on the nodes we discuss translation moves first translation moves which are in fact extended reidemeister type moves involve not only the edges but also the nodes of an embedded spin net they reflect the translation symmetry of the embedded spin nets let us look at the simplest example first translationa shows a node xmath11 connected to other places of the network via its four edges red points represent attached points on other nodes one can slide the node xmath11 along its edge xmath12 to the left which leads to fig translationb this does not change anything of the topology of the embedded spin net translationx illustrates more complicated cases where a crossing is taken into account p in fig translationxa1 there is a node xmath11 and a crossing however since the crossing is between the edge xmath12 of node xmath11 and the edge xmath13 of some other node and node xmath11 together with all its edges are above edge xmath13 one can safely translate node xmath11 along edge xmath12 to the left passing the crossing which results in fig translationxa2 in which the crossing turns out to be between edge xmath13 and edge xmath14 this which is obviously a symmetry may be understood as a reidemeister move ii p apart from the translation symmetry there is also a rotation symmetry that gives rise to rotations defined on a node with respect to one of its four edges of an embedded spin net these rotations are not those with rigid metric but only the ones that change projections of an embedding without affecting diffeomorphisms as mentioned before xmath5 rotations take states representing a node in a projection into each other it is time to see in detail how these rotations affect a subgraph consisting of a node and its four edges pi3rot shows such a rotation in the case where the node is in a xmath7state with respect to the chosen rotation axis before imposing the rotation while fig pi3rot illustrates the opposite case p p a xmath5 rotation creates a crossing of two edges of the node and causes twists which are explicitly labeled on all edges of the node the twist number on the rotation axis of a node is always opposite to that of the other three edges of the node note that a xmath5rotation changes the state of a node as shown in figures pi3rot and pi3rot ie if the node is in state xmath7 before the rotation it becomes a xmath8node after the rotation this is the key to the first reason that we put the twists in an edge in units of xmath5 a xmath5 rotation relates two projections of an embedded spin network 
which belong to the same diffeomorphism class two consecutive xmath5 rotations certainly give rise to a xmath15 rotation however it is intuitive to understand xmath15 rotations in a more topological way obviously rotating a tetrahedron by an angle of xmath15 with respect to the normal of any of the four faces of the tetrahedron does not change the view of it therefore by the local duality between a node and a tetrahedron as long as an edge of a node is chosen the other three edges of the node are on an equal footing if we rotate a node with respect to any of its four edges by xmath15 the resulting diagram should be diffeomorphic to or in our context equivalent to the original one in fig 2pi3rot and fig 2pi3rot we list all the xmath15rotations p p each of such rotations generate two crossings and twists on all four edges the twist number on the rotation axis of a node is always opposite to that of the other three edges of the node note that a xmath15 rotation does not change the state of a node with respect to the rotation axis ie if a node is in state xmath7 with respect to xmath16 before the rotation it is still a xmath7node after the rotation the xmath5 and xmath15 rotations can be used to construct larger rotations for example the xmath17 rotations which also certainly do not change the diffeomorphism class a projection belongs to for the convenience of future use we depict these four possible rotations in fig pirot and fig pirot p p note that a xmath17 rotation changes the state of a node ie if a node is in state xmath7 with respect to its rotation axis before the rotation it becomes a xmath8node with respect to the same axis after the rotation xmath5 rotations are the smallest building blocks of all possible rotations they are thus the generators of all rotations this is illustrated in fig pi3rot through fig pirot each of which can be directly used in a graphic calculation recall that all the equivalence moves defined above are diffeomorphic operations on the embedded graphs as an example equibraids depicts two braids that can be deformed into each other by a xmath5 rotation of node 2 with respect to its external edge xmath16 we say these two braids are equivalent to each other ph note that for an end node of a braid only its external edge is allowed to be the rotation axis with respect to which the equivalence rotation moves are applied otherwise one may end up with a situation similar to fig badbraid which does not satisfy definition defbraid therefore although sub spinnets like fig badbraid are equivalent to well defined braids by rotation moves they are not to be investigated because they complicate the clear structure of braids and do not have any new interesting property thus for simplicity we only allow the external edge of an end node of a braid to be the rotation axis if a node is not an end node of a braid any of its four edges can be chosen as a rotation axis ph by looking carefully at the rotations and the crossings and twists generated accordingly one can find that the assignment of values to crossings shown in fig assignment is consistent with the assignment of values to twists shown in fig notationtwist given that the rotations and translations are well defined equivalence moves there should be a conserved quantity which is the same before and after the moves rotations create or annihilate twist and crossings simultaneously we thus define a composite quantity christened effective twist number of a rotation xmath18 where xmath19 is the twist number created by the 
rotation on an edge of the node xmath20 is the crossing number of a crossing created by the rotation between any two edges of the node and the factor of 2 comes from the fact that a crossing always involve two edges one can easily check that the rotations in fig pi3rot through fig pirot satisfy xmath21 that is rotations have a zero effective twist number therefore we can enlarge xmath22 to a more general quantity xmath23 the effective twist number of subdiagrams of an embedded spin net which are related by rotations of nodes by taking into account all the edges that are affected by rotations we define xmath24 where xmath19 is the twist number on an edge of the subdiagram xmath20 is the crossing number of a crossing in the subdiagram since xmath21 xmath23 is indeed a conserved quantity under rotations important examples of subdiagrams are braids which will become clear when we talk about propagation and interactions the effective twist number xmath23 in eq theta0 is also found to be preserved by translation moves note that the effective twist number is not defined in the unframed case simply because the unframed case has no notion of twists with the help of equivalence moves in particular the rotation moves we can classify all possible 3strand braids into two major types namely reducible braids and irreducible braids whose definitions are given below the aforementioned restriction that only the external edge of an end node of a braid can be the rotation axis of the node ensures the unambiguous assignments of states to the end nodes of a braid and keeps the classification of braids simple note that twists on edges are irrelevant to the calculation in the section they are thus neglected throughout the discussion nevertheless the results are valid for both framed and unframed cases for the purpose of classifying the braids we also consider braids as if they are isolated regions in a graph defredubraida braid is called a reducible braid if it is equivalent to a braid with fewer crossings otherwise it is irreducible the braid on the top part of fig equibraids is an example of a reducible braid whereas the one at the bottom of the figure is an irreducible braid to classify the braids in a convenient way we need a new notation and some auxiliary definitions since we have a way of assigning crossings integers xmath6 or xmath9 as in fig assignment we can use xmath25 matrices with two end nodes in either state xmath7 or xmath8 to denote a 3strand braid with xmath26 crossings as shown in fig matrixbraid and its caption keeping in mind that the state of an end node is and can only be with respect to its external edge for the purpose of calculation it is also convenient to associate crossings with one of the two end nodes of a braid for example in fig matrixbraid the left end node with its nearest crossing can be denoted by xmath27c1 0 endarray right and the right end node with its nearest two crossings can be written as xmath28cc0 0 1 1 endarray right ominus which has xmath28c0 1 endarray right ominus as its 1crossing sub end node end nodes represented in this way are called 1crossing end nodes 2crossing end nodes etc an end node without any crossingis christened a bare end node ph a braid can be decomposed into or recombined from a left end node a right end node and a bunch of crossings represented by matrices for instancexmath29ccc1 0 1 0 1 0 endarray right opluslongleftrightarrowoplusleft beginarray cc1 0 endarray right left beginarray cc1 0 endarray right left beginarray cccc1 0 1 0 1 0 endarray 
right oplus where the xmath30 between two matrices on the rhs means direct sum or concatenation of two pieces of braids one can see from the above equation that the first two crossings or the second and third crossings on the rhs are cancelled given this we have the following definition defredunodean xmath26crossing end node is said to be a reducible end node if it is equivalent to a xmath31crossing end node with xmath32 by equivalence moves done on the node otherwise it is irreducible the definition above gives rise to another definition of reducible braid which is equivalent to definition defredubraid defredubraid2a braid is said to be reducible if it has a reducible end node if a braid has a reducible left or right end node or both it is called left or right or two way reducible for consistency we may also symbolize the rotation moves because rotations are exerted only on the end nodes of a braid we can denote all possible moves by rotation operators xmath33 xmath34 xmath35 and xmath36 where the superscript xmath37 is the angle of rotation the first subscript xmath38 xmath39 reads that the operation is on the left right end node of a braid and the second subscript xmath38 xmath39 indicates that the direction of rotation is left right handed the left right handedness of the rotation is defined in such a way that if you grab the rotation axis of a node in your left right hand with the thumb pointing to the node your hand wraps up in the direction of rotation results of the rotation operators have been shown graphically in fig pi3rot through fig pirot here we show an example of the algebra in the following equationxmath40ccc1 0 0 0 1 1 endarray right ominusright oplusleft beginarray cccc1 0 0 0 1 1 endarray right rrrpi3left ominusright oplusleft beginarray cccc1 0 0 0 1 1 endarray right left beginarray cc1 0 endarray right oplus oplusleft beginarray ccccc1 0 0 1 0 1 1 0 endarray right oplusendaligned because a braid can be reduced only from its end nodes we first classify the irreducible end nodes we start from 1crossing end nodes all the possible ones are listed in table tb all1xnodes clllleft end nodes right end nodes the following equations then show how all the reducible 1crossing end nodes are reduced to bare nodesxmath41c1 0 endarray right right ominusleft beginarray cc1 0 endarray right left beginarray cc1 0 endarray right ominusleft beginarray cc0 0 endarray right labelreduce1x rllpi3left oplusleft beginarray cc0 1 endarray right right ominusleft beginarray cc0 1 endarray right left beginarray cc0 1 endarray right ominusleft beginarray cc0 0 endarray right nonumber rllpi3left ominusleft beginarray cc1 0 endarray right right oplusleft beginarray cc1 0 endarray right left beginarray cc1 0 endarray right oplusleft beginarray cc0 0 endarray right nonumber rlrpi3left ominusleft beginarray cc0 1 endarray right right oplusleft beginarray cc0 1 endarray right left beginarray cc0 1 endarray right oplusleft beginarray cc0 0 endarray right nonumber rrlpi3left left beginarray cc1 0 endarray right oplusright left beginarray cc1 0 endarray right left beginarray cc1 0 endarray right ominusleft beginarray cc0 0 endarray right ominusnonumber rrrpi3left left beginarray cc0 1 endarray right oplusright left beginarray cc0 1 endarray right left beginarray cc0 1 endarray right ominusleft beginarray cc0 0 endarray right ominusnonumber rrrpi3left left beginarray cc1 0 endarray right ominusright left beginarray cc1 0 endarray right left beginarray cc1 0 endarray right oplusleft beginarray cc0 0 endarray 
right oplusnonumber rrlpi3left left beginarray cc0 1 endarray right ominusright left beginarray cc0 1 endarray right left beginarray cc0 1 endarray right oplusleft beginarray cc0 0 endarray right oplusnonumberendaligned with the help of the above calculations we can easily list all the irreducible 1crossing end nodes in table tb irred1xnodes clllleft end nodes right end nodes now we consider 2crossing end nodes there are a total of 48 of this kind including left and right end nodes to find all the irreducible 2crossing end nodes we need only to think about those whose sub 1crossing nodes are irreducible since otherwise a 2crossing end node is already reducible this excludes 24 2crossing end nodes if a 2crossing node has an irreducible sub 1crossing node its crossings can definitely not be reduced by xmath42rotations because a xmath15rotation is made of two consecutive xmath43rotations that do not reduce any irreducible 1crossing node and it does not flip the state of a bare node interestingly however a 2crossing end node with an irreducible sub 1crossing end node may still be reduced to an irreducible 1crossing end node by xmath17rotations which can be seen from the following equationsxmath44cc1 0 0 1 endarray right right ominusleft beginarray cccc1 0 1 0 1 0 endarray right left beginarray ccc1 0 0 1 endarray right ominusleft beginarray cc1 0 endarray right labelreduce2x rlrpileft oplusleft beginarray ccc0 1 1 0 endarray right right ominusleft beginarray cccc0 1 0 1 0 1 endarray right left beginarray ccc0 1 1 0 endarray right ominusleft beginarray cc0 1 endarray right nonumber rlrpileft ominusleft beginarray ccc1 0 0 1 endarray right right oplusleft beginarray cccc1 0 1 0 1 0 endarray right left beginarray ccc1 0 0 1 endarray right oplusleft beginarray cc1 0 endarray right nonumber rllpileft ominusleft beginarray ccc0 1 1 0 endarray right right oplusleft beginarray cccc0 1 0 1 0 1 endarray right left beginarray ccc0 1 1 0 endarray right oplusleft beginarray cc0 1 endarray right nonumber rrrpileft left beginarray ccc0 1 1 0 endarray right oplusright left beginarray ccc0 1 1 0 endarray right left beginarray cccc1 0 1 0 1 0 endarray right ominusleft beginarray cc1 0 endarray right ominusnonumber rrlpileft left beginarray ccc1 0 0 1 endarray right oplusright left beginarray ccc1 0 0 1 endarray right left beginarray cccc0 1 0 1 0 1 endarray right ominusleft beginarray cc0 1 endarray right ominusnonumber rllpileft left beginarray ccc0 1 1 0 endarray right ominusright left beginarray ccc0 1 1 0 endarray right left beginarray cccc1 0 1 0 1 0 endarray right oplusleft beginarray cc1 0 endarray right oplusnonumber rlrpileft left beginarray ccc1 0 0 1 endarray right ominusright left beginarray ccc1 0 0 1 endarray right left beginarray cccc0 1 0 1 0 1 endarray right oplusleft beginarray cc0 1 endarray right oplusnonumberendaligned consequently we can list all the irreducible 2crossing end nodes in table tb irred2xnodes cclccl xmath27cc1 1 0 0 endarray right xmath28cc0 1 1 0 endarray right oplus xmath28cc1 1 0 0 endarray right oplus xmath27cc0 0 1 1 endarray right xmath28cc1 0 0 1 endarray right oplus xmath28cc0 0 1 1 endarray right oplus xmath45cc1 1 0 0 endarray right xmath28cc0 1 1 0 endarray right ominus xmath28cc1 1 0 0 endarray right ominus xmath45cc0 0 1 1 endarray right xmath28cc1 0 0 1 endarray right ominus xmath28cc0 0 1 1 endarray right ominus the following theorem states that there is no need to investigate end nodes with more crossings to see if they are irreducible theonxnodean 
xmath26crossing end node xmath46 which has an irreducible 2crossing sub end node is irreducible if the xmath26crossing end node has a irreducible 2crossing sub end node the two crossings nearest to the node are not reducible by either a single xmath43 or a single xmath15rotation on the node we may consider xmath47rotations on its 3crossing sub end node however if a 3crossing end node is reducible by a xmath17rotation it must contain a reducible 2crossing sub end node according to fig pirot fig pirot and eqreduce2x which is contradictory to the condition given in the theorem this is then true for all cases where xmath48 by simple induction therefore the theorem holds equipped with the knowledge of irreducible end nodes we are ready to classify braids the two end nodes of a braid are either in the same states or in opposite states we first take a look at braids whose end nodes are in the same states theo123xbraidsall xmath26crossing braids in the form xmath27ccc cdots cdots endarray right oplus and xmath45ccc cdots cdots endarray right ominus are reducible for xmath49 it suffices to prove the xmath50 case the case of xmath51 follows similarly or by symmetry 1 xmath52 there are only four possibilities namely xmath27cpm1 0 endarray right oplus and xmath27c0 pm1 endarray right oplus however they are all reducible because they all contain one reducible 1crossing end node according to eq reduce1x 2 xmath53 we first consider the braids formed by an irreducible 2crossing end node xmath54 or xmath55 and a bare end node xmath56 we do the following decompositionxmath57 then from table tb irred2xnodes it is readily seen that xmath58 and xmath59 are always reducible end nodes for any choice of xmath54 and xmath55respectively that is the braids formed this way are reducible we then consider braids formed by two irreducible 1crossing end nodes the first two rows in table tb irred1xnodes and eq reduce2x clearly shows that the result is either an unbraid or one with a reducible 2crossing end node 3 xmath60 we need only consider braids each of which is formed by the direct sum of a 2crossing irreducible end node and a 1crossing irreducible end node this can be done by taking the direct sum between the right left end nodes in the first two rows of table tb irred1xnodes and the left right end nodes in the first two rows of table tb irred2xnodes it is not hard to see that any resultant braid has merely two possibilities i two neighboring crossings are cancelled by the direct sum which leads to 1crossing braids that are proven to be reducible in the case of xmath52 and ii a crossing in the irreducible 2crossing end node is combined with the irreducible 1crossing end node to form a reducible 2crossing end node ie the braid is reducible theorem theo123xbraids does not cover the case where xmath61 which will be included in another theorem soon before that let us consider the braids whose end nodes are in opposite states xmath26crossing braids in the form xmath27ccc cdots cdots endarray right ominus and xmath45ccc cdots cdots endarray right oplus for xmath49 note that due to theoremtheo123xbraids the set of irreducible 1crossing braids to be found here represents the full set of irreducible braids for xmath49 regardless of the states of the end nodes 1 xmath52 an irreducible braid can only be made by an irreducible 1crossing end node and a bare node from table tb irred1xnodes there are only four options which are indeed all irreducible they are now listed in table tb irred1xbraids clll 2 it is sufficient to consider 
the braids formed by an irreducible 2crossing end node and a bare end node in the opposite state the reason is that if a 2crossing braid is irreducible its two 2crossing end nodes must be irreducible as well moreover if a 2crossing end node is irreducible its 1crossing sub end node is already irreducible therefore one can simply add to each irreducible end node in table tb irred2xnodes a bare end node in the opposite state to create an irreducible 2crossing braid being a bit redundant we list all the 16 irreducible 2crossing braids in table tb irred2xbraids cclccl xmath27cc1 1 0 0 endarray right ominus xmath45cc0 1 1 0 endarray right oplus xmath45cc1 1 0 0 endarray right oplus xmath27cc0 0 1 1 endarray right ominus xmath45cc1 0 0 1 endarray right oplus xmath45cc0 0 1 1 endarray right oplus xmath45cc1 1 0 0 endarray right oplus xmath27cc0 1 1 0 endarray right ominus xmath27cc1 1 0 0 endarray right ominus xmath45cc0 0 1 1 endarray right oplus xmath27cc1 0 0 1 endarray right ominus xmath27cc0 0 1 1 endarray right ominus 3 xmath60 a 3crossing braid in this case is irreducible if and only if it admits the following two decompositionsxmath62 where xmath30 is understood as the direct sum the proof of this claim follows immediately from theorem theonxnode it is time to summarize the case of xmath61 for xmath26crossing braids regardless of the states of the end nodes by the following theorem theonxbraidsa xmath26crossing braid for xmath61 is irreducible if and only if it admits the decompositionxmath63 where xmath30 is understood as the direct sum the only constraint of the arbitrary sequence of crossings is that its last crossing on each side does not cancel the neighboring crossing associated with the end node on the same side an irreducible 2crossing end node contains an irreducible 1crossing end node by theorem theonxnode if the above decomposition is admitted the braid is not reducible on either end node whatever the arbitrary sequence of crossings is up to the constraint therefore the theorem holds the braids that are interesting to us are those reducible ones which is shown in the companion paper thus we may make more detailed divisions in the type of reducible braids by the definition below given a reducible braid xmath3 a braid xmath64 obtained from xmath3 by doing as much reduction as possibleis called an extremum of xmath3 xmath3 may have more than one extrema but all the extrema have the same number of crossings we then have the following 1 if all extrema of xmath3 are unbraids ie braids with no crossing xmath3 is said to be completely reducible 2 if an extremum of xmath3 can be reached by equivalence moves exerted only on its leftright end node xmath3 is called extremely leftright reducible if xmath3 is also completely reducible xmath3 is then said to be completely leftright reducible note that completely left right reducible implies extremely left right reducible but not vice versa in general in this paper we proposed a new notation namely the tube sphere notation for embedded framed 4valent spin networks by means of this notation we discovered a type of topological structures the 3strand braids as sub diagrams of an embedded spin net equivalence moves including translations and rotations which divide projections of embeddings of spin networks into different equivalence classes are defined and discussed in detail the equivalence moves are important and useful in two aspects firstly by rotations we classify 3strand braids into two major types reducible braids and irreducible braids the 
former of which are further classified for the purpose of subsequent works secondly by equivalence moves one is able to carry out the calculation of braid propagation and interactions of embedded 4valent spin nets these results serve as foundations for the work in the companion paper and all our future work dealing with braid like excitations of embedded 4valent spin networks in another paper we will propose the evolution moves of embedded 4valent spin networks by which some of the reducible 3strand braids are able to propagate on the spin nets and interact with each other and provide a possible formulation of the dynamics of these local excitations the author is indebted to lee smolin the author s advisor for his great insight and heuristic discussions he is grateful to fotini markopoulou for her critical comments he appreciates the helpful discussions with isabeau premont schwarz aristide baratin and tomasz konopka gratitude must also go to sundance bilson thompson for his proofreading of the manuscript research at perimeter institute is supported in part by the government of canada through nserc and by the province of ontario through medt
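to make the bookkeeping behind the effective twist number of eq theta0 concrete the following minimal python sketch treats a subdiagram simply as a list of edge twist numbers and a list of crossing numbers and checks that appending the contribution of a rotation whose own effective twist vanishes leaves xmath23 unchanged the numerical values used in the example are illustrative placeholders chosen only to satisfy the zero effective twist constraint and are not values read off the figures of this paper

```python
def effective_twist(edge_twists, crossing_numbers):
    """effective twist number: sum of edge twists plus twice the sum of crossing numbers
    (the factor 2 reflects that every crossing involves two edges)."""
    return sum(edge_twists) + 2 * sum(crossing_numbers)

def apply_rotation(edge_twists, crossing_numbers, new_twists, new_crossings):
    """append the twists and crossings created by an equivalence rotation;
    a legal rotation contributes zero effective twist of its own."""
    assert effective_twist(new_twists, new_crossings) == 0, "not a valid rotation"
    return edge_twists + new_twists, crossing_numbers + new_crossings

# illustrative subdiagram and an illustrative rotation contribution
twists, crossings = [1, -1, 0], [1]
theta_before = effective_twist(twists, crossings)             # 0 + 2*1 = 2
twists, crossings = apply_rotation(twists, crossings, [1, 1, 1, 1], [-1, -1])
assert effective_twist(twists, crossings) == theta_before      # conserved under the move
```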
we propose a new notation for the states in some models of quantum gravity namely 4valent spin networks embedded in a topological three manifold with the help of this notation equivalence moves namely translations and rotations can be defined which relate the projections of diffeomorphic embeddings of a spin network certain types of topological structures viz 3strand braids as local excitations of embedded spin networks are defined and classified by means of the equivalence moves this paper formulates a mathematical approach to further research on particle like excitations in quantum gravity
introduction notation braids equivalence moves classification of braids conclusions & perspectives acknowledgements
quantum fluctuation can suppress chaotic motion of a wave packet in the phase space due to the quantum interference as seen in the kicked rotor xcite on the contrary the quantum fluctuation can enhance the chaotic motion of a wave packet due to the tunneling effect as seen in the kicked double well model xcite the relation between chaotic behavior and tunneling phenomena in classically chaotic systems is an interesting and important subject in the study of quantum physics xcite recently the semiclassical description of the tunneling phenomena in a classically chaotic system has been developed by several groups xcite lin and ballentine studied the interplay between tunneling and classical chaos for a particle in a double well potential with an oscillatory driving force xcite they found that coherent tunneling takes place between small isolated classical stable regions of phase space bounded by kolmogorov arnold moser kam surfaces which are much smaller than the volume of a single potential well hänggi and coworkers studied the chaos suppressed tunneling in the driven double well model in terms of the floquet formalism xcite they found a one dimensional manifold in the parameter space where the tunneling is completely suppressed by the coherent driving the time scale for the tunneling between the wells diverges because of the intersection of the ground state doublet of the quasienergies while the mutual influence of quantum coherence and classical chaos has been under investigation for many years the additional effects caused by coupling the chaotic system to other degrees of freedom dof or an environment namely decoherence and dissipation have been studied only rarely xcite as have the tunneling phenomena in the chaotic system since the mid eighties there have been some studies on environment induced quantum decoherence obtained by coupling the quantum system to a reservoir xcite recently quantum dissipation due to the interaction with chaotic dof has also been studied xcite in this paper we numerically investigate the relation between quantum fluctuation tunneling and decoherence combined with the delocalization in wave packet dynamics in a one dimensional double well system driven by a polychromatic external field before closing this section we refer to a study of a delocalization phenomenon induced by a perturbation with several frequency components in another model it has been reported that the kicked rotator model with a frequency modulated kick amplitude can be mapped to a tight binding lloyd model on a higher dimensional lattice in solid state physics under a very specific condition xcite then the number xmath0 of the incommensurate frequencies corresponds to the dimensionality of the tight binding system the problem can be efficiently reduced to a localization problem in xmath1 dimensions as seen in the case of kicked rotators we can also expect that in the double well system the coupling with the oscillatory perturbation is roughly equivalent to an increase in the effective degrees of freedom and a transition from a localized wave packet to a delocalized one is enhanced by the polychromatic perturbation the concrete confirmation of this naive expectation is one of the aims of this numerical work we present the model in the next section in sect 3 we show the details of the numerical results for the time dependence of the transition probability between the wells based on the quantum dynamics section 4 contains the summary and discussion furthermore in appendix a we give details of the classical phase space portraits in the polychromatically perturbed double
well system and some considerations on the effect of the polychromatic perturbation in appendix b a simple explanation of the perturbed instanton tunneling picture is given we consider a system described by the following hamiltonian xmath2 for the sake of simplicity xmath3 and xmath4 are taken as xmath5 xmath6 xmath7 in the present paper then xmath0 is the number of frequency components of the external field and xmath8 is the perturbation strength respectively xmath9 are of order unity and mutually incommensurate frequencies we choose off resonant frequencies which are far from both the classical and quantum resonances in the corresponding unperturbed problem the parameter xmath10 adjusts the distance between the wells and we set xmath11 to make some energy doublets below the potential barrier note that lin dealt with a double well system driven by a forced oscillator a duffing like model therefore the asymmetry of the potential plays an important role in the chaotic behavior and the tunneling transition between the symmetry related kam tori xcite however in our model the potential remains symmetric during the time evolution process and a mechanism different from the forced oscillation produces the classical chaotic behavior xcite in the previous paper xcite we presented numerical results concerning a classical and quantum description of the field induced barrier tunneling under the monochromatic perturbation xmath12 in the unperturbed double well system xmath13 the instanton describes the coherent tunneling motion of the initially localized wave packet it is also shown in the previous paper that the monochromatic perturbation can break the coherent motion as the perturbation strength increases near the resonant frequency in the classical dynamics of our model an outstanding feature different from previous studies is the parametric instability caused by the polychromatic perturbation based on our criterion given below we roughly estimate the type of the motion ie the coherent and irregular motions in a regime of the parameter space spanned by the amplitude and the number of frequency components of the oscillatory driving force it is suggested that the occurrence of the irregular motion is related to a dissipative property which is organized in the quantum physics xcite the classical phase space portraits and a simple explanation of the relation to the dissipative property are given in appendix a we use a gaussian wavepacket with zero momentum as the initial state which is localized in the right well of the potential xmath14 where xmath15 denotes the bottom of the right well the gaussian wavepacket can be approximately generated by the linear combination of the ground state doublet as xmath16 where xmath17 and xmath18 denote the ground state doublet the recurrence time for the wavepacket is xmath19 in the unperturbed case xmath13 where xmath20 is the energy difference between the tunneling doublet of the ground state we set the spread of the initial packet xmath21 and xmath22 for simplicity throughout this paper indeed the ammonia molecule is well described by two doublets below the barrier height in the unperturbed case we numerically calculate the solution xmath23 of the time dependent schrödinger equation by using a second order unitary integration with time step xmath24 we define the transition probability of finding the wave packet in the left well xmath25 in the cases where the perturbation strength is relatively small xmath26 can be interpreted as the tunneling probability that the initially localized wave packet goes through the central energy barrier and reaches the left well
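as a concrete illustration of this kind of calculation the following python sketch propagates a gaussian packet in a symmetric quartic double well whose barrier is modulated by a polychromatic drive and records the left well probability it uses a second order split operator strang scheme as a stand in for the second order unitary integrator of the text the potential form the parameter values and the drive frequencies are illustrative assumptions and not the exact hamiltonian of eq 1

```python
import numpy as np

# illustrative parameters, not the values used in the paper
hbar, mass, a = 1.0, 1.0, 1.0
eps = 0.05                                    # perturbation strength (assumed)
omegas = [1.0, np.sqrt(2.0), np.sqrt(3.0)]    # incommensurate drive frequencies (assumed)

N = 1024
x = np.linspace(-8.0, 8.0, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
dt = 0.01

def potential(t):
    # symmetric quartic double well with a polychromatically modulated barrier (assumed form)
    drive = 1.0 + eps * sum(np.cos(w * t) for w in omegas)
    return drive * (x**2 - a**2)**2 / (8.0 * a**2)

# gaussian packet with zero momentum centred at the bottom of the right well
sigma = 0.4
psi = np.exp(-(x - a)**2 / (4.0 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def strang_step(psi, t):
    # second-order unitary step: half potential kick, full kinetic drift, half potential kick
    psi = np.exp(-0.5j * potential(t) * dt / hbar) * psi
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / mass) * np.fft.fft(psi))
    return np.exp(-0.5j * potential(t + dt) * dt / hbar) * psi

times, p_left = [], []
t = 0.0
for n in range(50000):
    psi = strang_step(psi, t)
    t += dt
    if n % 100 == 0:
        times.append(t)
        p_left.append(np.sum(np.abs(psi[x < 0.0])**2) * dx)   # transition probability P(t)
```

the resulting p_left time series plays the role of the curves discussed below and is what the coherence and period estimates that follow operate on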
we can expect that the transition probability xmath26 is enhanced as the number xmath0 of the frequency components increases up to some extent because of the increasing stochasticity in the total system the caption of fig 1 reads xmath26 as a function of time xmath27 for various xmath0 a xmath28 b xmath29 where the calculation time is of the same order as the heisenberg time in the unperturbed case figure 1 shows the time dependence of xmath30 for various combinations of xmath8 and xmath0 apparently we can observe the coherent and irregular motions the coherent motion of the wave packet can be well described by the semiquantal picture in the sense that the wave packet does not delocalize to the fully delocalized state the semiquantal picture decomposes the motion of the wave packet into the evolution of the centroid motion and the spreading and squeezing of the packet xcite see subsect 35 for example in cases of relatively small perturbation strength xmath28 coherent motion still remains up to relatively large xmath31 it is important to emphasize that the tunneling contribution to the transition probability xmath32 is not so significant for large xmath8 andor xmath0 then xmath32 may be interpreted as a barrier crossing probability due to the activation transition because the energy of the wave packet increases over the barrier height in this parameter range especially in the relatively large perturbation regime we can interpret the delocalized states as chaos induced delocalization in the sense that the classical chaos enhances the quantum barrier crossing rate quite significantly the chaotic behavior in the classical dynamics is given in appendix a based on the classical poincaré section and so on xcite in the present section we mainly focus on the transition of the quantum state from the localized wavepacket to the delocalized state based on the numerical data once the wave packet incoherently spreads into the space as xmath0 andor xmath8 increase the wavepacket is delocalized and never returns to a gaussian shape again within the numerically accessible time apparently we regard the delocalized quantum state as a decoherent state in the sense that the behavior of the wave packet is similar to that of the stochastically perturbed case see fig 5a in the case of relatively small perturbation strength xmath28 the decoherence of the quantum dynamics appears at around xmath33 and xmath26 fluctuates irregularly in the case of large xmath34 in short the irreversible delocalization of a gaussian wave packet generates a transition from coherent oscillation to irregular fluctuation of xmath26 we have confirmed that similar behavior is also observed for other sets of values of the frequencies and different initial phases xmath35 of the perturbation here we define a degree of coherence xmath36 of the time dependence of xmath26 based on the fluctuation of the transition probability in order to estimate quantitatively the difference between coherent and incoherent motions xmath37 where xmath38 represents the time average value over a period xmath39 note that we use xmath36 in order to express the decoherence of the tunneling oscillation of the transition probability in the parametrically perturbed double well system on the other hand other quantities such as purity linear entropy and fidelity are sometimes used to characterize the decoherence of the quantum system xcite the transition of the dynamical behavior based on the fidelity as a description of the decoherence in the double well system will be given elsewhere xcite
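the explicit formula for xmath36 is the one quoted above and is not reproduced here as a stand in the following sketch uses a simple normalized fluctuation measure built from time averages of the transition probability it is only meant to show how a single number separating coherent oscillation from irregular fluctuation can be extracted from the p_left series of the earlier sketch the functional form and the normalization are assumptions not the paper s definition of xmath36

```python
import numpy as np

def degree_of_coherence(p_t):
    """crude stand-in for a degree of coherence: compare the observed fluctuation of P(t)
    with that of an ideal full-amplitude tunneling oscillation between 0 and 1.
    values near 1 indicate coherent oscillation, small values indicate irregular motion."""
    p = np.asarray(p_t, dtype=float)
    var = np.mean(p**2) - np.mean(p)**2   # time-averaged fluctuation of P(t)
    var_ideal = 0.125                     # variance of a sin^2 oscillation between 0 and 1
    return min(var / var_ideal, 1.0)

# coherent two-level oscillation versus a weakly fluctuating irregular signal
t = np.linspace(0.0, 100.0, 4000)
coherent = np.sin(0.3 * t)**2
rng = np.random.default_rng(0)
irregular = 0.5 + 0.05 * rng.standard_normal(t.size)
print(degree_of_coherence(coherent), degree_of_coherence(irregular))
```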
the caption of fig 2 reads the dependence of the degree of coherence xmath36 of the tunneling probability for various xmath0 where xmath40 is numerically estimated from xmath26 figure 2 shows the perturbation strength dependence of xmath36 for various xmath0 we roughly divide the types of motion of the wave packet into three as follows in the coherent motions the value of xmath36 is almost the same as in the unperturbed case ie xmath41 in which case the instanton like picture is valid xcite a simple explanation of the perturbed instanton is given in appendix b in the irregular motions which are similar to the stochastically perturbed case the value of xmath36 becomes much smaller ie xmath42 as a matter of course there are intermediate cases between the coherent and the irregular motions xmath43 note that the exact criterion for the intermediate motion is not important in the present paper because we can expect that the transitional cases approach the irregular case in the long time behavior it should be stressed that a critical value xmath44 exists which divides the behavior of xmath26 into regular and irregular motions the caption of fig 3 reads circles xmath45 crosses xmath46 and triangles xmath47 denote coherent motions xmath48 irregular motions xmath49 and the transitional cases xmath50 respectively figure 3 shows a classification of the motion in the parameter space which is estimated by the value of the degree of coherence xmath51 it seems that the two kinds of motion ie coherent and irregular motions are divided by a thin layer corresponding to the transitional cases as xmath0 increases decoherence of the motion appears even for small xmath8 the numerical estimation suggests that there are critical values xmath52 of the perturbation strength depending on xmath0 when the perturbation strength xmath8 exceeds the critical value xmath53 for some xmath0 the tunneling oscillation loses its coherence the approximate phase diagram is roughly the same as the diagram generated by the maximal lyapunov exponent of the classical dynamics see appendix a in this subsection we consider the reduction of the tunneling period in the regular motion regime xmath54 the caption of fig 4 reads the period xmath57 of the tunneling oscillation for xmath12 and xmath55 in the coherent motion regime xmath54 figure 4 shows the xmath56 dependence of the period xmath57 of the tunneling oscillation estimated from the numerical data xmath26 in the coherent motion regime xmath54 we can observe a monotonic decrease of the tunneling period as the perturbation strength increases in the monochromatically perturbed case the reduction of the tunneling period can be interpreted by applying the floquet theorem to the quasi energy states and the quasi energies since the hamiltonian is time periodic xmath58 when the wave packet does not effectively absorb energy from the external perturbation the time dependence of the quantum state can be described by the linear combination of a doublet of quasi degenerate ground states with opposite parity because we prepare the initial state as xmath16 and the evolution is adiabatic in the two state approximation ie the avoided crossing of the eigenvalues does not appear during the time evolution it is expected that the state evolves as xmath59 where xmath60 and xmath61 denote the quasi energies and floquet states of the time periodic hamiltonian xcite under this approximation we expect the following relation xmath62 where xmath63 denotes the quasi energy splitting of the ground state doublet due to the tunneling between the wells in the monochromatically perturbed case xmath12 let us confirm the relation in eq 7 numerically
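this check can be mimicked numerically the sketch below extracts the dominant oscillation period of the p_left series from its fourier spectrum and compares it with the standard two level estimate in which the packet returns after a time 2 pi hbar divided by the quasi energy splitting the two level formula is the usual textbook relation and is assumed here to be what eq 7 expresses the symbol delta_0 in the usage comment is a hypothetical placeholder for the unperturbed doublet splitting

```python
import numpy as np

def period_from_series(times, p_t):
    """dominant oscillation period of P(t), read off the peak of its Fourier spectrum."""
    p = np.asarray(p_t) - np.mean(p_t)
    freqs = np.fft.rfftfreq(len(p), d=times[1] - times[0])
    spectrum = np.abs(np.fft.rfft(p))
    f_peak = freqs[1:][np.argmax(spectrum[1:])]   # skip the zero-frequency bin
    return 1.0 / f_peak

def period_from_splitting(delta_e, hbar=1.0):
    """two-level estimate: tunneling period set by the (quasi)energy splitting of the doublet."""
    return 2.0 * np.pi * hbar / delta_e

# illustrative comparison for the unperturbed well, where both estimates should agree:
# period_from_series(times, p_left) ~ period_from_splitting(delta_0)
```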
in fig 5 we show the xmath56 dependence of xmath64 the behavior is analogous to the xmath56 dependence of the tunneling period of the oscillation of xmath26 in fig 4 in the weak perturbation regime a similar correspondence between the tunneling period and the change of the quasi energy splitting has been reported for another double well system by tomsovic it is well known that the chaos around the separatrix contributes to the enhancement of the tunneling splitting between the doublet ie chaos assisted tunneling the reduction of the tunneling period can be approximately explained by the chaos assisted instanton picture in the coherent oscillation regime xmath65 a simple explanation of the perturbed instanton picture based on the width of the chaotic layer in the classical dynamics is given in appendix b see also appendix a generally speaking as the number of frequencies xmath0 increases the tunneling period is further reduced as seen in fig 4 although we do not have an analytic representation in the polychromatically perturbed cases we conjecture that as seen in appendix a the increase of the width of the stochastic layer contributes to the reduction of the tunneling period even in the polychromatically perturbed cases the caption of fig 6 lists the parameter combinations a xmath55 xmath66 b xmath55 xmath67 c xmath68 xmath69 and d xmath68 xmath67 for curves shown as a function of time the caption of fig 7 reads a as a function of time xmath27 under stochastic perturbation with xmath28 and b plots of the uncertainty product xmath70 versus time for various xmath71 with the stochastic perturbation where the stochastic perturbation strength xmath8 is normalized to be equivalent to that of the polychromatic perturbation here let us investigate the spread of the wave packet in the phase space xmath72 hitherto we have mainly investigated the dynamics in xmath73 space through xmath26 the phase space volume gives part of the complementary information on the phase space dynamics of the wave packet figure 6 presents the uncertainty product ie the phase space volume as a function of time for various cases which is defined by xmath74 where xmath75 denotes the quantum mechanical average the uncertainty product can be used as a measure of quantum fluctuation xcite the initial value is xmath76 for the gaussian wave packet it is found that in the case xmath55 the increase of the perturbation strength does not break the coherent oscillation and enhances the frequency of the time dependence of the uncertainty product for relatively large xmath8 in xmath68 xmath77 increases until the wave packet is relaxed in the space and it can not return to a gaussian wave packet anymore for larger time scales it fluctuates around a certain level we can expect that the structure of the time dependence corresponds well to the behavior of the transition probability xmath26 in fig it will be instructive to compare the above irregular motion under the polychromatic perturbation with the stochastically perturbed one we recall that a stochastic perturbation composed of an infinite number of frequency components xmath78 with an absolutely continuous spectrum can break the coherent dynamics indeed if the time dependence of the potential is accompanied by the stochastic fluctuation as xmath79 where xmath80 and xmath81 denote the ensemble average and the temperature respectively the stochastic perturbation partially models a heat bath coupled with the system xcite then the number of frequency components corresponds to the number of degrees of freedom coupled with the double well system
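for a wavefunction sampled on the same uniform grid as in the propagation sketch above the uncertainty product xmath74 can be evaluated directly the helper below is a minimal assumed implementation computing position moments in real space and momentum moments via the fft for a minimum uncertainty gaussian packet it returns hbar over 2

```python
import numpy as np

def uncertainty_product(psi, x, hbar=1.0):
    """delta_x * delta_p for a wavefunction psi sampled on a uniform grid x."""
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(len(x), d=dx)
    prob = np.abs(psi)**2
    norm = np.sum(prob) * dx
    ex = np.sum(x * prob) * dx / norm
    ex2 = np.sum(x**2 * prob) * dx / norm
    # momentum operator applied in Fourier space
    p_psi = np.fft.ifft(hbar * k * np.fft.fft(psi))
    p2_psi = np.fft.ifft((hbar * k)**2 * np.fft.fft(psi))
    ep = np.real(np.sum(np.conj(psi) * p_psi) * dx) / norm
    ep2 = np.real(np.sum(np.conj(psi) * p2_psi) * dx) / norm
    return np.sqrt(ex2 - ex**2) * np.sqrt(ep2 - ep**2)
```

calling this routine at the sampling times of the propagation loop produces the kind of uncertainty product versus time curves discussed here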
the xmath26 for the stochastic perturbation is shown in fig the stochastic perturbation can be achieved numerically by replacing xmath82 in eq 2 by a random number and we use a uniform random number which is normalized so that the power of the perturbation is of the same order as that of the polychromatic case in the limit of large xmath0 the motion under the polychromatic perturbation tends to approach the one driven by the stochastic perturbation provided with the same perturbation strength xmath8 figure 7b shows the uncertainty product xmath77 for the stochastically perturbed cases it is found that the time dependence of the uncertainty product in the stochastically perturbed case behaves similarly to the polychromatically perturbed ones for the relatively small xmath83 on the other hand for the relatively larger xmath804 the time dependence shows quite different behavior while xmath77 grows linearly with time in the stochastically perturbed case in the polychromatically perturbed cases the growth of xmath77 saturates at a certain level the linear growth of xmath77 shows that the external stochasticity breaks the quantum interference in the internal dynamics the growth of xmath77 is strongly related to the growth of the energy of the packet xcite in the polychromatically perturbed cases the energy growth saturates at a certain level due to quantum interference on the other hand in the case where the energy grows unboundedly the activation transition becomes much more dominant than the tunneling transition when the wave packet transfers to the opposite well the details concerning the relation between the stochastic resonance xcite and the suppression of the energy growth will be given elsewhere xcite note that the polychromatic perturbation can be identified with a white noise or a colored noise if the frequencies are distributed over a finite band width only in the limit of xmath78 while the stochastic perturbation can model a heat bath that breaks the quantum interference of the system a similar phenomenon caused by a different property of the perturbation has been observed as dynamical localization and noise assisted mixing of the quantum state in momentum space in the quantum kicked rotor model xcite the caption of fig 8 reads the same parameters at xmath84 xmath85 the selection of the initial condition of the fluctuation follows the minimum uncertainty and contour plots e f of the husimi functions for the corresponding quantum state at xmath86 with contour lines in panel e at the values 0.01 0.02 0.05 0.08 and 0.1 and in panel f at 0.01 0.02 0.03 0.04 and 0.05 for xmath87 and xmath28 in a c and e and xmath87 and xmath29 in b d and f finally in order to see the effect of quantum fluctuation we compare the quantum states with the classical and semiquantal motions in the phase space for some cases the semiquantal equation of motion is given by generalized hamilton like equations as xmath88 where the canonical conjugate pair xmath89 is defined by the quantum fluctuations xmath90 and xmath91 as xmath92 for more details consult xcite it is directly observed that the quantum tunneling phenomenon enhances chaotic motion in comparison with the classical and semiquantal trajectories in fig 8a and b the poincaré surfaces of section of the classical trajectories in the phase plane at xmath87 are shown the stroboscopic plots are taken at xmath93 due to the non time periodic structure of the hamiltonian in the relatively small perturbation strength xmath28 the trajectories stay in a single well and are stable even for the long time evolution figure 8c and d show the poincaré
section of the semiquantal trajectories for a polychromatically perturbed double well system with xmath87 the stroboscopic plots are taken at xmath94 again the semiquantal trajectories for the squeezed quantum coherent state can be obtained by an effective action which includes partial quantum fluctuation to all order in xmath95 xcite it can be seen that in comparing with ones of classical dynamics the trajectories in the semiquantal dynamics spreads into the opposite well even for the small xmath8 this corresponds to the quantum tunneling phenomenon through the semiquantal dynamics apparently the partial quantum fluctuation in the semiquantal approximation enhances the the chaotic behavior notice that the semiquantal picture breaks down for the irregular quantum states because the centroid motion becomes irrelevant in fig8e and f the corresponding coherent state representation for the quantum states are shown it is directly seen that the wave packet spreads over the two wells and the shape is not symmetric once the wave packet incoherently spreads over the space it can not return to the initial state anymore we have confirmed that in a case without separatrix single well namely the case that xmath10 in eq 2 is replaced by xmath96 in the classical phase space the coherent oscillations have remained against the relatively large xmath0 andor xmath8 it follows that the full quantum interference suppresses the chaotic behavior as seen in the semiquantual trajectories we numerically investigated influence of a polychromatic perturbation on wave packet dynamics in one dimensional double well potential the calculated physical quantities are the transition rate xmath26 the time fluctuation xmath40 uncertainty product xmath77 and phase space portrait the results we obtained in the present investigation are summarized as follows 1 we classified the motions in the parameter space spanned by the amplitude and the number of frequency components of the oscillatory driving force ie coherent motions and irregular motions the critical value xmath52 which divides the behavior of xmath26 into regular and irregular motions depends on the number of the frequency component xmath0 2 within the regular motion range the period of the tunneling oscillation is reduced with increase of the number of colors andor strength of the perturbation it could be explained by the increase of the instanton tunneling rate due to appearance of the stochastic layer near separatrix xcite in this parameter regimethe perturbed instanton picture is one of expression for chaos assisted tunneling xcite and chaos assisted ionization picture reported for some quantum chaos systems xcite 3 in the irregular motion in the polychromatically perturbed cases the growth of xmath77 initially increases and saturates at certain level due to quantum interference on the other hand in the stochastically perturbed case the uncertainty product grows unboundedly because the external stochasticity breaks the quantum interference in the internal dynamics the growth of xmath77 is strongly related to the growth of the energy of the wave packet 4 it is expected that the quantum fluctuation are always large for the classically chaotic trajectories compared to the regular ones this implies that the quantum corrections to the evolution of the phase space fluctuation become more dominant for classically chaotic trajectories 5 in the semiquantal approximation the partial quantum fluctuation enhances the chaotic behavior and simultaneously the chaos enhances the 
tunneling and decoherence of the wave packet the quantum fluctuation observed in the semiquantal picture is suppressed by interference effect in the fully quantum motion the semiquantal picture can not apply to the chaos induced delocalized states furthermore in the appendices we gave classical phase space portraits in the polychromatically perturbed double well system and a simple explanation for the perturbed instanton tunneling picture for the reduction of the tunneling period in the coherent motion regime although we have dealt with quantum dynamics of wave packet with paying attention to existence of the energetic barrier we can expect that the similar phenomena would appear by dynamical barrier in the system the details will be given elsewhere xcite we show classical stroboscopic phase space portrait in this appendix with paying an attention to the effect of polychromatic perturbation on the chaotic behavior in the classical dynamics such a system shows chaotic behavior by the oscillatory force xmath97 xcite the newton s equation of the motion is xmath98 note that in the monochromatically perturbed case xmath12 xmath99 the equation is known as nonlinear mathieu equation which can be derived from surface acoustic wave in piezoelectric solid xcite and nanomechanical amplifier in micronscale devices xcite in fig a1 we show the change of the classical stroboscopic phase space portrait changing the perturbation parameters increasing the perturbation strength xmath8 destroys the separatrix and forms a chaotic layer in the vicinity of the separatrix needless to say the phenomena have been observed even in the monochromatically perturbed cases xcite in the polychromatically perturbed cases xmath100 the smaller the strength xmath8 can generate chaotic behavior of the classical trajectories the larger xmath0 is xcite it should be emphasized that in the polychromatically perturbed cases the width of the chaotic layer grows faster than the monochromatically perturbed case as the perturbation strength increases as a result the increase of the color contributes the increase of the width of the stochastic layer in the polychromatically perturbed cases space are plotted at xmath101 the xmath56dependence for xmath102 is shown in axmath66 bxmath69 cxmath28 and dxmath29 the xmath103dependence for xmath104 is shown in exmath12 fxmath55 gxmath102 and hxmath105 here we use the increasing rate of infinitesimal displacement along the classical trajectory for the extent of chaotic behavior as a finite time lyapunov exponent xmath106 we prepare various initial points in the phase space and conveniently adapt a trajectory with maximal increasing rate among the ensemble within the finite time interval as the finite time lyapunov exponent note that an exact lyapunov exponent should be defined for the long time limit however the roughly estimated lyapunov exponent is also useful to observe the classical quantum correspondence figure a2 shows the xmath56dependence of classical lyapunov exponents for various cases estimated by the numerical data of the classical trajectories estimated by some classical trajectories within the finite time xmath107 where xmath108 is tunneling time given insubsect31 we can roughly observe a transition from motion of kam system xmath109 to chaotic motion xmath110 as the perturbation strength increases the increasing of the number of color xmath0 reduces the value of the critical perturbation strength xmath111 of a transition from a motion of kam system to fully chaotic motion roughly 
speaking a transition of the classical dynamics corresponds to the transition from coherent motion to irregular one in the quantum dynamics we expect that the transition observed in sect3 will corresponds to quantum signatures of the kam transition from the regular to chaotic dynamics in this subsection we conceptually consider a role of the polychromatic perturbation different from monochromatic one note that the change of the number xmath0 of colors also changes the qualitative nature of the underlying dynamics because xmath0 corresponds to the effective number of dof under some conditions xcite in the our model when the number of dof of the total system is more than four ie xmath112 the classical trajectories can diffuse along the stochastic layers of many resonances that cover the whole phase space if the trajectory starts in the vicinity of a nonlinear resonance the number of resonances increases rapidly with dof changing the characteristic of population transfer from bounded to diffusive such a global instability is known as arnold diffusion in nonlinear hamiltonian system with many dof xcite the effect of arnold diffusion in quantum system is not trivial and the study just has started recently xcite the more detail is out of scope of this paper moreover we can regard the time dependent model of eq1 as nonautonomous approximation for an autonomous model consisting of the double well system coupled finite number xmath0 of harmonic oscillators with the incommensurate frequencies xmath113 it is worth noting that the linear oscillators can be identified with a highly excited quantum harmonic oscillators which all phonon modes are excited around fock states with large quantum numbers then the above model can be regarded as a double well system coupled with xmath0 phonon modes without the interaction withthe phonon modes the gaussian wavepacket remains the coherent motion then the number of dof of total system is xmath114 and the number of the frequency components xmath0 corresponds to the that of the highly excited quantum harmonic oscillators the detail of the correspondence is given in refxcite in quantum chaotic system with finite and many dof we expect occurrence of a dissipative behavior for example we consider simulated light absorption by coupling a system in the ground state with radiation field then stationary one way energy transport from photon source to the system can be interpreted as occurrence of quantum irreversibility in the total system such a irreversibility is called chaos induced dissipation in quantum system with more than two dof xcite in this sense we can expect occurrence of the one way transport phenomenon in the delocalized state in the irregular motion phase if it couples with the other dof in the ground state as seen in refxcite in this appendix we consider the reduction of the tunneling period as the perturbation strength increases in the coherent oscilation regime xmath65 based on a perturbed instanton tunneling in a double well system with dipole type interaction xmath115 the energitical barrier tunneling between the symmetric double wellcan be explained by a three state model or chaos assisted tunneling cat xcite the three states that take part in the tunneling are a doublet of quasi degenerate states with opposite parity localized in the each well and a third state localized in the chaotic layer around the separatrix however note that less attention has been paid to tunneling in kam system while chaotic dynamics has been modeled by multi level hamiltonian 
and random matrix model to describe the chaos assisted tunneling we give an expresion of the tunneling amplitude in chaos assisted instanton tunneling firstly proposed by kuvshinov et al for a hamiltonian system with time periodic perturbation xcite let us consider only monochromatically perturbed case xmath12 in eq2 xmath116 because the separatrix destruction mechanism by the time periodic perturbation has an universality although our system is different from their one indeed trajectories in the neighborhood of the separatrix of the system are well reproduced by the whisker map of the system whisker map is a map of the energy change xmath117 and phase change xmath118 of a trajectory in the neighborhood of the separatrix for each of its motion during one period of the perturbation ie action angle variable moreover if we linearize the whisker map which describes the behaviors of the trajectories in the neighborhood of the fixed point we can obtain the following standard map xmath119 where xmath120 is a nonlinear parameter of local instability that the exact function form which is not essential for our purpose xmath121 increases with xmath8 and xmath122 means that the dynamics of the system is locally unstable a comparison has done between the whisker map and the strobe plots in the time continuous version by yamaguchi xcite the form of the mapping is convenient for the estimate of the width of the stochastic layer the perturbation destroys separatrix of the unperturbed system and the stochastic layer appears in the regular motion denoted by the circles in fig3 classical chaos can increase the rate of instanton tunneling due to appearance of the stochastic layer near separatrix of the unperturbed system as a resultthe frequency of time dependence xmath26 increases as the classical chaos becomes remarkable in the parameter regime note that the perturbed instanton tunneling picture disappears in the strongly perturbed regime due to the delocalization of wavepacket herewe give only relation between the width of the stochastic layer and the tunneling amplitude in terms of path integral in imaginary time xmath123 found by kuvshivov et al tunneling amplitude between the two wells in the perturbed system can be given by integration over energy of tunneling amplitude xmath124 in unperturbed system as xmath125 exp sqtau e endaligned where xmath126 denotes the euclidian action xmath127 denotes the width of stochastic layer where xmath128 and xmath129 are the energy of the unperturbed system on the separatrix and on the bound of stochastic layer respectively xmath130 is classical solution of euclidian equation of motion the contribution of the chaotic instanton solution are taken into account by means of integration over xmath131 which is energy of the instanton the perturbed instanton solutions correspond to the motions in vicinity of the separatrix inside the layer the only manifestation of the perturbation in this approximation is the appearance of a number of additional solutions of the euclidian equation of motion with energy close to the energy of the unperturbed one instanton solution inside the stochastic layer accordingly we can expect that the appearance of the stochastic layer enhances the tunneling rate as reported in the other systems xcite however we have to have in mind that the result is obtained in the first order on coupling constant xmath8 of the time periodic perturbation and does not take into account the structure of stochastic layer the approximation is valid if the layer is 
narrow by neglecting the higher order resonances in the phase space for more details of the perturbed instanton see ref xcite the increase of the tunneling amplitude is directly related to the energy splitting xmath63 between the quasi degenerate ground floquet states as seen in fig a1 the increase of the number of colors xmath0 can enhance the width of the stochastic layer with the perturbation strength kept at a constant value the theoretical explanation for the reduction of the tunneling period with the number of colors is open for further study we expect that in the double well system under the polychromatic perturbation this numerical study will be useful for the analytical derivation of the reduction of the tunneling period and the critical strength of the transition from localized to delocalized behavior of the wavepacket by extension of the monochromatically perturbed case the chaos assisted instanton theory might be applicable if we can exactly estimate the width of the stochastic layer in the system under the polychromatic perturbation it should be noted that the uncertainty product is not always a good measure of quantum fluctuation because it does not correspond to the real area or volume of phase space however we would like to pay attention to the initial growth and the mean value during the time evolution process instead of the detail of the definition of the exact quantum fluctuation in the dynamics practically xmath132 remains small throughout the regular instanton like motion on the other hand it shows a sharp increase and a large value remains for the strongly chaotic cases see the papers s chaudhuri g gangopadhyay and d s ray phys a 216 53 1996 and p k chattaraj b maiti and s sengupta int j quant chem 100 254 2004 as a result the classical chaos enhances quantum fluctuation in the restricted sense
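the classical diagnostics of appendix a the stroboscopic phase space portraits and the finite time lyapunov exponent can be reproduced along the following lines the sketch integrates newton s equation for the same assumed modulated quartic double well used in the quantum sketch earlier with a fourth order runge kutta step and estimates a finite time lyapunov exponent benettin style from the divergence of two nearby trajectories the potential form the parameter values and the renormalization interval are illustrative choices not the paper s values

```python
import numpy as np

def force(x, t, eps, omegas, a=1.0):
    # -dV/dx for the (assumed) polychromatically modulated quartic double well
    drive = 1.0 + eps * sum(np.cos(w * t) for w in omegas)
    return -drive * x * (x**2 - a**2) / (2.0 * a**2)

def rk4_step(state, t, dt, eps, omegas):
    def deriv(s, tt):
        return np.array([s[1], force(s[0], tt, eps, omegas)])
    k1 = deriv(state, t)
    k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = deriv(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = deriv(state + dt * k3, t + dt)
    return state + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def finite_time_lyapunov(x0, v0, eps, omegas, t_max=200.0, dt=0.005, d0=1e-8, renorm=200):
    """Benettin-style estimate: follow a reference trajectory and a nearby one,
    renormalising their separation periodically and accumulating the stretching rate."""
    s, s2 = np.array([x0, v0]), np.array([x0 + d0, v0])
    t, log_sum = 0.0, 0.0
    for i in range(int(t_max / dt)):
        s = rk4_step(s, t, dt, eps, omegas)
        s2 = rk4_step(s2, t, dt, eps, omegas)
        t += dt
        if (i + 1) % renorm == 0:
            d = np.linalg.norm(s2 - s)
            log_sum += np.log(d / d0)
            s2 = s + (s2 - s) * (d0 / d)
    return log_sum / t

# illustrative call: one colour versus three colours at the same drive strength
omegas1, omegas3 = [1.0], [1.0, np.sqrt(2.0), np.sqrt(3.0)]
print(finite_time_lyapunov(1.0, 0.0, 0.1, omegas1),
      finite_time_lyapunov(1.0, 0.0, 0.1, omegas3))
```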
we numerically study the influence of a polychromatic perturbation on wave packet dynamics in a one dimensional double well potential it is found that the time dependence of the transition probability between the wells shows two kinds of motion typically coherent oscillation and irregular fluctuation combined with the delocalization of the wave packet depending on the perturbation parameters the coherent motion changes to the irregular one as the strength andor the number of frequency components of the perturbation increases we discuss a relation between our model and decoherence in comparison with the result under a stochastic perturbation furthermore we compare the quantum fluctuation and tunneling in the quantum dynamics with those in the semiquantal dynamics
introduction model numerical results summary and discussion effect of polychromatic perturbation on classical phase space portraits perturbed instanton tunneling
the total energy formula obtained by sun eq uv is xmath1 where xmath2 is the total energy per atom xmath3 and xmath4 are the equilibrium bulk modulus and equilibrium volume respectively xmath5 and xmath6 are parameters and are related by the following relations xmath7 xmath8 which are obtained by imposing the volume analyticity condition since in this case the energy of the free atoms is zero the cohesive energy of the solid at xmath9 is the energy at which xmath2 is minimum which happens to be at xmath4 thus the formula for the cohesive energy xmath10 turns out to be xmath11 also it turns out that xmath12 the values of xmath3 xmath4 and xmath13 are listed for various materials in the paper xcite cohesive energies calculated from the above formula are quite erroneous calculated values for some materials using eq ecoh are compared with experimental values xcite in table 1 also we compare the energy per particle vs volume curve of aluminum with the data obtained from ab initio calculations xcite in fig 1 it can be seen that there is a serious mismatch between the two however from fig 1 we can notice that the slope of the mglj eos curve and that of the ab initio curve are similar which is the reason why the pressure calculated from the mglj eos is accurate the caption of fig 1 reads energy vs volume curve for aluminum at temperature xmath9 where crosses are ab initio data xcite and the solid line is obtained using eq uv the mglj potential is given by xmath14 the parameters xmath15 xmath16 and xmath17 are related to xmath3 xmath4 and xmath18 denoted as xmath19 through the following relations xmath20 where xmath21 is the structural constant which is xmath22 for xmath23 solids and xmath24 for xmath25 solids xmath15 is the depth of the potential and xmath26 is the number of first nearest neighbors it can be seen that thermodynamic properties calculated using the mglj potential with the parameters of sun diverge for materials for which xmath19 is less than xmath0 for example consider the excess internal energy per particle xmath27 obtained through the energy equation xcite xmath28 where xmath29 is the density of the system and xmath30 is the radial distribution function since xmath30 becomes xmath31 asymptotically the integral requires that each term of xmath32 decays faster than xmath33 however if xmath19 is less than xmath0 the attractive component of xmath32 decays slower than xmath33 allowing xmath27 in eq ee to diverge and for most of the materials xmath19 is less than xmath0 this renders the potential as parameterized by sun inapplicable for calculating thermodynamic properties as they involve the evaluation of integrals similar to eq ee also the potential can not be used in molecular simulations as the tail correction for the internal energy is similar to eq ee with the lower limit replaced by the cutoff radius of the potential we noted that the mglj eos predicts cohesive energies erroneously also we showed that the mglj potential can not be used in liquid state theories and molecular simulations for materials with xmath34 less than xmath0 as the thermodynamic quantities calculated using it diverge this may be remedied by adjusting the parameter xmath16 so that xmath10 is properly reproduced also including a sufficient number of neighbors so that the total energy per particle converges would improve the results lincoln et al xcite obtained parameters of morse potentials for various fcc and bcc materials by including up to the xmath35 neighbor shell in a separate work we have made the improvements mentioned above and obtained the
parameters by fitting the mglj eos to ab initio data; the same method is followed for eos obtained from other pair potentials, and the results are analyzed xcite. i am thankful to dr chandrani bhattacharya, discussions with whom led to this paper. i thank dr n k gupta for his encouragement.
references: g. kresse and j. hafner, phys. rev. b 47, 558 (1993); g. kresse and j. furthmuller, comput. mater. sci. 6, 15 (1996); g. kresse and j. furthmuller, phys. rev. b 54, 11169 (1996); g. kresse and d. joubert, phys. rev. b 59, 1758 (1999).
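as a numerical illustration of the divergence argument made above, the sketch below uses a generic power-law attractive tail u(r) ~ -r**(-n) with a hypothetical cutoff and density (not sun's actual mglj parameterization): in three dimensions the energy integral carries a factor r**2, so the tail contribution stays finite only when the attractive exponent exceeds 3.

```python
# illustrative only: generic attractive tail, hypothetical cutoff and density.
# the integral of 4*pi*rho*r**2 * u(r) from r_cut to r_max remains finite as
# r_max grows only if u(r) decays faster than r**-3.
import math

def tail_integral(n, r_cut=2.5, r_max=1.0e6, rho=1.0):
    """Analytic value of integral_{r_cut}^{r_max} 4*pi*rho*r**2 * (-r**-n) dr."""
    if abs(n - 3.0) < 1.0e-12:
        return -4.0 * math.pi * rho * math.log(r_max / r_cut)
    return -4.0 * math.pi * rho * (r_max**(3.0 - n) - r_cut**(3.0 - n)) / (3.0 - n)

for n in (6.0, 4.0, 3.0, 2.5):   # n = 6 is the ordinary lennard-jones attraction
    print(n, tail_integral(n, r_max=1.0e3), tail_integral(n, r_max=1.0e6))
```

for n greater than 3 the two columns are essentially independent of the upper limit, while for n of 3 or less the value keeps growing as the upper limit is increased, which is the divergence referred to above.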
the cohesive energies of solids calculated using the mglj eos proposed by sun jiuxun [j. phys.: condens. matter 17, l103 (2005)] are seen to be erroneous. we also observed that the thermodynamic properties calculated using the mglj potential diverge for materials whose pressure derivative of the bulk modulus at equilibrium is less than xmath0; thus the mglj potential cannot be used in liquid state theories and molecular simulations to obtain thermodynamic properties.
sun jiuxun xcite recently suggested an equation of state (eos) based on the modified generalized lennard-jones (mglj) potential. the mglj eos is obtained by modifying the generalized lennard-jones potential in such a way that the resulting eos is volume analytic and satisfies the spinodal condition. the mglj eos has three parameters, which are related to the lattice parameter, the bulk modulus and the derivative of the bulk modulus at equilibrium. in the paper xcite it was shown that the pressure (p) vs compression ratio curve obtained using the mglj eos is quite accurate. the idea of generating an eos starting from a potential is interesting and has the advantage that the potential of the material can be known in addition to the eos. however, we found some problems with the mglj eos and the potential obtained from it: (a) the cohesive energy calculated from the mglj eos is quite erroneous, and (b) the thermodynamic properties calculated using the mglj potential with the parameters of sun diverge for most materials. details about each problem are as follows
cohesive energy mglj potential conclusion acknowledgements references
in the past decades atom based metrology has had an enormous impact on science technology and everyday life seminal advances include microwave and optical atomic clocks xcite the global positioning system and highly sensitive position resolved magnetometers xcite atom based field measurement has clear advantages over other field measurement methods because it is calibrating free due to the invariance of atomic properties atom based metrology has recently expanded into electric field measurement an all optical sensing approach employed by numerous groups is electromagnetically induced transparency eit of atomic vapors utilizing rydberg levels xcite to measure the properties of the electric field rydberg atoms are well suited for this purpose owing to their extreme sensitivities to dc and ac electric fields which manifest in large dc polarizabilities and microwave transition dipole moments xcite developments include measurements of microwave fields and polarizations xcite millimeter waves xcite static electric fields xcite and sub wavelength imaging of microwave electric field distributions xcite in the frequency range from 10 s to 100 s of mhz rydberg eit rf modulation spectroscopy is a promising method to accomplish atom based calibration free rf electric field measurement xcite rydberg eit in vapor cells offers significant potential for miniaturization xcite of the rf sensor accurate calibration of the electric field is important for instance for antenna calibration characterization of electronic components etc conventional calibration with field sensors that involve dipole antennas that need to be calibrated first obviously leads into a chicken and egg dilemma xcite in the present work we provide an atom based calibration method for vector electrometry of rf fields using rydberg eit in cesium vapor cells the basic idea is that the rf generates a series of intersections between levels in the rydberg floquet map a map in which field perturbed floquet level energies are plotted versus rf electric field the anticrossings occur between floquet states originating in fine structure components of xmath4 states with equal principal and angular momentum quantum numbers xmath5 and xmath6 the crossings present excellent field markers that we use as calibration points for the electric field strength specifically we measure the rf dressed cs 60xmath0 states via rydberg eit spectroscopy at a test frequency of 100 mhz the dependence of the floquet spectrum on the strength and the polarization of the rf field is investigated there are exact crossings between states of different xmath7 xmath8 and xmath9 which are not coupled in a linearly polarized rf field the crossings provide field markers which we use to calibrate the field strength in a test rf transmission system we also analyze narrow spectroscopically resolved anti crossings between floquet states of equal xmath7 or xmath8 and different xmath10 and xmath9 transitions between those states are allowed via an rf raman process further the eit line strength ratios of intersecting floquet states with unlike xmath2 yield the field polarization at various stages of the work the measured spectroscopic data are matched with the results of floquet calculations to accomplish the calibration tasks axis two electrode plates are located on the both sides of the vapor cell where the rf field is applied the polarization of the rf field xmath11 blue arrow points along the xmath12axis the polarization of the beams red and green arrows can be adjusted with the 
xmath132 wave plates to form an angle xmath14 with the rf electric field xmath11 the probe beam is passed through a dichroic mirror dm and is detected with a photodiode pd polarizing beam splitters pbs are used to produce beams with pure linear polarizations b energy level scheme of cesium rydberg eit transitions the probe laser xmath15 is resonant with the lower transition xmath16 xmath17 xmath18 and the coupling laser xmath19 is scanned through the rydberg transitions xmath20 xmath17 xmath21 the applied rf electric field frequency xmath22 2xmath23100 mhz produces ac stark shifts and rf modulation sidebands that are separated in energy by even multiples of xmath24scaledwidth450 a schematic of the experimental setup and the relevant rydberg three level ladder diagram are shown in figs 1 a and b the experiments are performed in a cylindrical room temperature cesium vapor cell that is 50 mm long and has a 20mm diameter the cell is suspended between two parallel aluminum plate electrodes that are separated by xmath25 mm the eit coupling laser and probe laser beams are overlapped and counter propagated along the centerline of the cell propagation direction along the xmath26axis the coupling and probe lasers have the same linear polarization in the xmath27 plane the angle xmath14 between the laser polarizations and the rf field which points along the xmath12axis is varied by rotating the polarization of the laser beams with xmath132 plates as seen in fig 1 a the weak eit probe beam central rabi frequency xmath28 2xmath2992 mhz and xmath30 waist xmath31 m has a wavelength xmath15 852 nm and is frequency locked to the transition xmath16 xmath17xmath32 as shown in fig 1 b the coupling beam central rabi frequency xmath33 2xmath2972 mhz for 60xmath34 and xmath30 waist xmath35 m is provided by a commercial laser toptica ta shg110 has a wavelength of 510 nm and a linewidth of 1 mhz and is scanned over a range of 15 ghz through the xmath36 rydberg transition the eit signal is observed by measuring the transmission of the probe laser using a photodiode pd after a dichroic mirror dm an auxiliary rf free eit reference setup not shown but similar to the one sketched in fig 1 a is operated with the same lasers as the main setup the auxiliary eit signal is employed to locate the 0detuning frequency reference point for all eit spectra we show it allows us to correct for small frequency drifts of the coupling laser the rf voltage amplitude xmath37 provided by a function generator tektronix afg3102 is applied to the electrodes as shown in fig 1 a and the rf electric field vector xmath11 points along xmath12 blue arrow in fig 1 a the rf frequency is fixed xmath22 2xmath29100 mhz and the rf field amplitude xmath38 is varied by changing xmath37 the rf field ac shifts the rydberg levels and generates even order modulation sidebands see fig 1 b the rf field amplitude xmath38 is approximately uniform within the atom field interaction volume using a finite element calculation we have determined that the average electric field in the atom field interaction region is xmath39 of the field that would be present under absence of the dielectric glass cell ie the glass shields xmath40 of the field the glass cell further gives rise to an xmath41 field inhomogeneity along the beam paths within the cell the rf transmission line between the source and the cell has unavoidable standing wave effects while the standing wave effect is hard to model due to the details of the experimental setup which are fairly complex from the 
viewpoint of rf field modeling the setup still constitutes a linear transmission system therefore for any given frequency and fixed arrangement of the wiring and the electromagnetic boundary conditions the magnitude of the voltage amplitude that occurs on the rf field plates follows xmath42 where xmath43 is a frequency dependent transmission factor that is specific to the details of the rf transmission line as discussed in detail in section sec measurements we use the atom based field measurement method to determine the transmission factor to be xmath44 the average rf electric field amplitude xmath38 averaged over the atom field interaction zone inside the cell is then related to the known voltage amplitude xmath37 generated by the source via xmath45 xmath46 is the distance between the field plates in this relation the only factor that is difficult to determine is the transmission factor xmath43 xcite the experiment described in this paper represents a good example of how the atom based field measurement method allows one to measure xmath43 and to thereby calibrate rf electric fields 2xmath47100 mhz and the indicated amplitudes xmath48 v cm bottom 0252 v cm middle and 0504 v cm top respectively the main peak at 0 detuning in the field free spectrum corresponds to the xmath49 xmath50 xmath51 xmath50 xmath52 cascade eit the small peak at 168 mhz small arrow originates in the intermediate state hyperfine level xmath53 xcite the peak at 318 mhz is the xmath54 eit line the peaks within the magenta circles are rf induced sidebands of the indicated ordersscaledwidth450 in fig 2 we show rydberg eit spectra for the xmath55 states for xmath56 without rf field bottom curve and with the indicated rf fields upper pair of curves the bottom eit spectrum is obtained with the rf free reference setup the xmath57 main peak in the reference spectrum defines the 0detuning position since the value of the xmath55 fine structure splitting 318mhz arrow in fig 2 is well known the spacing between the zero field xmath55 fine structure components is used to calibrate the detuning axis the top two curves show eit spectra for applied rf field strengths xmath58 v cm and 0504 v cm the xmath58 v cm plot illustrates the rf induced ac stark shifts in weak rf fields the degeneracy between the xmath2 12 32 and 52 magnetic substates of the xmath55 levels becomes lifted the quantization axis for xmath2 is the direction of xmath59 in fig 1 a since the rf field frequency is much lower than the kepler frequency 35 ghz for cs 60xmath60 the ac shifts in weak rf fields are near identical with xmath61 where xmath62 are the dc polarizabilities of the xmath63 states and xmath64 is the rf root mean square field this has been verified with a dc stark shift calculation not shown at higher fields rf induced even harmonic sidebands for xmath65 appear which are marked with magenta circles in the top curve of fig the sidebands come in pairs the lower frequency component has xmath66 the higher frequency one xmath7 the lines that do not shift much throughout fig 2 are the xmath67 states these have near zero polarizability the ac shifts and sideband separations are on the same order as the fine structure splitting of 60xmath0 this similarity in energy scales is important because it gives rise to the level crossings in the floquet maps discussed below we have performed a series of measurements such as in fig 2 over an xmath38field range of 0 to 076 v cm in steps of 0006 v cm we have assembled the rf eit spectra in a floquet map shown fig 3 a at 
fields xmath68 v cm, the xmath2 sublevels shift and split due to the xmath69-dependent quadratic ac stark effect. the even-harmonic level modulation sidebands, labeled xmath65, begin to appear when the rf field is increased further (see also previous work xcite). to match the measured eit spectra with theory, we numerically calculate rydberg eit spectra using floquet theory, with results shown in fig 3 b; for details of the floquet calculation see xcite. a central point of the present work is that the xmath70 level, which has near-zero polarizability and ac stark shift, undergoes a series of crossings with the xmath71 12 32 modulation sidebands. the crossings are exact because the linearly polarized rf field does not mix quantum states of different xmath2. the crossings can be measured with about xmath72 precision. as an example, in fig 3 a we show a zoom-in of the first level crossing; the crossing is centered at xmath73 v cm with an estimated uncertainty of xmath74 v cm, corresponding to a relative uncertainty of xmath75. the uncertainty is mostly attributed to the intrinsic eit linewidth, which increases with increasing coupling and probe rabi frequencies; laser linewidths and interaction-time broadening also contribute to the observed linewidths. [table 1 caption: the columns show, in that order, the crossing number, the calculated electric field xmath38 for the crossing, the experimental electric field xmath76 the atoms would be exposed to for an rf amplitude transmission factor xmath77, and the transmission factor xmath78.]
in fig 3, six such crossings are visible within the rectangular boxes. with the rf source voltage amplitudes xmath79 at which the crossings are observed, and recalling that the glass cell shields xmath40 of the electric field from the atoms, the electric field the atoms would experience for an amplitude transmission factor of 1 would be xmath80. the ratios between the known theoretical electric fields where the crossings actually occur, xmath81, and the xmath82 yield six readings for the amplitude transmission factor, xmath83. in table 1 it is seen that the xmath84 have a very small spread and do not exhibit a systematic trend from low to high field. the average xmath85 is the desired calibration factor for the experimental electric-field axis; the xmath86 axis in fig 3 a shows the calibrated experimental electric field xmath87, with voltage amplitude xmath37 at the source. the overall relative uncertainty of the atom-based rf field calibration performed in this experiment is xmath3, similar to what has been obtained in xcite and about an order of magnitude better than in traditional rf field calibration xcite. the use of narrow-band coupling and probe lasers, lower rabi frequencies, and larger-diameter laser beams is expected to reduce the uncertainty to considerably smaller values. we note that the calibration uncertainty achieved in this work is based on matching experimental and calculated spectroscopic data at the locations of a series of six isolated level crossings that all occur within a narrow spectral range of less than 50 mhz width (see rectangular boxes in fig 3); hence a fairly small amount of spectroscopic data suffices for the presented atom-based rf field calibration. from fig 3 it is obvious that this advantage traces back to a specific feature of cesium xmath88 states, namely that these states offer a mix of magnetic sublevels with near-zero and large ac polarizabilities; rf-dressed rydberg eit spectra of rubidium atoms do not present a similar advantage xcite.
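the transmission-factor determination just described amounts to a few lines of arithmetic. the sketch below assumes the field model e = t * (v/d) * (1 - s) discussed in the text; the plate separation, shielding fraction, crossing fields and source voltages used here are hypothetical placeholders, not the measured values of table 1.

```python
# minimal sketch (hypothetical numbers): each exact level crossing, whose field
# e_cross is known from the floquet calculation, gives one reading of the rf
# amplitude transmission factor t = e_cross / ((v_src / d) * (1 - shielding)).
import numpy as np

d_plates_cm = 2.0      # plate separation in cm (hypothetical)
shielding = 0.10       # fraction of the field screened by the glass cell (hypothetical)

# hypothetical (calculated crossing field in v/cm, source voltage amplitude in v) pairs
crossings = [(0.32, 0.80), (0.41, 1.02), (0.50, 1.25)]

readings = []
for e_cross, v_src in crossings:
    e_for_t_equal_1 = (v_src / d_plates_cm) * (1.0 - shielding)  # field if t were 1
    readings.append(e_cross / e_for_t_equal_1)

t_factor = np.mean(readings)
print("transmission factor t = %.3f +/- %.3f" % (t_factor, np.std(readings)))

def calibrated_field(v_rf):
    """Calibrated field for any source voltage amplitude, once t is known."""
    return t_factor * (v_rf / d_plates_cm) * (1.0 - shielding)
```

averaging the per-crossing readings and quoting their spread mirrors the procedure summarized in table 1.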
in fig 3 we further observe three series of avoided crossings, which are due to an rf sideband of the xmath10 level intersecting with an rf sideband of the xmath89 level. the first number in the avoided-crossing labels in fig 3 b shows the number of rf photon pairs taken from the rf field to access the xmath10 band, while the second shows the number of rf photon pairs taken from the rf field to access the xmath89 band; negative rf photon numbers, indicated by underbars, correspond to stimulated rf photon emission. the coupling between the intersecting xmath10 and xmath89 bands is a two-rf-photon raman process in which the atom absorbs and re-emits an rf photon while changing xmath1 from xmath8 to xmath9, or vice versa. this is a second-order electric dipole transition, which for the given polarization has selection rules xmath90 and xmath91. in fig 3, three series of avoided crossings that satisfy these selection rules are visible, one for xmath92 and two for xmath93. each series has a fixed xmath38 value and consists of copies of the same avoided crossing along the xmath94 axis, in steps of 200 mhz. the xmath93 series are particularly easy to spot because one of the two intersecting floquet states has near-zero polarizability. the raman coupling causing the avoided crossings equals the minimal avoided-crossing gap size. for fixed floquet-state wavefunctions, the raman coupling strength should scale as xmath95. for the xmath93 avoided crossings at 0.319 v cm we observe a coupling strength of 8.6 mhz, while those at 0.579 v cm have a coupling strength of 19.3 mhz. the coupling-strength ratio, which is 2.2, is somewhat smaller than the xmath95 ratio, which is 3.3; the deviation indicates a moderate variation of the floquet-state wavefunctions between 0.319 v cm and 0.579 v cm, which is expected. from a field-calibration point of view, the avoided crossings and other details in the spectra could be used to further reduce the uncertainty in the atom-based rf field calibration factor xmath43, which is planned in future work. comparing the cesium and rubidium level structures, it is again noteworthy that cesium offers a combination of xmath2-dependent polarizabilities that is particularly favorable for this purpose.
in the top curve in fig 2 it is noted that the xmath96 and xmath97 floquet states are narrow and symmetric, whereas the other floquet lines are much wider and are asymmetrically broadened. further, the xmath7 lines exhibit a shoulder on the high-frequency side (see xmath98 mhz marker in fig 2), while the xmath66 lines have no shoulder. the scan in the top curve of fig 2 also corresponds to the vertical dashed line in fig 3 b. close inspection of fig 3 b reveals that the shoulders of the xmath7 lines are due to the series of narrow avoided crossings between floquet states in the xmath7 manifold; the shoulders correspond to the weaker, higher-frequency component of the crossing. the asymmetric line broadening of the wide lines is due to the xmath99 full-width variation of the rf field within the atom-field interaction zone. for instance, for the xmath66 lines we estimate an inhomogeneous linewidth of xmath100 mhz, which is close to the observed width of xmath101 mhz. the xmath7 lines are also inhomogeneously broadened, but we do not give a broadening estimate for those lines because of the interference with the mentioned avoided crossings. [fig 4 caption: the quantity defined in the text, at an rf field of xmath102 0.415 v cm, as a function of polarization angle xmath14; the data are for the indicated values of the probe rabi frequency; the inset shows sample eit spectra for xmath103 and xmath104.]
rydberg eit spectra generally depend on the laser polarizations xcite; this also applies to rf-modulated rydberg eit spectra. here we study the dependence of line-strength ratios on the angle xmath14 between the rf field and the polarization of the laser beams; both laser beams are linearly polarized and the polarizations are parallel to each other (see fig 1). for a line-strength comparison of floquet levels of different xmath2, it is advantageous to choose an electric field close to one of the exact crossings discussed above, because the two lines of interest will then appear in close proximity to each other, allowing for a rapid measurement. additionally, since states with different xmath2 do not mix, the line-strength measurements are robust against small variations of the rf electric field. as an example, in the inset of fig 4 we show the rf eit spectra for xmath105 (lower curve) and xmath104 (upper curve) at an rf field xmath102 0.415 v cm, marked with a dashed line in the left panel of fig 3 a. the two peaks labeled a1 and a2 within the blue square in the inset of fig 4 correspond to the floquet levels marked with red circles in fig 3 a. the peak a1, which corresponds to the xmath106 rf band of the xmath107 floquet state, increases with the angle xmath14, whereas the peak a2, which corresponds to the xmath108 rf band of the xmath109 state, decreases with xmath14. to quantify this polarization-angle dependence we introduce the parameter xmath110, where xmath111 represent the respective areas of gaussian peaks obtained from double-gaussian fits to the spectra at angle xmath14. since the intersecting lines have different differential dipole moments, it is important to use the areas and not the peak heights (see the discussion in the last paragraph of the spectroscopy section). figure 4 shows xmath112 as a function of xmath14 at xmath102 0.415 v cm for the indicated probe-laser rabi frequencies, together with the corresponding line-strength ratio obtained from floquet calculations. the floquet calculation yields line strengths valid for the case of low saturation xmath113; we find excellent agreement between the measurements and calculations for xmath28 xmath114 9.2 mhz. curves such as in fig 4 can be used to measure the polarization of an rf field with unknown linear polarization. the angle uncertainty can be estimated as xmath115, where xmath116 is the difference between experimental and calculated values of xmath117 and xmath118 is the derivative of the calculated curve. for the lowest-power case in fig 4, straightforward analysis shows angle uncertainties below xmath119 for xmath120; in the domain xmath121 the angle uncertainty gradually increases from xmath119 to xmath122 because the derivative becomes small. we note that this method of polarization measurement has the advantage of being both simple and very fast, since the areas of only two lines need to be measured. at the expense of reduced acquisition speed, the uncertainty could be improved by measuring line-strength ratios of multiple line pairs and by averaging over a number of spectra. the data for higher probe rabi frequencies in fig 4 show a more significant deviation from the calculated curve. this is not unexpected, because the calculation is for negligible saturation of the probe transition, whereas the data in fig 4 vary between moderate and strong saturation of the probe transition. in addition to saturation-broadening effects, there may also be optical pumping effects xcite that could affect the line-strength ratio; this is beyond the scope of the present work.
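the line-strength ratio used above is obtained from the areas of a double-gaussian fit. the following is a minimal, self-contained sketch with synthetic data (peak positions, widths and noise level are invented for illustration), not the measured spectra:

```python
# minimal sketch: extract the line-strength ratio r = a1 / (a1 + a2) from the
# areas of a double-gaussian fit to a synthetic "eit spectrum".
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, x1, s1, a2, x2, s2):
    g1 = a1 * np.exp(-0.5 * ((x - x1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - x2) / s2) ** 2)
    return g1 + g2

rng = np.random.default_rng(0)
detuning = np.linspace(-30.0, 30.0, 400)                    # mhz (illustrative axis)
truth = two_gaussians(detuning, 1.0, -6.0, 4.0, 0.6, 7.0, 5.0)
signal = truth + 0.02 * rng.standard_normal(detuning.size)  # add noise

p0 = (1.0, -5.0, 3.0, 0.5, 6.0, 4.0)                        # rough initial guess
popt, _ = curve_fit(two_gaussians, detuning, signal, p0=p0)
a1, x1, s1, a2, x2, s2 = popt

# peak areas (amplitude * width * sqrt(2*pi)), then the ratio plotted in fig 4
area1 = a1 * abs(s1) * np.sqrt(2.0 * np.pi)
area2 = a2 * abs(s2) * np.sqrt(2.0 * np.pi)
print("r =", area1 / (area1 + area2))
```

using areas (amplitude times width times sqrt(2*pi)) rather than peak heights matches the prescription in the text, since the two intersecting lines have different widths.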
we have demonstrated a rapid and robust atom-based method to calibrate the electric field and to measure the polarization of a 100 mhz rf field, using rydberg eit in a room-temperature cesium vapor cell as an all-optical field probe. the eit spectra exhibit rf-field-induced ac stark shifts, splittings, and even-order level modulation sidebands. a series of exact floquet level intersections that are specific to cesium rydberg atoms has been used for calibrating the rf electric field with an uncertainty of xmath3. the dependence of the rydberg eit spectra on the polarization angle of the rf field has been studied, and our analysis of certain line-strength ratios has led to a convenient method to determine the polarization of the rf electric field. the rydberg eit spectroscopy presented here could be applied to atom-based, antenna-free calibration of rf electric fields and to polarization measurement. it is anticipated that an extended analysis of all exact and avoided crossings, as well as other spectroscopic features, will significantly lower the calibration uncertainty. future work involving narrow-band laser sources, miniature spectroscopic cells, as well as improved spectroscopic methods (lower rabi frequencies, wider probe and coupler beams), is expected to further reduce the calibration uncertainty.
the work was supported by nnsf of china (grants nos 11274209, 61475090, 61475123), the changjiang scholars and innovative research team in university program of the ministry of education of china (grant no irt13076), the state key program of national natural science of china (grant no 11434007), and a research project supported by the shanxi scholarship council of china (2014-009). gr acknowledges support by the nsf (phy-1506093) and the bairen plan of shanxi province.
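as an aside on the even-order sidebands and quadratic ac stark shifts discussed in this article: for a single isolated level with static polarizability, and ignoring mixing with other levels, an rf field e(t) = e0 cos(wt) modulates the level energy at 2w, producing a mean shift of -alpha*e0^2/4 and even-order sidebands whose relative weights follow bessel functions of the modulation index. the sketch below uses hypothetical values for the polarizability and the field, and is not the floquet calculation used in the paper.

```python
# minimal sketch (illustrative values, single isolated level, no level mixing):
# quadratic ac stark modulation at 2*w gives a mean shift -alpha*e0**2/4 and
# sidebands at 2*n*w with weights jn(m)**2, where m = alpha*e0**2/(8*w).
import numpy as np
from scipy.special import jv

alpha = 2.0 * np.pi * 500.0   # angular mhz per (v/cm)**2, hypothetical polarizability
e0 = 0.3                      # v/cm, hypothetical field amplitude
w = 2.0 * np.pi * 100.0       # angular mhz, rf frequency

mean_shift = -alpha * e0**2 / 4.0        # carrier (mean) ac stark shift
m = alpha * e0**2 / (8.0 * w)            # modulation index
print("mean ac stark shift: %.2f mhz" % (mean_shift / (2.0 * np.pi)))
for n in range(4):
    weight = jv(n, m) ** 2               # relative strength of the 2*n*w sideband
    print("sideband order 2x%d: offset %.0f mhz, weight %.4f"
          % (n, 2 * n * w / (2.0 * np.pi), weight))
```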
we investigate atom based electric field calibration and polarization measurement of a 100mhz linearly polarized radio frequency rf field using cesium rydberg atom electromagnetically induced transparency eit in a room temperature vapor cell the calibration method is based on matching experimental data with the results of a theoretical floquet model the utilized 60xmath0 fine structure floquet levels exhibit xmath1 and xmath2dependent ac stark shifts and splittings and develop even order rf modulation sidebands the floquet map of cesium 60xmath0 fine structure states exhibits a series of exact crossings between states of different xmath2 which are not rf coupled these exact level crossings are employed to perform a rapid and precise xmath3 calibration of the rf electric field we also map out three series of narrow avoided crossings between fine structure floquet levels of equal xmath2 and different xmath1 which are weakly coupled by the rf field via a raman process the coupling leads to narrow avoided crossings that can also be applied as spectroscopic markers for rf field calibration we further find that the line strength ratio of intersecting floquet levels with different xmath2 provides a fast and robust measurement of the rf field s polarization
introduction experimental setup rydberg-atom-based characterization of an rf field conclusion
recently there has been a lot of interest in understanding the scaling behavior in submonolayer island nucleation and growthxcite one reason for this is that the submonolayer growth regime plays an important role in determining the later stages of thin film growthxcite of particular interest is the dependence of the total island density xmath0 and island size distribution xmath1 where xmath2 is the density of islands of size xmath3 at coverage xmath4 and xmath3 is the number of monomers in an island on deposition parameters such as the deposition flux xmath36 and growth temperature xmath37 one concept that has proven especially useful in studies of submonolayer epitaxial growth is that of a critical island sizexcite corresponding to one less than the size of the smallest stable cluster for example if we assume that only monomers can diffuse then in the case of submonolayer growth of 2d islands on a solid 2d substrate standard nucleation theoryxcite predicts that the peak island density xmath38 and the monomer density xmath39 at fixed coverage satisfy xmath40 where xmath41 is the monomer hopping rate xmath42 is the critical island size xmath43 and xmath44 we note that in the case of irreversible island growth xmath45 this implies that xmath46 and xmath47 in addition it has been shown that in the absence of cluster diffusion and in the pre coalescence regime the island size distribution isd satisfies the scaling form xcite xmath48 where xmath27 is the average island size and the scaling function xmath49 depends on the critical island sizexcite however in some cases such as in epitaxial growth on metal111 surfaces it is also possible for significant small cluster diffusion to occurxcite in addition several mechanisms for the diffusion of large clusters on solid surfaces have also been proposed xcite in each case scaling arguments predict that the cluster diffusion coefficient xmath5 decays as a power law with island size xmath3 where xmath3 is the number of particles in a cluster ie xmath50 in particular three different limiting cases have been consideredxcite cluster diffusion due to uncorrelated evaporation condensation xmath7 cluster diffusion due to correlated evaporation condensation xmath51 and cluster diffusion due to periphery diffusion xmath9 we note that the case xmath7 also corresponds to the brownian stokes einstein diffusion of compact 2d clusters in two dimensions in order to understandthe effects of island diffusion on the submonolayer scaling behavior a number of simulations have previously been carried out for example jensen et alxcite have studied the effects of island diffusion with xmath51 on the percolation coverage for the case of irreversible growth without relaxation corresponding to islands with fractal dimension xmath52 more recently mulheran and robbiexcite have used a similar model to study the dependence of the exponent xmath13 on the cluster diffusion exponent xmath15 for values of xmath15 ranging from xmath53 to xmath54 they found that for small values of xmath15 the value of the exponent xmath55 is significantly larger than the value xmath56 expected in the absence of cluster diffusion although it decreases with increasing xmath15 however the scaling of the isd was not studiedxcite motivated in part by these simulations krapivsky et alxcite have carried out an analysis of the scaling behavior for the case of point islands based on the corresponding mean field smoluchowski equationsxcite their analysis suggests that due to the large amount of diffusion and 
coalescence in this case, for xmath19 the total island density saturates, corresponding to steady-state behavior, while the isd exhibits power-law behavior of the form xmath57, where xmath58 and the prefactor does not depend on coverage (a result that has also been derived by cueille and sire xcite and by camacho xcite). this power-law dependence of the isd is predicted to hold up to a critical island size xmath22, where xmath28 and xmath59. in contrast, for xmath32 continuous island evolution is predicted, e.g. the total island density does not saturate, and as a result no simple power-law behavior is predicted for the isd. their analysis also indicates that for all values of xmath15 one has xmath60 with logarithmic corrections. however, it should be noted that the point-island approximation is typically only valid at extremely low coverages.
here we present the results of kinetic monte carlo simulations of irreversible island growth with cluster diffusion for the case of compact islands with fractal dimension xmath61. among the primary motivations for this work are recent experiments xcite on the growth of compact colloidal nanoparticle islands at a liquid-air interface, in which significant cluster diffusion has been observed. accordingly, in contrast to much of the previous work xcite, our model is an off-lattice model. however, our main goal here is not to explain these experiments but rather to obtain results which may be used as a reference for future work. as already noted, if cluster diffusion is due to 2d brownian motion, as might be expected at a fluid interface, then the value of the exponent xmath15 (xmath7) is the same as that expected for uncorrelated evaporation-condensation. however, we also present results for xmath51, corresponding to cluster diffusion due to correlated evaporation-condensation, and xmath9, corresponding to cluster diffusion due to periphery diffusion, as well as for higher values of xmath15 (xmath62 and xmath63). this paper is organized as follows. in sec ii we describe our model in detail, along with the parameters used in our simulations, while in sec iii we discuss the methods we have used to enhance the simulation efficiency. in sec iv we derive a generalized scaling form for the isd which is appropriate for the case of a power-law isd with xmath64, corresponding to xmath19. we then present our results for the scaling of the island size distribution and island and monomer densities as a function of xmath65, coverage, and xmath15 in sec v. finally, in sec vi we discuss our results.
for simplicity, we have studied a model of irreversible aggregation in which all islands are assumed to be circular, and rapid island relaxation, perhaps due to periphery diffusion, is assumed. in particular, in our model each island or cluster of size xmath3, where xmath3 is the number of monomers in a cluster, is represented by a circle with area xmath66 and diameter xmath67, where xmath68 is the monomer diameter. in addition, each cluster of size xmath3 may diffuse with diffusion rate xmath69, where xmath70 is the monomer diffusion rate, xmath41 is the monomer hopping rate, and xmath71 is the hopping length; similarly, we may write xmath72, where xmath73 is the hopping rate for a cluster of size xmath3. in order to take deposition into account, monomers are also randomly deposited onto the substrate with rate xmath74 per unit time per unit area. since instantaneous coalescence and relaxation is assumed, whenever two clusters touch or overlap a new island is formed whose area is equal to the sum of the areas of the original clusters and whose center corresponds to the center of mass of both islands.
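the elementary moves just described (size-dependent hopping and instantaneous coalescence of overlapping circles) can be summarized in a short sketch; the parameter values below (mobility exponent, rates, monomer diameter) are placeholders chosen for illustration, not the values used in the simulations.

```python
# minimal sketch of the two elementary moves of the model described above:
# size-dependent cluster hopping with rate d1 * s**(-mu), and instantaneous
# coalescence of two overlapping circular clusters into one circle of the
# combined area, centred at the common centre of mass.
import numpy as np

MU = 0.5             # cluster-mobility exponent (illustrative)
D1 = 1.0             # monomer hopping rate (illustrative units)
D_MONOMER = 1.0      # monomer diameter

def hop_rate(s, mu=MU, d1=D1):
    """Hopping rate of a cluster containing s monomers."""
    return d1 * s ** (-mu)

def diameter(s):
    """Diameter of a compact circular cluster of s monomers (area conservation)."""
    return D_MONOMER * np.sqrt(s)

def overlap(s1, c1, s2, c2):
    """True if two circular clusters touch or overlap."""
    return np.linalg.norm(c1 - c2) < 0.5 * (diameter(s1) + diameter(s2))

def hop(center, step=D_MONOMER):
    """Displace a cluster centre by 'step' in a random direction."""
    angle = 2.0 * np.pi * np.random.rand()
    return center + step * np.array([np.cos(angle), np.sin(angle)])

def coalesce(s1, c1, s2, c2):
    """Merge two clusters: areas add, centre is the area-weighted centre of mass."""
    s = s1 + s2
    return s, (s1 * c1 + s2 * c2) / s

# usage: merge a 4-monomer and a 9-monomer cluster whose circles overlap
s, c = coalesce(4, np.array([0.0, 0.0]), 9, np.array([1.5, 0.0]))
print(s, c, diameter(s))
```

note that the merged diameter follows from area conservation and the merged centre is the area-weighted centroid, as in the coalescence rule stated above.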
we note that in some cases a coalescence event may lead to overlap of the resulting cluster with additional clusters; in this case, coalescence is allowed to proceed until there are no more overlaps. in addition, if a monomer lands on an existing cluster, then that monomer is automatically absorbed by the cluster. thus, at each step of our simulation, either a monomer is deposited, followed by a check for overlap with any clusters, or a cluster is selected for diffusion. if a cluster is selected for diffusion, then the center of the cluster is displaced by a distance xmath71 in a randomly selected direction. for computational efficiency, and also because it is the smallest length scale in the problem, in most of the results presented here we have assumed xmath75; however, we have also carried out some simulations with smaller values (xmath76 and xmath77) in order to approach the continuum limit. as discussed in more detail in sec vi, our results indicate that the dependence of the island and monomer densities on the hopping distance xmath71 is relatively weak. we note that, besides the exponent xmath15 describing the dependence of the cluster diffusion rate on cluster size, the other key parameter in our simulations is the ratio xmath78 of the monomer hopping rate to the monomer deposition rate, scaled by the ratio of the hopping length to the monomer diameter, e.g. xmath79. we note that this definition implies that the dimensionless ratio xmath80 of the monomer diffusion coefficient xmath81 to the deposition flux satisfies xmath82.
our simulations were carried out assuming a 2d square substrate of size xmath83 (in units of the monomer diameter xmath68) and periodic boundary conditions. in order to avoid finite-size effects, the value of xmath83 used (xmath84) was relatively large, while our results were averaged over xmath85 runs in order to obtain good statistics. in order to determine the asymptotic dependence of the island density on coverage and xmath78, our simulations were carried out using values of xmath86 ranging from xmath87 up to a maximum coverage of xmath88 monolayers (ml). in order to study the dependence on xmath15, simulations were carried out for xmath7 (corresponding to brownian diffusion or uncorrelated evaporation-condensation), xmath51 (corresponding to correlated evaporation-condensation), and xmath89 (corresponding to periphery diffusion), as well as for higher values (xmath10 and xmath11) and the case xmath12, corresponding to only monomer diffusion. in order to obtain a quantitative understanding of the submonolayer growth behavior, we have measured a variety of quantities, including the monomer density xmath90 (where xmath91 is the number of monomers in the system) as a function of coverage xmath4, and the total island density xmath92, where xmath93 is the total number of islands (including monomers) in the system. in addition, we have also measured the island size distribution xmath1, where xmath94 corresponds to the density of islands of size xmath3. we note that the factors of xmath95 in the definitions above take into account the fact that the area of a monomer is xmath96, and as a result the densities defined above all correspond to area fractions; similarly, the coverage xmath97 corresponds to the fraction of the total area covered by islands, including monomers.
while a simple monte carlo approach can be used xcite to simulate the processes of monomer deposition and cluster diffusion, such a method can be very inefficient for large values of xmath78 and small values of xmath15, since the large
range of island sizes and diffusion rates can lead to a low acceptance ratio. accordingly, here we use a kinetic monte carlo approach. in particular, if we set the deposition rate xmath36 per unit area xmath98 equal to xmath99, then the total deposition rate in the system is xmath100, while the hopping rate for a cluster of size xmath3 is given by xmath101; as a result, the total diffusion rate for all clusters is given by xmath102, where xmath103 is the number of clusters of size xmath3, while the total rate of deposition onto the substrate is xmath100. the probability xmath104 of depositing a monomer is then given by xmath105, while the probability of cluster diffusion is xmath106. if cluster diffusion is selected, then a binary tree xcite, whose bottom leaves correspond to the total hopping rate xmath107 for each size xmath3, may be used to efficiently select, with the correct probability, which cluster will move, as well as to efficiently update xmath108. however, for large xmath78 and small xmath15 the maximum cluster size can be larger than xmath109, and as a result the computational overhead associated with the binary tree can still be significant. accordingly, we have implemented a variation xcite of the binary-tree approach in which a range of cluster sizes is grouped together into a single leaf or bin. in particular, to minimize the size of the binary tree, starting with island size xmath110 we have used variable bin sizes such that each bin contains several different cluster sizes, ranging from a starting value xmath42 to a value approximately equal to xmath111. using this scheme allows us to use a binary tree with a maximum of xmath112 leaves and a rejection probability of only xmath113; to further decrease the computational overhead, our binary tree grows dynamically from xmath114 leaves to as many as needed. by properly selecting the rates in the binary tree and the corresponding acceptance probabilities, one can ensure that each diffusion event is selected with the proper rate. in particular, if we define the rate of bin xmath42 as xmath115, where xmath116 is the maximum cluster diffusion rate in bin xmath117 (corresponding to the smallest cluster size in the bin) and xmath118 is the number of islands in the bin, then the sum over all leaves may be written xmath119. the probability of attempting a diffusion event is then given by xmath120, while the probability of selecting bin xmath42 is given by xmath121. once a bin is selected using the binary tree, a specific cluster is then selected randomly from the list of all the clusters in that bin; this implies that a cluster of size xmath3 will be selected with probability xmath122. thus, by assuming an acceptance probability for the selected cluster diffusion event given by xmath123, each diffusion event will be selected with the proper rate (a minimal code sketch of this binned selection is given below).
since our simulations are carried out off lattice, one of the most time-consuming processes is the search for overlaps every time a cluster is moved. while the simplest way to carry out such a search is to check for overlaps with all other islands in the system, the search time scales as xmath100, and as a result it becomes very time consuming for large systems. accordingly, we have used a neighbor look-up table xcite which contains a list of all other islands within a buffer distance of each island; the search for overlaps is then carried out only among the neighbors on this list, rather than over all the islands in the system. the neighbor list is updated whenever the total displacement of any island since the last update is larger than half the buffer distance.
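the binned, rejection-based selection of the diffusing cluster referred to above can be illustrated as follows. the factor-of-two bins, the mobility exponent and the rates in this sketch are assumptions made for illustration, and the production code uses a dynamically grown binary tree rather than this linear scan over bins.

```python
# minimal sketch: select a cluster with probability proportional to its hop rate
# d1 * s**(-mu), by (i) choosing a bin of sizes [2**b, 2**(b+1)) with probability
# proportional to n_bin * r_max(bin), (ii) choosing a cluster uniformly within
# the bin, and (iii) accepting it with probability r(s) / r_max(bin).
import math
import random
from collections import defaultdict

MU, D1 = 0.5, 1.0

def rate(s):
    return D1 * s ** (-MU)

def bin_index(s):
    return int(math.floor(math.log2(s)))    # bins [1,2), [2,4), [4,8), ...

def select_cluster(sizes):
    """Return the index of one cluster, drawn with probability proportional to rate."""
    bins = defaultdict(list)
    for i, s in enumerate(sizes):
        bins[bin_index(s)].append(i)
    # bin weight = (number of clusters) * (largest rate in the bin = rate of smallest size)
    weights = {b: len(idx) * rate(2 ** b) for b, idx in bins.items()}
    total = sum(weights.values())
    while True:
        x, acc = random.random() * total, 0.0
        for b, w in weights.items():         # choose a bin proportionally to its weight
            acc += w
            if x < acc:
                break
        i = random.choice(bins[b])           # uniform choice within the bin
        if random.random() < rate(sizes[i]) / rate(2 ** b):   # rejection step
            return i

print(select_cluster([1, 1, 3, 8, 20, 500]))
```

since the rate decreases with cluster size, the acceptance probability is at most one, and a short calculation shows that each cluster is ultimately selected with probability proportional to its own rate, which is the property claimed above for the binned scheme.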
to speed up the updates of the neighbor table, we have also used a grid method xcite in which our system is divided into an xmath124 by xmath124 grid of boxes of size xmath125, and each cluster can be rapidly assigned to a given box. using this method, the search for neighbors only includes clusters within an island's box as well as the xmath126 adjacent boxes; as a result, the table update time is reduced to xmath127 instead of xmath100. to further optimize the speed of our simulations, the grid size is varied as the average island size increases.
as discussed in sec i, in both simulations and experiments on submonolayer epitaxial growth the island size distribution (isd) is typically assumed to satisfy the scaling form given in eq isdscal. however, this scaling form has been derived xcite on the assumption that there is only one characteristic size scale xmath27, corresponding to the average island size, and that the isd does not diverge for small xmath128. in our simulations of monomer deposition and cluster diffusion and aggregation with xmath19, however, we find that the isd exhibits a well-defined power-law behavior for small xmath128; in addition, the existence of a shoulder in the isd for large xmath129 implies the existence of a second characteristic length scale, which scales as xmath28. we note that this corresponds to an island-size scale at which steady-state behavior breaks down due to mass conservation and a finite diffusion length. in general, one would expect this to lead to a more complicated two-variable scaling of the form xmath130. however, if the power-law behavior for small xmath128 is well defined and xmath64, then it is possible to derive a generalized scaling form involving only one variable. in particular, we assume that a scaling form for the island size distribution may be written xmath131. in order to determine xmath132, note that xmath133; converting to an integral, this may be rewritten as xmath134, where xmath135. if we now assume that xmath136 for small xmath137 and xmath64, then the small-xmath137 part of the integral dominates and we obtain xmath138. this leads to the generalized scaling form xmath139. we note that a similar scaling form, corresponding to the special case xmath140, has previously been derived in ref for the case of the deposition of spherical droplets with dimension xmath141 on a xmath142-dimensional substrate. we also note that for xmath140 and xmath143, corresponding to the critical value of xmath23, the standard scaling form eq isdscal is obtained.
we first consider the case xmath31, corresponding to stokes-einstein diffusion. fig dens05a shows our results for the total cluster density xmath0 (including monomers) as well as for the monomer density xmath39 as a function of coverage for three different values of xmath144, ranging from xmath145 to xmath146. in good agreement with the theoretical prediction in refs of steady-state behavior for xmath19, we find that both the monomer density xmath39 and total island density xmath0 reach an approximately constant value beyond a critical coverage xmath17. we note that this coverage decreases with increasing xmath78, while the peak island and monomer densities also decrease with increasing xmath78. the inset in fig dens05b shows our results for the exponents xmath13 (xmath147) and xmath16 (xmath148), corresponding to the dependence of the peak island density xmath149 and coverage xmath17 on xmath144. in qualitative agreement with the results of mulheran et al xcite for fractal islands, the value of xmath13 obtained in
our simulations is slightly lower but close to xmath150 this is also consistent with the predictionxcite that for point islands xmath13 should be equal to xmath150 with logarithmic corrections fig dens05b shows the corresponding scaled island density xmath151 as a function of the scaled coverage xmath152 as can be seenthere is good scaling up to and even somewhat beyond the value xmath153 corresponding to the peak in the island density in contrast replacing the scaled coverage by xmath154 as in ref leads to good scaling at xmath155 but the scaling is significantly worse for xmath156 also shown is the scaled monomer density xmath157 where the peak monomer density scales as xmath158 and the coverage corresponding to the peak monomer density scales as xmath159 and xmath160 as a function of the scaled coverage xmath161 as for the case of the island density there is good scaling up to and even beyond the scaled coverage corresponding to the peak of the monomer density we note that in contrast to the exponents xmath13 and xmath16 the exponent xmath162 does not appear to depend on xmath15 in particular we find that for all the values of xmath15 that we have studied the value of xmath162 xmath163 is close to the value xmath164 expected in the absence of cluster diffusion we now consider the scaled island size distribution isd in refs and steady state power law behavior of the form xmath165 where xmath58 was predicted for xmath19 for island sizes xmath166 where xmath22 corresponds to the shoulder in the isd for large xmath3 similarly the exponent xmath26 characterizing the scaling of xmath22 as a function of xmath27 eg xmath28 was predicted to satisfy the expression xmath167 we note that for xmath168 these expressions imply that xmath169 and xmath170 since xmath171 and xmath172 one has xmath173 accordingly eq steadystate may be rewritten as xmath174 fig fig isd05a shows the isd scaled using this form as can be seen there is reasonably good scaling for xmath175 although the tail of the distribution does not scale however the measured value of the exponent xmath23 xmath24 is significantly higher than the predicted value in addition the measured value of xmath26 xmath176 is also significantly higher than the predicted value fig isd05b shows the corresponding scaling results obtained using the generalized scaling form eq ns2q and assuming xmath177 and xmath178 we note that this implies that xmath179 as can be seen in this case both the power law region for small xmath128 as well as the bump for large xmath128 scale well using this form we note however that for the smallest clusters eg monomers and dimers there is poor scaling due to deviations from power law behavior for small xmath3 and xmath39 as a function of coverage xmath4 for xmath180 and xmath7 b scaled densities xmath181 and xmath182 as a function of scaled coverage xmath183 and xmath184 respectively inset shows dependence of peak island density xmath149 and coverage xmath17 on xmath144width283 obtained using steady state scaling form eq steadystate2 b scaled isd obtained using generalized scaling form eq ns2q with xmath185 and xmath177 width283 we now consider the case xmath186 which corresponds to cluster diffusion via correlated attachment detachment we note that this is the critical value for power law behavior of the isd which is expected to occur for xmath187 and as a result krapivsky et alxcite have predicted nested logarithmic behavior for the island density since the simulations are not as computationally demanding as for xmath7 in 
this case we have carried out simulations up to xmath188 fig dens1a shows our results for the total island density xmath0 and monomer density xmath39 as a function of coverage for xmath189 as can be seen while there is a plateau in the island density which appears to broaden and flatten somewhat with increasing xmath190 the plateau is not as flat as for the case xmath7 thus indicating deviations from steady state behavior as for the casexmath191 a plot of the scaled densities xmath181 xmath182 as a function of scaled coverage xmath183 xmath184 shows relatively good scaling up to the coverage corresponding to the peak island density although the value of xmath13 xmath55 is slightly lower than that obtained for xmath7 we now consider the island size distribution as shown in fig fig dens1b in this case the isd does not exhibit a well defined power law behavior in particular on a log log plot the isd is curved with a slope xmath192 for small xmath3 and a smaller effective slope xmath193 for large xmath3 similarly while xmath194 its effective value ranges from xmath195 to xmath196 depending on the value of xmath144 and coverage as a result neither the standard scaling form eq isdscal nor the generalized scaling form eq ns2q can be used to scale the entire island size distribution however using the generalized scaling form ns2q with xmath194 and xmath197 we find good scaling for small xmath128 see fig fig dens1b although the isd does not scale for large xmath128 on the other hand if we use the standard scaling form isdscal which corresponds to the generalized scaling form with xmath140 and xmath143 see inset of fig fig dens1b then the isd scales for xmath198 but not for small xmath3 we note that this lack of scaling is perhaps not surprising since for xmath32 there are two characteristic size scales xmath27 and xmath22 but no well defined power law behavior and xmath39 as a function of coverage xmath4 for xmath180 and xmath51 bscaled isd for xmath51 using generalized scaling form ns2q with xmath140 and xmath197 results correspond to coverages xmath199 xmath180 and xmath200 inset shows corresponding scaling results obtained using the standard scaling form isdscal width283 we now consider the case xmath89 which corresponds to cluster diffusion via edge diffusion fig dens32a shows our results for the total island density xmath0 and monomer density xmath39 as a function of coverage for xmath201 as can be seen while there is a plateau in the island density which appears to broaden with increasing xmath190 it is not as flat as for the case xmath51 thus indicating deviations from steady state behavior as for the casexmath200 a plot of the scaled densities xmath181 xmath182 as a function of scaled coverage xmath183 xmath184 shows relatively good scaling up to the coverage corresponding to the peak island density we note that for xmath202 krapivsky et alxcite have predicted that for point islands there is a continuous logarithmic increase in the total island density of the form xmath203mu2 labelsinequ however we find that for xmath9 and higher not shown scaling plots using this form eg xmath204 as a function of xmath205mu2 provide very poor scaling in particular since xmath55 the scaled peak island density increases with xmath35 while the peak position also shifts significantly to smaller values we now consider the scaled isd for xmath9 again in this case it is not possible to scale the entire isd using the average island size xmath27 since there are two characteristic size scales but no well defined 
power law behavior in particular if we use the generalized scaling form eq ns2q with xmath197 and xmath140 then reasonable scaling is only obtained for the small xmath3 tail corresponding to xmath206 not shown in addition as shown in fig fig dens32b using the standard scaling form eq isdscal neither the tail nor the peak scale we note that the height and width of the power law portion of the isd decrease with increasing xmath190 and coverage while the peak near xmath207 becomes higher and sharper as a result the power law portion of the isd is significantly less important than for smaller values of xmath15 in particular for xmath208 and xmath209 it corresponds to only approximately xmath113 of the area under the curve

[figure caption: ... and xmath39 as a function of coverage xmath4 for xmath180 and xmath9 b scaled isd for xmath9 using standard scaling form ns2q with xmath140 and xmath197]

fig fig picture shows pictures of the submonolayer morphology for xmath210 and xmath209 for xmath211 and xmath212 we note that the size scale xmath213 of each picture decreases with increasing xmath15 so that approximately the same number of islands is visible as can be seen in qualitative agreement with our results there is a very broad distribution of island sizes for xmath7 while the distribution becomes narrower with increasing xmath15

[figure caption: ... of the submonolayer morphology at coverage xmath209 and xmath208 for a xmath7 m 4096 b xmath51 m 709 c xmath9 m 624 d xmath214 m 485]

in order to obtain a better understanding of the dependence of the island density and isd on the mobility exponent xmath15 we have also carried out additional simulations for larger values of xmath15 xmath216 and xmath11 as well as in the limit xmath12 in which only monomers can diffuse fig isd2a shows the corresponding results for the scaled isd for xmath217 using the standard scaling form eq isdscal for different values of the coverage xmath4 and xmath144 as for the case xmath9 the isd does not scale although the power law portion for small xmath128 is significantly reduced instead the peak of the scaled isd increases with increasing coverage and xmath144 we also note that for xmath210 and xmath209 the peak height is significantly higher than for xmath218 while the peak position is closer to xmath219 similar results for the scaled isd for xmath220 are shown in fig fig isd2b although in this case it tends to sharpen more rapidly with increasing xmath144 and coverage these results also suggest that while the scaled isd may approach a well defined form independent of coverage and xmath144 in the asymptotic limit of large xmath144 the corresponding scaling function depends on xmath15 such a xmath15 dependence is consistent with the dependence of the exponents xmath13 and xmath14 on xmath15 see fig fig allchi

[figure caption: ... and coverage xmath221 for a xmath222 and b xmath220]

fig fig isd6 shows our results for the scaled isd for xmath223 as well as in the limit xmath224 in which only monomers can diffuse somewhat surprisingly we find that for xmath223 the scaled isd is significantly broader than for xmath225 and xmath220 although it is still more sharply peaked than for xmath12 these results suggest that at least for finite fixed xmath144 the peak height depends non monotonically on xmath15 eg it increases from xmath9 to xmath220 but then decreases for higher xmath15 this is also consistent with our results for xmath12 see fig fig isd6b for which good scaling is observed but with a peak height which is lower than for xmath223

[figure caption: ... and coverage xmath221 for a xmath226 and b xmath12]

fig fig allchia shows a summary of our results for the monomer density xmath39 and total island density xmath0 as a function of coverage for xmath227 and xmath63 for the case xmath210 as can be seen up to the coverage xmath228 corresponding to the peak monomer density both the island and monomer density are essentially independent of xmath15 fig allchia also shows clearly that both the island density and the coverage xmath17 corresponding to the peak island density increase with increasing xmath15 while the monomer density decreases with increasing xmath15 fig allchib shows a summary of our results for the dependence of the exponents xmath13 xmath14 and xmath162 on xmath15 as can be seen the exponent xmath13 depends continuously on xmath15 decreasing from a value close to xmath150 for small xmath15 xmath7 and approaching a value close to xmath229 for large xmath15 we note that these results are similar to previous results obtained for fractal islands with xmath230 by mulheran and robbie xcite similarly we find that the exponent xmath14 describing the dependence of the monomer density at fixed coverage on xmath144 also shows a continuous variation with increasing xmath15 starting at a value close to xmath150 for xmath7 and increasing to a value close to xmath231 for large xmath15 in contrast the exponent xmath162 describing the flux dependence of the peak monomer density is close to xmath150 for all xmath15

motivated in part by recent experiments on colloidal nanoparticle island nucleation and growth during droplet evaporation xcite we have carried out simulations of a simplified model of irreversible growth of compact islands in the presence of monomer deposition and a power law dependence xmath50 of the island mobility xmath5 on island size xmath3 in particular we have considered the cases xmath7 corresponding to cluster diffusion via brownian motion xmath51 corresponding to cluster diffusion via correlated evaporation condensation and xmath9 corresponding to cluster diffusion via periphery diffusion for comparison we have also carried out simulations for higher values of xmath15 including xmath216 and xmath11 as well as xmath12 in agreement with the predictions of ref and ref for point islands we find that for small values of xmath15 the value of the exponent xmath13 characterizing the dependence of the peak island density on xmath144 is close to but slightly lower than xmath150 however we also find that xmath13 decreases continuously with increasing xmath15 approaching the value xmath229 for large xmath15 as already noted these results are in good agreement with previous results obtained for fractal islands xcite similarly the exponent xmath14 characterizing the dependence of the peak monomer density on xmath144 is also close to xmath232 for small xmath15 but increases with increasing xmath15 approaching the value xmath231 in the limit xmath233 in contrast the exponent xmath16 describing the dependence of the coverage xmath17 corresponding to the peak island density on xmath144 is significantly smaller than xmath150 for small xmath15 and also decreases with xmath15 approaching zero in the limit of infinite xmath15 this is consistent with the fact that when only monomers are mobile xmath12 the peak island density occurs at a coverage which is independent of xmath144 in the asymptotic limit of large xmath144 for comparison we note that while the monomer density xmath234 depends on xmath144 it only depends on xmath15 for coverages beyond the peak monomer density see fig fig allchia as a result the exponents xmath162 and xmath18 corresponding to the dependence of the peak monomer density and corresponding coverage xmath228 on xmath144 are close to xmath150 for all xmath15 the similarity of our results for xmath13 and xmath16 to previous results xcite for fractal islands suggests that these exponents along with the exponent xmath14 depend primarily on the cluster mobility exponent xmath15 and substrate dimension xmath142 but not on the shape or fractal dimension of the islands we note that such a result is not entirely surprising since for the case in which only monomers can diffuse xmath12 it has been found that the exponent xmath13 depends only weakly on the island fractal dimension xcite in addition we have found that the scaled island and monomer densities xmath235 and xmath236 lead to reasonably good scaling as a function of xmath237 up to and somewhat beyond the peak island density we note that this scaling form is somewhat different from that used in ref in which the coverage is scaled by xmath238 so that only the peak scales in addition to the scaling of the island and monomer densities we have also studied the dependence of the island size distribution isd on the cluster mobility exponent xmath15 in agreement with the prediction xcite that for point islands well defined power law behavior should be observed for xmath19 for the case xmath7 we find a broad distribution of island sizes with a well defined power law however in contrast to the point island prediction that xmath58 which implies xmath169 for xmath7 the value of xmath23 obtained in our simulations xmath24 is somewhat larger similarly the value of the exponent xmath176 describing the dependence of the crossover island size xmath22 on xmath27 for xmath7 is also significantly larger than the point island prediction xmath240 one possible explanation for this is that for compact islands the coalescence rate decreases more slowly with increasing island size than for point islands due to the increase in aggregation cross section with increasing island radius however another possible explanation is the existence of correlations that are not included in the mean field smoluchowski equations in particular we note that in previous work for the case of irreversible growth in the absence of cluster diffusion xmath12 it has been shown xcite that there exist strong correlations between the size of an island and the surrounding capture zone

[figure caption: ... not including monomers and monomer density xmath39 for xmath241 xmath242 and continuum limit corresponding to xmath243 inset shows dependence of peak monomer density xmath39 on xmath71]

we note that in contrast to previously studied growth models with only limited cluster diffusion xcite in which there is a single well defined peak in the isd corresponding to the average island size xmath27 in the presence of significant cluster mobility there are typically two different size scales xmath27 and xmath22 as a result in general it is not possible to scale the isd using just the average island size xmath27 however for the case xmath19 corresponding to well defined power law behavior up to a critical island size xmath22 our results confirm that for compact islands the isd exhibits steady state behavior as a result the power law region corresponding to xmath175 can be scaled using eq steadystate2 although the large xmath3 tail does not scale accordingly we have proposed a generalized scaling form for the isd xmath30 for the case xmath19 using this form we
have obtained excellent scaling for the case xmath7 in contrast for xmath51 there are still two competing size scales xmath27 and xmath22 but there is no well defined power law behavior as a result no single scaling form can be used to scale the entire isd however we find that the value of the exponent xmath26 xmath194 is close to that obtained using the point island expression xmath59 in addition for small xmath128 the isd satisfies xmath244 where xmath245 as a result we find that the small xmath128 tail of the isd can be scaled using the generalized scaling form eq ns2q with xmath197 and xmath140 while the standard scaling form eq isdscal leads to reasonably good scaling of the isd for xmath198 however for xmath202 there is no effective power law behavior and as a result neither the general scaling form eq ns2q nor the standard scaling form eq isdscal lead to good scaling of the isd for finite xmath144 instead we find that using the standard scaling form eq isdscal the fraction of islands corresponding to small xmath128 decreases with increasing xmath144 and coverage while the peak of the scaled isd increases in height and becomes sharper as a result the peak position shifts to the left with increasing xmath144 and coverage and appears to approach xmath99 for large xmath144 interestingly this implies as shown in figs fig dens32b fig isd2 and fig isd6 that for xmath202 the peak of the scaled isd is even higher than for the case of irreversible growth without cluster diffusion xmath12 however our results also suggest that at least for fixed coverage and finite fixed xmath144 the peak height of the scaled isd exhibits a non monotonic dependence on xmath15 since it increases from xmath9 to xmath220 but is smaller for xmath223 it is also interesting to compare our results for xmath202 with those obtained by kuipers and palmerxcite who studied the scaled isd for the case of fractal islands assuming an exponential dependence of the cluster mobility eg xmath246 where xmath247 because of the rapid decay of the mobility with increasing cluster size assumed in their model the resulting scaled island size distributions using the standard scaling form eq isdscal were much closer to those obtained for the case of irreversible growth with no cluster mobility eg xmath12 than the results presented here however for values of xmath248 which were not too small they also found some evidence of a small island size tail although it was much weaker than found here it is also interesting to consider the applicability of the model studied here to recent experiments by bigioni et alxcite for the case of colloidal nanoparticle cluster formation during drop drying we note that in this case one expects that clusters will diffuse on the droplet surface via brownian motion which implies that xmath7 however one also expects that due to the relatively weak van der waals attraction between nanoparticles in this case cluster formation may be reversible accordingly it would be interesting to carry out additional simulations for the case of reversible growth corresponding to a critical island size xmath249 finally we consider the continuum limit of our simulations as already mentioned while our simulations are off lattice in all of the results presented so far we have assumed a hopping length xmath71 equal to the monomer diameter xmath68 we note that this makes our simulations similar to previous simulationsxcite with and without cluster mobility in which a lattice was assumed however it is also interesting to consider the 
continuum limit xmath250 in order to do so we have carried out additional simulations with smaller values of xmath71 xmath251 and xmath252 in general we find that both the monomer density xmath39 as well as the density xmath253 of all clusters not including monomers exhibit a weak but linear dependence on the hopping length xmath71 see inset of fig fig deltab eg xmath254 where xmath255 corresponds either to the monomer or island density and xmath256 corresponds to the continuum limit accordingly by performing a linear extrapolation we may obtain the corresponding densities in the continuum limit as shown in fig fig delta for xmath20 and xmath9 the island density xmath253 depends relatively weakly on the hopping length and as a result there is very little difference between our results for xmath75 and the continuum limit in contrast the monomer density exhibits a somewhat stronger dependence on the hopping length xmath257 however in general we find xmath258 while the value of xmath259 decreases with increasing xmath15 in particular in the limit xmath12 in which only monomers can diffuse we find xmath260 xmath261 for the island and monomer density respectively these results indicate that in the continuum limit the island and monomer densities are only slightly lower than in our simulations accordingly we expect that in the continuum limit the scaling behavior will not be significantly different from the results presented here this work was supported by air force research laboratory space vehicles directorate contract no fa9453 08c0172 as well as by nsf grant che1012896 we would also like to acknowledge a grant of computer time from the ohio supercomputer center
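The continuum-limit estimate described above relies on the observed linear dependence of the island and monomer densities on the hopping length. A minimal sketch of such a linear extrapolation is given below; the density values and variable names are placeholders, not the simulation data.

    import numpy as np

    # Hypothetical densities measured at several hopping lengths delta
    # (in units of the monomer diameter); the real values are in the
    # paper's figures and are not reproduced here.
    delta = np.array([1.0, 0.5, 0.25])
    density = np.array([3.10e-3, 3.05e-3, 3.02e-3])

    # Assuming the observed linear dependence N(delta) = N0 + c*delta,
    # a first-order fit gives the continuum-limit estimate N0.
    c, n0 = np.polyfit(delta, density, 1)
    print(f"continuum-limit density (delta -> 0): {n0:.3e}")
    print(f"relative change from delta = 1: {(density[0] - n0) / density[0]:.1%}")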
the effects of cluster diffusion on the submonolayer island density xmath0 and island size distribution xmath1 where xmath2 is the density of islands of size xmath3 at coverage xmath4 are studied for the case of irreversible growth of compact islands on a 2d substrate in our model we assume instantaneous coalescence of circular islands while the mobility xmath5 of an island of size xmath3 where xmath3 is the number of particles in an island satisfies xmath6 results are presented for xmath7 corresponding to brownian motion xmath8 corresponding to correlated evaporation condensation and xmath9 corresponding to cluster diffusion via edge diffusion as well as for higher values including xmath10 and xmath11 we also compare our results with those obtained in the limit of no cluster mobility xmath12 in general we find that the exponents xmath13 and xmath14 describing the flux dependence of the island and monomer densities respectively vary continuously as a function of xmath15 similarly the exponent xmath16 describing the flux dependence of the coverage xmath17 corresponding to the peak island density also depends continuously on xmath15 although the exponent xmath18 describing the flux dependence of the coverage corresponding to the peak monomer density does not in agreement with theoretical predictions that for point islands with xmath19 power law behavior of the island size distribution isd is expected for xmath20 we find xmath21 up to a cross over island size xmath22 however the value of the exponent xmath23 obtained in our simulations xmath24 is higher than the point island prediction xmath25 similarly the measured value of the exponent xmath26 corresponding to the dependence of xmath22 on the average island size xmath27 eg xmath28 is also significantly higher than the point island prediction xmath29 for xmath19 a generalized scaling form for the isd xmath30 is also proposed and using this form excellent scaling of the entire distribution is found for xmath31 however for finite xmath32 we find that due to the competition between two different size scales neither the generalized scaling form nor the standard scaling form xmath33 lead to scaling of the entire isd for finite values of the ratio xmath34 of the monomer diffusion rate to deposition flux instead we find that the scaled isd becomes more sharply peaked with increasing xmath35 and coverage this is in contrast to models of epitaxial growth with limited cluster mobility for which good scaling occurs over a wide range of coverages
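For reference, the scaling collapse referred to in the abstract can be illustrated with the commonly used form N_s(theta) ~ (theta/S^2) f(s/S), where S is the mean island size; whether this matches the paper's exact scaling variables cannot be verified from this text, so the sketch below is only a generic example with made-up numbers.

    import numpy as np

    def scaled_isd(s, N_s, theta):
        """Rescale an island-size distribution with the commonly used form
        N_s(theta) ~ (theta / S**2) f(s / S); S is one standard definition
        of the mean island size (the paper's exact variables may differ)."""
        S = np.sum(s * N_s) / np.sum(N_s)    # mean island size
        return s / S, N_s * S**2 / theta     # scaled size, scaled distribution

    # Placeholder distribution (not data from the paper).
    s = np.arange(1.0, 2001.0)
    N_s = 1e-4 * np.exp(-s / 400.0)
    x, f = scaled_isd(s, N_s, theta=0.1)
    print(x[0], f[0])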
introduction model and simulations simulation methods generalized scaling form for the island-size distribution results discussion
the active binary capella xmath3 aurigae hd 34029 hr 1708 was observed with the high energy transmission grating spectrometer hetgs on the chandra x ray observatory cxo we present a first analysis of the spectra with the goals of demonstrating the hetgs performance and of applying plasma diagnostics to infer physical parameters of the capella corona a complementary analysis of the corona of capella based on high resolution spectra obtained using the cxo low energy transmission grating spectrometer letgs has been presented by xcite further analysis of diagnostic emission lines from these and other chandra grating data of capella are underway with the goal of obtaining refined temperature dependent emission measures abundances and densities leading to a self consistent determination of the coronal structure the chandra hetgs the chandra hetgs the high energy transmission grating assembly xcite consists of an array of periodic gold microstructures that can be interposed in the converging x ray beam just behind the chandra high resolution mirror assembly when in place the gratings disperse the x rays according to wavelength creating spectra that are recorded at the focal plane by the linear array of ccds designated acis s there are two different grating types designated meg and heg optimized for medium and high energies partially overlapping in spectral coverage the hetgs provides spectral resolving power of xmath4 1000 for point sources corresponding to a line fwhm of about 002 for meg and 001 for heg and effective areas of 1 180 xmath5 over the wavelength range 12 30 04 10 kev multiple overlapping orders are separated using the moderate energy resolution of the acis detector the hetgs complements the letgs which is optimized for lower energy x rays for detailed descriptions of the instruments see httpchandraharvardedu preliminary analysis of in flight calibration data including those presented here indicates that the hetgs is performing as predicted prior to the chandra launch the spectral resolution is as expected and effective areas are within 10 of the expected values except from 612 where there are systematic uncertainties of up to 20 ongoing calibration efforts will reduce these uncertainties the coronal structure of capella the coronal structure of capella capella is an active binary system comprised of g1 and g8 giants in a 104 d orbit at a distance of 129 pc the g1 star rotates with an xmath6 d period xcite capella has been studied by many previous x ray telescopes including einstein xcite exosat xcite rosat xcite beppo sax xcite and asca xcite the fundamental parameters of capella some activity indicators and primary references may be found in xcite the corona of capella appears intermediate in temperature being cooler than those of rs cvn stars such as hr 1099 or ii peg but significantly hotter than a less active star like procyon x ray observations obtained at low to moderate spectral resolution are generally consistent with emission from an optically thin collisionally dominated plasma with two temperature components xcite spectra obtained by the extreme ultraviolet explorer euve have provided more discriminating temperature diagnostics showing plasma over a continuous range of temperatures with the peak emission measure near xmath7 xcite simultaneous measurements using euve and asca spectra did not require emission from plasma hotter than xmath8 xcite euve observations show variability by factors of 3 to 4 in lines formed above xmath9 xcite xcite have estimated plasma electron 
densities in the range from xmath10 to xmath11 from lines of fe xxi formed near xmath12 implying that the scale of the emitting volume is xmath13 although xcite question the reliability of this diagnostic xcite use euv lines of fe xviii to constrain the optical depth in the strong x ray emission line fe xvii xmath1415014 to xmath15 from high resolution uv spectra from the hubble space telescope xcite concluded that both stars have comparable coronal emission based on measurements of the fe xvii 1354 coronal forbidden line and that the plasma is magnetically confined thus the corona of capella is actually a composite of two coronae we combined data from three hetgs observations from 1999 august 28 september 24 25 for a total exposure of 89 ks data were processed with the standard chandra x ray center software versions from july 29 r4cu3upd2 and december 13 ciao 11 the image of the dispersed spectrum is shown in figure fig image each photon is assigned a dispersion angle xmath16 relative to the undiffracted zero order image the angle is related to the order xmath17 and wavelength xmath14 through the grating mean period xmath18 by the grating equation xmath19 the spectral order is determined using the acis s ccd pulse height for each photon event with wide latitude to avoid sensitivity to variations in ccd gain or pulse height resolution the positive and negative first orders were summed separately for heg and meg for all observations and divided by the effective areas to provide flux calibrated spectra figure fig spectrum listed in table tab linelist the fe xvii xmath20 line strength is within the uncertainties identical to that observed in 1979 with the einstein crystal spectrometer by xcite while the o viii xmath21 line is roughly half the previous value emission measure distribution emission measure distribution some properties of the coronal temperature structure can be deduced from a preliminary analysis of the spectrum the data warrant a full analysis of the volume emission measure distribution with temperature xmath22 xmath23 in which xmath24 is the electron density of plasma at temperature xmath25 which occupies the volume xmath26 which will be the subject of a future paper as table tab linelist illustrates the spectrum contains lines from different elements in a range of ionization states demonstrating that the emitting plasma has a broad range of temperature further evidence of multi temperature emission comes from two line ratios first ratios of h like to he like resonance lines o viii vii mg xii xi and si xiv xiii indicate ionization ratios corresponding to xmath27 655 660 675 685 and 695 700 respectively second the he like ions provide temperature sensitive ratios involving the resonance xmath28 forbidden xmath29 and intersystem xmath30 lines xcite for the observed o vii mg xi and si xiii lines the ratio xmath31 corresponds to temperatures xmath32 xmath33 and xmath34 respectively using the theoretical models of smith et al 1998 in the low density limit in both cases the ratios indicate that the corona has a broad range of temperature an approximate upper envelope to the true xmath35 distribution is given by the family of curves formed by plotting the ratio of line strength to corresponding emissivity for a collection of lines for a given element its abundance affects only the overall normalization of the envelope of all lines from that element for this initial analysis we assumed solar abundances xcite which is consistent with previous analyses except possibly for ne xcite the 
vem envelope of figure fig vemt indicatates that plasma must be present over nearly a decade in temperature the absence of lines from he like and h like ions of fe provides an upper limit to the xmath35 above xmath36 although the envelope does not trace closely the peaked distribution derived from euv lines such a distribution is not excluded density diagnostics density diagnostics the he like xmath37 ratio is primarily sensitive to density using the theoretical line ratios of smith et al 1998 our measured o vii ratio of xmath38 implies an electron density within the range xmath39xmath40 similarly the mg xi and si xiii ratios of xmath41 and xmath42 give upper limits near xmath43 and xmath44 respectively we note that our ratio xmath37 for o vii is somewhat lower than that obtained by xcite from letgs spectra the hetgs and letgs observations were not simultaneous however based on evidence from prior euve observations xcite we would be surprised if this difference represented actual changes in the mean coronal plasma density instead we suggest that this results from different treatements of the continuum plus background which particularly affects the strength of the intercombination line these x ray data confirm that capella s corona contains plasma at multiple temperatures in the accessible range from xmath45 to xmath46 and set stringent constraints on the amount of plasma hotter than xmath47 at the time of this observation these properties are generally consistent with the results found with euve and asca xcite and the line strengths are close those seen 20 years earlier by xcite the preliminary results presented here have implications for the structure of capella s corona they suggest that the characteristic dimensions of the coronal loops at xmath48 are small compared to the stellar radius xmath49 for simple semi circular loops of constant circular cross section of radius xmath28 we use the measured density and xmath35 for oxygen to estimate loop heights xmath50 where xmath51 is the ratio of xmath28 to loop length in units of 01 and xmath52 is xmath53 the number of loops detailed loop modeling of xcite also required compact structures though variable cross section loops were needed to increase the proportion of hot to cool plasma work at mit was supported by nasa through the hetg contract nas8 38249 and through smithsonian astrophysical observatory sao contract svi61010 for the chandra x ray center cxc jjd and nsb were supported by nasa nas8 39083 to sao for the cxc we thank all our colleagues who helped develop the hetgs and all members of the chandra team anders e grevesse n 1989 geochimica et cosmochimica acta 53 197 brickhouse n s 1996 in proc iau colloq 152 astrophysics in the extreme ultraviolet ed s bowyer r f malina dordrecht kluwer 105 brickhouse n s dupree a k edgar r j liedahl d a drake s a white n e singh k p 2000 387 brinkman ac et al 2000 submitted brown gv beiersdorfer p liedahl da et al 1998 502 1015 canizares cr 2000 in preparation dempsey rc linsky jl schmitt jhmm and fleming ta1993 413333 dupree ak brickhouse ns doschek ga green jc and raymond jc 1993 418 l41 dupree ak brickhouse ns and sanz forcada j 2000 in preparation favata f mewe r brickhouse n s pallavicini r micela g dupree a k 1997 324 l37 gabriel a h 1972 160 99 gabriel a h jordan c 1969 145 241 griffiths nw jordan c 1998 497 883 holt s s white n e becker r h boldt e a mushotzky r f serlemitsos p j smith b w 1979 234 l65 hummel c a armstrong j t quirrenbach a buscher d f mozurkewich d elias ii n m 1994 107 1859 
lemen j r mewe r schrijver c j fludra a 1989 341 474 linsky jl wood be brown a osten ra 1998 492 767 markert th canizares cr dewey d mcguirk m pak cs schattenburg ml 1994 proc spie 2280 168 mewe r et al 1982 260 233 pradhan a k and shull jm 1981 249 821 saba jlr schmelz jt bhatia ak and strong kt 510 1064 schmitt jhmm collura a sciortino s vaiana gs harnden fr jr and rosner r 1990 365 307 schrijver cj mewe r van den oord ghj and kaastra js 1995 302 438 smith r k brickhouse n s raymond j c liedahl d a 1998 in proceedings of the first xmm workshop on science with xmm ed m dahlem noordwijk the netherlands strassmeier kg hall ds fekel fc and scheck m 1993 100 173 swank et al 1981 246 214 van den oord ghj schrijver cj camphens m mewe r kaastra js 326 1090 vedder pw canizares cr 1983 270 666

table tab linelist
fe xxv      1.85     xmath54   xmath55   7.8
s xv        5.040    33        110       7.2
s xv        5.060    16        55        7.2
s xv        5.100    26        85        7.2
si xiii     5.680    25        93        7.0
si xiv      6.180    48        472       7.2
si xiii     6.650    182       1228      7.0
si xiii     6.690    47        374       7.0
si xiii     6.740    121       834       7.0
al xii      7.750    16        204       6.9
mg xii      8.419    152       1947      7.0
mg xi       9.170    348       2818      6.8
mg xi       9.230    63        618       6.8
mg xi       9.310    190       1425      6.8
ne x        10.240   92        740       6.8
ni xxii     10.791   62        426       7.0
ne x        12.132   929       4171      6.8
fe xvii     12.134                       6.8
fe xix      13.515   530       1587      6.9
fe xix      13.524                       6.9
fe xvii     15.013   3043      7476      6.7
fe xvii     15.272   1119      2919      6.7
fe xviii    15.641   410       938       6.8
o viii      16.003   898       1885      6.5
fe xvii     16.796   2004      3669      6.7
fe xvii     17.071   2641      4554      6.7
fe xvii     17.119   2443      4191      6.7
o viii      18.967   2634      2810      6.5
o vii       21.600   967       396       6.3
o vii       21.800   255       102       6.3
o vii       22.100   736       249       6.3
n vii       24.779   549       327       6.3
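The wavelengths in the line list above follow from the dispersed spectrum through the grating equation described earlier (order times wavelength equals grating period times the sine of the dispersion angle). The sketch below shows the conversion; the period and angle used are placeholders, not the HETG calibration values.

    import numpy as np

    def wavelength(beta_rad, period_angstrom, order=1):
        """Grating equation m*lambda = p*sin(beta): wavelength from the
        dispersion angle beta, grating period p and diffraction order m."""
        return period_angstrom * np.sin(beta_rad) / order

    # Placeholder numbers, not the actual HETG calibration values.
    p = 4000.0               # assumed grating period in angstrom
    beta = np.radians(0.27)  # assumed dispersion angle
    print(f"{wavelength(beta, p):.3f} angstrom")  # about 18.85 for these inputs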
high resolution spectra of the active binary capella g8 iii g1 iii covering the energy range 0.4 8.0 kev 1.5 30 angstrom show a large number of emission lines demonstrating the performance of the hetgs a preliminary application of plasma diagnostics provides information on coronal temperatures and densities lines arising from different elements in a range of ionization states indicate that capella has plasma with a broad range of temperatures from xmath0 generally consistent with recent results from observations with the extreme ultraviolet explorer euve and the advanced satellite for cosmology and astrophysics asca the electron density is determined from he like o vii lines giving the value xmath1 at xmath2 he like lines formed at higher temperatures give only upper limits to the electron density the density and emission measure from o vii lines together indicate that the coronal loops are significantly smaller than the stellar radius
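The He-like diagnostics mentioned in the abstract are simple flux ratios, R = f/i (density sensitive) and G = (f + i)/r (temperature sensitive), where r, i and f denote the resonance, intersystem and forbidden lines. The sketch below only forms these ratios from placeholder fluxes; converting R into an electron density requires tabulated atomic models such as those cited in the article and is not attempted here.

    def helike_ratios(flux_r, flux_i, flux_f):
        """Density-sensitive R = f/i and temperature-sensitive G = (f+i)/r
        for a He-like triplet (r = resonance, i = intersystem, f = forbidden)."""
        return flux_f / flux_i, (flux_f + flux_i) / flux_r

    # Placeholder triplet fluxes in arbitrary units (not the measured values).
    R, G = helike_ratios(flux_r=10.0, flux_i=2.5, flux_f=7.5)
    print(f"R = {R:.2f}, G = {G:.2f}")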
introduction observations and data processing coronal diagnostics discussion
although the m dwarfs are the most numerous stars in our galaxy the mass metalicity and age dependencies of their stellar luminosities and radii are poorly calibrated the reason is the selection effect that plays against the detection of fainter and smaller stars less than 20 binaries with low mass dm components have empirically determined masses radii luminosities and temperatures see section sec global table tab stars as a resultthe mass luminosity relation is determined by only a few low mass stars this deficiency hindered the development of the models for the cool dense atmospheres of the m dwarfs it is established that all available models underestimate the radii by around 1015 per cent and overestimate the temperatures by 200300 k of short period binaries with dm components xcite the northern sky variability survey nsvs contains a great number of photometric data xcite that allows searching of variable stars and determination of their periods and types of variability a multiparametric method for search for variable objects in large datasetswas tested on the nsvs xcite and as a result many eclipsing stars were discovered one of them was gsc 2314 0530 xmath17 nsvs 6550671 xmath1402xmath18 xmath19xmath20 on the base of the nsvs photometry obtained in 19992000 we derived the ephemeris xmath21 and built its light curve fig fig nsvs we found that this star has been assigned also as swasp j02205085 3320476 according to the superwasp photometric survey xcite xcite reported its coincidence with the rosat x ray source 1rxs j0220507 332049 initially gsc 2314 0530 attracted our interest by its short orbital period because there were only several systems with non degenerate components and periods below the short period limit of 022 days xcite gsc 1387 0475 with xmath22 d xcite asas j071829 03367 with xmath23 d xcite the star v34 in the globular cluster 47 tuc with xmath24 d xcite and bw3 v38 with orbital period xmath25 d xcite when we established that the components of gsc 2314 0530 were dm stars our interest increased and we undertook intensive photometric and spectral observations in order to determine its global parameters and to add a new information for the dm stars as well as for the short period binaries the ccd photometry of gsc 2314 0530 in xmath0 bands was carried out at rozhen national astronomical observatory with the 2m rcc telescope equipped with versarray ccd camera 1300 xmath26 1340 pixels 20 xmath27 m pixel field of 525 xmath26 535 arcmin as well as with the 60cm cassegrain telescope using the fli pl09000 ccd camera 3056 xmath26 3056 pixels 12 xmath27 m pixel field of 171 xmath26 171 arcmin the average photometric precision per data point was 0005 0008 mag for the 60cm telescope and 0002 0003 mag for the 2m telescope table tab log1 presents the journal of our photometric observations it should be noted that the observations on 2009 december 30 are synchronous in the xmath0 colors colsoptionsheader the small orbital angular momentum is characteristic feature of all short period systems ranging from cvs to cb that seem to be old being at later stages of the angular momentum loss evolution as a result of the period decrease we calculated the orbital angular momentum of the target by the expression xcite xmath28 where xmath29 is in days and xmath30 are in solar units the obtained value xmath31 of gsc 2314 0530 is considerably smaller than those of the rs cvn binaries and detached systems which have xmath32 the orbital angular momentum of gsc 2314 0530 is smaller even than those of 
the contact systems which have xmath33 it is bigger only than those of the short period cvs of su uma type the small orbital angular momentum of gsc 2314 0530 implies existence of past episode of angular momentum loss during the binary evolution it means also that gsc 2314 0530 is not pre ms object this conclusion is supported by the values of xmath34 of its components the x ray emission of the stellar coronae are directly related to the presence of magnetic fields and consequently gives information about the efficiency of the stellar dynamo xcite established that the x ray luminosity decreased for later m stars while the ratio xmath35 did not change significantly from m0 to m6 as a result he proposed the ratio xmath35 as most relevant measure of activity of m dwarfs xcite found that the upper boundary of xmath35 for late m stars is xmath36 besides all indicators of stellar activity in the optical surface inhomogeneities emission lines flares the star gsc 2314 0530 shows also x ray emission it is identified as rosat x ray source 1rxs j0220507 332049 and x ray flares on the basis of the measured x ray flux xmath37 ergs xmath38 xmath39 of gsc 2314 0530 at quiescence xcite and derived distance 59 pc we calculated its x ray luminosity xmath40 ergs sxmath41 this value is at the upper boundary xmath42 29 for dm stars xcite the value xmath43 of gsc 2314 0530 is almost at the upper boundary of this ratio and considerably bigger than those of the m dwarfs studied by xcite and xcite it is known that the activity and angular momentum loss tend to be saturated at high rotation rates xcite due to its short period and high activity gsc 2314 0530 is perhaps an example of such saturation our observed field fig fig chart contains the weak star usno b1 1233 0046425 we called it twin due to the same tangential shift as our target star gsc 2314 0530 table tab colors presents the proper motion and the colors of twin according to the catalogue nomad usno b1 1233 0046425 has xmath44 corresponding to temperature less than 3200 k we suspect that our twins may form visual binary the angular distance between them of 61arcsec corresponds to linear separation around 3500 au for distance of 59 pc such a supposition is reasonable because it is known that the short period close binaries often are triple systems xcite particularly the object tres her0 07621 from our table tab stars has a red stellar neighbor at a distance 8 arcsec with close proper motion xcite the check of the supposition if twin is physical companion of gsc 2314 0530 needs astrometric observations of the twins the analysis of our photometric and spectral observations of the newly discovered eclipsing binary gsc 2314 0530 allows us to derive the following conclusions 1 this star is the shortest period binary with dm components which period is below the short period limit 2 by simultaneous radial velocity solution and light curve solution we determined the global parameters of gsc 2314 0530 inclination xmath11 orbital separation xmath12 rxmath5 masses xmath4 mxmath5 and xmath6 mxmath5 radii xmath7 rxmath5 and xmath8 rxmath5 temperatures xmath2 k and xmath3 k luminosities xmath9 lxmath5 and xmath10 lxmath5 distance xmath13 pc 3 we derived empirical relations massxmath16 mass radius and mass temperature on the basis of the parameters of known binaries with low mass dm components 4 the distorted light curve of gsc 2314 0530 were reproduced by two cool spots on the primary component the next sign of the activity of gsc 2314 0530 is the strong hxmath14 
emission of its components moreover we registered 6 flares of gsc 2314 0530 half of them occurred at the phases of maximum visibility of the larger stable cool spot on the primary the analysis of all appearances of magnetic activity revealed existence of long lived active area on the primary of gsc 2314 0530 the high activity of the target is natural consequence of the fast rotation and low temperatures of its components our study of the newly discovered short period eclipsing binary gsc 2314 0530 presents a next small step toward understanding dme stars and adds a new information to the poor statistic of the low mass dm stars recently they became especially interesting as appropriate targets for planet searches due to the relative larger transit depths the research was supported partly by funds of projects do 02 362 of the bulgarian scientific foundation this research make use of the simbad and vizier databases operated at cds strasbourg france and nasa s astrophysics data system abstract service the authors are very grateful to the anonymous referee for the valuable notes and advices becker a et al 2008 mnras 386 416 blake c et al 2008 aj 684 635 bopp bw 1974 apj 193 389 caillault j et al 1986 apj 304 318 cakirly o ibanoglu c 2010 mnras 401 1141 coughlin j shaw j 2007 j of southeastern assoc for res in astr 1 7 creevey o benedict g brown t et al 2005 apjl 625 127 delfosse x et al 1999 aa 341 l63 dimitrov dp 2009 bulgaj 12 49 everett m howell s 2001 pasp 113 1428 fuhrmeister b schmitt j 2003 aa 403 247 hebb l et al 2006 aj 131 555 irwin j et al 2009 apj 701 1436 landolt a 1992 aj 104 340 leggett s 2000 apjs 82 351 lenz p breger m 2005 coast 146 53 leung k schneider d 1978 aj 83618 lopez morales m ribas i 2005 apj 131 555 lopez morales m et al 2006 arxiv astro ph0610225v1 maceroni c montalban j 2004 aa 426 577 maceroni c rucinski sm 1997 pasp 109 782 maceroni c et al 1994 aa 288 529 metcalfe t et al 1996 apj 456 356 mullan d macdonald j 2001 apj 559 353 norton aj et al 2007 aa 467 785 pollacco d et al 2006 pasp 118 1407 popper d ulrich r 1977 apj 212 l131 pribulla t rucinski s 2006 aj 131 2986 pribulla t vanko m hambalek l 2009 ibvs no5886 pra a zwitter t 2005 apj 628 426 ribas i 2003 aa 398 239 rosner r et al 1981 apj 249 l5 rucinski sm 1992 aj 103 960 rucinski sm 1984 aa 132 l9 rucinski sm 2007 mnras 382 393 rucinski sm pribulla t 2008 mnras 388 1831 schlegel d finkbeiner d davis m 1998 apj 500 525 schmitt j fleming t giampapa m 1995 apj 450 392 stauffer jr hartmann lw 1986 apjs 61 531 stepien k 2006 acta astr 56 347 stetson p 2000 pasp 112 925 torres g ribas i 2002 apj 567 1140 vandenberg d clem j 2003 aj 126 778 van hamme w 1993 aj 106 2096 vida k olah k kovari zs bartus j 2009 aips 1094 812 vilhu o walter f 1987 apj 321 958 voges w aschenbach b boller t et al 1999 aa 349 389 weldrake dtf sackett pd bridges tj freeman kc 2004 aj 128 736 worden sp schneeberg tj giampapa ms 1981 apjs 46 159 wozniak pr vestrand cw akerlof r et al 2004 aj 127 2436 young t et al 2006 mnras 370 1529 zacharias n monet d levine s et al 2005 aas 205 4815
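The X-ray luminosity quoted in the article follows from the standard relation L_X = 4 pi d^2 f_X; the sketch below evaluates it for the 59 pc distance derived above and a placeholder flux, since the measured quiescent flux value is elided in this text.

    import math

    PC_IN_CM = 3.0857e18   # one parsec in cm

    def xray_luminosity(flux_cgs, distance_pc):
        """L_X = 4*pi*d**2*f_X with f_X in erg s^-1 cm^-2 and d in pc."""
        d_cm = distance_pc * PC_IN_CM
        return 4.0 * math.pi * d_cm**2 * flux_cgs

    # Placeholder flux; the paper's measured quiescent value is not shown here.
    f_x = 1.0e-12   # erg s^-1 cm^-2
    print(f"L_X = {xray_luminosity(f_x, 59.0):.2e} erg/s")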
ccd photometric observations in xmath0 colors and spectroscopic observations of the newly discovered eclipsing binary gsc 2314 0530 nsvs 6550671 with dme components and very short period of xmath1 days are presented the simultaneous light curve solution and radial velocity solution allow us to determine the global parameters of gsc 2314 0530 xmath2 k xmath3 k xmath4 mxmath5 xmath6 mxmath5 xmath7 rxmath5 xmath8 rxmath5 xmath9 lxmath5 xmath10 lxmath5 xmath11 xmath12 rxmath5 xmath13 pc the chromospheric activity of its components is revealed by strong emission in the hxmath14 line with mean xmath15 and by several observed flares empirical relations mass xmath16 mass radius and mass temperature are derived on the basis of the parameters of known binaries with low mass dm components keywords binaries: eclipsing, binaries: spectroscopic, stars: activity, stars: fundamental parameters, stars: late-type, stars: low-mass
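A quick consistency check on parameters like those listed in the abstract is the Stefan-Boltzmann relation, L/L_sun = (R/R_sun)^2 (T_eff/T_sun)^4; the radius and temperature below are placeholder values for an M-dwarf-like component, not the fitted parameters.

    T_SUN = 5772.0   # K, nominal solar effective temperature

    def luminosity_solar(radius_rsun, teff_k):
        """Stefan-Boltzmann in solar units: L/L_sun = (R/R_sun)**2 * (T/T_sun)**4."""
        return radius_rsun**2 * (teff_k / T_SUN)**4

    # Placeholder values for an M-dwarf-like component, not the fitted parameters.
    print(f"L = {luminosity_solar(0.3, 3200.0):.4f} L_sun")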
introduction observations and data reduction is gsc2314-0530 alone? conclusions acknowledgments
open dielectric resonators have received great attention due to numerous applications xcite eg as microlasers xcite or as sensors xcite and as paradigms of open wave chaotic systems xcite the size of dielectric microcavities typically ranges from a few to several hundreds of wavelengths wave dynamical systems that are large compared to the typical wavelength have been treated successfully with semiclassical methods these provide approximate solutions in terms of properties of the corresponding classical system in the case of dielectric cavities the corresponding classical system is an open dielectric billiard insidethe billiard rays travel freely while when impinging the boundary they are partially reflected and refracted to the outside according to snell s law and the fresnel formulas the field distributions of resonance states of dielectric cavities can be localized on the periodic orbits pos of the corresponding billiard xcite and the far field characteristics of microlasers can be predicted from its ray dynamics xcite semiclassical corrections to the ray picture due to the goos hnchen shift xcite fresnel filtering xcite and curved boundaries xcite are under investigation for a more precise understanding of the connections between ray and wave dynamics xcite one of the most important tools in semiclassical physics are trace formulas which relate the density of states of a quantum or wave dynamical system to the pos of the corresponding classical system xcite recently a trace formula for two dimensional 2d dielectric resonators was developed xcite the trace formula was successfully tested for resonator shapes with regular classical dynamics in experiments with 2d dielectric microwave resonators xcite and with polymer microlasers of various shapes xcite however typical microlasers like those used in refs xcite are three dimensional 3d systems while trace formulas for closed 3d electromagnetic resonators have been derived xcite and tested xcite hitherto there is practically no investigation of the trace formula for 3d dielectric resonators the main reason is the difficulty of the numerical solution of the full 3d maxwell equations for real dielectric cavities the case of flat microlasers is special since their in plane extensions are large compared to the typical wavelength whereas their height is smaller than or of the order of the wavelength even in this casecomplete numerical solutions are rarely performed in practice flat dielectric cavities are treated as 2d systems by introducing a so called effective index of refraction xcite see below this approximation has been used in refs xcite and a good overall agreement between the experiments and the theory was found however it is known xcite that this 2d approximation called the xmath0 model in the following introduces certain uncontrolled errors even the separation between transverse electric and transverse magnetic polarizations intrinsic in this approach is not strictly speaking valid for 3d cavities xcite to the best of the authors knowledge no a priori estimates of such errors are known even when the cavity height is much smaller than the wavelength the purpose of the present work is the careful comparison of the experimental length spectra and the trace formula computed within the 2d xmath0 approximation furthermore the effect of the dispersion of the effective index of refraction on the trace formula is investigated as well as the need for higher order corrections of the trace formula due to eg curvature effects the experiments were 
performed with two dielectric microwave resonators of circular shape and different thickness like in ref these are known to be ideal testbeds for the investigation of wave dynamical chaos xcite and have been used eg in refs the results of these microwave experiments can be directly applied to dielectric microcavities in the optical frequency regime if the ratio of the typical wavelength and the resonator extensions are similar the paper is organized as follows the experimental setup and the measured frequency spectrumare discussed in section sec theo summarizes the xmath0 model for flat 3d resonators the semiclassical trace formula for 2d resonators and how these are combined here the experimental length spectra are compared to this model in and concludes with a discussion of the results two flat circular disks made of teflon were used as microwave resonators the first one disk a has a radius of xmath1 mm and a thickness of xmath2 mm so xmath3 its index of refraction is xmath4 a typical frequency of xmath5 ghz corresponds to xmath6 where xmath7 is the wave number and xmath8 the speed of light the second one disk b has xmath9 mm xmath10 mm xmath11 and xmath12 with xmath13 ghz corresponding to xmath14 the values of the indices of refraction xmath15 of both disks were measured independently see ref xcite and validated by numerical calculations xcite they showed negligible dispersion in the considered frequency range figure sfig setupphoto shows a photograph of the experimental setup the circular teflon disk is supported by three pillars arranged in a triangle the prevalent modes observed experimentally are whispering gallery modes wgms that are localized close to the boundary of the disk xcite therefore the pillars perturb the resonator only negligibly because their position is far away from the boundary additionally xmath16 cm of a special foam rohacell 31ig by evonik industries xcite with an index of refraction of xmath17 is placed between the pillars and the disk as isolation see the total height of the pillars is xmath18 mm so the resonator is not influenced by the optical table two vertical wire antennas are placed diametrically at the cylindrical sidewalls of the disk cf they are connected to a vectorial network analyzer pna n5230a by agilent technologies with coaxial rf cables the network analyzer measures the complex scattering matrix element xmath19 where xmath20 is the ratio between the powers xmath21 coupled out via antenna xmath22 and xmath23 coupled in via antenna xmath24 for a given frequency xmath25 plotting xmath26 versus xmath25 yields the frequency spectrum the measured frequency spectrum of disk b is shown in it consists of several series of roughly equidistant resonances the associated modes can be labeled with their polarization and the quantum numbers of the circle resonator which are indicated in here xmath27 are the azimuthal and radial quantum number respectively tm denotes transverse magnetic polarization with the xmath28 component of the magnetic field xmath29 equal to zero and te denotes transverse electric polarization with the xmath28 component of the electric field xmath30 equal to zero each series of resonances consists of modes with the same polarization and radial quantum number the free spectral range xmath31 for each series is in the range of xmath32xmath33 mhz only modes with xmath34 that is wgms are observed in the experiment the quantum numbers were determined from the intensity distributions which were measured with the perturbation body method see ref 
xcite and references therein to determine the polarization of the modesa metal plate was placed parallel to the resonator at a variable distance xmath35 see figure fig polmeas shows a part of the frequency spectrum with two resonances for different distances of the metal plate to the disk the metal plate induces a shift of the resonance frequencies where the magnitude of the frequency shift increases with decreasing distance xmath35 notably the direction of the shift depends on the polarization of the corresponding mode te modes are shifted to higher frequencies and tm modes to lower ones so the polarization of each mode can be determined uniquely this behavior is attributed to the different boundary conditions for the xmath30 field tm modes and the xmath29 field te modes at the metal plates the former obeys dirichlet the latter neumann boundary conditions a detailed explanation is given in appendix sec polmeascalc the open dielectric resonators are described by the vectorial helmholtz equation xmath36 left beginarrayc vece vecb endarray right vec0 with outgoing wave boundary conditions where xmath37 and xmath38 are the electric and the magnetic field respectively and xmath39 is the index of refraction at the position xmath40 though all components of the electric and magnetic fields obey the same helmholtz equation they are not independent but rather coupled in the bulk and at the boundaries as required by the maxwell equations the eigenvalues xmath41 of are complex and the real part of xmath41 corresponds to the resonance frequency xmath42 of the resonance xmath43 while the imaginary part corresponds to the resonance width xmath44 full width at half maximum for the infinite slab geometry see the vectorial helmholtz equation can be simplified by separating the wave vector xmath45 into a vertical xmath28 component xmath46 and a component parallel to the xmath47xmath48 plane xmath49 thus xmath50 and the angle of incidence on the top and bottom surface of the resonator is xmath51 for a resonant wave inside the slab the wave vector component xmath52 must obey the resonance condition xmath53 where xmath54 is the fresnel reflection coefficient the solutions of yield the quantized values of xmath52 the effective index of refraction xmath0 is defined as xmath55 and corresponds to the phase velocity with respect to the xmath47xmath48 plane in the experiments only modes trapped due to total internal reflection tir are observed in this case the reflection coefficient xmath54 can be written as xmath56 with xmath57 where xmath58 for tm modes and xmath59 for te modes with these definitions the quantization condition for xmath52 can be reformulated as an implicit equation for the determination of xmath0 xmath60 with xmath61 being the order of excitation in the xmath28 direction xcite the xmath62 term in corresponds to the fresnel phase due to the reflections and the xmath63 term to the geometrical phase in the framework of the xmath0 model the flat resonatoris treated as a dielectric slab waveguide and the vectorial helmholtz equation is accordingly reduced to the 2d scalar helmholtz equation xcite by replacing xmath15 by the effective index of refraction xmath0 xmath64 where the wave function xmath65 inside respectively outside of the resonator corresponds to xmath30 in the case of tm modes and to xmath29 in the case of te modes the boundary conditions at the boundary of the resonator in the xmath47xmath48 plane ie the cylindrical sidewalls xmath66 are xmath67 where xmath68 is the unit normal vector for 
xmath66 xmath69 for tm modes and xmath70 for te modes equation eq helmholtzscal can be solved analytically for a circular dielectric resonator xcite however it should be stressed that is not exact for flat 3d cavities it defines the 2d xmath0 approximation whose accuracy is unknown analytically but which has been determined experimentally in ref our purpose is to investigate the precision of this approximation for the length spectra of simple 3d dielectric cavities the effective index of refraction for the tm modes with the lowest xmath28 excitation xmath71 of disk a and b is shown in obviously xmath0 depends strongly on the frequency and this dispersion plays a crucial role in the present work it should be noted that also te modes and modes with higher xmath28 excitation exist in the considered frequency range however in the following we focus on tm modes the density of states dos in a dielectric resonator is given by xcite xmath72 2 mathrmimleftkjright2 where the summation runs over all resonances xmath43 the dos can be separated into a smooth average part xmath73 and a fluctuating part xmath74 the smooth part is well described by the weyl formula given in ref xcite and depends only on the area the circumference and the index of refraction of the resonator the fluctuating part on the other hand is related to the pos of the corresponding classical dielectric billiard for a 2d dielectric resonator with regular classical dynamics the semiclassical approximation for xmath74 is xcite xmath75 where xmath76 is the length of the po xmath77 is the product of the fresnel reflection coefficients for the reflections of the rays at the dielectric interfaces xmath78 denotes the phase changes accumulated at the reflections ie xmath79 and at the conjugate points of the corresponding po and the amplitude xmath80 is proportional to xmath81 where xmath82 is the area of the billiard covered by the family of the po it should be noted that this semiclassical formula fails to accurately describe contributions of pos with angle of incidence close to the critical angle for tir xmath83 as concerns the amplitude consequently higher order corrections to the trace formula need to be developed for these cases xcite we restrict the discussion to the experimentally observed tm modes and compare the results with the trace formula obtained in ref xcite to select the tm modes with xmath71 the polarization and the quantum numbers of all resonances had to be determined experimentally as described in the trace formula for 2d resonators is applied to the flat 3d resonators considered here by inserting the frequency dependent effective index of refraction xmath84 instead of xmath15 into to test the accuracy of the resulting trace formula we computed the fourier transform ft of xmath74 in ref xcite it was shown that it is essential to fully take into account the dispersion of xmath0 in the ft for a meaningful comparison of the resulting length spectrum with the geometric lengths of the pos therefore we define xmath85 where the quantity xmath86 is a geometrical length and xmath87 is thus called the length spectrum we will compare it to the ft as defined in using xmath84 instead of xmath15 of the trace formula xmath88 the resonance parameters xmath41 are obtained by fitting lorentzians to the measured frequency spectra and xmath89 correspond to the frequency range considered since in a circle resonator the resonance modes with xmath90 are doubly degenerate the measured resonances are counted twice each note that even though only 
the most long lived resonances ie the wgms are observed experimentally and these comprise only a fraction of all resonances a comparison of the experimental length spectrum with the trace formula is meaningful xcite figure fig lspektsclhkreis shows the experimental length spectrum evaluated using and the ft of the semiclassical trace formula for disk a a total of xmath91 measured tm modes with radial quantum numbers xmath92xmath93 from xmath94 ghz to xmath95 ghz was used the pos in the circle billiard are denoted by their period xmath96 and their rotation number xmath97 and have the shape of polygons and stars see insets in their lengths are indicated by the solid arrows the pos with xmath98 and xmath99 were used to compute the trace formula only pos with xmath100are indicated in for the sake of clarity the pos with xmath101 only add small contributions to the right shoulder of the peak corresponding to the xmath102 and xmath103 orbits the amplitudes are xmath104 and the phases are xmath105 with the angles of incidence of the pos with respect to the surface normal being xmath106 the overall agreement between the experimental length spectrum and the semiclassical trace formula is good and the major peaks in the length spectrum are close to the lengths of the xmath107xmath103 orbits however no clear peaks are observed at the lengths of the xmath108 and the xmath109 orbit in the experimental length spectrum note that in the experimental length spectrum only orbits that are confined by tir are observed cf this is not the case for the xmath108 and the xmath109 orbits in the frequency range of interest where their angle of incidence xmath110 is smaller than the critical angle xmath111 as depicted in the length spectrum of disk b is shown in altogether xmath112 resonances with xmath92xmath93 from xmath113 ghz to xmath114 ghz were used the semiclassical trace formula was again computed for xmath98 and xmath99 the agreement of the experimental length spectrum and the ft of the trace formula is good and comparable to that obtained for disk a as in the case of disk a the experimental length spectrum exhibits no peaks for the xmath108 orbit whose length is not within the range depicted in and for the xmath109 orbit since they are not confined by tir in the considered frequency range a closer inspection of figs fig lspektsclhkreis and fig lspektsclbkreis shows two unexpected effects first the peak positions of the ft of the semiclassical trace formula deviate slightly from the lengths of the pos this can be seen best in the bottom parts of figs fig lspektsclhkreis and fig lspektsclbkreis where the contributions of the individual pos to the trace formula are depicted second there is a small but systematic difference between the peak positions of the experimental length spectrum and those of the ft of the trace formula we will demonstrate that the first effect is related to the dispersion of xmath0 and the second effect to the systematic error of the xmath0 model cccc xmath115 xmath76 m xmath116 m xmath117 m disk a xmath107 xmath118 xmath119 xmath120 xmath121 xmath122 xmath123 xmath124 xmath102 xmath125 xmath126 xmath127 xmath103 xmath128 xmath129 xmath130 xmath131 xmath132 xmath133 xmath134 xmath135 xmath136 xmath137 xmath138 disk b xmath107 xmath139 xmath140 xmath141 xmath121 xmath142 xmath143 xmath144 xmath102 xmath145 xmath146 xmath147 xmath103 xmath148 xmath149 xmath150 xmath131 xmath151 xmath152 xmath153 xmath135 xmath154 xmath155 xmath151 the difference between the peak positions of the trace 
formula and the lengths of the pos can be understood by considering the exponential term in the ft of the semiclassical trace formula which for a single po is xmath156 with xmath78 given by the crucial point is that the phase xmath78 is frequency dependent because it contains the phase of the fresnel coefficients which in turn depends on xmath84 the modulus of the ft will be largest for that length xmath86 for which the exponent in is stationary ie its derivative with respect to xmath157 vanishes this leads to the following estimate xmath116 for the peak position xmath158 where xmath159 is related to the fresnel coefficients via the wave number xmath160 at which is evaluated is the center of the relevant wave number frequency interval which is xmath161 with xmath162 the frequency xmath163 is defined by xmath164 ie it corresponds to the minimum frequency at which the considered po is confined by tir cf below xmath165 the fresnel phase vanishes the estimated peak positions xmath116 are indicated by the dashed arrows in figs fig lspektsclhkreis and fig lspektsclbkreis and agree well with the peak positions xmath117 of the individual po contributions in the bottom parts of the figures a list of the lengths of the pos the peak positions of the single po contributions and the estimates xmath116 according to for disks a and b is provided in in general the estimate xmath116 deviates only by xmath24xmath22 mm from the actual peak position xmath117 about xmath166 of xmath117 the xmath108 and the xmath109 orbits are not confined by tir therefore for these orbits the fresnel phase vanishes and accordingly xmath167 furthermore their contributions to the length spectrum are symmetric with respect to xmath168 while those of the other pos are asymmetric with an oscillating tail to the left see bottom parts of figs fig lspektsclhkreis and fig lspektsclbkreis these tails are attributed to the frequency dependence of the fresnel phase they can lead to interference effects as can be seen for example in there eg the peak positions of the semiclassical trace formula dashed line in the top part for the xmath107 and the xmath121 orbit deviate from the peak positions xmath117 of the corresponding single orbit contributions dashed and dotted lines in the bottom part due to interferences between the contributions of a po and the side lobes of those of the other pos in order to identify such interferences it is generally instructive to compare the ft of the semiclassical trace formula with those of its single orbit contributions it should be noted that the effect discussed in this paragraph also occurs for any 2d resonator made of a dispersive material it was shown in the previous paragraph that the dispersion of xmath0 plays an important role furthermore the semiclassical trace formula is known to be imprecise for pos with angles of incidence close to the critical angle this is especially crucial here since several pos are close to the critical angle in at least a part of the considered frequency regime see these deficiencies of the semiclassical trace formula indicate the necessity to implement modifications of it to pursue this presumption we will compare the experimental length spectrum with the ft of the exact trace formula for the 2d dielectric circle resonator using a frequency dependent index of refraction xmath169 in order to investigate the deviations between it and the ft of the semiclassical trace formula the trace formula is called exact since it is derived directly from the quantization condition for 
the trace formula is called exact since it is derived directly from the quantization condition for the dielectric circle resonator and without semiclassical approximations it is given by xmath170 with the definitions xmath171xmath172 xmath173 xmath174 and xmath175 here xmath176 xmath177 are the hankel functions of the first and second kinds respectively and the prime denotes the derivative with respect to the argument equation eq trformexact is essentially eq 67 of ref xcite with an additional factor xmath178 in the term xmath179 a detailed derivation is given in appendix sec exacttraceform in the semiclassical limit the term xmath180 in turns into the product of the fresnel reflection coefficients the term xmath181 into the oscillating term xmath182 and xmath183 contributes to the amplitude xmath80 for pos close to the critical angle ie when the stationary point of the integrand in is xmath184 the term xmath180 varies rapidly with xmath185 whereas it is assumed to change slowly in the stationary phase approximation used to derive the semiclassical trace formula xcite therefore including curvature corrections in the fresnel coefficients does not suffice for an accurate calculation of the contributions of these pos to xmath74 consequently we compute the integral entering in numerically

figure fig lspektexacthkreis shows the comparison of the experimental length spectrum solid line the ft of the semiclassical trace formula dashed line and that of the exact trace formula dotted line for disk a the latter was computed for xmath186 and xmath99 the other two curves are the same as in the top part of the main difference between the semiclassical and the exact trace formula is the larger peak amplitudes of the exact trace formula which is mainly attributed to the additional xmath187 factor the differences between the peak amplitudes of the experimental length spectrum and those of the exact trace formula are actually expected since the measured resonances comprise only a part of the whole spectrum cf xcite though they are not very large on the other hand the peak positions xmath188 of the exact trace formula differ only slightly from those of the semiclassical one and still deviate from those of the experimental length spectrum xmath189 see inset of the difference xmath190 is in the range of xmath191xmath16 mm ie about xmath192 of the periodic orbit length xmath76

similar effects are observed in which shows the experimental length spectrum and the ft of the semiclassical trace formula and that of the exact trace formula computed for xmath186 and xmath99 for disk b the peak amplitudes of the exact trace formula are again somewhat larger than those of the experimental length spectrum and the peak positions xmath188 of the exact trace formula differ from those of the experimental length spectrum by xmath193 mm for the xmath107 and the xmath121 orbits the relative error xmath194 is thus slightly smaller than in the case of disk a since the trace formula itself is exact the only explanation for these deviations is the systematic error of the xmath0 model therefore we compare the measured xmath195 and the calculated xmath196 resonance frequencies for disk a in the resonance frequencies were calculated by solving the 2d helmholtz equation for the circle as in ref the difference between the measured and the calculated frequencies in a is as large as half a fsr ie the distance between two resonances with the same radial quantum number and slowly decreases with increasing frequency
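the numerical evaluation of the integral mentioned above only needs the hankel functions of the first and second kinds and their derivatives, which are available in scipy. the sketch below shows the building blocks and a generic quadrature; the function single_term is a placeholder containing an illustrative ratio of bessel and hankel functions, not the actual integrand of eq trformexact, whose form is given by the equations above.

import numpy as np
from scipy.special import hankel1, hankel2, h1vp, h2vp, jv, jvp
from scipy.integrate import quad

def hankel_blocks(m, z):
    # hankel functions of the first and second kinds of order m and their
    # derivatives with respect to the argument
    return hankel1(m, z), hankel2(m, z), h1vp(m, z), h2vp(m, z)

def single_term(m, k, n, radius):
    # placeholder for one term of the exact trace formula integrand: only the
    # structure (ratios of bessel/hankel functions and derivatives evaluated at
    # k*R and n*k*R) is sketched here, not the paper's expression
    h1, h2, dh1, dh2 = hankel_blocks(m, k * radius)
    j, dj = jv(m, n * k * radius), jvp(m, n * k * radius)
    return (n * dj * h1 - j * dh1) / (n * dj * h2 - j * dh2)

def integrate_term(m, n, radius, k_lo, k_hi):
    # straightforward quadrature over the analysed wave-number interval, split
    # into real and imaginary parts because quad expects real-valued integrands
    re = quad(lambda k: np.real(single_term(m, k, n, radius)), k_lo, k_hi)[0]
    im = quad(lambda k: np.imag(single_term(m, k, n, radius)), k_lo, k_hi)[0]
    return re + 1j * im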
this is in accordance with the result that the fsr of the calculated resonances in b is slightly larger than that of the measured ones since the frequency spectrum consists of series of almost equidistant resonances the effect of this systematic error on the peak positions in the length spectrum can be estimated by considering a simple 1d system with equidistant resonances whose distance equals xmath31 the peak position in the corresponding length spectrum is xmath197 and a deviation of xmath198 leads to an error xmath199 of the peak position with xmath200 mhz compared to xmath201 mhz we expect a deviation of xmath202 or xmath203 mm in the peak positions which agrees quite well with the magnitude of the deviations found in for disk b the comparison between the measured and the calculated fsr not shown here yields xmath204 which also agrees well with the deviations of xmath194 found in thus we may conclude that the deviations between the peak positions of the experimental length spectrum and that of the trace formula indeed arise from the systematic error of the xmath0 model unfortunately we know of no general method to estimate the magnitude of this systematic error beforehand

it should be noted that the exact magnitude of the systematic error contributing to the deviations found in figs fig lspektexacthkreis and fig lspektexactbkreis depends on the index of refraction used in the calculations still it was shown in ref xcite and also checked here that deviations remain regardless of the value of xmath15 used which is known with per mill precision for the disks a and b xcite furthermore b demonstrates that the index of refraction of a disk can not be determined without systematic error from the measured xmath31 even if the dispersion of xmath0 is fully taken into account

the resonance spectra of two circular dielectric microwave resonators were measured and the corresponding length spectra were investigated in contrast to previous experiments with 2d resonators xcite flat 3d resonators were used the length spectra were compared to a combination of the semiclassical trace formula for 2d dielectric resonators proposed in ref xcite and a 2d approximation of the helmholtz equation for flat 3d resonators using an effective index of refraction in accordance with ref the experimental length spectra and the trace formula showed good agreement and the different contributions of the pos to the length spectra could be successfully identified the positions of the peaks in the experimental length spectrum are however slightly shifted with respect to the geometrical lengths of the pos we found that this shift is related to two different effects which are first the frequency dependence of the effective index of refraction and second a systematic inaccuracy of the xmath0 approximation in the examples considered here the former effect is as large as xmath205 of the po length while the latter effect is as large as xmath192 of xmath76 and the two effects cancel each other in part the results and methods presented here provide a refinement of the techniques used in refs xcite and allow for the detailed understanding of the spectra of realistic microcavities and lasers in terms of the 2d trace formula furthermore many of the effects discussed here also apply to 2d systems made of a dispersive material

some open problems remain though the comparison of the semiclassical trace formula with the exact one for the circle showed that the former needs to be improved for pos close to the critical angle furthermore there are some deviations between the experimental length spectra and the trace formula predictions due to the systematic error of the effective index of refraction approximation
its effect on the length spectra proved to be rather small and thus allowed for the identification of the different po contributions however the computation of the resonance frequencies of flat 3d resonators based on the combination of the 2d trace formula and the xmath0 approximation would lead to the same systematic deviations from the measured ones as in ref another problem with this systematic error is that its magnitude can not be estimated a priori the comparison of the results for disk a and disk b seems to indicate that it gets smaller for xmath206 but there are not enough data to draw final conclusions yet especially since the value of xmath207 is of similar magnitude for both disks in fact ref xcite rather indicates that the systematic error of the xmath0 model increases with decreasing xmath207 this could be attributed to diffraction effects at the boundary of the disks that become more important when xmath208 gets smaller compared to the wavelength on the other hand the exact 2d case is recovered for xmath209 ie an infinitely long cylinder in conclusion the accuracy of the xmath0 model in the limit xmath206 remains an open problem an analytical approach to the problem of flat dielectric cavities that is more accurate than the xmath0 model would be of great interest another perspective direction is to consider 3d dielectric cavities with the size of all sides having the same order of magnitude and to develop a trace formula for them similar to those for metallic 3d cavities xcite

the authors wish to thank c classen from the department of electrical engineering and computer science of the tu berlin for providing numerical calculations to validate the measured resonance data this work was supported by the dfg within the sonderforschungsbereich 634

the placement of a metal plate parallel to the dielectric disk influences the effective index of refraction thus leading to a shift of the resonance frequencies in order to determine the change of xmath0 we first calculate the reflection coefficient xmath210 for a wave traveling inside the dielectric and being reflected at the dielectric air interface with the metal plate at a distance xmath35 the geometry used here is depicted in the ansatz for the xmath30 field tm polarization and respectively the xmath29 field te polarization is xmath211 where xmath212 fulfills xmath213 xmath214 is the angular frequency xmath215 xmath216 are constants and xmath54 is the reflection coefficient the different wave numbers are connected by xmath217 for the case of tir considered here xmath96 is real and the penetration depth xmath218 of the field intensity into region ii is xmath219

for the case of tm polarization the electric field is xmath220 $e^{i \omega t}$ since it obeys neumann boundary conditions at the metal plate where xmath221 is a constant the boundary conditions at the interface between region i and ii are that xmath222 and xmath223 are continuous which yields xmath224 for the case of te polarization the magnetic field in region ii is xmath225 $e^{i \omega t}$ since it obeys dirichlet boundary conditions at the metal plate with the condition that xmath29 and xmath226 are continuous at the dielectric interface xmath227 is obtained analogous to we define xmath228 and obtain xmath229 where $\nu = \sqrt{(n_\mathrm{eff}^2 - 1)/(n^2 - n_\mathrm{eff}^2)}$ with xmath230 for xmath231 is recovered this explains why the optical table does not disturb the resonator by inserting xmath232 into we obtain xmath233 as condition for xmath234 where xmath235 refers to
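for orientation, the sketch below solves the ordinary slab waveguide condition for xmath0 numerically, ie for a dielectric plate of thickness h and index n surrounded by air on both sides and without the metal plate. the dispersion relation used here is the generic textbook one, with the polarization factor (1 for te, n squared for tm in the usual slab waveguide labelling) treated as an assumption since the te/tm labels of the resonator and of the slab literature differ; the modified condition including the metal plate derived above would add the corresponding reflection phase to this relation.

import numpy as np
from scipy.optimize import brentq

def n_eff_slab(freq_hz, n, h, pol="TE", mode=0):
    # effective index of a symmetric dielectric slab (thickness h, index n, air on
    # both sides) from the standard dispersion relation
    #   q*h = mode*pi + 2*arctan(nu*gamma/q),  q = k*sqrt(n^2 - n_eff^2),
    #   gamma = k*sqrt(n_eff^2 - 1),           nu = 1 (te) or n^2 (tm)
    # this is a generic relation, not the modified condition with the metal plate
    c = 2.99792458e8
    k = 2.0 * np.pi * freq_hz / c
    nu = 1.0 if pol == "TE" else n**2

    def residual(n_eff):
        q = k * np.sqrt(n**2 - n_eff**2)
        gamma = k * np.sqrt(n_eff**2 - 1.0)
        return q * h - mode * np.pi - 2.0 * np.arctan(nu * gamma / q)

    return brentq(residual, 1.0 + 1e-9, n - 1e-9)

# usage with made-up parameters of the order of such microwave disks
print(n_eff_slab(10e9, n=1.43, h=0.01, pol="TE"))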
in order to determine the effect of a change of xmath35 on the effective index of refraction we compute xmath236 where xmath237 the derivative of xmath238 is xmath239 which approaches xmath240 for xmath241 with xmath242 for tm and xmath243 for te polarization then for xmath231 xmath244 with xmath245 given as xmath246 accordingly for large distances of the metal plate xmath247 ie for tm modes xmath0 increases for decreasing distance xmath35 of the metal plate from the resonator and for te modes it gets smaller this qualitative behavior is found for the whole range of xmath35 even when is no longer valid since the resonance frequencies of the disk are to first order approximation xmath248 the tm modes are shifted to lower and the te modes to higher frequencies as observed in

the dos for the tm modes of the 2d circular dielectric resonator is xmath249 where the eigenvalues xmath250 are the roots of xcite xmath251 with xmath252 the factor xmath47 ensures that xmath253 has no poles informally one can write xmath254 where xmath255 is a certain smooth function without zeros and poles this means that the dos can be written as xmath256 the smooth term contributes to the weyl expansion and requires a separate treatment xcite in the following we ignore all such terms the derivative xmath257 in contains two terms xmath258 the first term is xmath259 where the second derivatives of the bessel and hankel functions were resolved via the bessel differential equation the second term is xmath260 where xmath261 was replaced by xmath262 since xmath263 we approximate xmath264 and obtain xmath265 it was checked numerically that this approximation is very precise which is why we still call the result an exact trace formula combining both terms yields xmath266 with xmath267 the dos is thus xmath268

we replace the bessel functions xmath269 in xmath270 by xmath271 and extract a term xmath272 with xmath273 defined in to obtain xmath274 where xmath275 xmath276 and xmath277 are defined in eqs eq trerm eq tream and eq trebm respectively with the help of the geometric series xmath278 is rewritten as xmath279 with xmath280 using the wronskian xcite xmath281 $4i/(\pi z)$ this simplifies to xmath282 and we obtain xmath283 $+\,\mathrm{c.c.}$ the first term xmath284 corresponds to the smooth part of the dos since we are only interested in the fluctuating part we drop the term and apply the poisson resummation formula to the rest to obtain xmath285 with xmath183 defined in replacing xmath286 with xmath287 and ignoring those xmath115 combinations that are not related to any pos and thus do not give significant contributions finally yields taking the semiclassical limit as described in ref xcite results in with an additional factor of xmath187 this means that the dispersion of xmath15 leads to slightly higher amplitudes in the semiclassical limit
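the eigenvalues entering the dos above are the roots of the quantization condition for the tm modes of the 2d dielectric circle. a minimal numerical sketch of locating them is given below; the matching condition used is the standard one for a dielectric circle of radius R and index n, the parameter values are made up, and only the positions along the real wave number axis are scanned (the true resonances are complex, so a proper calculation would search for roots in the complex plane).

import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

def tm_condition(m, k, n, R):
    # standard matching condition for tm modes of a 2d dielectric circle:
    # resonances are the (complex) wave numbers k where this expression vanishes
    return n * jvp(m, n * k * R) * hankel1(m, k * R) - jv(m, n * k * R) * h1vp(m, k * R)

def approximate_resonances(m, n, R, k_grid):
    # local minima of |tm_condition| along a real-k grid give the approximate
    # resonance positions (resonance widths are ignored in this real-axis scan)
    vals = np.abs(tm_condition(m, k_grid, n, R))
    idx = np.where((vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:]))[0] + 1
    return k_grid[idx]

# usage with made-up index and radius of the order of the disks considered here
k_grid = np.linspace(50.0, 400.0, 20000)            # wave numbers in 1/m
print(approximate_resonances(m=40, n=1.43, R=0.2, k_grid=k_grid))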
the length spectra of flat three dimensional dielectric resonators of circular shape were determined from a microwave experiment they were compared to a semiclassical trace formula obtained within a two dimensional model based on the effective index of refraction approximation and a good agreement was found it was necessary to take into account the dispersion of the effective index of refraction for the two dimensional approximation furthermore small deviations between the experimental length spectrum and the trace formula prediction were attributed to the systematic error of the effective index of refraction approximation in summary the methods developed in this article enable the application of the trace formula for two dimensional dielectric resonators also to realistic flat three dimensional dielectric microcavities and lasers allowing for the interpretation of their spectra in terms of classical periodic orbits
introduction
experimental setup for the measurement of frequency spectra
the effective index of refraction and the trace formula
comparison of experimental length spectra and trace formula predictions
conclusions
influence of the metallic plate on the resonance frequencies
exact trace formula for the circular dielectric resonator
organic semiconductors are envisioned to revolutionize display and lighting technology the remaining engineering related challenges are being tackled and the first products are commercially available already to guarantee a sustainable market entry however it is important to further deepen the understanding of organic semiconductors and organic semiconductor devices electronic trap states in organic semiconductors severely affect the performance of such devices for organic thin film transistors tft s for example key device parameters such as the effective charge mobility the threshold voltage the subthreshold swing as well as the electrical and environmental stability are severely affected by trap states at the interface between the gate dielectric and the semiconductor trap states in organic semiconductors have been studied for several decadesxcite although the first organic field effect transistors emerged in the 1980 s polymeric semiconductors ref small molecule organic semiconductors ref it is only recently that trap states in organic field effect transistors are a subject of intense scientific investigation refs xcite and references therein the present study is focused on trap densities in small molecule organic semiconductors these solids consist of molecules with loosely bound xmath2electrons the xmath2electrons are transferred from molecule to molecule and therefore are the source of charge conduction small molecule organic semiconductors tend to be crystalline and can be obtained in high purity typical materials are oligomers such as pentacene tetracene or sexithiophene but this class of materials also includes eg rubrene cxmath3 or the soluble material tips pentacene ref trap densities are often given as a volume density thus averaging over various trapping depths the spectral density of localized states in the band gap ie the trap densities as a function of energy trap dos gives a much deeper insight into the charge transport and device performance in this paperwe compare for the first time the trap dos in various samples of small molecule organic semiconductors including thin film transistors tft s where the active layer generally is polycrstalline and organic single crystal field effect transistors sc fet s these data are also compared with the trap dos in the bulk of single crystals made of small molecule semiconductors it turns out that it is this comparison of trap densities in tft s sc fet s and in the bulk of single crystals that is particularly rewarding the trap dos in organic semiconductors can be derived from several different experimental techniques including measurements of field effect transistors space charge limited current sclc measurements thermally stimulated currents tsc kelvin probe time of flight tof or capacitance measurements for the present study we focus on the trap dos as derived from electrical characteristics of organic field effect transistors or from sclc measurements of single crystals we begin with a brief discussion of charge transport in small molecule semiconductors followed by a summary of the current view of the origin of trap states in these materials after a comparison of different methods to calculate the trap dos from electrical characteristics of organic field effect transistors we are eventually in a position to compile compare anddiscuss trap dos data even in ultrapure single crystals made of small molecule semiconductors the charge transport mechanism is still controversial the measured mobility in ultrapure crystals increases as the 
temperature is decreased according to a power law xmath4xcite this trend alone would be consistent with band transport however the mobilities xmath5 at room temperature are only around 1xmath6vs and the estimated mean free path thus is comparable to the lattice constants it has often been noticed that this is inconsistent with band transportxcite since the molecules in the crystal have highly polarizable xmath2orbitals polarization effects are not negligible in a suitable description of charge transport in organic semiconductors holstein s polaron band model considers electron electron interactions and the model has recently been extendedxcite with increasing temperature the polaron mass increases this effect is accompanied by a bandwidth narrowing and inevitably results in a localization of the charge carrier consequently this model predicts a transition from band transport at low temperature to phonon assisted hopping transport at higher temperatures eg room temperature the model may explain the experimentally observed increase in mobility with decreasing temperature and seems to be consistent with the magnitude of the measured mobilities at room temperature on the other hand thermal motion of the weakly bound molecules in the solid is large compared to inorganic crystals such thermal motions most likely affect the intermolecular transfer integral troisi et al have shown that at least for temperatures above 100k the fluctuation of the intermolecular transfer integral is of the same order of magnitude as the transfer integral itself in materials such as pentacene anthracene or rubrenexcite as a consequence the fluctuations do not only introduce a small correction but determine the transport mechanism and limit the charge carrier mobilityxcite clearly the thermal fluctuations are less severe at a reduced temperature and the calculations predict a mobility that increases with decreasing temperature according to a power law this is in excellent agreement with the measured temperature dependence in ultrapure crystals moreover the model predicts mobilities at room temperature between 01xmath6vs and 50xmath6vs which also is in good agreement with experimentxcite interestingly the importance of thermal disorder is supported by recent tetrahertz transient conductivity measurements on pentacene crystalsxcite in essence the band broadening due to the thermal motion of the molecules is expected to result in electronic trap states which would be related to the intrinsic nature of small molecule semiconductorsxcite clearly trap states can also be due to extrinsic defects and these traps can completely dominate the charge transport resulting in an effective mobility xmath7xcite for amorphous inorganic semiconductors such as amorphous silicon the mobility edge picture has been developed fig figure dossketchxcite the mobility edge separates extended from localized states the existence of extended states in amorphous silicon is attributed to the similarity of the short range configuration of the atoms in the amorphous phase which is similar to the configuration in the crystalline phasexcite hopping in localized states is expected to be negligible if transport in extended states exists ie we have an abrupt increase in mobility at the mobility edge only the charge carriers that are thermally activated to states above the mobility edge contribute to the transport of charge schematic representation of the mobility edge separating localized states traps from extended states at the mobility edge the mobility as 
a function of energy abruptly rises and only the charge carriers that are thermally activated to states above the mobility edge contribute to the charge transport in the following we assume that charge transport in small molecule semiconductors can be described by an effective transport level and a distribution of trap states below this transport level the mobility edge model is a specific realization of this very general assumption ina completely disordered material no short range order all electronic states are localizedxcite the charge carriers are highly localized and hop from one molecule to the next however even this situation can be described by introducing an effective transport level and a broad distribution of trap states below the transport levelxcite in the following we use the term valence band edge this term may generally be interpreted as the effective transport level and denotes the mobility edge in the mobility edge picture we proceed by summarizing the current view of the microscopic origin of traps in small molecule semiconductors charge carrier traps within the semiconductor are caused by structural defects or chemical impurities chemical impurities may also cause a surrounding of structural defects by distorting the host latticexcite on the other hand chemical impurities tend to accumulate in regions with increased structural disorder ref as well as at the surface of a crystal ref trap states caused by the gate dielectric can become very important in organic field effect transistors finally as mentioned already also the thermal fluctuations of the molecules are expected to result in shallow trap states within the band gap in the bulk of ultrapure anthracene or naphthalene crystals typical densities of vacancies a dominant point defect are of the order of xmath8xmath9 ref xcite p 222 vacancies are expected to be concentrated close to other structural defects due to a reduced formation energyxcite extended structural defects eg edge dislocations screw dislocations or low angle grain boundaries can be present in significant densities in organic crystals eg 10xmath10xmath9 ref xcite p226 therefore extended structural defects are thought to be the main source of traps in ultrapure organic crystalsxcite thin films of small molecule semiconductors are expected to have a higher density of structural defects than single crystals thin films of small molecule semiconductors are often polycrystalline and grain boundaries can limit the charge transport in such films for example measurements of sexithiophene based transistors with sioxmath11 gate dielectric and an active channel consisting of only two grains and one grain boundary show that the transport is in fact limited by the grain boundaryxcite at the grain boundary a high density of traps exists and the density of these traps per unit area of the active accumulation layer is of the order of xmath12xmath13xcite in the following we focus on structural defects in vacuum evaporated pentacene films which are of particular relevance for this work since pentacene films are often polycrystalline large angle grain boundaries are expected to produce additional structural defects also in this material the effect of grain boundaries on charge transport in pentacene films is still controversial atomic force measurements afm of ultrathin pentacene films have clearly shown that the field effect mobility in pentacene based transistors can be higher in films with smaller grainsxcite in addition some experimental evidence indicates that there 
is no correlation between charge trapping and topographical features in pentacene thin filmsxcite on the contrary it has recently been shown that long lived energetically deep traps that cause gate bias stress effects in pentacene based tft s are mainly concentrated at grain boundariesxcite another important cause of structural disorder in pentacene films is polymorphism since pentacene can crystallize in at least four different structures phases it is quite common that at least two of these phases coexist in pentacene thin filmsxcite a theoretically study deals with in grain defects in vacuum evaporated pentacene filmsxcite structural defects are formed during the film growth upon addition of more and more defective molecules at a given site the ideal crystal structure becomes energetically more and more favourable the system eventually relaxes into the ideal crystal structure during the continuation of the film growth the relaxation happens provided that the evaporation rate is low enough and that there is enough time for relaxationxcite in this study it is suggested that structural defects within the grains of a pentacene film that resist relaxation can not exceed densities of 10xmath14xmath9 at typical growth conditions a structural defect can however influence the electronic levels of 10 surrounding molecules even if these molecules are in the perfect crystal configuration it is concluded that grain boundaries and not in grain defects are the most prominent cause of structural defects in pentacene thin filmsxcite on the other hand an experimental study identifies pentacene molecules that are displaced slightly out of the molecular layers that make up the crystalsxcite by means of high impedance scanning tunneling microscopy stm specific defect islands were detected in pentacene films with monolayer coverage within the defect islands the pentacene molecules are displaced up to 25 along the long molecular axis out of the pentacene layer with a broad distribution in the magnitude of the displacements electronic structural calculations show that the displaced molecules lead to traps for both electrons and holes the maximum displacement of the pentacene molecules as seen by stm is 25 and this corresponds to a maximum trap depth of 01evxcite the best method to produce crystals of small molecule semiconductors includes a zone refinement step in the purification procedure ref xcite p 224 even such crystals still have a considerable impurity content anthracene for example still has an impurity content of 01ppm in the best crystals which corresponds to a volume density of xmath15xmath9 ref xcite p 224 zone refinement produces organic materials of much higher purity as compared to purification by sublimationxcite however zone refinement can only be applied if the material can be molten without a chemical reaction or a decomposition to occur this is not possible for many materials including tetracene or pentacene thus much higher impurity concentrations are expected eg in tetracene or pentacenexcite an experimental study indicates that in tetracene single crystals the charge carrier mobility is limited by chemical impurities rather than by structural defectsxcite the ability of a chemical impurity to act as trap depends on its accessible energy levels in a simplistic view a hole trap forms if the ionization energy of the impurity is smaller than the ionization energy of the host materialxcite we focus on pentacene and the center ring of the pentacene molecule is expected to be the most 
reactivexcite an important impurity is thus thought to be the oxidized pentacene species 613pentacenequinone where two oxygen atoms form double bonds with the carbon atoms at the 613positions according to theoretical studies pentacenequinone is expected to lead to states in the band gap of pentacene ref and and may predominantly act as scattering center ref repeated purification of pentacene by sublimation can result in very high mobilities in pentacene single crystalsxcite another common impurity in pentacene is thought to be 613dihydropentacene where additional hydrogen atoms are bound both at the 6 and at the 13positionxcite properties of the gate dielectric s surface such as surface roughness surface free energy and the presence of heterogenous nucleation sites are expected to play a key role in the growth of small molecule semiconductor films from the vapour phase thus influencing the quality of the films apart from growth related effects the sole presence of the gate dielectric can influence the charge transport in a field effect transistor especially because the charge is transported in the first few molecular layers within the semiconductor at the interface between the gate dielectric and the semiconductor thus also fet s based on single crystals are affected even laminated flip crystal type sc fet s the surface of the gate dielectric contains chemical groups that act as charge carrier traps the trapping mechanism may be as simple as the one discussed above for chemical impurities this means that the trapping depends on the specific surface chemistry of the gate dielectric but the ability of certain chemical groups on the surface of the gate dielectric to cause traps will also depend on the nature of the small molecule semiconductor the trapping mechanism can also be seen as a reversible or irreversible electrochemical reaction driven by the application of a gate voltagexcite chemical groups on the surface of the gate dielectric certainly affect the transport of electrons in n type field effect transistorsxcite water adsorbed on the gate dielectric may dissociate and react with pentacene one possible reaction product is 613dihydropentacene the number of impurities that are formed can depend on the electrochemical potential and would thus increase as the gate voltage is ramped up in a field effect transistorxcite it has also been suggested that water causes traps by reacting with the surface of the gate dielectric water on a sioxmath11 gate dielectric with a large number of silanol groups si oh causes the formation of sioxmath16groups and the latter groups can act as hole trapsxcite in addition to chemical reactions involving water water molecules may act as traps themselves just like any other chemical impurity a polar impurity molecule leads to an electric field dependent trap depth thoughxcite even if a polar impurity does not lead to a positive trap depth its dipole moment modifies the local value of the polarization energy since we have highly polarizable xmath2orbitals in organic semiconductors this results in traps in the vicinity of the water moleculesxcite the net effect is a significant broadening of the trap dos at the insulator semiconductor interfacexcite it has been suggested that the polarity of the gate dielectric surface impedes the charge transport as described in the followingxcite a more polar surface has randomly oriented dipoles which lead to a modification of the local polarization energy within the semiconductor and thus to a change of the site energies as in 
the case of polar water molecules this brings a broadening of the trap dos the dependence of the mobility on the dielectric constant of the gate dielectric has been observed with conjugated polymers refs and and with rubrene single crystal field effect transistorsxcite more recently a model has been put forward to quantitatively study the effect of randomly oriented static dipole moments within the gate dielectricxcite the model predicts a significant broadening of the trap dos within the first 1 nm at the insulator semiconductor interface and can explain the dependence of the mobility on the dielectric constant of the gate dielectric quantitativelyxcite in this context it is important to realize that surfaces with a low polarity have a low surface free energy and are thus expected to have a high water repellency as well clearly the high water repellency leads to a a reduced amount of water at the critical insulator semiconductor interfacexcite as already mentioned in sec section chargetransport the thermal fluctuations of the intermolecular transfer integral may be of the same order of magnitude as the transfer integral itself in small molecule semiconductors such as pentacene anthracene or rubrenexcite a theoretical study has pointed out that the large fluctuations in the transfer integral result in a tail of trap states extending from the valence band edge into the gapxcite moreover the band tail is temperature dependent the extension of the band tail increases with temperature due to an increase in the thermal motion of the moleculesxcite for pentacene the theoretical study predicts exponential band tails xmath17 with xmath18mev at xmath19k and xmath20mev at xmath21k some experimental evidence suggests that trap states due to the thermal motion of the molecules play a role in samples with a low trap densityxcitefield effect transistors are often used to measure the trap dos the trap dos can be calculated from the measured transfer characteristics with various analytical methods or by simulating the transistor characteristics with a suitable computer program in sec section comparison we quantitatively compare the trap dos from various studies in the literature with our data since in these studies different methods were used to derive the trap dos it is necessary to ensure that all these methods lead to comparable results analytical methods that are relevant for the comparison in sec section comparison were developed by lang et al ref horowitz et al ref fortunato et al ref grnewald et al ref and kalb et al method i ref method ii ref the trap dos as calculated with the different methods from the same set of measured data is shown in fig figure compmethodsxcite clearly the choice of the method to calculate the trap dos has a considerable effect on the final result the graph also contains the trap dos obtained by simulating the transistor characteristics with a computer program developed by oberhoff et al and this may be seen as the most accurate trap dosxcite the analytical results agree to a varying degree with the simulation method i by kalb et al gives a good estimate of the slope of the trap dos but overestimates the magnitude of the trap densities which can be attributed to a neglect of the temperature dependence of the band mobility xmath5xcite for the method by lang et al the effective accumulation layer thickness xmath22 is assumed to be constant gate voltage independent an effective accumulation layer thickness of xmath23 nm is generally used the method by lang et al leads to a 
significant underestimation of the slope of the trap dos and with an effective accumulation layer thickness of xmath23 nm to a significant underestimation of the trap densities very close to the valence band edge vb these deviations need to be considered in the following analysis color online spectral density of localized states in the band gap trap dos of pentacene as calculated with several methods from the same set of transistor characteristics the transistor characteristics were measured with a pentacene based tft employing a polycrystalline pentacene film and a sioxmath11 gate dielectric the energy is relative to the valence band edge vb the choice of the method to calculate the trap dos has a considerable effect on the final result adapted from ref on the one hand trap dos data were taken from publications by various groups that are active in the field the data were extracted by using the dagra software which allows to convert plotted data eg in the figures of pdf files into data columns on the other hand we also add to the following compilation unpublished data from experiments in our laboratory we focus on the trap dos in small molecule semiconductors since almost no data exists in the literature on the trap dos in solution processed small molecule semiconductors we almost exclusively deal with the trap dos in vapour deposited small molecules more specifically the data are from tft s which were made by evaporating the small molecule semiconductors in high vacuum the single crystals for the sc fet s and for the measurements of the bulk trap dos were grown by physical vapour transport sublimation and recrystallization in a stream of an inert carrier gasxcite moreover the electron trap dos close to the conduction band edge cb has rarely been studied so far in small molecule semiconductors and with one exception we are dealing with the hole trap dos in small molecule semiconductors in the following color online trap dos from thin film transistors tft s made with small molecule organic semiconductors several different semiconductors gate dielectrics and methods to calculate the trap dos were used some details of the tft fabrication are listed in table table tfts along with the method that was used to calculate the trap dos and the reference of the data small molecule semiconductors tend to be crystalline and can be obtained in high purity typical materials are oligomers such as pentacene or sexithiophene but this class of materials also includes eg rubrene or cxmath3 the molecules interact by weak van der waals type forces and have loosely bound xmath2electrons which are the source of charge conduction cols it is interesting to compare the trap dos in small molecule organic semiconductors with the trap dos in hydrogenated amorphous silicon a si h and polycrystalline silicon poly si for a si h the mobility edge picture is used to describe the charge transport and trap states have been studied extensivelyxcite the distribution of bond angles and interatomic distances in amorphous silicon a si around a mean value leads to a blurred band edge ie to band tails extending into the gap the trap densities at a given energy reflect the volume density of certain bond angles and interatomic distances for example a rather large deviation from the atomic configuration in the crystalline phase from the mean value in the amorphous phase leads to traps with energies far from the band edge these traps are present with rather low densities since small deviations are much more likely to occur in addition 
we may have dangling bonds in a si acting as traps it is well known that hydrogenation of a si leads to a reduction in the trap dos due to a passivation of dangling bonds with hydrogenxcite for fig figure asi we have selected typical trap dos data from samples with small molecule semiconductors data from fig figure together the data are compared with a typical hole trap dos in a si h dash dotted green lines and with a typical electron trap dos in a si h full green line details of the data are given in table table si in fig figure asi we see that the hole trap dos in tft s with small molecule semiconductors such as pentacene is surprisingly similar to the hole trap dos in a si h both the magnitude of the trap densities and the slope of the distribution are very similar finally in fig figure polysi we similarly compare data from small molecule semiconductors with a typical hole trap dos in poly si dash dotted blue line and an electron trap dos in poly si full blue line the trap distribution is less steep in poly si as compared to the trap dos in organic thin films such that we have higher trap densities far from the transport band edge we compared the hole trap dos trap densities as a function of energy relative to the valence band edge in various samples of small molecule organic semiconductors as derived from electrical characteristics of organic field effect transistors and space charge limited current measurements in particular we distinguish between the trap dos in thin film transistors with vacuum evaporated small molecules the trap dos in organic single crystal field effect transistors and the trap dos in the bulk of single crystals grown by physical vapour transport a comparison of all data strongly suggests that structural defects at grain boundaries tend to be the main cause of fast traps in tft s made with vacuum evaporated pentacene and supposedly also in related materials moreover we argue that dipolar disorder due to the presence of the gate dielectric and more specifically water adsorbed on the gate dielectric surface is the main cause of traps in sc fet s made with a semiconductor such as rubrene one of the most important findings is that bulk trap densities can be reached in organic field effect transistors if the organic semiconductor has few structural defects eg single crystals and if a highly hydrophobic gate dielectric is used the highly hydrophobic cytopxmath24 fluoropolymer gate dielectric essentially is a gate dielectric that does not cause traps at the insulator semiconductor interface and thus leads to organic field effect transistors with outstanding performance the trap dos in tft s with small molecule semiconductors is very similar to the trap dos in hydrogenated amorphous silicon this is surprising due to the very different nature of polycrystalline thin films made of small molecule semiconductors with van der waals type interaction on the one hand and covalently bound amorphous silicon on the other hand although several important conclusions can be drawn from the extensive data it is clear that the present picture is not complete more systematic studies are necessary to consolidate and complete the understanding of the trap dos in organic semiconductors and organic semiconductor devices the present compilation may serve as a guide for future studies
we show that it is possible to reach one of the ultimate goals of organic electronics producing organic field effect transistors with trap densities as low as in the bulk of single crystals we studied the spectral density of localized states in the band gap trap dos of small molecule organic semiconductors as derived from electrical characteristics of organic field effect transistors or from space charge limited current measurements this was done by comparing data from a large number of samples including thin film transistors tft s single crystal field effect transistors sc fet s and bulk samples the compilation of all data strongly suggests that structural defects associated with grain boundaries are the main cause of fast hole traps in tft s made with vacuum evaporated pentacene for high performance transistors made with small molecule semiconductors such as rubrene it is essential to reduce the dipolar disorder caused by water adsorbed on the gate dielectric surface in samples with very low trap densities we sometimes observe a steep increase of the trap dos very close xmath0ev to the mobility edge with a characteristic slope of xmath1mev it is discussed to what degree band broadening due to the thermal fluctuation of the intermolecular transfer integral is reflected in this steep increase of the trap dos moreover we show that the trap dos in tft s with small molecule semiconductors is very similar to the trap dos in hydrogenated amorphous silicon even though polycrystalline films of small molecules with van der waals type interaction on the one hand are compared with covalently bound amorphous silicon on the other hand although important conclusions can already be drawn from the existing data more experiments are needed to complete the understanding of the trap dos near the band edge in small molecule organic semiconductors
introduction
charge transport in small molecule organic semiconductors
causes of trap states in small molecule organic semiconductors
calculating the trap dos from experiment
comparison of trap dos data
summary and conclusions
entanglement renormalizationxcite is a renormalization group rg approach to quantum many body systems on a lattice as with most rg methods xcite it proceeds by coarse graining the microscopic degrees of freedom of a many body system and thus also their hamiltonian xmath0 to produce a sequence of effective systems with hamiltonians xmath1 that define a flow towards larger length scale lower energies entanglement renormalization operates in real space it does not rely on fourier space analysis and it is a non perturbative approach that is it can handle interactions of any strength as a result it has a wide range of applicability from quantum criticality xcite to emergent topological order xcite from frustrated antiferromagnets xcite to interacting fermions xcite and even to interacting anyons xcite entanglement renormalization produces an efficient approximate representation of the ground state of the system in terms of a variational tensor network the multi scale entanglement renormalization ansatz mera xcite from which one can extract expectation values of arbitrary local observables most applications of the mera have so far focused on systems that are translation invariant herewe will consider instead systems where translation invariance is explicitly broken by the presence of a defect for simplicity we assume that the defect is placed on an infinite quantum critical system that in the absence of the defect would be both homogeneous that is translation invariant and a fixed point of the rg that is scale invariant under that assumption the mera offers a shockingly simple description in the absence of the defect it is completely characterized by a single pair of tensors xmath2 and in the presence of the defect by just one additional tensor xmath3 if the defect is also itself at a scale invariant fixed point of the rg flow or by a sequence of a few additional tensors xmath4 that describe its flow towards an rg fixed point in this paperwe propose and benchmark algorithms for quantum critical systems in the presence of defects that exploit the simple description afforded by the mera we start by briefly reviewing the required background material on entanglement renormalization including a recently proposed theory of minimal updates xcite that is at the core of the surprisingly compact mera description of defects in quantum critical systems two distinctive aspects of entanglement renormalization are the tensor network structure of the coarse graining transformation and the variational nature of the approach the coarse graining transformation is implemented by a linear isometric map xmath5 relating the hilbert spaces of the lattice system before and after coarse graining as illustrated in fig fig era the linear map xmath5 decomposes as a network of tensors called disentanglers xmath6 and isometries xmath7 the structure of the network has been designed with the important property that xmath5 preserves locality local operators are mapped into local operators thus if xmath0 is a short ranged hamiltonian then the effective hamiltonians xmath8xmath9 etc are also short ranged on the other hand the approach is variational the disentanglers xmath6 and isometries xmath7 are loaded with variational parameters which are determined through energy minimization this ensures that the coarse graining transformation xmath5 is properly adapted to the system under consideration that is instead of deciding a priori which degrees of freedom should be kept and which should be thrown away the method proceeds by asking 
the hamiltonian xmath0 which part of many body hilbert space corresponds to low energies and proceeds to safely remove the rest for a lattice in xmath10 dimensions that decomposes as a tensor network made of disentanglers xmath6 depicted as squares and isometries xmath7 depicted as triangles b the mera on a xmath10 dimensional lattice made of xmath11 sites obtained by collecting together a sequence of coarse graining transformations xmath12width321 however the most prominent feature of entanglement renormalization setting it apart from other real space rg approaches is its handling of short range entanglement while isometries xmath7map a block of sites into an effective site and thus play a rather standard role in a coarse graining transformation disentanglers xmath6 perform a more singular task the removal of short range entanglement from the system thanks to this removal the coarse graining transformation xmath5 constitutes a proper implementation of the rg xcite in that the sequence of effective systems with hamiltonians xmath13 only retain degrees of freedom corresponding to increasing length scales in particular at fixed points of the rg flow entanglement renormalization explicitly realizes scale invariance the system before coarse graining and the system after coarse graining are seen to be locally identical the mera xcite is the class of tensor network state that results from joining the sequence of coarse graining transformations xmath14 see fig fig erb it is a variational ansatz for ground states or more generally low energy states of many body systems on a lattice in xmath15 spatial dimensions by construction the mera extends in xmath16 dimensions where the additional dimension corresponds to length scale or rg flow as a result it is distinctly well suited to study systems where several length scales are relevant because the information related to each length scale is stored in a different part of the network in particular the mera offers an extremely compact description of ground states of homogeneous systems at fixed points of the rg flow that is in systems with both translation invariance and scale invariance these encompass both stable gapped rg fixed points which include topologically ordered systems xcite and unstable gapless rg fixed points corresponding to quantum critical systems xcite indeed translation invariance leads to a position independent coarse graining transformation xmath5 made of copies of a single pair of tensors xmath17 whereas scale invariance implies that the same xmath5 can be used at all scales as a result the single pair xmath18 completely characterizes the state of an infinite system the study of quantum critical systems is therefore among the natural targets of the mera until now most applications of the mera to quantum criticality have focused on systems that are invariant under translations see however refs in translation invariant systems the mera provides direct access to the universal information of the quantum phase transition as often encoded in the conformal data of an underlying conformal field theoryxcite cft see appx sect mera for a review in particular in one spatial dimension one can extract the central charge and identify the set of primary scaling operators xmath19 both local xcite and non local xcite together with their scaling dimensions xmath20 from which most critical exponents of the theory follow as well as the corresponding operator product expansion coefficients this data completely characterizes the underlying cft the goal of 
this manuscript is to address quantum critical systems where the translation invariance of a system is explicitly broken by the presence of a boundary an impurity an interface etc we refer to any such obstruction to translation invariance generically as a defect and to the system in the absence of the defects as the host system methods for simulating quantum critical systems with such defects are important in order to understand and model their effects in realistic settings a major difficulty in addressing such systems is that since the presence of a defect manifestly breaks the translation invariance of the host hamiltonian the ground state is no longer homogeneous instead expectation values of local observables differ from the homogeneous case throughout the whole system by an amount that only decays as a power law with the distance to the defect in this scenario a natural option which we will not follow here would be to choose a coarse graining map xmath5 with position dependent disentanglers and isometries that adjust to the power law profile of ground state expectation values notice that the resulting mera would be made of a large number proportional to the system size of inequivalent disentanglers and isometries and would therefore incur much larger computational costs again proportional to the system size than in a homogeneous system importantly we would not be able to study infinite systems directly and when extracting the low energy properties of the defect these would be significantly contaminated by ubiquitous finite size effects which vanish as a power law with the system size what one would like then is a mera description of many body systems with defects that is nearly as compact as in the homogeneous case fortunately a recent theory of minimal updates in holography xcite provides us with a recipe to obtain such a description let xmath0 denote a local hamiltonian for an extended many body system on a xmath15dimensional lattice and let xmath21 xmath22 denote the hamiltonian for the same system after we added a new term xmath23 localized in region xmath24 in addition let xmath25 and xmath26 denote the ground states of the hamiltonian xmath0 and of hamiltonian xmath21 the modified hamiltonian respectively then the theory of minimal updates in holography xcite argues in favor of the following conjecture conjecture minimal update a mera for xmath26 can be obtained from a mera for xmath25 by modifying the latter only in the causal cone xmath27 of region xmath24 here the causal cone xmath27 of region xmath24 is the part of the mera that describes the successive coarse graining of region xmath28 for instance for a region xmath24 consisting of two contiguous sites fig fig directedmera illustrates the causal cone xmath27 the figure also shows how a mera for xmath25 should be modified to obtain a mera for xmath26 of a lattice hamiltonian xmath0 in xmath10 space dimensions scale and translation invariance result in a compact description two tensors xmath29 are repeated throughout the infinite tensor network b the theory of minimal updates dictates that the ground state xmath26 of the hamiltonian xmath30 is represented by a mera with the same tensors xmath29 outside the causal cone xmath31 shaded whereas inside xmath31 two new tensors xmath32 are repeated throughout the semi infinite causal cone c d the same illustrations without drawing the tensors of the networkwidth321 in this paper we propose and benchmark mera algorithms for quantum critical system with one or several defects the 
theoretical foundation of the algorithms is the above conjecture on minimal updates specialized to a hamiltonian of the form xmath33 where xmath0 is the hamiltonian for the host system and xmath34 is the hamiltonian describing the localized defect more specifically we will assume that the host hamiltonian xmath0 which describes an infinite system on a lattice is a homogeneous critical fixed point hamiltonian so that its ground state xmath25 can be succinctly described by a mera that is characterized in terms of just a single pair of tensors xmath17 region xmath24 will typically consists of one or two sites then following the above conjecture a mera for the ground state xmath35 of the hamiltonian xmath36 which we call modular mera and will be further described in sect sect modularity is completely characterized in terms of two sets of tensors see fig fig directedmera first the pair of tensors xmath17 corresponding to the scale and translation invariant host system is repeated throughout the outside of the causal cone of the defect second for a defect that is scale invariant that is a fixed point of the rg flow another pair of tensors xmath37 is repeated throughout the inside of the causal cone of the defect after some rewiring of the modular mera this second pair xmath38 will be replaced by a single tensor xmath3 some settings will require slight modifications of this simple description for instance in the case of interfaces involving several types of system each system will contribute a different pair of tensors for the outside of the causal cone on the other hand if the defect is not yet at a fixed point of the rg flow then instead of a single tensor xmath3 a sequence of scale dependent tensors xmath4 will be used to account for the flow of the defect into the rg fixed point the modular mera leads to simple numerical algorithms for quantum critical systems in the presence of one of several defects which complement and generalize those discussed in ref for homogeneous systems as in the homogeneous case the computational cost of the new algorithms is independent of the system size allowing us to address infinite systems in this way we can extract the universal low energy properties associated to a defect directly in the thermodynamic limit where they are free of finite size effects although in this paper we restrict our attention to systems in xmath10 dimensions for simplicity the key idea of the algorithms can also be applied to systems in xmath39 dimensions in the discussion in sect sect conclusion we will also address how to lift the assumption present throughout this work that the host system is both translation and scale invariant the algorithms proposed in this paper are thus based on assuming the validity of the conjectured theory of minimal updates in holography of ref we contribute to that theory in two ways first by applying the above conjecture recursively we will investigate applications that go well beyond the simple scenario described in ref namely that of a single impurity specifically the modular mera describes the ground state of a complex system such as an interface between two systems xmath40 and xmath41 by combining modules obtained by studying simpler systems such as homogeneous versions of system xmath40 and system xmath41 separately modularity is central to the algorithms proposed in this work and key to their computational efficiency second the benchmark results presented here constitute solid evidence that the conjectured minimal updates are indeed sufficient to 
accurately represent a large variety of defects this contributes significantly to establishing the theory of minimal updates which so far was supported mostly by the theoretical arguments provided in ref in this paper we assume that the reader is already familiar with the scale invariant mera for translation invariant systems a detailed introduction to which can be found in ref however for completeness we have also included a brief review to the mera in the presence of scale and translation invariance in appx sect mera sect modularity introduces the modular mera and describes how they can be applied to quantum critical systems with an impurity boundary interface and more complex settings such as several defects or y interfaces involving three systems also called y junctions it also explains how to extract the low energy universal properties of the defect sect optmod discusses how to optimize the modular mera this is illustrated with the paradigmatic case of a single impurity the first step involves optimizing a mera for the homogeneous system refs so as to obtain the pair of tensors xmath17 then an effective hamiltonian for the causal cone of the impurity or wilson chain is produced by properly coarse graining the host hamiltonian xmath0 and adding the impurity term xmath23 finally a simplified tensor network ansatz for the ground state of the wilson chain is optimized by energy minimization from which one would be able to extract tensor xmath3 or tensors xmath42 sect bench benchmarks the modular mera algorithm for a number of quantum critical systems in xmath10 spatial dimension these include systems with one and several impurities systems with one or two boundaries interfaces between two systems and y interfaces between three systems for each type of defect we outline how the basic algorithm of sect sect optmod needs to be modified the approach is seen to provide accurate numerical results for ground state properties both for expectation values of local observables and for low energy universal properties eg in the form of conformal data describing an underlying cft including the critical exponents associated to the defect finally sect sect conclusion concludes the paper with a discussion and a summary of results we have also included three appendices sect mera provides a basic introduction to key aspects of er and mera used throughout the manuscript and reviews how to extract universal properties conformal data from a translation and scale invariant mera b and c provide technical details on certain aspects of the modular mera in this section we introduce the modular mera for homogeneous systems with one or several defects we also explain how to extract the universal properties of a defect including its set of scaling dimensions from which one can derive all critical exponents associated to the defect for simplicity we only consider lattice systems in one spatial dimension the modular mera is built upon the conjecture that the presence of a defect can be accurately accounted for by only updating the interior of the causal cone xmath27 of the region xmath24 on which the defect is supported below we will argue that when applied recursively this minimal update implies that we can describe eg an interface between two semi infinite quantum critical spin chains by combining modules that describe the two systems individually that is in the absence of an interface we refer to this property as modularity in the holographic description of quantum states next we describe the modular mera for systems 
with a single impurity an open boundary or an interface of two different quantum systems notice that the impurity system can be considered as an interface of two identical systems while the open boundary can be considered as an interface with a trivial system before discussing more general applications of modularity such as systems with multiple impurities or y interfaces of three quantum chains a note on terminology we call modular mera any mera for a system with one or several defects that following the theory of minimal updates of ref has been obtained from a mera for the host system that is without the defects by modifying only the tensors in the causal cone of the defects on the other hand for specific types of defects such as an impurity a boundary etc we also occasionally use the more specific terms impurity mera boundary mera etc to denote the corresponding specific type of modular meras throughout this section the quantum critical homogeneous host system is described by an infinite lattice xmath43 in one dimension with a fixed point hamiltonian xmath44 made of constant nearest neighbor couplings xmath45 such that its the ground state xmath25 of xmath0 can be represented by a scale invariant and translation invariant mera with a single pair of tensors xmath17 let us first consider an impurity problem in one spatial dimension with hamiltonian xmath46 where xmath47 accounts for an impurity that is supported on a small region xmath24 which in the following is supposed to be made of two contiguous sites let xmath48 denote the ground state of hamiltonian xmath49 then the theory of minimal updates in holography xcite asserts that a mera for the ground state xmath48 can be obtained by modifying the mera for xmath50 only in the causal cone xmath27 of region xmath24 which we assume to also be scale invariant accordingly the impurity mera is fully described by two pairs of tensors xmath51 and xmath52 if the impurity is not scale invariant then additional pairs of scale dependent tensors xmath53 inside the causal cone will be required in order to describe the non trivial rg flow of the impurity to a scale invariant rg fixed point fig defectmeraa depicts the impurity mera in practical computations we find it more convenient to apply cosmetic changes inside the causal cone of the tensor network as described in fig fig defectmerab c and work instead with the impurity mera depicted in fig fig defectmerac this requires first splitting the isometries xmath7 within the causal cone xmath27 into pairs of binary isometries xmath54 and xmath55 as described in appendix sect isodecomp and then further simplifying the tensor network inside the causal cone replacing the pair of tensors xmath37 by a single tensor xmath3 if the impurity is not scale invariant then additional scale dependent tensors xmath4 will be required notice that figs fig defectmeraa and fig defectmerac represent two essentially equivalent forms of the modular mera however the latter form is slightly simpler and accordingly we will use it in the theoretical discussion of sect sect critmod and in the benchmark results of sect sect benchimpurity of hamiltonian xmath49 eq a regular form of an impurity mera for xmath56 originating in the mera for a scale invariant translation invariant state xmath25 described by a pair of tensors xmath17 and that has a different pair of tensors xmath37 inside the causal cone xmath27 shaded of the local region xmath24 associated to the impurity b prior to modifying the homogeneous mera we can decompose some of 
its isometries xmath7 into upper xmath54 and lower xmath55 isometries as described in appendix sect isodecomp c a slightly different impurity mera for the same ground state xmath56 is obtained by replacing the tensors within the causal cone xmath27 of the tensor network in b with a new set of isometric tensors xmath3width321 let us now consider a modular mera for a semi infinite chain with a boundary notice that a special case of the impurity hamiltonian of eq s3e2 corresponds to an impurity that cancels out the interaction between the two sites in region xmath28 xmath57 where xmath58 denotes the part of the homogeneous hamiltonian xmath59 that is supported on xmath24 more generally xmath60 could also contain additional single site terms such as a single site magnetic field etc notice that since we are dealing with a special case of the impurity hamiltonian of eq s3e2 the impurity mera of fig fig boundarymeraa could be used as an ansatz for its ground state however since there is no interaction and therefore no entanglement between the left and right semi infinite halves of the system we can simplify the impurity mera by setting the disentanglers xmath61 within the causal cone to identity resulting in the doubled boundary mera depicted in fig fig boundarymerab in other words the theory of minimal updates xcite asserts that a modular mera consisting of half a homogeneous mera and a single column of boundary tensors xmath3 can be used to represent the ground state xmath62 of a homogeneous hamiltonian with an open boundary xmath63 where the additional and completely unconstrained one site term xmath64 is included to set the boundary condition this form of modular mera for boundary problems boundary mera was first proposed and tested in ref there however no theoretical justification of its remarkable success was provided in sect sect benchbound we expand upon these previous results for boundary mera by benchmarking the ansatz both for semi infinite chains and for finite systems with two open boundaries note that a related form of boundary mera was also proposed in ref of hamiltonian xmath65 a an impurity mera can be used as an ansatz for the ground state xmath56 of a homogeneous hamiltonian xmath0 that has an impurity xmath66 added on region xmath24 see also fig fig defectmera b as a special case of the impurity mera if the impurity xmath66 is chosen such as to remove all interaction between the left and right halves of the chain as described in eq s3e3 then the disentanglers xmath61 from a can be set to identity in this way we obtain two copies of the boundary mera an ansatz for the ground state xmath62 of a semi infinite system with a single open boundarywidth321 next we describe a modular mera for an interface between two semi infinite homogeneous systems xmath40 and xmath41 consider an infinite chain with hamiltonian xmath67 where xmath68 xmath69 is the restriction to the left right semi infinite half of the chain of a hamiltonian for a scale and translation invariant system xmath40 xmath41 and where xmath70 describes a coupling between xmath40 and xmath41 across the interface xmath24 if the strength xmath71 of the interface coupling is set at xmath72 then hamiltonian xmath73 reduces to a pair of non interacting open boundary hamiltonians of the form described in eq s3e3b in this case the ground state could be represented with two different boundary meras as depicted in fig fig interfacemeraa if we now consider switching on the interface coupling ie xmath74 then the theory of minimal 
updates asserts that only the inside of the causal cone of xmath24 in fig fig interfacemeraa needs be modified similar to the approach with the impurity mera in fig fig defectmerac we replace the structure within the causal cone by a new set of isometric tensors xmath3 which leads to the interface mera as shown in fig fig interfacemerab the performance of the interface mera is benchmarked in sect sect benchtwo in eq b if a non zero interface coupling xmath75 is introduced then the mera from a is modified within the causal cone xmath27 of region xmath24 with the introduction of a new set of isometric tensors xmath3 the resulting ansatz is an interface merawidth321 the theory of minimal updates produces a modular mera also for more complex problems such as systems involving multiple impurities or for systems with several types of defects such a system with both a boundary and an impurity in the benchmark results of sect sect bench we describe a modular mera for a system with two impurities for a finite system with two open boundaries and for a y interface of three semi infinite quantum spin chains a summary of several types of modular mera together with the corresponding hamiltonians is depicted in fig fig meratypes notice that in all instances the modular mera is characterized by a small number of tensors that does not scale with the system size thus it can be used to address thermodynamically large systems directly as shall be demonstrated in the benchmark results from a homogeneous system and dark shading indicates regions occupied by tensors associated to a defect a mera for the scale and translation invariant ground state xmath76 of a homogeneous hamiltonian xmath77 b impurity mera for the ground state xmath56 of an impurity hamiltonian xmath49 eq s3e2 see also fig fig defectmera c modular mera for the ground state xmath78 of a hamiltonian xmath79 with two impurities localized on disjoint regions xmath80 and xmath81 d tensor product of two boundary meras for the ground state xmath82 of an impurity hamiltonian xmath49 in which the impurity is used to remove any interaction between the left and right halves of the chain e modular mera for the ground state xmath83 of the hamiltonian xmath84 for a finite chain with two open boundaries at xmath85 and xmath86 f interface mera for the ground state xmath87 of an interface hamiltonian xmath73 eq s3e4 describing the interface between two two homogeneous systems xmath40 and xmath41width321 located at the impurity site of an impurity mera is coarse grained into one site operators xmath88 then xmath89 and so forth b the scaling superoperator xmath90 associated to the impurity c an operator at the site of the impurity xmath91 and an operator xmath92 some distance xmath93 from the impurity become nearest neighbors after xmath94 coarse graining stepswidth321 next we explain how to extract the large length scale universal properties of a defect from the modular mera we will see that the structure of the ansatz automatically implies i the existence of a new set of scaling operators and scaling dimensions associated to the defect that is in addition to the so called bulk scaling operators and scaling dimensions associated to the host system see appx sect scalemera ii that the expectation values of local observables differ from those in the absence of the defect by an amount that decays as a power law with the distance to the defect these properties which match those obtained in the context of boundary conformal field theory bcft xcite indicate that the 
modular mera is a very natural ansatz to describe ground states of quantum critical systems in the presence of a defect and further justifies the validity of the theory of minimal updates of ref for concreteness let us consider the impurity mera in fig fig defectmerac which is fully characterized by the homogeneous tensors xmath95 and the impurity tensor xmath3 let xmath96 be a local operator that is measured on the region xmath24 where the impurity is located which we effectively collapse into a single site each layer xmath5 of the impurity mera can be interpreted as a coarse graining transformation that will map xmath96 into a new local operator xmath97 as also illustrated in fig fig twocorrloca the coarse graining of one site operators located at the impurity is achieved by means of a scaling superoperator xmath98 associated to the impurity xmath99 where the form of xmath98 is depicted in fig fig twocorrlocb notice that xmath98 depends only on the impurity tensor xmath3 ie it does not depend on tensors xmath95 one can diagonalize the impurity superoperator xmath98 as was done with the scaling superoperator xmath100 in appx sect scalemera to obtain its scaling operators xmath101 and scaling dimensions xmath102 which are defined as xmath103 (a minimal numerical sketch of this diagonalization is given at the end of this discussion) let us now evaluate the ground state correlator between an impurity scaling operator xmath101 located at the site of the impurity xmath104 and a bulk scaling operator xmath105 located at site xmath93 xmath106 as illustrated in fig fig twocorrlocc for convenience we choose xmath107 for an integer xmath108 after applying one layer of coarse graining the distance between the scaling operators is reduced to xmath109 xmath110 which leads to the equality xmath111 where xmath112 and xmath113 are eigenvalues of the scaling superoperators xmath98 and xmath114 respectively after xmath115 coarse graining transformations the two scaling operators become nearest neighbors in the effective lattice iterating eq s3e7 that many times we obtain xmath116 in the last step we have ignored a subdominant term that becomes negligible in the large xmath93 limit and have introduced the constant xmath117 the constant xmath118 is defined as the correlator for the scaling operators on adjacent sites xmath119 here xmath120 is the two site reduced density matrix on the site of the impurity and the adjacent site eq s3e8 reproduces a well established result from bcft xcite the correlator between a scaling operator at the impurity and a scaling operator outside the impurity decays polynomially with the distance xmath93 with an exponent that is the sum of the corresponding impurity scaling dimension xmath121 and bulk scaling dimension xmath122 let us now specialize eq s3e8 by setting the impurity scaling operator to the identity xmath123 this leads to xmath124 ie the expectation value of a bulk scaling operator xmath105 tends to zero polynomially in distance xmath125 from the impurity with an exponent equal to its scaling dimension xmath122 recall that in a bulk critical system all bulk scaling operators with the exception of the identity have vanishing expectation value xmath126 thus in the large xmath93 limit the expectation value of an arbitrary local operator xmath127 located at site xmath93 of the impurity mera differs from its bulk expectation value xmath128 as xmath129 where the exponent xmath130 of the decay represents the dominant smallest non zero scaling dimension of the operator xmath96 when decomposed in a basis of bulk scaling operators in summary eq s3e11 makes explicit how local observables behave in the modular mera.
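the diagonalization of the impurity scaling superoperator mentioned above reduces, once xmath98 has been assembled from the impurity tensor and written as a matrix acting on vectorized one site operators, to an ordinary eigenvalue problem the following python sketch shows that last step only it assumes a linear rescaling factor of 2 per layer and the standard mera relation in which an eigenvalue of the superoperator equals the rescaling factor raised to minus the scaling dimension; the assembly of the superoperator itself is network dependent and is not shown, and all names and the toy input are ours, not the paper's

    import numpy as np

    def impurity_scaling_dimensions(S, b=2, k=8):
        # S : (d*d, d*d) matrix form of the one-site impurity scaling superoperator,
        #     acting on vectorized one-site operators (its assembly is not shown here).
        # b : linear rescaling factor of one mera layer (assumed to be 2).
        # k : number of dominant eigenvalues to keep.
        evals, evecs = np.linalg.eig(S)            # generically non-hermitian
        order = np.argsort(-np.abs(evals))[:k]     # sort by decreasing magnitude
        lam = evals[order]
        dims = -np.log(np.abs(lam)) / np.log(b)    # eigenvalue = b**(-scaling dimension)
        d = int(round(np.sqrt(S.shape[0])))
        ops = [evecs[:, i].reshape(d, d) for i in order]   # impurity scaling operators
        return dims, ops

    # toy usage with a random superoperator, normalized so the leading eigenvalue is 1
    rng = np.random.default_rng(0)
    S = rng.standard_normal((4, 4))
    S = S / np.abs(np.linalg.eigvals(S)).max()
    dims, ops = impurity_scaling_dimensions(S, b=2, k=4)
    print(np.round(dims, 3))

the same routine applied to the bulk scaling superoperator xmath114 gives the bulk scaling dimensions that enter the exponent of eq s3e8.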
expectation values of local observables deviate from bulk expectation values everywhere with a magnitude that decays polynomially with respect to the distance xmath93 from the defect in this section we describe how the modular mera can be optimized for concreteness we focus on the optimization of the impurity mera depicted in fig fig logscalea noting that other modular meras such as those introduced in sect sect modularity can be optimized using a similar approach in the following the impurity mera will be optimized so as to approximate the ground state of an impurity hamiltonian xmath0 of the form xmath131 where xmath132 is the hamiltonian of a translation invariant quantum critical host system and the term xmath133 represents a local impurity localized on a region xmath24 of the lattice the proposed optimization algorithm is a direct implementation of the theory of minimal updates first a scale invariant mera for the ground state xmath76 of the host hamiltonian xmath77 is obtained which is then modified within the causal cone xmath27 of region xmath24 in order to account for the impurity xmath133 and obtain the ground state xmath56 of xmath49 the three steps for optimizing the impurity mera are thus as follows 1 the tensors xmath17 describing the host system are obtained through optimization of a scale invariant mera for the ground state xmath76 of the host hamiltonian xmath77 step s1e1 2 the original impurity hamiltonian xmath134 defined on the infinite lattice xmath135 is mapped to an effective hamiltonian xmath136 on a semi infinite wilson chain xmath137 to be introduced below xmath138 through an inhomogeneous coarse graining xmath139 defined in terms of tensors xmath17 step s1e2 3 the impurity tensors xmath3 are obtained through optimization of a tensor network approximation to the ground state xmath140 of the effective problem xmath136 on the wilson chain step s1e3 the optimization of the mera for the host hamiltonian step step s1e1 above has been covered extensively in eg refs xcite to which we refer the reader we now describe in sect sect logscale the details of step step s1e2 and in sect sect optlog the optimization algorithm for step step s1e3 and impurity tensors xmath3 for a xmath141 lattice xmath135 the causal cone xmath27 of the impurity region xmath24 is shaded the wilson chain xmath142 is the xmath141 lattice formed along the boundary of this causal cone b the inhomogeneous coarse graining xmath139 maps the initial hamiltonian xmath0 here partitioned into shells xmath143 of varying size see eq eq shells to the effective hamiltonian xmath136 defined on the wilson chain xmath144 c a schematic depiction of the coarse graining of a term from the local hamiltonian xmath145 assuming scale invariance of the hamiltonian xmath0 to a local coupling on the wilson chain see eq eq ad6 d diagrammatic representation of the coarse graining described in eq eq ad7 for xmath146 e diagrammatic representation of the coarse graining described in eq eq ad7 for xmath147 g a diagrammatic representation of xmath148width321 consider a mera on lattice xmath135 and a region xmath24 with corresponding causal cone xmath27 we call the wilson chain of region xmath28 denoted xmath149 the one dimensional lattice obtained by following the surface of the causal cone xmath150 see fig fig logscalea that is the hilbert space for the wilson chain is built by coarse graining the hilbert space of the initial lattice xmath43 with an inhomogeneous logarithmic scale coarse graining transformation xmath139 which is 
comprised of all the tensors in the mera that lie outside the causal cone xmath150 see fig fig logscaleb in the following we describe how the hamiltonian xmath49 defined on lattice xmath135 is coarse grained to an effective hamiltonian xmath136 on this wilson chain which by construction can be seen to be only made of nearest neighbor terms xmath151 here the nearest neighbor coupling xmath152 depends on xmath153 however below we will see that scale invariance of the host hamiltonian xmath0 implies that for all values of xmath153 xmath152 is proportional to a constant coupling xmath154 obtaining the effective hamiltonian xmath136 for the wilson chain is a preliminary step to optimizing the impurity tensors xmath3 it is convenient to split the hamiltonian xmath49 into three pieces xmath155 where xmath156 collects the impurity hamiltonian xmath157 and the restriction of the host hamiltonian xmath0 on region xmath24 and xmath158 and xmath159 contain the rest of the hamiltonian terms to the left and to the right of region xmath24 respectively for simplicity we shall only consider explicitly the contribution to the effective hamiltonian xmath136 that comes from xmath159 xmath160 where xmath161 measures the distance from the impurity region xmath24 we note that xmath158 in eq eq split yields an identical contribution whereas xmath156 is not touched by the coarse graining transformation xmath139 let us rewrite xmath159 as xmath162 here xmath163 denotes the sum of all terms in xmath159 supported on the sites of lattice xmath135 that are in the interval xmath164 to the right of xmath165 where xmath166 is xmath167 for instance xmath168 is the sum of hamiltonian terms in the interval xmath16912 which is actually just a single term xmath170 while xmath171 is the sum of terms in the interval xmath17225 xmath173 and so forth let xmath174 denote the ascending superoperator that implements one step of coarse graining of xmath163 the explicit forms of xmath175 xmath176 and xmath177 are depicted in fig fig logscaled f respectively then the term xmath178 of the effective hamiltonian xmath136 is obtained by coarse graining xmath179 a total of xmath153 times xmath180 as an example fig fig logscalec depicts the coarse graining of the term xmath145 xmath181 through use of eq eq ad5 one can evaluate all the terms xmath182 for xmath183 that define the effective hamiltonian xmath136 on the wilson chain xmath142 let us now specialize the analysis to the case where the original hamiltonian on xmath43 is scale invariant see appx sect scalemera in this case xmath163 transforms in a precise way under coarse graining namely xmath184 for all xmath185 let us define xmath186 then all the terms xmath187 of the effective hamiltonian xmath136 are seen to be proportional to this same term xmath188 xmath189 and the effective hamiltonian xmath136 for the wilson chain is xmath190 that is in the scale invariant case we have obtained a nearest neighbor hamiltonian where each nearest neighbor term is proportional to xmath188 with a proportionality constant that decays exponentially with xmath153 if the scale invariant mera contained xmath191 transitional layers before reaching scale invariance see appx sect scalemera then the form of the terms in xmath136 would be position dependent for xmath192 and only become proportional to a fixed xmath193 for xmath194 the hamiltonian xmath136 is analogous to the effective hamiltonian that wilson obtained and subsequently solved in his celebrated solution to the kondo impurity problem xcite this observation
was central to the proposal and justification of minimal updates in mera in ref defined on lattice xmath135 is mapped to an effective hamiltonian xmath136 defined on the wilson chain xmath142 via the inhomogeneous coarse graining xmath139 b the set of impurity tensors xmath195 form a tree tensor network state xmath140 on xmath142 we denote by xmath196 the block of radius xmath153 about xmath24 c d the block hamiltonian xmath197 defined as the part of xmath136 supported on block xmath196 is coarse grained to the one site block hamiltonian xmath198 using the impurity tensors xmath199 e f the reduced density matrix xmath200 on block xmath196 is coarse grained to the one site reduced density matrix xmath201 using the impurity tensors xmath199width321 once we have constructed the effective hamiltonian xmath136 for the logarithmic scale wilson chain xmath142 as represented schematically in fig fig defectalgaa we can proceed to optimize for the impurity tensors xmath3 the impurity tensors xmath3 form a tensor network known as tree tensor network xcite ttn which we use as a variational ansatz for the ground state xmath140 on the wilson hamiltonian xmath136 see fig fig defectalgab specifically the impurity tensorsxmath3 will be obtained through the energy minimization xmath202 notice that if folded through the middle this ttn is equivalent to a matrix product state mps xcite therefore its optimization can be accomplished using standard variational mps methods xcite once they have been properly adapted to a semi infinite chain here for concreteness we describe in detail an optimization algorithm that is similar to the techniques employed in the optimization algorithm for scale invariant mera xcite we assume that the state xmath140 can be described by the above ttn made of tensors xmath203 where all the tensors for xmath204 are given by a fixed tensor xmath205 the number of required transitional tensors xmath206 will in general depend on both the details of the mera for the state xmath76 of the lattice xmath43 more specifically on the number xmath191 of transitional layers required before reaching scale invariance see appx sect scalemera as well as the details of the specific impurity under consideration in practice the appropriate xmath206is found heuristically one starts with a small xmath206 minimizes the energy using eg the algorithm provided below and then iteratively increases xmath206 until the corresponding optimized energy does no longer depend on xmath206 in total xmath207 distinct tensors xmath208 need be optimized this is achieved by iteratively optimizing one tensor at a time so as to minimize the energy xmath209 if xmath195 is the tensor to be optimized then we proceed by computing its linearized environment xmath210 which is the tensor obtained by removing tensor xmath195 but not its conjugate xmath211 from the tensor network describing the energy xmath209 and that therefore fulfills xmath212 where ttr denotes a tensor trace an updated xmath195 that minimizes the energy is then obtained through the singular value decomposition svd of xmath210 let us define the nested set of blocks xmath213 as block of radius xmath153 around xmath24 with xmath214 see fig fig defectalgab then the process of computing linearized environments xmath210 is simplified by first computing the coarse grained block hamiltonians xmath198 and reduced density matrices xmath201 supported on xmath196 as described in sectssect optham and sect optden respectively sect optlin discusses details of the construction of 
linearized environments and the svd update while sect sect optalg describes how these steps can be composed into the full optimization algorithm let us denote by xmath197 the part of the hamiltonian xmath136 that is supported on block xmath196 and by xmath198 its effective one site version that results from coarse graining xmath197 by the first xmath153 impurity tensors xmath199 see fig fig defectalgac d for examples the block hamiltonian xmath216 for a larger block xmath217 can be computed from the smaller block hamiltonian xmath218 by xmath219 where xmath220 is the one site impurity ascending superoperator associated to xmath195 and xmath221 and xmath222 are left and right ascending superoperators that add the contributions from the local couplings xmath223 to the block hamiltonian the forms of these ascending superoperators are depicted as tensor network diagrams in fig fig defectalgba let us denote by xmath200 the reduced density matrix that is obtained from xmath140 by tracing out the sites outside the block xmath196 and by xmath201 its effective one site version that results from coarse graining xmath200 with the first xmath153 impurity tensors xmath199 see fig fig defectalgae f for examples the one site density matrix xmath225 for a smaller block xmath226 can be obtained from the density matrix xmath227 for the larger region xmath228 by fine graining it with isometry xmath195 then tracing out the boundary sites this can be achieved by applying the one site descending superoperator xmath229 associated to the impurity tensor xmath195 xmath230 see fig fig defectalgbb notice that scale invariance such that xmath231 for scales xmath232 implies that xmath233 for all xmath234 where the fixed point density matrix xmath235 satisfies xmath236 here xmath237 is the one site scaling superoperator as introduced in sect sect critmod when studying scale invariant properties of the modular mera which is just the impurity ascending superoperator xmath238 constructed from xmath205 we can thus obtain xmath239 as the dominant eigenvector of xmath240 eg by diagonalizing xmath240 from xmath239 one can then sequentially compute the density matrices xmath241 by using eq eq densitya a the tensor contractions required for evaluating the block hamiltonian xmath242 see also eq eq blockham b the tensor contraction required for evaluating the reduced density matrix xmath243 from xmath244 see also eq eq densitya c the five contributions to the linearized environment xmath210 of the impurity tensor xmath195 fig fig defectalgbc shows the linearized environment xmath210 for the impurity tensor xmath195 xmath210 decomposes into a sum of five terms each of which corresponds to a small tensor network and it depends on the effective hamiltonian xmath245 the reduced density matrices xmath201 and xmath246 the hamiltonian terms xmath223 and xmath247 and the impurity tensors xmath248 xmath249 and xmath250 xmath251 let us consider first the optimization of xmath195 for xmath252 in this case the updated impurity tensor is chosen as xmath253 where xmath254 and xmath255 are isometric tensors obtained from the svd of the linearized environment xmath210 namely xmath256 see ref for further details (a minimal sketch of this svd based update is given below) for xmath204 the impurity tensor xmath195 is a copy of the impurity tensor xmath205 in order to update xmath205 we should construct the environment as the sum of environments for each xmath257 xmath258 obtaining the environment xmath259 directly through this infinite summation may only be possible at a very large computational cost
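as a concrete illustration of the single tensor update just described, the following python sketch carries out the svd step once the linearized environment has been fused into a matrix; the shapes, the sign convention and the function names are ours, and the reshaping of the actual multi index environment xmath210 into this matrix form is not shown

    import numpy as np

    def svd_update_isometry(env):
        # env : matrix form of the linearized environment of an isometric tensor t,
        #       defined so that the energy to be minimized is trace(t @ env).
        # returns the isometry minimizing trace(t @ env), and that minimal value.
        u, s, vh = np.linalg.svd(env, full_matrices=False)   # env = u @ diag(s) @ vh
        t_new = -(vh.conj().T @ u.conj().T)                   # standard mera-style update
        return t_new, -np.sum(s)

    # toy usage: a random environment with 3 upper and 6 lower indices, already fused
    rng = np.random.default_rng(1)
    env = rng.standard_normal((3, 6))
    t, e_min = svd_update_isometry(env)
    print(np.allclose(t.conj().T @ t, np.eye(3)), np.isclose(np.trace(t @ env), e_min))

iterating this update over the tensors in the causal cone is what the sweeps described next are built from; the remaining issue is how to approximate the infinite sum of environments in eq s4e11 for the fixed point tensor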
however since the system is assumed to be scale invariant the environments xmath260 in eq s4e11 should quickly converge to a fixed environment as we increase xmath153 thus one can obtain an approximate environment xmath261 of the scaling impurity tensor xmath205 through a partial summation of eq s4e11 xmath262 the number xmath263 of terms in this partial summation required in order to obtain a sufficiently accurate environment will in general depend on the problem under consideration however for the numerical results of sect sect bench we find that keeping xmath264 is sufficient in most cases once the linearized environment xmath261 has been computed the tensor xmath265 is updated by taking the svd of the environment as in the case xmath252 let us then review the algorithm to optimize the tensors xmath208 of the ttn of fig fig defectalgab for the ground state xmath140 of the effective hamiltonian xmath136 the optimization is organized in sweeps through the ttn where each sweep consists of a sequence of single tensor updates for each xmath195 from xmath266 to xmath267 we iterate these optimization sweeps until the state xmath140 has converged sufficiently recall that the effective hamiltonian xmath136 generically takes the form of eq eq ad9 with nearest neighbor coupling strengths that decay geometrically with the distance to the origin thus a very good approximation to the ground state of xmath136 can be obtained using wilson s numerical renormalization group xcite nrg here we use the nrg to initialize the impurity tensors xmath195 (a minimal sketch of such an initialization is given further below) and then apply the variational sweeping to further improve the approximation to the ground state each iteration of the variational sweep is comprised of the following steps 1 compute the fixed point density matrix xmath239 through diagonalization of the adjoint impurity scaling superoperator xmath268 2 compute the block density matrices xmath269 for all xmath270 using eq eq densitya 3 sequentially update xmath195 starting from xmath266 and proceeding to xmath271 for each such value of xmath153 first compute the linearized environment xmath210 and then update the impurity tensor xmath195 via the svd of this environment then compute the effective hamiltonian xmath198 from xmath245 using the updated isometry xmath195 as described in eq eq blockham 4 update the fixed point tensor xmath205 compute an approximate environment xmath259 as described in eq s4e12 and then update the fixed point tensor xmath205 via the svd of this environment notice that this algorithm is analogous to the one introduced to optimize the scale invariant mera as described in ref in this section we benchmark the use of the modular mera for several types of defect in quantum critical systems specifically we consider impurities boundaries and interfaces in the case of a single impurity a single boundary and a simple interface we use the corresponding modular meras introduced in sects sect impuritymera sect boundmera and sect interfacemera for multiple impurities two boundaries and y interfaces we use more complicated modular meras that result from a recursive use of the theory of minimal updates as outlined in sect sect furgen in several cases we also specify how to modify the basic optimization algorithm of sect sect optmod we start by benchmarking the use of the modular mera to describe a quantum critical system in the presence of a single impurity first and then in the presence of multiple impurities let us first consider a quantum critical system with a hamiltonian of the form xmath272.
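before specifying this hamiltonian, we give the sketch promised above of an nrg style initialization of the impurity tensors it treats a toy wilson chain whose nearest neighbor couplings decay geometrically with the distance to the impurity, mimicking eq eq ad9; the ising form of the coupling, the decay factor 2 and all names are assumptions made only for illustration, not the actual effective hamiltonian of the paper

    import numpy as np

    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def nrg_wilson_chain(n_sites=24, chi=16, decay=2.0, g=1.0):
        # wilson-style nrg on a toy ising-like wilson chain whose couplings fall off
        # as decay**(-l) with the distance l to the impurity (cf. eq ad9).
        # the matrices of kept eigenvectors play the role of the impurity tensors t_l.
        h_block = -g * sz                      # hamiltonian of the first site (l = 0)
        sx_edge = sx                           # sigma^x of the current edge site
        tensors, energies = [], []
        for l in range(1, n_sites):
            m = h_block.shape[0]
            w = decay ** (-l)
            # enlarge the block by one site: old block + new field + new bond
            h_big = (np.kron(h_block, np.eye(2))
                     - w * g * np.kron(np.eye(m), sz)
                     - w * np.kron(sx_edge, sx))
            evals, evecs = np.linalg.eigh(h_big)
            keep = min(chi, h_big.shape[0])
            t = evecs[:, :keep]                # isometry from (block x site) to kept states
            tensors.append(t.reshape(m, 2, keep))
            h_block = np.diag(evals[:keep])    # block hamiltonian in the kept basis
            sx_edge = t.conj().T @ np.kron(np.eye(m), sx) @ t
            energies.append(evals[0])
        return tensors, energies

    tensors, energies = nrg_wilson_chain()
    print(round(energies[-1], 8))              # converges fast thanks to the geometric decay

in practice such an initialization would be followed by the variational sweeps of steps 1 to 4 above, with each single tensor update performed through the svd step sketched earlier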
xmath0 is a fixed point hamiltonian that describes the host system which is invariant both under translations and changes of scale and xmath273 accounts for an impurity localized on region xmath24 of the lattice specifically we test the impurity mera in the case where xmath0 corresponds to the critical ising hamiltonian xmath274 where xmath275 and xmath276 are pauli matrices and the impurity hamiltonian xmath277 acts on two adjacent lattice sites xmath278 where it weakens or strengthens the nearest neighbor term xmath279 for some real number xmath71 the quantum critical ising model with an impurity of this form which is in direct correspondence with the xmath280 classical ising model with a defect line has been studied extensively in the literature xcite we refer the reader to ref for a review of the problem we optimize the impurity mera for the ground state xmath56 of this impurity problem using the strategy outlined in sect sect optmod we fist find tensors xmath17 for the ground state of the homogeneous critical ising model using a scale invariant mera with bond dimension xmath281 this mera incorporated both the xmath282 spin flip global on site symmetry and the reflection symmetry see appendix sect refsym of xmath283 this optimization required approximately 1 hour of computation time on a 32 ghz desktop pc with 12 gb of ram the mapping of the initial impurity hamiltonian xmath49 to the effective problem xmath136 on the wilson chain xmath144 as described in sect sect logscale was accomplished in negligible computation time it is less expensive than a single iteration of the optimization of the scale invariant mera optimization of the impurity tensors xmath195 as discussed in sect sect optlog was performed for a range of impurity strengths namely the two series xmath284 and xmath285 which required approximately 20 minutes of computation time for each value of xmath71 evaluated from an impurity mera xmath286 s versus the exact solutions solid lines for the critical ising model with an impurity xmath287 as described eq s5e4 located on lattice sites xmath278 the magnetization approaches the bulk value xmath288 polynomially as xmath289 for all values of xmath71 consideredwidth321 for the critical ising model with a conformal defect xmath287 comparing results from the impurity mera xmath286 s with the exact results of eq s5e5 solid lines note that only scaling dimensions in the xmath290 parity sector of the xmath282 global symmetry of the ising model are plotted as those in the xmath291 parity sector are invariant under addition of the conformal defect b the complete spectrum of scaling dimensions obtained from the mera organized according to parity sector xmath292 for values of xmath293width321 from the optimized impurity mera we compute the magnetization profiles xmath294 as shown fig fig defectzmag which match the exact profiles obtained by solving the free fermion problem see ref with high precision for all defect strengths xmath71 considered the magnetization approaches the constant bulk value xmath295 as xmath289 ie with scaling dimension xmath296 this result consistent with the behavior of modular mera predicted in sect sect critmod is in agreement with the scaling of the magnetization xmath297 predicted from study of the ising cft where the xmath276 operator is related to the energy density operator xmath298 of the ising cft with scaling dimension xmath296 for each value of the impurity coupling xmath71 we also compute the scaling dimensions xmath299 associated to the impurity by 
diagonalizing the impurity scaling superoperator xmath90 as described sect sect critmod in refs the spectrum of scaling dimensions for the critical ising model associated to the impurity xmath287 have been derived analytically xmath300 where xmath301 is a positive integer and xmath302 is a phase associated to the strength of the impurity xmath71 xmath303 a comparison of the scaling dimensions obtained from mera and the exact scaling dimensions is presented in fig fig defectcritexp remarkably the impurity mera accurately reproduces the smallest scaling dimensions all scaling dimensions xmath304 for the full range of xmath71 considered which include the special cases of i an impurity that removes any interaction between the left and right halves of the chain xmath305 ii the case with no impurity xmath306 and iii an impurity which sets an infinitely strong ising interaction over two spins xmath307 these results confirm that the impurity mera accurately approximates the ground state of the impurity system both in terms of its local expectation values eg magnetization profile xmath294 and its long distance universal properties eg scaling dimensions xmath308 andxmath81 here separated by xmath309 lattice sites the causal cones of the individual impurities fuse at a depth xmath310 at small depth xmath311 the mera has two types of impurity tensor xmath312 and xmath313 one associated to each of the impurities at greater depth xmath314 the mera has one type of impurity tensor xmath315 associated to a fusion of the two impurities b an inhomogeneous coarse graining xmath139 defined from the bulk tensors maps the original two impurity hamiltonian xmath0 to an effective two impurity hamiltonian xmath136 a subsequent coarse graining xmath316 defined from the impurity tensors xmath312 and xmath313 maps xmath136 into an effective single impurity hamiltonian xmath317width321 next we consider a system with two impurities with hamiltonian xmath318 where xmath319 and xmath320 represent the distinct impurities located on separate local regions xmath80 and xmath81 of the lattice the two impurity mera for the ground state xmath321 of hamiltonian xmath322 is depicted in fig fig twodefectcga in this more complex modular merathe tensors have been modified within the causal cone xmath323 of the union of regions xmath80 and xmath81 for length scalesxmath324 where xmath93 is the distance separating the two regions xmath325 and xmath326 the causal cones xmath327 and xmath328 are distinct while for length scales xmath329 the causal cone have fused into a single cone thus for short length scales xmath324 there are two distinct types of impurity tensor tensors xmath312 associated to the impurity xmath40 and tensors xmath313 associated to the impurity xmath41 for longer length scales xmath329 there is a single type of impurity tensor xmath315 which is associated to the fusion of the two impurities xmath40 and xmath41 into a new impurity xmath330 the steps for optimizing the two impurity mera are as follows 1 optimize a scale invariant mera for the ground state xmath25 of the homogeneous host hamiltonian xmath0 to obtain tensors xmath331 step s3e1 2 optimize a single impurity mera for the single impurity hamiltonian xmath332 to obtain the impurity tensors xmath312 step s3e2 3 optimize a single impurity mera for the single impurity hamiltonian xmath333 to obtain the impurity tensors xmath313 step s3e3 4 map the original two impurity hamiltonian xmath322 of eq s5e7 to an effective single impurity hamiltonian xmath317 xmath334 
as depicted in fig fig twodefectcgb where xmath139 is an inhomogeneous coarse graining defined in terms of the bulk tensors and xmath316 is a coarse graining defined in terms of the impurity tensors xmath312 and xmath313 step s3e4 5 optimize a ttn for the effective single impurity problem xmath317 to obtain the impurity tensors xmath315 step s3e5 thus by exploiting minimal updates and the modular character of mera the two impurity problem is addressed by solving a sequence of three single impurity problems two single impurity problems for impurities xmath40 and xmath41 separately and a third single impurity problem for the effective impurity xmath330 that results from coarse graining together impurities xmath40 and xmath41 to test the validity of this approach we investigate the case where xmath77 in eq s5e7 is the critical ising model xmath335 of eq s5e3 and xmath336 and xmath337 are each defects of the form described in eq conformal field theory predictsxcite that when viewed at distances much larger than the separation xmath93 between the two impurities the two impurity ising model is equivalent to an ising model with a single impurity xmath330 with effective hamiltonian xmath338 the strength xmath339 of the fused impurity xmath330 relates to the strength xmath340 and xmath341 of the original impurities xmath40 and xmath41 according to xcite xmath342 where xmath343 is the phase associated to the defect as described by eq we employ the mera to test a special case of eq s5e12 in which we choose the weight of the second impurity as the inverse of the first xmath344 such that xmath345 is the unique solution to eq s5e12 in other words we test the case where the two impurities are predicted to fuse to identity ie no impurity at large distances obtained from the mera for the ground state of the critical ising model with two conformal impurities xmath346 and xmath347 see eq s5e4 with strengths xmath348 and xmath349 respectively which are located xmath350 lattice sites apart the magnetization profile when both impurities are present is represented with xmath351 s while the two solid lines each represent magnetization profiles when only one of the impurities is present b the spectra of scaling dimensions associated to the impurities xmath352 and xmath353 are the single impurity spectra for impurities of strength xmath348 and xmath349 respectively while xmath354 is the spectrum arising from the fusion of these conformal impurities it is seen that xmath354 matches the scaling dimensions of the bulk ie impurity free critical ising modelwidth321 we optimize the two impurity mera for the case xmath355 and xmath356 where the impurities are set a distance of xmath350 sites apart tensors xmath357 and the single impurity tensors xmath312 and xmath313 are recycled from the single impurity calculations of sect sect benchsingle thus the only additional work to address the two impurity problem provided the individual impurities have been previously addressed is to perform steps step s3e4 and step s3e5 above namely producing an effective single impurity hamiltonian xmath317 and then optimizing the impurity tensors xmath315 for the fused impurity xmath330 the scaling superoperator xmath358 associated to the fused impurity was diagonalized to obtain the scaling dimensions xmath354 associated to the fused impurity xmath330 these scaling dimensions together with the magnetization profile xmath359 of the two impurity system are plotted in fig fig twodefectcritexp it can be seen that the scaling dimensions xmath354 
reproduce the spectrum of scaling dimensions for the homogeneous ising model xcite as predicted by eq s5e12 thus indicating that the two impurity mera accurately captures the universal properties of the ground state the method outlined to address a two impurity problem can be easily generalized to the case of a system with any finite number of impurities the many impurity problem can likewise be reduced to a first sequence of single impurity problems that under fusion give rise to a second sequence of single impurity problems and so on next we benchmark the use of the modular mera to describe a quantum critical system in the presence of one boundary semi infinite chain and in the presence of two boundaries finite chain let us first consider a semi infinite lattice xmath360 with hamiltonian xmath0 xmath361 where the hamiltonian term xmath64 at site xmath362 describes the boundary and can be chosen so as to describe certain types of open boundary conditions such as fixed or free open boundary conditions and xmath363 is a nearest neighbor hamiltonian term such that the hamiltonian xmath364 represents the host system which is invariant under translations and under changes of scale the boundary mera for the ground state xmath62 of hamiltonian xmath65 as described sect sect boundmera was initially introduced and tested in ref here we shall both reproduce and expand upon the results in that paper a similar construction was proposed also in ref in order to optimize the boundary mera depicted in fig fig boundeffectivea which is fully characterized in terms of the tensors xmath365 for the homogeneous system and the tensors xmath3 for the boundary we follow the following steps 1 optimize tensors xmath17 by energy minimization of a mera for the homogeneous host system with hamiltonian xmath59 step s4e1 2 map the original boundary hamiltonian xmath65 to the effective boundary hamiltonian xmath136 on the wilson chain xmath142 xmath366 through the inhomogeneous coarse graining xmath139 as depicted in fig fig boundeffectiveb step s4e2 3 optimize the tensors xmath3 by energy minimization on the effective hamiltonian xmath136 step s4e3 these steps can be accomplished with only minor changes to the method presented in sect sect optmod of whichis described by a pair of bulk tensors xmath367 and a boundary tensor xmath195 the causal cone xmath368 of the boundary xmath369 which only contains boundary tensors xmath195 is shaded and the associated wilson chain xmath142 is indicated b an inhomogeneous coarse graining xmath139 defined in terms of the bulk tensors is used to map the original boundary hamiltonian xmath0 to an effective boundary hamiltonian xmath136 on the wilson chain xmath142width321 for the critical ising model with free and fixed bc obtained with a boundary mera the exact solution approaches the bulk value xmath370 as xmath371 right error in xmath372 for free bc similar to that for fixed bc the non vanishing expectation value of bulk scaling operators is accurately reproduced even thousands of sites away from the boundarywidth321 we consider two quantum critical models for the host hamiltonian xmath59 the critical ising model xmath373 of eq s5e3 and the quantum xx model xmath374 where xmath275 and xmath375 are pauli matrices the boundary condition at site xmath362 are set either as free boundary in which case xmath376 in eq s6e1 or fixed boundary xmath377 tensors xmath17 for the ising model can be recycled from the calculations of sect sect benchimpurity while for the quantum xx model they are 
obtained from a mera with xmath378 that exploits both reflection symmetry and a global xmath379 spin symmetry and required approximately 2 hours of optimization time on a 32 ghz desktop pc with 12 gb of ram optimization of the effective boundary problem xmath136 for the boundary tensors xmath3 required less than 10 minutes of computation time for each of the critical models under each of the boundary conditions tested fig zmag displays the magnetization profile xmath380 for the ising model with both free and fixed bc which are compared against the exact magnetization profiles obtained using the free fermion formalism xmath381 the optimized boundary mera accurately reproduces the effect of the boundary on the local magnetization even up to very large distances specifically the exact magnetization profile is reproduced within xmath382 accuracy up to distances of xmath383 sites from the boundary fig scaledim shows the boundary scaling dimensions xmath130 for critical ising and quantum xx models obtained by diagonalizing the scaling superoperator xmath98 associated to the boundary the boundary scaling dimensions obtained from the boundary mera also reproduce the known results from cft xcite with remarkable accuracy for the ising model the smallest scaling dimensions xmath384are reproduced with less than xmath385 error while for the quantum xx model xmath386 the error is less than xmath387 finally we analyze the boundary contribution xmath388 to the ground state energy xmath389 defined as the difference between the energy xmath390 of the semi infinite chain with the boundary term xmath64 eq s6e1 and one half of the ground state energy for the host hamiltonian on the infinite chain xmath391 eq s6e1b since both xmath390 and xmath391 are infinite quantities we can not compute xmath388 through the evaluation of the individual terms in eq s6e6 instead we estimate xmath388 by comparing the energy of the first xmath93 sites of the semi infinite chain to the energy of xmath93 sites of the infinite homogeneous system and increase the value of xmath93 until the energy difference is converged within some accuracy for the quantum ising model on a semi infinite latticewe obtain the following results for free bc a value xmath392 which is remarkably close to the exact solutionxcite xmath393 and for fixed boundary conditions a value xmath394 which based upon the exact solution for finite chains of over a thousand sites we estimate to carry an error of less than xmath395 let us now consider a finite lattice xmath43 made of xmath396 sites and with two boundaries with hamiltonian xmath397 where xmath398 and xmath399 at sites xmath362 and xmath400 describe the left and right boundaries respectively and the xmath363 is a nearest neighbor hamiltonian term as in eq s6e1b a two boundary mera for the ground state xmath401 of a finite chain with hamiltonian xmath402 is depicted in fig fig boundfinitea each layer of tensors consists of tensors xmath403 in the bulk and tensors xmath404 and xmath405 at the left and right boundaries respectively the two boundary mera is organized into a finite number xmath406 of layers and has an additional tensor xmath407 at the top the steps for optimizing this particular form of modular mera are as follows 1 optimize tensors xmath17 by energy minimization of a mera for the homogeneous infinite host system with hamiltonian xmath0 step s5e1 2 optimize the left boundary tensors xmath404 by energy minimization on an effective semi infinite single boundary problem with boundary term xmath408 
as described in sect sect benchsemi 3 optimize the right boundary tensors xmath405 by energy minimization on an effective semi infinite single boundary problem with boundary term xmath409 as described in sect sect benchsemi 4 coarse grain the original boundary problem xmath410 defined on the xmath396 site lattice xmath411 into an effective boundary problem xmath412 defined on the coarse grained lattice xmath413 xmath414 where each xmath415 is a layer of the two boundary mera as depicted in fig fig boundfiniteb step s5e3 5 compute the top tensor xmath407 through diagonalization of the effective hamiltonian xmath416 for its ground state or excited states step s5e4 (a minimal numerical sketch of this last step is given further below) in summary to treat a finite chain with open boundaries with the mera one should first address an infinite system then two semi infinite systems and finally a coarse grained version of the original hamiltonian which is reduced to a small number of sites the mera is organized into xmath417 layers where each layer xmath5 is described by a pair of bulk tensors xmath403 and left right boundary tensors xmath404 and xmath405 the boundary mera also has a top tensor xmath407 at the final level b the original boundary problem xmath418 defined on an xmath396 site lattice xmath419 can be mapped into an effective open boundary problem xmath420 defined on a xmath421 site lattice xmath422 through coarse graining with mera layers xmath423 and xmath424 see also eq s6e9 of xmath425 sites with different combinations of open boundary conditions the energy is expressed in units such that the gap between descendants is a multiple of unity all non equivalent combinations of open bc are considered the different open bc are xmath426 to test the validity of the two boundary mera for finite systems with open boundary conditions we investigate the low energy spectrum of the critical ising model under different fixed and free boundary conditions as defined in sect sect benchsemi we are able to recycle the tensors xmath17 for the homogeneous host system as well as the boundary tensors xmath404 and xmath405 obtained from the previous investigation of semi infinite ising chains in sect sect benchsemi thus we only need to perform steps step s5e3 and step s5e4 above we proceed by constructing the effective hamiltonians xmath416 for a two boundary mera with xmath427 total layers which equates to a total system size of xmath428 sites for all non equivalent combinations of boundary conditions there are four such non equivalent combinations free free, fixed up fixed down, fixed up fixed up, and free fixed the low energy spectra of the effective hamiltonians xmath416 are then computed with exact diagonalization based on the lanczos method these low energy spectra displayed in fig fig finitespect match the predictions from cft xcite to high precision these results indicate that the two boundary mera is not only a good ansatz for the ground states of finite systems with open boundary conditions but also for their low energy excited states furthermore only the top tensor xmath407 of the mera needs to be altered in order to describe different excited states next we benchmark the use of the modular mera to describe the interface between two or more quantum critical systems let us first consider the interface between two systems xmath40 and xmath41 described by an infinite lattice xmath360 with a hamiltonian of the form xmath429 where the hamiltonian term xmath430 couples the left and right semi infinite chains xmath431 and xmath432 xmath433. the remaining couplings in xmath429 are the nearest neighbor terms xmath434 and xmath435.
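before presenting results for this interface hamiltonian, here is the sketch promised above for step 5 of the two boundary algorithm the few site effective hamiltonian obtained after coarse graining is simply diagonalized, and each low energy eigenvector is a candidate top tensor in place of the actual coarse grained hamiltonian xmath416 we use a small open ising chain as a stand in, and dense diagonalization instead of the lanczos method used in the text; everything in the sketch is illustrative

    import numpy as np

    def open_ising_chain(n, g=1.0):
        # dense hamiltonian of a small open transverse-field ising chain; this is only
        # a stand-in for the few-site effective hamiltonian produced by coarse graining.
        sx = np.array([[0., 1.], [1., 0.]])
        sz = np.array([[1., 0.], [0., -1.]])
        def op(o, i):                      # operator o acting on site i of n sites
            return np.kron(np.kron(np.eye(2 ** i), o), np.eye(2 ** (n - i - 1)))
        h = sum(-op(sx, i) @ op(sx, i + 1) for i in range(n - 1))
        h = h + sum(-g * op(sz, i) for i in range(n))
        return h

    h_eff = open_ising_chain(10)
    evals, evecs = np.linalg.eigh(h_eff)   # for larger effective dimensions use lanczos
    top_tensors = evecs[:, :6]             # one candidate top tensor per low energy state
    gaps = evals[:6] - evals[0]
    print(np.round(gaps / gaps[1], 3))     # spectrum in units of the first gap

in the actual calculation the matrix being diagonalized is the coarse grained hamiltonian, whose dimension is set by the bond dimension of the mera rather than by the physical system size, which is why only the top tensor needs to change between the different excited states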
and the nearest neighbor terms xmath434 and xmath435 are such that on an infinite lattice the hamiltonians xmath436 describe homogeneous quantum critical host systems that are invariant under translations and changes of scale the interface mera for the ground state xmath87 of hamiltonian xmath73 depicted in fig fig interfaceeffectivea is made of the following tensors two sets of tensors xmath437 and xmath438 corresponding to the mera for the ground state of the host hamiltonians xmath439 and xmath440 respectively and the interface tensors xmath3 optimization of the interface mera can be accomplished through a straightforward generalization of the approach described in sect sect optmod for an impurity the only differences here are that one needs to address first two different homogeneous systems and that the coarse graining of xmath73 into the effective hamiltonian xmath136 on the wilson chain xmath142 see fig fig interfaceeffectiveb uses one set of host tensors xmath437 on the left and the other xmath438 on the right supported on semi infinite chain xmath431 with a different critical system xmath69 supported on semi infinite chain xmath432 each layer xmath5 of the interface mera is described by a pair of tensors xmath437 associated to host system xmath40 a pair of tensors xmath438 associated to host system xmath41 and an interface tensor xmath3 which resides in the causal cone xmath27 of the interface region xmath24 the wilson chain xmath142 associated to the interface xmath24 is indicated b the inhomogeneous coarse graining xmath139 defined in terms of the host tensors xmath441 and xmath442 maps the original interface hamiltonian xmath0 to an effective interface hamiltonian xmath136 defined on the wilson chain xmath142 magnetization profile as defined in eq s7e3 of the interface between a quantum xx model chain on sites xmath443 and the critical ising chain on sites xmath444 coupled across the interface xmath278 the parameter xmath71 relates to the strength of the interface coupling in all cases the magnetization decays to the bulk value xmath445 for quantum xx and xmath446 for ising as xmath447 we test the validity of the interface mera by choosing as quantum critical systems xmath40 and xmath41 the quantum xx model in eq s6e4 and the critical ising model in eq s5e3 respectively and as the coupling at the interface the two site term xmath448 for several values of xmath449 the tensors xmath450 for the quantum xx model and xmath451 for the ising model are recycled from previous computations in sect sect benchbound thus the only additional work required is to produce the effective interface hamiltonian xmath136 and then to optimize the interface tensors xmath3 by energy minimization over xmath136 the latter undertaken on a 32 ghz desktop pc with 12 gb of ram required only approximately 20 minutes of computation time for every value of xmath71 fig fig interfacemag plots the magnetization profile xmath452 xmath453 obtained from the optimized interface mera associated to the interface of the quantum xx and ising model as a function of the coupling strength xmath71 across the interface b the scaling dimensions for the interface with no coupling xmath72 which take on integer and half integer values are seen to be the product of the boundary scaling dimensions for quantum xx and ising models with free bc c under interaction strength xmath454 much of the degeneracy of the xmath72 case is lifted yet the scaling dimensions remain organized in conformal towers
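the quoted decay of the magnetization towards its bulk value can be quantified with a simple log log fit; the sketch below is a stand in rather than the actual analysis, using synthetic data with a known exponent in place of a profile measured from the optimized interface mera and a placeholder bulk value

import numpy as np

# synthetic stand in for a magnetization profile m(r); in practice m(r) would be
# measured from the optimized interface mera and m_bulk is the homogeneous bulk value
m_bulk = 0.6366                      # placeholder number, not the paper's value
r = np.array([4., 8., 16., 32., 64., 128., 256.])
m = m_bulk + 0.3 * r ** (-0.5)       # synthetic polynomial approach, exponent 0.5

# fit log|m(r) - m_bulk| = c - p log r to estimate the decay exponent p
slope, _ = np.polyfit(np.log(r), np.log(np.abs(m - m_bulk)), 1)
print('estimated decay exponent:', -slope)    # ~0.5 for this synthetic data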
for xmath72 in eq s7e2 that is two decoupled semi infinite chains we indeed recover the magnetization profiles for the semi infinite quantum xx chain and the semi infinite ising chain with a free boundary as expected for xmath75 the quantum xx chain acquires a non zero magnetization near the interface and the magnetization of the ising chain near the interface is reduced with respect to the case xmath72 however away from the interface the magnetizations still decay polynomially to their values for a homogeneous system xmath445 for the quantum xx model and xmath446 for the critical ising model we also computed the scaling dimensions xmath130 associated to the interface as plotted in fig fig interfacecritexp through diagonalization of the scaling superoperator xmath90 associated to the interface the exact scaling dimensions are only known to us for the case of interface strength xmath72 the decoupled case where one would expect the spectrum of scaling dimensions to be the product of spectra for the open boundary ising and open boundary quantum xx models on semi infinite chains see fig fig scaledim the numerical results of fig fig interfacecritexp match this prediction for xmath75 we no longer have exact scaling dimensions to compare with however we see that these are still organized in conformal towers where the scaling dimensions for descendant fields differ by an integer from the scaling dimensions of the corresponding primary fields xcite and where the scaling dimensions of the primary fields depend on xmath71 this is a strong indication that the results from the interface mera are correct interestingly those scaling dimensions that correspond to an integer value for xmath72 remain unchanged for xmath75 up to small numerical errors these are likely to be protected by a symmetry the interface hamiltonian has a global xmath282 spin flip symmetry similar to the case of the critical ising impurity model described in sect sect benchimpurity see eq s7e1b b under action of the inhomogeneous coarse graining xmath139 the hamiltonian xmath0 is mapped to an effective y interface hamiltonian xmath136 on the wilson chain c the y interface tensors xmath195 which form a peculiar tree tensor network on the wilson chain are obtained through optimization of the effective hamiltonian xmath136 let us now consider a y interface also called y junction between three systems as described by a lattice xmath360 made of the union of three semi infinite lattices xmath431 xmath432 and xmath455 xmath456 see fig fig ymeraa with hamiltonian xmath457 + \sum_{r=1}^{\infty} h(b_r, b_{r+1}) + \sum_{r=1}^{\infty} h(c_r, c_{r+1}) eq s7e1b here we use xmath458 xmath459 and xmath460 to denote site xmath161 of lattices xmath431 xmath432 and xmath455 respectively the term xmath461 describes the coupling between the three semi infinite chains xmath431 xmath432 and xmath455 whereas the nearest neighbor terms xmath434 xmath435 and xmath462 are such that on an infinite lattice the hamiltonians xmath463 describe homogeneous quantum critical host systems that are invariant under translations and changes of scale the y interface mera for the ground state xmath464 of hamiltonian xmath465 is a straightforward generalization of the interface mera considered in sect sect benchtwo it is characterized by three sets of tensors xmath450 xmath451 and xmath466 that describe the mera for the ground states of the host hamiltonians xmath467 xmath468 and xmath469 and a set of tensors xmath3 at the y interface
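when the coupling at an interface is switched off the scaling operators factorize into one boundary scaling operator per decoupled chain, so the interface scaling dimensions are sums of boundary dimensions, pairwise sums for the two chain interface discussed above and triple sums for the y interface considered next; the short sketch below makes this bookkeeping explicit with placeholder boundary spectra, which are assumed values for illustration and not the spectra of fig scaledim

from itertools import product

# placeholder boundary spectra standing in for the free bc spectra of fig scaledim
ising_free = [0.0, 0.5, 1.0, 1.5, 2.0]       # assumed values, illustration only
xx_free = [0.0, 0.25, 1.0, 1.25, 2.0]        # assumed values, illustration only

def decoupled_interface_dims(spectra, n_smallest=10):
    # dimensions of product operators add, one term per decoupled chain
    return sorted(sum(combo) for combo in product(*spectra))[:n_smallest]

print(decoupled_interface_dims([ising_free, xx_free]))                   # two chain interface
print(decoupled_interface_dims([ising_free, ising_free, ising_free]))    # y interface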
upon optimizing tensors xmath450 xmath451 and xmath466 in three independent optimizations they are used to map the initial y interface hamiltonian xmath465 to an effective hamiltonian xmath136 see fig fig ymerab now by employing three copies of the mapping depicted in fig fig logscaleb the y interface tensors xmath3 which are arranged in the ttn structure depicted in fig fig ymerac are then optimized to minimize the energy according to the effective hamiltonian xmath136 using the approach described in sect sect optlog scaling dimensions obtained for the y interface of three ising chains with xmath71 the strength of the coupling at the y interface the scaling dimensions are organized according to parity sectors xmath292 of the global xmath282 symmetry of the ising model left for the case of xmath72 ie no coupling between different chains the spectrum is seen to be a product of three times the spectrum of the free boundary ising chain see fig fig scaledima where some numeric error is evident for the larger xmath470 scaling dimensions right the cases of coupling strength xmath471 all converge to the same spectrum which is symmetric between the xmath292 parity sectors we benchmark the y interface mera for an interface of three identical semi infinite chains where each of the chains is a critical ising model as defined in eq s5e3 and the interface coupling is given by xmath472 eq s7e10 where the pauli operators xmath473 xmath474 and xmath475 act on the first site of the semi infinite lattices xmath431 xmath432 and xmath455 respectively once again tensors xmath450 xmath451 and xmath466 for the critical ising model are recycled from previous calculations we optimize the y interface tensors xmath3 by minimizing the energy of the effective hamiltonian xmath136 for interface coupling strengths xmath476 for each value of xmath71 we compute the spectrum of scaling dimensions xmath130 associated to the interface by the usual diagonalization of the corresponding scaling superoperator the results are plotted in fig fig ychaincritexp for xmath72 which corresponds to three uncoupled semi infinite ising chains with free boundary conditions the spectrum of scaling dimensions obtained from the y interface mera is seen to be indeed the product of three copies of the spectrum of scaling dimensions for the free bc ising model see fig fig scaledim as expected for all non zero interface couplings xmath75 the scaling dimensions converged to an identical spectrum independent of xmath71 with smaller values of xmath71 however requiring more transitional layers xmath206 to reach the fixed point indicating an rg flow to the strong coupling or large xmath71 limit indeed choosing a very large coupling strength xmath477 reproduces the same spectrum of scaling dimensions with only xmath478 transitional layers required notice that the spectrum obtained for xmath75 which is identical between xmath292 parity sectors of the xmath282 symmetry of the ising model is somewhat similar to that in fig fig defectcritexpb for the ising chain with an infinitely strong bond impurity xmath479 between two sites in this manuscript we have built on the theory of minimal updates in holography proposed in ref and have argued that a recursive use of the conjectured minimal updates leads to the modular mera a surprisingly simple ansatz to describe the ground state of a quantum critical system with defects such as impurities boundaries and interfaces we have then provided compelling numerical evidence that the modular mera is capable of accurately describing these ground states by considering a large list of examples notice that the modular mera is at its core a concatenation of
two conjectures regarding the structure of the ground state wave function of quantum critical systems the first conjecture embodied in the specific of tensors of the mera is that the ground state of a quantum critical system contains entanglement that can be removed by means of unitary transformations disentanglers acting locally on each length scale xcite the second conjecture the theory of minimal updates xcite is that in order to account for a change of the hamiltonian in region xmath24 only the tensors inside the causal cone xmath27 of region xmath24 need to be modified the results in this paper provide evidence that these two conjectures are correct and thus teach us about the structure of the ground state wave function the modular mera is characterized by a small number of unique tensors that is independent of the system size xmath396 similarly the computational cost of the optimization algorithms is also independent of the system size as a result the effects of local defects in an otherwise homogeneous system can be studied directly in the thermodynamic limit avoiding finite size effects when extracting the universal properties of defects furthermore modularity has the useful implication that tensors can be recycled from one problem to another for instance the same tensors xmath17 for the homogeneous critical ising model were used in sect sect benchimpurity for impurity problems in sect sect benchbound for boundary problems and in sect sect benchinterface for interface problems similarly the impurity tensors xmath3 obtained from a single impurity problem in sect sect benchsingle were later reused in a multiple impurity problem in sect sect benchmultiple in this manuscriptwe have assumed for simplicity that the quantum critical host system was described by a homogeneous hamiltonian xmath0 that was a fixed point of the rg flow and exploited translation and scale invariance to obtain a mera for its ground state xmath25 that was fully characterized in terms of just one single pair of tensors xmath17 this had the advantage that a finite number of variational parameters encoded in the pair xmath17 was sufficient to completely describe an infinite system however the theory of minimal updates does not require scale or translation invariance let us first remove the assumption that the host system is a fixed point of the rg flow in this case each layer of tensors of the mera corresponding to a different length scale xmath153 will be described by a different pair xmath480 assuming that after some finite scale xmath191 the system can effectively be considered to have reached an rg fixed point characterized by fixed point tensors xmath17 we still obtain a finite description of the ground state of an infinite system in terms of the tensors xmath481 and xmath17 the effect of a defect on a region xmath24 can still be accounted for by a modular mera where the tensors in the causal cone xmath27 are modified again by energy minimization over the wilson hamiltonian xmath136 described in sect sect logscale however in this case xmath136 will not have the simple form of eq eq ad9 but instead will consist of xmath153dependent terms xmath482 for xmath483 after which all its terms will be proportional to some coupling xmath154 this case was briefly mentioned in sect sect optmod let us now also remove the assumption of translation invariance in the host system then the mera for the ground state xmath25 of the host hamiltonian xmath0 requires tensors xmath484 that depend both on the scale xmath153 and position 
xmath161 in this casethe mera for xmath25 depends on a number of tensors that grows linearly in the system size in the presence of a defect added to the host hamiltonian xmath0 we can still obtain a modular mera for the system with the defect by applying a minimal update to the mera for xmath25 however in this case we can not take the thermodynamic limit although in this manuscript we focused in exploring modularity in xmath10 spatial dimension the theory of minimal updates as proposed in ref applies to any spatial dimension xmath15 and thus the modular mera can be also used in systems in xmath39 dimensions the algorithms we presented here can be easily generalized to study eg a system in xmath485 dimensions with an impurity in xmath486 dimensions following the outline described in sect sect optmod here one would first optimize the mera for the impurity free homogeneous system and then re optimize the tensors within the causal cone of the impurity notice that since the causal cone of the impurity is a one dimensional structure one would build an effective system wilson chain which is again a semi infinite chain as in the xmath10 case instead the study of a boundary or of an interface in xmath485 dimensions requires the study of a more complex xmath485 effective hamiltonian where one dimension corresponds to the extension of the boundary and the other corresponds to scale the authors acknowledge kouichi okunishi for helpful discussions regarding wilson s solution to the kondo problem and helpful input from masaki oshikawa regarding the two impurity ising model support from the australian research council apa ff0668731 dp0878830 is acknowledged is supported by the sherman fairchild foundation this research was supported in part by perimeter institute for theoretical physics research at perimeter institute is supported by the government of canada through industry canada and by the province of ontario through the ministry of research and innovation 99 g evenbly and g vidal arxiv13070831 2013 g vidal phys lett 99 220405 2007 for a review of the renormalization groupsee me fisher rev mod phys 70 653 1998 g evenbly and g vidal phys rev b 81 235102 2010 l cincio j dziarmaga and m m rams phys 100 240603 2008 g evenbly and g vidal new j phys 12 025007 2010 g evenbly and g vidal phys rev b 79 144108 2009 rnc pfeifer g evenbly and g vidal phys a 79 040301r 2009 s montangero m rizzi v giovannetti and r fazio phys b 80 113103 2009 g evenbly and g vidal phys lett 102 180406 2009 g evenbly r n c pfeifer v pico s iblisdir l tagliacozzo i p mcculloch and g vidal phys b 82 161107r 2010 p silvi v giovannetti p calabrese g e santoro and r fazio j stat l03001 2010 g vidal in understanding quantum phase transitions edited by l d carr taylor xmath487 francis boca raton 2010 g evenbly p corboz and g vidal phys rev b 82 132411 2010 g evenbly and g vidal chapter 4 in strongly correlated systems numerical methods edited by a avella and f mancini springer series in solid state sciences vol 176 2013 arxiv11095334 m aguado and g vidal phys rev 100 070404 2008 r koenig b w reichardt and guifre vidal phys b 79 195123 2009 m aguado annals of physics volume 326 issue 9 pages 2444 2473 2011 l tagliacozzo and g vidal physrevb 83 115127 2011 o buerschaper j m mombelli m christandl and miguel aguado j math 54 012201 2013 j haah arxiv13104507 2013 h chang y hsieh and y kao arxiv13052663 2013 g evenbly and g vidal phys lett 104 187203 2010 k harada phys b 86 184421 2012 j lou t suzuki k harada and n kawashima arxiv12121999 2012 p 
corboz g evenbly f verstraete and g vidal phys a 81 010303r 2010 c pineda t barthel and j eisert phys a 81 050303r 2010 p corboz and g vidal phys b 80 165129 2009 r n c pfeifer p corboz o buerschaper m aguado m troyer and g vidal physical review b 82 115126 2010 r koenig and e bilgin phys b 82 125118 2010 g vidal phys 101 110501 2008 without the action of disentanglers xmath6 short range entanglement is preserved under coarse graining leading to an effective description that still contains some of the original small scale degrees of freedom as a result the coarse graining transformation is not a proper realization of the rg indeed two many body systems that differ in irrelevant short range details but behave identically at low energies that is two many body systems that flow to the same fixed point of the rg will flow to different fixed points of the coarse graining transformation because after being coarse grained they still retain small scale details that reveal their origin j i cirac and f verstraete j phys a math theor 42 504004 2009 g evenbly and g vidal j stat 2011 145891 918 p di francesco p mathieu and d senechal conformal field theory springer 1997 m henkel conformal invariance and critical phenomena springer 1999 cardy nucl b275 200 1986 j cardy arxiv hept th0411189v2 2008 wilson rev phys 47 773 1975 y shi l m duan and g vidal phys a 74 022320 2006 l tagliacozzo g evenbly and g vidal phys b 80 235127 2009 v murg f verstraete o legeza and r m noack phys rev b 82 205105 2010 m fannes b nachtergaele and r f werner commun math phys 144 443 1992 s ostlund and s rommer phys 75 3537 1995 s rommer and s ostlund phys b 55 2164 1997 u schollwoeck ann of phys 326 96 2011 wilson adv volume 16 issue 2 pages 170 186 1975 r v bariev sov jetp 50 613 1979 b m mccoy and j h h perk phys 44 840 1980 l p kadanoff phys b 24 5382 1981 a c brown phys b 25 331 1982 l turban j phys a 18 l325 1985 l g guimaraes and j r drugowich de felicio j phys a 19 l3411986 m henkel and a patkos nucl b 285 29 1987 g g cabrera and r julien phys rev b 35 7062 1987 m henkel a patkos and m schlottmann nucl b 314 609 1989 m oshikawa and i affleck nucl phys b 495533 582 1997 m oshikawa private communicationthis appendix contains a brief introduction to entanglement renormalization and the mera focusing mostly on a system that is both translation invariant and scale invariant we start by reviewing the basic properties of entanglement renormalization and the mera in a finite one dimensional lattice xmath43 made of xmath396 sites where each site is described by a hilbert space xmath488 of finite dimension xmath489 let us consider a coarse graining transformation xmath5 that maps blocks of three sites in xmath360 to single sites in a coarser lattice xmath490 made of xmath491 sites where each site in xmath490 is described by a vector space xmath492 of dimension xmath493 with xmath494 see fig fig hamintroa specifically we consider a transformation xmath5 that decomposes into the product of local transformations known as disentanglers xmath6 and isometries xmath7 disentangles xmath6 are unitary transformations that act across the boundaries between blocks in xmath360 xmath495 where xmath496 is identity on xmath488 while isometries xmath7 implement an isometric mapping of a block of three sites in xmath360 to a single site in xmath490 xmath497 where xmath498 is the identity operator on xmath492 the isometric constraints on disentanglers xmath6 and isometries xmath7 are expressed pictorially in fig fig hamintrob based on entanglement 
renormalization maps a lattice xmath360 made of xmath396 sites into a coarse grained lattice xmath490 made of xmath491 sites b the isometries xmath7 and disentanglers xmath6 that constitute the coarse graining transformation xmath5 are constrained to be isometric see also eqs eq b1 and eq b2 c an operator xmath499 supported on a local region xmath500 made of two contiguous sites is coarse grained to a new local operator xmath501 supported on a local region xmath502 made also of two contiguous sites d a nearest neighbor hamiltonian xmath503 is coarse grained to a nearest neighbor hamiltonian xmath504 e the left center and right ascending superoperators xmath505 xmath506 and xmath507 can be used to compute the new coupling xmath508 from the initial coupling xmath45 see also eq eq b5width321 an important property of the coarse graining transformation xmath5 is that by construction it preserves locality let xmath499 be a local operator defined on a region xmath24 of two contiguous sites of lattice xmath43 this operator transforms under coarse graining as xmath509 where the new operator xmath501 is supported on a region xmath510 of two contiguous sites in lattice xmath511 see fig fig hamintroc the coarse grained operator xmath501 remains local due to the specific way in which transformation xmath5 decomposes into local isometric tensors xmath6 and xmath7 indeed in xmath512 most tensors in xmath5 annihilate to identity with their conjugates in xmath513 the causal cone xmath27 of a region xmath24 is defined as to include precisely those tensors that do not annihilate to identity when coarse graining an operator supported on xmath24 and it thus tracks how region xmath24 itself evolves under coarse graining in particular a local hamiltonian xmath0 on xmath360 will be coarse grained into a local hamiltonian xmath8 on xmath490 xmath514 see fig fig hamintrod the local coupling xmath508 of the coarse grained hamiltonian xmath8 can be computed by applying the left center right ascending superoperators xmath515 xmath516 and xmath517 to the coupling xmath45 of the initial hamiltonian xmath518 see fig fig hamintroe the coarse graining transformation xmath5 can be repeated xmath519 times to obtain a sequence of local hamiltonians xmath520 where each of the local hamiltonian xmath521 is defined on a coarse grained lattice xmath522 of xmath523 sites notice the use of subscripts to denote the level of coarse graining with the initial lattice xmath524 and hamiltonian xmath525 the final coarse grained hamiltonian xmath416 in this sequence which is defined on a lattice xmath526 of xmath527 sites can be exactly diagonalized so as to determine its ground state xmath528 as a linear isometric map each transformation xmath415 can also be used to fine grain a quantum state xmath529 defined on xmath530 into a new quantum state xmath531 defined on xmath532 xmath533 thus a quantum state xmath534 defined on the initial lattice xmath419 can be obtained by fine graining state xmath528 with the transformations xmath415 as xmath535 if each of the transformations xmath415 has been chosen as to properly preserve the low energy subspace of the hamiltonian xmath536 such that xmath521 is a low energy effective hamiltonian for xmath536 then xmath537 is a representation of the ground state of the initial hamiltonian xmath538 more generally the multi scale entanglement renormalization ansatz mera is the class of states that can be represented as eq eq b7 for some choice of xmath539 and xmath528 for a generic choice of local hilbert 
space dimensions xmath540 where xmath541 only a subset of all states of lattice xmath43 can be represented in eq eq b7 whereas the choice xmath542 allows for a computationally inefficient representation of any state of the lattice we now move to discussing the mera for a quantum critical system that is both scale invariant and translation invariant we describe how universal information of the quantum critical point can be evaluated by characterizing the scaling operators and their scaling dimensions we also review the power law scaling of two point correlators in this appendix fixed point objects eg xmath5 xmath0 xmath17 etc are denoted with a star superscript as xmath543 xmath544 xmath545 etc whereas in the main text of this manuscript we did not use a star superscript to ease the notation let xmath546 be an infinite lattice and let xmath538 denote a translation invariant quantum critical hamiltonian we assume that this hamiltonian tends to a fixed point of the rg flow of eq eq b6 such that all coarse grained hamiltonians xmath521 are proportionate to a fixed point hamiltonian xmath544 for some sufficiently large xmath153 specifically the coarse grained hamiltonians in the scale invariant regime are related as xmath547 where xmath548 with xmath549 is the dynamic critical exponent of the hamiltonian ie xmath550 for a lorentz invariant quantum critical point equivalently the local couplings that define that hamiltonians are related as xmath551 for concreteness let us assume that the initial hamiltonian xmath538 reaches the scale invariant lorentz invariant fixed point after xmath147 coarse grainings such that its rg flow can be written xmath552 where xmath543 represents the scale invariant coarse graining transformation for xmath544 in this case the ground state xmath553 of the hamiltonian xmath538 can be represented by the infinite sequence of coarse graining transformations xmath554 see fig fig scaleintro the class of states that can be represented as eq eq b9 are called scale invariant mera the scale dependent transformations before scale invariance herexmath423 and xmath424 correspond to transitional layers of the mera these are important to diminish the effect of any rg irrelevant terms potentially present in the initial hamiltonian which break scale invariance at short distances in general the number xmath191 of transitional layers required will depend on the specific critical hamiltonian under consideration strictly speaking scale invariance is generically only attained after infinitely many transitional layers but in practice a finite number xmath191 of them often offers already a very good approximation we call the fixed point coarse graining transformation xmath543 scale invariant notice that the scale invariant mera which describes a quantum state on an infinite lattice is defined in terms of a small number of unique tensors each transitional map xmath415 is described by a pair of tensors xmath555 and the scale invariant map xmath556 is described by the pair xmath557 of transitional layers with coarse graining maps xmath558 here xmath559 followed by an infinite sequence of scaling layers with a scale invariant map xmath543 b each xmath415 of the scale invariant mera is a coarse graining transformation composed of local tensors xmath555width321 we now discuss how scaling operators and their scaling dimensions can be evaluated from the scale invariant mera this is covered in more detail in eg refs for simplicity let us consider a scale invariant mera with no transitional layers 
that is composed of an infinite sequence of a scale invariant map xmath543 described by a single pair xmath545 as shown in fig fig twocorra a one site operator xmath96 placed on certain lattice sites is coarse grained under the action of layer xmath543 into new one site operator xmath88 this coarse graining is implemented with the one site scaling superoperator xmath100 xmath560 where xmath100 is defined in terms of the isometry xmath561 and its conjugate see also fig fig twocorrb the one site scaling operators xmath562 are defined as those operators that transform covariantly under action of xmath100 xmath563 where xmath564 is the scaling dimension of scaling operator xmath562 as is customary in rg analysis the scaling operators xmath562 and their scaling dimensions xmath564 can be obtained through diagonalization of the scaling superoperator xmath565 one can obtain explicit expressions for two point correlation functions of the scale invariant mera based upon their scaling operators as we now describe let us suppose that two scaling operators xmath562 and xmath566 are placed on special sites xmath161 and xmath567 that are at a distance of xmath568 sites apart for positive integer xmath569 as shown in fig fig twocorrc the correlator xmath570 can be evaluated by coarse graining the scaling operators until they occupy adjacent sites where the expectation value xmath571 can then be evaluated with the local two site density matrix xmath572 which is the same at every level of the mera due to scale invariance for each level of coarse graining applied to the scaling operators xmath562 and xmath566 we pick up a factor of the eigenvalues of the scaling operators as described eq eq b10b and the distance xmath93 between the scaling operators shrinks by a factor of 3 see fig fig twocorrc which leads to the relation xmath573 notice that the scaling operators are coarse grained onto adjacent sites after xmath574 levels thus through iteration of eq eq b11 we have xmath575 where constant xmath576 is the expectation value of the correlators evaluated on adjacent sites xmath577 thus it is seen that the correlator of two scaling operators xmath562 and xmath566 scales polynomially in the distance between the operators with an exponent that is the sum of their corresponding scaling dimensions xmath20 and xmath122 in agreement with predictions from cft xcite notice that eq eq b12 was derived from structural considerations of the mera alone and as such holds regardless of how the tensors in the scale invariant mera have been optimized this argument is only valid for the chosen special locations xmath161 and xmath567 for a generic pair of locations the polynomial decay of correlationsmay only be obtained after proper optimization for instance via energy minimization of the mera so as to approximate the ground state of a translation invariant quantum critical hamiltonian xmath0 which are defined in terms of a single pair of tensors xmath578 a one site operator xmath96 is coarse grained into new one site operators xmath88 and xmath89 b the scaling superoperator xmath100 acts covariantly upon scaling operators xmath19 see also eq eq b10b c two scaling operators xmath19 and xmath579 that are separated by xmath93 lattice sites are coarse grained onto neighboring sites after xmath580 maps xmath556width321 which involves spatial permutation of indices as well as enacting a unitary matrix xmath581 on each index b the definition of reflection symmetry for a disentangler xmath6width188 in this appendix we describe how 
symmetry under spatial reflection can be exactly enforced into the mera this is done by directly incorporating reflection symmetry in each of the tensors of the mera note that an equivalent approach dubbed inversion symmetric mera was recently proposed in ref such a step was found to be key in applications of the modular mera to quantum critical systems with a defect as considered in sect sect bench indeed we found that in order for the modular mera to be an accurate representation of the ground state of a quantum critical system with a defect the homogeneous system that is the system in the absence of the defect had to be addressed with a reflection invariant mera let us describe how the individual tensors of the mera namely the isometries xmath7 and disentanglers xmath6 can be chosen to be reflection symmetric ie xmath582 see fig fig refsym herexmath583 is a superoperator that denotes spatial reflection which squares to the identity the spatial reflection on a tensor involves permutation of its indices as well as a reflection within each index as enacted by a unitary matrix xmath581 such that xmath584 the latter is needed because each index of the tensor effectively represents several sites of the original system which also need to be reflected permuted matrix xmath581 has eigenvalues xmath292 corresponding to reflection symmetric and reflection antisymmetric states respectively it is convenient though not always necessary to work within a basis such that each xmath489dimensional index xmath585 decomposes as xmath586 where xmath587 labels the parity xmath588 for even parity and xmath290 for odd parity and xmath589 labels the distinct values of xmath585 with parity xmath587 in such a basis xmath581 is diagonal with the diagonal entries corresponding to the eigenvalues xmath292 let us turn our attention to the question of how reflection symmetry as described in eq s8e1 can be imposed on the mera tensors for concreteness we consider an isometry xmath7 analogous considerations apply to a disentangler notice that we can not just symmetrize xmath7 under reflections directly xmath590 because the new reflection symmetric tensor xmath591 will no longer be isometric instead we can include an additional step in the optimization algorithm that symmetrizes the environment of the tensors before each tensor is updated in the optimization of the mera xcite in order to update an isometry xmath7 one first computes its linearized environment xmath592 now to obtain an updated isometry that is reflection symmetric we first symmetrize its environment xmath593 in this way we ensure that the updated isometry xmath591 which is obtained through a svd of xmath594 see ref is reflection symmetric yet also retains its isometric character likewise the environments xmath595 of disentanglersxmath6 should also be symmetrized from the ternary mera which coarse grains three xmath489dimensional lattice sites into a single xmath489 dimensional lattice site is decomposed into upper and lower binary isometries xmath596 and xmath597 the index connecting the upper and lower binary isometries is chosen at an independent dimension xmath598 b the upper and lower binary isometries xmath596 and xmath597 should be chosen to maximize their overlap with the ternary isometry xmath7 against the one site density matrix xmath572 see eq s9e1width321 in the formulation of modular mera described in sect sect modularity it was convenient to decompose some of the isometries xmath7 of the mera used to describe the homogeneous system into pairs of 
upper and lower isometries xmath596 and xmath597 as depicted in fig fig isosplita in this section we discuss how this can be accomplished let xmath489 denote the bond dimension of the indices of the isometry xmath7 and let xmath493 denote the dimension of the index connecting the upper and lower isometries xmath596 and xmath597 since this index effectively represents two sites with bond dimension xmath489 we have that the isometric character of xmath54 requires xmath599 we should perform this decomposition such that it does not change the quantum state described by the mera perhaps to within some very small error therefore the best choice of upper xmath596 and lower xmath597 isometries follows from maximizing their overlap with the isometry xmath7 against the one site density matrix xmath572 that is we choose them such that they maximize xmath600 see fig fig isosplitb given the density matrix xmath572 and isometry xmath7 one can obtain xmath54 and xmath55 by iteratively maximizing the above trace over each of the two tensors one at a time ideally we would like the decomposition of xmath7 into the product of xmath54 and xmath55 to be exact that is such that xmath601 this is typically only possible for xmath602 however in practice we find that for a choice of bond dimension xmath598 between one and two times the dimension xmath489 ie xmath603 the above trace is already xmath604 with xmath605 negligibly small the use of a xmath493 smaller than xmath606 results in a reduction of computational costs
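both the reflection symmetric updates of the previous appendix and the alternating optimization of xmath596 and xmath597 described here can be reduced to one elementary step: given a linearized environment for an isometric tensor, reshaped into a matrix, the isometry with the largest real trace overlap against that environment follows from its singular value decomposition; the sketch below shows only this primitive, with a random matrix standing in for a real environment, and the sign and ordering conventions of the actual optimization algorithms may differ

import numpy as np

def best_isometry(E):
    # the isometry W maximizing re tr(W^dag E) is U V^dag, where E = U S V^dag
    U, _, Vh = np.linalg.svd(E, full_matrices=False)
    return U @ Vh

rng = np.random.default_rng(0)
E = rng.normal(size=(8, 3))        # stand in environment, e.g. a 3-to-1 isometry with d=2
W = best_isometry(E)
print(np.allclose(W.conj().T @ W, np.eye(3)))   # isometric constraint W^dag W = 1
Q, _ = np.linalg.qr(rng.normal(size=(8, 3)))    # some other isometry for comparison
print(np.trace(W.T @ E) >= np.trace(Q.T @ E))   # no other isometry gives a larger overlap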
we propose algorithms based on the multi scale entanglement renormalization ansatz to obtain the ground state of quantum critical systems in the presence of boundaries impurities or interfaces by exploiting the theory of minimal updates ref g evenbly and g vidal arxiv13070831 the ground state is completely characterized in terms of a number of variational parameters that is independent of the system size even though the presence of a boundary an impurity or an interface explicitly breaks the translation invariance of the host system similarly computational costs do not scale with the system size allowing the thermodynamic limit to be studied directly and thus avoiding finite size effects eg when extracting the universal properties of the critical system
introduction modular mera optimization of modular mera benchmark results conclusions introduction to mera reflection symmetry decomposition of isometries
hadrons are the bound states of the strong interaction which is described by the quantum chromodynamics qcd in the framework of yang mills gauge theory one of the main goals of the hadron physics is to understand the composition of hadrons in terms of quarks and gluons the quark model is proved successful in classifying the mesons and baryons as xmath5 and xmath6 composite systems almost all the well established mesons can be described as a quark antiquark state except some mesons with exotic quantum numbers which are impossible for a xmath5 system but no experimental evidence is reported for exotic baryons which are inconsistent with the xmath6 configuration until the beginning of this century theoretically the qcd does not forbid the existence of the hadrons with the other configurations such as the glueballs the hybrids and the multiquarks in this review we focus on the pentaquarks if the pentaquark really exists it will provide a new stage to test the qcd in the nonperturbative region and to investigate the properties of the strong interaction in the quark model language the pentaquark is a hadron state with four valence quarks and one valence antiquark as xmath7 xcite because the pentaquark can decay to a three quark baryon and a quark antiquark meson its width was suggested to be wide xcite but it was predicted to have a narrow width due to its particular quark structure xcite in principle any baryon may have the five quark contents and experiments have revealed the important role of the intrinsic sea quarks in understanding the structure of the proton on the other hand the pentaquark state may also mix with the corresponding three quark state or hybrid state so the situation is much more complicated the pentaquark is easier to be identified if it has no admixture with any three quark state ie if the flavor of the anti quark xmath8 in the xmath7 state is different from any of the other four quarks xcite early experiments in 1960 s and 1970 s were performed to search for a baryon with positive strangeness as the pentaquark candidate referred to as the xmath9 xcite but no enhancements were found this field developed rapidly on both the experimental and the theoretical aspects in the last decade since the first report for a positive strangeness pentaquark like baryon referred to as the xmath0 by the leps collaboration xcite its mass and width are closed to the prediction of the chiral soliton model xcite this particle was quickly reported in subsequent experiments by some groups and many theoretical models were applied to understanding this particle and to predicting other pentaquarks such as the diquark cluster model xcite the diquark triquark model xcite the constituent quark model xcite the chiral quark model xcite the bag model xcite the meson baryon binding xcite the qcd sum rules xcite the lattice qcd xcite and the chiral soliton model in new versions xcite unfortunately many experiments found no signals for this particle what is worse the signals observed in the early experiments by some groups disappeared when including the new data with high statistics however some groups reconfirmed their observations for this particle with very high statistical significance in their updated experiments so even the existence of the pentaquark is a mystery the production mechanism and the analysis method should be investigated in details recently a charged charmonium like meson xmath10 was observed by bes xcite and belle xcite it is a suggestive evidence for the existence of the multiquark 
meson this arouses much interest on the study of the multiquark states in this paper we review the experimental search for the pentaquark states in sect ii and iii we concentrate on the searches for the xmath0 with positive or negative results in sect iv we focus on a non strange pentaquark candidate in sect v the other observed pentaquark candidates are presented then we discuss the results in sect vi and a brief summary is contained in the last section the pentaquark candidate xmath0 was widely discussed and searched for since the first report on the experimental observation by the leps collaboration xcite the skyrme s idea that baryons are solitons xcite arouses interesting and the soliton picture consists with the qcd in the large xmath11 limit xcite the xmath0 if exists is the lightest member in the predicted antidecuplet xcite its mass and width were predicted in the chiral soliton model xcite in quark model language the xmath0 is described as a pentaquark state xmath12 unlike the other pentaquark xmath7 states that the antiquark may have the same flavor with at least one quark the lowest fock state of the xmath0 composes of five quarks with the anti quark being xmath13 which is of different flavor from the other four xmath14 quarks therefore it is easy to be identified from other ordinary baryons with minimal xmath15 configurations xcite for the pentaquarkstates that the antiquark has the same flavor of some quark the mixing of the pentaquark state and the corresponding three quark state and hybrid state makes the situation complicated because any three quark baryon may have five quark components from both perturbative and nonperturbative aspects such as the baryon meson fluctuation picture xcite and the light cone fock state expansion xcite since the xmath0 has the same quark constituents as the combination of xmath16 and xmath17 these two modes are expected as the primary decay channel and thus are usually used in reconstructions in the experiments after the first report for the xmath0 the signals were observed by many groups xcite and some groups confirmed their results with new data xcite all these results are briefly listed in table thetay negative results were also reported by many groups and some early positive results in the photoproduction experiment by clas xcite and in the proton proton collision by cosy tof xcite were rejected by their high statistics experiments later xcite those are discussed in the next section experiments with positive signals for the xmath0 colsoptionsheader the xmath3 baryon if exists is the lightest charmed pentaquark state like the xmath0 but the xmath13 quark replaced with the xmath18 quark its lowest fock state is xmath19 which has the same constituent quarks with the combination of xmath20 or xmath21 thus these are estimated as the dominant decay channels the signal for the xmath3 was only observed in the dis experiment by the h1 collaboration xcite the analysis was based on the data at hera in 1996 2000 the xmath22 was reconstructed via the decay channal xmath23 in the distribution of xmath24 with opposite charge combinations a peak was observed at xmath25 mevxmath26 with a gaussian width of xmath27 mevxmath26 the background was estimated from the monte carlo simulation the number of the signal events is xmath28 corresponding to a statistical significance of 54xmath29 however this resonance was not observed in any other experiment as listed in the table thetac the xmath30 baryon if exists is a double strangeness xmath31 pentaquark candidate 
having the lowest fock state xmath32 it is recognized as an isospin quartet together with its partners xmath33 xmath34 and xmath35 the primary decay channel of xmath2 is estimated to be xmath36 the signal for the xmath2 was only reported by the na49 collaboration at cern xcite the analysis was base on the data of 158 gevxmath37 proton beam colliding with the lhxmath38 target the xmath39 was reconstructed via the decay channel xmath40 then 1640 xmath39 and 551 xmath41 were selected a peak was observed in the xmath36 mass spectrum combined with the xmath42 data as the antiparticle the fitted peak is at xmath43 mevxmath26 with 69 signals over the background of 75 events corresponding to a statistical significance of 58xmath29 estimated as xmath44 one of its isospin partners xmath45 was also observed and the fitted mass is xmath46 mevxmath26 unfortunately this resonance was not observed in any other experiment as listed in the table xi1860 among all the experiments in which the xmath0 was observed the width was claimed to be narrow however the mass position of the signals in different experiments spreads in a large region from 1520 to 1560 mevxmath26 as shown in fig mass this does not consist with a narrow resonance the mass values in leps early experiment xcite diana experiments xcite saphir experiment xcite and jinr experiments xcite are around 1540 mevxmath26 while the values in hermes experiment xcite svd early experiment xcite itep s analysis on the old neutrino experiments xcite and e522 experiment xcite are near 1530 mevxmath26 besides the leps recent experiment xcite the svd updated experiment xcite and the zeus experiment xcite provide even lower mass values close to 1520 mevxmath26 and the obelix experiment gives a much higher value therefore it is possible that even the observed signals do not correspond to the same particle the width of the xmath0 are not measured as accurate as the mass and thus the values in almost all the experiments are consistent as shown in fig width mass values for the xmath0 observed in various experiments the error bars represent the statistical uncertaintiesscaledwidth700 width values for the xmath0 measured in various experiments the error bars represent the statistical uncertainties the experiments with only the upper limit on the width are listed in the table thetay the dashed line is the upper limit of the intrinsic width of xmath0 at 90 cl by belle xcitescaledwidth700 the statistical significance of the signals was usually estimated as xmath47 in the early experiments this estimator neglects the uncertainty of the background and thus the significance may be overestimated the estimator xmath44 which assumes a smooth background with a well defined shape and xmath48 which assumes a statistical independent background with uncertainties are more proper since the production mechanism is still unknown if so the significance of the signals in the svd updated analysis xcite reduces to 56xmath29 and xmath49 estimated as xmath48 for the two samples respectively but is still large enough as an evidence in this case however the hermes result decreases to 27 39xmath29 the jinr xmath50 result decreases to 41xmath29 and the jinr xmath51 result decreases to 35xmath29 these are not enough to be claimed as an evidence besides the log likelihood difference is also a suitable alternative and was used by the leps xcite and the diana xcite the xmath52 produced in the inclusive experiments may affect the result of the xmath0 reconstructed via the xmath53 mode as pointed out 
by m zavertyaev xcite the decay xmath54 could lead to a spurious sharp peak at 1540 mevxmath26 when the momentum of the xmath52 is greater than 2 gevxmath37 on the other hand the xmath55 decays of the xmath52s could enhance the background when the xmath56 or the proton was paired with a xmath57 or a xmath58 up to now no positive result is reported for the xmath0 production in the xmath59 experiments a possible way to understand these null results with no contradict with the positive ones is to assume that the xmath0 production cross section is highly suppressed in xmath59 annihilations using the quark constituent counting rules ai titov et al estimated the ratio of xmath0 to xmath60 production in the fragmentation region and showed the ratio decreases very fast with energy xcite this ratio is often applied to estimate the yield of the xmath0 because xmath60 is a narrow resonance with similar mass to the xmath0 and is easily reconstructed the low value of this ratio implies a very different production mechanism for xmath0 if it really exists for the dis experiments performed at hera xcite the zeus and the h1 provided opposite conclusions these two experiments were almost in the same conditions and with the data collected during the same period but even using the same cuts the h1 could not produce the xmath0 signal observed by the zeus it is very confused and probably the signal observed by the zeus is a statistical fluctuation for the photoproduction experiments there is a contradiction in the xmath61 experiments between the upper limit on the cross section given by the clas xcite and the result reported by the saphir xcite and a contradiction in the xmath62 experiments between the upper limit on the cross section given by the clas xcite and the result reported by the leps xcite as claimed by the leps the contradiction between the leps and clas results is due to the different measurements if the xmath0 is preferably produced at the forward angles the clas would possibly not detect the xmath63 meson associated with the xmath0 because the most forward angle for the xmath63 detection is about xmath64 for the clas while most acceptance is of forward xmath64 for the leps xcite it may be a similar case for the contradiction between the clas and the saphir if this is true it will be a suggestion on the angular distribution for the xmath0 production in the experiments at higher energy the xmath0 productions will be boosted to a much more forward direction and thus escape the coverage of the detectors in most high energy experiments this may be a possible explanation for the null results in most high energy experiments in the improved analysis by the diana xcite a very narrow intrinsic width for the xmath0 was estimated as xmath65 mevxmath26 this result passed the upper limits given by the belle xcite the e559 xcite and the j parc xcite therefore there is no contradiction between these experiments although opposite conclusions were reported for the existence of the xmath0 among all the experimental search for the xmath0 pentaquark candidate those with negative results have higher statistics and are consequently more reliable in usual but it is hard to prove that all the observed peaks are fakes or fluctuations especially the updated results by the leps xcite the diana xcite and the svd xcite can be claimed as strong evidence for the xmath0 so the existence of the xmath0 is still debatable for the other pentaquark candidates such as xmath1 xmath2 and xmath3 the signals are much less reliable since 
none of them is confirmed in any other experiment we reviewed the experimental search for the pentaquark states during the last decade both the most widely studied candidate xmath0 and the other candidates like xmath1 xmath2 and xmath3 as well as the non strange pentaquark candidates are included since the first observation of the pentaquark like baryon state xmath0 this field has aroused much interest but even the existence of the pentaquark is debatable up to now if the pentaquark really exists it will open a new world of the qcd and hadron physics in particular if the xmath0 exists its production mechanism is almost unknown and needs to be investigate whether it is a pentaquark or not besides it will imply the existence of a flavor multiplet if the pentaquark does not exist the peaks observed in the experiments with positive results need a reasonable explanation in addition the contradictions between the experiments should be examined in details this will improve the analysis method and raise the reliability of the result in future experiments in order to confirm or rule out the existence of the pentaquark particularly the xmath0 comparisons between experiments in similar conditions are required among the experiments the updated results by the leps xcite the diana xcite and the svd xcite provide the best positive evidence for the xmath0 therefore more experiments at medium energy may lead to a clear conclusion 132ifxundefined 1 ifx1 ifnum 1 1firstoftwo secondoftwo ifx 1 1firstoftwo secondoftwo 1noop 0secondoftwosanitizeurl 0 1212 1212121212startlink1endlink0bibinnerbibempty linkdoibase 101016s0031 91636492001 3 linkdoibase 101103physrevd15281 linkdoibase 101103physrevd17260 linkdoibase 1010160550 32137990036 1 linkdoibase 101103physrevd15267 linkdoibase 101103physrevd20748 linkdoibase 101016s0375 94749781460 1 linkdoibase 101007s002180050406 linkdoibase 101142s021773239900239x linkdoibase 1010160370 26936990266 4 linkdoibase 101103physrevlett91012002 linkdoibase 101103physrevlett91232003 linkdoibase 101103physrevd69114017 linkdoibase 101016jphysletb200311010 linkdoibase 101016jphysletb200309062 linkdoibase 101016jphysletb200309061 linkdoibase 101016jphysletb200308010 linkdoibase 101016jphysletb200308050 linkdoibase 101103physrevd69094029 linkdoibase 101016jphysletb200309050 linkdoibase 101016jphysletb200402016 linkdoibase 101016jphysletb200307067 linkdoibase 101016jnuclphysb200402004 linkdoibase 101103physrevd69117502 linkdoibase 101103physrevc69055203 linkdoibase 101103physrevlett91232002 linkdoibase 101016jphysletb200312018 noop linkdoibase 101103physrevlett93152001 linkdoibase 101103physrevd70074508 linkdoibase 101103physrevd72034505 linkdoibase 101103physrevd71034001 noop linkdoibase 101103physrevd69094011 linkdoibase 101103physrevd70097503 linkdoibase 101103physrevlett110252001 linkdoibase 101103physrevlett110252002 noop linkdoibase 1010160550 32138390063 9 linkdoibase 1010160550 32138490584 4 linkdoibase 1010160550 32138590409 2 noop edited by linkdoibase 101016jphysletb200309049 linkdoibase 1010160375 94749290647 3 linkdoibase 101007s100500050136 linkdoibase 1010160370 26939600597 7 linkdoibase 101016s0370 15739700089 6 linkdoibase 10113411611587 linkdoibase 10113411954823 linkdoibase 101016jphysletb200401079 linkdoibase 101016jphysletb200308019 linkdoibase 10113411707127 linkdoibase 101016jnuclphysbps200501015 linkdoibase 101016jphysletb200404024 noop linkdoibase 101016jnuclphysa200503041 linkdoibase 101016jphysletb200602048 linkdoibase 101016jnuclphysa200608006 linkdoibase 
101103physrevc79025210 linkdoibase 101134s106377880701005x linkdoibase 101134s1063778810070100 noop noop linkdoibase 101103physrevlett91252001 linkdoibase 101016jphysletb200405067 linkdoibase 101103physrevlett96212001 linkdoibase 101016jphysletb200704023 linkdoibase 101103physrevlett92032001 linkdoibase 101103physrevlett96042001 linkdoibase 101103physrevd74032001 linkdoibase 101103physrevlett97032001 linkdoibase 101016jphysletb200607013 linkdoibase 101140epjc s10052 006 0080y linkdoibase 101103physrevd70012004 linkdoibase 101103physrevlett95042002 linkdoibase 101016jphysletb200505008 linkdoibase 101103physrevd76092004 linkdoibase 101016jphysletb200408021 linkdoibase 101016jphysletb200708005 linkdoibase 101140epjc s10052 006 0140 3 linkdoibase 101016jphysletb200606055 linkdoibase 1010880954 3899344002 linkdoibase 101103physrevlett92042003 linkdoibase 101016jnuclphysbps200501062 linkdoibase 101103physrevlett93212003 linkdoibase 101140epja i2004 10062 4 linkdoibase 1010880954 3899308090 linkdoibase 101103physrevd70111101 linkdoibase 101103physrevc72055201 noop linkdoibase 101103physrevc77045203 linkdoibase 101103physrevlett109132002 linkdoibase 101016jphysletb200510077 linkdoibase 101103physrevc85035209 101103physrevc85049901 noop linkdoibase 101140epja i2003 10029y noop linkdoibase 101016jphysletb200701041 linkdoibase 101143ptps16890 linkdoibase 101103physrevlett100252002 linkdoibase 101140epja i2011 11089 0 linkdoibase 1010881674 11373312051 linkdoibase 101103physrevc83022201 noop noop linkdoibase 101134s002136400818001x linkdoibase 101103physrevd69077501 linkdoibase 101016jphysletb200402029 linkdoibase 101103physrevd70094042 linkdoibase 101142s0217751x06032101 linkdoibase 101016jnuclphysa200502082 linkdoibase 101103physrevlett97102001 noop linkdoibase 101103physrevd72051101 linkdoibase 101103physrevc75055208 linkdoibase 101016jphysletb200403012 linkdoibase 101140epjc s2004 02042 9 linkdoibase 101103physrevd73091101 linkdoibase 101103physrevd74051101 linkdoibase 101016jphysletb200507023 linkdoibase 101016jnuclphysa200502075 linkdoibase 101103physrevc85015205 linkdoibase 101103physrevd71032004 linkdoibase 101140epjc s10052 007 0407 3 linkdoibase 101016jphysletb200502016 linkdoibase 101103physrevc70022201 linkdoibase 101103physrevd75032003 linkdoibase 101016jphysletb200801063 linkdoibase 101140epjc s2005 02281 2 linkdoibase 101103physrevlett95152001 noop linkdoibase 101103physrevc70042202
It has been ten years since the first report of a positive-strangeness pentaquark-like baryon state, yet the existence of the pentaquark state is still controversial and some contradictions between the experiments remain unresolved. In this paper we review the experimental searches for the pentaquark candidates xmath0, xmath1, xmath2, xmath3 and xmath4 in detail: we review the experiments with positive results and compare experiments performed under similar conditions but with opposite results.
introduction experiments with positive results for @xmath0 discussions summary
in the standard model xmath3 and xmath4 are not mass eigenstates instead we have the small cp violating effects are neglected xmath5 so the time evolution of the xmath6 states looks like xmath7 where xmath8 is the mass eigenvalue and xmath9 the corresponding width it follows from 1 and 2 that the probability for xmath3 meson not to change its flavour after a time xmath10 from the creation is xmath11 and the probability to convert into the xmath4 meson xmath12where xmath13 is the average width and xmath14 so xmath15 mass difference between the xmath16 mass eigenstates defines the oscillation frequency standard model predicts xcite that xmath17 xmath18 being the cabibbo kobayashi maskawa matrix element therefore the mixing in the xmath19 meson system proceeds much more faster than in the xmath20 system the total probability xmath21 that a xmath3 will oscillate into xmath4 is xmath22 in the first xmath23mixing experiments xcite just this time integrated mixing probability was measured the result xcite xmath24 shows that in the xmath1 system xmath25 is expected in fact the allowed range of xmath26 is estimated to be between xmath27 and xmath28 in the standard model xcite such a big value of xmath26 makes impossible time integrated measurements in the xmath1 system because xmath21 in 5 saturates at xmath29 for large values of x although it was thought that unlike the kaon system for the xmath16 mesons the decay width difference can be neglected xcite nowadays people is more inclined to believe the theoretical prediction xcite that the xmath30 transition with final states common to both xmath1 and xmath31 can generate about 20 difference in lifetimes of the short lived and long lived xmath1mesons xcite but we can see from the 3 xmath32 5 formulas that the effect of nonzero xmath33 is always xmath34 and so of the order of several percents because xmath35 is expected in the followingwe will neglect this effect and will take xmath36 though in some formulas xmath33 is kept for reference reason the development of high precision vertex detectors made it possible to measure xcite in the xmath23 system the time dependent asymmetry xmath37 the same techniques can also be applied to the xmath38 system recently the atlas detector sensitivity to the xmath26 parameter was studied xcite using xmath39 decay chain for xmath1 meson reconstruction it was shown that xmath26 up to 40 should be within a reach xcite the signal statistics could be increased by using other decay channels like xmath40 the purpose of this note is to study the usefulness of the decay chain xmath41 for xmath1 meson reconstruction in the atlas xmath1mixing experiments about 20 000 following b decays were generated using the pythia monte carlo program xcitexmath42 xmath43 xmath44 xmath45 xmath46 xmath47 xmath45 xmath48 xmath49 the impact parameter was smeared using the following parameterized description of the impact parameter resolution xmath50 where resolutions are in xmath51 and xmath52 is the angle with respect to the beam line it was shown in xcite that this parameterized resolution reasonably reproduces the results obtained by using the full simulation and reconstruction programs for the transverse momentum resolution an usual expression xcite xmath53 was assumed track reconstruction efficiencies for various particles were taken from xcite because now we have 6 particles in the final state instead of 4 for the xmath2 decay channel we expect some loss in statistics due to track reconstruction inefficiencies but the effect is not 
significant, because the investigation in xcite indicates a high reconstruction efficiency of 0.95. The topology of the considered xmath19 decay chain is shown schematically in a figure (a decay-topology schematic labelling xmath54, xmath55, xmath43, xmath56, xmath57, xmath58 and xmath59; the numerical layout coordinates are not reproduced here). The xmath1 decay vertex reconstruction was done in the following three steps. First of all, the xmath60 was reconstructed by finding three charged particles presumably originating from the xmath60 decay and fitting their tracks. For this purpose all combinations of properly charged particles were examined in the generated events, assuming that two of them are kaons and one is a pion. The resulting invariant mass distribution is shown in Fig. 1a for signal events; the expected xmath60 peak is clearly seen, along with a moderate combinatorial background. Cuts on xmath61, xmath62 and xmath63 were selected in order to optimize the signal-to-background ratio. To select one more cut, on xmath64, information about the invariant mass resolution is desirable. Fig. 2a shows the xmath60 meson reconstructed from its true decay products; the finite invariant mass resolution is due to the applied track smearing and equals approximately xmath65. After the xmath60 meson reconstruction, the xmath66 meson was searched for in three-particle combinations of the remaining charged particles, each particle in the combination being assumed to be a pion. Fig. 1b shows the resulting invariant mass distribution for signal events; because of the huge width of the xmath66, the signal-to-background separation is not so obvious in this case. If the xmath66 is reconstructed from its true decay products, as in Fig. 2b, its width is correctly reproduced. To draw the xmath66 out of the background, further cuts were applied on xmath67, xmath68, xmath69 and xmath70. At last the xmath19 decay vertex was fitted using the reconstructed xmath60 and xmath66. Almost the same resolution in the xmath1 decay proper time was reached, xmath71, as in xcite; the corresponding resolution in the B meson decay length in the transverse plane is xmath72. The relevant distributions are shown in Fig. 3. Branching ratios and signal statistics for the xmath73 channel are summarized in Table 1. Note that we use an updated value for Br(xmath74) from xcite. The xmath19 branching ratios are still unknown experimentally; neglecting SU(3) unitary symmetry breaking effects, we have taken Br(xmath73) xmath75 Br(xmath76). [Table 1: branching ratios and signal statistics for xmath77; the table layout is not reproduced here.] As we see, about 2065 reconstructed xmath19 are expected after a one-year run at xmath78 luminosity; the corresponding number of events within one standard deviation xmath79 of the xmath19 mass equals 1407. This last number should be compared to the 2650 signal events reported in xcite when the xmath80 decay channel is used, for events which pass the first-level muon trigger (pT > 6 GeV/c). Background can come from other xmath16 decays of the same or higher charged multiplicity, and from random combinations with some or all particles originating not from a xmath16 decay (combinatorial background). The following channels were considered and no significant contributions to the background were found. xmath82: these events do not pass the analysis cuts, because the xmath83 mass is shifted from the xmath60 mass by about 100 MeV, and so is the xmath20 mass compared to the xmath19 mass. xmath84 followed by xmath85: taking xmath86 from xcite, we see that the expected number of xmath87 events originating from this source is only five times smaller than the expected number of truly signal events.
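To make the combinatorial reconstruction step above concrete, here is a minimal sketch of a three-track invariant-mass scan under kaon/pion mass hypotheses. It is illustrative only: the function and variable names are ours, the mass window is a placeholder, and none of the analysis cuts (vertex fit quality, momentum cuts) discussed above are included.

```python
import itertools
import math

M_K, M_PI = 0.493677, 0.139570  # charged kaon and pion masses in GeV

def invariant_mass(momenta, masses):
    """Invariant mass of a set of tracks given their 3-momenta (GeV) and mass hypotheses."""
    E = sum(math.sqrt(px*px + py*py + pz*pz + m*m) for (px, py, pz), m in zip(momenta, masses))
    px = sum(p[0] for p in momenta)
    py = sum(p[1] for p in momenta)
    pz = sum(p[2] for p in momenta)
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def three_track_candidates(tracks, mass_window):
    """Scan all 3-track combinations, assigning two kaon and one pion mass hypothesis
    in every possible way, and keep combinations whose mass falls inside the window.
    `tracks` is a list of (3-momentum, charge) tuples; `mass_window` is (lo, hi) in GeV."""
    candidates = []
    for trio in itertools.combinations(tracks, 3):
        if abs(sum(q for _, q in trio)) != 1:   # the K K pi system must be singly charged
            continue
        momenta = [p for p, _ in trio]
        for i in range(3):                      # which of the three tracks is the pion
            masses = [M_K, M_K, M_K]
            masses[i] = M_PI
            m = invariant_mass(momenta, masses)
            if mass_window[0] < m < mass_window[1]:
                candidates.append((trio, m))
    return candidates

# Example: a fake event with four tracks (momentum in GeV, charge);
# with these arbitrary momenta the returned list will typically be empty.
event = [((1.2, 0.3, 4.0), +1), ((0.8, -0.2, 3.1), -1),
         ((0.5, 0.1, 2.0), +1), ((0.3, 0.4, 1.1), -1)]
print(three_track_candidates(event, mass_window=(1.90, 2.05)))
```

In the real analysis the accepted combination would of course be refit to a common vertex and subjected to the cuts listed above before the xmath66 search.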
The decay topology for this decay chain, however, is drastically different (1 + 5 rather than 3 + 3 tracks), and therefore it is not expected that a significant number of such B decays will be simulated in this way. Note that even for the xmath88 decay channel the analogous background is negligible xcite, although xmath89 is about 44 times bigger than xmath90. xmath91: about 10,000 such events were generated by PYTHIA and then analyzed. Using Br(xmath92) from xcite and assuming that the xmath93 decay goes through xmath94 (oscillations: xmath95 and therefore xmath96), we obtain Fig. 4. It is seen from this figure that, because of the xmath97 mass shift, the contribution of this channel to the background proves to be negligible. Note that Fig. 4 refers to the total number of xmath91 events; in fact the distribution of these events with respect to the decay proper time is oscillatory, with xmath98, not xmath26, defining the oscillation frequency, so in general this will result in an oscillatory dilution factor. The conclusion that this dilution factor is irrelevant relies on the fact that no candidate event was found with invariant mass within one standard deviation of the xmath1 mass for xmath99 integrated luminosity. Huge Monte Carlo statistics are needed for combinatorial background studies; no candidate event with xmath100 was found within xmath101 inclusive xmath102 events. This indicates that the signal-to-background ratio is expected to be not worse than 1:1. The observation of the xmath103 oscillations is complicated by several dilution factors. First of all, the decay proper time is measured with some accuracy xmath104; from the previous discussion we know that in our case xmath105 is expected. Due to this finite time resolution, the observed oscillations are convolutions of the expressions (3) and (4) given above with a Gaussian distribution, for example
$$ \int \mathrm{xmath106}\; \frac{\mathrm{d}s}{\sqrt{2\pi}\,\sigma} \;\sim\; \frac{1}{2}\, e^{-\Gamma t/\hbar} \left[ \cosh\frac{\Delta\Gamma}{2\hbar}\Bigl(t-\frac{\sigma^{2}}{\tau}\Bigr) + e^{-\frac{1}{2}\left(\Delta m\,\sigma/\hbar\right)^{2}} \cos\frac{\Delta m}{\hbar}\Bigl(t-\frac{\sigma^{2}}{\tau}\Bigr) \right], \qquad \tau = \frac{\hbar}{\Gamma}. \tag{9} $$
So the main effect of this smearing is the reduction of the oscillation amplitude by xmath108; this is quite important in the xmath1 system, where xmath109. There is also a time shift xmath110 in (9); this time shift does not really affect the observability of the oscillations and we will neglect it. In fact (9) is valid only for not too short decay times, xmath111, because in the distributions (3) and (4) xmath112 is assumed. Another reduction in the oscillation amplitude is caused by particle-antiparticle mistagging at t = 0. In our case the particle-antiparticle nature of the xmath16 meson is tagged by the lepton charge in the semileptonic decay of the associated beauty hadron. Mistagging is mainly due to: xmath113 oscillations (the accompanying b quark can hadronize as a neutral xmath16 meson and oscillate into xmath114 before its semileptonic decay); the cascade process xmath115, in which the lepton is misidentified as having come directly from the xmath16 meson and is associated to the xmath116 decay; leptons coming from other decaying particles (K, xmath117); and detector errors in the lepton charge identification. Let xmath118 be the mistagging probability. If we have tagged xmath119 xmath3 mesons, among them only xmath120 are indeed xmath3s and xmath121 are xmath4s misidentified as xmath3s, so at the proper time xmath10 we would observe xmath122 and, due to CPT invariance, xmath123,
$$ \frac{N}{2}\, e^{-\Gamma t/\hbar} \left[ \cosh\frac{\Delta\Gamma\, t}{2\hbar} - (1-2\eta)\cos\frac{\Delta m\, t}{\hbar} \right], $$
decays associated to
the xmath4 meson and therefore xmath124 so the dilution factor due to mistagging is xmath125 in our studieswe have taken xmath126 as in xcite finally the dilution can emerge from background suppose that apart from xmath127 events with xmath128 oscillations we also have xmath129 additional background events half of them will simulate xmath114 meson and half of them b meson assuming asymmetry free background so the observed number of would be xmath128 oscillations will be xmath130 and the oscillation amplitude will be reduced by an amount xmath131 neglecting the proper time dependence of this dilution factor that is supposing that the background is mainly due to xmath16hadron decays and therefore has approximately the same proper time exponential decay as the signal xcitewe have taken xmath132 which corresponds to the 21 signal background ratio for xmath133 integrated luminosity the number of reconstructed xmath19s would reach xmath134 from the analyzed channel alone another xmath135sare expected from the xmath136 channel xcite for events in which xmath19 meson does not oscillate before its decay the xmath137 meson and the tagging muon have equal sign charges if the xmath19 meson oscillates opposite charge combination emerges the corresponding decay time distributions are xmath138 d is the product of all dilution factors and xmath119 is the total number of reconstructed xmath19s the unification of samples from xmath73 and xmath80 decay channels allows to increase xmath26 measurement precision fig7 and fig8 show the corresponding xmath139 asymmetry plots for xmath140 and 35 it seems to us that xmath141 decay channel is almost as good for the xmath1mixing exploration as previously studied xmath142 and enables us to increase signal statistics about 15 times further gain in signal statistics can be reached xcite by using xmath143 decay mode and considering other decay channels of xmath60 these possibilities are under study we refrain from giving any particular value of xmath26 as an attainable upper limit too many uncertainties are left before a real experiment will start note for example that about two times bigger branching ratios for both xmath144 and xmath145 decay channels are predicted in xcite xmath146 as a xmath147 production cross section can also have significant variation in real life xcite so although the results of this investigation strengthen confidence in reaching xmath26 as high as 40 xcite it should be realized that some theoretical predictions about xmath1physics and collider operation were involved and according to tdlees first law of physicist xcite without experimentalist theorist tend to drift however maybe it is worthwhile to recall his second law also without theorist experimentalists tend to falter many suggestions of peerola strongly influenced this investigation and lead to considerable improvement of the paper communications with sgadomski and nellis are also appreciated authors thank nv makhaldiani for drawing their attention to tdlees paper 99 a ali d london j phys g19 1993 1069 see for example ua1 coll albajar et al phys lett b186 1987 247 1991 171 cleo coll j bartelt et al phys 71 1993 1680 argus coll h albrecht et al z phys c55 1992 357 aleph coll d buskulic et al phys lett 284 1992 177 opal coll acton et al phys lett b276 1992 379 b adeva et al phys lett b288 1992 395 delphi coll p abreu et al phys lett b332 1994 488 moser b mixing talk given at the xmath148 international symposium on heavy flavour physics montreal canada 1993 cern ppe93 164 a ali d london cp 
violation and flavour mixing in the standard model desy95 148 hep ph9508272 a ali d london implications of the top quark mass measurement for the ckm parameters xmath26 and cp asymmetries cern th739894 hep ph9408332 buras w slominsski h steger nucl b245 1984 369 franzini phys 173 1989 1 voloshin et al 46 1987 181 a datta ea paschos u trke phys b196 1987 382 bigi lifetimes of heavy flavour hadrons whence and whither und hep95big06 hep ph9507364 i dunietz xmath38 mixing cp violation and extraction of ckm phases from untagged xmath1 data samples fermilab pub94361t hep ph9501287 aleph coll d decamp et al phys b313 1993 498 aleph coll d buskulic et al cern ppe93 99 opal coll r akers et al cern ppe94 43 p eerola s gadomski b murray xmath19mixing measurement in atlas atlas internal note phys no039 1994 atlas technical proposal cern lhcc94 43 1994 tsjstrand pythia 57 and jetset 74 physics and manual cern th711293 1993 review of particle properties phys rev d50 1994 s rudaz mb voloshin phys lett b252 1990 443 p eerola et al asymmetries in xmath16 decays and their experimental control atlas internal note phys no054 1994 p camarri a nisati time dependent analysis of cp asymmetries in the xmath149 system atlas internal note phys no065 1995 p blasii p colangelo g nandulli phys b2831992 434 p eerola measurement of cp violation in b decays with the atlas experiment atlas internal note phys no009 1992 t d lee the evolution of weak interactions talk given at the symposium dedicated to jack steinberger geneva 1986 cern 86 07
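As a compact numerical companion to the mixing formulas and dilution factors discussed in this note, the sketch below evaluates the time-integrated mixing probability and the combined dilution of the oscillation amplitude. All numerical values in the example call are illustrative assumptions of ours; the actual mixing parameter, mistag rate and signal-to-background ratio are those quoted (or masked) in the text above.

```python
import math

def integrated_mixing_probability(x):
    """Time-integrated probability that a neutral B meson decays as its antiparticle,
    neglecting the width difference: chi = x^2 / (2 (1 + x^2)); saturates at 1/2 for large x."""
    return x * x / (2.0 * (1.0 + x * x))

def combined_dilution(x, sigma_t_over_tau, mistag, s_over_b):
    """Product of the three dilution factors discussed above:
    - finite proper-time resolution damps the amplitude by exp(-(x*sigma_t/tau)^2/2),
    - a mistag fraction eta reduces it by (1 - 2*eta),
    - an asymmetry-free background reduces it by S/(S+B)."""
    d_time = math.exp(-0.5 * (x * sigma_t_over_tau) ** 2)
    d_tag = 1.0 - 2.0 * mistag
    d_bkg = s_over_b / (1.0 + s_over_b)
    return d_time * d_tag * d_bkg

def asymmetry(t_over_tau, x, dilution):
    """Observed (unmixed - mixed)/(unmixed + mixed) asymmetry at proper time t,
    with all dilutions lumped into a single factor."""
    return dilution * math.cos(x * t_over_tau)

# Illustrative numbers only: x_s = 25, sigma_t/tau = 0.06, eta = 0.22, S/B = 2
print(integrated_mixing_probability(25.0))        # ~0.499, close to the 1/2 saturation value
print(combined_dilution(25.0, 0.06, 0.22, 2.0))   # ~0.12
```

A fit for xmath26 would then compare this smeared, diluted asymmetry with the measured mixed and unmixed proper-time distributions.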
The usefulness of the xmath0 decay chain is investigated for xmath1 reconstruction in the future ATLAS xmath1 mixing experiment. It is shown that this decay channel is almost as suitable for this purpose as the previously studied xmath2.
introduction event simulation event reconstruction signal and background dilution factors prospects for @xmath26 measurements conclusions acknowledgements
the color fields of hadrons boosted to the light cone are thought to grow very strong parametrically of order xmath2 where xmath3 is the coupling xcite the fields of nuclei are enhanced further by the high density of valence charges per unit transverse area which is proportional to the thickness xmath4 of a nucleus xcite in collisions of such strong color fieldsa large number of soft gluons is released due to the genuinely non perturbative dynamics of the strong color fieldsa semi hard saturation scale xmath5 emerges it corresponds to the transverse momentum where the phase space density of produced gluons is of order xmath6 the mean multiplicity per unit rapidity in high energy collisions is then xmath7 below we argue that a semi classical effective theory of valence color charge fluctuations predicts that the variance of the multiplicity distribution is of order xmath8 so that the perturbative expansion of xmath9 begins at order xmath10 we show that in the strong field limit then a gaussian effective theory leads to koba nielsen olesen kno scaling xcite this relates the emergence of kno scaling in xmath11integrated multiplicity distributions from high energy collisions to properties of soft gluons around the saturation scale collisions at various energies as measured by the ua5 xcite alice xcite and cms xcite collaborations respectively note that we restrict to the bulk of the distributions up to 35 times the mean multiplicityscaledwidth500 the kno scaling conjecture refers to the fact that the particle multiplicity distribution in high energy hadronic collisions is universal ie energy independent if expressed in terms of the fractional multiplicity xmath12 this is satisfied to a good approximation in the central pseudo rapidity region at center of mass energies of 900 gev and above xcite as shown in fig fig knolhcdata on the other hand ua5 data xcite taken at xmath13 gev appears to show a slightly distorted multiplicity distribution this is in line with the observation that at lower energies higher order factorial moments xmath14 of the distribution are energy dependent and significantly different from the reduced moments xmath15 xcite gq cq in fact since the difference of xmath14 and xmath15 is subleading in the density of valence charges one may interpret this finding to indicate that the high density approximation is less accurate for xmath16 gev xmath0 collisions approximate kno scaling has been predicted to persist also for min bias xmath17 collisions at lhc energies in spite of additional glauber fluctuations of the number of participants and binary collisions xcite a more detailed discussion of multiplicity distributions at tev energies is given in refs xcite and references therein transverse momentum integrated multiplicities in inelastic hadronic collisions are not governed by an external hard scale unlike say multiplicity distributions in xmath18 annihilation or in jets xcite hence the explanation for the experimental observation should relate to properties of the distribution of produced gluons around the saturation scale xmath5 we shall first discuss the multiplicity distribution of smallxmath19 gluons obtained from a gaussian effective action for the color charge fluctuations of the valence charge densities xmath20 xcite z esmv smv d2x dx eq smv in the strong field limit a semi classical approximation is appropriate and the soft gluon field in covariant gauge can be obtained in the weizscker williams approximation as azx g azx g d2z azz parametrically the mean multiplicity 
obtained from the action eq. smv is then (eq. nbar) $\bar n \sim Q_s^{2}\, S_\perp$, where xmath21 denotes a transverse area and xmath22; the prefactor in eq. nbar can be determined numerically xcite but is not required for our present considerations. One can similarly calculate the probability to produce xmath23 particles by considering fully connected diagrams with xmath23 valence sources xmath20 in the amplitude and xmath23 sources xmath24 in the conjugate amplitude, for both projectile and target respectively; these can be expressed as xcite in terms of fully connected correlators (eq. conn), with the reduced moments given by eq. cqmv (this requires that the rapidities of the xmath23 particles be similar; here we assume that all particles are in the same rapidity bin). This expression is valid with logarithmic accuracy and was derived under the assumption that all transverse momentum integrals over xmath25 are effectively cut off in the infrared at a scale xmath26 due to non-linear effects. The fluctuation parameter xmath27 in eq. cqmv is of order $k \sim (N_c^{2}-1)\, Q_s^{2}\, S_\perp$; once again the precise numerical prefactor in the classical approximation has been determined by a numerical computation to all orders in the valence charge density xmath20 xcite. The multiplicity distribution is therefore a negative binomial distribution (NBD) xcite (eq. nbd),
$$ P(n) \;=\; \frac{\Gamma(k+n)}{\Gamma(k)\,\Gamma(n+1)}\; \frac{\bar n^{\,n}\, k^{k}}{(\bar n + k)^{\,n+k}}. $$
Indeed, multiplicity distributions observed in high-energy xmath0 collisions in the central region can be described quite well by a NBD, see for example refs. The parameter xmath28 determines the variance of the distribution; the latter approximation applies in the limit xmath29 (see below) and can be obtained from the inclusive double-gluon multiplicity (eq. dn2k). From this expression it is straightforward to see that the perturbative expansion of xmath28 starts at xmath30, since the connected diagrams on the lhs of eq. dn2k involve the same number of sources and vertices as the disconnected diagrams on the rhs of that equation (also see the appendix). This observation is important since, in general, the NBD (eq. nbd) exhibits KNO scaling only when xmath29 and if xmath27 is not strongly energy dependent. A numerical analysis of the multiplicity distribution at 2360 GeV, for example, achieves a good fit to the data for xmath31 xcite, which we confirm below. Such values for xmath9 have also been found for peripheral collisions of heavy ions from ab initio solutions of the classical Yang-Mills equations xcite; furthermore, those solutions predict that xmath32 for central collisions of xmath33 nuclei. To illustrate how deviations from KNO scaling arise, it is instructive to consider a deformed theory with an additional contribution to the quadratic action: we shall add a quartic operator xcite (eq. squartic), schematically $\int \mathrm{d}^{2}v\, \mathrm{d}v_{1}\, \mathrm{d}v_{2}\, (\cdots)$. We assume that the contribution from the quartic operator is a small perturbation, since xmath34 while xmath35. In the classical approximation the mean multiplicity is unaffected by the correction, as it involves only two-point functions; the theories eq. smv and eq. squartic need to be matched, thus the bare parameters xmath36 in eq. smv and xmath37 in eq. squartic are different, as the latter absorbs some self-energy corrections (we refer to ref. xcite for details). On the other hand xmath28, as defined in eq. dn2k, now becomes (eq. kquartic) $k \sim Q_s^{2}\, S_\perp \big/ \bigl[\, 1 + \tfrac{3}{N_c^{2}-1}\,(\cdots) \,\bigr]$. [Figure: multiplicity distributions in NSD collisions at various energies, and NBD fits (xmath38 and xmath39); the mean multiplicity quoted for the fits has been rescaled by 1.5 to include neutral particles, and here xmath27 is integrated over the transverse plane of the collision.]
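To make the NBD and its KNO limit explicit, the following sketch (ours, not taken from the references) evaluates the distribution eq. nbd for given mean multiplicity and k, checks that k controls the non-Poissonian part of the variance, and compares the rescaled distribution with the limiting KNO (gamma) form reached when the mean multiplicity is much larger than k.

```python
from math import lgamma, log, exp, gamma

def nbd_pmf(n, nbar, k):
    """Negative binomial distribution with mean nbar and fluctuation parameter k (eq. nbd)."""
    logp = (lgamma(n + k) - lgamma(k) - lgamma(n + 1)
            + n * log(nbar / (nbar + k)) + k * log(k / (nbar + k)))
    return exp(logp)

def kno_gamma(z, k):
    """Limiting KNO scaling function of the NBD at fixed k: psi(z) = k^k z^(k-1) e^(-k z) / Gamma(k)."""
    return k ** k * z ** (k - 1) * exp(-k * z) / gamma(k)

nbar, k = 30.0, 1.4                      # illustrative values only
ns = range(0, 400)
mean = sum(n * nbd_pmf(n, nbar, k) for n in ns)
var = sum(n * n * nbd_pmf(n, nbar, k) for n in ns) - mean ** 2
# variance = nbar + nbar^2/k, i.e. C_2 = 1 + 1/nbar + 1/k
print(mean, var, mean + mean ** 2 / k)

# KNO test: nbar * P(n) plotted against z = n/nbar approaches kno_gamma(z, k)
# as nbar grows at fixed k.
for nbar_test in (20.0, 80.0):
    n = int(round(nbar_test))            # the point z = 1
    print(nbar_test, nbar_test * nbd_pmf(n, nbar_test, k), kno_gamma(n / nbar_test, k))
```

With k held fixed and only the mean growing with energy, the rescaled curves collapse onto a single function of z; a strongly energy-dependent k would break this scaling, which is the scenario constrained by the fits discussed in the text.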
Therefore, in the classical approximation (eq. quarticnbark), $\bar n / k \;\propto\; 1 + \tfrac{3}{N_c^{2}-1}\,(\cdots)$. This result illustrates that xmath9 decreases as the contribution of the xmath40 operator increases. We repeat that the derivation assumed that the correction is small, so that eq. quarticnbark does not apply for large values of xmath41. Ref. xcite estimated, by entirely different considerations, that for protons xmath42 at xmath43; that would correspond to a value of xmath9 smaller by a factor of 1.43 than for the Gaussian theory. Assuming that RG flow with energy approaches a Gaussian action xcite, xmath9 should increase by about this factor. NBD fits to the data shown in Fig. knofits confirm that xmath9 indeed increases with energy, which might indicate flow towards a Gaussian action; however, the observed increase from xmath44 GeV to 7 TeV is much stronger, a factor of about 3. This apparent discrepancy could be resolved, at least partially, by running of the coupling in eqs. nbar and quarticnbark with xmath5, but this requires a more careful analysis; running at the effective scale xmath5 is taken into account if the mean multiplicity is computed with energy-evolved unintegrated gluon distributions, as e.g. in refs. xcite. In the previous section we considered the multiplicity distribution of produced gluons in a collision of classical YM fields sourced by classical color charges xmath20 moving on the light cone. At high energies, though, i.e. when xmath45, the classical fields are modified by quantum fluctuations xcite. Resummation of boost-invariant quantum fluctuations leads to an energy-dependent saturation scale, for example as required in order to reproduce the growth of the multiplicity xmath46 with energy; in particular, the energy dependence of the mean saturation scale, averaged over all evolution ladders (distributions of quantum emissions), can be obtained by solving the running-coupling BK equation xcite. Instead, in this section we shall solve an evolution equation which accounts both for saturation (non-linear) effects and for the fluctuations of the rapidities and transverse momenta of the virtual gluons in the wave function of a hadron before the collision. We do this in order to determine the multiplicity distribution, rather than just the mean number, of dipoles in a hadronic wave function boosted to rapidity xmath47. We shall do so by solving, via Monte Carlo techniques, the evolution equation (eq. plevol) for xmath48, which is the probability for the dipole size distribution xmath49 to occur. Note that in this section xmath50 denotes the logarithmic dipole size, conjugate to its transverse momentum rather than to a light-cone momentum fraction. This equation has been studied before in ref. xcite for fixed xmath51 and in ref. xcite for running xmath52; those papers also provide references to related earlier work. The first term in eq. plevol is a gain term due to dipole splitting, while the second term corresponds to loss due to recombination: f(z, n_x) is the splitting rate, and T(z, n_x) is the dipole scattering amplitude for a dipole projectile of size xmath53 to scatter off the target with the dipole distribution xmath49. Note that xmath54 is non-linear in the dipole density, as it involves also the pair and higher densities. Finally, the remaining kernel is the elementary dipole-dipole scattering amplitude at LO in
perturbative qcd for more detailswe refer to ref xcite here we recall only that it was found there that evolution with a running coupling suppresses fluctuations in the tails of the travelling waves and so restores approximate geometric scaling xcite we have determined the multiplicity distribution of dipoles with size xmath55 niy 1qs2y dx nix y i110 5 by evolving a given initial configuration xmath56 xmath57 times despite starting with a fixed initial condition evolution introduces fluctuations in the rapidities where splittings occur and in the sizes of the emerging dipoles in the wave function left evolution with trivial xmath1function xmath58const right qcd xmath1functiontitlefigscaledwidth470 in the wave function left evolution with trivial xmath1function xmath58const right qcd xmath1functiontitlefigscaledwidth470 in fig fig plkno we show that fixed coupling evolution does not obey kno scaling of the distribution of virtual quanta while running coupling evolution does the shape of the distribution however looks different than the measured distribution of produced particles from fig fig knolhcdata this could be due to the fact that our evolution model does not treat diffusion in impact parameter space hence xmath59 shown in fig fig plkno should be interpreted as the multiplicity distribution at the center of the hadron our work is supported by the doe office of nuclear physics through grant no de fg02 09er41620 and by the city university of new york through the psc cuny research award program grant 65041 0043 we can obtain the fluctuation parameter xmath27 by calculating the inclusive double gluon multiplicity and expressing it in terms of the single inclusive or mean multiplicity the connected two particle production cross section for gluons with rapidity xmath61 and xmath62 has the form n2p q conn eq c2 xmath63 is the mean multiplicity and the brackets denote an average over events xmath64 is given by n2p qfgaafgbbfgccfgdd i1 4 1ak2 1bk4 1ck1 1dk3 2ap k2 2bq k4 2cp k1 2dq k3 xmath65 denotes the lipatov vertex for which lp klp kp k2 eq lipatovvertex for the four point function in the target and projectile fields we use xcite a2p k2 b2q k4 c2p k1 d2q k3 24 2 24 2 abcd acbd adbc k1k3k2k4 eq4ptcorr the first two lines on the rhs of the above equation originate from the quadratic part of the action while the third line is due to the quartic operator the product of the gaussian parts of the two four point functions gives nine terms one of which xmath66 corresponds to a disconnected contribution it exactly cancels the second term in eq eq c2 four of the other eight terms xmath67 or xmath68 give identical leading contributions to double gluon production they correspond to a rainbow diagram like the one shown in fig fig gaussianconn in the rainbow diagram on one side target or projectile the xmath20 s corresponding to the same gluon momentum are contracted with each other the remaining four non rainbow diagrams are suppressed relative to the terms we keep at large xmath69 and xmath23 xcite hence the leading gaussian contribution is 4 eq gaussianconn the same reasoning applies also for the additional quartic contribution and only rainbow diagrams are considered like the one in fig fig quarticconn there are two of them one for the projectile and one for the target to first order in xmath70 and their contribution is fgaafgbbfgccfgdd i1 4 2 2 acbdk1k2k3k4 abcd acbd adbck1k3k2k4 the color factor evaluates to fgaafgbbfgccfgdd acbd abcdacbd adbc2 nc2nc2 1nc2nc2 12 using eq eq lipatovvertex we get 2 
2 the integral over the ladder momentum is again cut off at the saturation scale xmath5 then the quartic contribution to connected two gluon production becomes 2 2 eq quarticconn the last step is to express the fully connected diagrams in terms of the single inclusive cross section 2 ncnc2 1 eq single summing eq eq gaussianconn and eq eq quarticconn and using eq eq single we get the fluctuation parameter xmath28 is now identified with the expression in the square brackets we rewrite it in terms of eq beta 2 and use eq qs qs2dz 2z to arrive at the final expression eq1k qs2s 1 3nc2 1 in this section we are going to calculate the connected diagrams for three gluon production to obtain the correction to the reduced moment xmath71 at order xmath72 assuming as before that xmath73 at the end of this sectionwe also outline corrections suppressed by higher powers of xmath74 we are looking for the contribution of the connected diagrams to the following expression xcite fgaafgbbfgccfgfffgeefgdd i1 6 1fp k2 1eq k4 1dl k6 1ap k11bq k3 1cl k5 2fk2 2ek4 2dk6 2ak12bk3 2ck5 eq c3 as before the xmath20 correlators of the target and the projectile consist of two parts one from the quadratic operator in the action and another from the additional xmath75 operator f e d ab c f e d ab c f e d a b c the product of the two gaussian contributions from the target and the projectile to leading order in the gluon momenta gives rise to 16 rainbow diagrams the result has been obtained previously xcite and reads expressed in terms of the mean multiplicity eq gaussian3gluon the correction to first order in xmath70 is 2 f ed ab c f e d a b c eq simcorr again we are considering only rainbow diagrams so for the gaussian six point function in the above expression from all possible contractions we keep only the term 26 3 afbecdk1k2k3k4k5k6 to calculate the correction to the six point function to first order in xmath70we factorize it into a product of two and four point functions there are fifteen possible factorizations of that kind three of them are disconnected diagrams and the remaining twelve give identical contributions we consider for example the following combination 1fp k2 1eq k4 1dl k6 1ap k11bq k3 1cl k5 1ap k11bq k3 1fp k2 1eq k4 1dl k6 1cl k5 the two point function is 1ap k11bq k3 22 ab pq k1k3 and for the correction to the four point function we use the last line from eq eq4ptcorr the color factor is fgaafgbbfgccfgfffgeefgddabcdef cedf cfde afbecd 2nc3nc2 1nc3nc2 12 putting everything together into eq eq c3 and multiplying by two because of eq simcorr and by twelve which is the number of possible diagrams we get 4 2 i1 6 k1k2k3k4k5k6pq k1k3pq k2k4k6k5 4 2 again we regularize the ladder integrals at the saturation scale finally using expression eq single for the mean multiplicity the xmath75 contribution to three gluon production becomes eq correction3gluon summing eq gaussian3gluon and eq correction3gluon from the above equationthe third reduced moment is c3 or eq c3final qs4s2 1 9nc2 1 where we have used expressions eq beta and eq qs for a nbd we have that xmath76but if we compare eq c3final to the square of eq eq1k which is qs4s21 6nc2 1 we see that the coefficients of the corrections at order xmath77 differ that means that the xmath75 operator in the action provides a correction to the negative binomial distribution in fact such deviation from a nbd is more obvious if even higher order operators are added to the action dropping the longitudinal dependence of the operators for simplicity such an action would 
have the form sd2v the additional terms are suppressed by powers of xmath74 xcite 2gga13 3gga132 4gga133 5gga134 6gga135 the cubic operator gives a correction to the six point function ie to xmath71 at order xmath78 but does not correct the four point function ie xmath79 it only renormalizes xmath36 the same applies to the xmath80 operator xmath71 will contain a term xmath81 but xmath82 does not hence beyond a quadratic action the relation xmath83 is not exact z koba h b nielsen and p olesen nucl b 40 317 1972 r e ansorge et al ua5 collaboration z phys c 43 357 1989 k aamodt et al alice collaboration eur j c 68 89 2010 v khachatryan et al cms collaboration jhep 1101 079 2011 w a zajc phys b 175 219 1986 a dumitru and y nara phys c 85 034907 2012 r ugoccioni and a giovannini j phys 5 199 2005 hep ph0410186 d prorok int j mod a 26 3171 2011 arxiv11010787 hep ph t mizoguchi and m biyajim arxiv12070916 hep ph a bassetto m ciafaloni and g marchesini nucl b 163 477 1980 y l dokshitzer v s fadin and v a khoze z phys c 18 37 1983 y l dokshitzer phys b 305 295 1993 g p salam nucl b 449 589 1995 a krasnitz y nara and r venugopalan phys 87 192302 2001 nucl a 727 427 2003 t lappi phys c 67 054903 2003 d kharzeev e levin and m nardi nucl a 730 448 2004 erratum ibid a 743 329 2004 nucl a 747 609 2005 a dumitru d e kharzeev e m levin and y nara phys rev c 85 044920 2012 j l albacete a dumitru h fujii and y nara arxiv12092001 hep ph f gelis t lappi and l mclerran nucl a 828 149 2009 p tribedy and r venugopalan nucl a 850 136 2011 erratum ibid a 859 185 2011 arxiv11122445 hep ph t lappi s srednyak and r venugopalan jhep 1001 066 2010 b schenke p tribedy and r venugopalan arxiv12066805 hep ph a dumitru and e petreska nucl a 879 59 2012 e iancu k itakura and l mclerran nucl a 724 181 2003 a dumitru j jalilian marian t lappi b schenke and r venugopalan phys b 706 219 2011 e iancu and d n triantafyllopoulos jhep 1111 105 2011 jhep 1204 025 2012 i balitsky phys d 75 014001 2007 y v kovchegov and h weigert nucl a 784 188 2007 nucl a 789 260 2007 j l albacete and y v kovchegov phys rev d 75 125021 2007 e iancu j t de santana amaral g soyez and d n triantafyllopoulos nucl a 786 131 2007 a dumitru e iancu l portugal g soyez and d n triantafyllopoulos jhep 0708 062 2007 a m stasto k j golec biernat and j kwiecinski phys lett 86 596 2001 k dusling d fernandez fraile and r venugopalan nucl a828 161 2009 arxiv09024435 nucl th
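As a crude illustration of the dipole-number fluctuations generated by the evolution discussed in the previous section, the toy Monte Carlo below keeps only the linear splitting (gain) term at fixed coupling: each dipole splits with probability alpha_bar*dY per rapidity step, and all transverse structure, the recombination (loss) term and the running of the coupling are ignored. It is therefore not the equation solved in the paper, only a minimal sketch of how splitting fluctuations broaden P_N.

```python
import random

def evolve_dipole_number(Y, alpha_bar=0.2, dY=0.01, rng=random.random):
    """Toy gain-term-only evolution: every dipole independently splits into two
    with probability alpha_bar * dY in each small rapidity step."""
    n = 1
    for _ in range(int(Y / dY)):
        n += sum(1 for _ in range(n) if rng() < alpha_bar * dY)
    return n

def dipole_multiplicity_distribution(Y, events=5000):
    """Histogram of the final dipole number over many evolutions from a single initial dipole."""
    counts = {}
    for _ in range(events):
        n = evolve_dipole_number(Y)
        counts[n] = counts.get(n, 0) + 1
    total = float(sum(counts.values()))
    return {n: c / total for n, c in sorted(counts.items())}

dist = dipole_multiplicity_distribution(Y=6.0)
nbar = sum(n * p for n, p in dist.items())
# KNO-style rescaling: compare nbar * P(n) versus z = n / nbar for different Y
print(nbar, [(round(n / nbar, 2), round(nbar * p, 3)) for n, p in list(dist.items())[:5]])
```

In this linear toy the rescaled distribution approaches a simple exponential in z at large Y; that scaling is an artifact of dropping the transverse dynamics. As discussed above, in the full equation it is the running coupling that restores approximate scaling, while fixed-coupling evolution does not.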
Transverse-momentum integrated multiplicities in the central region of xmath0 collisions at LHC energies satisfy Koba-Nielsen-Olesen (KNO) scaling. We attempt to relate this finding to multiplicity distributions of soft gluons: KNO scaling emerges if the effective theory describing color charge fluctuations at a scale on the order of the saturation momentum is approximately Gaussian. From an evolution equation for quantum corrections which includes both saturation and fluctuations, we find that evolution with the QCD xmath1 function satisfies KNO scaling while fixed-coupling evolution does not. Thus, non-linear saturation effects and running-coupling evolution are both required in order to reproduce geometric scaling of the DIS cross section and KNO scaling of virtual dipoles in a hadron wave function.
introduction kno scaling from a gaussian action in the classical limit quantum evolution and the distribution of dipoles in the hadronic wave function the moment @xmath60 with the quartic action the moment @xmath71 with the quartic action
massive stars play a fundamental role in driving the energy flow and material cycles that influence the physical and chemical evolution of galaxies despite receiving much attention their formation process remains enigmatic observationally the large distances to the nearest examples and the clustered mode of formation make it difficult to isolate individual protostars for study it is still not certain for instance whether massive stars form via accretion similar to low mass stars or through mergers of intermediate mass stars advances in instrumentation have enabled sub arcsecond resolution imaging at wavelengths less affected by the large column densities of material that obscure the regions at shorter wavelengths recent observations exploiting these capabilities have uncovered the environment surrounding individual massive protostellar systems from analysis of xmath423 xmath0 m co bandhead emission xcite have inferred keplerian disks very closely surrounding within a few au four massive young stellar objects while interferometric mm continuum observations find the mass function of protostellar dust clumps lies close to a salpeter value down to clump radii of 2000au xcite these high resolution observations point toward an accretion formation scenario for massive stars further discrimination between the two competing models is possible by examining the properties in particular the young stellar populations of hot molecular cores the mid infrared mir window 7 25 xmath0 m offers a powerful view of these regions the large column densities of material process the stellar light to infrared wavelengths and diffraction limited observations are readily obtained recent observations indicate that class ii methanol masers exclusively trace regions of massive star formation xcite and are generally either not associated or offset from uchii regions xcite xcite hereafter m05 have carried out multi wavelength mm to mir observations toward five star forming complexes traced by methanol maser emission to determine their large scale properties they found that maser sites with weak xmath510mjy radio continuum flux are associated with massive xmath650mxmath7 luminous xmath610xmath8lxmath7 and deeply embedded axmath940 mag cores characterising protoclusters of young massive protostars in an earlier evolutionary stage than uchii regions the spatial resolution of the observations xmath68xmath2 was however too low to resolve the sources inside the clumps details of the regions from observations in the literature are described in m05 we have since observed three of the m05 regions at high spatial resolution to uncover the embedded sources inside the cores at mir wavelengths the data were obtained with michelle on the 8m gemini north telescope in queue mode on the 18xmath10 22xmath11 and 30xmath10 of march 2003 each pointing centre was imaged with four n band silicate filters centred on 79 88 116 and 125 xmath0 m and the qa filter centred on 185 xmath0 m with 300 seconds on source integration time g17349 and g18895 were observed twice on separate nights and g19260 observed once the n and q band observations were scheduled separately due to the more stringent weather requirements at q band the standard chop nod technique was used with a chop throw of 15xmath2 and chop direction selected from msx images of the region to minimise off field contamination the spatial resolution calculated from standard star observations was xmath4 036xmath2 at 10 xmath0 m and xmath4 057xmath2 at 185 xmath0 m the 32xmath2x24xmath2 field of 
view fully covered the dust emission observed by m05 in each region particular care was taken to determine the telescope pointing position but absolute positions were determined by comparing the mir data to sensitive high resolution cm continuum vla images of the 3 regions minier et al in prep similar spatial distribution and morphology of the multiple components allowed good registration between the images the astrometric uncertainty in the vla images is xmath41xmath2 flux calibration was performed using standard stars within 03 airmass of the science targets there was no overall trend in the calibration factor as a result of changes in airmass throughout the observations the standard deviation in the flux of standards throughout the observations was found to be 74 31 44 24 and 9 for the four n band and 185 xmath0 m filters respectively the statistical error in the photometrywas dominated by fluctuations in the sky background upper flux limits were calculated from the standard deviation of the sky background for each filter and a 3xmath12 upper detection limit is used in table 1 similarly a 3xmath12 error value is quoted for the fluxes in table 1 typical values for the n and q band filters were 0005 and 003 jy respectively the flux densities for the standard stars were taken from values derived on the gemini south instrument t recs which shares a common filter set with michelle regions confused with many bright sources were deconvolved using the lucy richardson algorithm with 20 iterations this was necessary to resolve source structure and extract individual source fluxes the instrumental psf was obtained for each filter using a bright non saturated standard star the results were reliable and repeatable near the brighter sources when using different stars for the psf and observations of the objects taken over different nights as a further check the standard stars were used to deconvolve other standards and reproduced point sources down to 1 of the peak value after 20 iterations so only sources greater than 3 of the peak value were included in the final images the resulting deconvolutions are shown in fig 1 tabsources cols as the large scale clump dust and gas morphology appears simple and centrally peaked see m05 we make the reasonable assumption that the protocluster centres coincide with the central peak of dust emission the spatial distribution of the point sources within the protocluster is similar between the clumps with close point sources toward the cluster centre the methanol masers are found closest to the brightest mir point source within the assumed 1xmath2 pointing error from image registration these sources have temperatures sufficient to evaporate methanol ice from the dust grains into the gas phase xmath690k as well as sufficient luminosity of ir photons to pump the masing transition conditions models suggest are required for such emission xcite it is known that more massive stars favour cluster centres eg xcite but it is unclear whether they form there or migrate in from outside we have used the simple harmonic model of ballistic motion developed by xcite to consider the motion of sources within the cores using the measured column density and radius from m05 listed in table 2 the time required for migration from the edge to the centre is xmath4 xmath13 years this is comparable to the predicted hmc lifetime of 10xmath14 years xcite so we can not rule out the possibility of migration within the clumps any sources having migrated to the centre in this way would have acquired 
a velocity of xmath4 2 kmsxmath15 with respect to the clump massive stars in clusters are observed to have a high companion star fraction xcite in the m16 cluster xcite observed massive stars earlier than b3 with visual companions separated by 1000 3000au if multiple systems are bound from birth it is likely some of the sources we have observed will belong to multiple systems even though the companions may lie below the detection limit however all three regions show two or more point sources at close angular separation see insets of figure 1 corresponding to linear separations of 1700 to 6000au we can not determine whether these stars are physically bound or simply close due to projection effects but we can calculate the instrumental sensitivity required to confirm or deny the association assuming they are physically bound in a keplerian orbit the maximum proper motions projection angle 0xmath16 of xmath4 01 mas year are too small to be detected on short temporal baselines the maximum velocity difference projection angle 90xmath16 xmath4 2 kmsxmath15 is achievable by high spectral resolution observations of any line features the mass distribution of stars is generally well described as a power law through the initial mass function imf given the mass of gas available to form stars we may estimate the likelihood that a cluster will end up with the most massive stars that are observed in it the fraction of gas that forms stars is given by the star formation efficiency sfe and is observationally found to be less than 50 in any cloud and to be xmath17 33 for nearby embedded clusters xcite for a cluster whose total stellar mass is 120 50 320 mxmath7 equivalent to the gas mass determined for the three cores xcite estimate that the mean maximum mass that a star may have in it is 10 5 20 mxmath7 this is comparable to the largest observed mass in two out of the three cases however we also observed several other stars in each cluster so can estimate the probability of generating stars of equal or greater mass than the remaining mass distribution we did this by running monte carlo simulations to populate 10xmath14 clusters using xcite xcite and xcite imfs until the available gas mass was exhausted we only considered clusters which contained a star of at least equal mass to the most massive observed the simulations show that even using the salpeter form of the imf most biased toward forming high mass stars and allowing 50 of the gas to form stars it is difficult to generate the observed mass distributions probabilities xmath18 10xmath19 10xmath20 10xmath15 for the three cores respectively by itself this may not be significant for a single cluster however since the probability is low for all three sources studied it is unlikely that the mass distribution of the most massive stars can be produced by sampling a standard form of the imf from the reservoir of gas available for star formation this conclusion would not hold if there was a substantial stellar mass already in the cluster that remains unseen or if much of the original gas mass had already been dispersed from the core due to star formation the former requires a sfe close to unity and given the relatively quiescent state of the cores the latter seems unlikely a larger sample of young massive protoclusters is required to draw general conclusions however in all three hot molecular cores traced by methanol maser emission we have found multiple mir sources which can be separated into three morphological types unresolved point sources p unresolved 
point source with weak surrounding extended emission pe and extended sources e the point sources lie at close angular separations future high spatial and spectral resolution observations may be able to determine whether or not they are physically bound the methanol masers are found closest to the brightest mir point source within the assumed 1xmath2 pointing accuracy cooler extended sources dominate the luminosity the time scale for a source at the core edge to migrate to the centre is comparable to the hot molecular core lifetime so it is not possible to rule out large protostellar motions within the core from the derived gas mass of the core and mass estimates for the sources monte carlo simulations show that it is difficult to generate the observed distributions for the most massive cluster members from the gas in the core using a standard form of the imf this conclusion would not hold however if most of the original gas has already formed stars or has been dispersed such that the original core mass is much greater than now observed sl would like to thank alistair glass scott fisher tony wong and melvin hoare for helpful discussion of the data and scientific input we thank the anonymous referee for the thorough response and insightful comments this work was made possible by funding from the australian research council and unsw the gemini observatory is operated by the association of universities for research in astronomy inc under a cooperative agreement with the nsf on behalf of the gemini partnership nsf usa pparc uk nrc canada conicyt chile arc australia cnpq brazil and conicet argentina b t 1989 interstellar extinction in the infrared infrared spectroscopy in astronomy proceedings of the 22nd eslab symposium held in salamanca spain 7 9 december 1988 edited by bh kaldeich esa sp290 european space agency 1989 p93 pp 93 g bouvier j eislffel j simon t 2001 in asp conf ser 243 from darkness to light origin and evolution of young stellar clusters statistical properties of visual binaries as tracers of the formation and early evolution of young stellar clusters
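The cluster-population test described above can be sketched as follows. This is a simplified variant for illustration only; the mass limits, the Salpeter slope and the acceptance criterion are our assumptions, not the simulation actually used in this work.

```python
import random

def sample_imf_mass(m_lo=0.1, m_hi=50.0, alpha=2.35):
    """Draw one stellar mass (solar masses) from a power-law IMF dN/dm ~ m^-alpha
    (alpha = 2.35 is the Salpeter slope) by inverting the cumulative distribution."""
    a = 1.0 - alpha
    u = random.random()
    return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)

def populate_cluster(gas_mass, sfe=0.5):
    """Draw stars until the star-forming budget (gas mass times star formation efficiency) is used up."""
    budget = gas_mass * sfe
    stars, total = [], 0.0
    while total < budget:
        m = sample_imf_mass()
        stars.append(m)
        total += m
    return stars

def fraction_with_massive_star(gas_mass, m_required, trials=10000, sfe=0.5):
    """Fraction of simulated clusters whose most massive member reaches m_required (solar masses)."""
    hits = sum(1 for _ in range(trials) if max(populate_cluster(gas_mass, sfe)) >= m_required)
    return hits / trials

# Illustrative numbers only: a 120 solar-mass gas reservoir and a 20 solar-mass most massive source
print(fraction_with_massive_star(gas_mass=120.0, m_required=20.0))
```

The stronger statement in the text comes from requiring the several most massive observed members to be reproduced simultaneously, which is why the quoted probabilities are much smaller than this single-star check.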
We present high-resolution mid-infrared images toward three hot molecular cores signposted by methanol maser emission: G173.49+2.42 (S231/S233IR), G188.95+0.89 (S252/AFGL5180) and G192.60-0.05 (S255IR). Each of the cores was targeted with Michelle on Gemini North using 5 filters from 7.9 to 18.5 xmath0 m. We find each contains both large regions of extended emission and multiple luminous point sources which, from their extremely red colours xmath1, appear to be embedded young stellar objects. The closest angular separations of the point sources in the three regions are 0.79, 1.00 and 3.33 xmath2, corresponding to linear separations of 1700, 1800 and 6000 au respectively. The methanol maser emission is found closest to the brightest MIR point source, within the assumed 1 xmath2 pointing accuracy. Mass and luminosity estimates for the sources range from 3-22 Mxmath3 and 50-40000 Lxmath3. Assuming the MIR sources are embedded objects and the observed gas mass provides the bulk of the reservoir from which the stars formed, it is difficult to generate the observed distributions for the most massive cluster members from the gas in the cores using a standard form of the IMF. Keywords: masers -- stars: formation -- techniques: high angular resolution -- stars: early-type -- stars: mass function -- infrared: stars.
introduction observations and data reduction conclusions acknowledgments
connectivity and network design problems play an important role in combinatorial optimization and algorithms both for their theoretical appeal and their many real world applications an interesting and large class of problems are of the following type given a graph xmath5 with edge or node costs find a minimum cost subgraph xmath6 of xmath2 that satisfies certain connectivity properties for example given an integer xmath7 one can ask for the minimum cost spanning subgraph that is xmath8edge or xmath8vertex connected if xmath9 then this is the classical minimum spanning tree mst problem for xmath10 the problem is np hard and also apx hard to approximate more general versions of connectivity problems are obtained if one seeks a subgraph in which a subset of the nodes xmath11 referred to as terminals are xmath8connected the well known steiner tree problem is to find a minimum cost subgraph that xmath12connects a given set xmath13 many of these problems are special cases of the survivable network design problem sndp in sndp each pair of nodes xmath14 specifies a connectivity requirement xmath15 and the goal is to find a minimum cost subgraph that has xmath15 disjoint paths for each pair xmath16 given the intractability of these connectivity problems there has been a large amount of work on approximation algorithms a number of elegant and powerful techniques and results have been developed over the years see xcite in particular the primal dual method xcite and iterated rounding xcite have led to some remarkable results including a xmath1approximation for edge connectivity sndp xcite an interesting class of problems related to some of the connectivity problems described above is obtained by requiring that only xmath0 of the given terminals be connected these problems are partly motivated by applications in which one seeks to maximize profit given a upper bound budget on the cost for example a useful problem in vehicle routing applications is to find a path that maximizes the number of vertices in it subject to a budget xmath17 on the length of the path in the exact optimization setting the profit maximization problem is equivalent to the problem of minimizing the cost length of a path subject to the constraint that at least xmath0 vertices are included of course the two versions need not be approximation equivalent nevertheless understanding one is often fruitful or necessary to understand the other the most well studied of these problems is the xmath0mst problem the goal here is to find a minimum cost subgraph of the given graph xmath2 that contains at least xmath0 vertices or terminals this problem has attracted considerable attention in the approximation algorithms literature and its study has led to several new algorithmic ideas and applications xcite we note that the steiner tree problem can be relatively easily reduced in an approximation preserving fashion to the xmath0mst problem more recently lau et al xcite considered the natural generalization of xmath0mst to higher connectivity in particular they defined the xmath18subgraph problem to be the following find a minimum cost subgraph of the given graph xmath2 that contains at least xmath0 vertices and is xmath8edge connected we use the notation xmath0xmath8ec to refer to this problem in xcitean xmath19 approximation was claimed for the xmath0xmath1ec problem however the algorithm and proof in xcite are incorrect more recently and in independent work from ours the authors of xcite obtained a different algorithm for xmath0xmath1ec that 
yields an xmath20 approximation we give later a more detailed comparison between their approach and ours it is also shown in xcite that a good approximation for xmath0xmath8ec when xmath8 is large would yield an improved algorithm for the xmath0densest subgraph problem xcite in this problem one seeks a xmath0vertex subgraph of a given graph xmath2 that has the maximum number of edges the xmath0densest subgraph problem admits an xmath21 approximation for some fixed constant xmath22 xcite but has resisted attempts at an improved approximation for a number of years now in this paper we consider the vertex connectivity generalization of the xmath0mst problem we define the xmath0xmath8vc problem as follows given an integer xmath0 and a graph xmath2 with edge costs find the minimum cost xmath8vertex connected subgraph of xmath2 that contains at least xmath0 vertices we also consider the terminal version of the problem where the subgraph has to contain xmath0 terminals from a given terminal set xmath3 it can be easily shown that the xmath0xmath8ec problem reduces to the xmath0xmath8vc problem for any xmath23 we also observe that the xmath0xmath8ec problem with terminals can be easily reduced as follows to the uniform problem where every vertex is a terminal for each terminal xmath24 create xmath25 dummy vertices xmath26 and attach xmath27 to xmath28 with xmath8 parallel edges of zero cost now set xmath29 in the new graph one can avoid using parallel edges by creating a clique on xmath26 using zero cost edges and connecting xmath8 of these vertices to xmath28 note however that this reduction only works for edge connectivity we are not aware of a reduction that reduces the xmath0xmath8vc problem with a given set of terminals to the xmath0xmath8vc problem even when xmath30 in this paperwe consider the xmath0xmath1vc problem our main result is the following thm kv there is an xmath31 approximation for the xmath0xmath1vc problem where xmath32 is the number of terminals cor ke there is an xmath31 approximation for the xmath0xmath1ec problem where xmath32 is the number of terminals one of the technical ingredients that we develop is the theorem below which may be of independent interest given a graph xmath2 with edge costs and weights on terminals xmath3 we define xmath33 for a subgraph xmath6 to be the ratio of the cost of edges in xmath6 to the total weight of terminals in xmath6 thm cycle let xmath2 be an xmath1vertex connected graph with edge costs and let xmath34 be a set of terminals then there is a simple cycle xmath35 containing at least xmath1 terminals a non trivial cycle such that the density of xmath35 is at most the density of xmath2 moreover such a cycle can be found in polynomial time using the above theorem and an lp approachwe obtain the following cor cycle given a graph xmath5 with edge costs and xmath32 terminals xmath36 there is an xmath37 approximation for the problem of finding a minimum density non trivial cycle note that theorem thm cycle and corollary cor cycle are of interest because we seek a cycle with at least two terminals a minimum density cycle containing only one terminal can be found by using the well known min mean cycle algorithm in directed graphs xcite we remark however that although we suspect that the problem of finding a minimum density non trivial cycle is np hard we currently do not have a proof theorem thm cycle shows that the problem is equivalent to the densxmath1vc problem defined in the next section remark the reader may wonder whether xmath0xmath1ec or 
xmath0xmath1vc admit a constant factor approximation since the xmath0mst problem admits one we note that the main technical tool which underlies xmath38 approximations for xmath0mst problem xcite is a special property that holds for a lp relaxation of the prize collection steiner tree problem xcite which is a lagrangian relaxation of the steiner tree problem such a property is not known to hold for generalizations of xmath0mst including xmath0xmath1ec and xmath0xmath1vc and the xmath0steiner forest problem xcite thus one is forced to rely on alternative and problem specific techniques we consider the rooted version of xmath0xmath1vc the goal is to find a min cost subgraph that xmath1connects at least xmath0 terminals to a specified root vertex xmath39 it is relatively straightforward to reduce xmath0xmath1vc to its rooted version see section sec k2vc for details we draw inspiration from algorithmic ideas that led to poly logarithmic approximations for the xmath0mst problem to describe our approach to the rooted xmath0xmath1vc problem we define a closely related problem for a subgraph xmath6 that contains xmath39 let xmath40 be the number of terminals that are xmath1connected to xmath39 in xmath6 then the density of xmath6 is simply the ratio of the cost of xmath6 to xmath40 the densxmath1vc problem is to find a 2connected subgraph of minimum density an xmath37 approximation for the densxmath1vc problem where xmath32 is the number of terminals can be derived in a some what standard way by using a bucketing and scaling trick on a linear programming relaxation for the problem we exploit the known bound of xmath1 on the integrality gap of a natural lp for the sndp problem with vertex connectivity requirements in xmath41 xcite the bucketing and scaling trick has seen several uses in the past and has recently been highlighted in several applications xcite our algorithm for xmath0xmath1vc uses a greedy approach at the high level we start with an empty subgraph xmath42 and use the approximation algorithm for densxmath1vc in an iterative fashion to greedily add terminals to xmath42 until at least xmath43 terminals are in xmath42 this approach would yield an xmath44 approximation if xmath45 however the last iteration of the densxmath1vc algorithm may add many more terminals than desired with the result that xmath46 in this casewe can not bound the quality of the solution obtained by the algorithm to overcome this problem one can try to prune the subgraph xmath6 added in the last iteration to only have the desired number of terminals for the xmath0mst problem xmath6 is a tree and pruning is quite easy we remark that this yields a rather straightforward xmath20 approximation for xmath0mst and could have been discovered much before a more clever analysis given in xcite one of our technical contributions is to give a pruning step for the xmath0xmath1vc problem to accomplish this we use two algorithmic ideas the first is encapsulated in the cycle finding algorithm of theorem thm cycle second we use this cycle finding algorithm to repeatedly merge subgraphs until we get the desired number of terminals in one subgraph this latter step requires care the cycle merging scheme is inspired by a similar approach from the work of lau et al xcite on the xmath0xmath1ec problem and in xcite on the directed orienteering problem these ideas yield an xmath47 approximation we give a slightly modified cycle merging algorithm with a more sophisticated and non trivial analysis to obtain an improved xmath31 approximation 
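a schematic sketch in python of the greedy outline just described is given below; the oracles dens2vc_oracle and prune_oracle and their signatures are our own stand ins for the densxmath1vc approximation and for the pruning step, and are not routines from the paper

    def greedy_k2vc(G, root, k, dens2vc_oracle, prune_oracle):
        # high level outline of the greedy scheme for the rooted problem.
        # dens2vc_oracle(G, root, covered) is assumed to return a subgraph H
        # together with the set of new terminals it 2-connects to the root;
        # prune_oracle(H, need) is assumed to return a subgraph of H keeping
        # only `need` of those terminals (the role of the pruning step).
        solution_parts = []
        covered = set()
        need = k
        while need > 0:
            H, new_terms = dens2vc_oracle(G, root, covered)
            new_terms = set(new_terms) - covered
            if len(new_terms) <= need:        # augmentation step
                solution_parts.append(H)
                covered |= new_terms
                need -= len(new_terms)
            else:                             # pruning step (last iteration only)
                H_pruned, kept = prune_oracle(H, need)
                solution_parts.append(H_pruned)
                covered |= set(kept)
                need = 0
        return solution_parts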
some remarks are in order to compare our work to that of xcite on the xmath0xmath1ec problem the combinatorial algorithm in xciteis based on finding a low density cycle or a related structure called a bi cycle the algorithm in xcite to find such a structure is incorrect further the cycles are contracted along the way which limits the approach to the xmath0xmath1ec problem contracting a cycle in xmath1node connected graph may make the resulting graph not xmath1node connected in our algorithmwe do not contract cycles and instead introduce dummy terminals with weights to capture the number of terminals in an already formed component this requires us to now address the minimum density non trivial simple cycle problem which we do via theorem thm cycle and corollary cor cycle in independent work lau et al xcite obtain a new and correct xmath48approximation for xmath0xmath1ec they also follow the same approach that we do in using the lp for finding dense subgraphs followed by the pruning step however in the pruning step they use a completely different approach they use the sophisticated idea of no where zero xmath49flows xcite although the use of this idea is elegant the approach works only for the xmath0xmath1ec problem while our approach is less complex and leads to an algorithm for the more general xmath0xmath1vc problem we work with graphs in which some vertices are designated as terminals given a graph xmath2 with edge costs and terminal weights we define the density of a subgraph xmath6 to be sum of the costs of edges in xmath6 divided by the sum of the weights of terminals in xmath6 henceforth we use xmath1connected graph to mean a xmath1vertex connected graph the goal of the xmath0xmath1vc problem is to find a minimum cost 2connected subgraph on at least xmath0 terminals for simplicity of exposition however we stick to the more restricted version recall that in the rooted xmath0xmath1vc problem the goal is to find a min cost subgraph on at least xmath0 terminals in which every terminal is 2connected to the specified root xmath39 the unrooted xmath0xmath1vc problem can be reduced to the rooted version by guessing 2 vertices xmath16 that are in an optimal solution creating a new root vertex xmath39 and connecting it with 0cost edges to xmath50 and xmath28 it is not hard to show that any solution to the rooted problem in the modified graph can be converted to a solution to the unrooted problem by adding 2 minimum cost vertex disjoint paths between xmath50 and xmath28 since xmath50 and xmath28 are in the optimal solution the cost of these added paths can not be more than xmath51 we omit further details from this extended abstract in the densxmath1vc problem the goal is to find a subgraph xmath6 of minimum density in which all terminals of xmath6 are 2connected to the root the following lemma is proved in section subsec lp below it relies on a xmath1approximation via a natural lp for the min cost xmath1connectivity problem due to fleischer jain and williamson xcite and some standard techniques lem densv there is an xmath37approximation algorithm for the densxmath1vc problem where xmath32 is the number of terminals in the given instance let xmath51 be the cost of an optimal solution to the xmath0xmath1vc problem we assume knowledge of xmath51 this can be dispensed with using standard methods we pre process the graph by deleting any terminal that does not have 2 vertex disjoint paths to the root xmath39 of total cost at most xmath51 the high level description of the algorithm for the rooted 
xmath0xmath1vc problem is given below

    xmath52 xmath53 is the empty graph
    while xmath54 :
        use the approximation algorithm for densxmath1vc to find a subgraph xmath6 in xmath2
        if xmath55 : xmath56 xmath57 mark all terminals in xmath6 as non terminals
        else : prune xmath6 to obtain xmath58 that contains xmath59 terminals xmath60 xmath61
    output xmath42

at the beginning of any iteration of the while loop the graph contains a solution to the densxmath1vc problem of density at most xmath62 therefore the graph xmath6 returned always has density at most xmath63 if xmath55 we add xmath6 to xmath42 and decrement xmath59 we refer to this as the augmentation step otherwise we have a graph xmath6 of good density but with too many terminals in this case we prune xmath6 to find a graph with the required number of terminals this is the pruning step a simple set cover type argument shows the following lemma lem greedy if at every augmentation step we add a graph of density at most xmath64 where xmath59 is the number of additional terminals that must be selected the total cost of all the augmentation steps is at most xmath65 therefore we now only have to bound the cost of the graph xmath58 added in the pruning step we prove the following theorem in section sec pruning thm avekv let xmath66 be an instance of the rooted xmath0xmath1vc problem with root xmath39 such that every vertex of xmath2 has xmath1 vertex disjoint paths to xmath39 of total cost at most xmath67 and such that xmath68 there is a polynomial time algorithm to find a solution to this instance of cost at most xmath69 we can now prove our main result for the xmath0xmath1vc problem theorem thm kv let xmath51 be the cost of an optimal solution to the rooted xmath0xmath1vc problem by lemma lem greedy the total cost of the augmentation steps of our greedy algorithm is xmath70 to bound the cost of the pruning step let xmath59 be the number of additional terminals that must be covered just prior to this step the algorithm for the densxmath1vc problem returns a graph xmath6 with xmath71 terminals and density at most xmath72 as a result of our pre processing step every vertex has 2 vertex disjoint paths to xmath39 of total cost at most xmath51 now we use theorem thm avekv to prune xmath6 and find a graph xmath58 with xmath59 terminals and cost at most xmath73 therefore the total cost of our solution is xmath74 it remains only to prove lemma lem densv that there is an xmath75approximation for the densxmath1vc problem and theorem thm avekv bounding the cost of the pruning step we prove the former in section subsec lp below before the latter is proved in section sec pruning we develop some tools in section sec cycles chief among these tools is theorem thm cycle recall that the densxmath1vc problem was defined as follows given a graph xmath5 with edge costs a set xmath76 of terminals and a root xmath77 find a subgraph xmath6 of minimum density in which every terminal of xmath6 is 2connected to xmath39 here the density of xmath6 is defined as the cost of xmath6 divided by the number of terminals it contains not including xmath39 we describe an algorithm for densxmath1vc that gives an xmath37approximation and sketch its proof we use an lp based approach and a bucketing and scaling trick see xcite for applications of this idea and a constant factor bound on the integrality gap of an lp for sndp with vertex connectivity requirements in xmath41 xcite we define lp dens as the following lp relaxation of densxmath1vc for each terminal xmath78 the variable
xmath79 indicates whether or not xmath28 is chosen in the solution by normalizing xmath80 to 1 and minimizing the sum of edge costs we minimize the density xmath81 is the set of all simple cycles containing xmath78 and the root xmath39 for any xmath82 xmath83 indicates how much flow is sent from xmath28 to xmath39 through xmath35 note that a pair of vertex disjoint paths is a cycle the flow along a cycle is 1 if we can 2connect xmath78 to xmath39 using the edges of the cycle the variable xmath84 indicates whether the edge xmath85 is used by the solution xmath86 xmath87 it is not hard to see that an optimal solution to lp dens has cost at most the density of an optimal solution to densxmath1vc we now show how to obtain an integral solution of density at most xmath88 where xmath89 is the cost of an optimal solution to lp dens the linear program lp dens has an exponential number of variables but a polynomial number of non trivial constraints it can however be solved in polynomial time fix an optimal solution to lp dens of cost xmath89 and for each xmath90 for ease of notation assume xmath91 is an integer let xmath92 be the set of terminals xmath78 such that xmath93 since xmath94 there is some index xmath95 such that xmath96 since every terminal xmath97 has xmath98 the number of terminals in xmath92 is at least xmath99 we claim that there is a subgraph xmath6 of xmath2 with cost at most xmath100 in which every terminal of xmath92 is 2connected to the root if this is true the density of xmath6 is at most xmath101 and hence we have an xmath37approximation for the densxmath1vc problem to prove our claim about the cost of the subgraph xmath6 in which every terminal of xmath92 is 2connected to xmath39 consider scaling up the given optimum solution of lp dens by a factor of xmath102 for each terminal xmath103 the flow from xmath78 to xmath39 in this scaled solution is at least 1 and the cost of the scaled solution is xmath104 in xcite the authors describe a linear program xmath105 to find a minimum cost subgraph in which a given set of terminals is 2connected to the root and show that this linear program has an integrality gap of 2 the variables xmath84 in the scaled solution to lp dens correspond to a feasible solution of xmath105 with xmath92 as the set of terminals the integrality gap of 2 implies that there is a subgraph xmath6 in which every terminal of xmath92 is 2connected to the root with cost at most xmath106 therefore the algorithm for densxmath1vc is 1 find an optimal fractional solution to lp dens 2 find a set of terminals xmath92 such that xmath107 3 find a min cost subgraph xmath6 in which every terminal in xmath92 is 2connected to xmath39 using the algorithm of xcite xmath6 has density at most xmath37 times the optimal solution to densxmath1vc a cycle xmath108 is non trivial if it contains at least 2 terminals we define the min density non trivial cycle problem given a graph xmath5 with xmath3 marked as terminals edge costs and terminal weights find a minimum density cycle that contains at least 2 terminals note that if we remove the requirement that the cycle be non trivial that is it contains at least 2 terminals the problem reduces to the min mean cycle problem in directed graphs and can be solved exactly in polynomial time see xcite algorithms for the min density non trivial cycle problem are a useful tool for solving the xmath0xmath1vc and xmath0xmath1ec problems in this section we give an xmath75approximation algorithm for the minimum density non trivial cycle problem first we 
prove theorem thm cycle that a 2connected graph with edge costs and terminal weights contains a simple non trivial cycle with density no more than the average density of the graph we give two algorithms to find such a cycle the first described in section subsec nonpoly is simpler but the running time is not polynomial a more technical proof that leads to a strongly polynomial time algorithm is described in section subsec strong we recommend this proof be skipped on a first reading to find a non trivial cycle of density at most that of the 2connected input graph xmath2 we will start with an arbitrary non trivial cycle and successively find cycles of better density until we obtain a cycle with density at most xmath109 the following lemma shows that if a cycle xmath35 has an ear with density less than xmath110 we can use this ear to find a cycle of lower density lem goodear let xmath35 be a non trivial cycle and xmath6 an ear incident to xmath35 at xmath50 and xmath28 such that xmath111 let xmath112 and xmath113 be the two internally disjoint paths between xmath50 and xmath28 in xmath35 then xmath114 and xmath115 are both simple cycles and one of these is non trivial and has density less than xmath110 xmath35 has at least 2 terminals so it has finite density xmath6 must then have at least 1 terminal let xmath116 xmath117 and xmath118 be respectively the sum of the costs of the edges in xmath112 xmath113 and xmath6 and let xmath119 xmath120 and xmath121 be the sum of the weights of the terminals in xmath112 xmath113 and xmath122 assume wlog that xmath112 has density at most that of xmath113 that is xmath123 and xmath113 has cost 0 and weight 0 in this case let xmath112 be the component with non zero weight xmath112 must contain at least one terminal and so xmath114 is a simple non trivial cycle the statement xmath124 is equivalent to xmath125 xmath126 therefore xmath114 is a simple cycle containing at least 2 terminals of density less than xmath110 lem2conncomp given a cycle xmath35 in a xmath1connected graph xmath2 let xmath42 be the graph formed from xmath2 by contracting xmath35 to a single vertex xmath28 if xmath6 is a connected component of xmath127 xmath128 is xmath1connected in xmath42 let xmath6 be an arbitrary connected component of xmath127 and let xmath129 to prove that xmath58 is 2connected we first observe that xmath28 is 2connected to any vertex xmath130 any set that separates xmath131 from xmath28 in xmath58 separates xmath131 from the cycle xmath35 in xmath2 it now follows that for all vertices xmath132 xmath131 and xmath133 are 2connected in xmath58 suppose deleting some vertex xmath50 separates xmath131 from xmath133 the vertex xmath50 can not be xmath28 since xmath6 is a connected component of xmath127 but if xmath134 xmath28 and xmath131 are in the same component of xmath135 since xmath28 is 2connected to xmath131 in xmath58 similarly xmath28 and xmath133 are in the same component of xmath135 and so deleting xmath50 does not separate xmath131 from xmath133 we now show that given any 2connected graph xmath2 we can find a non trivial cycle of density no more than that of xmath2 thm cycleexists let xmath2 be a xmath1connected graph with at least xmath1 terminals xmath2 contains a simple non trivial cycle xmath136 such that xmath137 let xmath35 be an arbitrary non trivial simple cycle such a cycle always exists since xmath2 is xmath1connected and has at least 2 terminals if xmath138 we give an algorithm that finds a new non trivial cycle xmath139 such that xmath140 repeating 
this process we obtain cycles of successively better densities until eventually finding a non trivial cycle xmath136 of density at most xmath109 let xmath42 be the graph formed by contracting the given cycle xmath35 to a single vertex xmath28 in xmath42 xmath28 is not a terminal and so has weight 0 consider the 2connected components of xmath42 from lemma lem2conncomp each such component is formed by adding xmath28 to a connected component of xmath127 and pick the one of minimum density if xmath6 is this component xmath141 by an averaging argument xmath6 contains at least 1 terminal if it contains 2 or more terminals recursively find a non trivial cycle xmath139 in xmath6 such that xmath142 if xmath139 exists in the given graph xmath2 it has the desired properties and we are done otherwise xmath139 contains xmath28 and the edges of xmath139 form an ear of xmath35 in the original graph xmath2 the density of this ear is less than the density of xmath35 so we can apply lemma lem goodear to obtain a non trivial cycle in xmath2 that has density less than xmath110 finally if xmath6 has exactly 1 terminal xmath50 find any 2 vertex disjoint paths using edges of xmath6 from xmath50 to distinct vertices in the cycle xmath35 since xmath2 is 2connected there always exist such paths the cost of these paths is at most xmath143 and concatenating these 2 paths corresponds to an ear of xmath35 in xmath2 the density of this ear is less than xmath110 again we use lemma lem goodear to obtain a cycle in xmath2 with the desired properties we remark again that the algorithm of theorem thm cycleexists does not lead to a polynomial time algorithm even if all edge costs and terminal weights are polynomially bounded in section subsec strong we describe a strongly polynomial time algorithm that given a graph xmath2 finds a non trivial cycle of density at most that of xmath2 note that neither of these algorithms may directly give a good approximation to the min density non trivial cycle problem because the optimal non trivial cycle may have density much less than that of xmath2 however we can use theorem thm cycleexists to prove the following theorem thm equivalence there is an xmath144approximation to the unrooted densxmath1vc problem if and only if there is an xmath144approximation to the problem of finding a minimum density non trivial cycle assume we have a xmath145approximation for the densxmath1vc problem we use it to find a low density non trivial cycle solve the densxmath1vc problem on the given graph since the optimal cycle is a 2connected graph our solution xmath6 to the densxmath1vc problem has density at most xmath145 times the density of this cycle find a non trivial cycle in xmath6 of density at most that of xmath6 it has density at most xmath145 times that of an optimal non trivial cycle note that any instance of the unrooted densxmath1vc problem has an optimal solution that is a non trivial cycle consider any optimal solution xmath6 of density xmath146 by theorem thm cycle xmath6 contains a non trivial cycle of density at most xmath146 this cycle is a valid solution to the densxmath1vc problem therefore a xmath147approximation for the min density non trivial cycle problem gives a xmath147approximation for the densxmath1vc problem theorem thm equivalence and lemma lem densv imply an xmath37approximation for the minimum density non trivial cycle problem this proves corollary cor cycle
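the contraction step used in the argument above can be organised as in the following sketch, written against a networkx style graph with an edge attribute cost and a vertex attribute weight; the function names are ours, the ear handling of lemma lem goodear and degenerate inputs are omitted, and parallel edges created by the contraction are simply collapsed keeping the cheaper cost

    import networkx as nx

    def density(G):
        # cost of edges divided by total terminal weight; infinite if weight 0
        cost = sum(d.get("cost", 0) for _, _, d in G.edges(data=True))
        weight = sum(G.nodes[v].get("weight", 0) for v in G.nodes)
        return float("inf") if weight == 0 else cost / weight

    def min_density_block_after_contraction(G, cycle_nodes):
        # contract the cycle to a single weight-0 vertex and return the
        # 2-connected block of minimum density; by the lemma on contraction,
        # each such block is the contracted vertex plus one connected
        # component of the remaining graph.  (assumes the cycle is not all of G)
        u = "contracted_cycle"
        H = nx.Graph()
        for v in G.nodes:
            if v not in cycle_nodes:
                H.add_node(v, weight=G.nodes[v].get("weight", 0))
        H.add_node(u, weight=0)
        for a, b, d in G.edges(data=True):
            x = u if a in cycle_nodes else a
            y = u if b in cycle_nodes else b
            if x == y:
                continue
            c = d.get("cost", 0)
            if H.has_edge(x, y):
                H[x][y]["cost"] = min(H[x][y]["cost"], c)
            else:
                H.add_edge(x, y, cost=c)
        rest = H.subgraph(set(H.nodes) - {u})
        blocks = [H.subgraph(set(comp) | {u})
                  for comp in nx.connected_components(rest)]
        return min(blocks, key=density)

we say that a graph xmath5 is minimally 2connected on its terminals if for every edge xmath148 some pair of terminals is not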
2connected in the graph xmath149 section subsec strong shows that in any graph which is minimally 2connected on its terminals every cycle is non trivial therefore the problem of finding a minimum density non trivial cycle in such graphs is just that of finding a minimum density cycle which can be solved exactly in polynomial time however as we explain at the end of the section this does not directly lead to an efficient algorithm for arbitrary graphs in this section we describe a strongly polynomial time algorithm which given a 2connected graph xmath5 with edge costs and terminal weights finds a non trivial cycle of density at most that of xmath2 we begin with several definitions let xmath35 be a cycle in a graph xmath2 and xmath42 be the graph formed by deleting xmath35 from xmath2 let xmath150 be the connected components of xmath42 we refer to these as earrings of xmath35 if xmath151 were simply a path it would be an ear of xmath35 but xmath151 may be more complex for each xmath151 let the vertices of xmath35 incident to it be called its clasps from the definition of an earring for any pair of clasps of xmath151 there is a path between them whose internal vertices are all in xmath151 we say that a vertex of xmath35 is an anchor if it is the clasp of some earring an anchor may be a clasp of multiple earrings a segment xmath13 of xmath35 is a path contained in xmath35 such that the endpoints of xmath13 are both anchors and no internal vertex of xmath13 is an anchor note that the endpoints of xmath13 might be clasps of the same earring or of distinct earrings it is easy to see that the segments partition the edge set of xmath35 by deleting a segment we refer to deleting its edges and internal vertices observe that if xmath13 is deleted from xmath2 the only vertices of xmath152 that lose an edge are the endpoints of xmath13 a segment is safe if the graph xmath153 is 2connected arbitrarily pick a vertex xmath154 of xmath35 as the origin and consecutively number the vertices of xmath35 clockwise around the cycle as xmath155 the first clasp of an earring xmath6 is its lowest numbered clasp and the last clasp is its highest numbered clasp if the origin is a clasp of xmath6 it is considered the first clasp not the last the arc of an earring is the subgraph of xmath35 found by traversing clockwise from its first clasp xmath156 to its last clasp xmath157 the length of this arc is xmath158 that is the length of an arc is the number of edges it contains note that if an arc contains the origin it must be the first vertex of the arc figure fig earring illustrates several of these definitions

thm earringproof let xmath6 be an earring of minimum arc length every segment contained in the arc of xmath6 is safe let xmath169 be the set of earrings with arc identical to that of xmath6 since they have the same arc we refer to this as the arc of xmath169 or the critical arc let the first clasp of every earring in xmath169 be xmath163 and the last clasp of each earring in xmath169 be xmath164 because the earrings in xmath169 have arcs of minimum length any earring xmath170 has a clasp xmath171 that is not in the critical arc that is xmath172 or xmath173 we must show that every segment contained in the critical arc is safe recall that a segment xmath13 is safe if the graph xmath153 is 2connected given an arbitrary segment xmath13 in the critical arc let xmath156 and xmath157 xmath174 be the anchors that are its endpoints we prove that there are always 2 internally vertex disjoint paths between xmath156 and xmath157 in xmath152 this suffices to show 2connectivity we consider several cases depending on the earrings that contain xmath156 and xmath157 figure fig earringproof illustrates these cases if xmath156 and xmath157 are contained in the same earring xmath58 it is easy to find two vertex disjoint paths between them in xmath153 the first path is clockwise from xmath175 to xmath176 in the cycle xmath35 the second path is entirely contained in the earring xmath58 an earring is connected in xmath177 so we can always find such a path otherwise xmath156 and xmath157 are clasps of distinct earrings we consider three cases both xmath156 and xmath157 are clasps of earrings in xmath169 one is but not both or neither is 1 we first consider that both xmath156 and xmath157 are clasps of earrings in xmath169 let xmath156 be a clasp of xmath166 and xmath157 a clasp of xmath167 the first path is from xmath157 to xmath163 through xmath167 and then clockwise along the critical arc from xmath163 to xmath156 the second path
is from xmath157 to xmath164 clockwise along the critical path and then xmath164 to xmath156 through xmath166 it is easy to see that these paths are internally vertex disjoint 2 now suppose neither xmath156 nor xmath157 is a clasp of an earring in xmath169 let xmath156 be a clasp of xmath166 and xmath157 be a clasp of xmath167 the first path we find follows the critical arc clockwise from xmath157 to xmath164 the last clasp of the critical arc from xmath164 to xmath163 through xmath165 and again clockwise through the critical arc from xmath163 to xmath156 internal vertices of this path are all in xmath6 or on the critical arc let xmath178 be a clasp of xmath166 not on the critical arc and xmath179 be a last clasp of xmath167 not on the critical arc the second path goes from xmath156 to xmath178 through xmath166 from xmath180 to xmath181 through the cycle xmath35 outside the critical arc and from xmath179 to xmath157 through xmath167 internal vertices of this path are in xmath182 or in xmath35 but not part of the critical arc since each of xmath178 and xmath179 are outside the critical arc therefore we have 2 vertex disjoint paths from xmath156 to xmath157 finally we consider the case that exactly one of xmath183 is a clasp of an earring in xmath169 suppose xmath156 is a clasp of xmath168 and xmath157 is a clasp of xmath184 the other case where xmath185 and xmath186 is symmetric and omitted though figure fig earringproof illustrates the paths let xmath181 be the index of a clasp of xmath167 outside the critical arc the first path is from xmath157 to xmath164 through the critical arc and then from xmath164 to xmath156 through xmath166 the second path is from xmath157 to xmath179 through xmath167 and from xmath179 to xmath156 clockwise through xmath35 note that the last part of this path enters the critical arc at xmath163 and continues through the arc until xmath156 internal vertices of the first path that are in xmath35 are on the critical arc but have index greater than xmath175 internal vertices of the second path that belong to xmath35 are either not in the critical arc or have index between xmath163 and xmath156 therefore the two paths are internally vertex disjoint we now describe our algorithm to find a non trivial cycle of good density proving theorem thm cycle let xmath2 be a xmath1connected graph with edge costs and terminal weights and at least xmath1 terminals there is a polynomial time algorithm to find a non trivial cycle xmath136 in xmath2 such that xmath137 theorem thm cycle let xmath2 be a graph with xmath32 terminals and density xmath146 we describe a polynomial time algorithm that either finds a cycle in xmath2 of density less than xmath146 or a proper subgraph xmath42 of xmath2 that contains all xmath32 terminals in the latter case we can recurse on xmath42 until we eventually find a cycle of density at most xmath146 we first find in xmath187 time a minimum density cycle xmath35 in xmath2 by theorem thm cycleexists xmath35 has density at most xmath146 because the minimum density non trivial cycle has at most this density if xmath35 contains at least 2 terminals we are done otherwise xmath35 contains exactly one terminal xmath28 since xmath2 contains at least 2 terminals there must exist at least one earring of xmath35 let xmath28 be the origin of this cycle xmath35 and xmath6 an earring of minimum arc length by theorem thm earringproof every segment in the arc of xmath6 is safe let xmath13 be such a segment since xmath28 was selected as the origin xmath28 is not an 
internal vertex of xmath13 as xmath28 is the only terminal of xmath35 xmath13 contains no terminals and therefore the graph xmath188 is 2connected and contains all xmath32 terminals of xmath2 the proof above also shows that if xmath2 is minimally 2connected on its terminals that is xmath2 has no 2connected proper subgraph containing all its terminals every cycle of xmath2 is non trivial if a cycle contains 0 or 1 terminals it has a safe segment containing no terminals which can be deleted this gives a contradiction therefore given a graph that is minimally 2connected on its terminals finding a minimum density non trivial cycle is equivalent to finding a minimum density cycle and so can be solved exactly in polynomial time this suggests a natural algorithm for the problem given a graph that is not minimally 2connected on its terminals delete edges and vertices until the graph is minimally 2connected on the terminals and then find a minimum density cycle as shown above this gives a cycle of density no more than that of the input graph but this may not be the minimum density cycle of the original graph for instance there exist instances where the minimum density cycle uses edges of a safe segment xmath13 that might be deleted by this algorithm in this section we prove theorem thm avekv we are given a graph xmath2 and xmath3 a set of at least xmath0 terminals further every terminal in xmath2 has 2 vertex disjoint paths to the root xmath39 of total cost at most xmath67 let xmath32 be the number of terminals in xmath2 and xmath189 its total cost xmath190 is the density of xmath2 we describe an algorithm that finds a subgraph xmath6 of xmath2 that contains at least xmath0 terminals each of which is 2connected to the root and of total edge cost xmath191 we can assume xmath192 or the trivial solution of taking the entire graph xmath2 suffices the main phase of our algorithm proceeds by maintaining a set of 2connected subgraphs that we call clusters and repeatedly finding low density cycles that merge clusters of similar weight to form larger clusters the weight of a cluster xmath136 denoted by xmath193 is roughly the number of terminals it contains clusters are grouped into tiers by weight tier xmath95 contains clusters with weight at least xmath194 and less than xmath102 initially each terminal is a separate cluster in tier 0 we say a cluster is large if it has weight at least xmath0 and small otherwise the algorithm stops when most terminals are in large clusters we now describe the algorithm mergeclusters below to simplify notation let xmath144 be the quantity xmath195 we say that a cycle is good if it has density at most xmath144 that is good cycles have density at most xmath196 times the density of the input graph

    for each xmath95 in xmath197 do
        if xmath198 : every terminal has weight 1
        else :
            mark all vertices as non terminals
            for each small 2connected cluster xmath136 in tier xmath95 do
                add a dummy terminal xmath199 to xmath2 of weight xmath193
                add dummy edges of cost 0 from xmath199 to two arbitrary distinct vertices of xmath136
        while xmath2 has a non trivial cycle xmath35 of density at most xmath144 in xmath2 :
            let xmath200 be the small clusters that contain a terminal or an edge of xmath35
            (note that the terminals in xmath35 belong to a subset of xmath201)
            form a new cluster xmath202 of a higher tier by merging the clusters xmath203
            xmath204
            if xmath198 : mark all terminals in xmath202 as non terminals
            else : delete all dummy terminals in xmath202 and the associated dummy edges
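the bookkeeping behind mergeclusters can also be sketched in python as follows; the cycle search itself is abstracted into an oracle find_good_cycle that returns the small clusters touched by one good density cycle or none, so the sketch only records how clusters, weights and tiers evolve; all names and the tier arithmetic are ours and only indicative

    import math

    def merge_clusters(terminals, k, find_good_cycle):
        # clusters are frozensets of terminals with an integer weight;
        # tier i holds clusters of weight in [2**i, 2**(i+1)); small means
        # weight < k.  find_good_cycle(i, tier_i, clusters) stands in for
        # the non trivial cycle search and returns the list of small
        # clusters to be merged, or None if no good cycle remains.
        clusters = {frozenset([t]): 1 for t in terminals}   # cluster -> weight
        tiers = int(math.log2(max(k, 2))) + 1
        for i in range(tiers):
            while True:
                tier_i = [c for c, w in clusters.items()
                          if w < k and (1 << i) <= w < (1 << (i + 1))]
                merged = find_good_cycle(i, tier_i, clusters)
                if not merged:
                    break
                # all small clusters touched by the cycle are merged,
                # whatever their tier; the new cluster lands in a higher tier
                new_cluster = frozenset().union(*merged)
                new_weight = sum(clusters.pop(c) for c in merged)
                clusters[new_cluster] = new_weight
        return {c: w for c, w in clusters.items() if w >= k}   # large clusters

we briefly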
remark on some salient features of this algorithm and our analysis before presenting the details of the proofs 1 in iteration xmath95 the terminals correspond to tier xmath95 clusters clusters are 2connected subgraphs of xmath2 and by using cycles to merge clusters we preserve 2connectivity as the clusters become larger 2 when a cycle xmath35 is used to merge clusters all small clusters that contain an edge of xmath35 regardless of their tier are merged to form the new cluster therefore at any stage of the algorithm all currently small clusters are edge disjoint large clusters on the other hand are frozen even if they intersect a good cycle xmath35 they are not merged with other clusters on xmath35 thus at any time an edge may be in multiple large clusters and up to one small cluster 3 in iteration xmath95 of mergeclusters the density of a cycle xmath35 is only determined by its cost and the weight of terminals in xmath35 corresponding to tier xmath95 clusters though small clusters of other lower or higher tiers might be merged using xmath35 we do not use their weight to pay for the edges of xmath35 4 the xmath95th iteration terminates when no good cycles can be found using the remaining tier xmath95 clusters at this point there may be some terminals remaining that correspond to clusters which are not merged to form clusters of higher tiers however our choice of xmath144 which defines the density of good cycles is such that we can bound the number of terminals that are left behind in this fashion therefore when the algorithm terminates most terminals are in large clusters by bounding the density of large clusters we can find a solution to the rooted xmath0xmath1vc problem of bounded density because we always use cycles of low density to merge clusters an analysis similar to that of xcite and xcite shows that every large cluster has density at most xmath205 we first present this analysis though it does not suffice to prove theorem thm avekv a more careful analysis shows that there is at least one large cluster of density at most xmath206 this allows us to prove the desired theorem we now formally prove that mergeclusters has the desired behavior first we present a series of claims which together show that when the algorithm terminates most terminals are in large clusters and all clusters are 2connected rem cluster throughout the algorithm the graph xmath2 is always 2connected the weight of a cluster is at most the number of terminals it contains the only structural changes to xmath2 are when new vertices are added as terminals they are added with edges to two distinct vertices of xmath2 this preserves 2connectivity as does deleting these terminals with the associated edges to see that the second claim is true observe that if a terminal contributes weight to a cluster it is contained in that cluster a terminal can be in multiple clusters but it contributes to the weight of exactly one cluster we use the following simple proposition in proofs of 2connectivity the proof is straightforward and hence omitted prop shareedge let xmath207 and xmath208 be xmath1connected subgraphs of a graph xmath5 such that xmath209 then the graph xmath210 is xmath1connected lem clusters2conn the clusters formed by mergeclusters are all xmath1connected let xmath202 be a cluster formed by using a cycle xmath35 to merge clusters xmath200 the edges of the cycle xmath35 form a 2connected subgraph of xmath2 and we assume that each xmath211 is 2connected by induction further xmath35 contains at least 2 vertices of each 
xmath211 may be a singleton vertex for instance if we are in tier 0 but such a vertex does not affect 2connectivity so we can use induction and proposition prop shareedge above we assume xmath212 is 2connected by induction and xmath35 contains 2 vertices of xmath213 so xmath214 is 2connected note that we have shown xmath215is 2connected but xmath35 and hence xmath202 might contain dummy terminals and the corresponding dummy edges however each such terminal with the 2 associated edges is a ear of xmath202 deleting them leaves xmath202 2connected lem fewleftbehind the total weight of small clusters in tier xmath95 that are not merged to form clusters of higher tiers is at most xmath216 assume this were not true this means that mergeclusters could find no more cycles of density at most xmath144 using the remaining small tier xmath95 clusters but the total cost of all the edges is at most xmath189 and the sum of terminal weights is at least xmath216 this implies that the density of the graph using the remaining terminals is at most xmath217 but by theorem thm cycleexists the graph must then contain a good non trivial cycle and so the while loop would not have terminated cor weightlargeclusters when the algorithm mergeclusters terminates the total weight of large clusters is at least xmath218 each terminal not in a large cluster contributes to the weight of a cluster that was not merged with others to form a cluster of a higher tier the previous lemma shows that the total weight of such clusters in any tier is at most xmath219 since there are xmath220 tiers the total number of terminals not in large clusters is less than xmath221 so far we have shown that most terminals reach large clusters all of which are 2connected but we have not argued about the density of these clusters the next lemma says that if we can find a large cluster of good density we can find a solution to the xmath0xmath1vc problem of good density lem segment let xmath202 be a large cluster formed by mergeclusters if xmath202 has density at most xmath222 we can find a graph xmath223 with at least xmath0 terminals each of which is xmath1connected to xmath39 of total cost at most xmath224 let xmath200 be the clusters merged to form xmath202 in order around the cycle xmath35 that merged them each xmath211 was a small cluster of weight at most xmath0 a simple averaging argument shows that there is a consecutive segment of xmath211s with total weight between xmath0 and xmath225 such that the cost of the edges of xmath35 connecting these clusters together with the costs of the clusters themselves is at most xmath226 let xmath227 be the first cluster of this segment and xmath228 the last let xmath28 and xmath229 be arbitrary terminals of xmath227 and xmath228 respectively connect each of xmath28 and xmath229 to the root xmath39 using 2 vertex disjoint paths the cost of this step is at most xmath230 we assumed that every terminal could be 2connected to xmath39 using disjoint paths of cost at most xmath67 the graph xmath223 thus constructed has at least xmath0 terminals and total cost at most xmath231 we show that every vertex xmath232 of xmath223 is 2connected to xmath39 this completes our proof let xmath232 be an arbitrary vertex of xmath223 suppose there is a cut vertex xmath131 which when deleted separates xmath232 from xmath39 both xmath28 and xmath229 are 2connected to xmath39 and therefore neither is in the same component as xmath232 in xmath233 however we describe 2 vertex disjoint paths xmath234 and xmath235 in xmath223 from 
xmath232 to xmath28 and xmath229 respectively deleting xmath131 can not separate xmath232 from both xmath28 and xmath229 which gives a contradiction the paths xmath234 and xmath235 are easy to find let xmath211 be the cluster containing xmath232 the cycle xmath35 contains a path from vertex xmath236 to xmath237 and another vertex disjoint path from xmath238 to xmath239 concatenating these paths with paths from xmath240 to xmath28 in xmath227 and xmath241 to xmath229 in xmath228 gives us vertex disjoint paths xmath242 from xmath243 to xmath28 and xmath244 from xmath245 to xmath229 since xmath211 is 2connected we can find vertex disjoint paths from xmath232 to xmath243 and xmath245 which gives us the desired paths xmath234 and xmath235 may not be in any cluster xmath211 in this case xmath234 is formed by using edges of xmath35 from xmath232 to xmath237 and then a path from xmath240 to xmath28 xmath235 is formed similarly we now present the two analyses of density referred to earlier the key difference between the weaker and tighter analysis is in the way we bound edge costs in the former each large cluster pays for its edges separately using the fact that all cycles used have density at most xmath246 in the latter we crucially use the fact that small clusters which share edges are merged roughly speaking because small clusters are edge disjoint the average density of small clusters must be comparable to the density of the input graph xmath2 once an edge is in a large cluster we can no longer use the edge disjointness argument we must pay for these edges separately but we can bound this cost first the following lemma allows us to show that every large cluster has density at most xmath205 lem tiercost for any cluster xmath202 formed by mergeclusters during iteration xmath95 the total cost of edges in xmath202 is at most xmath247 we prove this lemma by induction on the number of vertices in a cluster let xmath248 be the set of clusters merged using a cycle xmath35 to form xmath202 let xmath249 be the set of clusters in xmath248 of tier xmath95 and xmath250 be xmath251 xmath250 contains clusters of tiers less or greater than xmath95 that contained an edge of xmath35 the cost of edges in xmath202 is at most the sum of the cost of xmath35 the cost of xmath249 and the cost of xmath250 since all clusters in xmath250 have been formed during iteration xmath95 or earlier and are smaller than xmath202 we can use induction to show that the cost of edges in xmath250 is at most xmath252 all clusters in xmath249are of tier xmath95 and so must have been formed before iteration xmath95 any cluster formed during iteration xmath95 is of a strictly greater tier so we use induction to bound the cost of edges in xmath249 by xmath253 finally because xmath35 was a good density cycle and only clusters of tier xmath95 contribute to calculating the density of xmath35 the cost of xmath35 is at most xmath254 therefore the total cost of edges in xmath202 is at most xmath255 let xmath202 be an arbitrary large cluster since we have only xmath220 tiers the previous lemma implies that the cost of xmath202 is at most xmath256 that is the density of xmath202 is at most xmath205 and we can use this fact together with lemma lem segment to find a solution to the rooted xmath0xmath1vc problem of cost at most xmath257 this completes the weaker analysis but this does not suffice to prove theorem thm avekv to prove the theorem we would need to use a large cluster xmath202 of density xmath206 instead of xmath205 for the purpose of the 
more careful analysis we implicitly construct a forest xmath258 on the clusters formed by mergeclusters initially the vertex set of xmath258 is just xmath13 the set of terminals and xmath258 has no edges every time a cluster xmath202 is formed by merging xmath200 we add a corresponding vertex xmath202 to the forest xmath258 and add edges from xmath202 to each of xmath203 xmath202 is the parent of xmath259 we also associate a cost with each vertex in xmath258 the cost of the vertex xmath202 is the cost of the cycle used to form xmath202 from xmath203 we thus build up trees as the algorithm proceeds the root of any tree corresponds to a cluster that has not yet become part of a bigger cluster the leaves of the trees correspond to vertices of xmath2 they all have cost 0 also any large cluster xmath202 formed by the algorithm is at the root of its tree we refer to this tree as xmath260 for each large cluster xmath202 after mergeclusters terminates say that xmath202 is of type xmath95 if xmath202 was formed during iteration xmath95 of mergeclusters we now define the final stage clusters of xmath202 they are the clusters formed during iteration xmath95 that became part of xmath202 we include xmath202 itself in the list of final stage clusters even though xmath202 was formed in iteration xmath95 of mergeclusters it may contain other final stage clusters for instance during iteration xmath95 we may merge several tier xmath95 clusters to form a cluster xmath136 of tier xmath261 then if we find a good density cycle xmath35 that contains an edge of xmath136 xmath136 will merge with the other clusters of xmath35 the penultimate clusters of xmath202 are those clusters that exist just before the beginning of iteration xmath95 and become a part of xmath202 equivalently the penultimate clusters are those formed before iteration xmath95 that are the immediate children in xmath260 of final stage clusters figure 1 illustrates the definitions of final stage and penultimate clusters such a tree could be formed if in iteration xmath262 4 clusters of this tier merged to form xmath263 a cluster of tier xmath264 subsequently in iteration xmath95 clusters xmath6 and xmath265 merge to form xmath266 we next find a good cycle containing xmath267 and xmath2 xmath266 contains an edge of this cycle so these three clusters are merged to form xmath17 note that the cost of this cycle is paid for by the weights of xmath267 and xmath2 only xmath266 is a tier xmath264 cluster and so its weight is not included in the density calculation finally we find a good cycle paid for by xmath268 and xmath35 since xmath17 and xmath263 share edges with this cycle they all merge to form the large cluster xmath202 an edge of a large cluster xmath202 is said to be a final edge if it is used in a cycle xmath35 that produces a final stage cluster of xmath202 all other edges of xmath202 are called penultimate edges note that any penultimate edge is in some penultimate cluster of xmath202
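one way to organise this bookkeeping, and to read off the final and penultimate costs defined next, is sketched below; the class and function names are ours and the sketch only mirrors the forest construction described above

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MergeNode:
        # one vertex of the merge forest: the cost of the cycle that created
        # the cluster, the iteration in which it was created, and the clusters
        # that were merged to form it; leaves (single terminals) have cost 0
        cycle_cost: float = 0.0
        iteration: int = -1
        children: List["MergeNode"] = field(default_factory=list)

    def total_cost(node: MergeNode) -> float:
        return node.cycle_cost + sum(total_cost(c) for c in node.children)

    def final_cost(node: MergeNode, final_iter: int) -> float:
        # sum of cycle costs over the final stage clusters, i.e. over the
        # nodes of the tree created during the final iteration; these form
        # the top of the tree since a parent is never older than its children
        if node.iteration != final_iter:
            return 0.0
        return node.cycle_cost + sum(final_cost(c, final_iter) for c in node.children)

    def penultimate_cost(node: MergeNode, final_iter: int) -> float:
        return total_cost(node) - final_cost(node, final_iter)

we define the final cost of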
xmath202 to be the sum of the costs of its final edges and its penultimate cost to be the sum of the costs of its penultimate edges clearly the cost of xmath202 is the sum of its final and penultimate costs we bound the final costs and penultimate costs separately recall that an edge is a final edge of a large cluster xmath202 if it is used by mergeclusters to form a cycle xmath35 in the final iteration during which xmath202 is formed the reason we can bound the cost of final edges is that the cost of any such cycle is at most xmath144 times the weight of clusters contained in the cycle and a cluster does not contribute to the weight of more than one cycle in an iteration this is also the essence of lemma lem tiercost we formalize this intuition in the next lemma lem final the final cost of any large cluster xmath202 is at most xmath270 where xmath271 is the weight of xmath202 let xmath202 be an arbitrary large cluster in the construction of the tree xmath260 we associated with each vertex of xmath260 the cost of the cycle used to form the corresponding cluster to bound the total final cost of xmath202 we must bound the sum of the costs of vertices of xmath260 associated with final stage clusters the weight of xmath202 xmath271 is at least the sum of the weights of the penultimate tier xmath95 clusters that become a part of xmath202 therefore it suffices to show that the sum of the costs of vertices of xmath260 associated with final stage clusters is at most xmath144 times the sum of the weights of xmath202 s penultimate tier xmath95 clusters note that a tier xmath95 cluster must have been formed prior to iteration xmath95 and hence it can not itself be a final stage cluster a cycle was used to construct a final stage cluster xmath136 only if its cost was at most xmath144 times the sum of weights of the penultimate tier xmath95 clusters that become a part of xmath136 larger clusters may become a part of xmath136 but they do not contribute weight to the density calculation therefore if xmath136 is a vertex of xmath260 corresponding to a final stage cluster the cost of xmath136 is at most xmath144 times the sum of the weights of its tier xmath95 immediate children in xmath260 butxmath260 is a tree and so no vertex corresponding to an penultimate tier xmath95 cluster has more than one parent that is the weight of a penultimate cluster pays for only one final stage cluster therefore the sum of the costs of vertices associated with final stage clusters is at most xmath144 times the sum of the weights of xmath202 s penultimate tier xmath95 clusters and so the final cost of xmath202 is at most xmath270 lem penultimate if xmath272 and xmath273 are distinct large clusters of the same type no edge is a penultimate edge of both xmath272 and xmath273 suppose by way of contradiction that some edge xmath85 is a penultimate edge of both xmath272 and xmath273 which are large clusters of type xmath95 let xmath274 respectively xmath275 be a penultimate cluster of xmath272 resp xmath273 containing xmath85 as penultimate clusters both xmath274 and xmath275 are formed before iteration xmath95 but until iteration xmath95 neither is part of a large cluster and two small clusters can not share an edge without being merged therefore xmath274 and xmath275 must have been merged so they can not belong to distinct large clusters giving the desired contradiction thm goodlargecluster after mergeclusters terminates at least one large cluster has density at most xmath206 we define the penultimate density of a large 
cluster to be the ratio of its penultimate cost to its weight consider the total penultimate costs of all large clusters for any xmath95 each edge xmath276can be a penultimate edge of at most 1 large cluster of type xmath95 this implies that each edge can be a penultimate edge of at most xmath220 clusters therefore the sum of penultimate costs of all large clusters is at most xmath277 further the total weight of all large clusters is at least xmath278 therefore the weighted average penultimate density of large clusters is at most xmath279 and hence there exists a large cluster xmath202 of penultimate density at most xmath280 the penultimate cost of xmath202 is therefore at most xmath281 and from lemma lem final the final cost of xmath202 is at most xmath270 therefore the density of xmath202 is at most xmath282 theorem thm goodlargecluster and lemma lem segment together imply that we can find a solution to the rooted xmath0xmath1vc problem of cost at most xmath191 this completes our proof of theorem thm avekv we list the following open problems can the approximation ratio for the xmath0xmath1vc problem be improved from the current xmath44 to xmath283 or better removing the dependence on xmath32 to obtain even xmath284 could be interesting if not can one improve the approximation ratio for the easier xmath0xmath1ec problem can we obtain approximation algorithms for the xmath0xmath8vc or xmath0xmath8ec problems for xmath285 in general few results are known for problems where vertex connectivity is required to be greater than 2 but there has been more progress with higher edge connectivity requirements given a 2connected graph of density xmath146 with some vertices marked as terminals we show that it contains a non trivial cycle with density at most xmath146 and give an algorithm to find such a cycle we have also found an xmath37approximation for the problem of finding a minimum density non trivial cycle is there a constant factor approximation for this problem can it be solved exactly in polynomial time b awerbuch y azar a blum and s vempala new approximation guarantees for minimum weight xmath0trees and prize collecting salesmen 281254262 1999 preliminary version in proc of acm stoc 1995 m x goemans and d p williamson the primal dual method for approximation algorithms and its application to network design problems in ds hochbaum editor approximation algorithms for np hard problems pws publishing company 1996
in the xmath0xmath1vc problem we are given an undirected graph xmath2 with edge costs and an integer xmath0 the goal is to find a minimum cost 2vertex connected subgraph of xmath2 containing at least xmath0 vertices a slightly more general version is obtained if the input also specifies a subset xmath3 of terminals and the goal is to find a subgraph containing at least xmath0 terminals closely related to the xmath0xmath1vc problem and in fact a special case of it is the xmath0xmath1ec problem in which the goal is to find a minimum cost 2edge connected subgraph containing xmath0 vertices the xmath0xmath1ec problem was introduced by lau et al xcite who also gave a poly logarithmic approximation for it no previous approximation algorithm was known for the more general xmath0xmath1vc problem we describe an xmath4 approximation for the xmath0xmath1vc problem
introduction the algorithm for the @xmath0-@xmath1vc problem finding low-density non-trivial cycles pruning 2-connected graphs of good density conclusions
grb light curves measured with swift consist of a bat light curve in the 15 150 kev range followed after slewing within xmath2 s by a detailed 03 10 kev xrt x ray light curve xcite this information supplements our knowledge of the highly variable hard x ray and xmath0ray light curves measured from many grbs with batse and other grb detectors about one half of swift grbs show x ray flares or short timescale structure sometimes hours or later after the onset of the grb approximately xmath3 of the swift grbs display rapid x ray declines and an additional xmath4 display features unlike simple blast wave model predictions xcite we make three points in this paper 1 highly variable light curves can be produced by an external shock under the assumption that the grb blast wave does not spread or spreads much more slowly than assumed from gas dynamic or relativistic hydrodynamic models that do not take into account magnetic effects in grb blast waves if this assumption is valid then it is wrong to conclude that highly variable xmath0ray emissions x ray flares with xmath5 or late time x ray flares require delayed central engine activity or colliding shells 2 external shocks in grb blast waves can accelerate cosmic ray protons and ions to xmath1 ev making grbs a logical candidate to accelerate the highest energy cosmic rays 3 escape of ultra high energy cosmic rays uhecrs takes place from an external shock formed by an expanding grb blast wave on time scales of a few hundred seconds for the observer blast wave deceleration due to the loss of the internal hadronic energy is proposed xcite to be the cause of x ray declines in grb light curves observed with swift we have performed a detailed analysis of the interaction between a grb blast wave shell and an external stationary cloud xcite the analysis is performed under the assumption that the cloud width xmath6 where xmath7 is the distance of the cloud from the grb explosion the interaction is divided into three phases 1 a collision phase with both a forward and reverse shock 2 a penetration phase where either the reverse shock has crossed the shell while the forward shock continues to cross the cloud or vice versa and 3 an expansion phase where both shocks have crossed the cloud and shell and the shocked fluid expands the shell width is written as xmath8 and the proper number density of the relativistic shell is given by xmath9 where xmath10 is the coasting lorentz factor of the grb blast wave and xmath11 is the apparent isotropic energy release short timescale flaring requires a a strong forward shock which from the relativistic shock jump conditions xcite imply a maximum cloud density given by xmath12 and b significant blast wave deceleration to provide efficient energy extraction which occurs in clouds with thick columns xcite that is with densities xmath13 these two conditions translate into the requirement that xmath14 in order to produce short timescale variability the short timescale variabilty condition xcite for quasi spherical clouds is xmath15 using eq deltax for the shell width eqs deltacl and deltacl imply the requirement that xmath16 in order to produce rapid variability from an external shock hence the production of xmath0ray pulses and x ray flares from external shocks depends on whether the grb blast wave width spreads in the coasting phase according to eq deltax with xmath17 as is generally argued in the gas dynamical study of xcite inhomogeneities in the grb fireballproduce a spread in particle velocities of order xmath18 so that 
xmath19 when xmath20 this dependence is also obtained in a hydrodynamical analysis xcite two points can be made about these relations first the spread in xmath21 considered for a spherical fireball is averaged over all directions as the fireball expands and becomes transparent the variation in fluid motions or gas particle directions over a small solid angle xmath22 of the full sky becomes substantially less second the particles within a magnetized blast wave shell will expand and adiabatically cool so that the fluid will spread with thermal speed xmath23 the comoving width of the blast wave is xmath24 so that the spreading radius xmath25 adiabatic expansion of nonrelativistic particles can produce a very cold shell with xmath26 leading to very small shell widths the requirement on the thinness of xmath27 does not apply to the adiabatic self similar phase where the width is necessarily xmath28 as implied by the relativistic shock hydrodynamic equations xcite even in this case however xmath29 if the blast wave is highly radiative xcite under the assumption of a strong forward shock and small clouds in the vicinity of a grb highly variable grb light curves are formed with reasonable efficiency xmath30 to transform blast wave energy into xmath0 rays xcite the maximum particle energy for a cosmic ray proton accelerated by an external shock in a grb blast wave is derived as follows consider a grb blast wave with apparent isotropic energy release xmath31 ergs initial coasting lorentz factor xmath32 and external medium density xmath33 xmath34 the comoving blast wave volume for the assumed spherically symmetric explosion after reaching distance xmath7 from the center of the explosion is xmath35 where the shell width xmath36 the factor xmath37 is the product of the geometrical factor xmath38 and the factor xmath39 from the continuity equations of relativistic hydrodynamics xmath40 is the evolving grb blast wave lorentz factor the hillas condition xcite for maximum particle energy xmath41 namely that the particle larmor radius is less than the size scale of the system xmath42 in the stationary frame primes refer to the comoving frame is given by xmath43 the largest particle energy is reached at the deceleration radius xmath44 when xmath45 where the deceleration radius xmath46 hence xmath47 the mean magnetic field xmath48 in the grb blast wave is assigned in terms of a magnetic field parameter xmath49 that gives the magnetic field energy density in terms of the energy density of the downstream shocked fluid so xmath50 thus xmath51 xcite so that external shocks of grbs can accelerate particles to ultra high and indeed super gzk energies implicit in this result is that acceleration occurs within the grb blast wave through for example second order fermi acceleration xcite acceleration to ultra high energy through first order relativistic shock acceleration requires a highly magnetized surrounding medium xcite
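to make the above scaling concrete, the following is a rough numerical sketch of the hillas limited maximum proton energy evaluated at the deceleration radius, it is not the paper's exact expression (the numerical prefactors are hidden behind the xmath placeholders above) and all parameter values, the isotropic energy, coasting lorentz factor, external density, magnetic field parameter and the factor of 12 used for the comoving width, are illustrative assumptions

```python
# Rough numerical sketch (not the exact expressions of this article, whose prefactors
# are hidden behind the xmath placeholders): Hillas-limited proton energy for an
# external shock evaluated at the deceleration radius, Gaussian cgs units.
import math

# illustrative fiducial parameters (assumed, not taken from the text)
E_iso   = 1e54      # apparent isotropic energy release [erg]
Gamma0  = 300.0     # initial coasting Lorentz factor
n_ext   = 1.0       # external medium density [cm^-3]
eps_B   = 0.1       # magnetic-field equipartition parameter
Z       = 1         # proton charge number

m_p_c2  = 1.503e-3  # proton rest energy [erg]
e_esu   = 4.803e-10 # elementary charge [esu]
erg2eV  = 6.242e11  # erg -> eV

# deceleration radius: where the swept-up ambient rest-mass energy times Gamma0^2
# becomes comparable to E_iso
r_dec = (3.0 * E_iso / (4.0 * math.pi * n_ext * m_p_c2 * Gamma0**2)) ** (1.0 / 3.0)

# downstream energy density and magnetic field behind a strong relativistic shock
u_down  = 4.0 * Gamma0**2 * n_ext * m_p_c2            # erg cm^-3
B_prime = math.sqrt(8.0 * math.pi * eps_B * u_down)   # G

# comoving shell width ~ r / (12 Gamma); the factor 12 is an assumed illustrative
# stand-in for the geometric/compression factor mentioned in the text
width_prime = r_dec / (12.0 * Gamma0)

# Hillas condition: comoving Larmor radius < comoving width; boost to observer frame
E_max_erg = Gamma0 * Z * e_esu * B_prime * width_prime
print(f"r_dec = {r_dec:.2e} cm,  B' = {B_prime:.1f} G,  E_max ~ {E_max_erg*erg2eV:.1e} eV")
```

with these assumed numbers the estimate comes out at roughly 10^20 ev, in line with the super gzk statement above, although the precise value obviously tracks the assumed prefactors and parameters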
if uhecrs are accelerated by grb blast waves then blast wave dynamics will be affected by the loss of internal energy when the uhecrs escape this effect is proposed to explain the rapid x ray declines in the swift grb light curves xcite photohadronic processes become important when the threshold condition xmath52 is satisfied where xmath53 is the dimensionless photon energy xmath54 is the proton energy and xmath55 is the proton lorentz factor for protons interacting with photons at the peak photon energy xmath56 of the xmath57 spectrum xmath58 the comoving timescale for a proton to lose a significant fraction of its energy through photohadronic processes is given by xmath59 where xmath60 c xmath61 xmath62b is the product of the photohadronic cross section and inelasticity and the comoving energy density of photons with energy xmath63 is xmath64 the relation between the measured xmath65 flux xmath66 and internal energy density is xmath67 where xmath68 cm is the luminosity distance of the grb for protons interacting with photons with energy xmath69 we therefore find that the comoving time required for a proton with energy xmath70 as measured by an observer outside the blast wave to lose a significant fraction of its energy through photohadronic processes is xmath71 where xmath72 cm and xmath73 ergs xmath74 sxmath75 is the xmath57 flux measured at xmath76 the relation between xmath70 and xmath76 is given by eq epk the dependence of the terms xmath77 xmath78 xmath79 and xmath80 on observer time in eq tprimephipi can be analytically expressed for the external shock model in terms of the grb blast wave properties xmath11 xmath10 environmental parameters eg xmath81 and microphysical blast wave parameters xmath49 and xmath82 xcite this can also be done for other important timescales for example the available comoving time xmath83 since the start of the grb explosion the comoving acceleration time xmath84 written as a factor xmath85 times the larmor timescale xcite the escape timescale xmath86 in the bohm diffusion approximation and the proton synchrotron energy loss timescale xmath87 fig 1 shows the rates or the inverse of the timescales for xmath88 ev protons in the case of an adiabatic blast wave that decelerates in a uniform surrounding medium the left hand panel of fig 1 uses the parameter set xmath89 and the right hand panel uses the parameter set xmath90 the characteristic deceleration timescale in the left and right cases given by xmath91 s is xmath92 s and xmath93 s respectively for these parameters it takes a few hundred seconds to accelerate protons to energies xmath94 ev at which time photohadronic losses and escape start to be important photohadronic losses inject electrons and photons into the grb blast wave the electromagnetic cascade emission in addition to hyperrelativistic electron synchrotron radiation from neutron escape followed by subsequent photohadronic interactions xcite makes a delayed anomalous xmath0ray emission component as observed in some grbs xcite ultra high energy neutrino secondaries are produced by the photohadronic processes detection of high energy neutrinos from grbs would confirm the importance of hadronic processes in grb blast waves the ultra high energy neutrons and escaping protons form the uhecrs with energies xmath1 ev the grb blast wave rapidly loses internal energy due to the photohadronic processes and particle escape the blast wave will then rapidly decelerate producing a rapidly decaying x ray flux as argued in more detail elsewhere xcite the rapidly decaying fluxes in swift grbs are signatures of uhecr acceleration by grbs if this scenario is correct glast will detect anomalous xmath0ray components particularly in those grbs that undergo rapid x ray declines in their x ray light curves this work is supported by the office of naval research by nasa glast science investigation no dpr s1563y and nasa swift guest investigator grant no dpr nng05ed41i thanks also to guido chincarini for the kind invitation
highly variable xmath0ray pulses and x ray flares in grb light curves can result from external shocks rather than central engine activity under the assumption that the grb blast wave shell does not spread acceleration of cosmic rays to xmath1 ev energies can take place in the external shocks of grbs escape of hadronic energy in the form of uhecrs leads to a rapidly decelerating grb blast wave which may account for the rapid x ray declines observed in swift grbs
introduction x-ray flares and @xmath0-ray pulses from external shocks cosmic ray acceleration in grb blast waves rapid x-ray declines from uhecr escape
there have been many reviews of positron physics over the years xcite more recently xcite and xcite considered the subject with an emphasis on experimental measurements involving noble gas targets the related topic of antihydrogen formation has also been thoroughly reviewed xcite resonances and the closely related bound states of positrons with atoms and molecules has also been extensively discussed xcite this work concentrates on the progress in application of theoretical methods to scattering processes in a quantum few body system involving positrons as projectiles and multi electron atomic targets with explicit treatment of positronium formation particular emphasis is on the developments taken place since the comprehensive review of positron physics by xcite it begins by describing the currently available theories of low energy positron collisions with atoms and simple molecules then it describes the development and application of the two centre convergent close coupling method which explicitly treats the ps formation processes developments in positron physics have resulted in several technologies in medicine and material science in medicine the use of the positron emission tomography pet scanners help to make diagnoses of cancer detection and of certain brain function disorders material science uses positron annihilation lifetime spectroscopy pals to analyze and design specific materials the critical component is positronium ps formation with its annihilation providing the key signature of its origin ps is a short lived exotic atom of a bound positron electron pair that has similar structure to atomic hydrogen scattering experiments are the main tool of modern physics to learn about the structure of matter by analyzing collision products we can extract useful information about the objects being studied historically ordinary matter particles like electrons and protons were predominantly used as scattering particles in experimental atomic and molecular physics withthe development of positron and antiproton beams studies of interactions of these particles with matter became possible the last decade has seen significant progress in low energy trap based positron beams xcite new high resolution experimental measurements have been performed for a range of atomic and molecular targets including he xcite ne and ar xcite xe xcite kr xcite hxmath0 xcite and hxmath0o xcite the development of positron beams motivated novel experimental and theoretical studies particularly important are the positronium formation cross sections see the recent recommendations of xcite in addition interest has been motivated by possible binding of positrons to atoms xcite theoretical description of electron impact ionisation and excitation processes has seen significant progress in recent years due to the development of various highly sophisticated methods including the exterior complex scaling ecs xcite r matrix with pseudo states rmps xcite time dependent close coupling tdcc xcite and convergent close coupling ccc xcite a review of electron induced ionisation theory has been given by xcite such problems are examples of a class where there is only one natural centre namely the atomic centre all coordinates are readily written with the origin set at the atomic centre yet there are many atomic collision systems of practical and scientific interest that involve at least two centres such as the positron hydrogen scattering system this is a three body system where all the particles are distinguishable and which allows for 
their rearrangement here we have two natural centres the atomic centre and the positronium ps centre for positron hydrogen scattering ionisation now splits into two separate components the rearrangement process of ps formation and the three body breakup process a proper formulation of ps formation processes requires a combined basis consisting of two independent basis sets for each of the centres which makes theoretical studies considerably more challenging than for electron scattering furthermore the positron atom system is an ideal prototype of the ubiquitous collision systems such as proton atom scattering where charge exchange processes also require a two centre treatment see xcite for example every positron atom scattering system has an ionisation breakup threshold above which an electron may be freely ejected at 6.8 ev below this threshold is the ps formation threshold at higher energies excitation or ionization of the target can take place in addition for multi electron targets there could be many more reaction channels such as multiple ionisation and ionisation with excitation for molecular targets there could be rovibrational excitation and dissociation another reaction channel is the positron electron pair annihilation which can occur at any scattering energy in this process the positron collides with one of the target s electrons and annihilates into 2 or 3 gamma rays theoretical xcite and experimental studies xcite of the annihilation process have shown that its cross section is up to 10xmath1 times smaller than the elastic scattering cross section therefore the annihilation channel is often omitted from scattering calculations the elastic scattering excitation ps formation and ionisation are the dominant channels of primary interest the first theoretical studies of positron scattering from atoms date back to the 1950s when xcite used the first born approximation fba to describe ps formation in exmath2h collisions the born method is based on the assumption that the wavefunction for the scattering system can be expanded in a rapidly convergent series this approximation consists in using plane waves to describe the projectile and scattered particles the born approximation is reliable when the scattering potential is relatively small compared to the incident energy and thus is applicable only at high energies therefore this method is mainly focused on high energy excitation and ionization processes one of the most successful methods applied to the low energy exmath2h elastic scattering problem is the kohn variational method the method was initially developed for scattering phase shifts in nuclear reactions by xcite later this method was extended to positron hydrogen scattering by xcite a detailed description of the method was given by xcite the method is based on finding the form of a functional the kohn functional that involves the phaseshift as a parametric function of the total wavefunction requiring the functional to be stationary with respect to variations of the parameters generates equations for the linear parameters the phaseshifts can be accurately obtained by performing iterative calculations and finding the values of the nonlinear parameters that make the functional stationary
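for orientation, the kohn functional referred to above can be written schematically for single channel potential scattering, in one common textbook normalization quoted from standard scattering theory rather than reconstructed from this article's placeholders, as

\[
[\tan\delta] \;=\; \tan\delta_t \;-\; \frac{2m}{\hbar^{2}k}\,\big\langle \Psi_t \big|\, H - E \,\big| \Psi_t \big\rangle ,
\qquad
\frac{\partial\,[\tan\delta]}{\partial c_i} \;=\; 0 ,
\]

where the trial wavefunction behaves asymptotically like sin(kr) + tan(delta_t) cos(kr), stationarity with respect to the linear coefficients c_i yields the linear equations mentioned above, and the nonlinear parameters are then varied until the functional is stationary, exactly as described in the text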
the many body theory of xcite utilizes techniques that originated from quantum field theory using the feynman diagram technique the perturbation series in the interaction between particles can be written in an intuitive way when it is applied to positron atom scattering however difficulties arise due to the necessity to take into account virtual ps formation however a finite number of perturbation theory terms cannot describe a bound ps state xcite developed a sophisticated method based on the many body perturbation theory they used an approximation by considering virtual ps formation only in the ground state the calculations with this method showed that for elastic scattering of positrons on hydrogen and helium atoms the virtual ps formation contribution was almost 30 and 20 of the total correlation potential respectively xcite have further improved the method by introducing the techniques for the exact summation of the electron positron ladder diagram series the method was applied to xmath3 scattering below the ps formation threshold and resulted in good agreement with accurate variational calculations the momentum space coupled channel optical cco potential method was first developed for electron atom scattering in the 1980s by xcite the method relies on constructing a complex equivalent local potential to account for the ionization and the ps formation channels the cco method gave excellent ionization xcite total and ps formation xcite cross sections for positron scattering on hydrogen xcite used the cco method to study excitation of atomic hydrogen from the metastable 2s state following xcite xcite developed the so called two center two channel eikonal final state continuum initial distorted wave model to calculate ps formation in the ground and the lowest excited states they also presented a xmath4 scaling law for formation of psxmath5 cross sections on the entire energy range with xmath6 varying as a function of the positron incident energy a hyperspherical hidden crossing hhc method has been applied to positron impact ionization of hydrogen near the threshold by xcite they have calculated the ionization cross section for s p and d waves the hhc has also been used to calculate partial wave ps(1xmath6) formation cross sections for low energy positron collisions with h li and na atoms xcite one of the most sophisticated and commonly used methods is the close coupling cc formalism which is based on the expansion of the total wavefunction using the target state wavefunctions substitution of this expansion into the schrödinger equation yields coupled differential equations in coordinate space or lippmann schwinger integral equations for the t matrix in momentum space by solving these equations the transition amplitudes are obtained for all open channels considerable pioneering work in this field has been done by xcite xcite xcite and xcite who demonstrated the success of using two centre expansions consisting of ps and atomic states here and below when we discuss close coupling calculations we denote the combined two centre basis used to expand the total scattering wavefunction as xmath7 where xmath8 is the number of atomic negative energy eigenstates and xmath9 is the number of positronium eigenstates we also use a bar to indicate negative and positive energy pseudostates for instance ccxmath10 refers to close coupling calculations with a combined basis made of xmath8 pseudostates for the atomic centre supplemented by xmath9 ps eigenstates xcite performed the first accurate two state cc11 calculation this work known as the static exchange approximation showed a giant spurious resonance near 40 ev incident positron energy absence of such a resonance was demonstrated by xcite using a larger ccxmath11 calculation that included xmath6 xmath12 and xmath13 type pseudostates for both centres
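to give a concrete feel for the momentum space lippmann schwinger machinery mentioned above, the following minimal single channel sketch solves the s wave equation for a local exponential potential with assumed toy parameters (hbar = m = 1) and extracts a phase shift via the standard principal value subtraction, the real ccc calculations of course couple many channels on two centres and use laguerre based pseudostates rather than a single local potential

```python
# Minimal sketch of the momentum-space Lippmann-Schwinger machinery underlying
# close-coupling methods: a single s-wave channel for V(r) = -V0*exp(-mu*r)
# (assumed toy parameters, hbar = m = 1), solved for the K matrix on a
# Gauss-Legendre grid with the standard principal-value subtraction.
import numpy as np

V0, mu = 2.0, 1.0          # toy potential strength and range (assumed)
k0     = 0.5               # on-shell momentum; E = k0^2 / 2
E      = 0.5 * k0**2
N      = 64                # quadrature points

def v(p, q):
    """s-wave momentum-space potential <p|V|q> for V(r) = -V0 exp(-mu r)."""
    return -(V0 / (np.pi * p * q)) * (mu / (mu**2 + (p - q)**2)
                                      - mu / (mu**2 + (p + q)**2))

# Gauss-Legendre points on (-1,1) mapped to (0, inf) by q = tan(pi (x+1)/4)
x, w = np.polynomial.legendre.leggauss(N)
q    = np.tan(0.25 * np.pi * (x + 1.0))
wq   = 0.25 * np.pi * w / np.cos(0.25 * np.pi * (x + 1.0))**2

# momentum grid: quadrature points plus the on-shell point
p_all = np.append(q, k0)

# integration weights implementing the principal-value subtraction
u     = np.empty(N + 1)
u[:N] = wq * q**2 / (E - 0.5 * q**2)
u[N]  = -k0**2 * np.sum(wq / (E - 0.5 * q**2))   # subtraction term

# linear system (1 - V u) K = V(:, on-shell); then tan(delta) = -pi k0 K(k0,k0)
Vmat = v(p_all[:, None], p_all[None, :])
A    = np.eye(N + 1) - Vmat * u[None, :]
Kcol = np.linalg.solve(A, Vmat[:, N])

tan_delta = -np.pi * k0 * Kcol[N]
print(f"s-wave phase shift at k0 = {k0}: {np.arctan(tan_delta):.4f} rad (mod pi)")
```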
however the ccxmath11 gave new spurious resonances above the ionization threshold an energy averaging procedure was used to get smooth results for the cross sections to get rid of the pseudoresonances considerable progress in the description of exmath2h has been made by xcite by using the close coupling approach xcite have performed convergence studies for the full positron hydrogen problem at low energies below the ionization threshold they showed good agreement of sufficiently large pseudostate close coupling calculations and the benchmark variational calculations of ps formation by xcite the convergent close coupling ccc method was first developed for exmath14h scattering by xcite its modification to positron scattering in a one centre approach was trivial in that electron exchange was dropped and the interaction potentials changed sign xcite the ccc method with a xmath15 basis ie a single atomic centre expansion without any ps states gave very good results for the total elastic excitation and ionization cross sections at higher incident energies where the ps formation cross section is small allowing for distinction between two experimental data sets xcite the ccc calculations showed no pseudoresonances so long as a sufficiently large basis was taken following the success of the large single centre ccc calculations xcite and xcite used a large basis for the atomic centre supplemented by a few eigenstates of ps calculations with the xmath16 and xmath17 bases made of a large atomic basis similar to that of xcite and the three lowest lying eigenstates of ps gave results significantly better than those from the xmath11 basis xcite the ccc method with a xmath18 basis was developed by xcite to study convergence in two centre expansions this was applied to positron scattering on hydrogen within the s wave model retaining only xmath6states in the combined basis this work for the first time demonstrated the convergence of the non orthogonal two centre expansions the convergence in all channels was only possible when two independent near complete laguerre bases are employed on both of the centres interestingly the total ionisation cross section had two independently converged components one component was coming from the atomic centre and represented direct ionisation of the hydrogen atom the other came from the ps centre and represented ps formation in the continuum the convergence in the case of the full positron hydrogen scattering problem was demonstrated by xcite the ccc calculations with such a combined two centre basis have shown very good agreement with the experimental measurements of xcite and xcite theoretical investigations of positron scattering from helium has an additional challenge due to the complexity of the target structure in multi electron targets two electron excitation or ionization with excitation channels are usually excluded this is a good approximation as the contribution of these channels is typically two orders of magnitude smaller than the corresponding one electron excitation processes xcite first calculations of exmath2he scattering have been performed by xcite in the fba they used only the ground states for he and ps and obtained cross sections for elastic scattering and ps formation in its ground state their study highlighted the importance of the ps channel coupling with the elastic channel and thereby motivated further studies another extensive study based on the born approximation was presented by xcite to estimate ps formation cross section in arbitrary s 
states from the fba studies it became clear that more sophisticated approaches to the problem were required the distorted wave born approximation dwba results are obtained by using distorted wavefunctions in first order calculations this method can give more accurate results than the fba down to lower energies studies utilizing the dwba by xcite were applied to the helium xmath19 and xmath20 excitations by positrons in the energy range from near the threshold up to 150 ev although the agreement with the experimental data was not very satisfactory the method indicated the importance of the inclusion of the polarization potential in the excitation channels at low energies the most systematic study of the ionisation process within the framework of dwba was carried out by xcite they used coulomb and plane waves and also included exchange effects they obtained good agreement with the experimental results of xcite and xcite over the energy range from near threshold to 500 ev however the most important and difficult channel ps formation was not included in the early dwba studies xcite calculated the differential and total cross sections for the excitation of the helium xmath21 state using the second order dwba method another dwba method including ps channels has been reported by xcite for intermediate to high scattering energies they have calculated ps formation cross section and achieved good agreement with available experimental data above 60 ev however their results were not accurate for ps formation below 60 ev considering the fact that ps formation starts at 178 ev and reaches its maximum around 40 ev the applicability of this method is quite limited xcite applied the random phase approximation rpa based on many body theory xcite to positron helium scattering at low energies by using an approximate account of virtual ps formation they obtained good agreement with the elastic scattering experimental data of xcite and xcite at the lowest energies a similar rpa method was used by xcite to calculate positron impact excitation of he into 2xmath22s and 2xmath22p states as mentioned above xcite developed a more sophisticated method based on the many body perturbation theory the calculations with this method showed that the contribution from virtual ps formation was significant applications of the method to various atomic targets were reported by xcite and xcite the kohn variational method was first applied to positron helium collisions by humberston et al a comprehensive study of positron helium scattering with the kohn variational method was given by xcite they obtained very accurate cross sections at low energies however agreement with the experimental results of xcite and xcite for ps formation cross section was qualitative with a similar energy dependence but with almost 25 difference in magnitude nevertheless very good agreement was obtained for the total cross section below the ps formation threshold the cco method mentioned earlier was applied to positron scattering on helium by xcite they calculated the total and ps formation cross sections from the ps formation threshold to 500 ev the calculated results agreed well with the corresponding experimental data except for the data of xcite for the total cross section in the energy range from 50 to 100 ev xcite applied a polarized orbital approximation method to low energy elastic positron helium scattering and obtained good agreement with the experimental results other calculations using optical potentials were presented by xcite and by xcite 
for slow positron scattering from helium elastic scattering cross sections of both reports were in good agreement with experimental data in general the optical potential methods proved to be useful for calculations of total cross sections they are problematic however when applied to more detailed cross sections like target excitation and ps formation in excited states xcite utilized the classical trajectory monte carlo ctmc technique to model positron scattering the method is described fully for ion atom collisions by xcite and by xcite using this technique xcite calculated differential ionization cross section for positron helium and also positron krypton collisions the main advantage of this method is that it can describe dynamic effects occurring in collisions for instance the ctmc calculations showed that the probability of positron scattering to large angles after ionising the target may be comparable or even much greater than the probability of positronium formation they suggested that the disagreement between theory and experiment above 60 ev might be resolved by accounting for the flux in the experiments measuring positronium formation due to positrons scattered to angles that allow them to escape confinement xcite also applied the ctmc method to helium ionization by positron impact they obtained good agreement with experimental data of xcite results of ctmc reported by xcite overestimate the recent experimental data by xcite for the ps formation cross section below 60 ev this questions the applicability of the classical trajectory approach to positron scattering at low and intermediate energies a very comprehensive study of positron helium scattering using the close coupling method was carried out by xcite they used two kinds of expansions the first one consisting of 24 helium eigen and pseudostates and the lowest three ps eigenstates and the second one with only 30 helium eigen and pseudo states the helium target structure was modeled using a frozen core approximation which can produce good excited states but a less accurate ground state the atomic pseudostates were constructed using a slater basis for the 27state approximation only results in the energy range above the positronium formation threshold were given results for lower energies were unsatisfactory and it was suggested that this might be due to the lack of convergence from the use of an inaccurate helium ground state wavefunction the total cross sections from both the 27 and 30state approaches agreed well with the experimental results of xcite xcite and xcite for the energy range above the threshold of positronium formation for lower energies qualitative agreement was obtained in terms of the shape and the reproduction of the ramsauer townsend minimum near 2 ev while the theoretical results were a factor of 2 larger than the experimental data the ps formation cross section from the 27state approximation was in good agreement with the experimental data of xcite up to about 60 ev and with the data of xcite and xcite up to 90 ev above 100 ev the calculations were much lower than the experimental data of xcite and xcite while being closer to the data of xcite another close coupling calculation by xcite using a few helium and positronium states with a one electron description of the helium atom showed less satisfactory agreement with the experimental data chaudhuri and adhikari xcite performed calculations using only 5 helium and 3 positronium states in the expansion their results for ps formation agreed well with the 
experimental results by xcite at energies near the ps threshold and displayed a better agreement with the data of xcite at the higher energies however the theoretical results were much lower than the experimental data at energies near the maximum of the cross section xcite applied the hyper spherical cc approach to the problem however they also considered helium as a one electron target and thus the excitation and ps formation cross sections were multiplied by factor of 2 satisfactory results were obtained for the total ps formation and he2xmath22s and he2xmath22p excitation cross sections the method was not able to describe low energy scattering mainly because a one electron approach to helium is not realistic at low energies despite the obvious advantages of the above mentioned close coupling calculations in handling many scattering channels simultaneously none was able to describe low energy elastic scattering in addition the presence of pseudo resonances in cross sections below the ionization threshold xcite indicated that there was room for improvement the use of the frozen core he states also needed some attention as this yielded an inaccurate result for the ground state of helium the first application of the single centre ccc method to positron scattering on helium was made by xcite using very accurate helium wavefunctions obtained within the multi core approximation very accurate elastic cross section was obtained below the ps formation threshold by using orbitals with very high angular momenta it was suggested that the necessity for inclusion of very high angular momentum orbitals was to mimic the virtual ps formation processes the method also gave accurate results for medium to high energy scattering processes except that it was unable to explicitly yield the ps formation cross section interestingly the method was not able to produce a converged result at the ore gap region where only the elastic and ps formation channels are open which we shall discuss later in sec intcont comparison of the frozen core and the multi core results showed that the frozen core wavefunctions lead to around 10 higher cross sections xcite demonstrated that single centre expansions can give correct results below the ps formation threshold and at high energies where the probability of ps formation is small but for the full solution of the problem inclusion of the ps centre into expansions was required the initial application of the two centre ccc approach to the problem was within the frozen core approximation xcite it was then extended to a multi core treatment xcite while generally good agreement was found with experiment from low to high energies certain approximations made need to be highlighted the key problematic channels are those of the type ps hexmath2 electron exchange between ps and hexmath2 is neglected excitation of hexmath2 is also neglected while these may seem reasonable approximations for the helium target due to its very high ionisation threshold 246 ev for he and 544 ev for hexmath2 they become more problematic for quasi two electron targets such as magnesium discussed below positron scattering from the helium xmath23 metastable state has been theoretically studied for the first time by xcite at low and intermediate energies converged results for the total ps formation and breakup cross sections have been obtained with a high degree of convergence the obtained cross sections turned out to be significantly larger than those for scattering from the helium ground state alkali atoms have 
an ionisation threshold that is lower than 68 ev consequently for positron scattering on alkalis the elastic and ps formation channels are open at all incident positron energies for this reason theory has to treat appropriately the competition for the valence electron between the two positively charged centers the singly charged ionic core and the positron positron scattering on the lithium target was investigated by xcite using a one center expansion however convergence was poor due to the absence of ps formation channels xcite two center expansion was employed by xcite xcite xcite xcite xcite and xcite as expected these approaches gave better agreement with the experiment for positron lithium case the two center ccc approach to positron collisions with lithium was reported by xcite this is the most comprehensive study of the problem on an energy range spanning six orders of magnitude while convergence was clearly established and agreement with experiment for the total ps formation cross section xcite is satisfactory smaller experimental uncertainties would be helpful to provide a more stringent test of the theory positron scattering from atomic sodium has been intensively studied for more than two decades the first theoretical calculations relied on simple two center decomposition of the system wavefunction with only the ground states of sodium and ps atoms taken into account xcite then xcite conducted more complex close coupling calculations adding several low lying excited states for each positively charged center the obtained results turned out to be in reasonably good agreement with experimental data for both total xcite and ps formation xcite cross sections further enlargement of the number of channels in the close coupling calculations revealed that the theoretical ps formation cross sections xcite deviated systematically from the experimental results for low impact energies the experiment showed that the ps formation cross section became larger with decreasing energy while the most refined theoretical calculations utilizing different methods of solution predicted consistently the opposite to resolve the discrepancy xcite conducted the experimental study on ps formation in positron collisions with li and na atoms this experiment confirmed the earlier results of xcite the authors managed to extend the impact energy range down to 01 ev where the discrepancy between the theory and experiment was even larger for sodium in striking contrast for lithium the reasonable agreement of the measured cross section with the theoretical predictions was obtained with the use of the same methodology xcite applied the optical potential approach they found that their theoretical cross section increases with the decrease in the impact energies below 1 ev but faster than the experimental results unfortunately this result was obtained with the use of some approximations whose validity were not analyzed it would be instructive if the same optical potential approach was applied to the case of positron scattering on lithium xcite calculated ps formation in positron alkali collisions with the use of the hyper spherical close coupling method their results support the previous theoretical data xcite large two center ccc calculations of positron scattering by atomic sodium were reported by xcite despite being the most comprehensive to datethere was no resolution of the discrepancy with experiment for ps formation at low energies which we will highlight in sec alkali results while the lighter alkali atoms are 
well modeled by a frozen core hartree fock approximation or even an equivalent local core potential the heavier ones become more problematic with a reduced ionisation threshold positron interaction with the core electrons either directly or via exchange of the valence and the core electrons becomes a more important component of the interaction to the best of our knowledgethis has not been addressed to a demonstrable level of convergence by any theory nevertheless assuming that such problems are more likely to be a problem at the higher energies xcite considered threshold behaviour of the elastic and ps cross sections and their convergence properties at near zero energies for li na and k this work confirmed the expected threshold law proposed by xcite but was unable to resolve the discrepancy with the positron sodium experiment at low energies some earlier studies by xcite and xcite at low to intermediate energies were performed at a time when convergence was computationally impossible to establish xcite further developed the cco method to study positron scattering on rubidium at intermediate and high energies they calculated the ps formation and total cross sections their total cross section results appear to overestimate the experiment though outside the scope of this review we note that the complex scaling method was recently used to study resonance phenomena in positron scattering on sodium xcite and potassium xcite magnesium can be thought of as a quasi two electron target with the core electrons being treated by the self consistent field hartree fock approach positron scattering on magnesium is particularly interesting due to a large resonance in elastic scattering identified at low energies by xcite this was confirmed though at a slightly different energy by the one centre calculations of xcite which were able to be taken to convergence in the energy region where only elastic scattering was possible minor structure differences are likely to be responsible for the small variation in the position of the resonance two centre ccc calculations xcite also reproduced the resonance but had to make substantial approximations when treating the ps mgxmath2 interaction this is even more problematic than in the case of helium discussed above since now we have a multi electron hartree fock core agreement with experiment is somewhat variable but there are substantial experimental uncertainties particularly in the ps formation cross section other theoretical studies of positron magnesium scattering include those by xcite inert gases heavier than helium represent a particular challenge for theory which is unfortunate because they are readily accessible experimentally xcite just the target structure is quite complicated but some good progress has been made in the case of electron scattering by xcite for positron scattering once ps forms the residual ion is of the open shell type making full electron exchange incorporation particularly problematic the relatively high ionisation thresholds for such targets mean that there is always a substantial ore gap where the ps formation cross section may be quite large but unable to be obtained in one centre calculations which are constrained to have only elastic scattering as the open channel nevertheless outside the extended ore gap formed by the ps formation and the ionisation thresholds one centre ccc calculations can yield convergent results in good agreement with experiment xcite there are also first order perturbative calculations by xcite and some based on 
close coupling with convergence not fully established see xcite xcite studied positron scattering and annihilation on noble gas atoms using many body theory at energies below the ps formation threshold they demonstrated that at low energies the many body theory is capable of providing accurate results xcite used an impulse approximation to describe ps scattering on inert gases and provided quantitative theoretical explanation for the experimentally observed similarity between the ps and electron scattering for equal projectile velocities xcite according to xcite this happens due to the relatively weak binding and diffuse nature of ps and the fact that electrons scatter more strongly than positrons off atomic targets xcite developed a model potential approach to positron scattering on noble gas atoms based on an adiabatic method that treats the positron as a light nucleus the method was applied to calculate the elastic cross section below the ps formation threshold positron collisions with molecular hydrogen have been studied extensively by various experimental groups over the last 30 years xcite theoretical studies of this scattering system are challenging because of the complexities associated with the molecular structure and its non spherical nature rearrangement processes add another degree of complexity to the problem until recently theoretical studiesxcite have been focused only at certain energy regions in addition there are few theoretical studies which include the ps formation channels explicitly the first calculations of ps formation cross section xcite were obtained with the use of the first born approximation xcite used a coupled static model which only included the ground states of hxmath0 and ps this simple model was until recently the only coupled channel calculation available comprehensive review of the positron interactions with atoms and molecules has been given by xcite xcite have recently reported the total cross section for positron scattering from the ground state of hxmath0 below the ps formation threshold using density functional theory with a single center expansion their results are in good agreement with recent single centre ccc calculations of xcite below 1 ev xcite have also performed analysis of experimental and theoretical uncertainties using a modified effective range theory mert they concluded that a practically constant value of the total cross section between 3 ev and the ps formation threshold is likely to be an effect of virtual ps formation the recent single centre ccc calculations of positron scattering on molecular hydrogen by xcite and antiproton collisions with hxmath0 by xcite have shown that the ccc formalism can also be successfully applied to molecular targets in order to obtain explicit ps formation cross section a two centre approach is required with the first attempt presented by xcite they found some major challenges associated with the ps hxmath24 channel some severe approximations were required in order to manage the non spherical hxmath24 ion nevertheless some good agreement with experiment was found see sec h2results but considerably more work is required a major application of positron hydrogen scattering is to provide a mechanism for antihydrogen formation the idea is fairly simple with some accurate calculations performed quite some time ago xcite the basic idea is to time reverse the ps formation process to hydrogen formation from ps scattering on a proton and then to use the resultant cross sections for the case where the proton is 
replaced by an antiproton and hence forming antihydrogen xmath25 the advantage of antihydrogen formation via this process is that it is exothermic and so the cross section tends to infinity as the relative energy goes to zero xcite this behaviour is enhanced in the case of excited states with degenerate energy levels xcite antihydrogen formation is presently particularly topical due to several groups aegis xcite gbar xcite atrap xcite asacusa xcite and alpha xcite attempting to make it in sufficient quantity in order to perform spectroscopic and gravitational experiments xcite provided ccc results for ps energy starting at xmath26 ev which suffices for currently experimentally accessible energies of around 25 mev recent calculations of the cross sections for these processes will be discussed in sec antih in this section we gave a general historical overview of various theoretical developments related to positron scattering on atomic targets and the hxmath0 molecule in the next section we consider in some detail basic features of the coupled channel formalism mainly in the context of the convergent close coupling method and discuss the latest results here we describe the basics of the close coupling approach based on the momentum space integral equations we consider the simplest case of scattering in a system of three particles positron to be denoted xmath27 proton xmath28 and electron xmath29 let us also call xmath27 the pair of proton with electron xmath28 positron with electron and xmath29 positron with proton we neglect spin orbit interactions in this case spatial and spin parts of the total three body wavefunction separate the latter can be ignored as it has no effect on scattering observables the spatial part of the total three body scattering wavefunction satisfies xmath30 where xmath31 is the full hamiltonian xmath32 is the three free particle hamiltonian xmath33 is the coulomb interaction between particles of pair xmath34 xmath35 the total hamiltonian can also be expressed in the following way xmath36 where xmath37 is the hamiltonian of the bound pair xmath38 xmath39 is the momentum of free particle xmath38 relative to the cm of the bound pair xmath40 is the reduced mass of the two fragments and xmath41 is the interaction potential of the free particle with the bound system in channel xmath38 xmath42 coupled channel methods are based on expansion of the total wavefunction xmath43 in terms of functions of all asymptotic channels however since the asymptotic wavefunction corresponding to 3 free particles has a complicated form xcite this is not practical therefore we approximate xmath43 by expansion over some negative and discrete positive energy pseudostates of pairs xmath27 and xmath28 especially chosen to best reproduce the corresponding physical states suppose we have some xmath44 pseudostates in pair xmath27 and xmath45 in pair xmath28 satisfying the following conditions xmath46 and xmath47 where xmath48 is a pseudostate wavefunction of pair xmath38 and xmath49 is the corresponding pseudostate energy then we can write xmath50 where xmath51 is an unknown weight function xmath52 is the relative position of the particles in pair xmath53 xmath54 is the position of particle xmath53 relative to the centre of mass cm of pair xmath53 xmath55 see fig fig1jacobi for convenience
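in practice the pseudostates satisfying the diagonalisation conditions diag1 and diag2 above are obtained by diagonalising each pair hamiltonian in a square integrable (laguerre) basis, the following stand in sketch uses a simple radial grid instead of the laguerre basis actually employed in ccc, but it illustrates the essential point that the diagonalisation yields a few negative energy states close to the exact hydrogen levels together with discrete positive energy pseudostates representing the continuum

```python
# Minimal stand-in for pseudostate generation (assumed toy setup): diagonalise the
# s-wave radial hydrogen Hamiltonian H = -1/2 d^2/dr^2 - 1/r (atomic units) on a
# finite radial grid with box boundary conditions. CCC actually uses Laguerre bases,
# but the qualitative outcome is the same: a few negative-energy states close to the
# exact hydrogen levels plus discrete positive-energy pseudostates representing the
# continuum, i.e. exactly the kind of states entering the two-centre expansion above.
import numpy as np

R, N = 100.0, 2000                 # box radius (a.u.) and number of grid points
h    = R / (N + 1)
r    = h * np.arange(1, N + 1)

# tridiagonal kinetic energy (second-order finite differences) plus Coulomb potential
main = 1.0 / h**2 - 1.0 / r
off  = -0.5 / h**2 * np.ones(N - 1)
H    = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eps  = np.linalg.eigvalsh(H)
print("lowest energies:", np.round(eps[:4], 4))          # ~ -0.5, -0.125, -0.0556, ...
print("first pseudostates above threshold:", np.round(eps[eps > 0][:4], 4))
```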
here we use the same notation not only to denote a pair and a corresponding channel but also a quantum state in this pair and the channel so the indices of functions xmath56 and xmath48 additionally refer to a full set of quantum numbers of a state in the channel in the case of vectors xmath54 and xmath52 the indices still refer only to a channel and a pair in the channel respectively in principle at this formal stage one could keep the continuum part only for one of the pairs in order not to double up the treatment of the three body breakup channel however a symmetric expansion of the type phi with the continuum states on both centres was found to give fastest convergence in calculations with a manageable number of states xcite [fig fig1jacobi: jacobi coordinates for the positron xmath27 proton xmath28 and electron xmath29 system] now we use the bubnov galerkin principle xcite to find the coefficients xmath51 so that the expansion phi satisfies eq seh in the best possible way accordingly we substitute the expansion phi into eq seh and require the result to be orthogonal to all xmath57 basis states ie xmath58 in this equation subscript xmath59 indicates integration over all variables except xmath59 now taking into account conditions diag1 and diag2 we can write eq ls1 in the following form xmath60 the potential operators xmath61 are given by xmath62 the condition imposed above in eq ls2 is a system of coupled equations for the unknown expansion coefficients xmath63 these functions carry information on the scattering amplitudes we transform these integro differential equations for the weight functions to a set of coupled lippmann schwinger integral equations for transition amplitudes xmath64 to this end we define the green's function in channel xmath38 xmath65 to describe the relative motion of free particle xmath38 and bound pair xmath38 with binding energy xmath66 we can now write the formal solution of the differential equation ls2 as xmath67 the addition of positive xmath68 defines the integration path around the singularity point at xmath6912 which is real for xmath70 and corresponds to the outgoing wave boundary conditions taking eq f eq2 to the asymptotic region one can demonstrate xcite that latexmath where xmath223 xmath224 is the dipole polarizability and xmath225 are adjustable parameters to fit some physical quantities eg energies of the valence electron mg is modelled as a he like system with two active electrons above a frozen hartree fock core xcite the interaction between an active electron and the frozen hartree fock core is calculated as a sum of the static part of the hartree fock potential and an exchange potential between an active and the core electrons as described in the previous subsection details of the two centre ccc method for positron mg collisions are given in xcite wavefunctions for the inert gases ne ar kr and xe are described by a model of six p electrons above a frozen hartree fock core discrete and continuum target states are obtained by allowing one electron excitations from the p shell in the following way taking ne as an example self consistent hartree fock calculations are performed for the nexmath2 ion resulting in the 1s 2s 2p orbitals the 1s and 2s orbitals are treated as the inert core orbitals while the 2p hartree fock orbital is used as the frozen core orbital to form the target states a set of laguerre functions is used to diagonalize the quasi one electron hamiltonian of the nexmath226 ion utilising the nonlocal hartree fock potential constructed from the inert core orbitals the resulting 2p orbital differs substantially from the hartree fock 2p orbital a one electron basis suitable for the description of a neutral ne atom is built by replacing the 2p orbital from diagonalisation with the hartree fock one orthogonalized by the gram schmidt procedure
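the orbital replacement step just described amounts to a gram schmidt orthogonalisation, a minimal sketch of that step with made up radial functions standing in for the hartree fock and laguerre based orbitals (all names and parameters here are assumptions for illustration) is

```python
# Minimal sketch of the orbital-replacement step described above (assumed inputs):
# a "preferred" orbital (standing in for the Hartree-Fock 2p function) is inserted
# into an existing orthonormal set (standing in for the Laguerre-based orbitals) by
# Gram-Schmidt orthogonalising the remaining functions against it.
import numpy as np

def replace_and_orthogonalise(basis, preferred, weights):
    """basis: list of radial functions on a grid; returns a new orthonormal set
    (w.r.t. the quadrature `weights`) whose first member is `preferred`."""
    def dot(f, g):
        return np.sum(weights * f * g)

    new_set = [preferred / np.sqrt(dot(preferred, preferred))]
    for f in basis[1:]:                 # drop the orbital being replaced (index 0)
        for g in new_set:               # project out everything already kept
            f = f - dot(g, f) * g
        norm = np.sqrt(dot(f, f))
        if norm > 1e-10:                # discard numerically linearly dependent functions
            new_set.append(f / norm)
    return new_set

# toy example: sine "orbitals" on a radial grid, replaced by a Slater-like function
r   = np.linspace(0.01, 20.0, 2000)
w   = np.gradient(r)
old = [np.sin(n * np.pi * r / 20.0) for n in range(1, 6)]
old = replace_and_orthogonalise(old, old[0], w)        # orthonormalise the toy set first
new = replace_and_orthogonalise(old, r * np.exp(-r), w)

S = np.array([[np.sum(w * f * g) for g in new] for f in new])
print("max deviation of overlap matrix from identity:",
      np.max(np.abs(S - np.eye(len(new)))))
```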
the six electron target states are described via the configuration interaction ci expansion the set of configurations is built by angular momentum coupling of the wavefunction of the 2pxmath227 electrons and the laguerre based one electron functions the coefficients of the ci expansion are obtained by diagonalization of the target hamiltonian the target orbital angular momentum xmath90 spin xmath6 and parity xmath228 are conserved quantum numbers and diagonalization of the target hamiltonian is performed separately for each target symmetry xmath229 full details of the single centre ccc calculations for noble gas atoms are given in ref a two centre approach to inert gases has not yet been attempted positron hxmath0 scattering can be treated somewhat similarly to the helium case we consider hxmath0 within the born oppenheimer approximation where the two protons are considered to be at a fixed internuclear distance denoted as xmath13 the expansion for the total scattering wavefunction after separation of the spin part is similar to eq twfex7 for he however the wavefunctions for the target in the first term and the residual ion in the ps formation channel depend on xmath13 the residual ion hxmath230 with the same internuclear distance xmath13 is considered to be only in its ground state we only use a few ps eigenstates so as to take advantage of their analytical form the target states are obtained by diagonalizing the hxmath0 hamiltonian in a set of antisymmetrized two electron configurations built from laguerre one electron orbitals for each target symmetry characterized by the projection of orbital angular momentum parity and spin to calculate hxmath0 states we use the fixed nuclei approximation calculations are performed at the ground state equilibrium internuclear distance taken to be xmath13 = 1.4 axmath231 when xmath13 is set to 0 one should obtain the he results we used this test for both structure and scattering calculations details of hxmath0 structure calculations can be found in xcite the derivation of the rearrangement matrix elements is somewhat more difficult than for he because of their dependence on the nuclear separation and target orientation another difference is that the partial wave expansion is done over the total angular momentum projection xmath93 it is convenient to choose the z axis to be along the xmath232 vector body frame then it is possible to transform the obtained results with this choice of z axis to any given orientation of the molecule to facilitate the calculations only the spherical part of the nuclear potential is considered when calculating the rearrangement matrix elements xmath233 where xmath234 then the momentum space representation of the above positron nucleus potential can be shown to be xmath235 with these one further follows the procedure used for positron he calculations xcite for positron collisions with the ground state of hxmath0 only states with zero total spin are required and so xmath236 xmath237 matrix elements are used for obtaining body frame scattering amplitudes these are then rotated by euler angles to transform them to lab frame scattering amplitudes orientationally independent cross sections are calculated by averaging over all rotations of the molecule xcite an orientationally averaged analytic born subtraction method xcite is employed for hxmath0 direct transition channels this helped reduce the number of partial waves requiring explicit calculation fundamentally in order for a theory to be useful it needs to be predictive
in the close coupling approach to electron positron proton scattering on relatively simple targets where the structure is readily obtained there are two computational problems that need to be overcome the first is that for a given set of states used to expand the total wavefunction the resulting equations need to be solved to an acceptable numerical precision the second is to systematically increase the size of the expansion and demonstrate that the final results converge to a unique answer that is independent of the choice of the expansion so long as it is sufficiently large only once this is achieved can we be in a position to claim that the results are the true solution of the underlying schrödinger equation and hence predictive of what should happen in the experiment in the case of electron scattering there are only one centre expansions because electrons do not form bound states with the electrons of the target electron exchange is handled within the potential matrix elements all based on the coordinate origin at the nucleus accordingly establishing convergence in just the one centre approach is all that needs to be done though historically this was a major challenge xcite though convergence was shown to be to a result that disagreed with experiment xcite subsequent experiments showed excellent agreement with the ccc theory xcite which led to reanalysis of the original data xcite yielding good agreement with the ccc theory and new experiments a similar situation occurred in the case of double photoionisation of helium xcite which in effect is electron scattering on the singly charged helium ion xcite for positron and proton scattering the issue of convergence is even more interesting due to the capacity of the projectile to form a bound state with a target electron this leads to a second natural centre in the problem which also requires treatment to convergence for positron scattering on atomic hydrogen the ps formation threshold is at 6.8 ev while the ionisation threshold is 13.6 ev however ps formation is also a form of ionisation of the target except that the electron is captured to a bound state of the projectile any expansion of the ps centre using a complete basis will result in negative and positive energy states with the latter corresponding to three body breakup however expansion of the atomic centre will also generate independent positive energy states corresponding to three body breakup hence expansions using a complete basis on each centre will yield independent non orthogonal states corresponding to the same physical three body breakup process while this may appear to be a fundamental problem in practice it is an interesting strength of the method which allows one to check the internal consistency of the results by this we mean that the same results must be obtained from a variety of calculations utilising independent one and two centre expansions as detailed below we begin by considering two extremes the first attempts to obtain convergence using only the atomic centre while the second attempts convergence using two complete expansions on both centres the question is whether either will converge and if they do whether the convergence will be to the same result [figure caption fragment: xmath238 arising upon diagonalisation of the respective hamiltonians in the ccc12xmath23912xmath0 positron hydrogen calculations here xmath240 states were obtained for each xmath90 with xmath241 for h states and xmath242 for ps states see kadyrov et al xcite]
energies arising from one centre left and two centre right ccc positron hydrogen calculations xmath243 is the number of h states in the one centre calculation which has no explicit ps states xmath244 is the number of h states in the two centre calculation with xmath245 explicit ps states first presented by bray et al xcite in fig energies typical energies arising in two centre calculations are given we see a similar spread of negative and positive energy states on both the h and the ps centres a single centre expansion based on the atomic centre would not have any ps states included in the calculation the results of the two types of calculations may be readily summarised by fig picture on the left we have one centre cross sections xmath246 where xmath247 taking the initial state to be the ground state of h xmath248 then xmath249 is the elastic scattering cross section and xmath250 corresponds to excitation whenever xmath251 and ionisation whenever xmath252 the elastic and excitation cross sections need to converge with increasing xmath243 individually however the ionisation cross sections converge as a sum yielding the total ionisation cross section xmath253 for xmath254 convergence of the one centre ccc calculations where there are no ps states has been studied extensively xcite briefly at energies below the ps formation threshold the important contribution of virtual ps formation is adequately treated via the positive energy atomic states of large angular momentum xmath255 this allows for convergence of the elastic scattering cross section to the correct value at energies above the ionisation threshold the positive energy atomic pseudostates take into account both breakup and ps formation cross sections in a collective way yielding the correct electron loss and excitation cross sections however in the extended ore gap region between the ps formation and ionisation thresholds no convergence is possible due to all positive energy pseudostates being closed convergence in two centre calculations is potentially problematic at all energies due to two independent treatments of the breakup processes in practice this manifests itself as an ill conditioned system which requires high precision matrix elements and limits the size of the calculations ie xmath256 and xmath245 are typically substantially smaller than xmath243 it is for this reason that we have drawn the one center matrix to be substantially larger in fig picture furthermore the h ps matrix elements take at least an order of magnitude longer to calculate due to the non separable nature of the radial integrals xcite accordingly even with a much smaller number of states with smaller xmath257 the two centre calculations take considerably longer to complete nevertheless convergence is obtained for individual transitions involving discrete states explicit ps formation and explicit breakup xcite internal consistency is satisfied if at energies outside the extended ore gap for discrete xmath258 atomic transitions the two approaches independently converge such that xmath259 furthermore at energies above the breakup threshold xmath260 where for some initial state xmath34 xmath261 in the extended ore gap only the two centre calculations are able to yield convergent results the great strength of the internal consistency check is that it is available outside the extended ore gap for every partial wave of the total orbital angular momentum
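a minimal sketch of such a check with purely hypothetical cross section values and a five percent tolerance is given below the one centre electron loss is summed over the open positive energy pseudostates while the two centre value is the sum of explicit breakup and explicit ps formation

def electron_loss_one_centre(sigma_positive_energy_states):
    # one-centre calculation: electron loss is the sum over open positive-energy pseudostates
    return sum(sigma_positive_energy_states)

def electron_loss_two_centre(sigma_breakup, sigma_ps_formation):
    # two-centre calculation: electron loss = explicit breakup + explicit ps formation
    return sigma_breakup + sigma_ps_formation

def consistent(one_centre, two_centre, rel_tol=0.05):
    return abs(one_centre - two_centre) <= rel_tol * max(one_centre, two_centre)

# hypothetical per-partial-wave cross sections (arbitrary units)
for j, (one_c, brk, ps) in enumerate([([0.8, 0.5, 0.2], 0.9, 0.55),
                                      ([0.4, 0.3, 0.1], 0.45, 0.33)]):
    lhs = electron_loss_one_centre(one_c)
    rhs = electron_loss_two_centre(brk, ps)
    print(f"J={j}: one-centre {lhs:.2f} two-centre {rhs:.2f} consistent={consistent(lhs, rhs)}")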
checking that eq eloss is satisfied for every partial wave provides confidence in the overall results of the two completely independent calculations which will typically have very different convergence properties with increasing xmath8 and xmath90 due to the unitarity of the close coupling theory agreement for eq eloss suggests agreement for other channels and so eq disc will also hold obtained using the one and two centre ccc calculations see text the indicated points corresponding to the energies at which the calculations were performed are connected with straight lines to guide the eye the vertical lines are the ps formation and breakup thresholds spanning the extended ore gap the experimental data in the bottom left panel are due to xcite first presented by xcite p h in fig p h we give the example of an internal consistency check presented by xcite we see that outside the extended ore gap the two calculations are generally in very good agreement one systematic exception is just above the ionisation threshold here the breakup cross section is almost zero but the ps formation cross section is near its maximum even with xmath262 the one centre calculation does not have enough pseudostates of energy just a little above zero which would be necessary to reproduce what should be step function behaviour in one centre ccc calculations due to explicit ps formation in the two centre calculations there are no such problems here or within the extended ore gap having checked the individual partial waves and summing over all to convergence excellent agreement is found with experiment having performed the internal consistency checks we remain confident in the theoretical results even if there is potentially a discrepancy with experiment at the lowest energy measured establishing convergence in the cross sections with a systematically increasing close coupling expansion places a severe test on the scattering formalism this is as relevant to positron scattering as it is for electron scattering pseudoresonances must disappear with increased size of the calculations and uncertainty in the final results can be established via the convergence study one of the earliest successes of the two centre ccc method for positron hydrogen xmath171 wave scattering was to show how the higgins burke pseudoresonance xcite disappeared utilising a xmath18 basis of only xmath6 states on each centre xcite the cross sections for all reaction channels were shown to converge to within a few percent with a xmath263 basis of xmath6 states interestingly the symmetric treatment of both centres was particularly efficient in terms of reaching convergence and eliminating pseudoresonances with no double counting problems the question of convergence in the case of the full positron hydrogen scattering problem was investigated by xcite setting xmath264 states of higher angular momentum were increased systematically the same level of convergence as in the xmath171 wave model case was achieved with the xmath265 basis made of ten xmath6 nine xmath12 eight xmath13 and seven xmath266 states for each centre for scattering on the ground state the largest calculations performed had a total of 68 states 34 each of h and ps states the convergence was checked for the total and other main cross sections corresponding to transitions to negative energy states reasonably smooth cross sections were obtained for all bases with xmath90 convergence being rather rapid for the three cases considered xmath266 states contribute only marginally
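the kind of convergence study described above can be organised as a simple loop over the basis as in the hypothetical sketch below where run_ccc merely stands in for a full close coupling calculation and only the settling of a cross section with the growing basis is monitored

def run_ccc(n_l, l_max):
    # placeholder returning a fake cross section that settles as the basis grows
    return 1.0 + 0.5 / n_l + 0.3 / (l_max + 1)

previous = None
for l_max in range(0, 4):            # s, p, d, f as in the study above
    n_l = 10 - l_max                 # e.g. ten s, nine p, eight d, seven f states
    sigma = run_ccc(n_l, l_max)
    if previous is not None:
        change = abs(sigma - previous) / previous
        print(f"l_max={l_max}: sigma={sigma:.3f} relative change {change:.1%}")
    previous = sigma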
figs fig6 fig8 show the ccc results in comparison with other calculations and experimental data of the detroit xcite and london xcite groups the ccc results agree well with experiment so do the ccxmath17 calculations of xcite and ccxmath16 calculations of xcite note that in these calculations the xmath267 scaling rule was used to estimate the total ps formation also an energy averaging procedure was used to smooth the ccxmath11 calculations of xcite in the ccc method convergence is established without such procedures being used for the breakup cross section the ccc results have two comparable contributions one from the excitation of the positive energy h pseudostates shown in fig fig8 as direct ionization and the other from excitation of positive energy ps pseudostates this was also noted by kernoghan et al xcite using the ccxmath11 calculations by contrast in ccxmath10 type calculations the contribution to breakup comes only from direct ionisation at the maximum of the cross section the separately converged indirect contribution to the breakup cross section is approximately a third of the total however the ccxmath17 cross section of xcite is only marginally smaller indicating that the absence of ps positive energy states is absorbed by the positive energy h states fig1prl shows the ccc results of fig fig8 but against excess total energy to emphasize the lower energies the full ccchps results with breakup cross sections coming from both the h and ps centres are contrasted with those just from h and twice h labeled as ccchh we see that below about 20 ev excess energy the ccchps and ccchh curves are much the same indicating that the ps and h contributions to breakup converge to each other as the threshold is approached h breakup cross section as a function of excess energy calculated using the two center ccc method first presented by xcite the argument to the ccc label indicates which center's positive energy states were used see text the experiment is due to xcite utilising the ccc method xcite reported calculations of positron hydrogen scattering near the breakup threshold in order to examine the threshold law the results are given in fig fig2prl the wannier like threshold law derived by xcite is in good agreement with the ccc results below 1 ev excess energy this law was derived for the xmath268 partial wave and xcite showed the same energy dependence holds in all partial waves as for the full problem the contributions from both centres to the breakup cross section converge to each other with decreasing excess energy without any over completeness problems h xmath171 wave model breakup cross section as a function of excess energy calculated using the two center ccc method from xcite as in fig fig1prl the argument to the ccc label indicates which center's positive energy states were used the wannier like threshold law is due to xcite fig2prl helium in its ground state is the most frequently used target in experimental studies of positron atom scattering first measurements on positron helium scattering were carried out by xcite in 1972 since then many other experimental studies have been conducted xcite further developments of positron beams in terms of energy resolution and beam intensities have recently motivated more experimental studies xcite in general the results from the experiments agree well with each other a complete theoretical approach from low to high energies had been lacking until the development of the ccc method for the problem by xcite a vast amount of experimental data is available for integrated cross sections for positron scattering from the helium ground state the total ccc calculated cross section is shown in fig tcs exp in comparison with experimental data and other calculations considerable discussion on the topic has been
presented by xcite it suffices to say that so long as an accurate ground state is used obtained from a multi core mc treatment agreement with experiment is outstanding across all energies we expect the frozen core fc treatment of helium to result in systematically larger excitation and ionization cross sections because it underestimates the ionization potential by around 0.84 ev generally a larger ionisation potential leads to a smaller cross section in the ccc method we are not free to replace calculated energies with those from experiment as this leads to numerical inconsistency consequently there is no way to avoid the extra complexity associated with the mc calculations if high accuracy is required the total ps formation breakup and electron loss cross sections are given in fig fig3in1a b and c respectively beginning with ps formation given the minor variation in the measurements agreement between the various theories and experiment is satisfactory however turning our attention to the breakup cross section we see that mc ccc appears to be substantially higher than experiment yet when these cross sections are summed to form the electron loss cross section the agreement with the experiment of xcite which measured this directly is good given the complexity of the problem and the experimental uncertainties the agreement with experiment is very satisfying s and b he2xmath22p excitations by positron impact experiment is due to xcite the fc ccc and mc ccc results are from xcite other calculations are due to xcite xcite xcite and xcite the cross sections of 2xmath22s and 2xmath22p excitation of helium are presented in fig figexa and b respectively the mc ccc result for xmath269s is in good agreement with the data of xcite while the xmath269p result is somewhat lower than experiment the fact that the fc ccc xmath269p results agree better with the experimental data is fortuitous other available theories show some systematic difficulties for these relatively small cross sections the cross sections for the rather exotic ps formation in the 2xmath6 and 2xmath12 excited states are presented in fig figpsexa and b respectively the cross sections are particularly small nevertheless agreement with the sole available experiment of xcite is remarkable and b 2xmath12 states experimental data for ps2xmath12 are due to xcite the fc ccc and mc ccc results are from xcite and other calculations are due to xcite and xcite thus we have seen that there is good agreement between theory and experiment for the integrated cross sections for positron scattering on helium and hydrogen in their ground states in both cases the ionisation thresholds are well above 6.8 ev and so a one centre calculation is applicable at low energies where elastic scattering is the only open channel though experimentally challenging positron scattering on either h or he metastable excited states results in ps formation being an open channel at all energies taking the example of positron scattering from the xmath270s metastable state of helium the ps threshold is negative at around -2.06 ev this collision system has been extensively studied by xcite as far as we are aware no experimental studies have been conducted for positron scattering on metastable states of helium given the experimental work on electron scattering from metastable helium xcite in a group that also has a positron beam we are hopeful that in the future there might be experimental data available for such systems as there are still unresolved discrepancies between theory and experiment
regarding electron scattering from metastable states of he xcite using positrons instead of electrons may assist with their resolution just like for h and he in metastable states for positron collisions with alkali metal atoms in their ground state both elastic and ps1s formation channels are open even at zero positron energy accordingly we require two centre expansions even at the lowest incident energies xcite conducted two centre calculations with different basis sets to achieve results that are independent of the laguerre exponential fall off parameter xmath271 and convergent with the laguerre basis size xmath272 for target orbital angular momentum xmath273 li along with the experimental points by xcite and theoretical calculations by xcite the truncated basis ccc tr calculation is an attempt to reproduce the states used by xcite see xcite for details fig4li figure fig4li shows the positronium formation cross section agreement between the various calculations is quite good while comparison with the experimental data of xcite is rather mixed the key feature is that the ps formation cross section diminishes with decreasing energy supported by all theories and consistent with experiment overall it appears there is no major reason to be concerned unfortunately on changing the target to sodium substantial discrepancies between theory and experiment arise and remain unresolved to date one of the motivations for extending the ccc theory to two centre calculations of positron alkali scattering was to address this problem xcite performed the most extensive study of this problem that included one and two centre calculations despite establishing convergence and consistency of the two approaches no improvement on previous calculations was found na scattering experiment is due to xcite and the calculations due to xcite and xcite fig4na we begin by considering the total cross section for positron sodium scattering presented in fig fig2na we see good agreement between various two centre calculations with the one centre ccc2170 calculation behaving as expected agreeing with ccc11614 only above the ionisation threshold and not being valid or even convergent below the ionisation threshold all of the two centre calculations are considerably above the experiment at low energies curiously the situation is reversed for the total ps formation component of the total cross section presented in fig fig4na where now all of the theories are considerably below the experiment given that the total cross section at the lowest energies considered is the sum of elastic and ps formation cross sections the presented discrepancies with experiment imply that the theoretical elastic scattering cross sections are overwhelmingly high xcite why this would be the case remains a mystery lower panel and scaled ps formation xmath274 upper panel cross sections for positron lithium scattering as a function of energy 13.6xmath275 for the zeroth partial wave calculated with the indicated cccxmath276 laguerre basis parameters as presented by xcite li faced with the problems identified above xcite considered threshold behaviour in positron scattering on alkali atoms following xcite we expect exothermic reactions such as ps formation in positron alkali scattering to result in a cross section that tends to infinity as xmath277 as the positron energy xmath275 goes to zero this is not yet evident in fig fig4li or fig fig4na for the considered energies
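a minimal sketch of how such threshold behaviour can be checked numerically is given below synthetic data close to the usual wigner law for an exothermic s wave channel sigma proportional to e to the power minus one half are generated and the exponent is recovered from a straight line fit in log log space all values are hypothetical

import numpy as np

def fit_threshold_exponent(energies, sigmas):
    # straight-line fit of log(sigma) vs log(e); the slope estimates the threshold exponent
    slope, intercept = np.polyfit(np.log(energies), np.log(sigmas), 1)
    return slope

e = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])       # ev, hypothetical energies above threshold
sigma = 2.0 * e ** -0.5 * (1.0 + 0.1 * e)          # synthetic data close to the expected law
print("fitted threshold exponent:", fit_threshold_exponent(e, sigma))   # close to -0.5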
nevertheless xcite did obtain the required analytical behaviour but only in the zeroth partial wave in fig li the ps formation cross section is presented for the zeroth partial wave multiplied by xmath278 so as to demonstrate the expected threshold behaviour the convergence study of xcite is also presented where the effect of adding the psxmath279 state was able to be reproduced by atomic pseudostates of high orbital angular momentum xcite found that higher partial waves xmath79 become major contributors to ps formation at energies above xmath280 ev and have a threshold behaviour as xmath281 and so rise rapidly with increasing energies it is the contributions of the higher partial waves that is responsible for the behaviour in the ps formation cross section seen in fig fig4li but for positron sodium scattering na we find the same is the case for positron sodium scattering in fig na we present xmath282 and xmath283 for the zeroth partial wave demonstrating the expected threshold behaviour details of the convergence study are discussed by xcite it suffices to say that there is a range of combinations of atomic and ps states that should yield convergent results with such studies being considerably easier at the lower energies where there are only two open channels as in the case of lithium the zeroth partial wave is dominant only below xmath280 ev with the results presented in fig fig4na being dominated by the higher partial waves but for positron potassium scattering k lastly having been unable to explain the discrepancy between experiment and theory for positron sodium scattering xcite also considered the positron potassium scattering system however much the same behaviour as for the lighter alkalis was found see fig consequently the discrepancy between experiment and theory for low energy positron scattering on sodium remains unresolved one interesting aspect of the presented elastic cross sections for positron scattering on the alkalis is the minima in the case of li and na they are at just above 0.001 ev whereas for k the minimum is at around 0.04 ev given that in all cases we have just two channels open elastic and ps formation we have no ready explanation for the minima or their positions the generally smaller ps formation cross section shows no structure in the same energy region which is surprising given the substantial minima in the elastic cross section for experimentalists the transition from say sodium to magnesium for the purpose of positron scattering is relatively straightforward not so in the case of theory two valence electrons on top of a hartree fock core make for a very complicated projectile target combination to treat computationally however with an ionization energy of 7.6 ev the single centre approach is valid below 0.8 ev allowing for a test of the two centre method in this energy region this is an important test because in the single centre approach there are no approximations associated with explicit ps formation and the core is fully treated by the hartree fock approach rather than an equivalent local core potential approximation one centre positron magnesium ccc calculations were presented by xcite and two centre ones by xcite the results confirmed the existence of a low energy shape resonance predicted earlier by xcite the results are presented in fig figure3 mg given that the resonance is at a very low energy and that a slight energy difference in the target structure may affect its position the agreement between the theories is very encouraging
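the basis set combinations referred to in the convergence studies above are built from square integrable radial functions the sketch below constructs laguerre type functions with an exponential fall off parameter lam using one common convention the exact convention and normalisation differ between implementations so this is only indicative the physical cross sections should become independent of lam once the basis is large enough

import numpy as np
from scipy.special import genlaguerre

def laguerre_basis(r, k, l, lam):
    # one common form: (lam*r)**(l+1) * exp(-lam*r/2) * L_(k-1)^(2l+2)(lam*r), unnormalised
    return (lam * r) ** (l + 1) * np.exp(-lam * r / 2.0) * genlaguerre(k - 1, 2 * l + 2)(lam * r)

r = np.linspace(0.0, 60.0, 2000)
for lam in (1.0, 2.0):      # results should be insensitive to lam for a sufficiently large basis
    phi = laguerre_basis(r, k=3, l=0, lam=lam)
    norm = float(np.sum(phi ** 2)) * (r[1] - r[0])
    print(f"lam={lam}: <phi|phi> = {norm:.3f} (unnormalised)")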
as explained in sec intcont one centre calculations are unable to yield convergent results within the extended ore gap presently between 0.8 ev and 7.6 ev the unphysical structures displayed within this energy region in the calculations of xcite depend on the choice of basis with just one example presented unfortunately the agreement with the experiment of xcite for the total cross section has the unexpected feature of being good above the ionisation threshold and poor below see fig figure4 mg given that the validity of the two centre ccc approach should be energy independent we are unable to explain the discrepancy mg elastic scattering cross section the first vertical line is at the ps formation threshold the second one is at the mg ionization threshold the two centre ccc calculations are due to xcite the one centre due to xcite and the variational ones due to xcite figure3 mg mg total scattering cross section the vertical line indicates the mg ionization threshold experimental data are due to xcite and the calculations are as for fig figure3 mg the ps formation cross section is presented in fig figure5 mg where there are large experimental uncertainties the experimental data of xcite are preliminary estimations for the upper and the lower limits of the ps formation cross section the two centre ccc results are compared with the previous calculations while there is substantial variation the theories tend to favour the upper limit estimates mg ps formation cross section experimental data for upper and lower limits of the ps formation cross section are due to xcite the vertical line indicates the position of the ps formation threshold the calculations are due to xcite xcite xcite walters data taken from ref xcite and xcite figure5 mg positron scattering on inert gas atoms has been studied using the single centre ccc method xcite as discussed in the introduction the complexity of adding the second ps centre is immense and has not yet been attempted within a convergent close coupling formalism the large ionisation thresholds mean that the single centre calculations are valid for elastic scattering on the substantial energy range below the ps formation threshold as well as above the ionisation threshold but no explicit ps formation cross section may be determined this is particularly unfortunate in light of the intriguing cusp like behaviour observed by xcite across the ps formation threshold due to the small magnitude of the structures a highly accurate theoretical treatment is required but does not yet exist the experimental and theoretical situation for positron noble gas collision systems has been recently reviewed extensively by xcite if we allow the internuclear separation of the two protons in hxmath0 to be fixed then the extra complexity relative to the helium atom is somewhat manageable various cross sections for xmath284 scattering have been recently calculated by xcite using the two center ccc method this represents the most complex implementation of the two centre formalism to date issues regarding convergence with laguerre based molecular and ps states have been discussed in some detail calculations with only up to three psxmath184 psxmath279 and psxmath285 states on the ps centre were presented above 10 ev at lower energies the current implementation of the two centre formalism fails to pass the internal consistency check with the single center calculations of xcite the low energies are particularly sensitive to the approximations of the treatment of the virtual ps formation in the field of the highly structured hxmath24 ion here we just present the key two centre results in comparison of
experiment and theory experimental data are due to xcite xcite xcite xcite and xcite the single centre cccxmath286 results are due to xcite the two centre ccc calculations are due to xcite figure3h2 in fig figure3h2 the theoretical results are compared with the available experimental data for the grand total cross section good agreement between the two and one centre calculations above the ionisation threshold is very satisfying even if in this energy region the theory is somewhat below experiment good agreement with experiment of the two centre ccc results below the ionisation threshold dominated by the elastic and ps formation cross sections is particularly pleasing figure4h2 shows the ps formation cross section which is a substantial component of the gtcs particularly near its maximum around 20 ev there is a little variation between the three ccc calculations particularly at the lower energies with the largest ccc calculation being uniformly a little lower than experiment hxmath0 collisions experimental data are due to xcite shaded region incorporates upper and lower limits with their uncertainties xcite and xcite coupled static model calculations are due to xcite the ccc calculations are due to xcite figure4h2 hxmath0 collisions experimental data are due to xcite xcite and xcite the ccc calculations are due to xcite figure5h2 hxmath0 collisions experimental data are due to xcite and xcite the ccc calculations are due to xcite one centre and xcite two centre figure6h2 in fig figure5h2 the ccc results for the direct total ionization cross section tics are compared with the available experimental data the experimental data of xcite and xcite are in agreement with each other but differ from measurements of xcite between 30 and 100 ev the largest ccc calculation ccc14xmath03 which contains xmath6 xmath12 and xmath13 atomic orbitals together with the three lowest energy ps states is in better agreement with the measurements of xcite the ccc14xmath2871 and ccc14xmath2873 are systematically lower primarily due to the absence of the xmath13 atomic orbitals due to the unitarity of the close coupling formalism larger xmath90 orbitals are likely to only marginally increase the tics once we have convergence in the elastic scattering for which large xmath90 orbitals are not required the gtcs is set with the distribution between elastic electron excitation ps formation and total ionisation cross sections thereby being constrained xcite finally the total electron loss cross section is given in fig figure6h2 and is the sum of the tics and ps formation cross sections it is useful because the one centre ccc approach is able to yield this cross section at energies above the ionisation threshold this provides for an important internal consistency check we see that around 50 ev the largest two centre ccc calculation is somewhat lower than the one centre calculation given the increase in the cross section due to the inclusion of xmath13 orbitals adding xmath266 orbitals would go some way to reduce the discrepancy increasing the laguerre basis xmath288 in the one centre calculations would increase the cross section just above the ionisation threshold due to an improving discretisation of the continuum despite the immense complexity of the positron molecular hydrogen scattering problem considerable progress in obtaining reasonably accurate cross sections has been made as stated earlier via eq antihform antihydrogen formation is effectively the time reverse process of ps formation upon positron hydrogen scattering accordingly it only
takes place at positron energies above the psxmath5 formation threshold where xmath5 indicates the principal quantum number furthermore we require two centre calculations because only these have explicit ps and h states given that we are interested in as large cross sections as possible and that the exothermic transition cross sections go to infinity at threshold xcite the primary energy range of interest is within the extended ore gap as given in fig p h scattering on antiprotons calculated using the ccc method first presented by xcite for ps1s the variational calculations xcite are for antihydrogen formation in the 1s state only ccc calculated unconnected points presented for comparison while the uba calculations of xcite and xcite and the ccc calculations generally are for antihydrogen formation in all open states the three experimental points are due to xcite there are few calculations of antihydrogen formation that are accurate at low energies as far as we are aware apart from the ccc method xcite only the variational approach of xcite has yielded accurate results the unitarised born approximation uba calculations xcite are not appropriate at low energies and neither are first order approximations xcite large cross sections for antihydrogen formation occur for transitions between excited states of ps and h to have these states accurate the laguerre bases xmath289 for both h and ps need to be sufficiently large xcite used xmath290 with the resultant energies given in fig energies while it is trivial to have larger xmath289 as far as the structure is concerned the primary limiting factor is to be able to solve the resultant close coupling equations in fig energies the positive energies correspond to the breakup of h and also of ps and yet the two represent the same physical process this leads to highly ill conditioned equations xcite considered non symmetric treatments of the two centres by dropping the ps positive energy states while these also satisfy internal consistency checks they have not proved to be more efficient in yielding convergence with increasing xmath289 in the cross sections of present interest a comprehensive set of antihydrogen formation cross sections for low energy ps incident on an antiproton has been given by xcite they used the original numerical treatment of the green's function in eq lskmat in fig p ps we present the summary of the total antihydrogen formation cross sections for specified initial psxmath291 as presented by xcite excellent agreement with the variational calculations xcite available only for the ground states of h and ps gives us confidence in the rest of the presented ccc results as predicted by xcite the ground state cross section goes to infinity as xmath292 at threshold whereas for excited states this is modified to xmath293 at threshold due to the long range dipole interaction of degenerate ps xmath294 states as explained by xcite the cross sections increase steadily with increasing principal quantum number of ps with the transition to the highest available principal quantum number of h being the most dominant contribution xcite consequently there is considerable motivation to push the calculations even further to larger xmath289 there is a second source of ill conditioning of the close coupling equations to that discussed previously given that we are particularly interested in low energies ie just above the various thresholds the singularity in the green's function of eq lskmat occurs very close to zero energy
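the sketch below illustrates in a schematic way the standard subtraction technique often used for this kind of principal value singularity the form factor the integration interval and the on shell momentum are all hypothetical the point is simply that the smooth subtracted integrand avoids the huge cancelling contributions that arise when the pole is integrated over directly and that the problem worsens as the on shell point approaches zero energy

import numpy as np

def pv_integral(f, k0, a, n=20001):
    # principal value of int_0^a f(k)/(k0**2 - k**2) dk via the subtraction f(k) -> f(k) - f(k0)
    k = np.linspace(0.0, a, n)
    with np.errstate(divide="ignore", invalid="ignore"):
        smooth = (f(k) - f(k0)) / (k0 ** 2 - k ** 2)
    smooth = np.nan_to_num(smooth, nan=0.0, posinf=0.0, neginf=0.0)
    analytic = f(k0) / (2.0 * k0) * np.log((a + k0) / abs(a - k0))   # exact pv of 1/(k0^2-k^2) on [0, a]
    return float(np.sum(smooth)) * (k[1] - k[0]) + analytic

form_factor = lambda k: np.exp(-k)                 # hypothetical smooth half-off-shell quantity
print(pv_integral(form_factor, k0=0.05, a=5.0))    # small k0 mimics an energy just above threshold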
a typically used symmetric treatment on either side of the singularity results in very large positive and negative values which contribute to the ill conditioning via precision loss formation cross sections for xmath295 in p psxmath296 scattering for the zeroth partial wave the old results arise from solving the original ccc equations lskmat the new results are due to the solution with the green's function being treated analytically see xcite very recently xcite showed that the green's function of the coupled lippmann schwinger equations lskmat can be treated analytically the method that was previously used in calculating the optical potential in the coupled channel optical method xcite was applied directly to the green's function in the lippmann schwinger equation lskmat in doing so they showed that they could improve on the above xmath290 basis to reach xmath297 and thereby consider transitions from psxmath298 states in fig new we compare the old and the new numerical formulations and find that the latter is quite superior in fact for xmath297 xcite were unable to obtain numerical stability in the original formulation with variation of integration parameters over the momentum in eq lskmat and presented just a combination that yielded somewhat reasonable agreement with the results of the new numerical method the new technique also proved to be particularly advantageous in studying threshold phenomena in positronium antiproton scattering xcite we have presented an overview of recent developments in positron scattering theory on several atoms and molecular hydrogen with particular emphasis on two centre calculations that are able to explicitly treat positronium formation while considerable progress has been made there remain some major discrepancies between theory and experiment such as for low energy positron sodium scattering considerable technical development is still required for complicated atoms such as the heavier inert gases to incorporate ps formation in a systematically convergent way a general scheme for doing so for molecular targets is also required it is also important to state that there are still fundamental issues to be addressed in the case of breakup while no overcompleteness has been found in two centre calculations with near complete bases on both centres determining the resulting differential cross sections remains an unsolved problem xcite considered the simplest energy differential cross section which describes the probability of an electron of a certain energy being ejected if we have contributions to this process from both centres how do we combine them furthermore diagonalising the ps hamiltonian yields pseudostates of positive energy ps rather than the energy of the individual electron or positron it may be that the only practical way to obtain differential ionisation cross sections is to restrict the ps centre to just negative energy eigenstates and obtain differential cross sections solely from the atomic centre positive energy pseudostates as is done for electron impact ionisation xcite however this kind of approach may not be capable of describing the phenomenon of ps formation in the continuum as seen by xcite this is currently under investigation support of the australian research council and the merit allocation scheme on the national facility of the australian partnership for advanced computing is gratefully acknowledged this work was supported by resources provided by the pawsey supercomputing centre with funding from the australian government and the government of western australia
acknowledges partial support from the us national science foundation under award no phy1415656
much progress in the theory of positron scattering on atoms has been made in the ten years since the review of xcite we review this progress for few electron targets with a particular emphasis on the two centre convergent close coupling and other theories which explicitly treat positronium ps formation while substantial progress has been made for ps formation in positron scattering on few electron targets considerable theoretical development is still required for multielectron atomic and molecular targets
introduction theory recent applications of the ccc theory to positron scattering antihydrogen formation concluding remarks
currently a number of intense mid infrared light sources are being developed xcite spurred on by their uses in sub attosecond pulse generation xcite strong field holography xcite and laser induced electron diffraction xcite the low frequency and high intensity of these new sources mean that the tunneling picture is an appropriate framework for describing how these light sources interact with atoms and molecules in this work we deal with the process of dissociative tunneling ionization in molecules where a static electric field tunnel ionizes an electron after which the nuclei dissociate to our knowledge this is the first work on the theory of dissociative tunneling ionization in the theory we treat the nuclear and electronic degrees of freedom on an equal footing and fully quantum mechanically the reflection principle xcite is often used to describe the process of dissociative ionization this principle can be applied within the framework of the born oppenheimer bo approximation to relate the nuclear kinetic energy release ker spectrum to the nuclear wave function it was formulated as early as 1928 xcite and later put on a more rigorous foundation xcite see also references therein for a list of early uses in time dependent cases where the time scale of the electric field is shorter than that of nuclear motion one assumes that the electrons make an instantaneous franck condon transition to a dissociative electronic state the probability distribution in the new electronic state is then the absolute value squared of the initial nuclear wave function times some dipole coupling factor in case this dipole coupling factor is almost constant the wave packet that enters the dissociative state is practically identical to the initial nuclear wave function classical energy conservation then dictates that the nuclear ker spectrum can be obtained by reflecting the nuclear wave function in the dissociative potential curve this is the regime considered mostly in the literature in the time independent tunneling case the electronic ionization rate takes the same role as the dipole coupling factor in the time dependent case that is it multiplies the nuclear wave function before it enters the continuum however this electronic rate has an exponential dependence on the internuclear coordinate and can by no means be considered constant as was also pointed out in ref it is therefore essential to consider the effect of this additional factor on the ker spectrum the spectrum cannot be found simply by reflection of the nuclear wave function imaging of the nuclear wave function is made possible through the reflection principle by applying it in reverse on a measured ker spectrum xcite this is often referred to as coulomb explosion imaging in the tunneling case the exponential dependence of the electronic rate on the internuclear coordinate means that the product of the electronic rate and the nuclear wave function is essentially different from the bare nuclear wave function and the electronic rate therefore needs to be included to image the nuclear wave function based on the ker spectrum in ref xcite it was demonstrated that the bo approximation breaks down for weak fields in this case the weak field asymptotic theory wfat xcite provides us with accurate results for the ker spectrum the paper is organized as follows in sec sec theory the theory for dissociative tunneling ionization of homonuclear molecules is outlined we derive an exact expression for the ker spectrum and a corresponding expression in the bo approximation
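a minimal numerical sketch of this rate weighted reflection principle is given below it uses a hypothetical monotonic dissociative curve a gaussian nuclear density and a rate that grows exponentially with the internuclear distance so that the spectrum no longer peaks at the energy corresponding to the maximum of the bare nuclear density all functions and parameters are illustrative only and do not reproduce the actual calculations of this paper

import numpy as np
from scipy.optimize import brentq

def reflected_spectrum(e_grid, u, du_dr, chi2, rate, r_min=0.1, r_max=50.0):
    # p(e) ~ rate(r_t) * chi2(r_t) / |u'(r_t)| with the turning point u(r_t) = e (monotonic u assumed)
    spectrum = []
    for e in e_grid:
        r_t = brentq(lambda r: u(r) - e, r_min, r_max)
        spectrum.append(rate(r_t) * chi2(r_t) / abs(du_dr(r_t)))
    return np.array(spectrum)

u = lambda r: 1.0 / r                           # model dissociative curve (atomic units)
du_dr = lambda r: -1.0 / r ** 2
chi2 = lambda r: np.exp(-(r - 2.0) ** 2 / 0.2)  # model nuclear probability density peaked at r = 2
rate = lambda r: np.exp(0.8 * r)                # model tunnelling rate increasing with r
e_grid = np.linspace(0.05, 0.9, 200)
p = reflected_spectrum(e_grid, u, du_dr, chi2, rate)
print("spectrum peaks at e =", e_grid[np.argmax(p)], "while u at the density maximum is", u(2.0))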
section sec1d calculation exemplifies the theory with numerical reduced dimensionality calculations numerically exact ker spectra are compared to ker spectra obtained in the bo approximation using the reflection principle imaging of the nuclear part of the wave function from the ker spectrum is demonstrated section sec conclusion outlook concludes the paper atomic units xmath0 are used throughout we consider a three body system consisting of two heavy nuclei with masses xmath1 and charges xmath2 and one electron with mass xmath3 and charge xmath4 in the center of mass frame these have coordinates xmath5 and xmath6 fulfilling xmath7 let us introduce the reduced masses xmath8 effective charge xmath9 and jacobi coordinates xmath10 we assume that the orientation of the internuclear axis xmath11 is fixed in space we also assume that the field is directed along the xmath12 direction xmath13 and choose to consider xmath14 for definiteness due to the azimuthal symmetry of the molecule only the polar angle xmath15 between xmath11 and xmath16 matters this xmath15 angle takes the role of an external parameter and we omit explicit reference to it in the following with these assumptions we can write the time independent schrödinger equation se within the single active electron approximation as xmath17 \psi(\mathbf{r},r) = 0 (eq schrodinger) where the effective xmath18 potential describes how the nuclei interact with each other and the effective xmath19 potential describes how the electron interacts with the nuclei for a system with several electrons the xmath18 potential represents the bo potential of the system without the active electron in this work we assume that xmath18 is monotonically decreasing ie it corresponds to a purely dissociative bo curve we assume that the nuclei cannot pass through each other this gives the boundary condition xmath20 and we consider eq eq schrodinger in the interval xmath21 we also impose outgoing wave boundary conditions in the electronic coordinate xmath22 the exact form of these will be specified below with these boundary conditions the wave function we seek as a solution of eq eq schrodinger is a siegert state xcite with a complex energy xmath23 where xmath24 is the ionization rate and it is normalized by xmath25 the outgoing wave boundary condition in the electronic coordinate means that the solution we seek to eq eq schrodinger describes tunneling of the electron this tunneling is followed by dissociation of the nuclei for the considered class of strictly dissociative potentials xmath18 in the following we will describe the energy distribution of the dissociated nuclei our aim is to describe the energy distribution of the nuclei ie the ker spectrum after the molecule is ionized by tunneling of the electron to this end we need to consider the problem in the xmath26 limit where the electron is far away from the nuclei in this limit we assume that the electron nuclear interaction potential takes the form xmath27 where xmath28 is the total charge of the remaining core system this assumption makes our problem separable in electron and nuclear coordinates in this asymptotic region by seeking the partial solutions in the form xmath29 eq eq schrodinger can be written as the separated equations xmath30 f(\mathbf{r};k) = 0 (eq asxeq) and \left( -\frac{1}{2m}\,\frac{d^2}{dr^2} + u(r) - e_r \right) g(r;k) = 0 (eq asreq) with separation constants given by xmath31 where we assume xmath32 and xmath33 is the wave number for the state xmath34 equation eq zerobc amounts to xmath35 we choose the continuum
solutions of eq eq asreq to be real and normalized by xmath36 the conditions eqs eq zerobcgeq gnorm completely specify the nuclear problem eq eq asreq the electronic problem eq eq asxeq has a potential consisting of a coulomb term and a linear field term this problem is separable in parabolic coordinates xcite which we will therefore use first we introduce mass scaled quantities xmath37 then the following form of the parabolic coordinates is introduced as in ref xcite eq parabcoord xmath38 with this choice of coordinates a potential barrier forms in the xmath39 coordinates and therefore xmath39 takes the role as the tunneling coordinate in the asymptotic region xmath40 eq eq asxeq has a solution that is a linear combination of partial solutions of the form xcite xmath41 where the outgoing wave xmath42 is given by xmath43rightlabeleq fetaendaligned xmath44 is the ionization channel function defined by xmath45 phinuxivarphi betanu phinuxivarphilabeleq xiphiadeqendaligned and xmath46 is a set of parabolic quantum numbers labeling the different ionization channels see fig fig parabcoord with our choice of xmath14the potential in eq eq xiphiadeq goes to infinity as xmath47 goes to infinity so the parabolic channels xmath48 are purely discrete the gray paraboloid is the same for a smaller value of xmath39 the electron is ionized in the negative xmath12 direction due to its negative charge given that the electric field points in the positive xmath12direction the xmath44 states eq eq xiphiadeq live in the constant xmath39 paraboloids the colors in the blue red surface illustrates an example of the nodal structure of such a xmath44 state the curvature of the paraboloids means that the xmath44 states are bound this means that xmath39 is the only coordinate where we have to consider the wave function at infinity ie xmath39 is the tunneling coordinate for large xmath39the polar angle xmath15 which specifies the orientation of the molecule does not matter for the asymptotic form of the wave function in parabolic coordinates though it matters for the size of the coefficients eq eq spectrumampldef the full wave function can be expressed as a linear combination discrete in xmath48 continuous in xmath49 of the xmath50 products xmath51 where the asymptotic expansion coefficient xmath52 can be calculated by xmath53 xmath54 indicates integration wrt the coordinates xmath47 and xmath55 over their full range note that the polar angle xmath15 which we suppressed in the notation only enters eq eq spectrumampldef through the wave function xmath56 the ker dissociation spectrum into the channel xmath48 is defined in terms of these expansion coefficients by xmath57 this is the main observable of interest by inserting eq eq feta and eq eq spectrumampldef and assuming xmath58 to be real which is approximately the case for small xmath59 we obtain xmath60 the exact ker spectrum in the channel xmath48 can thus be obtained by projecting the wave function on the channel state xmath44 and further projecting this on the continuum states xmath34 of the xmath18 potential the total ker spectrum can then be obtained by summing over all the channels xmath61 in the xmath62 limit the total rate can be obtained by xmath63 now that we have a recipe for finding the exact ker spectrum we consider some approximations for ease of predictions and gain in physical insight we first consider the bo approximation which appears in the limit xmath64 in this limit xmath65 andthe wave function takes the form xmath66 the electronic and nuclear part 
of bo wave function fulfills the bo equations xmath67 psiemathbfrr 0labeleq boeleceq left frac12 m frac d2 d r 2 ureerf etextbofright chir 0labeleq bonuceq endaligned we impose zero boundary condition for the nuclear wave function xmath68 and the following normalizations xmath69 in the asymptotic limit xmath40 the electronic eq eq boeleceq takes the same form as eq eq asxeq and it can be written in parabolic coordinates in the same manner the electronic wave function then takes the outgoing wave form xmath70 where xmath42 is from eq eq feta and xmath44 are solutions of eq eq xiphiadeq with xmath58 replaced by xmath71 in both the asymptotic coefficient xmath72 defines the ionization amplitude in channel xmath48 xcite the partial electronic ionization rate is given by xmath73 by considering the flux of the electron probability through a surface at large negative xmath12 one can show that in the weak field limit the total electronic rate xmath74 is given as a sum over xmath48 of all the partial electronic rates by inserting the bo wave function into eq eq specexpr we obtain xmath75 the separation of electronic and nuclear coordinates in the bo approximation means that this expression for the ker spectrum does not contain any explicit reference to electronic coordinates as opposed to the expression for the exact spectrum eq eq specexpr equation eq specbo is similar to a result previously put forward in the literature eq 1 of ref xcite except that the correct complex ionization amplitude xmath72 was taken as xmath76 in the caseswe have considered the phase variations of xmath72 are sufficiently small that they can be safely neglected explaining the successful use of the aforementioned replacement in ref xcite but this is not generally true to evaluate the integral in eq eq specbo we will use the reflection principle xcite at the heart of the reflection principlelies an important mathematical component which we denote the reflection approximation xcite this approximation amounts to setting xmath77 which is exact in the xmath78 limit in eq eq gasdelt xmath79 is the classical turning point potentials so there is only one classical turning point for the xmath34 function defined by xmath80 in order to determine the derivative xmath81 the form of the dissociative xmath18 potential must be known inserting eq eq gasdelt in eq eq specbo yields xmath82 this result shows that using the reflection approximation in conjunction with the bo approximation we obtain a ker spectrum that is expressed as a product of a jacobian factor the electronic rate and the field dressed nuclear wave function eq eq bonuceq this is a lot simpler to calculate than evaluating either integrals in eqs eq specexpr or eq specbo and is easily reversed to give a way to image the field dressed nuclear wave function and it is applicable to any molecule with a dissociative bo curve the exact electronic rate xmath83 is often not available since finding it requires solving the electronic problem eq eq boeleceq which is a highly non trivial task for many systems in such cases the weak field asymptotic theory wfat xcite can be employed to obtain the rate wfat is an analytic theory which expresses the ionization rate in terms of properties of the field free state it is applicable in the weak field limit let xmath84 and xmath85 denote the adiabatic eigenvalues and eigenfunctions solving eq eq xiphiadeq for xmath86 with the field free electronic energy xmath87 replacing xmath58 ref xcite provides analytic expressions for these quantities in 
terms of these the asymptotic field free electronic wave function can be written xmath88 where xmath89 the electronic wfat rate is then given by xcite xmath90 where the field factor xmath91 is defined by xmath92 and the asymptotic coefficients xmath93 can be found from the electronic wave function by inversion of eq eq fieldfreeelecwf xmath94 wfat can also be applied for the exact state and not just in the bo approximation as above in this section we will give the pertaining formulas let as before xmath95 and xmath96 denote the adiabatic eigenvalues and eigenfunctions solving eq eq xiphiadeq for xmath86 now with the field free energy xmath97 in terms of these the asymptotic field free wave function can be written xcite xmath98 where xmath99 the wfat xcite yields the following expression for the ker spectrum xmath100 where the field factor xmath101 is given by xmath102 and the field free asymptotic coefficients xmath103 can be found by inversion of eq eq fieldfreewf xmath104 solving eq eq schrodinger in 3d is a computationally heavy task so we have used a 1d model to illustrate our central points in this section we compare exactly calculated ker spectra with those obtained through the bo approximation eq eq specresultbo and the wfat eq eq specwfat within this 1d model in the following we will consider a model of hxmath105 as an example the potentials we consider are thus xmath106 with xmath107 and xmath108 the interaction between the nuclei and the electrons xmath109 is described by a soft core coulomb potential the function xmath110 is chosen in such a way that the bo potential of this model potential reproduces the bo potential energy curve of 3d hxmath105 xcite we use the method described in ref xcite to solve the 1d equivalent of eq eq schrodinger given by xmath111 \psi(z,r) = 0 (eq schrodinger1d) in the 1d model the index xmath48 which describes what happens in the paraboloids of constant xmath39 transversal to xmath12 is of no meaning and it hence does not appear in any of the 1d equivalents of the 3d equations the 1d equivalent of the exact ker spectrum eq eq specexpr is xmath112 equations eq specbo and eq specresultbo apply to the 1d case with appropriately redefined quantities the wfat expressions eqs eq ratewfatelec eq fieldfactorelec and eq specwfat eq fullfieldfactor are the same in the 1d case but the asymptotic coefficients are now found from xmath113 and xmath114 in eq eq1dasympcoeff xmath115 denotes the field free 1d electronic bo wave function light blue shaded area in the lower xmath116 bo curve is multiplied by the electronic rate xmath117 dashed purple line and reflected in the dissociative xmath18 bo curve to give a ker spectrum solid blue line in upper right corner eq eq specresultbo using the relation xmath118 to translate xmath49 into xmath119 this is compared to the exact ker spectrum xmath120 red dashed line eq eq specexpr1d a field strength of xmath121 was used for this calculation the solid gray line in the lower part of the figure shows the field free nuclear wave function xmath122 the surface plot in the upper part of the figure shows the continuum states xmath34 of the xmath18 potential these are solutions of eq eq asreq figure fig spec1dground illustrates how the bo approximation can be used in conjunction with the reflection principle to determine the ker spectrum the figure shows a calculation for the ground state of the hxmath105 model at xmath121 the field dressed nuclear wave function xmath123 is multiplied by the electronic rate xmath117
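the size of the resulting shift of the maximum is easy to see in a toy model the sketch below multiplies a gaussian nuclear density by an exponentially increasing rate and locates the new maximum for a gaussian of variance s squared and a rate exp(c r) the peak moves from r0 to r0 plus c times s squared all numbers are illustrative only

import numpy as np

r = np.linspace(0.0, 6.0, 20001)
r0, s, c = 2.0, 0.3, 1.5                          # hypothetical centre, width and rate exponent
chi2 = np.exp(-(r - r0) ** 2 / (2.0 * s ** 2))    # model nuclear density
rate = np.exp(c * r)                              # model electronic rate growing with r
product = chi2 * rate
print("bare density peaks at", r[np.argmax(chi2)])               # about r0 = 2.0
print("rate weighted product peaks at", r[np.argmax(product)])   # about r0 + c*s**2 = 2.135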
the exponential dependence of the electronic rate xmath117 on the internuclear coordinate means that the product xmath124 see eq eq specresultbo has its maximum at a value of xmath125 which is significantly different from the maximum of the bare nuclear wave function at xmath126 this in turn means that the transition to the continuum which is determined by the product xmath124 and not the bare nuclear wave function is far from vertical in xmath119 with respect to the initial nuclear wave function and the spectrum peaks at a lower energy around xmath127 and not at xmath128 using wfat within the bo approximation we can make a statement about in which direction the maximum of the spectrum shifts when the field is varied in these approximations the main dependence of the electronic rate on the field is contained in the exponent xmath129 see eq eq fieldfactorelec the electronic energy xmath87 in terms of which xmath130 is defined generally depends very much on the system considered in the case of hxmath105 it is a monotonically increasing function of xmath119 since when the two potential wells around each of the nuclei start to overlap the electron is more tightly bound this in turn means that the electronic rate is an increasing function of xmath119 as can also be seen in fig fig spec1dground when the strength of the field increases the exponent xmath129 grows but at the same time the slope of this exponent with respect to xmath119 decreases since xmath131 is multiplied by a smaller number the smaller slope means that the location of the maximum of the product xmath124 is shifted less from the maximum of xmath123 as the field strength increases and conversely as the field strength is decreased the maximum of the product xmath124 is shifted more towards larger xmath119 these shifts are directly reflected in the spectrum which is given as the reflection of the xmath124 product in the bo and reflection approximations figure fig spec1d shows ker spectra obtained using as initial state the first vibrationally excited state of hxmath105 we have chosen to show these results as they are for the lowest state with a non trivial nodal structure in xmath119 in the figure two different field strengths are considered in the top panel we see that the nodal structure of the nuclear wave function is reflected in the ker spectrum although one peak is a lot larger than the other this asymmetry can be understood in the bo approximation see eq eq specresultbo as due to the fact that the electronic rate xmath117 has an exponential dependence on xmath119 in the wfat it can be understood as resulting from the exponential dependence of the field factor eq eq fullfieldfactor on xmath49 for the lower field strength the structures at xmath132 are not visible as the ker spectrum falls below the numerical precision limit of our calculation eq specexpr1d dashed dotted blue line bo combined with reflection principle eq eq specresultbo short dashed green line wfat 1d equivalent of eq eq specwfat the insets show the normalized ker spectra on a linear scale the critical field for use of bo eq eq fbo is for hxmath105 xmath133 a xmath134 and xmath135 b xmath136 and xmath137 for the large field strength fig fig spec1da we see that the bo ker spectrum has a shape much closer to the exact ker spectrum than for the lower field strength also the maximum value of the bo ker spectrum is more than an order of magnitude closer to the maximum value of the exact ker spectrum for the larger field strength this can be understood on the basis of the retardation
argument provided in ref xcite the bo approximation is expected to hold as long as the electron is close enough to the nuclei that the time it takes for the electron to go to its present location from the nuclei is shorter than the time it takes for the nuclei to move a typical electron velocity can be estimated as xmath138 where xmath139 is the equilibrium internuclear distance which for hxmath105 is xmath140 a typical time scale for the nuclear motion can be estimated as xmath141 where xmath142 is obtained by expanding the bo potential around xmath139 to second order xmath143 using these estimates ref xcite defines a critical distance xmath144 such that for xmath145 we expect bo to work well while for xmath146 we expect it to break down since the magnitude of the wave function is essentially unchanged after the tunneling the bo approximation is expected to work well when the outer turning point is within this xmath147 distance so a critical field xmath148 can be estimated such that the bo approximation is expected to give good results for larger fields but fail for smaller fields the two field strengths of fig fig spec1d lies on either side of this critical field which for the system under consideration is xmath133 as we increase the field strength further the bo gives even better results for the lower field strength where bo fails we can apply the wfat see sec sec fullwfat in fig fig spec1d we see that the shape of the wfat ker spectrum indeed is closer to the exact ker spectrum than the bo ker spectrum for the weaker field strength and it is also closer in magnitude to the maximum value for the larger field strengththe wfat ker spectrum is further from the exact ker spectrum in both shape and magnitude upper right corner eq eq specexpr1d at xmath121 the magnitude of the asymptotic wave function has been found by reversing the reflection principle giving xmath149 using the relation xmath118 to translate xmath49 into xmath119 from this the field dressed nuclear wave function has been imaged by dividing with the electronic rate xmath117 and normalizing in the lowest part of the plot the short dashed purple line shows this imaging using the exact electronic rate xmath74 the long dashed red line shows it using the bo wfat approximation xmath150 eq eq ratewfatelec the solid gray line shows the field free nuclear wave function xmath122 the shaded light blue area shows the field dressed nuclear wave function xmath123 the surface plot in the upper part of the figure shows the continuum states xmath34 the field dressed nuclear wave function can be imaged from a measurement of the ker spectrum by inverting eq eq specresultbo for fields sufficiently large that the bo approximation applies to demonstrate thiswe have taken the exact ker spectrum from our calculation at xmath121 for the first vibrationally exited state and divided it by the jacobian factor and the electronic rate to obtain an image of the nuclear density since an experimental ker spectrum is typically not known on an absolute scale we have then normalized this quantity in a calculation on a more complicated systemthan the one considered here the exact electronic rate is often not available so we also show the result using the wfat approximation for the electronic rate eq eq ratewfatelec the results are compared to the nuclear wave function known from the calculation in our model in fig fig reconstructedchi they do not agree perfectly but the nodal structure is correctly reproduced for smaller field strengthswhere the bo is not 
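An order-of-magnitude sketch of the retardation estimates behind the critical BO distance and field discussed above. The specific combinations used below (z_BO as velocity times vibration period, F_BO from placing the tunnel exit at z_BO) and all numerical inputs are assumptions patterned on the argument, not the paper's exact definitions.

```python
import numpy as np

mu = 918.0                       # proton-proton reduced mass (a.u.)
E_e = -0.6                       # model electronic binding energy at equilibrium (a.u.)
k = 0.10                         # model curvature d^2U/dR^2 of the BO potential (a.u.)

v_e = np.sqrt(2.0 * abs(E_e))            # typical electron velocity
T_nuc = 2.0 * np.pi / np.sqrt(k / mu)    # nuclear vibration period
z_BO = v_e * T_nuc                       # distance covered by the electron in one period
F_BO = abs(E_e) / z_BO                   # field putting the tunnel exit |E_e|/F at z_BO

print("z_BO ~ %.0f a.u., F_BO ~ %.1e a.u." % (z_BO, F_BO))
# Fields well above F_BO keep the tunnel exit inside z_BO, where the BO treatment
# of the departing electron is expected to remain accurate.
```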
applicable this type of imaging is not possible the ker spectrum however does give us access to the asymptotic wave function as it is the norm square of the expansion coefficients of this see eq eq wfexpansion for the cases we have looked at the phase of the asymptotic coefficient xmath151 varies very little over the range where it has support in our model we have access to the full wave function and this we show in fig fig wf the imaging through the 1d equivalent of eq eq wfexpansion would only give access to the part at large negative xmath152 in the classically allowed region at large negative xmath152 the maximum of the wave function follows a classical trajectory this is a prediction of the wkb theory which applies as long we are not too close to the turning line the classical trajectories can be found using newton s second law xmath153 eq clastrajnewton a tempting choice of initial condition for the differential eqs eq clastrajnewton would be to choose the xmath154 values at the intersection of the outer turning line and the maximum ridge of the wave function with zero velocity in both xmath152 and xmath119 direction however the wkb fails near the turning line and therefore we can not expect the wave function to follow a classical trajectory here instead we have chosen as initial condition some point at the maximum of the wave function at a large negative xmath152 value away from the turning line the influence of the xmath155 potential can be neglected for sufficiently large negative xmath152 in this region we can write the separated energy conservation equations xmath156 eq clastrajenergy the initial velocities have then been determined from eqs eq clastrajenergy using the real part of the total quantum energy for xmath157 and the xmath49 at which the ker spectrum xmath120 eq eq specexpr1d peaks the classical trajectories shown in fig fig wf were found using such initial conditions and then propagated inwards from fig fig wf it can be seen that contrary to the exact wave function the position of the ridge of the bo wave function in xmath119 does not change with xmath152 this is expected as the bo approximation appears in the limit of infinite nuclear mass so classical motion in the nuclear coordinate is not possible the asymptotic wave function that we can image using eq eq wfexpansion is therefore a non bo wave function it might seem strange that the bo is able to give the correct ker spectrum when the spectrum is the norm square of the expansion coefficients of the asymptotic wave function and the bo gives a wrong description of this asymptotic wave function however the fact that the bo wave function does not obtain a probability current or velocity in the classical picture in the xmath119direction does not alter its projection on the continuum states the important point is whether the bo wave function is similar to the exact wave function as it emerges at the outer turning line after tunneling and this is the case if the turning line is within the critical bo distance xmath147 eq eq zbo in fig fig wf we also see that for the larger field strength the tunneling is completed before the critical bo distance is reached contrary to at the smaller field strength we see that for the large field strength the electronic and full turning lines agree quite well in the region where most of the wave function is localized but for the smaller field strength they do not for xmath158 solid purple lines full turning lines xmath159 the long dashed red line shows for each xmath12 the xmath119 at 
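A sketch of the classical-trajectory propagation used to follow the wave-function ridge in the classically allowed region. The starting point, the assumed asymptotic KER, and the neglect of the electron-nucleus attraction at large negative z are illustrative assumptions in the spirit of the separated energy-conservation relations mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 918.0                     # proton-proton reduced mass (a.u.)
F = 0.06                       # assumed field strength (a.u.)

def rhs(t, y):
    z, vz, R, vR = y
    az = -F                    # electron accelerated down-field (sign convention)
    aR = 1.0 / (mu * R ** 2)   # Coulomb explosion of the two protons
    return [vz, az, vR, aR]

E_ker = 0.40                   # assumed asymptotic kinetic energy release (a.u.)
z0, R0 = -40.0, 2.8            # assumed ridge point well away from the turning line
vz0 = -np.sqrt(2.0 * F * abs(z0))              # electron KE gained from the field
vR0 = np.sqrt(2.0 * (E_ker - 1.0 / R0) / mu)   # nuclear energy conservation: KE + 1/R = E_ker

sol = solve_ivp(rhs, (0.0, 300.0), [z0, vz0, R0, vR0], max_step=1.0)
print("final z = %.1f a.u., final R = %.2f a.u." % (sol.y[0, -1], sol.y[2, -1]))
```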
which the wave function xmath160 has its maximum the solid pink line shows a classical trajectory eq eq clastrajnewton the black dot at the end of the classical trajectory is the exit point xmath161 determined from the maximum of the spectrum xmath162 see main text the short dashed lines are the simple straight line estimates for the tunneling and initial classical motion described around eq eq directions one can notice that a phenomenon reminiscent of light refraction occurs for the wave function around the turning line in fig fig wf it is evident that the direction in which the maximum of the wave function moves changes noticeably at the turning line when the wave function escapes from the classically forbidden tunneling region into the classically allowed region the change of direction is due to the two different types of motion involved when the wave function emerges from the tunneling region it has essentially zero average velocity in the xmath119 direction this means that we can apply the reflection principle in reverse on the spectrum to find the xmath163 coordinate at which the maximum of the wave function emerges from the tunneling region by the relation xmath164 where xmath162 is the value of xmath49 for which the spectrum xmath120 has its maximum the xmath12 value corresponding to this xmath163 can then by found by considering the turning line xmath165 in fig fig wfmax we see that near the turning line the location of the wave function ridge differs from the classical trajectory this is expected since the prediction that the wave function ridge should follow a classical trajectory comes from wkb theory which fails near the turning line nevertheless we can roughly describe the dissociative tunneling ionization process in two steps first the system tunnels from the central region around xmath166 to the exit point xmath161 this motion can roughly be described by a straight line from the maximum of the nuclear wave function xmath123 that has the largest xmath119 value since this is the maximum that will dominate the tunneling to the exit point notice that this tunneling is not simply the electron tunneling out but a correlated process involving both the electronic and nuclear degrees of freedom in the classically allowed regionthe initial direction of the wave function from the exit point can be found from the classical trajectory the initial slope of the classical trajectory that starts at the exit point xmath161 with zero velocity in both xmath12 and xmath119 directions can be found to be xmath167 this is not exactly the trajectory that describes the motion of the wave function ridge but it is quite close these two directions are different as they come from different types of motion and hence we see the refraction like phenomenon at the turning line we have formulated theory for the dissociative tunneling ionization process and derived exact formulas for the ker spectrum as well as approximations in the framework of the bo and reflection approximations we have demonstrated that the reflection principle can be used in conjunction with the bo approximation to image the field dressed nuclear wave function from the ker spectrum for weaker fields where the bo approximation fails the wfat can be used to find the ker spectrum we have also demonstrated a qualitative difference between asymptotic bo and exact wave functions as the latter shows classical motion in the nuclear coordinate whereas the former does not move at all due to the infinite nuclear mass of the bo approximation around 
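A small worked example of the two-step picture summarized above: the exit point read off the spectrum maximum and the initial direction from the ratio of the two forces. The Coulomb mapping R_exit = 1/E_max, the exit coordinate |E_e|/F, and the numerical inputs are assumptions for illustration.

```python
mu = 918.0            # reduced nuclear mass (a.u.)
F = 0.06              # assumed field strength (a.u.)
E_max = 0.35          # assumed KER at which the spectrum peaks (a.u.)
E_e = -0.6            # assumed electronic binding energy near the exit (a.u.)

R_exit = 1.0 / E_max                     # nuclear coordinate of the exit point
z_exit = -abs(E_e) / F                   # rough field-direction exit coordinate
slope = (1.0 / (mu * R_exit ** 2)) / F   # dR/dz for motion starting from rest
print("exit point (z, R) ~ (%.1f, %.2f) a.u., initial dR/dz ~ %.2e" % (z_exit, R_exit, slope))
# The slope follows from R(t) ~ R_exit + a_R t^2/2 and z(t) ~ z_exit + a_z t^2/2,
# so dR/dz equals the acceleration ratio a_R / a_z for motion from rest.
```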
the turning line the wave function exhibits a behavior similar to refraction of light this work was supported by the erc stg project no 277767tdmet and the vkr center of excellence quscope the numerical results presented in this work were performed at the centre for scientific computing aarhus httpphysaudk forskning cscaa o i t acknowledges support from the ministry of education and science of russia state assignment no 36792014k
we present a theoretical study of the dissociative tunneling ionization process analytic expressions for the nuclear kinetic energy distribution of the ionization rates are derived a particularly simple expression for the spectrum is found by using the born oppenheimer bo approximation in conjunction with the reflection principle these spectra are compared to exact non bo ab initio spectra obtained through model calculations with a quantum mechanical treatment of both the electronic and nuclear degrees of freedom in the regime where the bo approximation is applicable imaging of the bo nuclear wave function is demonstrated to be possible through reverse use of the reflection principle when accounting appropriately for the electronic ionization rate a qualitative difference between the exact and bo wave functions in the asymptotic region of large electronic distances is shown additionally the behavior of the wave function across the turning line is seen to be reminiscent of light refraction for weak fields where the bo approximation does not apply the weak field asymptotic theory describes the spectrum accurately
introduction, theory, illustrative 1d calculations, conclusion
in our solar system zodiacal dust grains are warm xmath4150k and found within xmath23au of the sun slow but persistent collisions between asteroids complemented by material released from comets now replenish these particles similar warm dust particles around other stars are also expected and would be manifested as excess mid infrared emission the implication of warm excess stars for the terrestrial planet building process has prompted many searches including several pointed observing campaigns with however a lack of consensus of what constitutes a warm excess has resulted in ambiguity and some confusion in the field for example spitzer surveys with mips revealed a number of stars with excess emission in the 24xmath0 band however very few of these may turn out as genuine warm excess stars because the detected 24xmath0 emission is mostly the wien tail of emission from cold t xmath5 150k dust grains xcite for black body grains xmath6 txmath7rxmath72rxmath8xmath9 where rxmath10 is the distance of a grain from a star of radius rxmath7 and temperature txmath7 due to the dependence of xmath6 on txmath7 and rxmath7 the terrestrial planetary zone tpz around high mass stars extends further out than that around low mass stars therefore rxmath10 is not a good way to define the tpz while dust equilibrium temperature is equally applicable to all main sequence stars in our solar system txmath10 is 150k near the outer boundary of the asteroid belt xmath235au and the zodiacal dust particles are sufficiently large xmath230xmath0 that they do radiate like blackbodies to specify a tpz independent of the mass of the central star we define the tpz to be the region where txmath11 150k then an a0 star has 25au and an m0 star has 09au as the outer boundary of their tpz because of the way it is defined tpz applies only to the location of grains that radiate like a blackbody according to the spitzer surveys listed above the presence of dust in the tpz characterized by excess in the mid ir is quite rare for stars xmath410myrs old for ages in the range of xmath12myr a posited period of the terrestrial planet formation in our solar system only a few stars appear to possess warm dust according to our analysis see xmath13 5 and table 1 xmath14 cha a b8 member of 8 myr old xmath14 cha cluster xcite xmath14 tel hd 172555 a0 and a7 type members of the 12 myr old xmath15 pic moving group xcite hd 3003 an a0 member of the 30 myr old tucana horologium moving group xcite and hd 113766 an f3 binary star 12xmath16 separation xcite in the lower centaurus crux lcc association xcite in this paper we present the a9 star ef cha another example of this rare group of stars with warm dust at the epoch of terrestrial planet formation hipparcos 2mass and mid course experiment msx xcite sources were cross correlated to identify main sequence stars with excess emission at mid ir wavelengths out of xmath268000 hipparcos dwarfs with xmath17 xmath18 60 20 see xcite for an explanation of this xmath17 constraint in a search radius of 10xmath16 xmath21000 stars within 120 pc of earth were identified with potential msx counterparts spectral energy distributions sed were created for all xmath21000 msx identified hipparcos dwarfs observed fluxes from tycho2 xmath19 and xmath20 and 2mass xmath21 xmath22 and xmath23 were fit to a stellar atmospheric model xcite via a xmath24xmath25 minimization method see xcite for detailed description of sed fitting from these sed fits about 100 hipparcos dwarfs were retained that showed apparent excess emission in 
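A quick numerical sketch of the blackbody equilibrium temperature and the resulting outer edge of the terrestrial planetary zone (TPZ), defined here by T_dust = 150 K. The stellar parameters are representative values assumed for an A0 and an M0 dwarf, not values taken from the paper.

```python
import numpy as np

R_SUN_AU = 4.65e-3          # solar radius in au

def t_dust(T_star, R_star_rsun, r_au):
    """Equilibrium temperature of a large blackbody grain at distance r."""
    return T_star * np.sqrt(R_star_rsun * R_SUN_AU / (2.0 * r_au))

def tpz_outer_radius(T_star, R_star_rsun, T_zone=150.0):
    """Distance at which T_dust drops to T_zone (outer edge of the TPZ)."""
    return 0.5 * R_star_rsun * R_SUN_AU * (T_star / T_zone) ** 2

for name, T_star, R_star in (("A0", 9800.0, 2.5), ("M0", 3900.0, 0.6)):
    print("%s: TPZ outer edge ~ %.1f au" % (name, tpz_outer_radius(T_star, R_star)))
print("check: T at 25 au around the A0 model star = %.0f K" % t_dust(9800.0, 2.5, 24.9))
```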
the msx 8xmath0 band that is the ratio msx flux photosphere flux msx flux uncertainty must be xmath26 30 since a typical positional 3xmath27 uncertainty of msx is xmath26xmath16 xcite and msx surveyed the galactic plane a careful background check is required to eliminate contamination sources by over plotting the 2mass sources on the digital sky survey dss images we eliminated more than half of the apparent excess stars that included any dubious object ie extended objects extremely red objects etc within a 10xmath16 radius from the star among the stars that passed this visual check ef cha was selected for follow up observations at the gemini south telescope independent iras detections at 12 and 25xmath0 made ef cha one of the best candidates for further investigation an n band image and a spectrum of ef cha were obtained using the thermal region camera spectrograph t recs at the gemini south telescope in march and july of 2006 gs2006a q10 respectively thanks to the queue observing mode at gemini observatory the data were obtained under good seeing and photometric conditions the standard beam switching mode was used in all observations in order to suppress sky emission and radiation from the telescope data were obtained chopping the secondary at a frequency of 27 hz and noddding the telescope every xmath230sec chopping and noddingwere set to the same direction parallel to the slit for spectroscopy standard data reduction procedures were carried out to reduce the image and the spectrum of ef cha at n band raw images were first sky subtracted using the sky frame from each chop pair bad pixels were replaced by the median of their neighboring pixels aperture photometry was performed with a radius of 9 pixels 09xmath16 and sky annuli of 14 to 20 pixels the spectrum of a standard star hd 129078 was divided by a planck function with the star s effective temperature 4500k and this ratioed spectrum was then divided into the spectrum of ef cha to remove telluric and instrumental features the wavelength calibration was performed using atmospheric transition lines from an unchopped raw frame the 1d spectrum was extracted by weighted averaging of 17 rows for the n band imaging photometry the on source integration time of 130 seconds produced s n xmath26 30 with fwhm xmath2054xmath16 for the n band spectrum a 886 second on source exposure resulted in s n xmath26 20 a standard star hip 57092 was observed close in time and position to our target and was used for flux calibration of the n band image of ef cha for spectroscopy hd 129078 a kiii 25 star was observed after ef cha at a similar airmass and served as a telluric standard while our paper was being reviewed spitzer multiband imaging photometer for spitzer mips archival images of ef cha at 24 and 70xmath0 were released from the gould s belt legacy program led by lori allen ef cha was detected at mips 24xmath0 band but not at mips 70xmath0 band we performed aperture photometry for ef cha at 24xmath0 on the post bcd image produced by spitzer science center mips pipeline we used aperture correction of 1167 for the 24xmath0 image given at the ssc mips website httpsscspitzercaltechedumipsapercorr with aperture radius of 13and sky inner and outer annuli of 20 and 32 respectively for mips 70xmath0 data we estimated 3 xmath27 upper limits to the non detection on the mosaic image that we produced using mopex software on bcd images table 2 lists the mid ir measurements of ef cha from msx iras mips and gemini t recs observations the t recs n band image fov of 
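A sketch of the mid-IR excess selection described above: a star is retained when its MSX flux exceeds the fitted photospheric flux by at least three times the measurement uncertainty. The numbers in the example call are made up purely for illustration.

```python
def excess_significance(f_obs, f_phot, sigma_obs):
    """(observed - photosphere) / measurement uncertainty."""
    return (f_obs - f_phot) / sigma_obs

def has_mid_ir_excess(f_obs, f_phot, sigma_obs, threshold=3.0):
    return excess_significance(f_obs, f_phot, sigma_obs) >= threshold

# illustrative values only: a 200 mJy observed flux, 120 mJy photosphere, 20 mJy error
print(has_mid_ir_excess(200.0, 120.0, 20.0),
      "significance =", excess_significance(200.0, 120.0, 20.0))
```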
288xmath16 xmath28 216 xmath16 confirmed that no other mid ir source appears in the vicinity of ef cha and that the mid ir excess detected by the space observatories iras msx originates solely from ef cha a strong silicate emission feature in the n band spectrum figure 1 indicates the presence of warm small xmath29 xmath1 5xmath0 see figure 6 in xcite dust particles amorphous silicate grains dominate the observed emission feature however crystalline silicate structure probably forsterite appears as a small bump near 113xmath0 xcite polycyclic aromatic hydrocarbon pah particles can also produce an emission feature at 113xmath0 however absence of other strong pah emission features at 77 and 86xmath0 indicates that the weak 113xmath0 feature does not arise from pahs furthermore although pah particles do appear in some very young stellar systems they have not been detected around stars as old as 10 myr in contrast crystalline silicates such as olivine forsterite etc are seen in a few such stellar systems xcite the dust continuum excess of ef cha was fit with a single temperature blackbody curve at 240k by matching the flux density at 13xmath0 and the mips 70xmath0 upper limit figure 1 the 3 xmath27 upper limit at mips 70xmath0 band indicates that the dust temperature should not be colder than 240k figure 1 shows that mips 24xmath0 flux is xmath230mjy lower than iras 25xmath0 flux due to the small mips aperture size compared with iras mips24xmath0 flux often comes out smaller when nearby contaminating sources are included in the large iras beam the ground based t recs 288xmath28216 image of ef cha at n band however shows no contaminating source in the vicinity of ef cha thus the higher flux density at iras 25xmath0 perhaps indicates the presence of a significant silicate emission feature near 18xmath0 included in the wide passband of the iras 25xmath0 filter 185 xmath30 298xmath0 a recent spitzer irs observation of another warm excess star bd20 307 xcite shows a similar discrepancy between the iras 25xmath0 flux and mips 24xmath0 flux in the presence of a significant silicate emission feature at xmath218xmath0 weinberger et al 2007 in preparation consistent with our interpretation mips 24xmath0 flux is slightly above our 240k dust continuum fit the wide red wing of an 18xmath0 silicate emission feature could contribute to a slight increase in mips 24xmath0 flux ef cha was detected in the rosat x ray all sky survey with xmath31 which suggests a very young age for an a9 star see figure 4 in xcite on the basis of this x ray measurement hipparcos distance 106 pc location in the sky ra 12xmath3207xmath33 dec 79xmath32 and proper motion pmra 402xmath3412 pmde 84xmath3413 in mas yr ef cha is believed to be a member of the cha near moving group avg ra 12xmath3200xmath33 avg dec 79xmath32 avg pmra 4113xmath3413 avg pmdec 332xmath34086 in mas yr xcite which is xmath210 myr old and typically xmath290pc from earth large blackbody grains in thermal equilibrium at 240k would be located xmath243au from ef cha while small grains especially those responsible for the silicate emission features in our n band spectrum radiate less efficiently and could be located at xmath2643au recent spitzer mips observations confirmed that all aforementioned xmath13 1 warm excess stars do not have a cold dust population indicating few large grains at large distances xcite lack of cold large grains in turn suggests local origin of the small grains seen in these warm excess stars without cold excess from spitzer mips 70xmath0 data 
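A sketch of where 240 K blackbody grains would sit around EF Cha, using the same equilibrium-temperature relation as in the TPZ estimate above. The stellar radius and effective temperature are assumed representative values for an A9 dwarf, not fitted quantities from the paper.

```python
R_SUN_AU = 4.65e-3
T_star, R_star = 7400.0, 1.9        # K, solar radii (assumed A9 values)
T_dust = 240.0                      # single-temperature blackbody fit (K)

r_dust = 0.5 * R_star * R_SUN_AU * (T_star / T_dust) ** 2
print("blackbody grains at 240 K lie at r ~ %.1f au" % r_dust)
# Smaller grains radiate less efficiently and are hotter at a given distance,
# so the same emission temperature is compatible with material somewhat farther out.
```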
small grains in efcha should originate in the tpz probably by the breakup of large grains in the tpz rather than inward migration from an outer disk even in the unlikely event that silicate emission comes from small grains in an outer disk that were blown away by radiation pressure as in vega xcite the dominant carrier of 240k continuum emission would still be large grains aigen li 2007 private communication the fraction of the stellar luminosity reradiated by dust xmath35 is xmath36 which was obtained by dividing the infrared excess between 7xmath0 and 60xmath0 by the bolometric stellar luminosity this xmath35 is xmath210000 times larger than that of the current sun s zodiacal cloud xmath37 but appears to be moderate for known debris disk systems at similar ages see figure 4 in xcite xcite show that the ratio of dust mass to xmath35 of a debris system is proportional to the inverse square of dust particle semimajor axis for semimajor axes between xmath29au and xmath2100au for systems with dust radius xmath59au this relationship overestimates the dust mass instead we calculate the mass of a debris ring around ef cha using xmath38 eqn 4 in xcite where xmath39 is the density of an individual grain xmath40 is the dust luminosity and xmath41 is the average grain radius because xcite analyzed xmath42 lep a star of similar spectral type to ef cha we adopt their model for grain size distribution assuming rxmath10 43au xmath39 25 g xmath43 and xmath41 3xmath0 the dust mass is xmath44 g xmath210xmath45 xmath46 grains with xmath29 xmath47 3xmath0 will radiate approximately as blackbodies at wavelengths shorter than xmath22xmath48xmath29 xmath220xmath0 as may be seen from figure 1 most of the excess ir emission at ef cha appears at wavelengths xmath120xmath0 for blackbody grains at xmath243au with for example radius xmath29 xmath47 3xmath0 the poynting robertson p r dragtime scale was used for the stellar luminosity of ef cha with a bolometric correction of 0102 xcite absolute visual magnitude of ef cha is 235 based on the hipparcos distance of 106 pc is only xmath49 years much less than 10 myrs yet smaller grains with xmath50 13xmath0 would be easily blown away by radiation pressure on a much shorter time scale successive collisions among grains can effectively remove dust particles by grinding down large bodies into smaller grains which then can be blown out the characteristic collision time orbital periodxmath35 of dust grains at 43au from this a9 star is xmath51 years both pr time and collision timewere derived assuming no gas was present in the disk while gas has not been actively searched for in ef cha few debris disk systems at xmath4 10 myr show presence of gas indicating early dispersal of gas xcite a possibility of an optically thin gas disk surviving around a xmath210 myr system was investigated by xcite however their model of gas disk is pertinent to cool dust at large distances xmath4 120 au but not to warm dust close to the central star as in ef cha in addition a recent study of ob associations shows that the lifetime of a primordial inner disk is xmath1 3myr for herbigae be stars xcite based on the very short time scales of dust grain removal essentially all grains responsible for significant excess emission at ef cha in the mid ir are therefore likely to be second generation not a remnant of primordial dust debris disk systems in the tpz during the epoch of planet formation debris disk systems in the tpz during the epoch of planet formation the presence of hot dust has been recognized 
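An order-of-magnitude sketch of the dust mass and the two removal timescales discussed above (cgs units). The grain density, grain radius, fractional luminosity, stellar luminosity and mass are the assumed representative values, and the mass formula and collision-time estimate follow the standard forms referenced in the text, so the outputs should be read as rough scales only.

```python
import numpy as np

AU, YR, C = 1.496e13, 3.156e7, 3.0e10        # cm, s, cm/s
L_SUN, M_SUN, G = 3.83e33, 1.99e33, 6.67e-8

rho, a_gr = 2.5, 3.0e-4                      # grain density (g/cm^3) and radius (3 micron)
r = 4.3 * AU                                 # orbital distance of the grains
f_dust = 1.0e-3                              # L_IR / L_star
L_star, M_star = 8.0 * L_SUN, 1.7 * M_SUN    # assumed A9 luminosity and mass

M_dust = (16.0 * np.pi / 3.0) * rho * a_gr * r ** 2 * f_dust
t_pr = 4.0 * np.pi * rho * a_gr * C ** 2 * r ** 2 / (3.0 * L_star) / YR
P_orb = 2.0 * np.pi * np.sqrt(r ** 3 / (G * M_star)) / YR
t_coll = P_orb / f_dust

print("M_dust ~ %.1e g, t_PR ~ %.1e yr, t_coll ~ %.1e yr" % (M_dust, t_pr, t_coll))
# Both timescales fall far short of the ~10 Myr age, pointing to second-generation dust.
```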
around other xmath210myr stars for example tw hya hd 98800 hen 3 600 in the tw hydrae association interesting characteristics of these systems are their large xmath35 xmath52 and late k and m spectral types tw hya and hen 3 600 show flat ir sed up to 160xmath0 consistent with active accretion in their disks xcite combined with the presence of substantial gas emission lines from tw hya the observed infrared excess emission at least for these two stars appears to arise from gaseous dusty disks left over from the protostellar environment on the other hand a lack of gas emission xcite and the quadruple nature of the hd 98800 system has invoked a flared debris disk as an alternative explanation for its large infrared excess emission xcite many young stars come in multiple systems however they hardly display such a high xmath35 as that of hd 98800 thus the dust disk around hd 98800 might be an unusual transient pheonomeon such that it still contains a dust population composed of a mixture of promordial grains and replenished debris some stars display a mixture of warm and cold grains where the overall infrared excess emissions is dominated by the cold dust table 1 summarizes the currently known disk systems with warm dust regardless of spectral type and the presence of cold excess or remnant primordial dust at xmath53myr what separates ef cha from the stars described in the previous paragraph is that most of the infrared excess emission if not all arises from warm dust in the tpz and as described in xmath1351 these grains are clearly not a remnant of the protostellar disk recent spitzer mips observation shows null detection of ef cha at 70xmath0 band leaving the presence of substantial cold dust unlikely see figure 1 this result is consistent with the recent spitzer observations of other similar table 1 early type warm excess systems xmath14 tel hd 3003 hd 172555 and hd 113766 in which cold dust from a region analogous to the sun s kuiper belt objects is missing xcite in the following discussion we characterize warm excess stars as those with warm dust in the tpz only and without cold excess ie we exclude stars like xmath15 pictoris the fact that all currently known warm excess stars at ages between xmath12 myr belong to nearby stellar moving groups offers an excellent opportunity to address how frequently warm excess emission appears among young stars in the solar neighborhood xcite list suggested members of stellar moving groups and clusters ie xmath14 cha cluster tw hydra association xmath15 pictoris moving group cha near moving group and tucana horologium association at ages xmath12 myr within 100 pc of earth currently spitzer mips archive data are available for all 18 members of the xmath14 cha cluster all 24 members of twa all 52 members of tucana horologium 25 out of 27 xmath15 pic moving group and 9 out of 19 members of cha near moving group multiple systems were counted as one object unless resolved by spitzer for example in the xmath15 pictoris moving group hd 155555a hd 155555b hd155555c were counted as a single object however hip 10679 hip 10680 were counted as two objects table 1 shows that the characteristics of dust grains depend on the spectral type of the central star all six warm debris disks considered in this paper harbor early type central stars earlier than f3 however late type stars in table 1 for example pds 66 a k1v star from lcc sometimes show characteristics of t tauri like disk excess ig xmath54 flat ir sed etc even at xmath410 myr xcite such apparent spectral 
dependency perhaps arises from the relatively young ages xmath210myr of these systems in which late type stars still possess grains mixed with primordial dust due to a longer dust removal time scale xcite in the above mentioned five nearby stellar moving groups 38 out of 129 stars with spitzer mips measurementshave spectral types earlier than g0 therefore we find xmath213 538 occurrence rate for the warm excess phenomenon among the stars with spectral types earlier than g0 in the nearby stellar groups at xmath12 myr beta pic is the only early type star among the remaing 33 which has both warm and cold dust for lcc at least one hd 113766 out of 20 early type members is a warm excess star giving 5 frequency see table 1 2 in xcite this rate can reach a maximum of 30 when we take into account five early type lcc members that show excess emission at 24xmath0 but have only upper limit measurements at mips 70xmath0 band g0 type was chosen to separate the two apparently different populations because no g type star except t cha appears in table 1 furthermore the spectral type of t cha is not well established g2g8 and it may be a k type star like other k m stars in table 1 rhee et al 2007 in preparation analyze all spectral types in the young nearby moving groups and conclude that the warm excess phenomenon with xmath55 occurs for between 4 and 7 this uncertainty arises because some stars have only upper limits to their 70xmath0 fluxes we thank aigen li for helpful advice and the referee for constructive comments that improved the paper this research was supported by nasa grant nag5 13067 to gemini observatory a nasa grant to ucla and spitzer go program 3600 based on observations obtained at the gemini observatory which is operated by the association of universities for research in astronomy inc under a cooperative agreement with the nsf on behalf of the gemini partnership the national science foundation united states the particle physics and astronomy research council united kingdom the national research council canada conicyt chile the australian research council australia cnpq brazil and conicet argentina this research has made use of the vizier catalogue access tool cds strasbourg france and of data products from the two micron all sky survey the latter is a joint project of the university of massachusetts and the infrared processing and analysis center california institute of technology funded by the national aeronautics and space administration and the national science foundation acke b van den ancker m e 2004 426 151 beichman c a et al 2005 626 1061 beichman c a et al 2006 639 1166 bryden g et al 2006 apj 636 1098 chen c h jura m 2001 560 l171 chen c h jura m gordon k d blaylock m 2005 623 493 chen c h et al 2006 166 351 clarke a j oudmaijer r d lumsden s l 2005 363 1111 cox a n 2000 allen s astrophysical quantities 4th ed edited by arthur n cox publisher new york aip press springer 2000 isbn 0387987460 dent w r f greaves j s coulson i m 2005 359 663 dommanget j nys o 1994 communications de lobservatoire royal de belgique 115 1 egan m p et al 2003 vizier online data catalog 5114 0 furland et al 2007 apj in press astro ph07050380 gorlova n et al 2004 apjs 154 448 gorlova n rieke g h muzerolle j stauffer j r siegler n young e t stansberry j h 2006 649 1028 hauschildt p h allard f baron e 1999 512 377 hernndez j calvet n hartmann l briceo c sicilia aguilar a berlind p 2005 129 856 hines d c et al 2006 638 1070 kessler silacci j et al 2006 639 275 lisse c m beichman c a bryden g wyatt m c 2007 
658 584 low f smith p s werner m chen c krause v jura m hines d c 2005 apj 631 1170 luhman k l steeghs d 2004 609 917 mamajek e m lawson w a feigelson e d 1999 516 l77 mamajek e e lawson w a feigelson e d 2000 544 356 mamajek e e meyer m r liebert j 2002 124 1670 megeath s t hartmann l luhman k l fazio g g 2005 634 l113 pascucci i et al 2006 651 1177 rhee j h larkin j e 2006 640 625 rhee j h song i zuckerman b mcelwain m 2007 660 1556 rieke g h et al 2005 620 1010 schuetz o meeus g sterzik m f 2005 431 175 silverstone m d et al 2006 639 1138 smith p s hines d c low f j gehrz r d polomski e f woodward c e 2006 644 l125 song i zuckerman b weinberger a j becklin e e 2005 nature 436 363 song i zuckerman b bessell m 2007 submitted to apj su k y l et al 2005 628 487 takeuchi t artymowicz p 2001 557 990 wyatt m c smith r greaves j s beichman c a bryden g lisse c m 2007 658 569 zuckerman b song i bessell m s webb r a 2001 562 l87 zuckerman b song i webb r a 2001 559 388 zuckerman b song i 2004 42 685 zuckerman b song i weinberger a bessell m 2007 apj letter in preparation ccccccccccccccccc xmath14 cha b8 55 97 237 11000 320 15 8 yes debris xmath14 cha 1 ef cha a9 75 106 192 7400 240 10 10 yes debris cha near 2 hd 113766 f3v 75 131 7000 350 150 10 yes debris lcc 356 hd 172555 a7v 48 29 152 8000 320 81 12 yes debris xmath15 pic 346 xmath14 tel a0v 50 48 161 9600 150 21 12 yes debris xmath15 pic 36 hd 3003 a0v 51 46 159 9600 200 092 30 yes debris tucana horologium 78 xmath15 pic a5v 39 19 137 8600 110 26 12 no debris xmath15 pic 4 hd 98800 k5ve 91 47 4200 160 1100 8 no primordial debris twa 27 tw hya k8ve 111 56 111 4000 150 xmath262200 8 no primordial debris twa 27 hen 3 600 m3 121 42 3200 250 1000 8 no primordial debris twa 27 ep cha k55 112 97 139 4200 xmath261300 8 no primordial debris xmath14 cha 1011 echa j08433 7905 m325 140 97 087 3400 xmath262000 8 no primordial debris xmath14 cha 1011 ek cha m4 152 97 079 3300 xmath26590 8 no primordial debris xmath14 cha 1011 en cha m45 150 97 3200 xmath26480 8 no primordial debris xmath14 cha 1011 echa j08415 7853 m475 144 97 051 3200 xmath26480 8 no primordial debris xmath14 cha 1011 echa j08442 7833 m575 184 97 040 3000 xmath26400 8 no primordial debris xmath14 cha 1011 t cha g2g8 119 66 10 no primordial debris cha near 712 pds 66 k1ve 103 xmath285 135 4400 xmath262200 10 no primordial debris lcc 69 cccccccc 8xmath0 828 167 xmath349 119 48 msx n 104 164 xmath3414 84 80 gemini trecs n 77 1297 trecs xmath56xmath57 100 spectrum 12xmath0 115 152 xmath3432 58 94 iras 24xmath0 240 80 xmath344 14 66 mips 25xmath0 237 110 xmath3421 14 96 iras 70xmath0 700 xmath5 224 17 xmath5 207 mips
most vega like stars have far infrared excess 60xmath0 or longward in iras iso or spitzer mips bands and contain cold dust xmath1150k analogous to the sun s kuiper belt region however dust in a region more akin to our asteroid belt and thus relevant to the terrestrial planet building process is warm and produces excess emission at mid infrared wavelengths by cross correlating hipparcos dwarfs with the msx catalog we found that ef cha a member of the recently identified xmath210myr old cha near moving group possesses prominent mid infrared excess n band spectroscopy reveals a strong emission feature characterized by a mixture of small warm amorphous and possibly crystalline silicate grains survival time of warm dust grains around this a9 star is xmath3 yrs much less than the age of the star thus grains in this extra solar terrestrial planetary zone must be of second generation and not a remnant of primordial dust and are suggestive of substantial planet formation activity such second generation warm excess occurs around xmath2 13 of the early type stars in nearby young stellar associations
introduction, msx search for mid-ir excess stars, ground-based follow-up observations & mips photometry, results, discussion
there is a number of new physics np models beyond the standard model sm of particle physics motivated by the hierachy andor the fine tuning problem in the sm most np models propose new states with tev scale masses a few examples are the susy models models with extra gauge bosons xmath6 models and the models with extra dimensions when the masses of the np states are heavier than the center of mass cm energy of the collider the effects of the np can be measured indirectly in terms of the deviations of the sm observables such as the total cross section and various asymmetries the deviations from the sm in the scattering processes are determined by the mass spin and coupling strength of the new states being exchanged by the initial state particles the question of how to distinguish new states with different spins and couplings at the low energies arises at the sub tev xmath7 collider while the cern large hadron collider lhc will probe np models with the tev scale masses we certainly need the precision measurements to distinguish signature of one model from the others the precision measurements of the four fermion scattering at the xmath7 collider are expected to efficiently reveal the nature of the intermediate states being exchanged by the fermions the angular distributions and the asymmetries induced by various new states provide information of the spin and coupling of the interactions at the international linear collider ilc with the center of mass energy xmath8 gev the tev masses could not be observed directly as the resonances since they are heavier than xmath9 low energy taylor expansion is a good approximation for the signals induced from the np models and the corrections will be characterized by the higher dimensional operators for the 4fermion scattering the leading order np signals from the states with spin0 and spin1 such as leptoquarks sneutrino with xmath10parity violating interactions xcite and xmath6 will appear as the dimension6 contact interaction at low energies as a candidate for the np state with spin2 the interaction induced by the massive gravitons xmath11 can be characterized at the low energies by the effective interaction of the form xmath12xcite a dimension8 operator in the viewpoint of effective field theory this effective interaction does not need to be originated from the exhanges of massive graviton states and it is not the most generic form of the interaction containing dimension8 operators however it certainly has the gravitational interpretation due to the use of the symmetric energy momentum tensor xmath13 it can be thought of as the low energy effective interaction induced by exchange of the kk gravitons in add xcite and rs xcite scenario interacting with matter fields in the non chiral fashion in the braneworld scenario where the sm particles are identified with the open string states confined to the stack of d branes subspace and gravitonsare the closed string states propagating freely in the bulk spacetime xcite table top experiments xcite and astrophysical observations allow the quantum gravity scale to be as low as tevs xcite since the string scale xmath14 in this scenario is of the same order of magnitude as the quantum gravity scale it is possible to have the string scale to be as low as a tev the tev scale stringy excitations would appear as the string resonances sr in the xmath15 processes at the lhc xcite the most distinguished signals would be the resonances in the dilepton invariant mass distribution appearing at xmath16 each resonance would 
contain various spin states degenerate at the same mass these srs can be understood as the stringy spin excitations of the zeroth modes which are identified with the gauge bosons in the sm they naturally inherit the chiral couplings of the gauge bosons in this article it will be shown that the leading order stringy excitations of the exchanging modes identified with the gauge bosons in the four fermion interactions will contain both spin1 and 2 their couplings will be chiral inherited from the chiral coupling of the zeroth mode identified with the gauge boson namely we construct the tree level stringy amplitudes with chiral spin2 interactions in addition to a stringy dimension8 spin1 contribution contrasting to the dimension6 contributions from other xmath17 model at low energies which can not be described by the non chiral effective interaction of the form xmath12 as stated above this article is organized as the following in sectionii we discuss briefly the construction of the stringy amplitudes in the four fermion scattering as is introduced in the previous work xcite the comments on the chiral interaction are stated and emphasized in section iii the low energy stringy corrections are approximated the angular momentum decomposition reveals the contribution of each spin state induced at the leading order in section iv we calculate the angular left right forward backward and center edge asymmetries the extensions topartially polarized beams are demonstrated in section v in section vi we make concluding remarks and discussions the low energy xmath18 expressions for the asymmetriesxmath19 induced by the sm and the np models kk gravitons and sr up to the order of xmath20 are given in the appendix the 4fermion processes that we will consider are the scattering of the initial electron and positron into the final states with one fermion and one antifermion xmath21 we will ignore the masses of the initial and final states particles and therefore consider only the processes with xmath22 where xmath23 the physical process will be identified as xmath24 the xmath25 and xmath1channel are labeled xmath26 and xmath27 respectively to make a relatively model independent phenomenological study of the tev scale stringy kinematics we adopt the parametrised bottom up approach for the construction of the tree level string amplitudes xcite in this approach the gauge structure and the assignment of chan paton matrices to the particlesare not explicitly specified the main requirement is that the relevant string amplitudes reproduce the sm amplitudes in the low energy limit xmath28 field theory limit this implies that we identify the zeroth mode string states as the gauge bosons of the sm the expression for the open string 4fermion tree level helicity amplitude for the process xmath29 with xmath30 is xcite xmath31 where xmath32 and the regge slope xmath33 the interaction factors from the exchange of photon and xmath34 chiral are given by xmath35 here xmath36 and the xmath37 coupling xmath38 the neutral current couplings are xmath39 the chan paton parameter xmath40 represents the tree level stringy interaction which can not be determined in the field theory limit xmath41 since xmath42 the amplitude for other helicity combination xmath43 is given by xmath44 and an index exchange in the xmath45 factor xmath46 since the veneziano like function xmath47 and xmath48 are appearing with the chiral couplings the factor xmath45 in the amplitudes the piece of the stringy states induced by this function will have the similar 
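A sketch of the Veneziano-like form factor that multiplies the SM chiral couplings in this class of bottom-up string amplitudes, together with its leading low-energy behaviour. The specific form S(s,t) below is the standard choice assumed for such models, with M_s the string scale; the kinematic values are illustrative.

```python
import numpy as np
from scipy.special import gamma

def S_form_factor(s, t, Ms):
    x, y = s / Ms ** 2, t / Ms ** 2
    return gamma(1.0 - x) * gamma(1.0 - y) / gamma(1.0 - x - y)

Ms = 1000.0                       # assumed string scale (GeV)
s = 500.0 ** 2                    # sqrt(s) = 500 GeV
t = -0.3 * s                      # a sample momentum transfer

exact = S_form_factor(s, t, Ms)
leading = 1.0 - (np.pi ** 2 / 6.0) * s * t / Ms ** 4
print("S(s,t) = %.4f, 1 - (pi^2/6) s t / Ms^4 = %.4f" % (exact, leading))
# The deviation from the SM value 1 starts at O(s t / Ms^4), i.e. dimension 8,
# in contrast to the dimension-6 contact interactions of generic Z' models.
```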
chiral couplings to those of the sm on the other hand the purely stringy interaction piece proportional to xmath40 are assumed to be non chiral and the values of xmath40 are set to the same value for all helicity combinations for xmath49 we can use taylor expansion to approximate the leading order stringy corrections to the amplitudes they are in the form of the dimension8 operator xmath50 the amplitude in eq eq3 becomes xmath51 again for xmath52 the amplitude is xmath44 and xmath53 of the above the stringy correction can be decomposed into the contribution from the angular momentum states xmath54 using the wigner functions xmath55 and xmath56 xmath57 where xmath58 is the angle between the incoming electron and the outgoing antifermion in the cm frame from eq eq12eq13 we can see that the couplings of the xmath54 states proportional to xmath59 inherit the chirality from the coupling xmath45 of the zeroth mode gauge boson exhange in the sm this is the distinctive feature of the couplings of the string states in this bottom up approach because of the chirality of the coupling the xmath60 interaction in these stringy amplitudes can not be described by the effective interaction of the form xmath12 as long as we couple the spin2 state xmath61 to the energy momentum tensor xmath62 which does not contain information of the chirality of the fermions the interaction will always be non chiral therefore the effective interaction of this kind can never describe the chiral stringy interaction induced by the worldsheet stringy spin excitations in the string models under consideration ie the models which address chiral weak interaction as a comparison we give the sm kk amplitudes where the kk part is induced by the effective interaction of the form xmath12 as xmath63 for xmath64 and xmath65 the kk gravitons contributions can be represented by the wigner functions as xmath66 the chiral spin1 and spin2 stringy interaction will lead to remarkable and unique phenomenological signatures in the 4fermion scattering at the electron positron collider even when compared with the contributions from kk gravitons as we will see in the following the left right xmath67 and forward backward xmath68 asymmetries quantify the degrees of the chirality of the interaction under consideration regardless of the spin of the intermediate states on the other hand the center edge asymmetry xmath69 does not contain any information on the chirality of the spin1 interaction at least in the massless limit xcite for spin2 xmath69 shows dependence on the chirality of the couplings of the intermediate states in the scattering as we will present in section v therefore we can use xmath69 to distinguish np models with spin2 mediator from the models mediated by the spin1 state among the class of models with spin2 interactions we can use the xmath70 to distinguish one model from another as demonstrated in fig 1fig and fig 10fig12fig for the case of sr versus kk gravitons the angular left right asymmetry is defined as a function of xmath71 as xmath72 where xmath73 is the cross section of the scattering of the 100 leftrighthandedly polarized electron beam with the 100 rightlefthandedly polarized positron beam the angular left right asymmetries induced by the stringy corrections are plotted as in fig 1fig in comparison to the kk graviton model it is interesting to comment that the angular left right asymmetries induced by the stringy corrections differ significantly from the sm distributions only in the quark xmath1 and xmath2 final states the 
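A sketch of the Wigner d-functions entering the angular-momentum decomposition of the leading stringy correction, for the helicity combinations relevant to e+e- -> f fbar. The closed forms below are the standard ones; the point illustrated is that both the J = 1 and J = 2 pieces multiply the chiral coupling factor inherited from the SM gauge-boson exchange.

```python
import numpy as np

def d1_11(t):   return 0.5 * (1.0 + np.cos(t))                             # d^1_{1,1}
def d1_1m1(t):  return 0.5 * (1.0 - np.cos(t))                             # d^1_{1,-1}
def d2_11(t):   return 0.5 * (1.0 + np.cos(t)) * (2.0 * np.cos(t) - 1.0)   # d^2_{1,1}
def d2_1m1(t):  return 0.5 * (1.0 - np.cos(t)) * (2.0 * np.cos(t) + 1.0)   # d^2_{1,-1}

theta = np.linspace(0.0, np.pi, 5)
for name, f in (("d^1_{1,1} ", d1_11), ("d^1_{1,-1}", d1_1m1),
                ("d^2_{1,1} ", d2_11), ("d^2_{1,-1}", d2_1m1)):
    print(name, np.round(f(theta), 3))
# Because these terms carry the chiral couplings, the spin-2 exchange here is chiral,
# unlike a KK graviton coupled to the non-chiral energy-momentum tensor.
```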
difference in xmath74 is hardly visible this feature is similar to the asymmetries induced by the kk gravitons xcite 55 in the forward backward asymmetry is defined as xmath75 where xmath76 is the number of events in the forwardbackward direction the numerical values of the unpolarised forward backward asymmetries for the sm and the stringy amplitudes with xmath77 gev xmath78 tev and xmath79 are xmath80 the deviations of the values with the stringy corrections from the sm values are linearly dependent on xmath40 and xmath81 at the leading order as we can see from the expression in the polarized beams section with xmath82 this is true only for the scattering at low energies comparing to the string scale at higher energiesxmath83 or around the srs at xmath84 the forward backward asymmetries become very small due to the non chiral choice of the chan paton parameters xmath40 which efficiently dilutes the chirality of the interaction xcite the center edge asymmetry is defined as a function of the cut of the central region xmath85 as xmath86 labeleq210endaligned the deviations of the center edge asymmetry from the sm values of the unpolarised beams xmath87 are plotted with respect to xmath85 in fig 4fig a few interesting remarks are worthwhile making there is distinctive feature between xmath69 of the lepton xmath74 and the quarks xmath1 and xmath2 the effect of the non chiral purely stringy interaction represented by the value of xmath40 is opposite for xmath74 and xmath1 on the other hand the features of xmath74 and xmath2type quark are for the small value of xmath88 roughly the same for large value of xmath89 the deviation from the sm of the xmath2type becomes negative and appears very distinctive from the corresponding value of the xmath74 this strange behaviour is originated from the competing contributions between xmath90 appearing in the xmath91 terms and the terms of order xmath92 it should be emphasized that the numerical results as in fig 4fig are calculated using full expression upto the order of xmath91 they are different from the results obtained from the approximation using the leading order upto xmath92 as is given in section v the difference becomes very obvious in the xmath2type quark with high value of xmath89 let xmath93 be the degrees of the longitudinal polarization of the xmath94 beam defined as xmath95 xmath96 where xmath97 is the number of particle xmath98 with the leftrighthanded helicity in the beams the polarized differential cross section can be expressed as xmath99 labeleq31endaligned where xmath100 upperlower signs are for xmath101 xmath102 represents the scattering involving the leftrighthanded electron in practice the observable left right asymmetry is defined with respect to the partially polarized beams of electron and positron by taking difference of the total number of events when the polarizations of the beams are inverted it is therefore given by xmath103 where xmath104 for xmath105 xmath106 is xmath107 while xmath106 becomes xmath108 when xmath109 the full expressions of xmath110 for the kk gravitons and sr models are given in the appendix with the partially polarised beams of electron and positron we can calculate xmath68 using eq eq31 the full expressions for the asymmetries induced by kk gravitons and sr are given in the appendix up to the leading order of xmath92 in the energy taylor expansion the polarized forward backward asymmetry induced by the stringy corrections is xmath111 xmath112 labeleq33 endaligned where the einstein summation convention is 
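A numerical sketch of the centre-edge asymmetry and of why it carries no information about the chirality of spin-1 exchanges: for a massless-fermion spin-1 angular distribution proportional to 1 + z^2 + (8/3) A_FB z the linear term cancels between the two hemispheres, leaving the same A_CE for any A_FB. The spin-2 admixture shape added at the end is purely illustrative.

```python
import numpy as np

def a_ce(dsdz, zstar, n=200001):
    z = np.linspace(-1.0, 1.0, n)
    dz = z[1] - z[0]
    w = dsdz(z)
    total = w.sum() * dz
    centre = w[np.abs(z) <= zstar].sum() * dz
    return 2.0 * centre / total - 1.0

zstar = 0.5
for afb in (0.0, 0.3, 0.6):
    spin1 = lambda z: 1.0 + z ** 2 + (8.0 / 3.0) * afb * z
    print("A_FB = %.1f -> A_CE = %.4f" % (afb, a_ce(spin1, zstar)))
print("analytic spin-1 value: %.4f" % (0.5 * zstar * (zstar ** 2 + 3.0) - 1.0))

# an illustrative spin-2 admixture (shape 1 - 3 z^2 + 4 z^4) shifts A_CE away from it:
spin2 = lambda z: 1.0 + z ** 2 + 0.1 * (1.0 - 3.0 * z ** 2 + 4.0 * z ** 4)
print("with spin-2 admixture: A_CE = %.4f" % a_ce(spin2, zstar))
```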
implied for xmath113 xmath114 and xmath115 the other combination is obtained by exchanging xmath116 the sm value is given by xmath117 with xmath118 and xmath119 10fig12fig show xmath68 induced by kk gravitons and sr in comparison to the sm values with respect to the cm energy assuming the string scale xmath78 tev and the effective quantum gravity scale xmath120 tev the differences between the asymmetries induced by the two models appear at higher cm energies even in the case when they are indistinguishable at the low cm energies this is due to the fact that while the chirality of the stringy interaction keeps the terms of both order xmath92 and xmath91 in the expression of xmath68 the non chiral graviton interaction on the other hand keeps only the term of order xmath121 in the numerator xmath122 and only term of order xmath123 in the denominator xmath124 see appendix this leads to distinguishable aspects of the two models with the partially polarised beams of electron and positron we can calculate xmath69 using eq eq31 the full expressions for the asymmetries induced by kk gravitons and sr are given in the appendix the deviations from the sm values for various polarizations of the beams for the stringy model are plotted as in fig 7fig9fig up to the leading order of xmath92 in the energy taylor expansion the polarized center edge asymmetry induced by the stringy corrections is xmath125 xmath126 where the sm value is given by xmath127 eq eq c3 is unique for the spin1 contribution in the 4fermion scattering any np particles with spin1 will not change this xmath85dependence remarkably the spin2 contributions either in the form of kk xcite or the string states eq eq35 will induce the deviations from this sm value proportional to xmath128 similar to the xmath68 case the chirality of the stringy interaction keeps terms of both order xmath92 and xmath91 in the expression of xmath129 while in the case of kk gravitons there is no terms of order xmath121 in the constant term xmath130 and the denominator xmath131 the differences become obvious when xmath132 is relatively large xmath133 as shown in fig 4fig 55 in 40 in 40 in 40 in 40 in 40 in 40 inwe have constructed and approximated the tree level stringy amplitudes of the scattering xmath134 the low energy stringy corrections for the 4fermion processes contain both spin1 and spin2 contributions with the chiral couplings inherited from the zeroth mode states identified with the gauge bosons in the sm the chirality of the couplings is diluted by the non chiral choice of the purely stringy piece of the stringy interaction characterized by xmath40 the contributions from both stringy spin1 and spin2 are of dimension8 in nature the low energy dimension8 spin1 contribution is remarkably distinctive from the dimension6 contributions in other xmath17 models the chirality of stringy spin2 interaction also leads to unique phenomenological features distinguishable from the non chiral spin2 interaction induced by the kk gravitons or massive gravitons then we investigated the signatures of the tev scale string model at the xmath7 collider in comparison to the kk gravitons using angular left right forward backward and center edge asymmetries the deviations of the asymmetries from the sm values are investigated separately for each model all asymmetries show significant differences between the low energy corrections of the two models for the xmath7 collider with variable center of mass energies from 500 to 1000 gev and assuming xmath135 tev the forward backward 
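A sketch of how partially polarised beams combine into the observable left-right asymmetry. Only the LR and RL helicity configurations contribute for massless fermions in s-channel exchange, so the measured asymmetry is the fully polarised one diluted by an effective polarisation; the cross-section values and beam polarisations below are illustrative assumptions.

```python
def sigma_polarised(sig_LR, sig_RL, Pm, Pp):
    """Pm, Pp: longitudinal polarisations of the e- and e+ beams (+1 = right-handed)."""
    return 0.25 * ((1.0 - Pm) * (1.0 + Pp) * sig_LR +
                   (1.0 + Pm) * (1.0 - Pp) * sig_RL)

def a_lr_observed(sig_LR, sig_RL, pm=0.8, pp=0.3):
    s1 = sigma_polarised(sig_LR, sig_RL, -pm, +pp)   # e- mostly left, e+ mostly right
    s2 = sigma_polarised(sig_LR, sig_RL, +pm, -pp)   # both beam polarisations flipped
    return (s1 - s2) / (s1 + s2)

sig_LR, sig_RL = 1.3, 0.9            # illustrative helicity cross sections (arbitrary units)
a_lr_true = (sig_LR - sig_RL) / (sig_LR + sig_RL)
p_eff = (0.8 + 0.3) / (1.0 + 0.8 * 0.3)
print("true A_LR = %.3f, observed = %.3f, P_eff * true = %.3f"
      % (a_lr_true, a_lr_observed(sig_LR, sig_RL), p_eff * a_lr_true))
```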
asymmetries show drastic differences between stringy signals and the kk graviton ones the center edge asymmetries also show significant differences between the two models if the chan paton parameter xmath132 representing purely stringy piece of the interaction in the sr model is sufficiently large xmath133 the origin of the differences between the two models is mainly another reason is the fact that sr also has spin1 contribution in addition to spin2 the chirality of the interactions while the string interaction is chiral the interaction induced by the kk gravitons couple to the energy momentum tensor xmath62 is non chiral as we can see in the appendix from the full espressions of xmath19 the chirality of the stringy interaction keeps almost all of the terms of order xmath92 and xmath91 while the non chirality of the interaction of the kk gravitons eliminates certain terms of order xmath121 and xmath123 in the asymmetries specifically the xmath136 term in xmath110 contains only the sm term in the case of kk gravitons in contrast to the stringy case in xmath68 as discussed above xmath137 in the case of kk gravitons does not contain the term of order xmath138 for xmath129 xmath139 does not contain terms of order of xmath121 in the case of kk gravitons this is in contrast to the processes where the stringy interaction is non chiral as well as having only the spin2 contributions such as in the scattering xmath140 in which case the two models give exactly the same low energy signatures xcite in the intermediate energy range xmath141 with xmath142 since the deviations induced by kk gravitons and sr depend only on xmath143 the results in this article therefore are also valid for the clic with center of mass energies 3 to 6 tev with xmath144 tev i would like to thank tao han for helpful discussions this work was supported in part by the us department of energy under contract number de fg02 01er41155 xmath145 section xmath146 xmath147 section1 xmath148 xmath149xmath150 section2 xmath151 section3 xmath152xmath153 a antoniadis n arkani hamed s dimopoulos and g dvali phys lett b436 257 1998hep ph9804398 n arkani hamed s dimopoulos and gr dvali phys b429 263 1998 hep ph9803315 l randall and r sundrum phys lett 83 3370 1999 hep ph9905221 a antoniadis n arkani hamed s dimopoulos and g dvali phys lett b436 257 1998hep ph9804398 g shiu and sh tye phys rev d58 106007 1998hep th9805157 le ibanez r rabadan and am uranga nucl phys b542 112 1999hep th9808139 i antoniadis c bachas and e dudas nucl phys b560 93 1999hep th9906039 k benakli phys rev d60 104002 1999hep ph9809582 e dudas and j mourad nucl phys b575 3 2000hep th9911019 e accomando i antoniadis and k benakli nucl phys b579 3 2000hep ph9912287 le ibanez f marchesano and r rabadan jhep 0111 002 2001hep th0105155 g aldazabal s franco le ibanez r rabadan and am uranga jhep 0102 047 2001hep ph0011132 m cvetic g shiu and am uranga phys rev lett 87 201801 2001hep th0107143 d cremades le ibanez and f marchesano nucl b643 93 2002hep th0205074 c kokorelis nucl phys b677 115 2004hep th0207234 c kokorelis jhep 0208 036 2002hep th0206108 i klebanov and e witten nucl b664 3 2003hep th0304079 m axenides e floratos c kokorelis jhep 0310 006 2003hep th0307255 smullin aa geraci dm weld j chiaverini s holmes and a kapitulnik phys rev d72 122001 2005 hep ph0508204 eg adelberger br heckel and ae nelson ann partl sco 53 77 2003 hep ph0307284
we investigate the tev scale stringy signals of the four fermion scattering at the electron positron collider with the center of mass energy xmath0 gev the nature of the stringy couplings leads to asymmetries that are distinguishable from those of other new physics models specifically the stringy states contributing to the four fermion scattering at the leading order of the corrections are of spin1 and 2 with the chiral couplings inherited from the gauge bosons identified as the zeroth mode string states the angular left right forward backward center edge asymmetries and the corresponding polarized beam asymmetries are investigated the low energy stringy corrections are compared to the ones induced by the kaluza klein kk gravitons the angular left right asymmetry of the scattering with the final states of xmath1 and xmath2type quarks namely xmath3 and xmath4 shows significant deviations from the standard model values the center edge and forward backward asymmetries for all final state fermions also show significant deviations from the corresponding standard model values the differences between the signatures induced by the stringy corrections and the kk gravitons are appreciable in both angular left right and forward backward asymmetries
introduction open-string amplitudes for the four-fermion interactions low-energy stringy corrections the asymmetries partially polarized beams conclusions acknowledgments angular left-right asymmetry forward-backward asymmetry center-edge asymmetry
character varieties of xmath1manifold groups provide a useful tool in understanding the geometric structures of manifolds and notably the presence of essential surfaces in this paperwe wish to investigate xmath2character varieties of symmetric hyperbolic knots in order to pinpoint specific behaviours related to the presence of free or periodic symmetries we will be mostly concerned with symmetries of odd prime order and we will concentrate our attention to the subvariety of the character variety which is invariant by the action of the symmetry see section s invariantch for a precise definition of this action and of the invariant subvariety as already observed in xcite the excellent component of the character variety containing the character of the holonomy representation is invariant by the symmetry since the symmetry can be chosen to act as a hyperbolic isometry of the complement of the knot hilden lozano and montesinos also observed that the invariant subvariety of a hyperbolic symmetric more specifically periodic knot can be sometimes easier to determine than the whole variety this follows from the fact that the invariant subvariety can be computed using the character variety of a two component hyperbolic link such link is obtained as the quotient of the knot and the axis of its periodic symmetry by the action of the symmetry itself indeed the link is sometimes much simpler than the original knot in the sense that its fundamental group has a smaller number of generators and relations making the computation of its character variety feasible this is for instance the case when the quotient link is a xmath3bridge link hilden lozano and montesinos studied precisely this situation and were able to recover a defining equation for the excellent components of several periodic knots up to ten crossings inwhat follows we will be interested in the structure of the invariant subvariety itself and we will consider not only knots admitting periodic symmetries but also free symmetries our main result shows that the invariant subvariety has in general a different behaviour according to whether the knot admits a free or periodic symmetry thm main if xmath4 has a periodic symmetry of prime order xmath5 then xmath6 contains at least xmath7 components that are curves and that are invariant by the symmetry on the other hand for each prime xmath5 there is a knot xmath8 with a free symmetry of order xmath9 such that the number of components of the invariant character variety of xmath8 is bounded independently of xmath9 the main observation here is that the invariant subvariety for a hyperbolic symmetric knot or more precisely the zariski open set of its irreducible characters can be seen as a subvariety of the character variety of a well chosen two component hyperbolic link even when the symmetry is free to make the second part of our result more concrete in section s examples we study an infinite family of examples all arising from the two component xmath3bridge link xmath10 in rolfsen s notation with xmath3bridge invariant xmath11 our construction provides infinitely many knots with free symmetries such that the number of irreducible components of the invariant subvarieties of the knots is universally bounded the invariant subvarieties of periodic knots over fields of positive characteristic exhibit a peculiar behaviour it is well known that for almost all odd primes xmath9 the character variety of a finitely presented group resembles the character variety over xmath12 for a finite set of primes though the 
character variety over xmath9 may differ from the one over xmath13 in the sense that there may be jumps either in the dimension of its irreducible components or in their number in this casewe say that the variety ramifies at xmath9 the character varieties of the knots studied in xcite provide the first examples in which the dimension of a well defined subvariety of the character variety is larger for certain primes herewe give an infinite family of periodic knots for which the invariant character variety ramifies at xmath9 where xmath9 is the order of the period in this case the ramification means that the number of xmath14dimensional components of the invariant subvariety decreases in characteristic xmath9 this gives some more insight in the relationship between the geometry of a knot and the algebra of its character variety namely the primes that ramify the paper is organised as follows section s quotientlink is purely topological and describes how one can construct any symmetric knot starting from a well chosen two component link section s chvar provides basic facts on character varieties and establishes the setting in which we will work in section s invariantch we introduce and study invariant character varieties of symmetric knots the first part of theorem thm main on periodic knots is proved in section s periodic while in section s free we study properties of invariant character varieties of knots with free symmetries the proof of theorem thm main is achieved in section s examples where an infinite family of free periodic knots with the desired properties is constructed finally in section s modp we describe how the character varieties of knots with period xmath9 may ramify xmath0 let xmath4 be a knot in xmath15 and let xmath16 be a finite order diffeomorphism of the pair which preserves the orientation of xmath15 if xmath17 acts freely we say that xmath17 is a free symmetry of xmath4 if xmath17 has a global fixed point then according to the positive solution to smith s conjecture xcite the fixed point set of xmath17 is an unknotted circle and two situations can arise either the fixed point set of xmath17 is disjoint from xmath4 and we say that xmath17 is a periodic symmetry of xmath4 or it is not in the latter case xmath17has order xmath3 its fixed point set meets xmath4 in two points and xmath17 is called a strong inversion of xmath4 in all other cases xmath17 is called a semi periodic symmetry of xmath4 note that if the order of xmath17 is an odd prime then xmath17 can only be a free or periodic symmetry of xmath4 we start by recalling some well known facts and a construction that will be central in the paper let xmath18 be a hyperbolic two component link in the xmath1sphere such that xmath19 is the trivial knot let xmath20 be an integer and assume that xmath21 and the linking number of xmath19 and xmath22 are coprime we can consider the xmath21fold cyclic cover xmath23 of the solid torus xmath24 which is the exterior of xmath19 and contains xmath22 the lift of xmath22 in xmath25 is a connected simple closed curve xmath26 let xmath27 be a meridian longitude system for xmath19 on xmath28 and let xmath29 be its lift on xmath30 the slopes xmath31 for xmath32 on xmath30 are equivariant by the action of the cyclic group xmath33 of deck transformations and the manifold xmath34 obtained after dehn filling along xmath35 is xmath15 the action of the group of deck transformations xmath33 on xmath25 extends to an action on xmath34 which is free if xmath36 is prime with xmath21 and has a 
circle of fixed points if xmath37 for all other values of xmath38 the action is semi periodic that is a proper subgroup of xmath33 acts with a circle of fixed points for a fixed xmath38 the image of xmath26 in xmath34 is a knot that we will denote by xmath4 admitting a periodic or free symmetry of order xmath21 according to whether xmath37 or prime with xmath21 for xmath21 large enough the resulting knot xmath4 is hyperbolic because of thurston s hyperbolic dehn surgery theorem xcite eg r surgery of course the above construction can be carried out for arbitrary integer values of xmath38 however it is not restrictive to require the value of xmath38 to be xmath39 and xmath40 indeed assume that xmath41 where xmath42 the knot xmath4 resulting from xmath43 surgery along xmath25 coincides with the knot xmath44 obtained the same manner but starting from a different link xmath45 and choosing xmath46 as dehn filling slope the link xmath45 is obtained from xmath47 by dehn surgery of slope xmath48 along xmath19 the following proposition shows that periodic and free symmetric knots can always be obtained this way p quotientlink let xmath4 be a hyperbolic knot admitting a free or periodic symmetry of order xmath21 then there exist a two component hyperbolic link xmath18 with xmath19 the trivial knot and an integer xmath49 such that the knot xmath4 can be obtained by the above construction the statement is obvious if the symmetry is periodic in this case the link xmath47 consists of the image xmath19 of the axis of the symmetry and the image xmath22 of the knot xmath4 in the quotient of xmath15 by the action of the symmetry hyperbolicity of the link is a straightforward consequence of the hyperbolicity of xmath4 and the orbifold theorem if the symmetry is free some extra work is necessary the quotient of xmath15 by the action of the free symmetry is a lens space containing a hyperbolic knot xmath22 image of xmath4 consider the cores of the two solid tori of a genusxmath14 heegaard splitting for the lens space induced by an invariant genusxmath14 splitting of xmath15 up to small isotopy one can assume that xmath22 misses one of them say xmath50 note that the free homotopy class of xmath50 is non trivial both in the lens space and in the complement of xmath22 observe moreover that the exterior of xmath50 is a solid torus let xmath51 denote the lift of xmath50 if xmath52 is a hyperbolic link then we are done by taking xmath53 otherwise we will modify the choice of xmath54 first of all note that the link xmath52 is not split this is a consequence of the equivariant sphere theorem and the fact that xmath54 is invariant hence xmath55 is irreducible and boundary irreducible in addition xmath56 is not seifert fibered because a dehn filling on xmath57 yields xmath58 which is hyperbolic thus the only obstruction to hyperbolicity is that xmath59 could be toroidal assume that its jsj decomposition is nontrivial and let xmath60 be the piece of this splitting that is closest to xmath4 in particular xmath60 is invariant by the action of the symmetry the boundary of xmath60 consists of xmath61 some tori xmath62 xmath63 and possibly a torus xmath64 that separates xmath60 from xmath57 we shall modify xmath57 so that xmath65 and xmath37 which will yield hyperbolicity by hyperbolicity of xmath4 for xmath66 each xmath67 either bounds a solid torus in xmath58 or it is contained in a ball in xmath58 notice that xmath64 must bound a solid torus in xmath58 because xmath57 is not contained in a ball else the link xmath52 would 
be split in addition none of the xmath68 can bound a solid torus in xmath58 by nontriviality of the jsj decomposition first we modify xmath57 so that xmath69 let xmath25 be the solid torus bounded by xmath64 then xmath70 and xmath25 must be equivariant in addition xmath25 is not knotted because xmath57 is the trivial knot but also a satellite with companion xmath71 then the modification consists in replacing xmath54 by the core of xmath25 this makes xmath72 boundary parallel and hence inessential finally we get rid of the tori xmath68 let xmath73 denote the xmath1ball containing xmath67 for xmath74 on each ballthere is a proper arc xmath75 such that xmath76 is a knot exterior with boundary parallel to xmath77 replace equivariantly each xmath78 by a solid torus this does not change xmath4 because the balls xmath79 which are disjoint from xmath4 are replaced again by balls on the other hand this may change xmath54 to xmath80 but since every knot exterior has a degree one map onto the solid torus we find a degree one map from xmath81 onto xmath82 and since xmath54 is unknotted so is xmath80 note that for a given xmath4 the choice of xmath47 is not unique indeed links are not determined by their complements and there are infinitely many slopes on the boundary of a solid torus such that performing dehn filling along them gives the xmath1sphere see also remark r surgery note that if xmath4 admits a semi free symmetry then either all powers of the symmetry that act as periods have the same fixed point set or the union of their fixed point sets consists of two circles forming a hopf link in the first situationa hyperbolic link xmath47 can be constructed as in the case of periodic knots in the second situation one can construct xmath47 by choosing one of the two components of the hopf link but xmath47 will not be hyperbolic in general since we only consider symmetries of odd prime order in the following we are not going to analyse this situation further let xmath83 be a finitely presented group given a representation xmath84 its character is the map xmath85 defined by xmath86 xmath87 the set of all characters is denoted by xmath88 given an element xmath89 we define the map xmath90 proposition xg the set of characters xmath88 is an affine algebraic set defined over xmath91 which embeds in xmath92 with coordinate functions xmath93 for some xmath94 the affine algebraic set xmath88 is called the character variety of xmath83 it can be interpreted as the algebraic quotient of the variety of representations of xmath83 by the conjugacy action of xmath95 note that the set xmath96 in the above proposition can be chosen to contain a generating set of xmath83 for xmath83 the fundamental group of a knot exterior we will then assume that it always contains a representative of the meridian a careful analysis of the arguments in xcite shows that proposition proposition xg still holds if xmath97 is replaced by any algebraically closed field provided that its characteristic is different from xmath3 let xmath98 denote the field with xmath9 elements and xmath99 its algebraic closure we have proposition xgfp let xmath100 be an odd prime number the set of characters xmath101 associated to representations of xmath83 over the field xmath99 is an algebraic set which embeds in xmath102 with the same coordinate functions xmath93 seen in proposition proposition xg moreover xmath101 is defined by the reductions mod xmath9 of the polynomials over xmath91 which define xmath103 let xmath104 be an algebraically closed field of 
characteristic different from xmath3 a representation xmath105 of xmath83 in xmath106 is called reducible if there is a xmath14dimensional subspace of xmath107 that is xmath108invariant otherwise xmath105 is called irreducible the character of a representation xmath105 is called reducible respectively irreducible if so is xmath105 the set of reducible characters coincides with the set of characters of abelian representations such set is zariski closed and moreover is a union of irreducible components of xmath88 that we will denote xmath109 xcite assume now that xmath83 is the fundamental group of a link in the xmath1sphere with xmath110 components in this case xmath111 is an xmath110dimensional variety that coincides with the character variety of xmath112 ie the homology of the link in the casewhere xmath113 that is the link is a knot xmath111 is a line parametrised by the trace of the meridian when xmath114 that is the link has two components xmath111 is parametrised by the traces xmath115 of the two meridians and that xmath116 of their product subject to the equation xmath117 the subvariety of abelian characters is well understood for the groups that we will be considering hence in the rest of the paper we will only consider the irreducible components of xmath88 that are not contained in the subvariety of abelian characters notation we will denote by xmath118 the zariski closed set which is the union of of the irreducible components of xmath88 that are not contained in the subvariety of abelian characters if xmath83 is the fundamental group of a manifold or orbifold xmath60 we will write for short xmath119 instead of xmath118 similarly if xmath83 is the fundamental group of the exterior of a link xmath47 we shall write xmath120 instead of xmath118 notice that if xmath83 is the fundamental group of a finite volume hyperbolic manifold then xmath118 is non empty for it contains the character of the hyperbolic holonomy assume now that xmath121 is in xmath122 the automorphism xmath121 induces an action on both xmath88 and xmath118 defined by xmath123 this action only depends on the class of xmath121 in xmath124 since traces are invariant by conjugacy moreover the action on the character varieties is realised by an algebraic morphism defined over xmath125 it follows readily that the set of fixed points of the action is zariski closed and itself defined over xmath125 as a consequence the defining relations of the variety of characters that are fixed by the action considered over a field of characteristic xmath9 an odd prime number are just the reduction xmath0 of the given equations with integral coefficients in this section we define and study the invariant subvariety of xmath4 where xmath4 is a hyperbolic knot admitting a free or periodic symmetry of order an odd prime xmath9 let xmath17 denote the symmetry of xmath4 of order xmath9 and let xmath126 be the associated link as defined section s quotientlink denote by xmath127 the space of orbits of the action of xmath17 on the exterior xmath58 of the knot xmath4 recall that xmath127 is obtained by a possibly orbifold dehn filling on the component xmath19 of the link xmath47 we have xmath128 which splits if and only if xmath17 is periodic note that if xmath17 is free then the quotient group xmath129 can also be seen as the fundamental group of the lens space quotient in any case we see that xmath17 defines an element xmath130 of the outer automorphism group of xmath131 remark now that since xmath127 is obtained by dehn filling a component of 
xmath47 the exterior xmath132 of the link xmath47 is naturally embedded into xmath127 let xmath133 be an element of xmath134 corresponding to the image of a meridian of xmath19 via this natural inclusion it maps to a generator of xmath129 let xmath135 be the automorphism of xmath131 induced by conjugacy by xmath133 note that xmath121 is a representative of xmath130 thus the symmetry xmath17 induces an action on the character variety xmath6 of the exterior of xmath4 as defined in the previous section we have seen that the fixed point set of this action is an algebraic subvariety of xmath6 we will denote by xmath136 the union of its irreducible components that are not contained in xmath137 note that xmath136 is non empty for the character of the holonomy is fixed by the action remark also that each irreducible component of xmath136 contains at least one irreducible character by definition indeed each irreducible component of xmath136 contains a whole zariski open set of irreducible characters we shall call xmath136 the invariant subvariety of xmath4 let us now consider how the different character varieties of xmath4 and xmath47 are related it is straightforward to see that the character variety xmath138 of the quotient of the exterior xmath58 of xmath4 by the action of the symmetry injects into the character variety xmath120 of the exterior of xmath47 indeedthe orbifold fundamental group of xmath127 is a quotient of the fundamental group of xmath47 induced by the dehn filling along the xmath19 component of xmath47 on the other hand there is a natural map from xmath138 to the invariant submanifold xmath136 of xmath4 induced by restriction in the short exact sequence above assume now that xmath139 is a character in xmath136 associated to an irreducible representation xmath105 of xmath4 we will show that xmath105 extends in a unique way to a necessarily irreducible representation of xmath127 giving a character in xmath138 observe that here we only use that xmath9 is odd this proves that the above natural map is one to one and onto when restricted to the zariski open set of irreducible characters note that if xmath105 is a representation of xmath131 that extends to a representation of xmath134 then necessarily its character must be fixed by the symmetry xmath17 for the action of xmath133 on xmath131 is by conjugacy and can not change the character of a representation the idea is to extend xmath105 to xmath134 by defining xmath140 in such a way that the action of xmath133 by conjugacy on the normal subgroup xmath131 coincides with the action of the automorphism xmath121 we know that xmath141rhocirc f since xmath105 is irreducible xmath142 acts transitively on the fibre of xmath139 so that there exists an element xmath143 such that xmath144 xcite the element xmath60 is well defined up to multiplication times xmath145 ie up to an element in the centre of xmath142 the fact that xmath17 has odd order implies that there is a unique way to choose the sign and so that xmath146 is well defined note that in some instances xmath140 can be the identity we have thus proved the following fact prop invsubvar let xmath4 be a hyperbolic knot admitting a symmetry xmath17 of prime odd order the restriction map from the xmath17invariant subvariety of xmath4 to the character variety of xmath127 induces a bijection between the zariski open sets consisting of their irreducible characters proposition prop invsubvar holds more generally for hyperbolic knots admitting either a free or a periodic symmetry of odd order and 
for character varieties over fields of positive odd characteristic let xmath4 be a hyperbolic knot admitting a periodic symmetry xmath17 of odd prime order xmath9 let xmath18 be the associated quotient link denote by xmath147 the coordinate of the variety xmath120 corresponding to the trace of xmath133 proposition prop invsubvar implies at once that xmath136 is birationally equivalent to a subvariety of xmath148 where xmath149 and xmath150 notethat since xmath9 is odd the set xmath151 equals xmath152 in particular this includes a lift to xmath153 of the holonomy of xmath127 when xmath154 observe that this means that the image of the meridian is conjugate to xmath155 a rotation of angle xmath156 that has order xmath9 in xmath153 prop periodic the variety xmath157 contains at least xmath7 irreducible curves xmath158 each of which contains at least one irreducible character as a consequence all these components are birationally equivalent to a subvariety xmath159 of xmath136 furthermore the curves xmath159 are irreducible components of the whole xmath6 not only the invariant part first of all remark that the intersection of xmath120 with the hyperplane xmath160 contains the holonomy character xmath161 of the hyperbolic orbifold structure of xmath127 in particular a component of xmath162 is an irreducible curve xmath163 containing xmath161 the so called excellent or distinguished component this is the curve that viewed as a deformation space allows to prove thurston s hyperbolic dehn filling theorem xcite eg the character xmath161 takes values in a number field xmath164 containing the subfield xmath165 of degree xmath166 the galois conjugates of xmath161 are contained in xmath167 for some xmath168 as xmath152 is precisely the set of galois conjugates of xmath169 this yields the xmath166 components defined by xmath170 xmath171 though the number of conjugates may be larger depending on the degree of the number field xmath164 to prove the assertion that these curves are irreducible components of xmath6 notice that the restriction xmath172 is the holonomy of the hyperbolic structure of xmath58 therefore by calabi weil rigidity the zariski tangent space of xmath173 at xmath172 is one dimensional this space equals the cohomology group of xmath58 with coefficients in the lie algebra xmath174 twisted by the adjoint of the holonomy cf xcite using for instance simplicial cohomology the dimension of this cohomology can be established by the vanishing or not of certain polynomials with integer coefficients in the entries of the representation in particular the same dimension count is true for its galois conjugates this zariski tangent space gives an upper dimension bound that establishes the final claim remark that xmath136 may contain other components than the ones described above in particular if xmath22 is itself hyperbolic there is at least one extra component whose characters correspond to representations that map xmath133 to the trivial element that is the lift of the excellent component of xmath175 let xmath4 be a hyperbolic knot which is periodic of prime order xmath176 then xmath6 contains at least xmath177 irreducible components which are curves in addition there is an extra irreducible component when xmath22 itself is hyperbolic by considering the abelianisation xmath178 of the fundamental group of the orbifold xmath127 it is not difficult to prove that xmath179 consists of xmath180 lines on the other hand the abelianisation of the fundamental group of the exterior of xmath4 consists in a 
unique line which is fixed pointwise by the action induced by xmath17 on xmath181 it follows that in general the fixed subvariety of the whole character variety of xmath4 is not birationally equivalent to the whole character variety of the orbifold for this reason we have restricted our attention to xmath136 let xmath4 be a hyperbolic knot admitting a free symmetry xmath17 of odd prime order xmath9 let xmath18 be the associated link as defined in section s quotientlink see in particular proposition p quotientlink in this case the irreducible characters of xmath136 are mapped inside the subvariety of xmath120 obtained by intersection with the hypersurface defined by the condition that its characters correspond to representations that send xmath182 to the trivial element note that in xmath134 one has xmath183 we write xmath184 thus the representations of xmath127 must satisfy xmath185 this provides a motivation to look at the restriction to the peripheral subgroup xmath186 generated by xmath133 and xmath187 xmath188 when this restriction has finiteness properties we are able to find uniform bounds on the number of components of xmath189 prop bound assume that is a finite map then there is a constant xmath26 depending only on xmath120 such that the number of components of xmath189 is xmath190 notice that the components of xmath120 have dimension at least two xcite the hypothesis in proposition prop bound implies in particular that they are always surfaces we give in the next section an example of a link for which is a finite map as a consequence we have c freebound there exists a sequence of hyperbolic knots xmath8 parametrised by infinitely many prime numbers xmath9 such that xmath8 has a free symmetry xmath17 of order xmath9 but xmath191 is bounded uniformly in xmath9 since xmath9 and xmath38 are coprime there exist xmath192 and xmath193 such that the elements xmath194 and xmath195 generate the fundamental group of xmath196 the character variety xmath197 is a surface in xmath198 with coordinates xmath199 xmath200 and xmath201 defined by the equation xmath117 the equations xmath202 and xmath203 determine a line xmath204 contained in the surface xmath197 which corresponds to the subvariety of characters of representations that are trivial on xmath194 to count the components of xmath136 it is enough to count the components of xmath205 the map xmath206 being finite there is a zariski open subset of each irreducible component of xmath120 on which the map is finite to one as a consequence there is a finite number xmath207 of curves in xmath120 which are mapped to points of xmath197 it follows that the number of irreducible components xmath205 is bounded above by xmath208 where xmath209 is the cardinality of the generic fibre of xmath206 consider the two component xmath3bridge link xmath10 pictured in figure f link bridge link xmath10 and the generators of its fundamental group for each prime xmath210 and each xmath211 one can construct a symmetric knot xmath4 as described in section s quotientlink since the absolute value of the linking number of the two components of xmath47 is xmath1 the construction does not give a knot for xmath212 which must thus be excluded using wirtinger s method one can compute a presentation of its fundamental group xmath213 where the generators xmath133 and xmath214 are shown in figure f link having chosen the meridian xmath133 the corresponding longitude is xmath215 an involved but elementary computation gives the following defining equation for xmath120
xmath216 where xmath50 xmath217 and xmath218 represent the traces of xmath133 xmath214 and xmath219 respectively the equation can also be found in xcite note that the variety consists of two irreducible components the first one being that of the abelian characters a similar computation gives an expression for the trace of xmath187 in terms of xmath50 xmath217 and xmath218 xmath220 we want to understand the generic fibre of the restriction map xmath221 where xmath120 is a surface contained in xmath222 with coordinates xmath50 xmath217 and xmath218 and xmath197 is also a surface contained in xmath222 but with coordinates xmath223 xmath224 and xmath225 for each fixed point xmath226 in xmath227 the fibre of xmath206 consists of the points xmath228 which satisfy xmath229 once xmath50 is replaced by its value xmath223 the points we are interested in correspond to the intersection of two curves in xmath230 with coordinates xmath231 we see immediately that for generic values of xmath224 each point of xmath232 is the image of at most a finite number of points in xmath120 and such finite number is bounded above by the product of the degrees of the two polynomials in xmath217 and xmath218 ie xmath233 this shows that proposition prop bound applies to this link and corollary c freebound holds let xmath126 be a hyperbolic link with two components such that xmath19 is trivial assume that xmath234 for each odd prime number xmath9 that does not divide the linking number xmath235 the knot xmath22 lifts to a knot xmath8 in the xmath9fold cyclic cover of xmath15 branched along xmath19 by construction see section s quotientlink xmath8 is periodic of period xmath9 realised by xmath17 and the invariant subvariety xmath191 contains at least xmath7 irreducible components of dimension xmath14 these components of xmath191 are constructed in proposition prop periodic as the intersection of the character variety xmath120 with a family of xmath7 parallel hyperplanes these parallel planes correspond to a hypersurface which is the vanishing locus of the minimal polynomial for xmath236 in the variable xmath223 such polynomial can be easily computed from the xmath9th cyclotomic polynomial and is defined over xmath125 the characters of xmath191 correspond to representations of the orbifold xmath237 note that xmath238 may have further components besides those provided by proposition prop periodic since the orbifold may admit irreducible representations that are trivial on xmath133 these irreducible representations correspond to characters for which xmath239 in any case xmath238 contains at least xmath7 components of dimension xmath14 if we consider the character variety of xmath237 in characteristic xmath9 we have that since the only elements of order xmath9 are parabolic the entire character variety must be contained in the hyperplane defined by xmath239 we note that if xmath9 is not a ramified prime for xmath237 then it must contain as many xmath14dimensional irreducible components as the one over xmath12 that is at least xmath7 let us now turn our attention to the subvariety of xmath120 which consists in the intersection of xmath120 with the hyperplane xmath239 we remark that it is non empty since it must contain the character of the holonomy representation of xmath47 we are interested in its irreducible components of dimension xmath14 these are in finite number say xmath207 depending on xmath47 only and constitute an affine variety of dimension xmath14 that we shall denote xmath240 standard arguments of algebraic 
geometry show that for almost all odd primes xmath192 the character variety xmath120 as well as its subvariety xmath240 have the same properties over an algebraically closed field of characteristic xmath192 they have over the complex numbers in particular xmath240has xmath207 irreducible components we start by considering the invariant variety xmath191 and show that this variety ramifies at xmath9 if xmath9 is large enough indeed if this were not the case the above discussion implies that the number of irreducible curves of xmath191 should be at least xmath7 on one hand and at most xmath207 on the other it follows readily that xmath191 ramifies at xmath9 now since xmath7 curves of the invariant variety xmath191 are also irreducible components of xmath241 and since xmath191 is defined over xmath125 the character variety of xmath8 ramifies at xmath9 too the polynomial equations defined over xmath125 of the character variety of the orbifold xmath237 generate a non radical ideal when considered xmath242 since the minimal polynomial of xmath243 is not reduced when considered xmath0 john w morgan and hyman bass editors volume 112 of pure and applied mathematics academic press inc orlando fl 1984 papers presented at the symposium held at columbia university new york 1979
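the fibre count used in the family of examples above amounts to intersecting two plane curves obtained after fixing one trace coordinate so that a bound on the generic fibre follows from the degrees of the two polynomials the sketch below illustrates this step symbolically with placeholder polynomials since the actual defining equations of xmath120 are elided above

```python
# minimal sketch (placeholder polynomials, not the elided defining equations of
# the link character variety): bound the generic fibre of a restriction map by
# eliminating one variable from two plane curves and reading off the degree
import sympy as sp

y, z = sp.symbols('y z')

# hypothetical curves in the (y, z)-plane obtained after fixing the remaining
# trace coordinate; in the text the product of their degrees gives the bound
f = y**2 * z - y * z + z**2 - 3
g = y * z**2 - 2 * y + z - 1

res = sp.resultant(f, g, z)          # eliminate z
poly = sp.Poly(res, y)
print(poly.degree())                 # bezout-type bound on the fibre cardinality
print(poly.nroots())                 # y-coordinates of the intersection points
```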
we study character varieties of symmetric knots and their reductions xmath0 we observe that the varieties present a different behaviour according to whether the knots admit a free or periodic symmetry ams classification primary 57m25 secondary 20c99 57m50 keywords character varieties hyperbolic knots symmetries
introduction symmetric knots and two-component links character varieties the character variety of @xmath47 and the invariant subvariety of @xmath4 knots with periodic symmetries knots with free symmetries a family of examples invariant character varieties over fields of positive characteristic
of the three most fundamental parameters of a star mass age and composition age is arguably the most difficult to obtain an accurate measure direct measurements of mass eg orbital motion microlensing asteroseismology and atmospheric composition eg spectral analysis are possible for individual stars but age determinations are generally limited to the coeval stellar systems for which stellar evolutionary effects can be exploited eg pre main sequence contraction isochronal ages post main sequence turnoff individual stars can be approximately age dated using empirical trends in magnetic activity element depletion rotation or kinematics that are calibrated against cluster populations andor numerical simulations eg xcite however such trends are fundamentally statistical in nature and source to source scatter can be comparable in magnitude to mean values age uncertainties are even more problematic for the lowest mass stars m xmath5 05 mxmath2 as post main sequence evolution for these objects occurs at ages much greater than a hubble time and activity and rotation trends present in solar type stars begin to break down eg xcite for the vast majority of intermediate aged 110 gyr very low mass stars in the galactic disk barring a few special cases eg low mass companions to cooling white dwarfs xcite age determinations are difficult to obtain and highly uncertain ages are of particular importance for even lower mass brown dwarfs m xmath5 0075 mxmath2 objects which fail to sustain core hydrogen fusion and therefore cool and dim over time xcite the cooling rate of a brown dwarf is set by its age dependent luminosity while its initial reservoir of thermal energy is set by gravitational contraction and hence total mass as such there is an inherent degeneracy between the mass age and observable properties of a given brown dwarf in the galactic field population one can not distinguish between a young low mass brown dwarf and an old massive one from spectral type luminosity or effective temperature alone this degeneracy can be resolved for individual sources through measurement of a secondary parameter such as surface gravity which may then be compared to predictions from brown dwarf evolutionary models eg xcite however surface gravity determinations are highly dependent on the accuracy of atmospheric models which are known to have systematic problems at low temperatures due to incompleteness in molecular opacities eg xcite and dynamic atmospheric processes eg xcite discrete metrics such as the presence of absence of absorption depleted in brown dwarfs more massive than 0065 mxmath2 at ages xmath6200 myr xcite are generally more robust but do not provide a continuous measure of age for brown dwarfs in the galactic field population binary systems containing brown dwarf components can be used to break this mass age degeneracy without resorting to atmospheric models specifically systems for which masses can be determined via astrometric andor spectroscopic orbit measurements and component spectral types effective temperatures andor luminosities assessed can be compared directly with evolutionary models to uniquely constrain the system age eg xcite furthermore by comparing the inferred ages and masses for each presumably coeval component such systems can provide empirical tests of the evolutionary models themselves a benchmark example is the young xmath7300 myr binary and perhaps triple brown dwarf system gliese 569b xcite with both astrometric and spectroscopic orbit determinations and resolved component 
spectroscopy this system has been used to explicitly test evolutionary model tracks and lithium burning timescales xcite as well as derive component ages which are found to agree qualitatively with kinematic arguments eg xcite other close binaries with astrometric or spectroscopic orbits have also been used for direct mass determinations eg xcite but these systems generally lack resolved spectroscopy and therefore precise component characterization they have also tended to be young preventing stringent tests of the long term evolution of cooling brown dwarfs older nearby very low mass binaries with resolved spectra eg xcite generally have prohibitively long orbital periods for mass determinations recently we identified a very low mass binary system for which a spectroscopic orbit and component spectral types could be determined the late type source 2mass j03202839xmath00446358 hereafter 2mass j0320xmath00446 xcite our independent discoveries of this system were made via two complementary techniques hereafter bl08 identified this source as a single lined radial velocity variable with a period of 067 yr and separation xmath704 au following roughly 3 years of high resolution near infrared spectroscopic monitoring see xcite hereafter bu08 demonstrated that the near infrared spectrum of this source could be reproduced as an m85 plus t5xmath11 unresolved pair based on the spectral template matching technique outlined in xcite the methods used by these studies have yielded both mass and spectral type constraints for the components of 2mass j0320xmath00446 and thus a rare opportunity to robustly constrain the age of a relatively old low mass star and brown dwarf system in the galactic disk in this article we determine a lower limit for the age of 2mass j0320xmath00446 by combining the radial velocity measurements of bl08 and component spectral type determinations of bu08 with current evolutionary models our method is described in xmath8 2 which includes discussion of sources of empirical uncertainty and systematic variations from four sets of evolutionary models we obtain lower limits on the age component masses and orbital inclination of the system and compare our age constraint to expectations based on kinematics magnetic activity and rotation of the primary component in xmath83 we discuss our results focusing in particular on how future observations could provide bounded limits on the age and component masses of this system and thereby facilitate tests of the evolutionary models themselves at late ages evolutionary models predict the luminosities and effective temperatures of cooling brown dwarfs over time parameters that have been shown to correlate well with spectral type eg xcite luminosity is the more reliable parameter being based on the measured distance and broad band spectral flux of a source as opposed to model dependent determinations of photospheric gas temperature andor radius however in the case of 2mass j0320xmath00446 neither distance nor component fluxes have been measured the latter due to the fact that this system is as yet unresolved and for the near future unresolvable see xmath8 3 we therefore used the component spectral types of this system and luminosity measurements for similarly classified single unresolved sources from xcite and xcite to estimate the component luminosities for the m85 primary there are 13 m8m9 field dwarfs with bolometric luminosities parallax distance and broad band spectral flux measurements reported in the studies listed above two of the sources the 
m8 lhs 2397a a known binary xcite and the m9 lp 944 20 believed to be a younger system xmath7500 myr xciteare unusual sources and therefore excluded from this analysis the mean bolometric magnitude of the remaining stars is xmath9 1336xmath1029 mag corresponding to xmath10 345xmath1012 for the t5xmath11 secondary there are fewer field brown dwarfs with reliable luminosity measurements 1 t45 dwarf and 5 t6 dwarfs and these show considerably greater scatter in their bolometric magnitudes xmath9 172xmath106 this scatter may be due in part to unresolved multiplicity which appears to be enhanced amongst the earliest type t dwarfs xcite hence we estimated the luminosity of 2mass j0320xmath00446b using the xmath11spectral type relation of xcite 137376e1 190250e1 173083e2 740013e3 175144e3 114234e4 232248e06 where xmath11 xmath12sptxmath13 and sptt0 10 sptt5 15 etc a mean xmath9 1709xmath1029 xmath10 494xmath1017 was adopted where we have taken into account the uncertainty in the secondary spectral type and the xmath11spectral type relation 022 mag this value agrees well with estimates from xmath9 169xmath104 and xmath9 173xmath106 in order to assess systematic uncertainties in the derived age and component properties we considered four different sets of evolutionary models in our analysis the cloudless models of hereafter tucson models the cond cloudless models of hereafter cond models and the cloudless and cloudy models from hereafter sm08 models all four sets of models assume solar metallicity which is appropriate given that composite red optical and near infrared spectra of 2mass j0320xmath00446 show no indications of subsolar metallicity the choice of cloudless evolutionary models referring to the absence of condensate clouds in atmospheric opacities is driven partly by their availability in addition the spectral energy distributions of the m85 and t5 components of 2mass j0320xmath00446 are minimally affected by condensate cloud opacity eg xcite however cloud opacity in the intermediate l dwarf stage may slow radiative cooling during this phase and bias the inferred age of the t type secondary sm08 although xcite have claimed that clouds have only a small effect on evolution to test this possibility we chose to examine both the cloudless and cloudy sm08 models the latter of which takes into account photospheric cloud opacity in thermal evolution through the use of atmospheric models generated according to the prescriptions outlined in xcite and sm08 figure figmodels compares the luminosity estimates for the two components of 2mass j0320xmath00446 to the evolutionary tracks of each model set the luminosities and their uncertainties constrain the mass age parameter space of each component as illustrated in figure figmvst component masses generally increase with system age as more massive low mass stars and brown dwarfs take longer to radiate their greater reservoir of heat energy from initial contraction the mass of the primary of this system reaches an asymptotic value of xmath7008009 mxmath2 for ages xmath61 gyr consistent with a hydrogen fusing very low mass star if the system is younger than xmath7400 myr the primary could be substellar note that ages xmath5300 myr primary masses xmath140065 mxmath2 can be ruled out based on the absence of absorption at 6708 in the unresolved red optical spectrum of this source xcite the mass of the secondary increases across the full age range shown in figure figmvst as this component is substellar up to 10 gyr there is some divergence in the evolutionary 
tracks at late ages for this component however the tucson models predict a mass near the hydrogen burning limit while the sm08 cloudless and cloudy models predict masses above and below the li burning minimum mass respectively the kink in the mass age relation of 2mass j0320xmath00446b at ages of 200 300 myr particularly in the cond and sm08 models reflects the prolonged burning of deuterium in brown dwarfs with masses just above 0013 mxmath2 producing higher luminosities at this temporary stage of evolution the mass ratio of the system xmath15 mxmath16mxmath17 also increases as a function of age ranging from xmath702 at 100 myr to a maximum of xmath708 at 10 gyr with constraints in the mass age phase space provided by the component luminosities and evolutionary models we can now use the radial velocity orbit to break the mass age degeneracy the radial velocity variations measured by bl08 only probe the recoil velocity of the primary of the 2mass j0320xmath00446 system these observations provide a coupled constraint between the masses and inclination of the system xmath18 bl08 where xmath19 is the inclination angle of the orbit and mxmath17 and mxmath16 are the masses of the primary and secondary components in solar mass units respectively we can make a geometric constraint that xmath20 which yields a transcendental equation for the lower limit of the secondary component mass of the system as a function of the primary component mass using our age dependent lower bound for the latter based on the evolutionary models including luminosity uncertainties the constraint on xmath21 from eqn 1 translates into a minimum secondary mass as a function of age as shown in figure figmvst finally the age at which the upper bound of the secondary component mass based on the evolutionary models crosses the radial velocity minimum mass line corresponds to the minimum age of the system all four models predict a minimum age for 2mass j0320xmath00446 in the range 1722 gyr table tabmodelfit this age is in qualitative and quantitative agreement with those inferred by bl08 from the kinematics of the 2mass j0320xmath00446 system space motions of this systemare xmath22 38xmath15 km sxmath23 xmath24 20xmath13 km sxmath23 and xmath25 32xmath14 km sxmath23 where we assume an lsr solar motion of xmath26 10 km sxmath23 xmath27 525 km sxmath23 and xmath28 717 km sxmath23 xcite xcite equation 8 predicts an age xmath2916 gyr at the 95 confidence level for these kinematics andstellar age activity trends in the case of the latter the optical spectrum of 2massj0320xmath00446 shows no detectable hxmath30 emission xcite even though xmath2990 of nearby m8m9 dwarfs exhibit such emission xcite for comparison xcite estimate an activity lifetime ie timescale for hxmath30 emission to drop below detectable levels of 8xmath31 gyr for m7 dwarfs this age may be too high of an estimate for 2mass j0320xmath00446 as the increase in activity lifetimes for spectral types m2m7 observed by xcite may not continue for later spectral types magnetic field lines are increasingly decoupled from lower temperature photospheres and the frequency and strength of hxmath30 emission decrease rapidly beyond type m7m8 eg xcite hence the absence of magnetic emission from 2mass j0320xmath00446a is merely indicative of an older age as is its kinematics rotation is a third commonly used empirical age diagnostic for stars based on the secular angular momentum loss observed amongst solar type stars through the emission magnetized stellar winds eg xcite however so 
called gyrochronological relations calibrated for these stars are not necessarily applicable to lower mass objects due both to the fully convective interiors of the latter and the decoupling of field lines from low temperature atmospheres eg xcite indeed 2mass j0320xmath00446a proves to be a relatively rapid rotator with an equatorial spin velocity of xmath32 165xmath105 km sxmath23 and a rotation period xmath14 74 hr bl08 for solar type stars this rapid rotation generally indicates a young age an extrapolation in color of gyrochronology relations by equation 3 assuming xmath33 xmath34 21 for 2mass j0320xmath00446a eg xcite yields an age of only xmath701 myr this is inconsistent with the absence of absorption space kinematics and lack of magnetic emission from this source clearly rotation does not provide a useful age metric for the 2mass j0320xmath00446 system further emphasizing the breakdown of age angular momentum trends in the lowest mass stars the derived minimum age and radial velocity constraints of the 2mass j0320xmath00446 system allow us to constrain model dependent minimum masses for its components as well mxmath17 xmath35 00800082 mxmath2 and mxmath16 xmath35 00530054 mxmath2 the ranges in these values reflect variations between the four evolutionary models table tabmodelfit note that there is essentially no difference in the minimum masses inferred from the cloudy and cloudless sm08 models the minimum mass ratio of the system xmath36 060062 is also consistent across the evolutionary models and in accord with the general preference of large mass ratio systems observed amongst very low mass binaries xmath2990 of known binaries with xmath37 mxmath2 have xmath38 see xcite finally while the minimum ages and masses of 2mass j0320xmath00446 are inferred assuming xmath20 it is possible to constrain the maximum masses and minimum orbital inclination of this system assuming an upper age limit adopting xmath39 10 gyr we infer xmath40 080086 corresponding to xmath41 53xmath4259xmath42 this constraint is only slightly more restrictive than the xmath43 lower limit determined by bl08 assuming that the fainter secondary must have a lower mass than the primary it does not significantly improve the chances that this system is an eclipsing edge on system xmath703 by geometry maximum primary 0088 mxmath2 and secondary masses 00660075 mxmath2 are effectively set by the luminosity constraints and evolutionary models here we see the most significant difference between the models a 13 discrepancy in the maximum mass of the secondary component likely due to different treatments of light element fusion near the li and h burning minimum masses this variation confirms the importance of older binary systems as tests of long term brown dwarf evolution particularly near fusion mass limits we also note that there is little difference xmath74 in the maximum masses inferred from the sm08 cloudy and cloudless models illustrating again the negligible role of cloud opacity in the long term evolution of brown dwarfs like 2mass j0320xmath00446b the combination of component luminosities radial velocity orbit of the primary and evolutionary models have allowed us to estimate the minimum age of the 2mass j0320xmath00446 system and its component masses the ages are consistent between four evolutionary models of brown dwarfs and are more precise although not necessarily more accurate than estimates based on as yet poorly constrained statistical trends in kinematics magnetic activity and angular momentum evolution 
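the key step above is eqn 1 combined with xmath20 a single lined spectroscopic orbit fixes a combination of the two masses and the inclination and setting the sine of the inclination to one turns it into a minimum secondary mass for each assumed primary mass the sketch below illustrates that inversion numerically all numbers are placeholders and not the measured parameters of 2mass j0320xmath00446

```python
# minimal sketch of turning a single-lined spectroscopic mass function into a
# minimum secondary mass (sin i = 1), as in the construction described above;
# the mass function value and the primary mass below are placeholders, not the
# measured parameters of this system
from scipy.optimize import brentq

f_M = 1.0e-3    # hypothetical mass function, (m2*sin i)**3/(m1+m2)**2, in msun
m1_min = 0.080  # hypothetical evolutionary-model lower bound on the primary mass

def residual(m2, m1=m1_min, sini=1.0):
    return (m2 * sini)**3 / (m1 + m2)**2 - f_M

m2_min = brentq(residual, 1e-4, 1.0)   # bracket the root between ~0 and 1 msun
print(f"minimum secondary mass ~ {m2_min:.3f} msun")
```

scanning this minimum over age and comparing it with the model upper bound on the secondary mass gives the crossing that defines the minimum ages quoted in table tabmodelfit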
however a bounded age estimate remains elusive due to the unknown inclination of the system and determination of component masses as discussed in bl08 the inclination of the 2mass j0320xmath00446 orbit is irrelevant if the radial velocity orbit of the secondary can be determined in that case one need only compare the derived system mass ratio and component luminosities to evolutionary models to obtain a bounded age estimate measurement of the secondary motion in the xmath44band data of bl08 was not possible due to the very large flux contrast between the components bl08 rule out a contrast ratio of xmath5101 at these wavelengths spectral template fits from bu08 predict a contrast ratio of xmath63501 a more effective approach would be the acquisition of radial velocity measurements in the 12 13 xmath4 band where the t dwarf secondary is considerably brighter and the contrast ratio is closer to 201 depending on the absolute magnitude scale see discussion in bu08 at these contrasts the radial velocity of the secondary can be measured using existing techniques for high contrast spectroscopic binary systems eg xcite alternately if this system is observed to eclipse then xmath45 and the age of the system is uniquely determined table tabmodelfit lists the ages corresponding to this scenario ranging from 25xmath46 gyr to 32xmath47 gyr for the four models examined uncertainties include the full range of possible primary and secondary masses for which eqn 1 and the luminosity constraints are satisfied the relatively small age uncertainties estimated in this scenario 2560 are dominated by uncertainties in the component luminosities which could be measured from primary and secondary eclipse depths over a broad range of optical and infrared wavelengths eg xcite such measurements are currently more feasible than resolved imaging measurements as the tight separation inferred from the radial velocity orbit xmath48 xmath49 04 au bl08 assuming xmath50 implies a projected separation of xmath517 mas below the diffraction limit of the keck 10 m telescope at near infrared wavelengths yet the scientifically valuable measurements possible in an eclipsing scenario must be tempered by this scenario s low probability for a maximum age of 10 gyr we can only constrain the inclination of the 2mass j0320xmath00446 system to xmath3 and hence an eclipse probability of xmath703 regardless of whether 2mass j0320xmath00446 is an eclipsing pair determination of its orbit inclination andor component masses is a necessary step for testing brown dwarf evolutionary models at late ages specifically through agreement of system parameters with model isochrones cf xcite the power of such a test is the long lever arm of time provided by older field binaries such as 2mass j0320xmath00446 resulting in large differences in luminosities and effective temperatures for the more common near equal mass systems eg xcite there may in fact be many such systems to exploit in this manner simulations by bu08 of brown dwarf pairs in the vicinity of the sun predict that 12 25 of all m8l5 dwarf binaries have component spectral types that can be inferred from unresolved near infrared spectroscopy using the method outlined in xcite perhaps as many as 50 of these systems may be short period radial velocity variables xcite identification and follow up of these systems would complement the evolutionary model tests currently provided by younger systems xcite and would more robustly address uncertainties associated with low temperature light element fusion 
interior thermal transport and substellar interior structure as their effects are compounded over time the authors thank i baraffe a burrows m marley and d saumon for making electronic versions of their evolutionary models available and m liu for identifying the roundoff errors in the xmath11spectral type relation in xcite we also thank our referee k luhman for his helpful critique of the original manuscript cb acknowledges support from the harvard origins of life initiative this publication has made use of the vlm binaries archive maintained by nick siegler at http://www.vlmbinaries.org burgasser a j reid i n siegler n close l m allen p lowrance p j gizis j e 2007 in protostars and planets v eds b reipurth d jewitt and k keil univ arizona press tucson p 427 wilson j c miller n a gizis j e skrutskie m f houck j r kirkpatrick j d burgasser a j monet d g 2003 in brown dwarfs iau symp 211 ed e martin san francisco asp p 197
table tabmodelfit one column per evolutionary model
minimum age gyr: 1.7 | 2.2 | 2.2 | 2.0
minimum xmath51: 0.80 53xmath42 | 0.82 55xmath42 | 0.83 56xmath42 | 0.86 59xmath42
mxmath17 mxmath2: 0.082-0.088 | 0.080-0.088 | 0.082-0.088 | 0.082-0.088
mxmath16 mxmath2: 0.054-0.075 | 0.053-0.070 | 0.054-0.069 | 0.054-0.066
mxmath16mxmath17: 0.62-0.89 | 0.60-0.87 | 0.61-0.84 | 0.61-0.80
age for xmath52 gyr: 2.5xmath46 | 3.2xmath47 | 3.0xmath53 | 2.8xmath54
2mass j03202839xmath00446358ab is a recently identified late type m dwarf t dwarf spectroscopic binary system for which both the radial velocity orbit for the primary and spectral types for both components have been determined by combining these measurements with predictions from four different sets of evolutionary models we determine a minimum age of 20xmath103 gyr for this system corresponding to minimum primary and secondary masses of 0080 mxmath2 and 0053 mxmath2 respectively we find broad agreement in the inferred age and mass constraints between the evolutionary models including those that incorporate atmospheric condensate grain opacity however we are not able to independently assess their accuracy the inferred minimum age agrees with the kinematics and absence of magnetic activity in this system but not the rapid rotation of its primary further evidence of a breakdown in angular momentum evolution trends amongst the lowest luminosity stars assuming a maximum age of 10 gyr we constrain the orbital inclination of this system to xmath3 more precise constraints on the orbital inclination andor component masses of 2mass 0320xmath00446ab through either measurement of the secondary radial velocity orbit optimally in the 1213 xmath4 band or detection of an eclipse only 03 probability based on geometric constraints would yield a bounded age estimate for this system and the opportunity to use it as an empirical test for brown dwarf evolutionary models at late ages
introduction the age of 2mass j0320xmath00446 discussion
non hermitian operator has been introduced phenomenologically as an effective hamiltonian to fit experimental data in various fields of physics xcite in spite ofthe important role played non hermitian operator in different branches of physics it has not been paid due attention by the physics community until the discovery of non hermitian hamiltonians with parity time symmetry which have a real spectrum xcite it has boosted the research on the complex extension of quantum mechanics on a fundamental level ann jmp1jpa1jpa2prl1jmp2jmp3jmp4jpa3jpa4jpa5 non hermitian hamiltonian can possess peculiar feature that has no hermitian counterpart a typical one is the spectral singularity or exceptional point for finite system which is a mathematic concept it has gained a lot of attention recently xcite motivated by the possible physical relevance of this concept since the pioneer work ofmostafazadeh xcite the majority of previous works focus on the non hermitian system arising from the complex potential mean field nonlinearity pra2jpa6ali3pra13prd2prd3prd4prd5prd6prd7prd8 as well as imaginary hopping integral xcite in this paper we investigate the physical relevance of the spectral singularities for non hermitian interacting many particle system the non hermiticity arises from the imaginary interaction strength for two particle case the exact solution shows that there exist a series of spectral singularities forming a spectrum of singularity associated with the central momentum of the two particles we consider dynamics of two bosons as well as fermions in one dimensional system with imaginary delta interaction strength it shows that the two particle collision leads to amplitude reduction of the wave function for fermion pair the amplitude reduction depends on the spin configuration of two particles remarkably in both cases the residual amplitude can vanish only when the relative group velocity of two single particle gaussian wave packets with equal width reaches the magnitude of the interaction strength this phenomenon of complete particle pair annihilation is the direct result of the spectral singularity we also discuss the complete annihilations of a singlet fermion pair and a maximally two mode entangled boson pair based on the second quantization formalism this paper is organized as follows in section hamiltonian and solutions we present the model hamiltonian and exact solution in section dynamical signature we construct the local boson pair initial state as initial state which is allowed to calculate the time evolution based on this we reveal the connection between the phenomenon of complete pair annihilation and the spectral singularity in section second quantization representation we extend our study a singlet fermion pair and a maximally two mode entangled boson pair based on the second quantization formalism finally we give a summary in section summary we start with an one dimensional two distinguishable particle system with imaginary delta interaction the solution can be used to construct the eigenstates of two fermion and boson systems the hamiltonian has the form xmath0 where xmath1 and we use dimensionless units xmath2 for simplicity introducing new variables xmath3 and xmath4 where xmath5 we obtain the following hamiltonian xmath6 withxmath7here xmath3 is the center of mass coordinate and xmath4 is the relative coordinate the hamiltonian is separated into a center of mass part and a relative part and can be solvable exactly the eigenfunctions of the center of mass motion xmath8 are 
simply plane waves while the hamiltonian xmath9 is equivalent to that of a single particle in an imaginary delta potential which has been exactly solved in the refxcite then the eigen functions of the original hamiltonian can be obtained and expressed as xmath10 right labelwfeven left fracigamma ksin left kleft x1x2right right texttextrmsignleft x1x2right right notagendaligned in symmetrical form andxmath11 labelwfoddin antisymmetrical form the corresponding energy is xmath12with the central and relative momenta xmath13 the symmetrical wavefunction xmath14 is the spatial part wavefunction for two bosons or two fermions in singlet pair while the antisymmetrical wavefunction xmath15 only for two triplet fermions before starting the investigation on dynamics of two particle collision we would like to point that there exist spectral singularities in the present hamiltonian it arises from the same mechanism as that in the single particle systems xcite we can see that the eigen functions with even parity and momentum xmath16 can be expressed in the formxmath17with energy xmath18we note that function xmath19 satisfiesxmath20 0which accords with the definition of the spectral singularity in ref it shows that there exist a series of spectral singularities associated with energy xmath21 for xmath22 which constructs a spectrum of spectral singularities we will demonstrate in the following section that such a singularity spectrum leads to a peculiar dynamical behavior of two local boson pair or equivalently singlet fermion pair the emergence of the spectral singularity induces a mathematical obstruction for the calculation of the time evolution of a given initial state since it spoils the completeness of the eigen functions and prevents the eigenmode expansion nevertheless the completeness of the eigen functions is not necessary for the time evolution of a state with a set of given coefficients of expansion it does not cause any difficulty in deriving the time evolution of an initial state with arbitrary combination of the eigen functions namely any linear combination of function set xmath23 or xmath24 can be an initial state and the time evolution of it can be obtained simply by adding the factor xmath25 in order to investigate the dynamical consequence of the singularity spectrum we consider the time evolution of the initial state of the form xmath26 where xmath27 is the normalization factor which will be given in the following and xmath28 gleft kright exp left frac12beta 2k k02ikr0 right endaligned here xmath29 and xmath30 is arbitrary real number we explicitly havexmath31wherexmath32furthermore from the identity xmath33 4beta 2left left x1bright 2left x2bright 2b2right left alpha 24beta 2right x1x2 notagendaligned we can see that the cross term xmath34 vanishes if we take xmath35 the initial state can be written as a separable form xmath36 left k0gamma right left left varphi left x2right varphi left x1right uleft x2x1right varphi left x1right varphi left x2right uleft x1x2right right right notagendaligned where xmath37 is heaviside step function and xmath38 labelphi in this case xmath27 can be obtained as xmath39without loss of generality we have set the initial center of mass coordinate xmath40 and dropped an overall phase xmath41 we note that functions xmath42 and xmath43 represent gaussian functions with centers at xmath44 and xmath45 respectively obviously the probability contributions of xmath46 and xmath47 are negligible under the condition xmath48 we then yield xmath49 labelinitial state1 
which represents two boson wavepacket state with the same width group velocity xmath50 and location xmath51 herethe renormalization factor has been readily calculated by gaussian integral so far we have construct an expected initial state without using the biothogonal basis set the dynamics of two separated boson wavepackets can be described by the time evolution as that in the conventional quantum mechanics it is presumable that before the bosons start to overlap they move as free particles with the center moving in their the group velocities xmath52 and the width spreading as function of time xmath53 we concern the dynamic behavior after the collision to this end we calculate the time evolution of the given initial state which can be expressed as xmath54 by the similar procedure as above we find that the evolved wave function can always be written in the separated formxmath55wherexmath56frac4beta 2pi left 1 4beta 2t2right exp left frac2beta 2left r k02right 2 1 4beta 2t2fracileft 16beta 4r24rk0tk02right 4 16beta 2t2right andxmath57 left cos left krright fracigamma ksin left kleftvert rrightvert right right exp left ik2tright texttextrmdkwhere the normalization factor xmath58 straightforward algebra shows thatxmath59where xmath60 22left 4beta 4t21right idelta pm right delta pm fracbeta 4left leftvert rrightvert pm r0right 2t2k02tmp 2k0left leftvert rrightvert pm r0right 2left 4beta 4t21right endaligned in the case of xmath61 xmath62 the probability distribution is xmath63 which leads the total probability under the case xmath64xmath65 we can see that after the collision the residual probability becomes a constant and vanishes when xmath66 it shows that when the relative group velocity of two single particle gaussian wave packets with equal width reaches the magnitude of the interaction strength the dynamics exhibits complete particle pair annihilation in order to demonstrate such dynamic behavior and verify our approximate result the numerical method is employed to simulate the time evolution process for several typical situations the profiles of xmath67 are plotted in fig 1 we would like to point that the complete annihilation depends on the relative group velocity which is the consequence of singularity spectrum this enhances the probability of the pair annihilation for a cloud of bosons which mayprovide an detection method of the spectral singularity in experiment in this section we will investigate the two particle collision process from another point of view and give a more extended example by employing the second quantization representation the initial state in eq initial state1 can be expressed as the form xmath68 where xmath69 xmath70 is the creation operator for a boson in single particle state with the wavefunction xmath71 and xmath72 denotes the vacuum state of the particle operator similarly if we consider a fermion pair the initial state in eq initial state1 can be written asxmath73where xmath74 xmath75 is the creation operator for a fermion in single particle state with the wavefunctionxmath76herexmath77are the spin part of wavefunction we see that the initial state in eq fermion pair is singlet pair with maximal entanglement in contract state xmath78 should not lose any amplitude after collision on the other hand we can extend our conclusion to other types of initial state for instance we can construct the initial state withxmath79 gleft kright left k k0right exp left frac12beta 2 left k k0right 2ikr0right notagendalignedwhich are also local states in xmath80 and xmath81 
spaces respectively in coordinate space the above wavefunction has the fromxmath82 notag left k0gamma right left left varphi left 1right left x1right varphi left x2right varphi left 1right left x2right varphi left x1right right uleft x2x1right right notag left left left x1rightleftarrows x2right right right endalignedwhich can be reduced toxmath83 labelinitial state2under the approximation xmath48 herexmath84 is the normalized constant and xmath85 by the same procedure at time xmath86 the evolved wavefunction isxmath87withxmath88 k0left 1 2ibeta 2tright 32exp left fracbeta 2 left leftvert rrightvert pm left r02k0tright right 2 2left 4beta 4t21right idelta pm right delta pm fracbeta 4left leftvert rrightvert pm r0right 2t2k02tmp 2k0left leftvert rrightvert pm r0right 2left 4beta 4t21right endalignedwhere the normalization factorxmath89 in the case of xmath61 xmath62 the probability distribution is xmath90 which leads the total probabilityxmath65the profiles of xmath91 are plotted in fig 2 we can see that the same behavior occurs in the present situation in order to clarify the physical picture we still employ the second quantization representation by introducing another type of boson creation operator xmath92 xmath70 withxmath93then the initial state in eq initial state2 can be expressed asxmath94which is maximally two mode entangled state in summary we identified a connection between spectral singularities and dynamical behavior for interacting many particle system we explored the collision process of two bosons as well as fermions in one dimensional system with imaginary delta interaction strength based on the exact solution we have showed that there is a singularity spectrum which leads to complete particle pair annihilation when the relative group velocity is resonant to the magnitude of interaction strength the result forthis simple model implies that the complete particle pair annihilation can only occur for two distinguishable bosons maximally two mode entangled boson pair and singlet fermions which may predict the existence of its counterpart in the theory of particle physics
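As a rough numerical illustration of the collision dynamics and the near-complete pair annihilation described above, the sketch below propagates a symmetric (bosonic) relative-coordinate wave packet through an imaginary delta potential with a Crank-Nicolson scheme and reports how much probability survives the collision. This is a schematic reconstruction, not the authors' calculation: it assumes hbar = m = 1 and a kinetic term p^2/2 for the relative motion, smears the delta function over one grid site, and the values of gamma, k0 and beta are illustrative. With these assumptions the surviving norm is smallest when the mean relative momentum is comparable to gamma, in qualitative agreement with the resonance condition quoted in the text; the precise factor relating momentum, group velocity and gamma depends on the unit convention and is not taken from the paper.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

# grid for the relative coordinate r = x1 - x2
L, N = 400.0, 4096
r = np.linspace(-L / 2, L / 2, N, endpoint=False)
dr = r[1] - r[0]

gamma = 1.0      # imaginary-delta strength, V(r) = -i*gamma*delta(r)  (sign convention assumed)
beta = 0.15      # momentum-space width of each packet (illustrative)
r0 = 50.0        # initial distance of each packet from the origin

# H = p^2/2 - i*gamma*delta(r), with hbar = m = 1 assumed; delta smeared over one grid site
kin_diag = np.full(N, 1.0 / dr**2, dtype=complex)
kin_off = np.full(N - 1, -0.5 / dr**2, dtype=complex)
V = np.zeros(N, dtype=complex)
V[N // 2] = -1j * gamma / dr
H = diags([kin_off, kin_diag + V, kin_off], [-1, 0, 1], format="csc")

def surviving_probability(k0, dt=0.05):
    """Evolve a symmetric (bosonic) relative wavefunction, two Gaussian packets
    approaching each other with mean momenta +-k0, and return the norm that is left."""
    sigma = 1.0 / (2.0 * beta)                       # position-space width
    psi = (np.exp(-((r + r0) ** 2) / (4 * sigma**2) + 1j * k0 * r)
           + np.exp(-((r - r0) ** 2) / (4 * sigma**2) - 1j * k0 * r))
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dr)
    nsteps = int(2.5 * r0 / k0 / dt)                 # long enough for the packets to pass each other
    A = splu((identity(N, dtype=complex, format="csc") + 0.5j * dt * H).tocsc())
    B = identity(N, dtype=complex, format="csc") - 0.5j * dt * H
    for _ in range(nsteps):
        psi = A.solve(B @ psi)                       # Crank-Nicolson step; norm decays, H is non-hermitian
    return np.sum(np.abs(psi) ** 2) * dr

for k0 in (0.5, 1.0, 2.0, 4.0):
    print(f"k0 = {k0:3.1f}  surviving probability = {surviving_probability(k0):.3f}")
```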
motivated by the physical relevance of spectral singularities in interacting many particle systems we explore the dynamics of two bosons as well as two fermions in a one dimensional system with an imaginary delta interaction strength the exact solution shows that a two particle collision leads to amplitude reduction of the wave function for a fermion pair the amplitude reduction depends on the spin configuration of the two particles in both cases the residual amplitude can vanish only when the relative group velocity of the two single particle gaussian wave packets with equal width reaches the magnitude of the interaction strength exhibiting complete particle pair annihilation at the spectral singularity
introduction hamiltonian and solutions dynamical signature second quantization representation summary and discussion
the strong observational evidence for an accelerating universe xcite has sparked a widespread search for a dynamical explanation beyond a bare cosmological constant a plethora of other models have been proposed with quintessence a dynamical scalar field that behaves essentially as a modern day inflaton field being perhaps the simplest example see xcite in this context many potentials have been introduced that yield late time acceleration and tracking behaviour see xcite among other approaches modified gravity models have attracted great interest see xcite but also some criticism partly because they were introduced as purely phenomenological models but more seriously because it was not clear that they possessed a satisfactory newtonian limit in the solar system or that they were free of ghosts see xcite in this paper we investigate the propagating degrees of freedom of the so called cddett model xcite there already exist detailed studies of the newtonian limit xcite and the supernovae contraints xcite for this model herewe derive conditions that they be free of ghosts and that they have a real propagation speed less than or equal to that of light as we review below a transformation of the action shows that modified gravity models are equivalent to a number of scalar fields linearly coupled to higher order curvature invariants in the casein which these curvature invariants are fourth order the relevant one for the modified gravity models of refs xcite we obtain conditions for the propagating degrees of freedom to be well behaved in their late time attractor solutions friedmann robertson walker spacetimes with accelerating expansion this extends previous work which established their consistency in de sitter backgrounds xcite we find that while untroubled by ghosts the accelerating power law attractors in general have superluminal tensor and scalar modes which may place severe theoretical constraints on these models our starting point is the action proposed in xcite which we write in the form xmath0 labelstarta where xmath1 is a constant xmath2 xmath3 and xmath4 we have introduced xmath5 for generality but note that its presence does not change the late time behaviour of the accelerating attractors since for an accelerating universe both the xmath6 einstein hilbert term and the dark matter density become negligible in other words the exponent of the power law attractor does not depend on xmath7 see xcite finally we take the function xmath8 to be of the form xmath9 where a sum over xmath10 is implied the action starta can be written as that of einstein gravity coupled to a scalar field a form more suitable for analysing the propagating degrees of freedom see the appendix for a general analysis consider xmath11 labelstep1 where of course xmath12 otherwise the action is not finite the variation of this action with respect to xmath13 leads to xmath14 and using this relation action step1 and action starta yield the same equations of motion note that when xmath15 and xmath16 this action is equivalent to einstein hilbert gravity coupled to a single scalar through a gauss bonnet gb term xmath17 the coupling of a scalar field with a quadratic expression of the curvature invariants emerges naturally in the context of string theory in particular as was shown in xcite by gross and sloan in the low energy effective action the dilaton is coupled to a gauss bonnet term it is well known that such a term expanded about a minkowski vacuum ensures that the theory is ghost free see xcite it might then seem that 
taking the xmath18 to be the gb combination is a sensible choice because string theory predicts such a coupling to exist and string theory does not have ghosts however in models like ours for which minkowski spacetime is not a solution choosing the gb combination of parameters xmath18 is not a sufficient condition for the non existence of ghosts a ghost is a propagating degree of freedom whose propagator has the wrong sign and which therefore gives rise to a negative norm state on quantisation such states are allowed off shell in gauge field perturbation theory but are unacceptable as physical particles a theory of gravity with fourth order derivatives in the kinetic term inevitably has ghosts xcite but even a theory with second order derivatives alone has other potential problems once we break lorentz invariance as in a friedmann robertson walker frw background the kinetic terms of a field even though second order in derivatives may still have the wrong sign or may give rise to a propagation speed which is greater than 1 or imaginary to see this in more detail consider the action for a scalar field xmath13 s d4 x 12tt 2 12st 2 e problemaction the propagation speed of this scalar is xmath19 one may wish to impose one or more of the following conditions 1 a real propagation speed xmath20 otherwise all perturbations have exponentially growing modes 2 a propagation speed less than light xmath21 we will talk about this issue more in detail in section iii 3 no ghosts xmath22 to ensure a consistent quantum field theory clearly unless xmath23 and xmath24 are positive and their ratio less than one we will have instabilities superluminal propagation or ghosts we will see that in studying the action for small perturbations of the metric in modified gravity theories we will generally encounter actions of the form e problemaction if xmath25 the action starta can be written in terms of an einstein hilbert term plus a particular extra piece involving two new scalar fields furthermore because of the special properties of the gauss bonnet term the equations of motion are no longer 4th order but remain 2nd order in the fields taking the action step1 and introducing a new scalar field xmath26 we have s d4 x2bfr ub2ffr2gb hn1 where u4n2 fa34n2nn1 with xmath27 and xmath28 the gauss bonnet invariant making a field redefinition xmath29 the equation of motion for xmath30 is xmath31 and the gravitational equations then become r g2rf 2grf 8rf 4rf4grf 4rf12g0 independent of the background after application of the bianchi indentities these equations are second order in derivatives of the fields thanks to the gauss bonnet combination it is known that adding terms quadratic in the curvature invariants to the einstein hilbert action with a cosmological constant yields an extra scalar mode and a spin2 mode which is generically a ghost because of its fourth order field equation see xcite thus provided we are expanding around a constant xmath13 background such as de sitter space we can directly infer that modifed gravity models are generically afflicted by spin2 ghosts however in our case it is clear that the higher derivative terms cancel out identically because xmath15 as was already found in xcite the remaining extra scalar degree of freedom xmath30 is due to the presence of the extra xmath32 term in the lagrangian which vanishes if xmath16 crucially though the vanishing of the fourth order term is a necessary but not sufficient condition for the absence of ghosts in the spin2 sector as we described above one must also 
separately check the signs of the second order derivatives with respect to both time and space a check was not performed in xcite in this paperwe derive and study the kinetic terms for both the spin 0 and spin 2 fields in time dependent backgrounds to which derivatives of xmath33 contribute we find that both fields may be afflicted by instabilities or ghosts contrary to the claim of the absence of ghosts in frw spacetimes made in xcite we show that the special case of an empty accelerating universe the late time attractor of an frw cosmology in the modified gravity model under consideration the propagating states are generically but not universally superluminal over the xmath34 parameter space if the second order derivatives of the spin2 and spin0 fields do not have the correct signs in frw spacetimes then the theory may be inconsistent the existence of a ghost mode would lead for example to the over production of all particles coupled to it one may think of a theory with ghosts as an effective theory with no ghosts above some cutoff thereby restoring consistency see xcite however this cut off must be less than about 3 mev xcite a further condition that one may wish to impose is that the propagation speeds be less than or equal to unity one worry is that the existence of superluminal modes on the relevant cosmological backgrounds may lead to a catastrophic signature of causality violation see eg xcite other authors xcite have discussed the problem of superluminal propagation in non lorentzian backgrounds and in particular have suggested that the presence of superluminal modes would introduce a second horizon the so called sound horizon different from the light causal horizon which may lead to ambiguities and inconsistencies in black hole thermodynamics xcite furthermore it has been pointed out xcite that for some set of initial conditions superluminal modes may yield ill posed cauchy problems in particular in our case nothing prevents xmath35 and xmath36 the respective speeds of the scalar and tensor modes becoming infinite in generalthis would lead to causally connected spatial sections and eventually to an ill posed cauchy problem on the other hand in the context of non commutative geometry other authors have studied superluminal propagation and shown that there is no causality violation if there is a preferred reference frame xcite given these different possibilities in this paper we will present the constraints from superluminal propagation in a clearly distinct way from those arising from ghosts so as to allow readers to impose fewer or more constraints depending on the particular theory they are working with in this sectionwe begin with the special case xmath37 which we refer to as the modified gb action we vary with respect to xmath13 and xmath38 write xmath39 so that xmath40 and use the many useful identities contained in ref xcite to derive xmath41right nonumber rmunu left left ga 4 box f rightdealmudebenu 4 nanunamu f galbe 8 narhnamu f dealrhdebenu right nonumber left ralnurhobe 4 narhnanu f frac12uphigalbe right halbe labele deltagendaligned in order to establish whether the theory is stable and ghost free we must examine the second variation of the action this can be organized as before xmath42 the easiest term is xmath43 and we can simplify the mixed term using the field equation for xmath13 to obtain xmath44 where xmath45 after integration by parts we then have xmath46 narhnaside f endaligned we simplify the final term using the field equation for xmath38 and organise 
according to the number of derivatives of xmath47 in order to check that the original action is ghost free we need to establish that the fourth order terms vanish and that terms involving two derivatives of the metric have the appropriate sign it is already straightforward to see from eq e deltag that there can be no fourth order derivatives as the terms containing derivatives of the riemann and ricci tensors and the ricci scalar have already cancelled the remaining second order terms are g2 s2 d4x h h h 12 h h a a 12 h h h h where xmath48 in an frw space time the background fields break lorentz invariance with g a2 f 00b g g00c where xmath49 xmath50 and where we have assumed spatial flatness herea prime denotes a derivative with respect to conformal time xmath51 and xmath52 in this backgroundit is convenient to decompose the metric perturbation into scalar vector and tensor modes in the usual way to check the sign of the kinetic term for the spin2 particle we first identify it in the expansion of the metric hij g3ij ij cij hij where the symbol xmath53 denotes covariant differentiation with respect to the spatial metric xmath54 as the transverse traceless xmath55 xmath56 tensor mode xmath57 it is straightforward to show that the tensor part of the second variation which is second order in derivatives of xmath58 is g2st2 d4x rearranging and using xmath59 to display the time and space derivatives separately we find g2 st2 d4x we therefore have two conditions for a stable theory free of ghosts f f0 f 0 condeta the ratio of the coefficients of the second derivatives is the propagation velocity squared of the spin2 mode therefore in terms of physical time xmath60 and planck units xmath61 the condition that a background have a real and non superluminal spin2 propagation speed is 0c2 2 1 spin2speed cond1 where a dot indicates a derivative with respect to xmath62 and xmath63 further the condition that the spin2 mode not be a ghost is 0 1 8 hfcond2 the same strictures apply to the scalar spin0 mode whose kinetic term is much more difficult to evaluate fortunately our lagrangian is a special case of a class of theories studied in ref there it was shown that the gauge invariant combination xmath64 one of the bardeen scalars satisfies a3 q p 0 where q p if the spin2 propagator is well behaved the sign of the scalar time derivatives is also correct thanks to eq cond2 and there are no spin0 ghosts finally the condition that the scalar propagation speed be real and non superluminal reads 0c0 2 1 4383 1 spin0speed cond3 having established these results let us now focus on the particular class of models in hand we have f a241n1 where xmath65 without loss of generality we fix to unity the coefficient of the square of the riemann tensor in the lagrangian ie xmath66 and note that the xmath13 equation of motion in a flat frw background then yields r2gb24h2a since we are interested in the behaviour of these actions in the universe at late times we study the attractor solutions for the cddett model when the matter is diluted away because of the expansion in general xcitethe attractor solutions can be written as xmath67 with xmath68 for the gauss bonnet combination it was found that the relevant accelerating power law attractor is given by xmath69 we see that p1 with xmath70 for an accelerating universe and therefore f 4n1p1an2 f96n14n3p12an3tf thus it is clear that if xmath71 or xmath72 condeta are satisfied for an accelerating universe xmath68 however the spin2 propagation speed is c2 2 which for the pure gb 
theory approaches 1 from above and so the graviton propagates faster than light for the scalar condition cond3 amounts to 0c0 2 1 83 1 a sufficient condition to satisfy these relations is 4n74 p 4n3 which is clearly satisfied for the pure gb theory for any xmath73 to summarise there are no ghosts or instabilities for the modified gb attractor solutions for any xmath73 however the graviton propagates superluminally which may render the pure gb theory inconsistent finally the pure gb combination is not phenomenologically viable in this case xmath74 and xmath13 vanishes as xmath75 approaches to zero this means that for this combination in the cddett model xmath76 is a singularity of the equations of motion and it not possible to change the sign of xmath75 the universe can never change from deceleration to acceleration this is reminiscent of the problem encountered in some other modified gravity theories xcite in this section we study the propagation of the spin2 and spin0 modes when matter is present which changes the background cosmology and hence the coefficients of the second derivatives in the action as we discovered in the previous subsection a realistic model should have xmath25 so that the modification is not pure gauss bonnet the conditions cond1cond2spin2speedcond3spin0speed can be generalized in a straightforward but lengthy way see xcite and depend on the parameter xmath77 recall we are assuming that xmath78 as required to cancel fourth order derivatives the correct signs in the spin2 propagator are assured if 1 4bfr 8 f 0condgen1 e realtensorspeedmatter 1 4bfr8hf 0condgen2 e noghostmatter with the propagation speed condition reading c2 2 1 e sublumgraviton the coupling of xmath79 to the ricci scalar outside the gb combination gives an extra propagating scalar degree of freedom xmath80 see xcite the other bardeen scalar the two scalar modes have the same speed of propagation xmath35 and once again if the graviton propagator is well behaved there is only the following extra condition to be satisfied 0c0 21fhf fh condgen3 where q14bfrfr8fh2q21 4bfr8hf these conditions put additional bounds on the parameter space spanned by the xmath81 which define the cddett modified gravity theories if we require that they hold at all times during the evolution of the universe in particular we may require that they hold when the universe expands with a power law xmath82 for which r f3t4n4 where 112p23b2p12 2pp1 26p2p1 3 therefore we have that c2 21 43t4n2b2 8pn1 latet2 c0 2 1 latet0 this relation holds for any power law behaviour unless either xmath83 or xmath84 becomes zero we expect power law expansion at early time where the universe should behave as an ordinary friedmann model and at late time where it approaches an accelerating attractor solution let us consider a matter dominated universe xmath85 assuming a positive xmath86 to ensure the no ghost conditions are satisfied in this caseit is clear that the tensor modes are superluminal and their speed tends to one as xmath87 on the other hand for a late time attractor solution the accelerating power law is given by xcite p where xmath88 the exponent xmath89 is real for large xmath90 if xmath91 or xmath92 furthermore at late times eq latet2 takes the approximate form c2 2 which means that for non superluminal behaviour we require p4n3 it should be noted that the value xmath93 can not be considered because this would imply that xmath94 for large xmath90 one requires 85 to avoid superluminal tensor modes at late times eq latet0 takes the approximate 
form c0 2 1 scalla for large xmath90 eq scalla implies that the scalar modes are not superluminal if 51 for large xmath90 in the region xmath95 both tensor and scalar modes are not superluminal in the case of xmath96one can see that the allowed region is xmath97 where the lower bound is given by xmath98 and the upper one by xmath99 this additional constraint would rule out the region xmath100 where xmath89 is real identifed by mena at al as producing cosmologies consistent with supernovae data a plot of the allowed values of xmath101 and xmath90 after imposing the no ghost constraint or the no superluminal constraint or both is shown in fig f guarda 190xmath90 plane for the constraints the light grey area corresponds to the region in which only the no ghost constraint eq e noghostmatter holds the darker area represents the points at which both the no ghost and the positive squared velocity conditions eqs e realtensorspeedmatter e noghostmatter condgen3 hold at the same time finally the darkest region is the region of the plane at which all the constraints no ghost xmath102 eqs e realtensorspeedmatter e noghostmatter condgen3 e sublumgraviton hold for both scalar and tensor modestitlefig f guarda the search for a satisfactory model for the acceleration of the universe has been pursued in many different ways recently models attempting to explain such behavior by changing the gravity sector have been proposed xcite in particular the cddett model xcite has the attractive feature of the existence of accelerating late time power law attractors while satisfying solar system constraints xcite in this paperwe have investigated the consistency of the propagating modes tensor and scalar for the action xmath103 in order for this action to be ghost free it is necessary but not sufficient to set xmath15 xcite so that there are no fourth derivatives in the linearised field equations what remained was the possibility that the second derivatives might have the wrong signs and also might allow superluminal propagation at some time in a particular cosmological background for example for the case xmath16 for which the modification is a function of the gauss bonnet term we found that the accelerating power law attractor solutions give propagators with the correct signs but with a spin2 mode propagating faster than light we have also examined the general second order cddett modified gravity theory in a frw background with matter which is parametrized by the energy scale xmath7 and by xmath104 the deviation of the ricci scalar squared term from that appearing in the gauss bonnet combination or equivalently xmath105 we found that the theories are ghost free but contain superluminally propagating scalar or tensor modes over a wide range of parameter space in conclusion we note that there are likely to be further constraints from compatibility with cmb data as we have changed gravity on large scales quite significantly to investigate this point is beyond the remit of the current paper we want to thank gianluca calcagni sean carroll gia dvali renata kallosh kei ichi maeda shinji mukhoyama burt ovrut paul saffin karel van acoleyen and richard woodard for helpful comments and discussions adf is supported by pparc mt is supported in part by the nsf under grant phy0354990 and by research corporation 99 a g riess et al supernova search team collaboration astron j 116 1009 1998 arxiv astro ph9805201 s perlmutter et al supernova cosmology project collaboration astrophys j 517 565 1999 arxiv astro ph9812133 a g riess et al 
supernova search team collaboration astrophys j 607 665 2004 arxiv astro ph0402512 j l tonry et al supernova search team collaboration astrophys j 594 1 2003 arxiv astro ph0305008 d n spergel et al arxiv astro ph0603449 l page et al arxiv astro ph0603450 g hinshaw et al arxiv astro ph0603451 n jarosik et al arxiv astro ph0603452 c b netterfield et al boomerang collaboration astrophys j 571 604 2002 arxiv astro ph0104460 n w halverson et al astrophys j 568 38 2002 arxiv astro ph0104489 n weiss phys lett b 197 42 1987 c wetterich nucl b 302 668 1988 b ratra and p j e peebles phys d 37 3406 1988 p j e peebles and b ratra astrophys j 325 l17 1988 j a frieman c t hill a stebbins and i waga phys lett 75 2077 1995 arxiv astro ph9505060 k coble s dodelson and j a frieman phys rev d 55 1851 1997 arxiv astro ph9608122 p j e peebles and a vilenkin phys d 59 063505 1999 arxiv astro ph9810509 p j steinhardt l m wang and i zlatev phys d 59 123504 1999 arxiv astro ph9812313 i zlatev l m wang and p j steinhardt phys lett 82 896 1999 arxiv astro ph9807002 p g ferreira and m joyce phys lett 79 4740 1997 arxiv astro ph9707286 a r liddle and r j scherrer phys d 59 023509 1999 arxiv astro ph9809272 e j copeland a r liddle and d wands phys d 57 4686 1998 arxiv gr qc9711068 s c c ng n j nunes and f rosati phys d 64 083510 2001 arxiv astro ph0107321 s bludman phys rev d 69 122002 2004 arxiv astro ph0403526 s m carroll v duvvuri m trodden and m s turner phys d 70 043528 2004 arxiv astro ph0306438 s m carroll a de felice v duvvuri d a easson m trodden and m s turner phys d 71 063513 2005 arxiv astro ph0410031 s capozziello s carloni and a troisi arxiv astro ph0303041 c deffayet g r dvali and g gabadadze phys d 65 044023 2002 arxiv astro ph0105068 k freese and m lewis cardassian expansion a model in which the universe is flat matter phys b 540 1 2002 arxiv astro ph0201229 n arkani hamed s dimopoulos g dvali and g gabadadze arxiv hep th0209227 g dvali and m s turner arxiv astro ph0301510 s nojiri and s d odintsov mod a 19 627 2004 arxiv hep th0310045 n arkani hamed h c cheng m a luty and s mukohyama jhep 0405 074 2004 arxiv hep th0312099 m c b abdalla s nojiri and s d odintsov class 22 l35 2005 arxiv hep th0409177 d n vollick phys d 68 063510 2003 arxiv astro ph0306630 g r dvali g gabadadze and m porrati phys b 485 208 2000 arxiv hep th0005016 c deffayet g r dvali g gabadadze and a i vainshtein phys d 65 044026 2002 arxiv hep th0106001 a nunez and s solganik phys b 608 189 2005 arxiv hep th0411102 t chiba jcap 0503 008 2005 arxiv gr qc0502070 a nicolis and r rattazzi jhep 0406 059 2004 arxiv hep th0404159 m a luty m porrati and r rattazzi jhep 0309 029 2003 arxiv hep th0303116 k koyama phys rev d 72 123511 2005 arxiv hep th0503191 d gorbunov k koyama and s sibiryakov arxiv hep th0512097 i navarro and k van acoleyen phys lett b 622 1 2005 arxiv gr qc0506096 o mena j santiago and j weller arxiv astro ph0510453 d j gross and j h sloan nucl b 291 41 1987 b zwiebach phys b 156 315 1985 n h barth and s m christensen phys d 28 1876 1983 k s stelle gen 9 353 1978 a hindawi b a ovrut and d waldram phys d 53 5583 1996 arxiv hep th9509142 n boulanger t damour l gualtieri and m henneaux nucl b 597 127 2001 arxiv hep th0007220 r p woodard arxiv astro ph0601672 i navarro and k van acoleyen arxiv gr qc0511045 s m carroll m hoffman and m trodden phys rev d 68 023509 2003 arxiv astro ph0301273 j m cline s jeon and g d moore phys d 70 043543 2004 arxiv hep ph0311312 a adams n arkani hamed s dubovsky a nicolis and r rattazzi arxiv hep 
th0602178 a hashimoto and n itzhaki phys d 63 126004 2001 arxiv hep th0012093 e babichev v f mukhanov and a vikman arxiv hep th0604075 a d rendall class 23 1557 2006 arxiv gr qc0511158 a vikman arxiv astro ph0606033 c bonvin c caprini and r durrer arxiv astro ph0606584 s l dubovsky and s m sibiryakov arxiv hep th0603158 c armendariz picon and e a lim jcap 0508 007 2005 arxiv astro ph0505207 j c hwang and h noh phys d 61 043511 2000 arxiv astro ph9909480 d a easson f p schuller m trodden and m n r wohlfarth phys d 72 043504 2005 arxiv astro ph0506392 j c hwang and h noh phys d 71 063536 2005 arxiv gr qc0412126 in this appendix we demonstrate how actions for non standard models of gravity can be rewritten in the form of einstein gravity with a non minimal coupling to one or more scalar fields this is not a new result see most recently eg xcite but the modified gravity action barba is a special case which needs separate consideration consider the action xmath106 where xmath107 are monomials in the curvature invariants with xmath108 xmath109 and xmath110 defined earlier but where we allow the possibility of higher order terms if the function xmath111 is at least twice differentiable except possibly at isolated points then this is easily seen to be equivalent to the action xmath112 labelscalargenact where xmath113 are a set of auxiliary scalar fields one for each of the terms xmath107 and xmath114 the first variation is xmath115 labelfirst var where xmath116 we immediately see that provided the matrix of second derivatives xmath117 is non singular xmath118 and we return to the original action genact this was one of the results in xcite however the possibility of a singular matrix was not considered in models of the form barba there exist degeneracies in the parameters of the form xmath119 where xmath120 are arbitrary constants and xmath121 are orthonormal vectors in the space of curvature invariants in this case xmath122 xmath123 and higher derivatives also vanish as our modified gravity example suggests we can reduce the number of scalar fields by taking linear combinations of the xmath107 normal to the subspace spanned by the xmath121 if this subspace is spanned by xmath127 then we can define a new set of variables xmath128 xmath129 the function xmath130 is now independent of xmath131 so we can write xmath132int d4xsqrtgfphia zprimea phiafaphi in the cddett model the lagrangian density depends on xmath108 xmath109 and xmath110 only through the combinations xmath108 xmath133 there is one degeneracy and so there are only two scalar fields required to put the action into the linearised form of eq scalargenact when xmath134 in hn1 the field associated with xmath135 may be trivially solved for to give the einstein hilbert term
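The stability discussion above repeatedly reduces to the three checks quoted for the quadratic action of a single mode: a wrong-sign time kinetic term signals a ghost, a negative ratio of spatial to temporal coefficients signals a gradient instability, and a ratio larger than one signals superluminal propagation. The helper below simply encodes those three checks for user-supplied coefficients; the function name and the sample numbers are illustrative and not taken from the paper.

```python
def classify_mode(c_time, c_space):
    """Classify a mode with quadratic action
       S2 = (1/2) * integral d^4x [ c_time * (d phi / dt)^2 - c_space * (grad phi)^2 ]
    according to the no-ghost, real-speed and subluminal conditions discussed in the text."""
    problems = []
    if c_time <= 0:
        problems.append("ghost: wrong-sign time kinetic term")
    if c_time > 0:
        c2 = c_space / c_time                  # propagation speed squared
        if c2 < 0:
            problems.append("gradient instability: c^2 < 0")
        elif c2 > 1:
            problems.append("superluminal: c^2 > 1")
    return problems or ["healthy: no ghost and 0 <= c^2 <= 1"]

# illustrative coefficients only; in the model above they would be built from f, its derivatives and H
for ct, cs in [(1.0, 0.5), (1.0, 1.3), (-1.0, 0.5), (1.0, -0.2)]:
    print((ct, cs), "->", classify_mode(ct, cs))
```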
we consider modified gravity models involving inverse powers of fourth order curvature invariants using the equivalence of these models to the theory of a scalar field coupled to a linear combination of the invariants we investigate the properties of the propagating modes even in the case for which the fourth derivative terms in the field equations vanish we find that the second derivative terms can give rise to ghosts instabilities and superluminal propagation speeds we establish the conditions which the theories must satisfy in order to avoid these problems in friedmann backgrounds and show that the late time attractor solutions generically exhibit superluminally propagating tensor or scalar modes
introduction the physical degrees of freedom propagation in frw spacetimes conclusions scalar fields and modified gravity
recent developments in photoelectron spectroscopy have challenged the apparent simple truth that the fermi surface of cuprate superconductors is simply the one corresponding to lda band structures with the only effect of the closeness to the mott hubbard insulator being a moderate correlation narrowing of the band width the discovery of the shadow bandsxcite the temperature dependent pseudogap in the underdoped statexcite and the substantial doping dependence of the quasiparticle band structurexcite leave little doubt that a simple single particle description is quite fundamentally inadequate for these materials moreover photoemission experiments on one dimensional 1d copper oxidesxcite have shown very clear signatures of spin charge separation the equally clear nonobservation of these signatures in the cuprate superconductors at any doping level advises against another apparent simple truth namely that the fermi surface seen in the cuprates is simply that of the spinons in a 2d version of the tomonaga luttinger liquid tll realized in 1d motivated by these developments we have performed a detailed exact diagonalization study of the electron removal spectrum in the 1d and 2d xmath0xmath1xmath2 model this model reads xmath7 there by the constrained fermion operators are written as xmath8 and xmath9 denotes the spin operator on site xmath10 the summation xmath11 extends over all pairs of nearest neighbors in a 1d or 2d square lattice the electron removal spectrum is defined as xmath12 denote the ground state energy and wave function for small finite clusters this function can be evaluated numerically by means of the lanczos algoritmxcite in 1d the xmath0xmath1xmath2 model is solvable by bethe ansatz in the case xmath2xmath13xmath14xcite but even for this limit the complexity of the bethe ansatz equations precludes an evaluation of dynamical correlation functions forthe closely related hubbard model in the limit xmath15xmath16xmath17 the bethe ansatz equations simplifyxcite and an actual calculation of the spectral function becomes possiblexcite in all other cases lanczosdiagonalization is the only way to obtain accurate results for xmath18xcite in order to analyze our numerical results we first want to develop an intuitive picture of the scaling properties of the elementary excitations in 1d which will turn out to be useful also in 2d it has been shown by ogata and shibaxcite that for xmath15xmath16xmath17 the wave functions can be constructed as products of a spinless fermion wave function which depends only on the positions of the holes and a spin wave function which depends only on the sequence of spins a naive explanation for this remarkable property is the decay of a hole created in a nel ordered spin background into an uncharged spin like domain wall and a charged spinless domain wall then since it is the kinetic energy xmath19xmath0 which propagates the charge like domain walls whereas the exchange energy xmath19xmath2 moves the spin like domain walls one may expect that the two types of domain walls have different energy scales namely the excitations of the charge part of the wave function ie the holons have xmath0 as their energy scale whereas those of the spin part ie the spinons have xmath2 as their energy scale scanning the low energy excitation spectrum of 1d xmath0xmath1xmath2 rings then shows that indeed most of the excited states have excitation energies of the form xmath20xcite which indicates the presence of two different elementary excitations with different energy scales 
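Since the removal spectrum above is evaluated with the Lanczos algorithm, a generic sketch of the standard continued-fraction evaluation may be useful. The routine below computes A(w) = -(1/pi) Im <phi| (w + i*eta - Heff)^(-1) |phi> for an arbitrary Hermitian matrix Heff and start vector phi; for electron removal one would take phi = c_k |psi0> and Heff built from E0 and H, with sign and offset conventions adapted as needed. This is not the authors' cluster code, and the demo matrix at the end is a random Hermitian placeholder used only to exercise the routine.

```python
import numpy as np

def lanczos_spectrum(Heff, phi, omegas, eta=0.05, n_iter=200):
    """Continued-fraction (Lanczos) estimate of
       A(w) = -(1/pi) * Im <phi| (w + i*eta - Heff)^(-1) |phi>.
    Plain three-term recurrence without reorthogonalization, adequate for a broadened spectrum."""
    norm2 = float(np.vdot(phi, phi).real)
    v = phi.astype(complex) / np.sqrt(norm2)
    v_prev = np.zeros_like(v)
    a, b = [], [0.0]                               # tridiagonal coefficients; b[0] is a dummy
    for _ in range(n_iter):
        w = Heff @ v
        a.append(float(np.vdot(v, w).real))
        w = w - a[-1] * v - b[-1] * v_prev
        b_next = float(np.linalg.norm(w))
        if b_next < 1e-12:                         # Krylov space exhausted
            break
        b.append(b_next)
        v_prev, v = v, w / b_next
    spec = np.empty(len(omegas))
    for i, om in enumerate(omegas):
        z, g = om + 1j * eta, 0.0 + 0.0j
        for n in range(len(a) - 1, -1, -1):        # evaluate the continued fraction bottom-up
            b2 = b[n + 1] ** 2 if n + 1 < len(a) else 0.0
            g = 1.0 / (z - a[n] - b2 * g)
        spec[i] = -norm2 * g.imag / np.pi
    return spec

# demo with a random Hermitian placeholder standing in for the cluster Hamiltonian and c_k|psi0>
rng = np.random.default_rng(0)
M = rng.normal(size=(300, 300))
M = (M + M.T) / 2.0
phi = rng.normal(size=300)
omegas = np.linspace(-30.0, 30.0, 400)
A_w = lanczos_spectrum(M, phi, omegas)
print("integrated spectral weight (should be close to <phi|phi>):", np.trapz(A_w, omegas))
```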
surprisingly enough the low energy spectrum of the 2d model shows the same scaling behavior of the excitation energies as in 1dxcite which seems to indicate the existence of two types of spin and charge excitations if very different nature also in this case other cluster results indicate however that these two types of excitations do not exist as free particles the dynamical density correlation function which corresponds to the particle hole excitations of holons and shows sharp low energy peaks in 1dxcite is essentially incoherent in 2d and has practically no sharp low energy excitationsxcite the optical conductivity in 2d shows an incoherent high energy part with energy scale xmath2xcite which is completely unexpected for the correlation function of the current operator which acts only on the charge degrees of freedom there is moreover rather clear numerical evidencexcite that the hole like low energy excitations can be described to very good approximation as spin xmath21 spin bagsxcite ie holes dressed heavily by a local cloud of spin excitations to obtain further information about similarities and differences between 1d and 2d also in comparison to the spectroscopic results we have performed a systematic comparison of the electron removal spectra in both cases as will become apparent there are some similarities but also clear differences we suggest that the main difference between 1d and 2d is a strong attractive interaction between spinon and holon in 2d which leads to a band of bound states being pulled out of the continuum of free spinon and holon states this band of bound states which arenothing but simple spin xmath21 fermions corresponding to the doped holes then sets the stage for the low energy physics of the system ie true spin charge separation as in 1d never occurs we begin with a discussion of the 1d model at half filling figure fig1 shows the electron removal spectra for the xmath22site ring let us first consider the left panel where energies are measured in units of xmath2 then one can distinguish different types of states according to their scaling behavior with xmath0 there is one band of peaks connected by the thin full line whose energies relative to the single hole ground state at xmath23xmath13xmath24 remains practically unchanged under a variation of xmath0 ie these states have xmath2 as their energy scale as a remarkable fact this band abruptly disappears half way in the brillouin zone ie there are no peaks whose energy scales with xmath2 beyond xmath23xmath13xmath24 this looks like a half filled free electron band with a fermi level crossing at xmath24 which however is quite remarkable because inverse photoemission is not possible at half filling next in addition to this xmath2band there are several groups of peaks whose excitation energy shows a very systematic progression with xmath0 indeed when plotting the same spectra but measuring energies in units of xmath0 right panel of fig fig1 these peaks coalesce ie to excellent approximation the energy scale of these states is xmath0 this coexistence of states with different energy scales can be nicely seen in the double peak for xmath25xmath13xmath26 and momentum xmath27 the peak with lower binding energy falls into the xmath2band the one with the higher binding energy belongs to the xmath0band the dispersion of the xmath0band resembles a slightly asymmetric parabola with minimum near xmath24 for the low excitation energies that we are considering the states that fall onto this parabola correspond to the creation of 
a spinon with momentum xmath4xmath13xmath28 and a holon of momentum xmath29 since the spinon momentum is fixed this group of states then simply traces out the holon dispersion on the other hand the xmath2branch corresponds to the holon momentum being fixed at the minimum of the holon dispersion and thus traces out the spinon dispersion this building principle for the spectra can be pushed further namely one might expect that not only xmath4 but any spinon momentum may serve as the starting point for a complete branch of peaks which trace out the full holon dispersion that this is indeed the caseis shown in fig fig2 there the entire width of the spectra is shown and we have chosen the zero of energy at the excitation energy of either the topmost xmath2peak at xmath30 left panel or the topmost xmath2peak at xmath31 right panel due to this choice of the zero of energy the energy xmath6 of the spinon with the respective momentum drops out then when measuring energies in units of xmath0 different holon bands become sharp ie their energy relative to the respective spinon energy scales accurately with xmath0 moreover these different groups of peaks to good approximation all trace out the same simple backfolded nearest neighbor hopping dispersion ie the dispersion of the holon is simply xmath32 as discussed above the first holon band is shifted by the spinons fermi momentum xmath30 so that its dispersion near the band minimum at xmath33 could be seen in fig we have also verified that by alligning the spinon peaks at xmath34 yet another complete holon band can be identified we can thus infer the following building principle for the spectral function the basis for the whole construction is the half filled spinon band with dispersion xmath35 this is indicated by the thick dashed line in fig then each xmath36point of this band provides the basis for a complete holon band xmath37 which is hooked on to the spinon band at its band maximum these holon bands are indicated by the thin full lines in fig fig3a comparison with the numerical results in this case for the xmath38site ring in fig fig3b shows that indeed to excellent approximation the poles of the single particle spectral function fall onto these bands there are some deviations at high binding energies which however are most probably a deficiency of the lanczos spectra which are highly accurate only at low excitation energy moreover the holon bands in fig fig3b have been slightly shifted ie they are hooked on to the spinon band not precisely at their maximum we have verified that this shift has oscillating sign for different chain lengths so that it probably is a finite size effect as an interesting feature the pole strength seems to be constant along each of these holon bands ie the weight is a function only of the spinon momentum this seems not to be correct for xmath23xmath13xmath17 and xmath23xmath13xmath39 here it should be noted that for these momenta the holon band intersects itself which leads to a doubling of the peak weight in the thermodynamic limit the density of bands increases while simultaneously their spectral weight decreases resulting in incoherent continua comparing with the exact results of sorella and parolaxcite for the case xmath15xmath16xmath17 it is obvious that the outermost holon band in our calculation originating from the spinon fermi momentum develops into a cusp like singularity of the spectral weight the spinon band itself whose energy scale is xmath2 turns into a second dispersionless cusp in this limit which skims 
at zero excitation energy between xmath23xmath13xmath17 and xmath23xmath13xmath24 sorella and parola found the excitation energy of the dispersive cusp to be xmath40 which corresponds to a the backfolded and shifted nearest neighbor hopping band xmath41 summarizing the data for 1d we see that the entire electron removal spectrum obeys a very simple building principle which moreover holds for all momenta and frequencies analyzing the scaling of the different features with xmath2 andxmath0 one can identify branches of states which trace out the dispersion of the true elementary excitations of the tll namely the collective spin and charge excitations the dispersions of the spinons and holons are both consistent with simple nearest neighbor hopping bands the spinons moreover have a half filled fermi surface while these results may not be really new or surprising we note that they demonstrate that exploiting the scaling properties of excitation energies provides a very useful method to identify the different subbands in the following we will make extensive use of this principle to address the far less understood problems of 2d and finite doping we proceed to the 2d model and also consider first the case of half filling the spectra shown below refer to the standard xmath38site cluster which is the largest cluster for which the calculation of the electron removal spectrum is feasible also in the doped case the xmath42net for this cluster which is shown in fig fig3a consists of the group of momenta which roughly follows the xmath43 direction and a second group along xmath44xmath16xmath45 we would like to stress that results for other clusters are completely consistent with those for the xmath38site cluster then the left panel of fig fig4 shows the photoemission spectrum for this cluster at half filling thereby we again focus on energies within a few xmath2 from the top of the band and measure energies in units of xmath2 when the spectra are aligned at the top of the band the positions of the other dominant low energy peaks do not show a strong variation with xmath0 some peaks do show a slight but systematic drift with xmath0 which however is much weaker than in 1d a peculiar feature is the peak at xmath46 whose relative excitation energy decreases rather than increases with xmath0 inspection shows however that the very weak dispersion along the line xmath44xmath16xmath45 ie the lowest three momenta in fig fig4 scales with xmath0 to good approximation a possible explanation is the fact that a hole in a 2d system has two distinct mechanisms for propagation firstly by string truncation which gives effective hopping integrals xmath19xmath2 and secondly by hopping along spiral pathsxcite which gives smaller effective hopping integrals xmath19xmath0xcite it can be shownxcite that the dispersion relation for a single hole to good approximation can be written as xmath47 where xmath48 are numerical constants the first term which originates from the string truncation mechanism gives a dispersion which is degenerate along xmath44xmath16xmath45 and this degeneracy is lifted by the second term which is the contribution from the spiral paths this naturally explains the scaling of the dispersion along this line with xmath0 comparing with 1d we note that with the exception of xmath49 the xmath2band is present in the entire brillouin zone ie the spinon fermi surface seen at half filling in 1d does not exist we turn to the right panel of fig fig4b which shows the entire width of the spectra with energies measured 
in units of xmath0 it is first of all quite obvious that the spectra generally are more diffuse than in 1d with sharp features existing only in the immediate neighborhood of the top of the band except for one relatively sharp high energy peak at xmath49 next among the diffuse features at high energy there are some whose energy accurately scales with xmath0 although these peaks are rather broad so that the assignment of a dispersion is not really meaningful their centers of gravity can be roughly fitted by the expression xmath50 which is reminiscent of the dispersion of the holon cusp found by sorella and parolaxcite in 1d an important difference as compared to 1d is the fact that this xmath0band does not seem to reach the top of the photoemission spectrum rather it stays an energy of xmath19xmath0 below the xmath2band which forms the first ionization states we believe that in 1d language the most plausible interpretation of the data is the formation of bound states of spinon and holon assuming a strong attraction between these two excitations which may originate eg from the well known string mechanism for hole motion in an antiferromagnetxcite one may expect that a band of bound states is pulled out of the continuum of free spinons and holons this band of bound states corresponds to the xmath2band which however has a small contribution xmath51 in its dispersion due to the spiral path mechanism such a bound state of spinon and holon should be a spin bag like spin xmath21 fermion ie a hole heavily dressed by spin excitations there is strong numerical evidencexcite that this is indeed the character of the low energy states in 2d at low doping one may expect however that such a bound state may not be stable for all momenta and we believe that this is the reason for the absence of a xmath2peak at xmath49 in this picture the 2d analogue of the holon is not a coherently propagating excitation because it is bound to the much slower spinon by the linearly ascending string potential this picture fits nicely with the diffuse character of the dynamical density correlation function in 2dxcite this function which in a tll should measure basically the response of the free holons in 2d has almost exclusively diffuse high energy peaks with virtually no sharp low energy peaks moreover the unexpected in the framework of spin charge separation appearance of xmath2 as energy scale in the optical conductivity is also readily understood in terms of the dipole excitations of a bound spinon holon pairxcite summarizing the data for 2d we see a band of quasiparticle peaks which predominantly has xmath2 as its energy scale and some diffuse high energy band with energy scale xmath0 both the absence of the spinon fermi surface as well as the lack of sharp holon bands are in clear contrast to the situation in 1d the formation of bound states of spinon and holon resulting in a split off band of spin bag like spin xmath21 fermions explains this in a natural way we return to 1d and consider the doped case figure fig5 shows the spectral function for the xmath22site ring with xmath26 holes measuring excitation energies in units of xmath2 left panel we can again identify the spinon band for xmath26 holes in xmath22 sitesthe nominal fermi momentum is xmath4xmath13xmath52 ie half way between xmath31 and xmath24 and the spinon band extends up to this momentum as was the case at half filling some other peaks show a systematic progression of their excitation energy and switching the unit of energy to xmath0 right panel again makes 
a nearly complete holon band visible to which these peaks belong the holon band again takes the form of a backfolded tight binding band but this time the top of the parabola around xmath23xmath13xmath24 is missing the holon band now seems to touch the fermi energy at xmath4 and at xmath5xmath13xmath53 the latter momentum is half way between xmath54 and xmath55 this picture of the spectral function nicely fits with the recent exact calculation in the limit xmath15xmath16xmath17 by penc et alxcite on the photoemission side this calculation showed a high intensity band which is very similar to the backfolded tight binding dispersion of the holon band in addition there was a dispersionless low intensity band at zero excitation energy which corresponds to the spinon band in the limit xmath15xmath16xmath17 for both the exact result in the limit xmath15xmath16xmath56 and our numerical data for finite xmath2 there are thus two branches of states which reach excitation energy zero the main band which touches xmath3 at xmath4 and the shadow bandxcite which reaches xmath3 at xmath5 the fermi level crossings of these two bands may be thought of producing the well known marginal singularities in the electron momentum distribution xmath57 at xmath4 and xmath5 found by ogata and shibaxcite the numerical spectra demonstrate a peculiar feature of the tll namely a kind of pauli exclusion principle which holds for both holons and spinons the dispersions of both types of excitations become incomplete upon doping ie the spinon fermi surface shrinks as if the spinons were spin xmath21 particles while simultaneously the top of the holon band is sawed off as if the holons were spinless fermions it should be noted that this is quite naturally to be expected in that the rapidities for the different particles in the bethe ansatz solution both obey a pauli like exclusion principlexcite this has negative implications for eg slave boson mean field calculations which necessarily have to treat one type of excitation as a boson while spin charge separation is often quoted as justification for the mean field decoupling it is obvious that this approximation must fail to reproduce the excitation spectrum even qualitatively in 1d the only situation where spin charge separation is really established for a more quantitative discussion of the fermi points we note that the fermi momentum for hole concentration xmath58 is xmath59 for this momentumthe first branch of low energy excitations reaches xmath3 for small xmath58the second branch of low energy excitations comes up to xmath3 at xmath60 the two marginal singularities thus enclose a hole pocket of length xmath61 as one would expect for holes corresponding to spinless fermions it is easy to see that this hole pocket is nothing but the manifestation of the holon fermi surface around xmath23xmath13xmath39 the lowest charge excitations which may be thought of as corresponding to a particle hole excitation between the two edges of the holon pocket have wave vector xmath62 ie the holon pocket has a diameter of xmath63 precisely the distance between the two marginal singularities the spectral function for the doped case thus follows the same building principle as for the case of half filling with the sole difference being that occupied spinon or holon momenta are no longer available for the construction of final states the singularities in xmath57 may be thought of as enclosing a hole pocket corresponding to spinless fermions and thus reflect the fermi surface of the holons the two 
holon pockets are placed such that their inner edges at xmath64 enclose the volume corresponding to the fermi sea of spinons of density xmath65 we proceed to the doped case in 2d let us note from the very beginning that for very simple technical reasons the situation is much more unfavorable in this case to begin with due to the higher symmetry of 2d clusters the available xmath42 meshes are much coarser for example amongst the xmath66 allowed momenta in the xmath66 site cluster only xmath67 xmath42points are actually non symmetry equivalent so that the amount of nonredundant information is much smaller than in 1d next unlike 1d where a unique relationship exists between hole density and fermi momentum most electron numbers in small 2d clusters correspond to open shell configurations with highly degenerate ground states for noninteracting particles in an open shell situation multiplet effectsare guaranteed to occur so that it is in general unpredictable which momenta are occupied and which ones are not this holds for a fermi liquid but is most probably true also for other effective particles unexpected problems may arise from this bearing this in mind one therefore may not expect to see a similarly detailed and clear picture as in 1d then figure fig6 shows the photoemission spectra for the xmath66site cluster with two holes we first consider the left hand panel where energies are measured in units of xmath2 comparing with fig fig5 some similarities are quite obvious the excitation energies of the topmost peaks at xmath46 xmath68 are independent of xmath0 although the spectra for xmath25xmath13xmath26 show a slight deviation so that we can identify a band of states with energy scale xmath2 the situation actually is not entirely clear in that the peak at xmath69 is so close in energy to the one at xmath68 that it is not possible to decide if their energy difference scales with xmath2 or xmath0 next the topmost peaks at xmath70 and xmath71 show a systematic progression with xmath0 which is very reminiscent of eg fig fig1 plotting the same spectra with energy scale xmath0 indeed to good approximationaligns these peaks although the peak at xmath70 still has a slight drift ie their excitation energy relative to the topmost peak at xmath68 scales with xmath0 moreover one can identify a number of diffuse features at energies between xmath72 and xmath73 which also are roughly aligned these are indicated by the dashed line in analogy with 1d we can thus distinguish different branches of states with different energy scales in their excitation energies while the coarseness of the xmath42meshes introduces some uncertainty the data are consistent with a xmath2band dispersing upwards in the interior of the antiferromagnetic brillouin zone and a xmath0band dispersing downwards in the outer part ie the same situation as seen in 1d a major difference is the fact that the features at higher binding energies are all very diffuse at least for xmath15xmath74xmath75 more significantly despite the fact that its energy scale seems to be xmath0 the dispersion of the shadow band is much weaker than in 1d in other words the effective mass of that band is xmath19xmath76 but with a very large prefactor we proceed to the xmath38site cluster also doped with two holes see fig fig7 choosing xmath2 as the unit of energy we see the already familiar situation the topmost peaks for the states at xmath77 and xmath78 are aligned although xmath25xmath13xmath26 again deviates slightly and several other peaks show a systematic 
progression with xmath0 an unexpected exception is xmath46 where a well defined peak actually is not observed changing to energy scale xmath0aligns a number of these peaks which suggests that these peaks form a xmath0band which originates from the topmost peak at xmath77 this is a second unexpected feature of the xmath38site cluster in that for the spectra in 1d and for those of the xmath66site cluster in 2d the most intense xmath0band always seemed to originate from the topmost peak of the photoemission spectrum we can only speculate that these unusual features are the consequence of eg the multiplet effects mentioned above we also note in this context that the spectra at xmath46 look actually quite different for xmath66 and xmath38 site cluster which shows the impact of finite size effects ascribing the special behavior at xmath46 to finite size effects we have a quite similar picture as in the xmath66site cluster ie the topmost peaks for spectra inside the antiferromagnetic zone have xmath2 as their energy scale whereas the topmost peaks in the outer part of the zone have energy scale xmath0 this also holds for xmath44 which is on the boundary of the antiferromagnetic zone as was the case in the xmath66 site cluster the shadow band while having xmath0 as its energy scale has a much weaker dispersion than in 1d indeed fitting the xmath0bands in both xmath66 and xmath38 site cluster by an expression of the form xmath79 requires to choose xmath80 it is tempting to speculate that this may actually be xmath81 as one would expect eg in the gutzwiller picture another notable feature is that the xmath0band is restricted to the outer part of the brillouin zone only the diffuse high energy band indicated by the dashed line in figure fig7 seems to scale with xmath0 for completeness we would like to mention that a similar analysis was not possible for the xmath82site cluster with xmath26 holes the reason is essentially that for some momenta there are no more sharp peaks but rather a multitude of densely spaced small peaks due to this we were not able to assign any defined bands or groups of peaks which showed a systematic scaling of their excitation energy we have also performed this kind of analysis for the xmath82 site cluster with xmath83 holes and found no more indication of the energy scale xmath2 at this somewhat higher concentration the entire spectra scale with xmath0 summarizing the data for 2d hole doping seems to lead to behavior which is more reminiscent of 1d than for half filling in that the xmath2band dispersing upwards in the inner part of the brillouin zone and the xmath0band dispersing downwards in the outer part seem to exist also in this case much unlike 1d however the shadow band while in principle having xmath0 as its energy scale still has a very weak dispersion so that the band structure in the doped case is practically identical to that in the undoped systemxcite we note however that the fact that the shadow band has xmath0 as energy scale has profound implications for its explanation there have been attempts to interpret the shadow band in bi2212 as a dynamical replica of the main band created by scattering of quasiparticles in the standard tight binding band from antiferromagnetic spin fluctuationsxcite experimentally however the fact that the shadow bands are observed also in the overdoped compoundsxcite where antiferromagnetism is very weak as well as the fact that they do not seem to become more pronounced in the underdoped compoundsxcite where antiferromagnetism is 
strong both suggest otherwise on the theoretical side we believe that our data very clearly rule out this interpretation both the main band and the spin correlation functionxcite have xmath2 as their relevant energy scale and it would be very hard to understand how the energy scale of xmath0 for the shadow band should emerge from a combination of these two types of excitations in fact the relatively accurate scaling with physically very different parameters suggests completely different propagation mechanisms for the two types of excitations we therefore believe that the shadow band is a separate branch of excitations probably best comparable to the states which produce the xmath5 singularity in the 1d systems in the previous sections we have investigated the photoemission spectrum for the one and two dimensional xmath0xmath1xmath2 model by studying the parameter dependence of the spectra we could in 1d identify branches of states which trace out the dispersions of the elementary excitations of the tll the spinons and holons both elementary excitations have a simple nearest neighbor hopping dispersion but with different band width that of the spinons is xmath19xmath2 that of the holons xmath19xmath0 in the doped case there are two groups of states which touch the fermi energy see figure figx inside the noninteracting fermi surface there is a whole continuum of bands dispersing upwards to xmath3 the uppermost of these bands traces out the spinon dispersion and has xmath2 as its energy scale the lowermost band traces out the holon dispersion and has xmath0 as its energy scale in the thermodynamic limitthese bands degenerate into cusps and merge at xmath3 in the outer half of the brillouin zone there are only states which have xmath0 as their energy scale these reach the fermi energy at xmath5 giving rise to a second fermi point while the resolution in xmath23 and xmath84 available in our finite clusters is not sufficient to make statements about extreme low energy excitations the positions of the singularities in the electron momentum distribution as determined from exact solutions clearly shows that both branches of states indeed do touch xmath3 the two singularities may be thought of enclosing a hole pocket of extent xmath85 which is essentially the image of the holon fermi surface in 2d at half filling the situation is quite different while it is still possible to distinguish bands with different scaling behavior with xmath2 and xmath0 the spinon fermi surface present in 1d does not exist and the holons seem to correspond to overdamped resonances rather than sharp excitations as in 1d we propose that strong attraction between spin and charge excitations most probably due to the well known string mechanism pulls a band of bound states out of the continuum of free holon and spinon states the relevant physics thus is that of spin bag like spin 12 quasiparticles as suggested by a considerable amount of numerical evidence for the doped case in 2d the situation is less clear and actually somewhat ambiguous the numerical photoemission spectra show some analogy with 1d in that there seems to be a high intensity main band with energy scale xmath2 dispersing upwards in the inner part of the brillouin zone and a low intensity shadow band with energy scale xmath0 dispersing downwards in the outer part of the brillouin zone see figure figx in contrast to 1d the dispersion of the shadow band is much weaker ie while the energy scale of the dispersion is xmath0 it has an additional very small prefactor 
of the order of the hole concentration moreover the xmath0band seems limited to the outer part of the brillouin zone ie there are no indications for a holon band with energy scale xmath0 dispersing upwards in the inner part of the brillouin zone only in the xmath66site cluster a diffuse band with energy scale xmath0can be roughly identified at higher binding energies the different energy scales of main band and shadow band suggest that these are excitations of quite different nature and in particular rule out the explanation that that the shadow band is created by scattering from antiferromagnetic spin fluctuations turning to experiment the results for 2d immediately suggests a comparison with the data of aebi et alxcite these authors found that in addition to the bright part of the band structure which seems to be consistent with the noninteracting one there is also a low intensity replica shifted approximately by xmath49 which had been consistently overlooked in all previous studies if one wants to make a correspondence to the situation for the xmath0xmath1xmath2 model one thus should identify this low intensity part with the xmath0band dispersing downwards in the xmath0xmath1xmath2 model our data imply that the shadow band should have a slightly different dispersion than the main band the limitations of the cluster method probably preclude any meaningful quantitative statements but it might be interesting to see if this difference in dispersion can be resolved experimentally we conclude by outlining a somewhat speculative scenario based on the assumption that the two bands represent indeed different excitations which persist at all temperatures and independent of antiferromagnetic correlations in this case the topology in 2d opens an interesting possibility whereas in 1d the two classes of low energy excitations forming the xmath4 and xmath5 singularities in xmath86 are well separated in xmath42 and xmath84 space for simple topological reasons the experimental data of aebi et al indicate that the main and shadow band intersect at certain points in the brillouin zone see fig fig9 left panel neglecting the small difference in dispersion between main and shadow band we might therefore model the low energy excitation spectrum by the effective hamiltonian xmath87 where xmath88 is the dispersion of the main band xmath89 and the xmath90 and xmath91 operators refer to the main and shadow band respectively choosing a dispersion of the form xmath92 this hamiltonian reproduces the fermi surface topology found by aebi quite well see fig fig9 however as mentioned above the two branches of excitations intersect at some points of the brillouin zone so that already a small mixing between the two bands which in turn may originate from the spinon holon interaction has a dramatic effect on the topology of the low energy excitation spectrum namely adding a term of the form xmath93 ie a hybridization between the two types of bands even relatively small values of xmath94 open up a gap around xmath44 and transform the fermi surface transformed into a hole pocket see left panel of fig fig9 thereby we fix the chemical potential by requiring that the number of xmath90 and xmath91 particles remains unchanged it is easy to see that the area covered by the pockets then equals the hole concentration xmath58 precisely as it was the case in 1d thereby the fermi surface has predominant main band character at its inner edge and shadow band character at the outer edge implying a very different visibility in photoemission 
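as a rough numerical illustration of the two band picture sketched above, the following python snippet diagonalizes a 2x2 bloch hamiltonian built from a main band, its copy shifted by (pi,pi) playing the role of the shadow band, and a small hybridization; the nearest neighbour form of the dispersion, the value of the hybridization and the hole concentration are illustrative assumptions and not parameters taken from the text

```python
import numpy as np

t, V, delta = 1.0, 0.2, 0.10     # hopping scale, hybridization, hole concentration (all assumed)
n = 120                          # linear size of the k mesh

ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
KX, KY = np.meshgrid(ks, ks, indexing="ij")
eps = -2.0 * t * (np.cos(KX) + np.cos(KY))                    # main band (assumed form)
eps_Q = -2.0 * t * (np.cos(KX + np.pi) + np.cos(KY + np.pi))  # shadow band, shifted by (pi, pi)

# hybridized bands of the 2x2 Bloch Hamiltonian [[eps, V], [V, eps_Q]]
avg, diff = 0.5 * (eps + eps_Q), 0.5 * (eps - eps_Q)
E_minus = avg - np.sqrt(diff**2 + V**2)
E_plus = avg + np.sqrt(diff**2 + V**2)

# the direct gap is smallest where the unhybridized bands cross and equals 2V there
print("minimal direct gap:", (E_plus - E_minus).min(), "(expect 2V =", 2 * V, ")")

# restrict to the magnetic Brillouin zone so folded states are not double counted,
# fix the chemical potential by keeping the total particle number unchanged,
# and measure the empty fraction of the lower band (the "hole pockets")
mbz = np.cos(KX) + np.cos(KY) > 0
levels = np.sort(np.concatenate([E_minus[mbz], E_plus[mbz]]))
n_occ = int(round((1.0 - delta) * mbz.sum()))
mu = levels[n_occ - 1]
pocket_fraction = np.mean(E_minus[mbz] > mu)
print("empty fraction of lower band:", pocket_fraction, "(compare hole concentration", delta, ")")
```

with these assumptions the direct gap is minimal, and equal to twice the hybridization, on the boundary of the magnetic zone, and the empty fraction of the lower band comes out close to the hole concentration, consistent with the pocket area argument above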
finally it is tempting to speculate that the pseudo gap order parameter xmath94 decreases with increasing temperature and hole concentration its vanishing at a certain temperature xmath95 then could produce a crossover from the hole pockets to the large fermi surface at xmath95 a picture which would nicely reproduce the pseudogap phenomenology observed xcite in cuprate superconductors financial support of r e by the european community and of y o by the saneyoshi foundation and a grant in aid for scientific research from the ministry of education science and culture of japan is most gratefully acknowledged et al 72 2757 1995; s la rosa et al preprint; a g loeser et al science 273 325 1996; d s marshall et al phys lett 76 4841 1996; c kim et al phys rev lett 77 4054 1996; e dagotto rev phys 66 763 1994; bares and m blatter phys lett 64 2567 1990; m ogata and h shiba phys b 41 2326 1990; s sorella and a parola j phys cond mat 4 3589 1992; k penc k hallberg and h shiba phys lett 77 1390 1996; j favand et al e print cond mat 9611223; r eder et al unpublished; r eder y ohta and s maekawa phys lett 74 5124 1995; r eder p wrobel and y ohta phys b 54 r11034 1996; e dagotto and j r schrieffer phys b 43 8705 1990; r eder and y ohta phys b 50 10043 1994; j riera and e dagotto unpublished; j r schrieffer x g wen and s c zhang phys b 39 11663 1989; s a trugman phys b 37 1597 1987; r eder and k w becker z phys b 78 219 1990 see also m vojta and k w becker phys b 54 15483 1996; l n bulaevskii e l nagaev and d i khomskii sov jetp 27 836 1968; r eder y ohta and t shimozato phys b 50 3350 1994; a chubukov phys b 52 r3840 1995
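as a compact recap of the building principle extracted from the 1d spectra, the following sketch enumerates the electron removal states as combinations of a spinon momentum from the half filled spinon sea and a holon momentum; the cosine band shapes, the band widths and the way each holon band is aligned with the spinon level are generic illustrative assumptions, not fitted to the numerical data discussed above

```python
import numpy as np

t, J, L = 1.0, 0.4, 16
k_all = 2 * np.pi * np.arange(L) / L

eps_spinon = -0.5 * J * np.cos(k_all)   # spinon band, width ~ J (assumed cosine form)
eps_holon = -2.0 * t * np.cos(k_all)    # holon band, width ~ t (assumed cosine form)

peaks = {i: [] for i in range(L)}       # peak energies for each total momentum index
occupied = [i for i in range(L) if np.cos(k_all[i]) > 0]   # half filled spinon sea
for i in occupied:                      # spinon momentum q = k_all[i]
    for j in range(L):                  # holon momentum
        k_index = (i + j) % L           # total momentum of the removal state
        # hook a complete holon band onto the spinon level at q; aligning the
        # holon band maximum with that level is an illustrative choice made here
        peaks[k_index].append(eps_spinon[i] + (eps_holon[j] - eps_holon.max()))

for i in range(L):
    print(f"k = {k_all[i]:5.3f}: {len(peaks[i]):2d} peaks, topmost at {max(peaks[i]):+6.3f}")
```

each occupied spinon momentum contributes one complete holon band, so every total momentum carries as many peaks as there are spinons in the half filled sea, mirroring the band counting described in the text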
we present an exact diagonalization study of the single particle spectral function in the 1d and 2d xmath0xmath1xmath2 model by studying the scaling properties with xmath2 and xmath0 we find a simple building pattern in 1d and show that every spectral feature can be uniquely assigned a spinon and a holon momentum we find two types of low energy excitations a band with energy scale xmath2 and high spectral weight disperses upwards in the interior part of the brillouin zone and reaches xmath3 at xmath4 and a band with energy scale xmath0 and low spectral weight disperses downwards in the outer part of the zone touching xmath3 at xmath5 an analogous analysis of the 2d case at half filling shows that the xmath0band exists also in this case but is diffuse and never reaches the fermi energy for the doped case in 2d the picture is more reminiscent of 1d in particular the main band with a dispersion xmath6 and the shadow band with energy scale xmath0 can be identified also in this case this leads us to propose that the shadow bands discovered by aebi et al in bi2212 are the 2d analogue of the xmath5 singularity in 1d systems and unrelated to antiferromagnetic spin fluctuations
introduction
one dimension, half filling
two dimensions, half filling
one dimension, doped case
two dimensions, doped case
discussion
in the physics literature stochastic particle systems in a limit of large system size are often described by a mean field master equation for the time evolution of a single lattice site xcite for conservative systems these equations are very similar to mean field rate equations in the study of cluster growth models see eg xcite and the references therein we focus on particle systems where only one particle jumps at a time which corresponds to monomer exchange in cluster growth models as studied in xcite and also in the well known becker dring model xcite while these mean field equations often provide the starting point for the analysis and have an intuitive form to our knowledge their connection to underlying particle systems has not been rigorously established so far in this paper we provide a rigorous derivation of this equation for misanthrope type processes xcite with bounded jump rates and homogeneous initial conditions on a complete graph the limit equation describes the dynamics of the fraction xmath0 of lattice sites with a given occupation number xmath1 and also provides the master equation of a birth death chain for the limiting single site dynamics of the process note that no time rescaling is required and the limiting dynamics are non linear ie the birth and death rates of the chain depend on the distribution xmath2 even though the limiting birth death dynamics is irreducible under non degenerate initial conditions the non linearity leads to conservation of the first moment of the initial distribution resulting in a continuous family of stationary distributions as has been observed before for other non linear birth death chains see eg xcite to establish the mean field property in the limit we show the asymptotic decay of correlations by bounding percolation clusters in the graphical construction of the process with branching processes up to finite times similar to xcite existence of limits follows from standard tightness arguments and our proof also includes a simple uniqueness argument for solutions of the limit equation while uniqueness has been establish for more complicated coagulation fragmentation models xcite we could not find a result covering our case in the literature under certain conditions on the jump rates stochastic particle systems can exhibit a condensation transition where a non zero fraction of all particles accumulates in a condensate provided the particle density exceeds a critical value xmath3 condensing models with homogeneous stationary product measures have attracted significant research interest see eg xcite for recent summaries including zero range processes of the type introduced in xcite inclusion processes with a rescaled system parameter xcite and explosive condensation models xcite while the stationary measures have been understood in great detail on a rigorous level xcite the dynamics of these processes continue to pose interesting mathematical questions first recent results for zero range and inclusion processes have been obtained on metastability in the stationary dynamics of the condensate location xcite approach to stationarity on fixed lattices under diverging particle density xcite and a hydrodynamic limit for density profiles below the critical value xcite our result provides a contribution towards a rigorous understanding of the approach to stationarity in the thermodynamic limit of diverging system size and particle number this exhibits an interesting coarsening regime characterized by a power law time evolution of typical observables 
which has been identified in previous heuristic results xcite also on finite dimensional regular lattices condensation implies that stationary measures for the limiting birth death dynamics only exist up to a first moment xmath3 above which xmath2 phase separates over time into two parts describing the mass distribution in the condensate and the background of the underlying particle system explicit travelling wave scaling solutions for the condensed part of the distribution have been found in xcite for zero range processes and in xcite for a specific inclusion process and will be discussed in detail the paper is organized as follows in section sec notation we introduce notation and state our main result with the proof given in section sec proof in section sec properties we discuss basic properties of the limit dynamics and its solutions as well as limitations and possible extensions of our result we present particular examples of condensing systems in section sec examples and provide a concluding discussion in section sec discussion we consider a stochastic particle system xmath4 of misanthrope type xcite on finite lattices xmath5 of size xmath6 configurations are denoted by xmath7 where xmath8 is the number of particles on site xmath9 and the state space is denoted by xmath10 the dynamics of the process is defined by the infinitesimal generator xmath11 here the usual notation xmath12 indicates a configuration where one particle has moved from site xmath9 to xmath13 ie xmath14 and xmath15 is the kronecker delta to ensure that the process is non degenerate the jump rates satisfy xmath16 since we focus on finite lattices only the generator is defined for all bounded continuous test functions xmath17 for a general discussion and the construction of the dynamics on infinite latticessee xcite we focus on complete graph dynamics ie xmath18 for all xmath19 and denote by xmath20 and xmath21 the law and expectation on the path space xmath22 of the process as usual we use the borel xmath23algebra for the discrete product topology on xmath24 and the smallest xmath23algebra on xmath22 such that xmath25 is measurable for all xmath26 we will study the processes xmath27 defined by the test functions xmath28 labelfk counting the fraction of lattice sites for each occupation number xmath29 expectations are denoted by xmath30frac1lsumx in lambdamathbbpletaxtkin 01 labeleq fl and we write xmath31 note that xmath32 are probability distributions on xmath33 for all xmath26 the time evolution is then given by xmath34 mathbbe bigmathcall fk boldsymboletat big labelevo and as usual this equation is not closed for finite system sizes xmath35 since the right handed side is not a function of xmath36 our aim is to derive a closed equation in the limit xmath37 in the following we consider a sequence in xmath35 of initial conditions xmath38 of the process such that xmath39 and such that there exists a probability distribution xmath40 with xmath41 the second condition excludes cases where the sequence xmath42 is not tight or does not have a unique limit the simplest choice with the required properties is of course a product measure with marginals xmath43 for all xmath35 by symmetry of the dynamics on the complete graph xmath44 is therefore permutation invariant for all xmath26 and we also have xmath45quadmboxfor allquad xin lambda we further assume that the jump rates are uniformly bounded with xmath46 our main theorem can be formulated as a convergence result for the single site dynamics with state space xmath33 xmath47 
thmfactorization consider a process with generator on the complete graph with uniformly bounded rates eq cbounded and initial conditions satisfying initialcon1 and initialcon2 then the single site process converges weakly on path space xmath48 to a birth death chain with distribution xmath49 characterized by the mean field master equation xmath50 with initial condition xmath51 given by initialcon2 here we use the convention xmath52 for all xmath26 and recall that xmath53 for all xmath54 has a unique solution xmath55 and in particular xmath56 weakly as xmath57 for all xmath26 we see that xmath58 and with initialcon2 the limit is indeed as the master equation of a birth death chain with state space xmath33 birth rate xmath59 and death rate xmath60 note that the chain and its master equation are non linear since the birth and death rates depend on the distribution xmath61 further details are provided in section sec discussion the proof follows a standard approach we first establish existence of limits via a tightness argument then characterize all limit points as solutions of using a coupling to a branching process based on the graphical construction and finally show that has a unique solution for a given initial condition propexistence consider the process with generator and conditions as in theorem thmfactorization then the law of the single site process xmath62 is tight as xmath57 this implies existence of weak limit points xmath63 of the sequence xmath64 as defined in for all fixed xmath65 for each xmath35 large enough consider the single site process xmath66 for a fixed xmath67 with law xmath68 on the path space xmath48 we will show tightness of the sequence xmath68 as xmath37 which implies existence of limit points xmath69 since xmath70 this also provides existence of limit points xmath71 interpreting xmath72 as a mapping xmath73 is given as the image measure of xmath20 under xmath74 by a version of aldous criterion to establish tightness for xmath68 cf theorem 1610 in xcite it suffices to show that for any xmath75 xmath76 here xmath77 denotes the initial condition of the original process and xmath78 the corresponding path measure for fixed xmath79 and xmath9 from above consider the test function xmath80 to get xmath81nonumbernonumber frac1l1bigsumyneq xcetayetaxsumyneq xcetaxetaybigmathbbietaxgeq zetax boldsymboletamathbbietax zetax boldsymboleta endaligned with standard notation for indicator functions xmath82 by it s formula and with xmath83 we have for any xmath84 xmath85 where xmath86 is a martingale it has quadratic variation xmath87sint0s bigmathcallf2 2fmathcallfbig boldsymboletaudu and the integrand is easily computed to be xmath88boldsymboleta frac1l1bigsumyneq xcetayetaxsumyneq xcetaxetaybig since the rates are bounded we have for the first term in eq itoeta xmath89 as xmath90 which holds xmath91as uniformly in xmath92 and in xmath35 the same argument applies to the quadratic variation part where for xmath90 we get xmath93s int0s frac1l1bigsumyneq xcetayuetaxusumyneq xcetaxuetayubig du leq 2barcsto 0 almost sure convergence in eq limint and eq limquadratic uniformly in xmath92 and in xmath35 implies eq tightness therefore limit points xmath94 exist for all compact time interval following from the usual topology of weak convergence on path space xmath48 propcharac consider the process with generator and conditions as in theorem thmfactorization every limit point xmath94 of proposition propexistence satisfies the mean field rate equation eq misdifffk illustration of the 
graphical construction of the process for one dimensional nearest neighbour dynamics it is based on independent poisson processes xmath95 with jump events shown as xmath96 and xmath97 the sets xmath98 and xmath99 as given in lemma lemmabp possibly influencing xmath66 and xmath100 respectively are shown in red and blue we first collect some auxiliary results before giving the proof recall the standard graphical construction of interacting particle systems xcite which consists of a family of independent poisson point processes xmath101 for each pair xmath102 for a given xmath103 at the jump time of the point process a particle jumps from xmath9 to xmath13 with probability xmath104 this is illustrated in figure fig particlebranch for one dimensional nearest neighbour dynamics we say xmath105 is connected to xmath106 writing xmath107 if there exists a forward in time directed path along jump events in xmath108 from xmath105 to xmath106 equivalently consider running a contact process without recovery backward in time using all jump events of xmath108 in the time interval xmath109 starting with a single infection at site xmath9 then xmath107 if xmath13 is infected at time xmath110 we write xmath111 for all sites whose configuration at time xmath110 possibly influences xmath66 see figure fig particlebranch for an illustration in one dimension using the graphical construction the backward in time contact process xmath112 can be coupled to a pure birth process xmath113 with state space xmath114 and generator xmath115 such that xmath116 note that inequality holds since already infected sites can not be infected again and the forward process xmath117 is increasing and xmath118 lemmabp consider xmath119 and xmath120 as defined above with xmath121 then for each fixed xmath26 xmath122to 1quadmboxas ltoinfty it is immediate from the graphical construction and symmetry of the dynamics that conditioned on their sizes xmath123 xmath124 and xmath125 are uniform subsets of xmath5 it is further immediate that both processes evolve independently until the first intersection time xmath126 and that xmath127 thus we have for all fixed xmath128 and xmath123 xmath129nonumber fracl nx1l1cdotfracl ny1l2cdotsfracl nx nyl ny to 1quadtextrmasltoinfty endaligned also for xmath130 we can use the coupling to compare two independent copies xmath131 and xmath132 of a pure birth process with law denoted by xmath133 to get xmath134mathbbpnxtnxmathbbpnytnyto 1quadtextrmasltoinfty endaligned since the probability of attempted double infection of a given site xmath9 in the contact process vanishes as xmath37 therefore since xmath131 is xmath135 finite has a geometric distribution where xmath136ebarct big1ebarctbign1 we get xmath137 sumnx ny0inftyhlnx nycdotmathbbpnxtnxmathbbpnytny to sumnx ny0inftymathbbpnxtnxmathbbpnytny1endaligned as xmath37 by dominated convergence here we used that the integrand xmath138mathbbinxleq lmathbbiny leq l quadcdotfracmathbbplaxtnx aytny t tmathbbpnxtnxmathbbpnytnyquadto 1endaligned as xmath37 for all fixed xmath139 following from eq probemptygivensize and eq probsize applying the generator eq genmis with xmath140 to the test function xmath141 we get xmath142 nonumber frac1l1sumx yinlambda cketayfracdeltaketaxlfrac1l1sumxinlambdacketaxfracdeltaketaxl nonumber quadfrac1l1sumx yinlambda cetaxkfracdeltaketaylfrac1l1sumxinlambdacetax kfracdeltaketaxl nonumber quadfrac1l1sumx yinlambda cetax k1fracdeltak1etaylfrac1l1sumxinlambdacetax k1fracdeltak1etaxl nonumber nonumber quadfrac1l1sumx yinlambda 
ck1etayfracdeltak1etaxlfrac1l1sumxinlambdack1etaxfracdeltak1etaxl nonumber frac1l1sumyinlambdacketayfkboldsymboletafrac1l1sumxinlambdacetax kfkboldsymboleta nonumber quadfrac1l1sumxinlambdacetax k1fk1boldsymboletafrac1l1sumyinlambdack1etayfk1boldsymboletanonumber quadfrac1l1big bl kbl kbl k1bl k1bigendaligned here xmath143 and xmath144 are corrections resulting from diagonal terms in the sum over xmath145 and are uniformly bounded in xmath1 and xmath35 in the following we will show that xmath146 fulfills eq misdifffk from and we have xmath147mathbbelbiggfrac1lsumxinlambdacetax kfkboldsymboletabigg nonumber quadmathbbelbiggfrac1lsumxinlambdacetax k1fk1boldsymboletabiggmathbbelbiggfrac1lsumyinlambdack1etayfk1boldsymboletabiggnonumber quad o1lendaligned where we replaced pre factors xmath148 by xmath149 at the expense of a further correction of order xmath150 to conclude we will establish that expectations of product terms in factorize for the second term we have xmath151 since the rates are bounded and xmath152 we have xmath153 sumlgeq 0 cl kfrac1l2sumxneq ymathbbpletaxtletaytko1lnonumber sumlgeq 0cl kmathbbpletaxtletaytko1l where we can fix particular sites xmath154 in the last line by symmetry of the process now in order to use lemma lemmabp we write xmath155 mathbbpl bigetaxtketaytl axtcap aytemptysetbig quadmathbbpl bigetaxtketaytl axtcap aytneqemptysetbig endaligned and as xmath57 we have for the second term xmath156 leq mathbbpl big axtcap aytneq emptysetbig to 0 for the first term we write xmath157nonumber mathbbpl bigetaxtketaytlmid axtcap aytemptysetbigmathbbpl big axtcap aytemptysetbigendaligned where xmath158to 1 as xmath57 by lemma lemmabp conditional on xmath159 the events xmath160 and xmath161 are independent by construction and independence of initial conditions and therefore xmath162nonumber mathbbpl bigetaxtkmid axtcap aytemptysetbigmathbbpl bigetaytlmid axtcap aytemptysetbignonumber to fktfltlabelindiendaligned as xmath37 convergence to the limit points xmath163 and xmath164 uses again that the conditional event has limiting probability xmath165 with lemma lemmabp with bounded convergence in this implies factorization of xmath166 to sumlgeq 0cl kfltfktmathrmasltoinfty which follows analogously for the other terms in this completes the proof we consider solutions of xmath167 which are limit points of the sequence xmath32 since xmath168 and xmath169 for all xmath26 and xmath35 xmath170 for all xmath26 and xmath29 furthermore fatou s lemma implies xmath171 this also implies xmath172 with xmath173 let xmath174 be non linear operator with xmath175 for all xmath176 since xmath177 we have xmath178 let xmath179 be a solution to with xmath170 and xmath180 for all xmath181 then xmath182 is unique suppose xmath183 and xmath184 are two solutions of with above properties and xmath185 with the convention xmath186 we have xmath187 omitting the time argument of xmath183 to simplify notation in the following we use xmath188 together with boundedness of shift operator xmath189 ie xmath190 and the cauchy schwarz inequality xmath191 we get xmath192 leq 2left langle sfhatf qfhatfrangle barclvert fhatfrvert2 2 right qquad leftlangle fhatf sqfhatfranglebarclangle fhatf sfhatfrangle right qquad left2langle fhatf qfhatfrangle2barclvert fhatfrvert2 2right leq 16 barclvert fhatfrvert2 2 endaligned having also used xmath193 for all xmath29 to get the second inequality since we assume the initial condition xmath194 by gronwall s inequality we get xmath195 hence xmath196 for all xmath26 and the solution 
xmath182 is unique since xmath146 are limits of xmath197 we have xmath198 for all xmath181 we denote the xmath199 moment of xmath61 by xmath200 the limiting mean field equation is the master equation of the non linear birth death chain xmath201 on xmath33 with generator xmath202 where xmath53 for all xmath54 this is the limit dynamics of the single site process and the time dependent birth rates xmath203 and death rates xmath204 are given by xmath205 note that this immediately implies that xmath206 is stationary but in general xmath110 is not an absorbing state as long as xmath207 for some xmath208 as discussed in detail later the adjoint operator xmath209 then characterizes the right hand side of the master equation which can be written as xmath210 xmath61 is indeed a probability distribution on xmath33 for all xmath26 since we have xmath211 also as usual xmath212 which leads to xmath213 this implies that the expectation is conserved for the chain xmath201 ie xmath214 which corresponds to the particle density xmath215 in the original particle system note however that xmath216 is not a martingale since xmath217 and the conservation of xmath218 results from the non linearity of the process by assumption on the ratesxmath219 the chain is further irreducible unless xmath51 is degenerate but we will see below that the additional conserved quantity leads to non uniqueness for the stationary distribution a misanthrope type process with generator eq genmis on the complete graph has a family of stationary product measures xmath220 provided that xmath221 this is well known see eg xcite also for more general translation invariant dynamics under additional conditions on xmath219 the marginals are given explicitly by xmath222frac1zphiwnphinquadmboxwithquad wnprodk1n fracc1k1ck0 which are normalized by the partition function xmath223 the parameter xmath224 is the fugacity controlling the average particle density xmath225phipartialphilogzphi which is a monotone increasing function of xmath226 with xmath227 these distributions exist for all xmath228 the domain is of the form xmath229 or xmath230 where xmath231 is the radius of convergence of xmath232 and we denote by xmath233 the maximal density for the family of product measures if xmath234 then xmath235 and if xmath236 the model exhibits a condensation transition see eg xcite which we discuss in more detail in section sec examples then for each xmath237 the single site marginal xmath238 is a stationary solution of from we have the relation xmath239 with the usual convention xmath240 and xmath53 for all xmath54 this leads to xmath241 where in the second equality and the last equality we use and in the third equality we use therefore under condition we have an explicit stationary distribution for each value xmath242 of the conserved first moment provided it is not larger than xmath3 given by xmath243 with xmath244 such that xmath245 consider a fixed initial condition xmath51 for the limit equation with finite density xmath246 a natural corresponding sequence of initial conditions for the particle system are simply product measures xmath247 with marginals xmath248f0 in which case xmath43 for all xmath249 another useful choice is a conditional version of these measures with a fixed number of particles xmath250textrmand fl0pil netaxcdot if xmath251 is chosen to increase with xmath35 such that xmath252 then xmath253 as xmath37 weakly and in total variation distance the formulation of our main result requires iid initial conditions initialcon1 which provides 
permutation invariance of the dynamics and is otherwise used only in permutation invariance is also given under the conditional measures and the condition introduces only a small negative correlation between different occupation numbers xmath254 and xmath255 of order xmath149 this leads to a vanishing correction in and the proof can be easily adapted to also cover initial conditions with a fixed number of particles another generic initial condition of the form is to simply distribute xmath251 particles uniformly at random leading to binomial marginals xmath256 converging to poixmath215 variables as xmath257 and xmath258 given a family of stationary measures xmath243 a natural question is that of ergodicity ie for initial conditions xmath51 with first moment xmath259 does xmath61 converge to xmath243 with xmath245 while contraction arguments may by possible for particular jump rates xmath260 we are not aware of general results on convergence to stationary solutions for non linear dynamical systems that would answer this question on the restricted state spacexmath261 the process xmath262 is a finite state irreducible markov chain which converges to its unique stationary distribution xmath263 the equivalence of ensembles for such models has been established see eg xcite and references therein and ensures weak convergence xmath264tofkphi quadmboxas l ntoinfty n ltorho provided that xmath265 for condensing models with xmath266 the above holds with xmath267 which corresponds to a loss of mass in the condensate since the limit has only first moment xmath268 the sequence of marginals xmath269 is uniformly integrable if and only if xmath270 in which case convergence in holds also in xmath271 due to ergodicity for a finite state markov chain we have xmath272quadmboxas ttoinfty for each finite xmath35 which holds in total variation or xmath273 distance if the convergence xmath274 was uniformly in xmath128 this could be used to establish ergodicity for the limit process but the error bounds arising from lemma lemmabp are in fact of order xmath275 for some xmath276 since the branching processes in our coupling argument grow exponentially in time they are clearly only useful for xmath277 in particular for all fixed xmath128 and our proof does not provide uniform convergence in fact ergodicity breaking is a well known phenomenon in the presence of phase transitions eg for the contact process uniqueness of the stationary distribution is lost in infinite volume for solutions to however we still expect ergodicity at least for xmath278 and explicit heuristic scaling solutions for particular systems discussed in the next section support this even for xmath266 note that our main result in theorem thmfactorization holds independently of condition and instead requires boundedness of the rates xmath219 without conditionwe still expect a continuous family of stationary distributions for the birth death chain indexed by the first moment with similar ergodicity properties but we are not aware of related results results on some particular cases of non linear birth death chains can be found in xcite we discuss two examples of processes of type that exhibit condensation and have attracted significant recent research interest the second has unbounded rates and is not covered by our main theorem but we include it to illustrate the possible irregular behaviour and non existence of solutions to related to gelation in growth aggregation models for zero range processes zrp the jump rates depend only on the occupation of the 
departure site and we use the notation xmath279this leads to the rate equation taking the form xmath280 valid for all xmath29 with the convention xmath52 as before this is the master equation of a birth death chain with xmath1independent birth rate xmath281 and time independent death rate xmath282 which have been studied in xcite zrps satisfy for all choices of rates xmath283 and exhibit stationary product measures of the form an interesting example is given by the bounded jump rates xmath284 with parameters xmath285 and xmath286 for the measures we have xmath287 andstationary weights xmath288 the symbol xmath289 indicates asymptotic proportionality as xmath290 with a power law and a stretched exponential decay respectively these models have been studied in great detail see eg xcite and we have xmath291 when xmath292 or xmath293 and xmath294 if the density xmath266 the system exhibits condensation where a finite fraction of all particles concentrates in a single condensate site accordingly xmath295 is the stationary measure with maximal density xmath3 of the birth death chain with master equation intuitively the dynamic mechanism of condensation in this model is due to the decreasing jump rates xmath282 leading to an effective attraction between particles on sites with a large occupation number the system exhibits an interesting coarsening phenomenon where over time the condensed phase concentrates on a decreasing number of lattice sites with increasing occupation numbers there are only partial rigorous results so far on this question xcite and it has been studied heuristically in xcite and also xcite in terms of scaling solutions of while for initial conditions with xmath296 ergodicity as discussed in section sec ergo is expected to apply for xmath266 the solution to phase separates for large times according to the scaling ansatz xmath297 k fmathrmbulkk tunderbracefk tmathbbi1sqrtepsilontinfty k fmathrmcondk t with a scaling parameter xmath298 explained below the bulk part of the distribution applies to finite fixed occupation numbers and converges as xmath299 in analogy with the discussion in section sec ergo the condensed part xmath300 vanishes pointwise as xmath301 taking the scaling form xmath302 for xmath303 the scaling function xmath304 satisfies the second order linear differential equation xmath305 with additional constraint xmath306 as xmath307 and normalization xmath308 the solutions have been discussed in detail in xcite and show a unimodal bump corresponding to the mass distribution in the condensed phase with a total density of xmath309 equaling the excess mass which is missing in the bulk part for xmath310 the analogue tois more complicated and a detailed analysis is provided xcite with the first momentbeing conserved the simplest characterization of condensation dynamics is given by the second moment of the occupation numbers xmath311 using the scaling ansatz and computing xmath312 and with andthis is dominated by the condensed part and diverges as a power law xmath313 explosive condensation processes ecp have been introduced in xcite and further studied in xcite on a heuristic level the jump rates are of the form xmath314 are unbounded and diverge super linearly with occupation numbers on departure and target site for xmath315this model is called the inclusion process which has been studied on a rigorous level in xcite while our result does not apply to unbounded rates still represents the only possible limit dynamics for xmath163 and we expect convergence to actually hold 
at least as long as it has a unique solution rates of the form are related to collision kernels in aggregation models which have attracted significant research interest see eg xcite and references therein the rates satisfy condition and we have product measures of the form with xmath287 and xmath316 therefore xmath317 for xmath318 and as for models with bounded rates we expect xmath319 as xmath320 for all initial conditions with xmath321 if xmath322 we expect a scaling solution in analogy to zrps the exchange driven growth model studied in xcite corresponds to rates in the degenerate case xmath323 and provides a detailed analysis of the condensed part of the scaling solution note that in this case xmath324 and the mean field equation has an absorbing state corresponding to xmath325 as the only stationary distribution for all xmath326 effectively setting xmath327 still xmath328 is conserved and the dynamics of the particle system is not irreducible more and more sites become emptied over time and can not get occupied again thereafter the limiting master equation can be written as xmath329 for all xmath29 using xmath330 this involves the moment xmath331 which can be absorbed in a time change xmath332 leading to a standard birth death chain with symmetric rates xmath333 since xmath327all initial conditions with xmath334 lead to phase separated solutions of the form xmath335 now with xmath336 the results reported in xcite refer to xmath337 which for xmath338 again exhibits a scaling form xmath339 the scaling function again satisfies a second order linear differential equation xmath340 subject to normalization which has an explicit solution xmath341 for xmath318 there is no solution to the limit dynamics which exhibit instantaneous blow up of second moments also called gelation in the context of aggregation models see eg xcite on the level of the particle system this corresponds to the explosive condensation phenomenon studied in xcite for xmath342 where the time to reach the condensed state vanishes with increasing system size even in one dimensional geometries on the complete graph with xmath323 the behaviourcan again be characterized through the second moment as reported in xcite xmath343 the dynamical exponent for the power law cases above is given by xmath344 and for xmath345 the system exhibits finite time blow up at xmath346 which becomes instantaneous for xmath318 the boundary case xmath347 shows interesting multiscaling behaviour as discussed in section 3b xcite note that for xmath342 with only xmath318 leads to xmath236 and condensation is always explosive as mentioned above we have established the mean field equation as the limit dynamics of stochastic particle systems which provides an important ingredient for a rigorous analysis of the coarsening dynamics of condensing stochastic particle systems our result holds under arguably quite restrictive conditions which we discuss in detail in the following theorem thmfactorization is formulated for iid initial conditions and we have discussed in section sec properties how this can be extended to conditional product measures which introduce vanishing correlations and are permutation invariant in our proof permutation invariance is only used to establish existence in section sec existence this makes use of implying that the single site process xmath66 provides a realization of the limiting birth death chain since all estimates in section sec existence hold uniformly in xmath9 a similar argument can be used to establish tightness for the 
empirical process xmath348 this would allow for non permutation invariant initial conditions with vanishing correlations and a result on convergence of xmath349 mean field equations are often used as approximations in other geometries such as symmetric or asymmetric dynamics on xmath350dimensional regular lattices as usual the larger the dimension the better the approximation see eg xcite for details since our result does not involve any time scaling mean field averaging of the birth and death rates is achieved by a diverging number of neighbours of each lattice site this is a crucial ingredient in our proof in and in fact essential for any rigorous derivation of our arguments could be directly extended to graphs which are not complete but have a version of the above property condensing stochastic particle systems exhibit several time scales diverging with the system size eg for zrps this has been studied in xcite some of which have been identified recently also on a rigorous basis including hydrodynamics xcite and also metastable dynamics of the condensate xcite as we discussed in section sec properties convergence in our result does not hold uniformly in time and error estimates vanish on time scales at most of order xmath351 due to the coupling with branching processes boundedness of jump rates is the most restrictive condition that we expect to be not necessary for the limit result to hold but which would require a significant extension of our proof including eg a priori bounds on occupation numbers to use cut off arguments for the inclusion process with xmath352 our resultcan be established with other techniques which is current work in progress however the example of explosive processes with xmath318 shows that some growth conditions on the rates are necessary for convergence to to hold in cases of instantaneous blow up the single site process xmath66 does not have well defined limit dynamics for any xmath128 we are grateful to i armendriz for helpful advice on the proof in section sec limit w j acknowledges funding from dpst the royal thai government scholarship and s g partial support from the engineering and physical sciences research council epsrc grant no ep m0036201 w jatuviriyapornchai s grosskinsky coarsening dynamics in condensing zero range processes and size biased birth death chains journal of physics a mathematical and theoretical 49 18 2016 185005 m balzs f rassoul agha t sepplinen s sethuraman et al existence of the zero range process and a deposition model with superlinear growth rates the annals of probability 35 4 2007 12011249 t rafferty p chleboun s grosskinsky monotonicity and condensation in homogeneous stochastic particle systems to appear in annales de linstitut henri poincar probabilits et statistiques 2017
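as a small numerical illustration of the limiting mean field dynamics discussed in this paper, the following sketch integrates the non linear birth death master equation for a zero range example on the complete graph and checks that the normalization and the first moment (the particle density) are conserved; the concrete rate function, the truncation and the poisson initial condition are illustrative assumptions rather than choices made in the text

```python
import numpy as np

b, K = 4.0, 200                      # rate parameter and truncation of the occupation numbers
k = np.arange(K + 1)
g = np.zeros(K + 1)
g[1:] = 1.0 + b / k[1:]              # assumed decreasing zero-range rates, g(0) = 0

def rhs(f):
    a = np.dot(g, f)                 # k-independent mean-field birth rate sum_l g(l) f_l(t)
    df = -(a + g) * f                # leave k through a birth or a death event
    df[1:] += a * f[:-1]             # birth k-1 -> k
    df[:-1] += g[1:] * f[1:]         # death k+1 -> k
    df[K] += a * f[K]                # suppress births out of the truncation boundary
    return df

# Poisson initial distribution with density rho (an illustrative product initial condition)
rho = 2.0
f = np.zeros(K + 1)
f[0] = np.exp(-rho)
for i in range(1, K + 1):
    f[i] = f[i - 1] * rho / i

m1_start = np.dot(k, f)
dt, steps = 1e-3, 20000              # crude explicit Euler time stepping
for _ in range(steps):
    f = f + dt * rhs(f)

# with K chosen large enough that f[K] stays negligible over the simulated time,
# both the normalization and the first moment should be conserved by the dynamics
print("normalization:", f.sum())
print("first moment :", np.dot(k, f), "(initially", m1_start, ")")
```

the explicit time stepping and the truncation are only for illustration; the point is that the birth rate is the instantaneous average jump rate, which makes the chain non linear and turns the conservation of the particle density into the conservation of the first moment of the single site distribution, as discussed above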
we study the single site dynamics in stochastic particle systems of misanthrope type with bounded rates on a complete graph in the limit of diverging system size we establish convergence to a markovian non linear birth death chain described by a mean field equation known also from exchange driven growth processes conservation of mass in the particle system leads to conservation of the first moment for the limit dynamics and to non uniqueness of stationary measures the proof is based on a coupling to branching processes via the graphical construction and establishing uniqueness of the solution for the limit dynamics as particularly interesting examples we discuss the dynamics of two models that exhibit a condensation transition and their connection to exchange driven growth processes keywords mean field equations misanthrope processes non linear birth death chain condensation
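As a companion to the sketch above, here is a rough Monte Carlo simulation of a misanthrope-type process on the complete graph, with jump rates scaled by 1/N as in the mean-field limit and with the same placeholder zero-range-type rate function. It only illustrates how the empirical single-site distribution can be compared with the mean-field solution; it is not the coupling construction used in the proof.

import numpy as np

rng = np.random.default_rng(1)
N, rho, b_par = 1000, 3.0, 4.0
eta = rng.poisson(rho, size=N)                     # iid initial condition

def rate(k, l):
    """Placeholder jump rate from a site with k particles to a site with l particles."""
    return 0.0 if k == 0 else 1.0 + b_par / k

r_max = 1.0 + b_par                                # the rates are bounded above
t, t_end = 0.0, 10.0
while t < t_end:
    t += rng.exponential(1.0 / ((N - 1) * r_max))  # thinning clock for the bounded rates
    x, y = rng.choice(N, size=2, replace=False)    # uniformly chosen ordered pair on the complete graph
    if rng.random() < rate(eta[x], eta[y]) / r_max:
        eta[x] -= 1                                # accepted jump x -> y
        eta[y] += 1

counts = np.bincount(eta)
print("density:", eta.mean())                      # conserved by construction
print("empirical p_k, k=0..9:", counts[:10] / N)   # compare with the mean-field solution above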
introduction notation and main result proof of the main result properties of solutions examples of condensing particle systems discussion acknowledgements
modulational instability mi refers to a process where a weak periodic perturbation of an intense continuous wave cw grows exponentially as a result of the interplay between dispersion and nonlinearity mi constitutes one of the most basic and widespread nonlinear phenomena in physics and it has been studied extensively in several different physical systems like water waves plasmas and optical devices xcite for a cubic nonlinearity as the one occurring in the nonlinear schrdinger equation used to model optical fibers the underlying physical mechanism can be understood in terms of four wave mixing between the pump signal and idler waves however the scalar four wave interactions in a homogeneous fiber can be phase matched and hence efficient only in the anomalous group velocity dispersion gvd regime in the normal gvd regime on the other hand mi can occur in detuned cavities xcite thanks to constructive interference between the external driving and the recirculating pulse alternatively mi with normal gvd can also arise in systems with built in periodic dispersion xcite among which dispersion oscillating fibers dofs have recently attracted renewed attention xcite in this case phase matching relies on the additional momentum carried by the periodic dispersion grating quasi phase matching the occurrence of unstable frequency bands can then be explained using the theory of parametric resonance a well known instability phenomenon which occurs in linearized systems for which at least one parameter is varied periodically during the evolution xcite up to now most experimental investigations realised in optical fibers have been performed with basic sinusoidal xcite or amplitude modulated xcite modulation formats in this work on the other hand we study a radically different periodic modulation of the gvd in the form of a periodic train or comb of dirac delta spikes this is a fundamental and widespread modulation format encountered in a variety of physical systems in optics delta combs have been exploited to model lumped amplification in long haul fiber optic transmission systems xcite or to model the power extraction in soliton based fiber lasers xcite moreover comb like dispersion profiled fibers have been exploited to generate trains of solitons starting from a beat signal xcite at more fundamental level kicked systemsare widely investigated as a paradigm for the emergence of chaos in perturbed hamiltonian systems with the delta kicked rotor being the most renowned example xcite its quantum version is described by a schrdinger equation forced by a dirac comb and has been extensively analyzed to study chaos in quantum systems xcite recirculating fiber loops have been used to reproduce the quantum kicked rotor with an optical system to study chaos and anderson localization xcite and to illustrate how an optical system can be used to mimic other physical systems that are more difficult to reproduce experimentally in the same vein we hope the experimental setup we propose in this paper could be used as an experimental platform to investigate such phenomena in the presence of nonlinearities a topic of much current interest finally the approach that we propose to analyse mi in the fiber with delta kicked gvd allows us on one hand to enlighten the featuress of the parametric resonance that are not dependent on the specific format of the modulation and on the other hand to compare and contrast the features of the ideal delta kicked profile with other formats including non ideal physically realizable kicking as 
well as widely employed profiles such as oscillating gvd the paper is organized as follows in section s centralfreq we provide a simple argument allowing to determine the central frequencies of the unstable sidebands for general periodically modulated fibers in section s diraccomb we then use floquet theory to analytically compute the width of the gain bands and as well as their maximum gain for dispersion kicked fibers in section s approxdelta we investigate numerically the effect of the smoothing of the delta comb in section s expresults we describe the experimental set up and we compare the experimental results with theory and numerical simulations based on the generalized nonlinear schrdinger equation we draw our conclusions in section s conclusions consider the nlse xmath0 we will assume the dispersion xmath1 and the nonlinearity coefficient xmath2 are of the form xmath3 where xmath4 and xmath5 are periodic functions of period xmath6 such that xmath7 and xmath8 let xmath9 be a stationary solution of we consider a perturbation of xmath10 in the form xmath11 where the perturbation xmath12 satisfies xmath13 inserting this expression in and retaining only the linear terms we find xmath14 writing xmath15 with xmath16 and xmath17 real functions we obtain the following linear system xmath18 finally taking the fourier transform of this system in the time variable xmath19 leads to xmath20 where we used the definiton xmath21 note that this is a hamiltonian dynamical system in a two dimensional phase plane with canonical coordinates xmath22 analyzing the linear instability of the stationary solution xmath10 therefore reduces to studying the solutions to for each xmath23 since the coefficients in the equation are xmath24periodic with period xmath6 floquet theory applies this amounts to studying the linearized evolution over one period xmath6 to obtain the floquet map xmath25 which in the present situation is the two by two real matrix defined by xmath26 as a result xmath27 note that xmath28 necessarily has determinant one since it is obtained by integrating a hamiltonian dynamics of which we know that it preserves phase space volume as a consequence if xmath29 is one of its eigenvalues then so are both its complex conjugate xmath30 and its inverse xmath31 this constrains the two eigenvalues of xmath32 considerably they are either both real or lie both on the unit circle now the dynamics is unstable only if there is one eigenvalue xmath29 satisfying xmath33 in which case both eigenvalues are real we will write xmath34 for the two eigenvalues of xmath28 we are interested in studying the gain that is xmath35 as a function of xmath23 xmath36 and xmath37 it measures the growth of xmath38 the gain vanishes if the two eigenvalues lie on the unit circle a contour plot of the gain in the xmath39 plane for the case of the delta comb dispersion modulation that is the main subject of this paper can be found in fig fig arnoldtongues the regions where the gain does not vanish are commonly referred to as arnold tongues we will explain below that whereas their precise form depends on the choice of xmath40 the position of their tips does not since the system is not autonomous it can not be solved analytically in general nevertheless the above observations will allow us to obtain some information about its instability for small xmath36 xmath37 and valid for all perturbations xmath40 whatever their specific form to see this we first consider the case xmath41 it is then straightforward to integrate the system the 
linearized floquet map is then given by xmath42 where xmath43 here xmath44 normal average dispersion since we restrict our investigations to the defocusing nls note that the matrix xmath45 has determinant equal to xmath46 as expected the eigenvalues of xmath45 can be readily computed as xmath47 what will happen if we now switch on the interaction terms xmath48 and xmath49 it is then no longer possible in general to give a simple closed form expression of the solution to which is no longer autonomous and hence of the linearized floquet map xmath50 nevertheless we do know that for small xmath51 the eigenvalues of xmath50 must be close to the eigenvalues xmath52 we then have two cases to consider xmath53 now xmath54 they are distinct and they both lie on the unit circle away from the real axis they then must remain on the unit circle under perturbation since for the reasons explained above they can not move into the complex plane away from the unit circle consequently in this case the stationary solution xmath10 is linearly stable under a sufficiently small perturbation by xmath55 and xmath56 and this statement does not depend on the precise form of xmath48 or of xmath49 in fact with growing xmath36 andor xmath37 the two eigenvalues will move along the unit circle until they meet either at xmath57 or at xmath58 for some critical value of the perturbation parameters only for values of the latter above that critical valuecan the system become unstable a pictorial description of this situation is shown in the left hand side of fig eigenvalues xmath59 xmath60 now xmath61 is a doubly degenerate eigenvalue of xmath62 under a small perturbation the degeneracy can be lifted and two real eigenvalues can be created one greater than one one less than one in absolute value the system has then become unstable of course it will now depend on the type of perturbation whether the system becomes unstable remains marginally stable the two eigenvalues do nt move at all but stay at xmath46 or xmath57 or becomes stable the two eigenvalues move in opposite directions along the unit circle a pictorial description of this situation is shown in the right hand side of fig eigenvalues for the dirac comb modulation of xmath1 which is our main object of study in this paper the details are given in the next section in conclusion examining one sees that only if xmath63 where xmath64 can an infinitely small hamiltonian perturbation of xmath62 lead to an unstable linearized dynamics near the fixed points xmath10 considered these values of xmath23 therefore correspond to the tips of the arnold tongues that is to the positions of the centers of the unstable sidebands of the defocusing nls under a general periodic perturbation xmath40 this is illustrated for a dirac comb modulation of the gvd in fig fig arnoldtongues one also observes in that figure that for a value of xmath23 close to some xmath65 the system becomes unstable only for a small but nonzero critical value of xmath36 that we shall compute below for the dirac delta comb gvd andxmath49 on the eigenvalues of the linearized floquet map floqlin black dots correspond to the unperturbed eigenvalues lying on the unit circle dashed line coloured dots show the new position of the eigenvalues after switching on the perturbations leading to a stable regime when xmath66 and an unstable one when xmath67 width340 equation omegal was derived in xcite by appealing to the theory of parametric resonance and poincar lindstedt perturbation theory our argument above is elementary and 
shows in a simple manner that the resonant frequencies xmath65 do not at all depend on the form of xmath68 or xmath5 note that if xmath69 and xmath70 a case considered in xcite the system is equivalent to the equation of a harmonic oscillator of spatial frequency xmath71 sinusoidally modulated with period xmath6 in that case the system leads to a mathieu equation for which it is known that resonance occurs when the period of the modulation is an integer multiple of the half spatial period of the oscillator which is xmath72 additional physical insight can be obtained by expanding equation omegal for small power ie assuming xmath73 at zero order we recover the well known quasi phase matching relation xcite xmath74 equation phasematching expresses the conservation of momentum in the four wave mixing interaction between two photons from the pump going into two photons in the symmetric unstable bands at lower stokes and higher antistokes frequencies with respect to the pump made possible thanks to the virtual momentum carried by the dispersion grating fig arnoldtongues mi gain in the xmath75 plane for xmath76 xmath77 xmath78 and xmath79 the dashed black lines correspond to the tips of the arnold tongues omegal at xmath80 and xmath81 the solid red lines correspond to the gain bandwidth which can be computed from b mi gain for xmath82 red circles estimates of maximum gain gmax black crosses estimates of the bandwidth c solid blue curve mi gain for xmath83 dashed red curve approximation of maximum gain gmax we now turn our attention to the computation of the gain xmath84 in particular for values of xmath23 close to the resonant frequencies we concentrate on the special case where the gvd is a dirac delta comb xmath85 with no modulation of the nonlinearity gammam0 since in the rest of this paper xmath86 we will drop it from the notation to compute the gain we need to compute the linearized dynamics xmath87 and determine the behaviour of its eigenvalues xmath88 in the neighbourhood of xmath89 and xmath63 in the xmath75 plane in this case the linearized floquet map is easily seen to be explicitly given by xmath90 where xmath45 is defined by equation floqlin but now with xmath91 and xmath92 the characteristic polynomial of xmath93 is given by xmath94 so that the eigenvalues of eq linfloqbetam can be computed explicitly as xmath95 with xmath96 and xmath97 a taylor expansion of xmath98 about xmath99 yields xmath100 where xmath101 the dependence on xmath102 rather than on xmath36 entails that the sign of the kick has no influence in this regime ie assuming xmath103 formula eq arnoldtongueslin shows that
xmath104 is a saddle point for xmath105 if xmath106 is even xmath107 occurs close to xmath104 and if xmath106 is odd xmath108 close to xmath104 more precisely xmath109 from which we can find an estimate of the gain amplitude xmath110 and of the bandwidth xmath111 near the tips of the tongue at xmath65 as xmath112 xmath113 note that the threshold value for xmath36 above which instability occurs can be read off from the above by setting xmath114 which corresponds to xmath115 this confirms again as expected that an arbitrary small xmath36 will generate instability right at xmath63 in fig fig arnoldtonguesa we show an example of the analytically computed mi gain showing the first two arnold tongues as can be seen for a small enough strength of perturbation let s say xmath116 the approximation band gives a good estimate of the width of the parametric resonance see red curves this situation is detailed further in figs fig arnoldtonguesb c showing a section for xmath82 and xmath83 respectively finally a straightforward calculation gives the asymptotic behaviour of the gain xmath117 at xmath65 for xmath106 large and xmath36 fixed that is xmath118 with xmath119 whenever xmath120 and xmath121 otherwise as a function of xmath106 red circles estimated gain given by parameter values are xmath76 xmath77 xmath78 xmath79 xmath82 width302 in fig profilegainlarge we show an example of the analytically computed mi gain at xmath65 as a function of xmath106 we compare it to the approximation which is very accurate even for small xmath106 see red circles note in particular that the oscillating behaviour of the gain is well captured by which for xmath106 large enough and xmath36 small can be approximated by xmath122 summing up it is clear from the above that precisely at the values xmath65 which only depend on xmath6 and on xmath123 but not on the precise form of xmath68 any small perturbation can create an instability and hence a gain at frequenciesxmath23 near these particular values a minimal threshold strength of xmath36 is needed to create an instability this minimal value and even the fact that an instability is indeed generated does depend on the precise form of xmath68 forthe dirac comb the explicit expression for the gain in this regime can be read off from with xmath82 calculated numerically at the first parametric resonant frequency xmath124 for two approximations of delta functions as a function of their inverse height xmath125 solid blue curve gaussian approximation dashed red curve rectangular pulse approximation dashed horizontal line dirac delta limit dash dotted horizontal line sinusoidal modulation parameter values are xmath126 xmath77 xmath127 xmath79 in order to shed light on the dependence of the gain on the shape of the periodic modulation and also with an eye towards the experimental realization of the dirac comb fiber we now analyze what happens when the dirac comb is approximated by a train of physically realizable kicks we thus consider a smoothened dirac comb described by xmath128 where we normalize the positive function xmath129 in order to have xmath130 for a rectangular pulse of width xmath131 we get xmath132 for a gaussian function xmath133 the maximum amplitude of the kick can be calculated as xmath134 that in the limit xmath135 gives xmath136 note that in these models we have xmath137 xmath138 hence xmath139 corresponds to a rather symmetric situation where xmath140 is close to the midpoint between xmath141 and xmath142 so that xmath1 fluctuates symmetrically about its average 
value whereas xmath143 corresponds to a very asymmetric situation where xmath1 has a large abrupt peak the parameter xmath144 therefore controls the shape of the gvd modulation at fixed xmath140 and xmath145 or xmath146 as shown in the previous section by changing the shape of the kick we do not change the frequency of the parametric resonances the smoothing of the delta function nevertheless does modify the characteristics of the mi by changing the value of the gain as we now illustrate by computing the gain numerically at the resonant frequencies xmath65 an example of how the changing shape of the modulation xmath68 modifies the first parametric resonance is illustrated in fig deltanonperf that shows the gain xmath147 at fixed xmath124 and xmath36 as a function of the peak amplitude xmath144 or equivalently the width xmath148 of the kicks we make the following observations first a good approximation of the gain given by the dirac comb is obtained for xmath149 both for the rectangular and gaussian pulses second for xmath150 the gain of the square pulse modulation is zero as expected since we are then in the limit case of a constant modulation and normal gvd third it is apparent that the dirac comb gives the highest possible gain for a fixed area of the kicks and fixed xmath36 and xmath140 finally it is interesting to note that a sinusoidal modulation xmath69 with the same value of xmath36 gives a gain close to one half with respect to the delta case indeed for a sinusoidal modulation it has been shown that see equation 7 from ref xcite xmath151 by expanding equation gmaxsin for small xmath36 we get xmath152 at first order in xmath36 in conclusion a large concentrated perturbation of the gvd about its average enhances the mi gain it is well known that in homogeneous fibers the gvd depends on the diameter of the fiber one can therefore modulate the gvd by modulating the diameter of the fibers as a function of xmath24 as in xcite we manufactured three different microstructured optical fibers modulated by a series of gaussian pulses to approximate the ideal dirac delta comb studied in section s diraccomb the change of their outer diameters xmath153 along the fiber is represented in fig fig fibres expa as can be seen in the inset their diameters have a gaussian shape with a standard deviation xmath148 which is the same for all three fibers and very small xmath154 m compared to the period of the comb 10 m hence we can write xmath155 where xmath156 the three fibers have a very similar minimum diameter xmath157 while their maximum values are different we have xmath158172 xmath159 xmath160207 xmath159 and xmath161240 xmath159 for fibers labelled a b and c respectively corresponding to xmath162 to understand how the two experimental parameters xmath148 and xmath163 control the quality of the approximation of the delta function on the one hand and the value of xmath36 on the other hand we proceed as follows first xmath164 so that xmath165 a first order taylor expansion of xmath166 about xmath167 yields xmath168 comparing this to we find xmath169 and xmath170 hence with the notation of section s approxdelta xmath171 this corresponds to xmath172 proving that these gaussian pulses should induce a very similar parametric gain compared to ideal dirac delta functions see fig deltanonperf furthermore the height xmath163 of the gaussian pulse controls xmath36 which will allow us to investigate the impact of this parameter on the first mi side lobe gain as it was done in the theoretical study and illustrated 
in figfig arnoldtongues and xmath173 fig fibres expwidth302 for all three fibers the ratio of the diameter of the holes over xmath174 the pitch of the periodic cladding is assumed to be constant along the fiber and estimated to about 048 from scanning electron microscope images the diameter variations of the fibers are proportional to those of the pitch with xmath175 corresponding to the minimum value of the diameter xmath176 green line in fig fig fibres expa and xmath177 and xmath178 blue black and red curves respectively for the maximum values as an example the group velocity dispersion gvd curve corresponding to the minimum pitch value has been calculated from refxcite and is represented in fig fig fibres expb as a green curve its zero dispersion wavelength zdw is located at 1055 nm while those of the gvd curves corresponding to the maximum values of the diameters of fibers a b and c are red shifted to 1110 nm 1136 nm and 1168 nm respectively fig fig fibres expb in order to give another illustration of the large dispersion variations induced in these fibers by varying their diameters the maximum gvd values for fibers a b and c have been calculated at a fixed wavelength 10525 nm and compared to the background value as can be seen in fig fig fibres expb an increase of the diameters by a factor of only 127 145 and 175 leads to a one order of magnitude improvement on the gvd values 21 29 and 35 respectively under such large variation of the fiber diameter the gvd can no longer be considered as proportional to the pitch value as was the case in refxcite for instance this is illustrated in figfig fibres expc where the evolution of the gvd calculated at 10525 nm as a function of the fiber diameter is represented it can be seen to be well approximated by an affine function in the range between 140xmath179 m and 180 xmath179 m but not beyond as a consequence the shape of the gvd variations will be slightly different from the one of the diameter specifically for fiber c however we checked numerically that this can be considered as relatively weak distortions that do not significantly impact the gain of the mi process we can still consider that the key parameters remain the different heights in gvd of the gaussian like pulses in fibers a b and c in order to get a more complete picture of the impact of the fiber diameter variations on its guiding properties the variation of the nonlinear coefficient is plotted as a function of the fiber diameter in fig fig fibres expd the most important feature to note here is that the amplitude of variation is much smaller and only the same order as the one of the diameter itself hence these variations are more than one order of magnitude lower than those of the gvd and we have checked numerically that their impact on the mi process is negligible consequently we can infer that these fibers represent a good prototype to validate our theoretical investigation in the previous sections where only longitudinal gvd variations have been taken into account width302 the experimental setup is schematized in fig fig experimental setup the pump system is made of a continuous wave tunable laser tl diode that is sent into an intensity modulator mod in order to shape 2 ns square pulses at 1 mhz repetition rate they are amplified by two ytterbium doped fiber amplifiers ydfas at the output of which two successive tunable filters are inserted to remove the amplified spontaneous emission in excess around the pump these quasi cw laser pulses have been launched along the birefringent 
axis of the fibers the pump peak power has been fixed to 65 w and the pump wavelength at 10525 nm for fiber a the output spectrum recorded at its output is represented as a blue curve in fig fig exp et simul a two mi side lobes located at 48 thz appear on both sides of the pump these experimental results have been compared with numerical simulations performed by integrating the generalized nonlinear schrdinger equation we used the gvd xmath180 and xmath181 variations calculated from the measured diameter values see figs fig fibres exp other parameters are extracted from experiments and are listed in the caption of fig fig exp et simul note that we checked that except the longitudinal gvd variations all other parameters can be assumed to be constant and equal to the average values as can be seen in the blue curve in fig fig exp et simulb two symmetric mi side lobes also appear in the simulated spectrum in a very good agreement with experiments their positions have been compared with the predictions of equation omegal represented by green dashed lines in fig fig exp et simulb calculated with xmath182 an excellent agreement is also obtained in order to show that the mi gain is larger when the weight of the dirac delta function is increased we performed similar experiments in fibersb and c where the areas of the gaussian pulses are larger than in fiber a however in experiments due to the fact that the dirac comb has been approximated with a series of gaussian functions changing their amplitudes also modifies the average value of the dispersion as a consequence mi side lobes would be generated at different frequency shifts in order to keep constant the position of the mi side lobes and then provide a correct comparison with the theoretical study one have to take care to keep the average gvd value constant in all the fibers to do so experimentally in fibers b and c we slightly tuned the pump wavelength until the first mi side lobe is generated at 48 thz as in fiber a as can be seen in fig fig exp et simula red and black curves the position of the first mi side lobe in fibers b and c is indeed located at that frequency by tuning the pump wavelength to 10618 nm and 1067 nm respectively we can therefore consider that the average gvd values are very similar in the three fibers and hence that only the areas of the gaussian functions ie the equivalent of the dirac weights vary the amplitudes of the first mi side lobes generated in fibers b and c are indeed larger compared to fiber a as predicted by the theory this is in pretty good agreement with numerical simulations figfig exp et simulb where the same procedure was used we found that the average values in fibers b and c are xmath183 and xmath184 respectively the small discrepancy between these values is attributed to spurious longitudinal fluctuations arising during the drawing process indeed as can be seen in fig fig fibres expa the background over which gaussian pulses are superimposed is not perfectly flat and in fibre c it is not horizontal to counterbalance these imperfections it was necessary to adjust the average dispersion values we checked that with a series of perfect gaussian pulses superimposed on a flat and horizontal background the same average gvd value would be obtained furthermore we can note that in fibers b and c additional mi side lobes are generated due to the periodic modulation of the gvd labelled mii in fig exp et simula up to 5 in fiber b their positions are also well predicted by numerical simulations and by equation omegal 
green lines in fig exp et simulb this excellent agreement confirms that their positions indeed scale approximately as xmath185 xmath186 being the side lobe order that is the typical signature of the mi process occurring in dispersion oscillating fibers it was already reported experimentally in refsxcite with a sinusoidal variation of the gvd modulated in amplitude or not and it is now illustrated in this paper with a dirac delta comb moreover in fibers b and c two symmetric side lobes that are not predicted by the theory appear around the pump at about 215 thz labelled spurious side lobes in figfig exp et simula they result from a non phase matched four wave mixing process involving the pump the first and second mi side lobes the energy conservation relation involving these waves predicts a frequency shift of 22 thz from the pump for the fourth wave that is in good agreement with the shift of 215 thz measured experimentally a experiments and b numerical simulations parameters xmath187 fig exp et simulwidth302 modulation instability has been investigated theoretically and experimentally in dispersion kicked optical fibers an analytical expression of the parametric gain has been obtained allowing to predict the behavior of the mi process in such fibers specifically it was shown that increasing the weights of the dirac functions leads to larger mi gains for the first mi side lobe we exploit the fact that the dirac delta comb can be well approximated by a series of short gaussian pulses in order to perform an experimental investigation using microstructured optical fibers we then experimentally report for the first time to our knowledge multiple mi side lobes at the output of these dispersion kicked optical fibers we demonstrate that they originate from the periodic variations of the dispersion we also validate experimentally that increasing the height of the modulation leads to a larger gain for the first mi side lobe this illustrates that optical fibers constitute an interesting platform to realize experimental investigations of fundamental physical phenomena the present research was supported by the agence nationale de la recherche in the framework of the labex cempi anr11labx0007 01 equipex flux anr11eqpx0017 and by the projects topwave anr13js04 0004 fopafe anr12js09 0005 and noawe anr14achn0014
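The Floquet computation of the gain described above can be mimicked numerically as follows. This sketch uses the standard CW linearization of the NLSE, dp/dz = (beta2(z) w^2 / 2) q and dq/dz = -(beta2(z) w^2 / 2 + 2 gamma P) p, which may differ in notation and sign conventions from the (hidden) equations of the text; the dispersion profile is a Gaussian-smoothed kick comb, and all parameter values are illustrative rather than the experimental ones. The gain at each sideband frequency is read off from the Floquet multipliers of the numerically integrated one-period map.

import numpy as np
from scipy.integrate import solve_ivp

Z = 10.0              # modulation period [m]
beta2_avg = 1e-3      # average (normal) GVD [ps^2/m]
sigma = 5e-3          # integrated kick strength (area of each spike) [ps^2]
width = 0.05          # r.m.s. width of the Gaussian-smoothed kick [m]
gamma, P = 6e-3, 65.0 # nonlinearity [1/(W m)] and pump power [W]

def beta2(z):
    kick = sigma / (np.sqrt(2 * np.pi) * width) * np.exp(-(z - Z / 2) ** 2 / (2 * width ** 2))
    return beta2_avg - sigma / Z + kick          # keep the z-average equal to beta2_avg

def monodromy(w):
    def rhs(z, y):
        a = 0.5 * beta2(z) * w ** 2
        p, q = y[0::2], y[1::2]
        return np.ravel(np.column_stack((a * q, -(a + 2 * gamma * P) * p)))
    # propagate the two canonical basis vectors over one period
    y0 = np.array([1.0, 0.0, 0.0, 1.0])
    sol = solve_ivp(rhs, (0.0, Z), y0, rtol=1e-8, atol=1e-10, max_step=width / 4)
    yT = sol.y[:, -1]
    return np.array([[yT[0], yT[2]], [yT[1], yT[3]]])

freqs = np.linspace(2.0, 40.0, 200)              # angular frequencies [rad/ps]
gain = np.array([np.log(np.max(np.abs(np.linalg.eigvals(monodromy(w))))) / Z for w in freqs])
print("peak gain %.3e 1/m at w = %.2f rad/ps" % (gain.max(), freqs[gain.argmax()]))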
we study both theoretically and experimentally modulational instability in optical fibers that have a longitudinal evolution of their dispersion in the form of a dirac delta comb by means of floquet theory we obtain an exact expression for the position of the gain bands and we provide simple analytical estimates of the gain and of the bandwidths of those sidebands an experimental validation of those results has been realized in several microstructured fibers specifically manufactured for that purpose the dispersion landscape of those fibers is a comb of gaussian pulses whose widths are much shorter than the period and which therefore approximate the ideal dirac comb experimental spontaneous mi spectra recorded under quasi continuous wave excitation are in good agreement with the theory and with numerical simulations based on the generalized nonlinear schrodinger equation
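For completeness, a bare-bones split-step Fourier integration of the scalar NLSE with a z-dependent GVD and a weak noise seed is sketched below. It deliberately omits the higher-order dispersion, Raman and self-steepening terms of the generalized NLSE used for the quantitative comparisons above; the kick width, step size, fibre length and noise level are illustrative choices, not the experimental ones.

import numpy as np

nt, T = 2 ** 12, 100.0                      # grid points, time window [ps]
dt = T / nt
w = 2 * np.pi * np.fft.fftfreq(nt, d=dt)    # angular frequency grid [rad/ps]

P, gamma = 65.0, 6e-3                       # pump power [W], nonlinearity [1/(W m)]
Z, beta2_avg, sigma, width = 10.0, 1e-3, 5e-3, 0.2   # period [m], mean GVD [ps^2/m], kick area [ps^2], kick width [m]
dz, L = 0.05, 200.0                         # step [m], fibre length [m]

def beta2(z):
    zc = (z % Z) - Z / 2
    kick = sigma / (np.sqrt(2 * np.pi) * width) * np.exp(-zc ** 2 / (2 * width ** 2))
    return beta2_avg - sigma / Z + kick     # keep the z-average equal to beta2_avg

rng = np.random.default_rng(0)
u = np.sqrt(P) + 1e-2 * (rng.standard_normal(nt) + 1j * rng.standard_normal(nt))  # CW pump plus noise floor

z = 0.0
while z < L:
    u = np.fft.ifft(np.exp(0.5j * beta2(z) * w ** 2 * dz) * np.fft.fft(u))   # dispersion step
    u *= np.exp(1j * gamma * np.abs(u) ** 2 * dz)                            # Kerr step
    z += dz

spec = np.abs(np.fft.fftshift(np.fft.fft(u))) ** 2
f = np.fft.fftshift(w) / (2 * np.pi)        # frequency axis [THz]
side = np.abs(f) > 0.5                      # exclude the residual pump line
print("strongest sideband near f = %.2f THz" % f[side][np.argmax(spec[side])])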
introduction identifying the gain band central frequencies calculation of the modulational instability gain bands: dirac comb approximations of the delta function experimental results conclusions acknowledgments
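A small worked example for the central frequencies of the unstable sidebands, under the same standard CW linearization assumed in the sketches above: for the unmodulated reference fibre the one-period map has eigenvalues exp(+-i k(w) Z) with k(w) = sqrt(a (a + 2 gamma P)) and a = beta2_avg w^2 / 2, and the tongue tips sit where these degenerate at +-1, i.e. k(w_m) Z = m pi. This reproduces, under that assumption, the statement that the tip positions depend only on the average dispersion, the pump power and the period, not on the shape of the modulation; the numbers below are illustrative.

import numpy as np

beta2_avg = 1e-3       # ps^2/m
gamma, P = 6e-3, 65.0  # 1/(W m), W
Z = 10.0               # m

def omega_tip(m):
    # solve a (a + 2 gamma P) = (m pi / Z)^2 for a, then w = sqrt(2 a / beta2_avg)
    a = -gamma * P + np.sqrt((gamma * P) ** 2 + (m * np.pi / Z) ** 2)
    return np.sqrt(2.0 * a / beta2_avg)     # rad/ps

for m in range(1, 6):
    w = omega_tip(m)
    print(f"m = {m}:  omega = {w:6.2f} rad/ps,  f = {w / (2 * np.pi):5.2f} THz")
# for large m the frequencies scale roughly like sqrt(m), consistent with the
# approximate scaling of the side-lobe positions noted in the text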
in the ctf ii drive beam gun cs te photocathodes are used to produce a pulse train of 48 electron bunches each 10ps long and with a charge of up to 10nc xcite in ctf the main limit to lifetime is the available laser power which requires a minimal quantum efficiency qe of 15 to produce the nominal charge although cs te photocathodes are widely used a complete understanding especially of their aging process is still lacking spectra of the qe against exciting photons may help to understand the phenomenon according to spicer xcite the spectra of the quantum efficiency qe of semiconductors with respect to the energy of the exciting photons xmath0 can be described as xmath1 where xmath2 is the threshold energy for photoemission cxmath3 and cxmath4 are constants to measure the spectral response of photocathodes wavelengths from the near uv throughout the visible are necessary to attain these an optical parametrical oscillator was built xcite a frequency tripled nd yag laser pumps a betabarium borate bbo crystal in a double pass configuration as shown in figfig opo the emerging signal beam with wavelengths between 409 nm and 710 nm is frequency doubled in two bbo crystals the wavelengths obtained are between 210 nm and 340 nm the idler beam delivers wavelengths between 710 nm and xmath5 nm the measurements of the spectral response of photocathodes were made in the dc gun of the photoemission lab at cern xcite at a field strength of about 8 mv m spectra were taken shortly after the evaporation of the cathode materials onto the copper cathode plug as well as after use in the ctf ii rf gun xcite at fields of typically 100 mv m to be able to interpret the spectra in terms of spicer s theory it was necessary to split the data into 2 groups one at low photon energy and one at high photon energy see figfig cath87 then the data can be fitted well with two independent curves following eqeq spicer which give two threshold energies for a typical fresh cs te cathode the high energy threshold is 35ev the low one is 17ev as shown in figfig cath87 upper curve this might be a hint that two photo emissive phases of cs te on copper exist several explanations are possible the copper might migrate into the cs te creating energy levels in the band gap or possibly not only csxmath4te but also other cs te compounds might form on the surface and these might give rise to photoemission at low photon energy a hint to thismight be that the ratio of evaporated atoms of each element is not corresponding to csxmath4te see below after use we found that not only the complete spectrum shifted towards lower quantum efficiency but also that the photoemission threshold for high qe increased to 41ev which is shown in figfig cath87 lower curve one might expect that the photocathode is poisoned by the residual gas preventing low energy electrons from escaping however because typical storage lifetimes are of the order of months the effect must be connected to either the laser light or the electrical field we also produced a cs te cathode on a thin gold film of 100 nm thickness as shown in figfig cath120 the shoulder in the low energy response disappeared it is difficult to fit a curve for the spicer model to the low energy data the high photoemission threshold is at 35ev at the moment this cathode is in use in the ctf ii gun and will be remeasured in the future in terms of lifetime this cathode is comparable to the best cs te cathodes as it has already operated for 20 days in the rf gun as a new material presented first in xcite we 
tested rubidium telluride we took spectra of qe before and after use in the ctf ii gun as for cs te remarkably with this material there was no shift in the photoemission threshold towards higher energies but only a global shift in qe see figfig rb2te this might be due to the lower affinity of rubidium to the residual gas detailed investigations are necessary to clarify this long lifetimes for cs te cathodes are achieved only when they are held under uhv xmath6 mbar other photocathode materials like k sb cs are immunized against gases like oxygen by evaporating thin films of csbr onto them xcite therefore we evaporated a csbr film of 2 nm thickness onto the cs te figfig csbr shows the spectrum before the csbr film square points and after it round points the qe at 266 nm dropped from 43 to 12 in addition the photoemission threshold was shifted from 39ev to 41ev a long term storage test showed no significant difference between uncoated and coated cathodes more investigations will determine the usefulness of these protective layers in order to increase the sensitivity of the on line qe measurement during evaporation of the photocathodes we monitored the process with light at a wavelength of 320 nm we did not see any significant improvement in sensitivity notably in the high qe region film thicknesses are measured during the evaporation process by a quartz oscillator xcite typical thicknesses for high quantum efficiencies at xmath7 nm are 10 nm of tellurium and around 15 nm of cesium this results in a ratio of the number of atoms of each species of xmath8 far from the stoichiometric ratio of 05 for csxmath4te it is known that tellurium interacts strongly with copper xcite so that not all of the evaporated tellurium is available for a compound with subsequently evaporated cesium therefore we used also mo and au as substrate material however the ratio between the constituents necessary for optimum qe did not change significantly another reason might be that instead of csxmath4te csxmath4texmath9 is catalytically produced on the surface this compound as well as some others was found to be stable xcite lifetime in ctf depends on parameters like maximum field strength on the cathode vacuum and especially extracted charge typically a cathode is removed from the gun if the qe falls below 15 as shown in figfig lifetime lifetime does not depend on the initial qe a cathode having an initial qe of 15 round points lasted as long as one with 5 triangles as shown in tabletab1 the average current produced in ctf ii is nearly a factor 10000 lower than what is required for the clic drive beam a test to produce 1mc is under preparation in the photoemission laboratory at cern the exact reproduction of the clic pulse structure would require the clic laser which is still in the design stage comparison of cathode relevant parameter colsoptionsheader tab1 in a collaboration between rutherford appleton laboratory and cern a test which is compatible with our current installation is the production of 1ma of dc current which requires a uv laser power of 300mw at the cathode for this test we will illuminate the cathode with pulses of 100ns to 150ns pulse length at repetition rates between 1khz and 6khz as tabletab1 shows this is a factor 1000 more average current than in ctf ii and also demonstrates the basic ability of the cathodes to produce the ctf 3 drive beam i26xmath10a clic is still a factor 75 away we are currently searching for ways to produce higher charges as well measurements of qe against photon energy are 
routinely made after production and after use of photocathodes we have demonstrated that both low energy and high energy responses agree well with spicer s theory a gold buffer layer reduces the low energy response of cs te cathodes more work is needed to understand the measurements of the stoichiometric ratio of cs te coating with 2 nm csbr significantly decreased the quantum efficiency without improving the storage lifetime for the high charge drive beam of clic it is still necessary to demonstrate the capabilities of cs te for which first tests will be done soon e chevallay j durand s hutchins g suberlucq m wrgel photocathodes tested in the dc gun of the cern photoemission laboratory nuclear instruments methods in physics research section a vol 340 1994 146 156 cern clic note 203 e shefer a breskin r chechik a buzulutskov bk singh m prager coated photocathodes for visible photon imaging with gaseous photomultipliers nuclear instruments methods in physics research section a vol433 no1 2 1999 502 506
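The two-threshold analysis of the QE spectra described above can be reproduced with a short fitting script: the data are split into a low- and a high-photon-energy group and each group is fitted independently with a Spicer-type threshold power law QE(E) ~ C (E - E_th)**p. The exponent p is kept free because the exact functional form used in the text is hidden behind the xmath tokens, and the data arrays below are placeholders to be replaced by measured spectra.

import numpy as np
from scipy.optimize import curve_fit

def spicer(E, C, E_th, p):
    # threshold power law; clipped so the model vanishes below threshold
    return C * np.clip(E - E_th, 1e-9, None) ** p

# placeholder spectrum: photon energy [eV] and quantum efficiency
E  = np.array([1.8, 2.1, 2.4, 2.8, 3.2, 3.6, 3.9, 4.2, 4.7, 5.4, 5.9])
qe = np.array([2e-6, 8e-6, 3e-5, 9e-5, 2e-4, 6e-4, 4e-3, 1.5e-2, 3.5e-2, 5.5e-2, 7e-2])

split = 3.4                               # eV, boundary between the two groups
lo, hi = E < split, E >= split
p_lo, _ = curve_fit(spicer, E[lo], qe[lo], p0=[1e-4, 1.7, 2.0], maxfev=20000)
p_hi, _ = curve_fit(spicer, E[hi], qe[hi], p0=[1e-2, 3.5, 2.0], maxfev=20000)
print("low energy threshold  E_th = %.2f eV" % p_lo[1])
print("high energy threshold E_th = %.2f eV" % p_hi[1])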
for short high intensity electron bunches alkali tellurides have proved to be a reliable photo cathode material measurements of lifetimes in an rf gun of the clic test facility ii at field strengths greater than 100 mv m are presented the spectral response of the cs te and rb te cathodes was determined before and after their use in this gun with the help of an optical parametric oscillator the behaviour of both materials can be described by spicer s three step model during use the threshold for photo emission in cs te was shifted to higher photon energies whereas that of rb te did not change our latest investigations on the stoichiometric ratio of the components are shown the preparation of the photo cathodes was monitored with 320 nm wavelength light with the aim of improving the measurement sensitivity the latest results on the protection of cs te cathode surfaces with csbr against pollution are summarized new investigations on high mean current production are presented
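A back-of-envelope version of the stoichiometry estimate mentioned above, using the typical evaporated thicknesses of 10 nm tellurium and about 15 nm caesium: the areal atom densities are computed from bulk densities and molar masses, which is only an approximation for thin evaporated films, and the resulting ratio is not meant to reproduce the number quoted in the text (hidden behind the xmath token), only the arithmetic behind it.

AVOGADRO = 6.022e23

def areal_density(thickness_nm, density_g_cm3, molar_mass_g_mol):
    """Atoms per cm^2 for a film of given thickness, assuming bulk density."""
    thickness_cm = thickness_nm * 1e-7
    return thickness_cm * density_g_cm3 / molar_mass_g_mol * AVOGADRO

n_te = areal_density(10.0, 6.24, 127.6)   # 10 nm tellurium
n_cs = areal_density(15.0, 1.93, 132.9)   # 15 nm caesium
print("Te atoms / cm^2: %.2e" % n_te)
print("Cs atoms / cm^2: %.2e" % n_cs)
print("Te : Cs ratio   : %.2f  (stoichiometric Cs2Te would give 0.50)" % (n_te / n_cs))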
introduction measurements of qe against photon energy stoichiometric ratio lifetime in ctf ii high charge test conclusion
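A short worked example for the high mean current test discussed above: the laser power needed to draw a given photocurrent from a cathode of quantum efficiency QE at wavelength lam follows from I = QE * P * e / E_photon. With the minimal useful QE of 1.5 percent and a 266 nm photon energy this gives roughly 310 mW for 1 mA, in line with the approximately 300 mW quoted in the text.

H_PLANCK = 6.626e-34   # J s
C_LIGHT  = 2.998e8     # m/s
E_CHARGE = 1.602e-19   # C

def required_power(current_A, qe, wavelength_nm):
    e_photon = H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9)   # photon energy [J]
    return current_A * e_photon / (qe * E_CHARGE)            # laser power [W]

# 1 mA dc current at 266 nm with QE = 1.5 %
print("%.0f mW" % (1e3 * required_power(1e-3, 0.015, 266.0)))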
the transition from a liquid to an amorphous solid that sometimes occurs upon cooling remains one of the largely unresolved problems of statistical physics xcite at the experimental level the so called glass transition is generally associated with a sharp increase in the characteristic relaxation times of the system and a concomitant departure of laboratory measurements from equilibrium at the theoretical level it has been proposed that the transition from a liquid to a glassy state is triggered by an underlying thermodynamic equilibrium transition xcite in that view an ideal glass transition is believed to occur at the so called kauzmann temperature xmath5 at xmath5 it is proposed that only one minimum energy basin of attraction is accessible to the system one of the first arguments of this type is due to gibbs and dimarzio xcite but more recent studies using replica methods have yielded evidence in support of such a transition in lennard jones glass formers xcite these observations have been called into question by experimental data and recent results of simulations of polydisperse hard core disks which have failed to detect any evidence of a thermodynamic transition up to extremely high packing fractions xcite one of the questions that arises is therefore whether the discrepancies between the reported simulated behavior of hard disk and soft sphere systems is due to fundamental differences in the models or whether they are a consequence of inappropriate sampling at low temperatures and high densities different alternative theoretical considerations have attempted to establish a connection between glass transition phenomena and the rapid increase in relaxation times that arises in the vicinity of a theoretical critical temperature the so called mode coupling temperature xmath6 thereby giving rise to a kinetic or dynamic transition xcite in recent years both viewpoints have received some support from molecular simulations many of these simulations have been conducted in the context of models introduced by stillinger and weber and by kob and andersen xcite such models have been employed in a number of studies that have helped shape our current views about the glass transition xcite in its simplest idealized version firstly analyzed in the schematic approach by bengtzelius et al xcite and independently by leutheusser xcite the mct predicts a transition from a high temperature liquid ergodic state to a low temperature arrested nonergodic state at a critical temperature xmath0 including transversale currents as additional hydrodynamic variables the full mct shows no longer a sharp transition at xmath0 but all structural correlations decay in a final xmath7process xcite similar effects are expected from inclusion of thermally activated matter transport that means diffusion in the arrested state xcite in the full mct the remainders of the transition and the value of xmath0 have to be evaluated eg from the approach of the undercooled melt towards the idealized arrested state either by analyzing the time and temperature dependence in the xmath8regime of the structural fluctuation dynamics xcite or by evaluating the temperature dependence of the so called xmath3parameter xcite there are further posibilities to estimates xmath0 eg from the temperature dependence of the diffusion coefficients or the relaxation time of the final xmath7decay in the melt as these quantities for xmath9 display a critical behaviour xmath10 however only crude estimates of xmath0 can be obtained from these quantities since 
near xmath0 the critical behaviour is masked by the effects of transversale currents and thermally activated matter transport as mentioned above on the other hand as emphasized and applied in xcite the value of xmath0 predicted by the idealized mct can be calculated once the partial structure factors of the system and their temperature dependence are sufficiently well known besides temperature and particle concentration the partial structure factors are the only significant quantities which enter the equations of the so called nonergodicity parameters of the system the latter vanish identically for temperatures above xmath0 and their calculation thus allows a rather precise determination of the critical temperature predicted by the idealized theory at this stageit is tempting to consider how well the estimates of xmath0 from different approaches fit together and whether the xmath0 estimate from the nonergodicity parameters of the idealized mct compares to the values from the full mct regarding this we here investigate a molecular dynamics md simulation model adapted to the glass forming nixmath1zrxmath2 transition metal system the nixmath11zrxmath12system is well studied by experiments xcite and by md simulations xcite as it is a rather interesting system whose components are important constituents of a number of multi component massive metallic glasses in the present contributionwe consider in particular the xmath13 composition and concentrate on the determination of xmath0 from evaluating and analyzing the nonergodicity parameter the xmath14parameter in the ergodic regime and the diffusion coefficients our paper is organized as follows in section ii we present the model and give some details of the computations section iii gives a brief discussion of some aspects of the mode coupling theory as used here results of our md simulations and their analysis are then presented and discussed in section iv the present simulations are carried out as state of the art isothermal isobaric xmath15 calculations the newtonian equations of xmath16 648 atoms 518 ni and 130 zr are numerically integrated by a fifth order predictor corrector algorithm with time step xmath17 25 10xmath18s in a cubic volume with periodic boundary conditions and variable box length l with regard to the electron theoretical description of the interatomic potentials in transition metal alloys by hausleitner and hafner xcite we model the interatomic couplings as in xcite by a volume dependent electron gas term xmath19 and pair potentials xmath20 adapted to the equilibrium distance depth width and zero of the hausleitner hafner potentials xcite for nixmath1zrxmath2 xcite for this model simulations were started through heating a starting configuration up to 2000 k which leads to a homogeneous liquid state the system then is cooled continuously to various annealing temperatures with cooling rate xmath21 15 10xmath22 k s afterwardsthe obtained configurations at various annealing temperatures here 1500 600 k are relaxed by carrying out additional isothermal annealing runs finally the time evolution of these relaxed configurations is modelled and analyzed more details of the simulations are given in xcite in this section we provide some basic formulae that permit calculation of xmath0 and the nonergodicity parameters xmath23 for our system a more detailed presentation may be found in refs the central object of the mct are the partial intermediate scattering functions which are defined for a binary system by xcite xmath24rightrangle quad 
labelt1endaligned where xmath25 is a fourier component of the microscopic density of species xmath26 the diagonal terms xmath27 are denoted as the incoherent intermediate scattering function xmath28rightrangle quad labelt2 the normalized partial and incoherent intermediate scattering functions are given by xmath29 where the xmath30 are the partial static structure factors the basic equations of the mct are the set of nonlinear matrix integrodifferential equations xmath31 where xmath32 is the xmath33 matrix consisting of the partial intermediate scattering functions xmath34 and the frequency matrix xmath35 is given by xmath36ijq2kb t xi misumkdeltaik leftbf s1qrightkjquad labelt6 xmath37 denotes the xmath33 matrix of the partial structure factors xmath38 xmath39 and xmath40 means the atomic mass of the species xmath26 the mct for the idealized glass transition predicts xcite that the memory kern xmath41 can be expressed at long times by xmath42 where xmath43 is the particle density and the vertex xmath44 is given by xmath45 and the matrix of the direct correlation function is defined by xmath46ij quad labelt9 the equation of motion for xmath47 has a similar form as eqt5 but the memory function for the incoherent intermediate scattering function is given by xmath48 xmath49 in order to characterize the long time behaviour of the intermediate scattering function the nonergodicity parameters xmath50 are introduced as xmath51 these parameters are the solution of eqs t5t9 at long times the meaning of these parameters is the following if xmath52 then the system is in a liquid state with density fluctuation correlations decaying at long times if xmath53 the system is in an arrested nonergodic state where density fluctuation correlations are stable for all times in order to compute xmath54 one can use the following iterative procedure xcite xmath55q cdot bf s qbf z nonumber fracq2bf sq bf nbf flbf flq bf sqbf z quad labelt13endaligned xmath56q nonumber q2 bf sq bf nbf flbf flq nonumber quadendaligned where the matrix xmath57 is given by xmath58 this iterative procedure indeed has two type of solutions nontrivial ones with xmath59 and trivial solutions xmath60 the incoherent nonergodicity parameter xmath61 can be evaluated by the following iterative procedure xmath62q quad labelt15 as indicated by eqt15 computation of the incoherent nonergodicity parameter xmath63 demands that the coherent nonergodicity parameters are determined in advance beyond the details of the mct equations of motion like t5 can be derived for the correlation functions under rather general assumptions within the lanczos recursion scheme xcite resp the mori zwanzig formalism xcite the approach demands that the time dependence of fluctuations a b is governed by a time evolution operator like the liouvillian and that for two fluctuating quantitites a scalar products b a with the meaning of a correlation function can be defined in case of a tagged particle this leads for xmath65 to the exact equation xmath66 with memory kernel xmath67 in terms of a continued fraction within xmath67are hidden all the details of the time evolution of xmath65 as proposed and applied in xcite instead of calculating xmath67 from the time evolution operator as a continued fraction it can be evaluated in closed forms once xmath65 is known eg from experiments or md simulations this can be demonstrated by introduction of xmath68 with xmath69 the laplace transform of xmath70 and xmath71 eqg1 then leads to xmath72 2left omega phi comega right 2 quad labelg5 on 
the time axis xmath73 is given by xmath74 adopting some arguments from the schematic mct eqg1 allows asymptotically finite correlations xmath75 that means an arrested state if xmath76 remains finite where the relationship holds xmath77 in order to characterize the undercooled melt and its transition into the glassy state we introduced in xcite the function xmath78 according to g7 xmath79 has the property that xmath80 in the arrested nonergodic state on the other hand if xmath81 there is no arrested solution and the correlations xmath65 decay to zero for xmath82 that means the system is in the liquid state from that we proposed xcite to use the value of xmath3 as a relative measure how much the system has approached the arrested state and to use the temperature dependence of xmath14 in the liquid state as an indication how the system approaches this state first we show the results of our simulations concerning the static properties of the system in terms of the partial structure factors xmath38 and partial correlation functions xmath83 to compute the partial structure factors xmath38 for a binary system we use the following definition xcite xmath84 where xmath85 are the partial pair correlation functions the md simulations yield a periodic repetition of the atomic distributions with periodicity length xmath86 truncation of the fourier integral in eqe5 leads to an oscillatory behavior of the partial structure factors at small xmath87 in order to reduce the effects of this truncation we compute from eqe5a the partial pair correlation functions for distance xmath88 up to xmath89 for numerical evaluation of eqe5 a gaussian type damping term is included xmath90 with xmath91 figfig1 figfig2a shows the partial structure factors xmath38 versus xmath87 for all temperatures investigated the figure indicates that the shape of xmath38 depends weakly on temperature only and that in particular the positions of the first maximum and the first minimum in xmath38 are more or less temperature independent to investigate the dynamical properties of the system we have calculated the incoherent scattering function xmath92 and the coherent scattering function xmath34 as defined in equations t1 and t2 figfig2b and figfig3a presents the normalized incoherent intermediate scattering functions xmath65 of both species evaluated from our md data for wave vector xmath93xmath94 with n 9 that means xmath95 nm xmath96 from the figure we see that xmath65 of both species shows at intermediate temperatures a structural relaxation in three succesive steps as predicted by the idealized schematic mct xcite the first step is a fast initial decay on the time scale of the vibrations of atoms xmath97 ps this step is characterized by the mct only globaly the second step is the xmath98relaxation regime in the early xmath8regimethe correlator should decrease according to xmath99 and in the late xmath8relaxation regime which appears only in the melt according the von schweidler law xmath100 between them a wide plateau is found near the critical temperature xmath101 in the melt the xmath7relaxation takes place as the last decay step after the von schweidler law it can be described by the kohlrausch williams watts kww law xmath102 where the relaxation time xmath103 near the glass transition shifts drastically to longer times the inverse power law decay for the early xmath8regime xmath104 is not seen in our data this seems to be due to the fact that in our system the power law decay is dressed by the atomic vibrations xcite and references 
therein according to our md results xmath65 decays to zero for longer times at all temperatures investigated this is in agreement with the full mct including transversal currents as additional hydrodynamic variables the full mct xcite comes to the conclusion that all structural correlations decay in the final xmath7process independent of temperature similar effects are expected from inclusion of thermally activated matter transport that means diffusion in the arrested state at xmath105 900 k 700 k the xmath65 drop rather sharply at large xmath106 this reflects aging effects which take place if a system is in a transient non steady state xcite such a behaviour indicates relaxations of the system on the time scale of the measuring time of the correlations the nonergodicity parameters are defined by eqt12 as a non vanishing asymptotic solution of the mct eqt5 fig fig3b presents the estimated xmath87dependent nonergodicity parameters from the coherent and incoherent scattering functions of ni and zr at t1005 k in order to compute the nonergodicity parameters xmath23 analytically we followed for our binary system the self consistent method as formulated by nauroth and kob xcite and as sketched in section iiia input data for our iterative determination of xmath107 are the temperature dependent partial structure factors xmath38 from the previous subsection the iteration is started by arbitrarily setting xmath108 xmath109 xmath110 for xmath111k we always obtain the trivial solution xmath112 while at t 1000 k and below we get stable non vanishing xmath113 the stability of the non vanishing solutions was tested for more than 3000 iteration steps from this resultswe expect that xmath0 for our system lies between 1000 and 1100 k to estimate xmath0 more precisely we interpolated xmath38 from our md data for temperatures between 1000 and 1100 k by use of the algorithm of press etal we observe that at xmath114 k a non trivial solution of xmath23 can be found but not at xmath115 k and above it means that the critical temperature xmath0 for our system is around 1005 k the non trivial solutions xmath23 for this temperature shall be denoted the critical nonergodicty parameters xmath116 they are included in fig fig3b by use of the critical nonergodicity parameters xmath116 the computational procedure was run to determine the critical nonergodicity parameters xmath117 for the incoherent scattering functions at t 1005 k fig3b also presents our results for the so calculated xmath117 herewe present our results about the xmath118function xcite described in section iiib the memory functions xmath119 are evaluated from the md data for xmath120 by fourier transformation along the positive time axis for completeness also xmath121 and 800 k data are included where the corresponding xmath120 are extrapolated to longer times by use of an kww approximation fig4a and fig fig4b show the thus deduced xmath119 for xmath122 nmxmath96 regarding their qualitative features the obtained xmath119 are in full agreement with the results in xcite for the nixmath123zrxmath123 system a particular interesting detail is the fact that there exists a minimum in xmath119 for both species ni and zr at all investigated temperatures around a time of 01 ps below this time xmath120 reflects the vibrational dynamics of the atoms above this value the escape from the local cages takes place in the melt and the xmath8regime dynamics are developed apparently the minimum is related to this crossover in fig fig5 and fig fig5a we display xmath124 that 
means xmath119 versus xmath120 in this figurewe again find the features already described for nixmath123zrxmath123 in xcite according to the plot there exist xmath87dependent limiting values xmath125 so that xmath119 for xmath126 is close to an universal behavior while for xmath127 marked deviations are seen xmath125 significantly decreases with increasing temperature it is tempting to identify xmath119 below xmath125 with the polynomial form for xmath119 assumed in the schematic version of the mct xcite in fig fig5 and fig fig5a the polynomial obtained by fitting the 1000 k data below xmath125 is included by a dashed line extrapolating it over the whole xmath128range by use of the calculated memory functions we can evaluate the xmath118 eqg8 in figfig6 and fig fig7 this quantity is presented versus the corresponding value of xmath120 and denoted as xmath129 for all the investigated temperatures xmath129 has a maximum xmath130 at an intermediate value of xmath128 in the high temperature regime the values of xmath130 move with decreasing temperature towards the limiting value 1 this is in particular visible in fig fig8 where we present xmath130 as function of temperature for both species ni and zr and wave vectors xmath95 nmxmath96 at temperatures above 1000 k the xmath3values increase approximately linear towards 1 with decreasing temperatures below 1000k they remain close below the limiting value of 1 a behavior denoted in xcite as a balancing on the borderline between the arrested and the non arrested state due to thermally induced matter transport by diffusion in the arrested state at the present high temperatures linear fit of the xmath3values for ni above 950 k and for zr above 1000 k predicts a crossover temperature xmath131 from liquid xmath132 to the quasi arrested xmath133 behavior around 970 k from the ni data and around 1020 k from the zr data we here identify this crossover temperature with the value of xmath0 as visible in the ergodic liquid regime and estimate it by the mean value from the ni and zr subsystems that means by xmath134 k while in xcite for the nixmath123zrxmath123 melt a xmath0value of 1120 k was estimated from xmath14 the value for the present composition is lower by about 120 k a significant composition dependence of xmath0 is expected according to the results of md simulation for the closely related coxmath11zrxmath12 system xcite over the whole xmath135range xmath0 was found to vary between 1170 and 650 k in coxmath11zrxmath12 with xmath0xmath136 xmath137 800 k regarding this the present data for the nixmath11zrxmath12 system reflect a rather weak xmath0 variation from the simulated atomic motions in the computer experiments the diffusion coefficients of the ni and zr species can be determined as the slope of the atomic mean square displacements in the asymptotic long time limit xmath138 fig fig9 shows the thus calculated diffusion coefficients of our nixmath1zrxmath2 model for the temperature range between 600 and 2000 k at temperatures above approximately 1000 k the diffusion coefficients for both species run parallel to each other in the arrhenius plot indicating a fixed ratio xmath139 in this temperature regime at lower temperatures the ni atoms have a lower mobility than the zr atoms yielding around 800 k a value of about 10 for xmath140 that means here the zr atoms carry out a rather rapid motion within a relative immobile ni matrix according to the mct above xmath0 the diffusion coefficients follow a critical power law xmath141 with non universal 
exponent xmath142 xcite in order to estimate xmath0 from this relationship we have adapted the critical power law by a least mean squares fit to the simulated diffusion data for 1000 k and above according to this fit the system has a critical temperature of about 850 900 k similar results for the temperature dependence of the diffusion coefficients have been found in md simulations for other metallic glass forming systems eg for nixmath123zrxmath123 xcite for nixmath143zrxmath12 xcite cuxmath144zrxmath145 xcite or nixmath146bxmath147 xcite in all cases like here a break is observed in the arrhenius slope in the mentioned zr systems this break is related to a change of the atomic dynamics around xmath0 whereas for nixmath146bxmath147 system it is ascribed to xmath148 as in xcite xmath0 andxmath148 apparently fall together there is no serious conflict between the obervations the present contribution reports results from md simulations of a nixmath1zrxmath2 computer model the model is based on the electron theoretical description of the interatomic potentials for transition metal alloys by hausleitner and hafner xcite there are no parameters in the model adapted to the experiments there is close agreement between the xmath0 values estimated from the dynamics in the undercooled melt when approaching xmath0 from the high temperature side the values are xmath149 k from the xmath3parameters and xmath150 k from the diffusion coefficients as discussed in xcite the xmath0estimates from the diffusion coefficients seem to depend on the upper limit of the temperature region taken into account in the fit procedure where an increase in the upper limit increases the estimated xmath0 accordingly there is evidence that the present value of 950 k may underestimate the true xmath0 by about 10 to 50 k as it based on an upper limit of 2000 k only taking this into account the present estimates from the melt seem to lead to a xmath0 value around 1000 kthe xmath0 from the nonergodicity parameters describe the approach of the system towards xmath0 from the low temperature side they predict a xmath0 value of 1005 k this value is clearly outside the range of our xmath0 estimates from the high temperature ergodic melt we consider this as a significant deviation which however is much smaller than the factor of two found in the modelling of a lennard jones system xcite the here observed deviation between the xmath0 estimates from the ergodic and the so called nonergodic side reconfirm the finding from the soft spheres modelxcite of an agreement within some 10 xmath4 between the different xmath0estimates
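As a numerical companion to the analysis described above, the evaluation of the partial structure factors from the partial pair correlation functions (a truncated Fourier integral over r up to half the simulation box, with a Gaussian damping factor to suppress the small-q ripples, as in the paper's eq. e5) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the Ashcroft-Langreth-type prefactor, the grids, the toy g(r), and the damping length are all hypothetical, and the paper's exact normalization may differ.

```python
import numpy as np

def partial_structure_factor(r, g_ab, q, rho, c_a, c_b, kronecker, r_damp):
    """Fourier-transform a partial pair correlation function g_ab(r) into S_ab(q).

    A Gaussian damping factor exp(-(r/r_damp)**2) suppresses truncation ripples
    at small q, as described in the text.  The prefactor follows an
    Ashcroft-Langreth-type convention; the paper's normalization may differ.
    """
    damp = np.exp(-(r / r_damp) ** 2)
    # sin(qr)/(qr) written via np.sinc, which is sin(pi x)/(pi x)
    integrand = r ** 2 * (g_ab - 1.0) * np.sinc(q[:, None] * r / np.pi) * damp
    integral = np.trapz(integrand, r, axis=1)
    return kronecker + 4.0 * np.pi * rho * np.sqrt(c_a * c_b) * integral

# hypothetical usage: a toy stand-in for an MD-sampled g_NiNi(r) up to ~L/2
r = np.linspace(0.01, 1.2, 600)                       # nm
g_nini = 1.0 + 0.8 * np.exp(-(r - 0.25) ** 2 / 0.002)  # toy pair correlation
q = np.linspace(1.0, 80.0, 400)                       # nm^-1
S_nini = partial_structure_factor(r, g_nini, q, rho=60.0, c_a=0.5, c_b=0.5,
                                  kronecker=1.0, r_damp=1.0)
```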
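The second high-temperature route to xmath0 discussed above, extracting diffusion coefficients from the long-time slope of the mean square displacements and then fitting the MCT critical power law D(T) ~ (T - Tc)^gamma to the data at 1000 K and above, can be sketched in the same spirit. Only the procedure (MSD slope divided by 6, least-squares power-law fit) follows the text; the helper names, toy diffusion values, and fit bounds below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def diffusion_coefficient(t, msd):
    """Long-time diffusion coefficient from the MSD slope, D = (d<r^2>/dt) / 6."""
    slope, _intercept = np.polyfit(t, msd, 1)
    return slope / 6.0

def mct_power_law(T, A, Tc, gamma):
    """MCT critical power law on the liquid side, D(T) = A * (T - Tc)**gamma."""
    return A * (T - Tc) ** gamma

# example MSD usage with toy linear data: t in ps, msd in nm^2
t_ps = np.linspace(50.0, 500.0, 10)
D_example = diffusion_coefficient(t_ps, 6.0e-3 * t_ps + 0.1)

# toy diffusion data for one species at 1000 K and above (hypothetical values)
T = np.array([1000.0, 1100.0, 1200.0, 1400.0, 1600.0, 1800.0, 2000.0])   # K
D = np.array([0.25, 0.80, 1.60, 3.70, 6.40, 9.70, 13.5]) * 1e-9          # m^2/s

# least-squares fit of the critical power law; bounds keep Tc below the data range
popt, _ = curve_fit(mct_power_law, T, D, p0=(1e-11, 900.0, 2.0),
                    bounds=([0.0, 0.0, 0.5], [np.inf, 995.0, 5.0]))
A_fit, Tc_fit, gamma_fit = popt
print(f"power-law fit: Tc ~ {Tc_fit:.0f} K, gamma ~ {gamma_fit:.2f}")
```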
We use molecular dynamics computer simulations to investigate the critical temperature xmath0 of the dynamical glass transition proposed by the mode coupling theory (MCT) of dense liquids for a glass-forming Ni xmath1 Zr xmath2 system. The critical temperature xmath0 is estimated from several different quantities and the consistency of the resulting values is checked, namely from (i) the non-vanishing nonergodicity parameters obtained as asymptotic solutions of the MCT equations in the arrested state, (ii) the xmath3 parameters describing the approach of the melt towards the arrested state on the ergodic side, and (iii) the diffusion coefficients in the melt. The resulting xmath0 values are found to agree within about 10 xmath4.
Introduction, Simulations, Theory, Results and discussions, Conclusion
in the study of physics the stability criteria of a system or configuration is one of the main interesting aspects unstable system or configurations are generally not realizable in nature and they are generally an intermediate stage in the dynamical evolution of a system a black hole system in general relativity can also be put in the above mentioned category the question one asks there is whether a black hole which is stable under some perturbation ie if we perturb the black hole from outside whether it comes back to its original state after some time or whether the perturbation grows unbound making the black hole unstable the study of black hole perturbations remains an extremely intriguing topic which has enormous effect on various important properties of a black hole xcite in general the dynamical evolution of perturbations of a black hole background can be classified into three stages the first of which consists of an initial outburst of wave depending completely on the initial perturbing field the second stage consists of damped oscillations known in the literature as the quasinormal modes qnm whose frequency turns out to be a complex number the real part representing the oscillation frequency and the imaginary part representing damping qn frequencies completely depend on the background and not on the field which is causing the perturbation and thereby giving immense importance to these modes which are used to determine the black hole parameters mass charge and angular momentum the third is a power law tail behaviour at very late times the equations governing the black hole perturbations in most of the cases can be cast into a schrdinger like wave equation the qnms are solutions to the wave equation with complex frequencies with a boundary condition which are completely ingoing at the horizon and purely outgoing at asymptotic infinity with the first ever detection of the gravitational wave xcitethe interest in studying black hole perturbations has gained another peak apart from the fact that the qn frequencies contain important information about the black hole parameters they were also found to be important from the point of view of ads cft correspondence it has been found that qnms in ads space time appear naturally in the description of the dual conformal field theory on the boundary xcite thereby directing the study of qnms towards ads black holes xcite although the qnms are classical in origin they have been shown to provide glimpses to quantum nature of black holes xcite however in the present work we will be focussing on the second of the above three stages of evolution of black hole perturbation in a regular black hole background in asymptotically de sitter space time the importance of studying black holes in de sitter space lies in the fact that our universe looks like asymptotically de sitter at very early and late times recent observational data also indicates that our universe is going through a phase of accelerated expansion xcite thereby providing the existence of a positive cosmological constant in general de sitter space turns out to be a maximally symmetric solution to the vacuum einstein equations with a positive cosmological constant just like the ads cft correspondence a holographic ds cft duality exists between gravity in de sitter space and conformal field theory on the boundary in one less dimension xcite coming back to the perturbations and stability of black holes in de sitter space there have been a lot of work xcitexcite on quasinormal modes of scalar 
electromagnetic gravitational dirac perturbations decay of charged fields asymptotic quasinormal modes and signature of quantum gravity etc on another front it is well known that general relativity is plagued with the appearance of singularities the problem of avoiding the singularities in general relativity therefore is one of the most fundamental ones and it is a very old problem in this regard regular black hole solutions play a central role when a black hole does not have a space time singularity at the origin it is termed as a regular black hole in the literature the first solution of such regular black holes with non singular geometry satisfying the weak energy condition were obtained by bardeen xcite which is now known as the bardeen black hole however the solution bardeen proposed lacked physical motivation because the solution was not a vacuum solution rather gravity was modified by introducing some form of matter and thereby introducing an energy momentum tensor in the einstein s equation the introduction of the energy momentum tensor was done in an ad hoc manner much later ayn beato and garca xcite showed the energy momentum tensor to be the gravitational field of some magnetic monopole arising out of a specific form of non linear electrodynamics subsequently many other solutions xcitexcite motivating the avoidance of singularity was proposed in the literature there were many works published regarding such regular black holes stability properties xcite qnms xcite thermodynamics xcite and geodesic structure xcite of regular black holes to mention a few very recentlyfernando xcite has proposed a de sitter branch for the regular bardeen black hole and calculated the grey body factor for such a black hole in this paper we will be discussing the qnms of the bardeen de sitter henceforth bds black hole due to scalar both massless and massive and dirac perturbations although study of scalar field perturbations in a black hole background and its corresponding qnms is not new the dirac field perturbations on the other hand are relatively less studied therefore apart from the scalar perturbations it will also be interesting to study the dirac perturbations in the regular black hole backgrounds in de sitter space the plan of the paper is as follows in the next section we briefly discuss the bds black hole in section 3we present a brief discussion of wkb method along with a study of the scalar qnms of the bds black holes section 4 deals with the dirac quasinormal modes of the bds black hole finally in section 5 we conclude the paper with a brief discussion on future directions in this section we will briefly discuss the bardeen de sitter bds black hole following the works of fernando xcite the author of this paper modified the works of xcite to incorporate a positive cosmological constant in the action xmath0 where xmath1 is the ricci scalar and xmath2 is function of the field strength tensor of the non linear electrodynamics xmath3 and its form is given by xmath4 in the above the parameter xmath5 is related to the magnetic charge and the mass of the black hole in the following manner xmath6 if one derives the equations of motion from the above actionaction then following equations will be arrived at xmath7 it was shown in xcite that a static spherically symmetric solution for the above set of equations exist xmath8 with xmath9 being given xmath10 the solution of xmath11 gives the horizon and in the particular case of bds black hole there may be three real roots implying three horizons the 
black hole inner and outer horizons along with the cosmological horizon there lies the possibility of getting either one real root corresponding to cosmological horizon only for a set of parameters of this theory or a possibility of getting degenerate roots corresponding to a merger of the inner and outer black hole horizons for a range of parameters xmath12 and xmath13 structurally the bds black hole is similar to the reissner nordstrm de sitter rnds or born infeld de sitter bids black holes which also admits a possibility of three distinct horizons as well as a single or degenerate horizons however it was shown in xcite that the event horizon is larger in the case of rnds black hole compared to a bds one the interesting nature of bds geometry is its non singular structure everywhere it can be checked by direct calculation that all the scalar curvatures xmath1 xmath14 xmath15 are finite everywhere except for the electromagnetic field invariant xmath16 which is singular at xmath17 xcite in this section we will consider the massless and massive scalar field perturbations of the bds black hole geometry to study the behaviour of the qnms in bds background with the given black hole parameters as discussed in section 2 bds background metric is given by equations metric and fr the klein gordon equation for a massless scalar field xmath18 is xmath19 which explicitly takes the form xmath20 as usual we introduce the ansatz for xmath18 as xmath21 with the above ansatz we have the standard schrdinger like wave equation for the perturbation of the bds metric by a scalar field is given by xmath22 where xmath23 the coordinate xmath24 is the standard tortoise coordinate related to radial coordinate xmath25 as xmath26 the advantage of using the tortoise coordinate lies in the fact that the range of the coordinate now extends between xmath27 to xmath28 whereas in the old radial coordinate xmath25 the physically accessible region lies between the black hole and cosmological horizon note also that the potential xmath29 as xmath30 it can be easily seen by plotting the scalar field potential against the radial coordinate for various values of the multipole number xmath31 that the xmath32 mode has a distinct local minimum between the black hole outer horizon and the cosmological horizon see fig 1 which was also pointed out in xcite for this reason the method used in this paper to evaluate the qnms for the bds black hole namely the wkb approach is not a valid one to evaluate qnms for xmath32 modes therefore from now on we will only talk about xmath33 modes for the massless scalar qnms of bds black hole as already stated we will solve the wave equation for complex qn frequencies semi analytically using the sixth order wkb method developed in xcite it has been shown extensively in literature that wkb method works extremely well for determining qn frequencies the sixth order wkb method is more accurate than the third order method and the former in fact gives results practically coinciding with those obtained from full numerical integration of the wave equation xcite for low overtones ie for modes with small imaginary parts and for all multipole numbers xmath34 the sixth order formula for a general black hole potential xmath35 is mentioned below xmath36 where xmath37 is peak value of xmath35 xmath38 xmath39 is the value of the radial coordinate corresponding to the maximum of the potential xmath35 and xmath40 is the overtone number qn frequencies xmath41 would be of the form xmath42 in equationqnmeqn xmath43 and 
xmath44 are given by xcite xmath45 lambda3fracnfrac122vr0bigfrac56912 left fracv03vr0 right477 188b2nonumber frac1384 left fracv032 v04vr03right 51 100b2nonumber frac12304left fracv04vr0right267 68b2nonumber frac1288leftfracv03v05vr02right19 28b2nonumber frac1288 leftfracv06vr0right5 4b2bigendaligned in the above expression xmath46 xmath47 at xmath48 and xmath49 xmath50 and xmath51 can be found in the appendix of xcite the above method also works extremely well in the eikonal limit of large xmath31 corresponding to large quality factor which will also be discussed in the paper using eqn qnmeqn we computed the qnms and in we plotted re xmath41 and magnitude of i m xmath41 vs black hole mass both re xmath41 and i m xmath41 decreases when mass xmath52 is increased in tablei we list the values of the qn frequencies obtained by using third order and sixth order wkb approach for the parameter range xmath53 and xmath54 the data from the table suggests that the value of the real part of the frequency shows a steady increase over its third order outcome but on the other hand the negative imaginary part obtained using sixth order wkb method shows a steady decline when compared to the third order result p3cmp3cmp35cmp35 cm multipole number overtone 3rd order wkb 6th order wkb xmath311n0 0300446 0089967i 0302242 0090150i n1 0278912 0278097i 02829930277074i n0 0499385 0088861i 0499841 0088903i xmath312n1 0485040 0269800i 0486281 0269658i n2 0461291 0456456i 0462177 0458553i n0 0698242 0088552i 06984170088563i xmath553n1 0687778 0267316i 0688277 0267273i n2 0669085 0449812i 0669173 0450547i n3 0644523 0635942i 0643394 0640730i n0 0897184 0088421i 0897268 0088426i n1 0888985 0266272i 0889230 0266255i xmath314n2 0873746 0446645i 0873717 0446952i n3 0853024 0629992i 0851862 0632179i n4 0828001 0815925i 08252850823178i n0 109619 0088354i 109624 0088356i n1 108946 0265738i 108960 0265730i n2 107666 0444914i 107663 0445061i xmath315n3 105883 0626438i 105796 0627545i n4 103691 0810304i 103452 0814191i n5 101159 0996158i 100748 1005740i in and we plot the behaviour of low lying qn frequencies vs xmath13 and xmath56 for different xmath31 both the plots reveals that re xmath41 and i m xmath41 decreases with increasing xmath13 real part of frequencies still increasing steadily with g increased and imaginary part decreases in magnitude we have also computed the qn frequencies for larger multipole number xmath31 with overtone xmath57 only we plot for xmath31 ranging between 1 to 40 in while we have fixed the values of xmath58 xmath59 and xmath57 rexmath41 increases linearly with xmath31 xcite while magnitude of imxmath41 first decreases and remains constant for larger xmath31 to examine the field oscillations we will define the quality factorqf as xmath60 we plotted the qf versus the parameters xmath13 and xmath56 in quality factor increases with increasing xmath56 and decreases with an increase in xmath13 thus qf implies that oscillations will be more with larger magnetic charge xmath56 and decay faster for small xmath13 it is worth mentioning here that by computing the lyapunov exponent the inverse of the instability timescale associated with the geodesic motion one can show that in the eikonal limit qnms of black holes in any dimensions are determined by the parameters of the circular null geodesics xcite this is a very strong result and is independent of the field equations the only assumption goes into the calculation is the fact that the black hole space time is static spherically symmetric and asymptotically 
flat however a non trivial example of non asymptotically flat near extremal schwarzschild de sitter black hole space time was also discussed in this context the same argument can be applied in case of bds black holes too in the limit of near extremal nariai or cold black holes where either the black hole horizon and the cosmological horizon merges or the inner and outer horizon coincides in these limits it may be possible to get the eikonal limit using the wkb method following xcite for massive scalar perturbation the klein gordon equationis given by xmath61 where xmath62 is scalar field mass similarly we chose the ansatz as in equation ansatz and finally we have the schrdinger like equation and modified effective potential as xmath63 where the tortoise coordinate xmath24 is related to xmath25 by xmath26 in we plot the effective potential xmath64xmath25 vs xmath25 for different scalar mass xmath62 we have chosen the parameters xmath65 xmath66 xmath67 and xmath68 vs xmath25 for various massesxmath62width302 notice that the peak of the potential depends on the scalar field mass xmath62 with other parameters fixed since qnms are known to be the waves trapped within the peak of this potential xcite as discussed in xcite we expect similar behaviour for bds black hole that the imaginary part of the quasinormal modes frequencies will decrease for large xmath62 however the real part of qnms will increase as xmath62 increases in and we have plotted the variations of imaginary and real part of xmath41 versus scalar field mass xmath69 for different fixed values of parameters xmath13 xmath52 xmath56 and xmath70 we have plotted all the data obtained by 3rd 4th 5th and 6th order wkb calculations simultaneously to compare the accuracy between different orders we observed from the plots of both imxmath41 and rexmath41 that for low overtone number xmath40 the accuracy between lower and higher order wkb is not much significant but for large xmath40 deviation is more indeed the magnitude of imxmath41 decreases with increasing scalar mass as expected on the contrary the magnitude of rexmath41 increases with increasing field mass in tableii we present the numerical values of qn frequencies with corresponding parameters since it is well known that wkb method is more accurate for xmath71 we have tabulated the qnms frequencies for xmath71 only colsoptionsheader ii as in massless case we plot in and the behaviour of qn frequencies with xmath13 and xmath56 for xmath57 and xmath72 respectively rexmath41 increases with increasing value of magnetic charge xmath56 while magnitude of imxmath41 decreases this behaviour of qnms can be well understood from the form of the potential as the height of the potential peak increases with xmath56 therefore real part of qnms increases on the other hand shows that rexmath41 decreases with an increase in cosmological constantxmath13 but imxmath41 increases with xmath13 in magnitude similarly if we plot the variation of potential with xmath13 the height decreases thus i m xmath41 increases hence we can say for scalar field perturbations with the scalar mass included the oscillations decay faster with large cosmological constant xmath13 and oscillates better for large magnetic charge xmath56 in this section we will extend our discussion to massless dirac perturbations for bds black holes as in xcite by starting from dirac equation in spherically symmetric curved background the schrdinger like equation we finally arrived at is given by xmath73 where xmath24 is the tortoise coordinate 
given by xmath74 xmath75 is the energy the effective potentials is given by xmath76 it is worth mentioning here that the potentials xmath77 and xmath78 corresponding to dirac particles and anti particles are supersymmetric to each other and derived from the same superpotential xmath79 we will evaluate the quasinormal modes by solving equation sc1 taking only xmath77 as it is well known that both dirac particles and anti particles have the same quasi normal spectra xcite in we showed the behaviour of the effective potential xmath77 only for bds black hole with spherical harmonic xmath70 for parameters xmath80 vs xmath25 for different values of xmath31 for the massless dirac perturbations width226 we have computed the massless fermion qnms semi analytically using sixth order wkb methodthe plots are shown below in we showed the variation of real and imaginary part of xmath41 with cosmological constant xmath13 and in the variation with magnetic charge xmath56 for different values of xmath31 with fixed overtone number are shown we can clearly see from the plots that rexmath41 slowly increases with an increase in the magnetic charge xmath56 of the bds black hole whereas it slowly decreases with increasing value of xmath13 whereas the behaviour of the imaginary part of the frequency reverses its role ie as we increase the cosmological constant the imaginary part increases however it decreases if we increase the magnetic charge keeping all other parameters fixed in this paper we have discussed the massless and massive scalar field perturbations and the massless fermionic perturbations for a regular bds black hole we have used sixth order wkb approximation method to calculate the qnms frequencies we studied how the frequencies vary as a function of the scalar field mass xmath62 multipole number xmath31 as well as with the parameters like the cosmological constant xmath13 black hole mass xmath52 and magnetic charge xmath56 we found that the qn frequencies decrease with an increase in black hole mass xcite the plots of frequencies versus the scalar mass show that re xmath41 increases with mass xmath62 while i m xmath41 decreases the figures also suggested that if we plot the frequencies from low to higher overtones taking into account different wkb orders we see that comparative accuracy is better for xmath81 we also found that re xmath41 decreases with an increase in cosmological constant xmath13 for scalar both massless and massive perturbations as well as with dirac perturbations but i m xmath41 decreases in massless and fermionic case however increases for the massive case when xmath13 is increased we have also studied the behaviour of how the q factor for the massless scalar field varies with xmath13 and xmath56 for massive scalar perturbations we see that mass xmath62 enhances the field oscillations and decreases the damping for small xmath13 unlike in the massless case where it is just the opposite in all the three scenariosreal frequency of oscillations re xmath41 increases steadily with magnetic charge xmath56 but the damping denoted by i m xmath41 decreases for future directions it would be interesting to study the time evolution of perturbations for this particular black hole apart from that in xcite the authors have used the conformal properties of the spinor field to obtain the dirac qnms for a higher dimensional schwarzschild tangherlini black hole they have described these modes in the light of the so called split fermion models where quarks and leptons exist on different branes in 
order to keep proton stability such split fermion theories also have massive fermions in the bulk and it will be interesting to study such massive dirac perturbations in the context of higher dimensional generalization of the bds black holes spphys kokkotas k d and schmidt b g 1999 living rev 2 2 nollert h p 1999 class quantum grav 16 r159 berti e cardoso v and starinets a o 2009 classquantgrav 26 163001 konoplya r a and zhidenko a 2011 revmodphys 83 793 836 abott b p et al 2016 phys 116 241102 birmingham d sachs i and solodukhin s n 2002 phys lett 88 151301 birmingham d sachs i and solodukhin s n 2003 phys rev d67 104026 horowitz g t and hubeny v e 2000 phys d62 024027 konoplya r a 2002 phys rev d66 044009 cardoso v and lemos j p s 2001 phys d63 124015 hod s 1998 phys 81 4293 motl l and neitzke a 2003 adv theor math phys 7 307 maggiore m 2008 physrevlett 100 141301 perlmutter s etal 1997 astrophys j 483 565 riess a g et al 1998 astron j 116 1009 tegmark m et al 2004 phys d69 103501 strominger a 2001 jhep 0110 034 strominger a 2001 jhep 0111 049 abdalla e castello branco k h c and lima santos a 2002 physrev d66 104018 konoplya r a 2003 phys rev d68 124017 konoplya r a and zhidenko a 2004 jhep 06 037 zhidenko a 2004 class quant grav 21 273 cardoso v and lemos j p s 2003 phys rev d67 084020 molina c 2003 phys rev d68 064007 lopez ortega a 2006 gen 38 1565 lopez ortega a 2006 gen grav 38 743 lopez ortega a 2007 gen 39 1011 lopez ortega a 2008 gen 40 1379 smirnov a a 2005 class 22 4021 bardeen j m 1968 proceedings of gr5 tbilisi 174 ayn beato e and garcia a 2000 phys b493 149 ayn beato and garca a 1998 phys rev lett 80 5056 bronnikov k a 2001 phys d63 044005 hayward s a 2006 phys lett 96 031103 dymnikova i 2004 class 21 4417 bronnikov k a and fabris j a 2006 phys 96 251101 moreno c and sarbach o 2003 phys rev d67 024028 flachi a and lemos j 2013 phys d87 024034 fernando s and correa j 2012 phys d86 64039 man j and cheng h 2014 gen grav 46 1559 zhou s chen j and wang y 2012 int d21 1250077 fernando s arxiv 161105337 gr qc fernando s correa juan 2012 phys rev d86 064039 cardoso v berti e witek h zanchin v t 2009 physrev d79 064016 akira ohashi and masa aki sakagami classical and quantum gravity 21 3973 d r brill and j a wheeler rev 29 465 1957 cho h t cornell a s doukas j and naylor w 2007 phys rev d75 104005 schutz b and will c m 1988 astrophys j 291 l33 iyer s and will c m 1987 phys rev d35 3621 konoplya r a 2003 phys revd68 024018 jing j l 2004 phys rev d69 084009
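As a numerical companion to the scalar QNM analysis of section 3 above, the structure of the calculation (evaluate the effective potential between the event and cosmological horizons, locate its peak, and feed the derivatives at the peak into a WKB formula) can be illustrated with the lowest-order (Schutz-Will) approximation, omega^2 = V0 - i(n+1/2) sqrt(-2 V0''), with derivatives taken with respect to the tortoise coordinate. This is only a sketch under stated assumptions: the sixth-order corrections actually used in the paper are omitted, the metric function is taken in the commonly quoted Bardeen-de Sitter form f(r) = 1 - 2Mr^2/(r^2+g^2)^{3/2} - Lambda r^2/3, and all parameter values below are hypothetical.

```python
import numpy as np

M, g, Lam, ell = 1.0, 0.4, 0.02, 2      # hypothetical BdS parameters, l = 2 mode

def f(r):
    """Bardeen-de Sitter metric function (commonly quoted form, assumed here)."""
    return 1.0 - 2.0 * M * r**2 / (r**2 + g**2)**1.5 - Lam * r**2 / 3.0

def V(r):
    """Effective potential for a massless scalar, V = f(r)[l(l+1)/r^2 + f'(r)/r]."""
    h = 1e-6
    fprime = (f(r + h) - f(r - h)) / (2.0 * h)
    return f(r) * (ell * (ell + 1) / r**2 + fprime / r)

# locate the potential peak on a grid assumed to lie between the two horizons
r_grid = np.linspace(2.3, 6.0, 20000)
r0 = r_grid[np.argmax(V(r_grid))]
V0 = V(r0)

# second derivative with respect to the tortoise coordinate: d/dr* = f(r) d/dr
h = 1e-4
dVdr = lambda r: (V(r + h) - V(r - h)) / (2.0 * h)
d2V_drstar2 = f(r0) * (f(r0 + h) * dVdr(r0 + h) - f(r0 - h) * dVdr(r0 - h)) / (2.0 * h)

n = 0   # fundamental overtone
omega = np.sqrt(V0 - 1j * (n + 0.5) * np.sqrt(-2.0 * d2V_drstar2 + 0j))
print(f"lowest-order WKB estimate: omega ~ {omega.real:.4f} - {abs(omega.imag):.4f} i")
```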
We compute the quasinormal (QN) frequencies of the regular Bardeen-de Sitter (BdS) black hole for massless and massive scalar field perturbations as well as for massless Dirac perturbations. We analyze the behaviour of both the real and imaginary parts of the quasinormal frequencies as the black hole parameters are varied.
Introduction, A discussion on BdS black hole, QNMs of massless and massive scalar perturbations in BdS black hole, Dirac QNMs in BdS black hole, Summary and conclusion
nature has provided us with a variety of neutrino sources from the not yet observed 19 k cosmological background to the icecube pev neutrinos xcite whose origin is still mysterious neutrinos are intriguing weakly interacting particles after 1998 many unknown properties have been determined thanks to the discovery of neutrino oscillations first proposed in xcite and observed by the super kamiokande experiment using atmospheric neutrinos xcite this discovery is fundamental for particle physics for astrophysics and for cosmology neutrino oscillations is an interference phenomenon among the xmath0 mass eigenstates that occurs if neutrinos are massive and if the mass propagation basis and the flavor interaction basis do not coincide the maki nakagawa sakata pontecorvo matrix relates these two basis xcite within three active flavors such a matrix depends on three mixing angles one dirac and two majorana cp violating phases in the last two decades solar reactor and accelerator experimentshave precisely determined most of the oscillation parameters including the so called atmospheric xmath1evxmath2 and solar xmath3evxmath2 mass squared differences xcite moreover the sign of xmath4 has been measured since xmath5b neutrinos undergo the mikheev smirnov wolfenstein msw effect xcite in the sun xcite the sign of xmath6 is still unknown either xmath7 and the lightest mass eigenstate is xmath8 normal ordering or hierarchy or xmath9 it is xmath10 inverted ordering most of neutrino oscillation experiments can be interpreted within the framework of three active neutrinos however a few measurements present anomalies that require further clarification sterile neutrinos that do not couple to the gauge bosons but mix with the other active species could be the origin of the anomalies upcoming experiments such as stereo or cesox will cover most of the mixing parameters identified in particular by the reactor anomaly xcite among the fundamental properties yet to be determined are the mechanism for the neutrino mass the absolute mass value and ordering the neutrino nature dirac versus majorana the existence of cp violation in the lepton sector and of sterile neutrinos the combined analysis of available experimental results shows a preference for normal ordering and for a non zero cp violating phase currently favouring xmath11 although statistical significance is still low xcite in the coming decades experiments will aim at determining the mass ordering the dirac cp violating phase the neutrino absolute mass and hopefully nature as well moreover super kamiokande with gadolinium should have the sensitivity to discover the relic supernova neutrino background xcite electron neutrinos are constantly produced in our sun and in low mass main sequence stars through the proton proton pp nuclear reaction chain that produces 99 xmath12 of their energy by burning hydrogen into helium4 xcite the corresponding solar neutrino flux receives contributions from both fusion reactions and beta decays of xmath13be and xmath5b figure 1 first measured by r davis pioneering experiment xcite such flux was found to be nearly a factor of three below predictions xcite over the decades solar neutrino experiments have precisely measured electron neutrinos from the different pp branches usually referred to as the pp pep xmath13be and xmath5b and hep neutrinos the measurement of a reduced solar neutrino flux compared to standard solar model predictions the so called the solar neutrino deficit problem has been confirmed by experiments mainly 
sensitive to electron neutrinos but with some sensitivity to the other flavors the advocated solutions included unknown neutrino properties eg flavor oscillations a neutrino magnetic moment coupling to the solar magnetic fields neutrino decay the msw effect and questioned the standard solar model in particular the msw effect is due to the neutrino interaction with matter while they traverse a medium o and xmath14n neutrinos have not been observed yet xcitescaledwidth700 the solar puzzle is definitely solved by the discovery of the neutrino oscillation phenomenon xcite and the results obtained by the sno and kamland experiments see xcite for a review on solar neutrino physics in fact using elastic scattering charged and neutral current neutrino interactions on heavy water the sno experiment has showed that the measurement of the total xmath5b solar neutrino flux is consistent with the predictions of the standard solar model solar electron neutrinos convert into the other active flavors in particular the muon and tau neutrino components of the solar flux has been measured at 5 xmath15 xcite moreover the reactor experiment kamland has definitely identified the large mixing angle lma solution by observing reactor electron anti neutrino disappearance at an average distance of 200 km xcite the ensemble of these observations shows that low energy solar neutrinos are suppressed by averaged vacuum oscillations while neutrinos having more than 2 mev energy are suppressed because of the msw effect figure 2 theoretically one expects xmath16 with xmath17 for xmath18 mev solar neutrinos for high energy portion of the xmath5b spectrum the matter dominated survival probability is xmath19 see xcite the precise determination of the transition between the vacuum averaged and the lma solution brings valuable information since deviations from the simplest vacuum lma transition could point to new physics such as non standard neutrino interactions xcite be xmath5b neutrinos from the borexino experiment the results are compared to averaged vacuum oscillation prediction xmath20 mev and the msw prediction xmath21 mev taking into account present uncertainties on mixing angles figure from xcite scaledwidth700 the borexino experiment has precisely measured the low energy part of the solar neutrino flux namely the pep xcite xmath13be xcite moreover by achieving challenging reduced backgrounds the collaboration has reported the first direct measurement of pp neutrino the keystone of the fusion process in the sun the measured flux is consistent with the standard solar model predictions xcite the ensemble of solar observations has established that the sun produces xmath22 ergs s via the pp chain moreover the occurrence of the msw effect for the high energy solar neutrinos shows that these particles change flavor in vacuum in a very different way than in matter in fact in the central high density regions of the star the flavor coincide with the matter eigenstates during their propagation towards the edge of the sun they encounter a resonance if the msw resonance condition is fulfilled and evolve adiabatically through it depending on the neutrino energies squared mass difference value and the gradient of the matter density adiabaticity implies that the matter eigenstates mixing is suppressed at the resonance in the latter case electron neutrinos can efficiently convert into muon and tau neutrinos the msw phenomenon is analogous to the two level system in quantum mechanics it occurs in numerous contexts including the early 
universe at the epoch of the primordial elements formation massive stars like core collapse supernovae accretion disks aroung black holes and the earth future measurements will aim at observing solar neutrinos produced in the carbon nitrogen oxygen cno cycle which is thought to be the main mechanism for energy production in massive main sequence stars xcite borexino experiment has provided the strongest constraint on the cno cycle which represents 1 xmath12 of energy production in the sun consistent with standard solar model predictions xcite the achievement of increased purity both by borexino and sno could allow to reach the sensivity for this challenging measurement beyond furnishing confirmation of stellar evolutionary models the observation of cno neutrinos could help solving the so called solar opacity problem standard solar models predict solar neutrino fluxes from the pp cycle in agreement with observations however the gs98sfii and agss09sfii models differ for their treatment of the metal element contributions elements heavier than he the first model uses older abundances for volatile elements that are obtained by an absorption line analysis in which the photosphere is treated as one dimensional that yields a metallicity of xmath23 xmath24 and xmath25 being the metal and hydrogen abundances respectively with solar fusion ii cross sections gs98sfii the second model takes abundances from a three dimensional photospheric model xmath26 agss09sfii the latter produces a cooler core by xmath27 and lower fluxes of temperature sensitive neutrinos such as xmath5b ones a comparison of the solar parameters used in the two models and corresponding predictions on the neutrino fluxesare given in tables 1 and 2 of refxcite the solar opacity problem is the inconsistency between agss09 that uses the best description of the solar photosphere while gs98 has the best agreement with helioseimic data that are sensitive to the interior composition since there is approximately 30 xmath12 difference between c and n abundances in the two models a measurement of cno neutrinos with xmath28 precision which could be achieved in the future will allow to determine the solar opacity core collapse supernovae sne are stars with mass xmath29 xmath30 being the sun s mass whose core undergoes gravitational collapse at the end of their life these include types ii and ib c depending on their spectral properties they are of type ii if they exhibit h lines in their spectra and of type i if they do nt because the star has lost the h envelope sne iib have a thin h envelope type ii p and ii l present a plateau or a linear decay of the light curves after the peak the sne ib shows he and si lines while sne ic shows none of these indicating that before collapse the star has lost both the h envelope and he shells the supernova can still appear as bright if the h envelope is present otherwise it can be invisible type ib c xcite in 1960hoyle and fowler proposed that stellar death of snii and i b happens because of the implosion of the core xcite the same year colgate and johnson suggested that a bounce of the neutron star forming launches a shock that ejects the matter to make it unbound xcite the prompt model it was realised by colgate and white xcite that a gravitational binding energy of the order of xmath31 erg associated with the collapse of the star core to a neutron star ns would be released as neutrinos that would deposit energy to trigger the explosion arnett xcite and wilson xcite critized the model because it would not 
give enough energy wilson revisited the model and developed it further the ejection of the mantle would be preceded by an accreting phase in the so called delayed neutrino heating mechanism xcite the fate of a massive star is mainly determined by the initial mass and composition and the history of its mass loss their explosion produces either neutron stars or black holes directly or by fallback their initial masses range from 9 to 300 solar masses xmath30 stars having 6 8 xmath30 develop an o ne mg core while those with xmath32 xmath30 possess an iron core before collapse hypernovae are asymmetric stellar explosions with high ejecta velocities they are very bright producing a large amount of nickel they are often associated with long duration gamma ray bursts collapsars are all massive stars whose core collapse to a black hole and that have sufficient angular momentum to form a disk see eg xcite fig2 on 23 february 1987 sk xmath33 exploded producing sn1987a the first naked eye supernova since kepler s one in 1604 it was located in the large magellanic cloud a satellite galaxy of the milky way the determined distance is 50 kpc from the earth based on the expanding photosphere method from different groups which agree within 10 xmath12 see table i of xcite this method to establish extragalactic distances allows to cover a wide range from 50 kpc to 200 mpc from the observed light curve and simulationsit appears that the core mass of sn1987a progenitor was around 6 xmath30 and total mass xmath34 18 xmath30 and the progenitor radius about xmath35 cm xcite sn1987a is unique because it was observed in all wavelengths from gamma rays to radio and for the first time neutrinos were observed from the collapse of the stellar core the neutrinos was first discovered by kamiokande ii xcite then by imb xcite and baksan xcite the number of detected electron anti neutrinos events were 16 in kamiokande ii 8 in imb and 5 in baksan time energy sn angle and background rate for all the events is given in table i of the recent review xcite several hours before 5 events were seen in lsd detector that could be due to a speculative emission phase preceding the ones seen in the other detectors xcite such events are often discarded in the analysis of sn1987a data since their are object of debate the earliest observations of optical brightening were recorded 3 hours after neutrino s arrival an enthusiastic description of sn1987a discovery is reported in xcite three puzzling features concerning sn1987a has set constraints on stellar evolutionary models and supernova simulations the progenitor was a blue supergiant rather than a red supergiant while type ii supernovae were thought to be produced by red supergiants large mixing processes had transported radioactive nuclei from the deep core far into the h envelope of the progenitor and in the pre supernova ejecta producing anomalous chemical abundances the presence of three ring like geometry of the circumstellar nebula around the supernova figure fig sn1987 was implying a highly non spherical structure of the progenitor envelope and its winds xcite various explanations have been suggested for the presence of these rings the inner one being dated 20000 years before the explosion they might have originated by a binary merger event of that epoch xcite showing that rotation might have played a significant role in the dying star however the prolate deformation of the supernova ejecta at the center of the ring system might have a very different origin figure fig sn1987 in fact 
the presence of large mixing and the asymmetric ejecta indicates breaking of spherical symmetry due to hydrodynamical instabilities such as the bipolar standing accretion shock instability sasi xcite sn1987a remnant is likely not a black hole since the progenitor was light enough to be stabilised by nuclear equation of states consistent with measured neutron star masses xcite there is currently no sign as well of a bright pulsar as the one born from the supernova explosion in the crab nebula in 1054 sn1987a neutrino observations have been used to derive constraints on fundamental physics and the properties of neutrinos axions majorons light supersymmetric particles and on unparticles these are derived by the absence of non standard signatures by using the intrinsic neutrino signal dispersion or by the cooling time of the newborn neutron star many such limits have been superseded by direct measurements with controlled sources on earth while other remain valuable constraints for example from the three hours delay in the transit time of neutrinos and photons a tight limit can be on the difference between the speed of neutrinos xmath36 and light xmath37 is obtained ie xmath38 xcite sn1987a neutrinos have also confirmed the basic features of core collapse supernova predictions concerning the neutrino fluence time integrated flux and spectra from a comparative analysis of the observed neutrino events one gets as a best fit point xmath39 erg and xmath40 mev for the total gravitational energy radiated in electron anti neutrinos and their temperature respectively xcite accoding to expectations 99 xmath12 of the supernova gravitational binding energy should be converted in xmath41 neutrinos and anti neutrinos in the several tens of mev energy range such neutrinos are produced by pair annihilation electron capture and neutron bremstrahlung xmath42 xmath43 xmath44 xmath45 xmath46 if one considers that energy equipartition among the neutrino flavors is rather well satisfied one gets about xmath47 ergs and the emission time is also found to be of xmath48 s considering that the neutrino spectra are to a fairly good approximation thermal one gets for the average electron anti neutrino energy xmath49 giving 12 mev at the best fit point this appears currently more compatible with supernova simulations based on realistic neutrino transport although it has appeared much lower than the expected value of xmath48 mev claimed for a long time supernova neutrinos are tightly connected with two major questions in astrophysics namely what is the mechanism that makes massive stars explode and what is are the sites where the heavy elements are formed through the so called rapid neutron capture process or xmath50process neutrinos would contribute in neutrino driven winds in core collapse supernovae accretion disks around black holes and neutron star mergers in fact the interaction of electron neutrinos and anti neutrinos with neutrons and protons in such environments determines the neutron to proton ratio a key parameter of the r process obviously astrophysical conditions and the properties of exotic nuclei like masses xmath51decay half lives or fission are crucial in determining the abundances several studies have shown that neutrinos impact the neutron richness of a given astrophysical environment finally assessing their influencestill requires extensive simulations see the reviews in the focus issue xcite various mechanisms for the sn blast are investigated including a thermonuclear a bounce shock a neutrino heating 
a magnetohydrodynamic an acoustic and a phase transition mechanisms see xcite since the kinetic energy in sn events goes from xmath52 erg for sne up to several xmath53 erg for hypernovae the explosion driving mechanism have to comply among others with providing such energies the neutrino heating mechanism with non radial hydrodynamical instabilities convective overturn with sasi appear to be a good candidate to drive iron core collapse supernova explosions while the more energetic hypernovae events could be driven by the magnetohydrodynamical mechanism note that a new neutrino hydrodynamical instability termed lesa lepton number emission self sustained asymmetry has been identified xcite simulations of the lighter o ne mg core collapse supernovae do explode while this is not yet the case for iron core collapse ones successful explosions for two dimensional simulations with realistic neutrino transport have been obtained for several progenitors while the first three dimensional explosion are being obtained xcite neutrino propagation in cosmological or astrophysical environments is often described using effective isospins neutrino amplitudes the density matrix approach the path integral formalism or many body green s functions see xcite and xcite for a review note that the spin formalism gives a geometrical representation of neutrino evolution in flavor space herewe briefly describe how to derive neutrino evolution equations useful for astrophysical applications based on the mean field approximation to this aimwe use the density matrix formalism and follow the derivation in refxcite in the mass basis at each time the spatial fourier decomposition of a dirac neutrino field reads xmath54 with xmath55 where we note xmath56 and xmath57 the dirac spinors corresponding to mass eigenstates xmath58 are normalized as no sum over xmath58 xmath59 the standard particle and antiparticle annihilation operators in the heisenberg picture for neutrinos of mass xmath60 momentum xmath61 and helicity xmath62 satisfy the canonical equal time anticommutation relations xmath63 xmath64 and similarly for the anti particle operators in the flavor basis the field operator is obtained as xmath65 with xmath66 the maki nakagawa sakata pontecorvo unitary matrix xcite note that the indices can refer to active as well as to sterile neutrinos in the framework of three active neutrinos thethree mixing angles of xmath66 are now determined two are almost maximal while the third one is small xcite the dirac and majorana cp violating phases are still unknown xcite the flavor evolution of a neutrino or of an antineutrino in a background can be determined using one body density matrices namely expectation values of bilinear products of creation and annihilation operators xmath67 xmath68 where the brackets denote quantum and statistical average over the medium through which neutrino are propagating for particles without mixings only diagonal elements are necessary and relations e rhoe arho correspond to the expectation values of the number operators if particles have mixings as is the case for neutrinos the off diagonal contributions xmath69 of xmath70 and xmath71 account for the coherence among the mass eigenstates the mean field equations employed so far to investigate flavor evolution in astrophysical environments evolve the particle and anti particle correlators xmath70 and xmath71 however the most general mean field description includes further correlators first densities with wrong helicity states such as xmath72 are present 
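Before the additional correlators (the wrong-helicity densities just mentioned, to which the text returns below, and the pairing densities introduced afterwards) are discussed, the simplest content of these mean-field equations can be illustrated numerically: for a pure state and two flavors, evolving the one-body density matrix is equivalent to evolving the flavor amplitudes with the vacuum-plus-Wolfenstein Hamiltonian, which already captures the MSW resonance described earlier for the sun and for supernova envelopes. The sketch below is a schematic illustration under stated assumptions, not the general formalism of the quoted equations: neutrino-neutrino interactions, wrong-helicity and pairing terms are dropped, units are dimensionless, and every numerical value is a toy choice.

```python
import numpy as np

# dimensionless two-flavor MSW sketch: distance is measured in units of the
# inverse vacuum frequency omega = dm^2 / (2E); all numbers are toy values
theta = 0.15           # vacuum mixing angle (rad), hypothetical
omega = 1.0            # vacuum oscillation frequency (sets the unit of length)
V0, L = 20.0, 60.0     # initial matter potential and exponential decay length

H_vac = 0.5 * omega * np.array([[-np.cos(2 * theta), np.sin(2 * theta)],
                                [ np.sin(2 * theta), np.cos(2 * theta)]])

def hamiltonian(x):
    """Flavor-basis Hamiltonian: vacuum term plus Wolfenstein matter potential."""
    V = V0 * np.exp(-x / L)
    return H_vac + np.array([[V, 0.0], [0.0, 0.0]])

def rk4_step(x, psi, dx):
    """One Runge-Kutta step of i d(psi)/dx = H(x) psi for the flavor amplitudes."""
    f = lambda xx, p: -1j * hamiltonian(xx) @ p
    k1 = f(x, psi)
    k2 = f(x + dx / 2, psi + dx * k1 / 2)
    k3 = f(x + dx / 2, psi + dx * k2 / 2)
    k4 = f(x + dx, psi + dx * k3)
    return psi + dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6

psi = np.array([1.0 + 0j, 0.0 + 0j])     # start as a pure electron neutrino
x, dx = 0.0, 0.01
while x < 600.0:                          # integrate well past the MSW resonance
    psi = rk4_step(x, psi, dx)
    x += dx

print(f"survival probability P(nu_e -> nu_e) ~ {abs(psi[0])**2:.3f}")
# for this slowly varying toy profile the resonance crossing is adiabatic and the
# survival probability approaches sin^2(theta), well below the vacuum average
```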
these have already been shown to impact neutrino evolution in presence of magnetic fields xcite they also give non zero contributions if non zero mass corrections are included xcite moreover two point correlators called abnormal or pairing densities xcite xmath73 and the hermitian conjugate also exist equations of motion including them have first been derived in refxcite if neutrinos are majorana particles correlators similar to e kappa can be defined as done in ref xcite such as xmath74 or xmath75 that violate total lepton number the most general mean field evolution equations for dirac or majorana neutrinos evolving in an inhomogeneous medium is derived in refxcite the effective most general mean field hamiltonian takes the general bilinear form xmath76 xmath77 where xmath78 denotes the xmath58th component of the neutrino field in the mass basis eqe field the explicit expression of the kernel xmath79 depends on the kind of interaction considered charged or neutral current interactions non standard interactions effective coupling to magnetic fields etc it does not need to be specified to obtain the general structure of the equations but for practical applications equations of motion for the neutrino density matrix eqse rhoe arho can be obtained from the ehrenfest theorem xmath80 rangle and similarly for the other correlators spinor products can be introduced xmath81 xmath82 xmath83 xmath84 where the fourier transform of the mean field in eqse g1e g4 is defined as xmath85 important progress has been achieved in our understanding of how neutrinos change their flavor in massive stars a case which is much more complex than the one of our sun the msw effect in supernovae is well established since the star is very dense the msw resonance condition can be fulfilled three times for typical supernova density profiles xcite at high densitythe xmath101 resonance depending on xmath102 takes place but does not produce any spectral modification at lower densities two further resonances can occur that depend on xmath103 and xmath104 usually termed as the high resonance and the low resonances the sign of the neutrino mass squared differences determines if neutrinos or anti neutrinos undergo a resonant conversion the sign of xmath105 produces a low resonance in the neutrino sector the one of xmath106 keeps unknown the hierarchy problem the adiabaticity of the evolution at the resonances depends also on the neutrino energy and on the gradient of the matter density which is fulfilled for typical power laws that accord with simulations xcite recent calculations have shown the emergence of new phenomena due to the neutrino neutrino interaction the presence of shock waves and of turbulence see xcite for a review steep changes of the stellar density profile due to shock waves induce multiple msw resonances and interference phenomena among the matter eigenstates as a consequencethe neutrino evolution can become completely non adiabatic when the shock passes through the msw region as for the neutrino self interaction it can produce collective stable and unstable modes of the antineutrino gas and a swapping of the neutrino fluxes with spectral changes various models have been studied to investigate the impact of the self interactions on the neutrino spectra and the occurrence of collective instabilities that trigger flavour modifications in the star the first model so called bulb model was assuming that the spherical and azymuthal symmetries for neutrino propagation from the neutrino sphere homogeneity and 
stationarity within this model three flavor conversion regimes are present and well understood the synchronisation the bipolar oscillations and the spectral split for example the spectral split phenomenon is an msw effect in a comoving frame xcite or analogous to a magnetic resonance phenomenon xcite see xcite and references therein the interplay between matter and neutrino self interaction effects needs to be accurately considered in fact matter can decohere the collective neutrino modes since neutrinos with different emission angles in the so called multiangle simulations at the neutrinosphere have different flavor histories xcite it appears as for now that simulations based on realistic density profiles from supernova one dimensional simulations suppress neutrino self interaction effects however this is no longer true if non stationarity and inhomogeneity is considered small scale seed perturbations can create large scale instabilities xcite one should keep in mind that the solution of the full dynamical problem should involve the seven dimensions xmath107 to make the problemnumerically computable the models involve various approximations these are usually non stationarity homogeneity the spherical andor azymuthal symmetries however it has been shown that even if initial conditions have some symmetry the solutions of the evolution equations does not necessarily retain it xcite to avoid the demanding solution of the equations often the instabilities are determined by employing a linearised stability analysis xcite see eg xcite such analysisare useful to identify the location of the instability while they do not inform of the spectral modifications the neutrino spectral swapping turned out to be significant in the context of the bulb model while they could well reveal minor modifications in simulations including non stationarity inhomogeneities and a realistic description of the neutrino sphere see eg xcite for the latter refxcite has in fact shown that fast conversions can occur very close to the neutrino sphere even if mixings are not taken into account many general features are established but important questions remain in particular on the conditions for the occurrence of flavor modifications and its impact on the neutrino spectra another open question is the role of corrections beyond the usual mean field in the transition region this is between the dense region within the neutrinosphere which is boltzmann treated to the diluted one outside the neutrinosphere where collective flavor conversion occurs so far this transition has been treated as a sharp boundary where the neutrino fluxes and spectra obtained in supernova simulations is used as initial conditions in flavour studies extended descriptions describing neutrino evolution in a dense medium have recently been derived using a coherent state path integral xcite the born bogoliubov green kirkwood yvon hierarchy xcite or the two particle irreducible effective action formalism xcite see also xcite besides collisions two kinds of corrections in an extended mean field description are identified spin or helicity coherence xcite and neutrino antineutrino pairing correlations xcite such corrections are expected to be tiny but the non linearity of the equations could introduce significant changes of neutrino evolution in particular in the transition region numerical calculations are needed to investigate the role of spin coherence or neutrino antineutrino pairing correlations or of collisions a first calculation in a simplified model 
a first calculation in a simplified model shows that helicity coherence might have an impact xcite neutrino flavor conversion also occurs in accretion disks around black holes xcite and binary compact objects such as black hole neutron star and neutron star neutron star mergers xcite in particular flavour modification can be triggered by a cancellation of the neutrino matter and self interaction contributions in these scenarios this produces a resonant phenomenon called the neutrino matter resonance xcite another interesting theoretical development is the establishment of connections between neutrino flavor conversion in massive stars and the dynamics or behaviour of many body systems in other domains using algebraic methods ref xcite has shown that the neutrino neutrino interaction hamiltonian can be rewritten as a reduced bardeen cooper schrieffer bcs hamiltonian for superconductivity xcite as mentioned above ref xcite has included neutrino antineutrino correlations of the pairing type which are formally analogous to the bcs correlations the linearisation of the corresponding neutrino evolution equations has highlighted the formal link between stable and unstable collective neutrino modes and those in atomic nuclei and metallic clusters xcite the observation of the neutrino luminosity curve from a future extragalactic explosion would closely follow the different phases of the explosion furnishing a crucial test of supernova simulation predictions and information on the star and unknown neutrino properties in particular the occurrence of the msw effect in the outer layer of the star and of collective effects depends on the value of the third neutrino mixing angle and the neutrino mass ordering the precise measurement of the last mixing angle xcite reduces the number of unknowns still the neutrino signal from a future supernova explosion could tell us about the mass ordering either from the early time signal in icecube xcite or by measuring the positron time and energy signal in cherenkov or scintillator detectors associated with the passage of the shock wave in the msw region xcite several other properties can impact the neutrino fluxes such as the neutrino magnetic moment xcite non standard interactions and sterile neutrinos cp violation effects from the dirac phase exist but appear to be small xcite in spite of the range of predictions the combination of future observations from detection channels with different flavor sensitivities energy thresholds and time measurements can pin down degenerate solutions and bring key information to this domain see eg xcite the supernova early warning system snews and numerous other neutrino detectors around the world can serve as supernova neutrino observatories if a supernova blows up in the milky way or outside our galaxy large scale detectors based on different technologies xcite including liquid argon water cherenkov and scintillator are being considered upcoming observatories are the large scale scintillator detector juno xcite and hopefully the water cherenkov hyper kamiokande xcite these have the potential to detect neutrinos from a galactic and an extragalactic explosion as well as to discover the diffuse supernova neutrino background produced from supernova explosions up to a cosmological redshift of 2 the latter could be observed by egads ie the super kamiokande detector with the addition of gadolinium xcite for a review see xcite the main mission of high energy neutrino telescopes is to search for galactic and extra galactic sources of high energy neutrinos to elucidate the source of cosmic
rays and the astrophysical mechanisms that produce them these telescopes also investigate neutrino oscillations dark matter and supernova neutrinos for icecube the 37 events collected with deposited energies ranging from 30 tev to 2 pev are consistent with the discovery of high energy astrophysical neutrinos at 5.7 xmath108 xcite the 2 pev event is the highest energy neutrino ever observed high energy neutrino telescopes are currently also providing data on neutrino oscillations measuring atmospheric neutrinos commonly a background for astrophysical neutrino searches using low energy samples both antares xcite and icecube deepcore xcite have measured the parameters xmath109 and xmath110 in good agreement with existing data orca xcite and pingu xcite the icecube extension in the 10 gev energy range could measure the mass hierarchy by exploiting the occurrence of the matter effect for neutrinos both from the msw and the parametric resonance occurring in the earth xcite neutrino telescopes are also sensitive to other fundamental properties such as lorentz and cpt violation xcite or sterile neutrinos m g aartsen et al observation of high energy astrophysical neutrinos in three years of icecube data phys 113 2014 101101 arxiv14055303 b pontecorvo mesonium and anti mesonium sov jetp 6 1957 429 zh fiz 33 1957 549 y fukuda et al evidence for oscillation of atmospheric neutrinos phys 81 1998 1562 hep ex9807003 z maki m nakagawa and s sakata remarks on the unified model of elementary particles prog 28 1962 870 k a olive et al particle data group collaboration review of particle physics chin c 38 2014 090001 l wolfenstein neutrino oscillations in matter phys d 17 1978 2369 s p mikheev and a y smirnov resonance amplification of oscillations in matter and spectroscopy of solar neutrinos sov j nucl 42 1985 913 yad fiz 42 1441 1985 k eguchi et al first results from kamland evidence for reactor anti neutrino disappearance phys rev lett 90 021802 2003 w c haxton r g hamish robertson and a m serenelli solar neutrinos status and prospects ann astrophys 51 2013 21 arxiv12085723 g mention et al the reactor antineutrino anomaly phys d 83 073006 2011 r davis jr d s harmer and k c hoffman search for neutrinos from the sun phys lett 20 1968 1205 j n bahcall n a bahcall and g shaviv present status of the theoretical predictions for the cl36 solar neutrino experiment phys 20 1968 1209 a friedland c lunardini and c pena garay solar neutrinos as probes of neutrino matter interactions phys lett b 594 2004 347 hep ph0402266 g bellini et al first evidence of pep solar neutrinos by direct detection in borexino phys 108 2012 051302 arxiv11103230 c arpesella et al direct measurement of the be7 solar neutrino flux with 192 days of borexino data phys lett 101 2008 091302 arxiv08053843 g bellini et al neutrinos from the primary proton proton fusion process in the sun nature 512 no 7515 2014 383 b p schmidt r p kirshner and r g eastman expanding photospheres of type ii supernovae and the extragalactic distance scale astrophys j 395 366 1992 astro ph9204004 p podsiadlowski the progenitor of sn1987a astronomical society of the pacific 104 717 1992 k hirata et al kamiokande ii collaboration observation of a neutrino burst from the supernova sn 1987a phys rev lett 58 1490 1987 r m bionta g blewitt c b bratton d casper a ciocio r claus b cortez and m crouch et al observation of a neutrino burst in coincidence with supernova sn 1987a in the large magellanic cloud phys rev lett 58 1494 1987 e n alekseev l n alekseeva i v krivosheina and
v i volchenko detection of the neutrino signal from sn1987a in the lmc using the inr baksan underground scintillation telescope phys b 205 209 1988 m aglietta g badino g bologna c castagnoli a castellina w fulgione p galeotti and o saavedra et al on the event observed in the mont blanc underground neutrino observatory during the occurrence of supernova 1987a europhys 3 1315 1987 a suzuki the 20th anniversary of sn1987a j phys 120 2008 072001 p podsiadlowski t s morris and n ivanova the progenitor of sn 1987a aip conf proc 937 125 2007 janka a marek and f s kitaura neutrino driven explosions twenty years after sn1987a aip conf proc 937 144 2007 arxiv07063056 k sato and h suzuki total energy of neutrino burst from the supernova sn1987a and the mass of neutron star just born phys lett b 196 267 1987 m j longo tests of relativity from sn1987a phys rev d 36 1987 3276 i tamborra et al self sustained asymmetry of lepton number emission a new phenomenon during the supernova shock accretion phase in three dimensions astrophys j 792 2014 96 arxiv14025418 janka t melson and a summa physics of core collapse supernovae in three dimensions a sneak preview arxiv160205576 j serreau and c volpe neutrino antineutrino correlations in dense anisotropic media phys rev d 90 no 12 125040 2014 doi101103physrevd90125040 arxiv14093591 a e lobanov and a i studenikin neutrino oscillations in moving and polarized matter under the influence of electromagnetic fields phys b 515 94 2001 doi101016s0370 26930100858 9 hep ph0106101 a banerjee a dighe and g raffelt linearized flavor stability analysis of dense neutrino streams phys d 84 053013 2011 arxiv11072308 d vnnen and c volpe linearizing neutrino evolution equations including neutrino antineutrino pairing correlations phys rev d 88 065003 2013 arxiv13066372 hep ph s chakraborty r s hansen i izaguirre and g raffelt self induced flavor conversion of supernova neutrinos on small scales jcap 1601 no 01 028 2016 arxiv150707569 h duan g m fuller and y z qian collective neutrino oscillations ann nucl part 60 2010 569 arxiv10012799 h duan and j p kneller neutrino flavor transformation in supernovae j phys g 36 2009 113201 arxiv09040974 g g raffelt and a y smirnov adiabaticity and spectral splits in collective neutrino transformations phys d 76 2007 125008 arxiv07094641 s galais and c volpe the neutrino spectral split in core collapse supernovae a magnetic resonance phenomenon phys d 84 2011 085005 arxiv11035302 a esteban pretel a mirizzi s pastor r tomas g g raffelt pd serpico and g sigl role of dense matter in collective supernova neutrino transformations phys d 78 085012 2008 arxiv08070659 f capozzi b dasgupta and a mirizzi self induced temporal instability from a neutrino antenna jcap 1604 no 04 043 2016 doi1010881475 7516201604043 arxiv160303288 g raffelt s sarikas and d de sousa seixas axial symmetry breaking in self induced flavor conversionof supernova neutrino fluxes phys 111 no 9 091101 2013 erratum phys lett 113 no 23 239903 2014 arxiv13057140 a malkus j p kneller g c mclaughlin and r surman neutrino oscillations above black hole accretion disks disks with electron flavor emission phys rev d 86 085015 2012 arxiv12076648 y l zhu a perego and g c mclaughlin matter neutrino resonance transitions above a neutron star merger remnant m frensel m r wu c volpe and a perego neutrino flavor evolution in binary neutron star merger remnants arxiv160705938 y abe et al indication for the disappearance of reactor electron antineutrinos in the double chooz experiment phys rev lett 
108 2012 131801 arxiv11126353 f p an et al observation of electron antineutrino disappearance at daya bay phys rev lett 108 2012 171803 arxiv12031669 j k ahn et al observation of reactor electron antineutrino disappearance in the reno experiment phys 108 2012 191802 arxiv12040626 a b balantekin j gava and c volpe possible cp violation effects in core collapse supernovae phys b 662 396 2008 arxiv07103112 j gava and c volpe collective neutrino oscillations in matter and cp violation phys d 78 083007 2008 arxiv08073418 j p kneller and g c mclaughlin three flavor neutrino oscillations in matter flavor diagonal potentials the adiabatic basis and the cp phase phys d 80 053002 2009 arxiv09043823 y pehlivan a b balantekin and t kajino neutrino magnetic moment cp violation and flavor oscillations in matter phys d 90 065011 2014 arxiv14065489 d väänänen and c volpe the neutrino signal at halo learning about the primary supernova neutrino fluxes and neutrino properties jcap 1110 2011 019 arxiv11056225 k scholberg supernova neutrino detection ann nucl part 62 2012 81 arxiv12056003 f an et al juno collaboration neutrino physics with juno j phys g 43 030401 2016 arxiv150705613
we summarize the progress in neutrino astrophysics and emphasize open issues in our understanding of neutrino flavor conversion in media we discuss solar neutrinos core collapse supernova neutrinos and conclude with ultra high energy neutrinos
introduction solar neutrinos supernova neutrinos ultra-high energy neutrinos references
the kepler mission xcite with the discovery of over 4100 planetary candidates in 3200 systems has spawned a revolution in our understanding of planet occurrence rates around stars of all types one of kepler s profound discoveries is that small planets xmath8 are nearly ubiquitous eg and in particular some of the most common planets have sizes between earth sized and neptune sized a planet type not found in our own solar system indeed it is within this group of super earths to mini neptunes that there is a transition from rocky planets to non rocky planets the transition is near a planet radius of xmath9 and is very sharp occurring within xmath10 of this transition radius xcite unless an intra system comparison of planetary radii is performed where only the relative planetary sizes are important xcite having accurate as well as precise planetary radii is crucial to our comprehension of the distribution of planetary structures in particular understanding the radii of the planets to within xmath11 is necessary if we are to understand the relative occurrence rates of rocky to non rocky planets and the relationship between radius mass and bulk density while there has been a systematic follow up observation program to obtain spectroscopy and high resolution imaging only approximately half of the kepler candidate stars have been observed mostly as a result of the brightness distribution of the candidate stars those stars that have been observed have been done mostly to eliminate false positives to determine the stellar parameters of host stars and to search for nearby stars that may be blended in the kepler photometric apertures stars that are identified as possible binary or triple stars are noted on the kepler community follow up observation program website and are often handled in individual papers eg the false positive assessment of an koi or all of the kois can take into account the likelihood of stellar companions eg and a false positive probability will likely be included in future koi lists but presently the current production of the planetary candidate koi list and the associated parameters are derived assuming that all of the koi host stars are single that is the kepler pipeline treats each kepler candidate host star as a single star eg thus statistical studies based upon the kepler candidate lists are also assuming that all the stars in the sample set are single stars the exact fraction of multiple stars in the kepler candidate list is not yet determined but it is certainly not zero recent work suggests that a non negligible fraction xmath12 of the kepler host stars may be multiple stars xcite although other work may indicate that giant planet formation may be suppressed in multiple star systems xcite the presence of a stellar companion does not necessarily invalidate a planetary candidate but it does change the observed transit depths and as a result the planetary radii thus assuming all of the stars in the kepler candidate list are single can introduce a systematic uncertainty into the planetary radii and occurrence rate distributions this has already been discussed for the occurrence rate of hot jupiters in the kepler sample where it was found that xmath13 of hot jupiters were classified as smaller planets because of the unaccounted effects of transit dilution from stellar companions xcite in this paper we explore the effects of undetected stellar gravitationally bound companions on the observed transit depths and the resulting derived planetary radii for the entire kepler candidate 
sample we do not consider the dilution effects of line of sight background stars rather only potential bound companions as companions within xmath14 are most likely bound companions eg and most stars beyond xmath14 are either in the kepler input catalog xcite or in the ukirt survey of the kepler field and thus are already accounted for with regards to flux dilution in the kepler project transit fitting pipeline within 1 the density of blended background stars is fairly low ranging between xmath15 stars xmath16 xcite thus within a radius of 1 we expect to find a blended background line of sight star only xmath17 of the time therefore the primary contaminant within 1 of the host stars is bound companions we present here probabilistic uncertainties of the planetary radii based upon expected stellar multiplicity rates and stellar companion sizes we show that in the absence of any spectroscopic or high resolution imaging observations to vet companions the observed planetary radii will be systematically too small however if a candidate host star is observed with high resolution imaging hri or with radial velocity rv spectroscopy to screen the star for companions the underestimate of the true planet radius is significantly reduced while imaging and radial velocity vetting is effective for the kepler candidate host stars it will be even more effective for the k2 and tess candidates which will be on average 10 times closer than the kepler candidate host stars the planetary radii are not directly observed rather the transit depth is the observable which is then related to the planet size the observed depth xmath18 of a planetary transit is defined as the fractional difference in the measured out of transit flux xmath19 and the measured in transit flux xmath20 xmath21 if there are xmath22 stars within a system then the total out of transit flux in the system is given by xmath23 and if the planet transits the xmath24 star in the system then the in transit flux can be defined as xmath25 where xmath26 is the flux of the star with the transiting planet xmath27 is the radius of the planet and xmath28 is the radius of the star being transited substituting into equation eq single flux the generalized transit depth equation in the absence of limb darkening or star spots becomes xmath29 for a single star xmath30 and the transit depth expression simplifies to just the square of the size ratio between the planet and the star however for a multiple star system the relationship between the observed transit depth and the true planetary radius depends upon the brightness ratio of the transited star to the total brightness of the system and on the stellar radius which changes depending on which star the planet is transiting xmath31 the kepler planetary candidate parameters are estimated assuming the star is a single star xcite and therefore may incorrectly report the planet radius if the stellar host is really a multiple star system the extra flux contributed by the companion stars will dilute the observed transit depth and the derived planet radius depends on the size of the star presumed to be transited the ratio of the true planet radius xmath32 to the observed planet radius assuming a single star with no companions xmath33 can be described as xmath34 where xmath35 is the radius of the assumed single primary star and xmath26 and xmath28 are the brightness and the radius respectively of the star being transited by the planet this ratio reduces to unity in the case of a single star xmath36 and xmath37
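a minimal sketch of the dilution relations above from eq single flux through eq full ratio is given below; it returns the diluted depth and the factor by which the planet radius is underestimated when the system is wrongly treated as a single star, with all input values being illustrative assumptions rather than values for any particular koi

```python
# dilution of a transit by an unresolved bound companion and the implied radius correction
# factor; fluxes and radii below are illustrative inputs, not values for any particular koi
def observed_depth(r_planet, r_host, flux_host, fluxes_all):
    """diluted transit depth for a planet of radius r_planet transiting the star of radius
    r_host and flux flux_host inside a system whose total flux is sum(fluxes_all)"""
    return (flux_host / sum(fluxes_all)) * (r_planet / r_host) ** 2

def radius_correction_factor(r_primary, r_host, flux_host, fluxes_all):
    """X_R = R_true / R_observed when the depth is inverted assuming a single star of
    radius r_primary, following the ratio defined in eq full ratio"""
    return (r_host / r_primary) * (sum(fluxes_all) / flux_host) ** 0.5

# assumed example: a primary with a fainter bound companion, planet orbiting the secondary
fluxes = [1.0, 0.2]                 # relative fluxes (primary, secondary)
r1, r2 = 1.0, 0.7                   # stellar radii in solar units (illustrative)
depth = observed_depth(r_planet=0.02, r_host=r2, flux_host=fluxes[1], fluxes_all=fluxes)
xr = radius_correction_factor(r_primary=r1, r_host=r2, flux_host=fluxes[1], fluxes_all=fluxes)
print(f"diluted depth = {depth:.2e}  planet radius underestimated by X_R = {xr:.2f}")
```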
for a multiple star system where the planet orbits the primary star xmath38 the planet size is underestimated only by the flux dilution factor xmath39 however if the planet orbits one of the companion stars and not the primary star then the ratio of the primary star radius xmath35 to the radius of the companion star being transited xmath28 affects the observed planetary radius in addition to the flux dilution factor in figure fig koi299 the correction factors are plotted as a function of the companion to primary brightness ratio bottom axis and mass ratio top axis for possible binary top plot or triple bottom plot systems this figure is an example for the g dwarf koi299 and similar calculations have been made for every koi in each plot the dark blue stars represent the correction factors if the planet orbits the primary star equation eq primary ratio the red circles represent the correction factors if the planet orbits the secondary star and the light blue triangles represent the correction factors if the planet orbits the tertiary star equation eq full ratio the lines are third order polynomials fit to the distributions and unity is marked with a horizontal dashed line figure fig koi1085 shows the same but for the m dwarf koi1085 demonstrating that the details of the derived correction factors are dependent upon the koi properties to explore the possible effects of the undetected stellar companions on the derived planetary parameters we first assess what companions are possible for each koi for this work we have downloaded the cumulative kepler candidate list and stellar parameters table from the nasa exoplanet archive the cumulative list is updated with each new release of the koi lists as a result the details of any one star and planet may have changed since the analysis for this paper was done however the overall results of the paper presented here should remain largely unchanged for the koi lists the stellar parameters for each koi were determined by fitting photometric colors and spectroscopically derived parameters where available to the dartmouth stellar evolution database xcite the planet parameters were then derived based upon the transit curve fitting and the associated stellar parameters other stars listed in the kepler input catalog or ukirt imaging that may be blended with the koi host stars were accounted for in the transit fitting but in general as mentioned above each planetary host star was assumed to be a single star we have restricted the range of possible bound stellar companions to each koi host star by utilizing the same dartmouth isochrones used to determine the stellar parameters possible gravitationally bound companions are assumed to lie along the same isochrone as the primary star for each koi host star we found the single best fit isochrone characterized by mass metallicity and age by minimizing the chi square fit to the stellar parameters effective temperature surface gravity radius and metallicity listed in the koi table we did not try to re derive stellar parameters or independently find the best isochrone fit for the star we simply identified the appropriate dartmouth isochrone as used in the determination of the stellar parameters xcite we note that there exists an additional uncertainty based upon the isochrone finding in this work we did not try to re derive the stellar parameters of the host stars but rather we simply find the appropriate isochrone that matches the koi stellar parameters thus any errors in the stellar parameter derivations in the koi list are propagated here this is likely only a significant source of uncertainty for nearly
equal brightness companions once an isochrone was identified for a given star all stars along an isochrone with absolute kepler magnitudes fainter than the absolute kepler magnitude of the host star were considered to be viable companions ie the primary host star was assumed to be the brightest star in the system the fainter companions listed within that particular isochrone were then used to establish the range of possible planetary radii corrections equation eq full ratio assuming the host star is actually a binary or triple star higher order eg quadruple stellar multiples are not considered here as they represent only xmath40 of the stellar population xcite we have considered six specific multiplicity scenarios 1 single star xmath41 2 binary star planet orbits primary star 3 binary star planet orbits secondary star 4 triple star planet orbits primary star 5 triple star planet orbits secondary star 6 triple star planet orbits tertiary star based upon the brightness and size differences between the primary star and the putative secondary or tertiary companions we have calculated for each koi the possible factor by which the planetary radii are underestimated xmath0 if the star is single the correction factor is unity and if in a multiple star system the planet orbits the primary star only flux dilution affects the observed transit depth and the derived planetary radius eq eq primary ratio for the scenarios where the planets orbits the secondary or tertiary star the planet size correction factors eq eq full ratio were determined only for stellar companions where the stellar companion could physically account for the observed transit depth if more than 100 of the stellar companion light had to be eclipsed in order to produce the observed transit in the presence of the flux dilution then that star and all subsequent stars on the isochrone with lower mass was not considered viable as a potential source of the transit for example for an observed 1 transit no binary companions can be fainter than the primary star by 5 magnitudes or more an eclipse of such a secondary star would need to be more than 100 deep the stellar brightness limits were calculated independently for each planet within a koi system so as to not assume that all planets within a system necessarily orbited the same star figures fig koi299 and fig koi1085 show representative correction factors xmath0 for koi299 a g dwarf with a super earth sized xmath42 planet and for koi1085 an m dwarf with an earth sized xmath43 planet the planet radius correction factors xmath0 are shown as a function of the companion to primary brightness ratio bottom x axis of plots and the companion to primary mass ratio top x axis of plots and are determined for the koi assuming it is a binary star system top plot or a triple star system bottom plot the amplitude of the correction factor xmath0 varies strongly depending on the particular system and which star the planet may orbit if the planet orbits the primary star then the largest the correction factors are for equal brightness companions xmath44 for a binary system and xmath45 for a triple system with an asymptotic approach to unity as the companion stars become fainter and fainter if the planet orbits the secondary or tertiary star the planet radius correction factor can be significantly larger ranging from xmath46 for binary systems and xmath47 or more for triple systems depending on the size and brightness of the secondary or tertiary star it is important to recognize the full range of the possible 
correction factors but in order to have a better understanding of the statistical correction any given koi or the koi list as a whole may need we must understand the mean correction for any one multiplicity scenario and convert these into a single mean correction factor for each star to do this we must take into account the probability the star may be a multiple star the distribution of mass ratios if the star is a multiple the probability that the planet orbits any one star if the stellar system has multiple stars and whether or not the star has been vetted and how well it has been vetted for stellar companions in order to calculate an average correction factor for each multiplicity scenario we have fitted the individual scenario correction factors as a function of mass ratio with a third order polynomial see fig fig koi299 and fig koi1085 because the isochrones are not evenly sampled in mass taking a mean straight from the isochrone points would skew the results the polynomial parameterization of the correction factor as a function of the mass ratio enables a more robust determination of the mean correction factor for each multiplicity scenario if the companion to primary mass ratio distribution were uniform across all mass ratios then a straight mean of the correction values determined from each polynomial curve would yield the average correction factor for each multiplicity scenario however the mass ratio distribution is likely not uniform and we have adopted the form displayed in figure 16 of xcite that distribution is a nearly flat frequency distribution across all mass ratios with a xmath49 enhancement for nearly equal mass companion stars xmath50 this distribution is in contrast to the gaussian distribution shown in xcite however the more recent results of xcite incorporate more stars a broader breadth of stellar properties and multiple companion detection techniques the mass ratio distribution is convolved with the polynomial curves fitted for each multiplicity scenario and a weighted mean for each multiplicity scenario was calculated for every koi for example in the case of koi299 fig fig koi299 the single star mean correction factor is 1.0 by definition for the binary star cases the average scenario correction factors are 1.14 planet orbits primary and 2.28 planet orbits secondary for the triple star cases the correction factors are 1.16 planet orbits primary 2.75 planet orbits secondary and 4.61 planet orbits tertiary for koi1085 fig fig koi1085 the weighted mean correction factors are 1.18 1.56 1.24 1.61 and 2.29 respectively to turn these individual scenario correction factors into an overall single mean correction factor xmath3 per koi the six scenario corrections are convolved with the probability that a koi will be a single star a binary star or a triple star the multiplicity rate of the kepler stars is still unclear xcite and indeed there may be some contradictory evidence for the exact value for the multiplicity rates of the koi host stars eg but the multiplicity rates appear to be near xmath51 similar to the general field population in the absence of a more definitive estimate we have chosen to utilize the multiplicity fractions from xcite a 54 single star fraction a 34 binary star fraction and a 12 triple star fraction xcite we have grouped all higher order multiples xmath52 into the single category of triples given the relative rarity of the quadruple and higher order stellar systems for the scenarios where there are multiple stars in a system we have assumed that the planets are equally likely to orbit any one of the stars 50 for binaries 33.3 for triples
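the scenario weighting just described can be sketched numerically as follows; the polynomial coefficients standing in for the fitted correction factor curves and the mass ratio weighting are illustrative assumptions, only the multiplicity fractions and the equal probability that the planet orbits any star of the system are taken from the text

```python
# scenario-weighted mean radius correction factor <X_R>: weight fitted X_R(q) polynomials by
# an assumed mass-ratio distribution, then by the multiplicity fractions and by the equal
# probability that the planet orbits any star of the system; coefficients are made up
import numpy as np

q = np.linspace(0.1, 1.0, 200)          # companion-to-primary mass ratio grid
w = np.ones_like(q)                      # assumed nearly flat mass-ratio distribution
w[q > 0.95] *= 1.3                       # with a modest excess of near-equal-mass pairs
w /= np.trapz(w, q)

def scenario_mean(coeffs):
    """mass-ratio-weighted mean of a fitted third order X_R(q) polynomial (highest power first)"""
    return np.trapz(w * np.polyval(coeffs, q), q)

means = {                                # illustrative stand-ins for the fitted curves
    "binary, planet on primary":   scenario_mean([0.0, 0.4, 0.0, 1.0]),
    "binary, planet on secondary": scenario_mean([0.0, 1.4, -2.6, 2.6]),
    "triple, planet on primary":   scenario_mean([0.0, 0.7, 0.0, 1.0]),
    "triple, planet on secondary": scenario_mean([0.0, 1.6, -2.8, 3.0]),
    "triple, planet on tertiary":  scenario_mean([0.0, 2.0, -3.5, 4.5]),
}

f_single, f_binary, f_triple = 0.54, 0.34, 0.12     # multiplicity fractions quoted above
mean_xr = (f_single * 1.0
           + f_binary * 0.5 * (means["binary, planet on primary"]
                               + means["binary, planet on secondary"])
           + f_triple / 3.0 * (means["triple, planet on primary"]
                               + means["triple, planet on secondary"]
                               + means["triple, planet on tertiary"]))
print(f"mean correction factor <X_R> = {mean_xr:.2f}")
```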
the final mean correction factors xmath3 per koi are displayed in figure fig mean factor the median value of the correction factor and the dispersion around that median is xmath53 this median correction factor implies that assuming a star in the koi list is single in the absence of any observational companion vetting yields a statistical bias on the derived planetary radii where the radii are underestimated on average by a factor xmath54 and the mass densities of the planets are overestimated by a factor of xmath55 from figure fig mean factor it is clear that the mean correction factor xmath3 depends upon the stellar temperature of the host star as most of the stars in the koi list are dwarfs the lower temperature stars are typically lower mass stars and thus have a smaller range of possible stellar companions thus an average value for the correction factor 1.5 represents the sample as a whole but a more accurate value for the correction factor can be derived for a given star with a temperature between xmath56 k using the fitted xmath57 order polynomial xmath58 where xmath59 in the absence of any specific knowledge of the stellar properties other than the effective temperature and in the absence of any radial velocity or high resolution imaging to assess the specific companion properties of a given koi see section sub vetting the above parameterization equation eq factor unvetted can be used to derive a mean radii correction factor xmath3 for a given star for g dwarfs and hotter stars the correction factor is near xmath60 as the stellar temperature and mass of the primary decrease to the range of m dwarfs the correction factor can be as low as xmath61 figure fig ratio unc compares the more complete radius uncertainties to the quoted radii uncertainties xmath62 from the cumulative koi list see equation eq total unc the red histogram assumes that the kois are single as is the case in the published koi list while the blue histogram assumes that each koi has been vetted with radial velocity rv and high resolution imaging see section sub vetting the vertical dashed lines represent the median values of the distributions xmath63 for the unvetted kois and xmath64 for the vetted kois see section sub vetting the mean correction factor is useful for understanding how strongly the planetary radii may be underestimated but an additional uncertainty term derived from the mean radius correction factor is potentially more useful as it can be added in quadrature to the formal planetary radii uncertainties the formal uncertainties presented in the koi list are derived from the uncertainties in the transit fitting and the uncertainty in the knowledge of the stellar radius and they are calculated assuming the kois are single stars we can estimate an additional planet radius uncertainty term based upon the mean radii correction factor as xmath66 where xmath27 is the observed radius of the planet adding this in quadrature to the reported uncertainty a more complete uncertainty on the planetary radius can be reported as xmath67 where xmath68 is the uncertainty of the planetary radius as presented in the koi list the distribution of the ratio of the more complete koi radius uncertainties xmath69 to the reported koi radius uncertainties xmath68 is shown in figure fig ratio unc including the possibility that a koi may be a multiple star increases the planetary radii uncertainties while the distribution has a long tail dependent upon the specific system the planetary radii uncertainties are
underestimated as reported in the koi list on average by a factor of 17 the above analysis has assumed that the kois have undergone no companion vetting as is the assumption in the current koi list in reality the kepler project has funded a substantial ground based follow up observation program which includes radial velocity vetting and high resolution imaging in this section we explore the effectiveness of the observational vetting the observational vetting reduces the fraction of undetected companions if there is no vetting or all stars are assumed to be single as is the case for the published koi list then the fraction of undetected companions is 100 and the mean correction factors xmath3 are as presented above if every stellar companion is detected and accounted for in the planetary parameter derivations then the fraction of undetected companions is 0 and the mean correction factors are unity reality is somewhere in between these two extremes to explore the effectiveness of the observational vetting on reducing the radii corrections factors and the associated radii uncertainties we have assumed that every koi has been vetted equally and all companions within the reach of the observations have been detected and accounted thus the corrections factors depend only on the fraction of companions stars that remain out of the reach of vetting and undetected in this simulation we have assumed that all companions with orbital periods of 2 years or less and all companions with angular separations of xmath70 or greater have been detected this of course will not quite be true as random orbital phase effects inclination effects companion mass distribution stellar rotation effects etc will diminish the efficiency of the observations to detect companions we recognize the simplicity of these assumptions however the purpose of this section is to assess the usefulness of observational vetting on reducing the uncertainties of the planetary radii estimates not to explore fully the sensitivities and completeness of the vetting typical follow up observations include stellar spectroscopy a few radial velocity measurements and high resolution imaging the radial velocity observations usually include xmath71 measurements over the span of xmath72 months and are typically sufficient to identify potential stellar companions with orbital periods of xmath73 years or less while determining full orbits and stellar masses for any stellar companions detected typically requires more intensive observing we have estimated that 3 measurements spanning xmath74 months is sufficient to enable the detection of an rv trend for orbital periods of xmath75 years or less and mark the star as needing more detailed observations the amplitude of the rv signature and hence the ability to detect companions does depend upon the masses of the primary and companion stars massive stars with low mass companions will display relatively low rv signatures however rv vetting for the kepler program has been done at a level of xmath76 m s which is sufficient to detect at xmath77 a late type m dwarf companion in a two year orbit around a mid b dwarf primary indeed the rv vetting is made even more effective by searching for companions via spectral signatures xcite the high resolution imaging via adaptive optics lucky imaging andor speckle observations typically has resolutions of xmath78 eg based upon monte carlo simulations in which we have averaged over random orbital inclinations and eccentricities we have calculated the fraction of time within its 
orbit a companion will be detectable via high resolution imaging with typical high resolution imaging of 0.05 arcsec we have estimated that xmath79 of the stellar companions will be detected at one full width half maximum fwhm 0.05 arcsec of the image resolution and beyond and xmath80 at xmath81 fwhm 0.1 arcsec of the image resolution and beyond to determine what fraction of possible stellar companions would be detected in such a scenario we have used the nearly log normal orbital period distribution from xcite to convert the high resolution imaging limits into period limits we have estimated the distance to each koi by determining a distance modulus from the observed kepler magnitude and the absolute kepler magnitude associated with the fitted isochrone the median distance to the kois was found to be xmath82 pc corresponding to xmath83 au for 0.1 arcsec imaging using the isochrone stellar mass the semi major axis detection limits were converted to orbital period limits assuming circular orbits combining the 2 year radial velocity limit and the xmath70 imaging limit we were able to estimate the fraction of undetected companions for each individual koi see figure fig logp the distribution of the fraction of undetected companions ranges from xmath84 and on average the ground based observations leave xmath85 of the possible companions undetected for the kois see figure fig logp the mean correction factors xmath3 are only applicable to the undetected companions for the stars that are vetted with radial velocity and/or high resolution imaging the intrinsic stellar companion rate for the kois of 46 xcite is reduced by the unvetted companion fraction for each koi that is we assume that companion stars detected in the vetting have been accounted for in the planetary radii determinations and the unvetted companion fraction is the relevant companion rate for determining the correction factors in the koi299 example fig fig logp the undetected companion rate used to calculate the mean radii correction factor is xmath86 this lower fraction of undetected companions in turn reduces the mean correction factors for the vetted stars which are displayed in figure fig mean factor blue points instead of a mean correction factor of xmath87 the average correction factor is xmath88 if the stars are vetted with radial velocity and high resolution imaging the mean correction factor still changes as a function of the primary star effective temperature but the dependence is much more shallow with coefficients for equation eq factor unvetted of xmath89 see figure fig mean factor the above analysis has concentrated on the kepler mission and the associated koi list but the same effects will apply to all transit surveys including k2 xcite and tess xcite if the planetary host stars from k2 and tess are also assumed to be single with no observational vetting the planetary radii will be underestimated by the same amount as the kepler kois fig fig mean factor and eq eq factor unvetted many k2 targets and nearly all of the tess targets will be stars that are typically xmath90 magnitudes brighter than the stars observed by kepler and therefore k2 and tess targets will be xmath91 times closer than the kepler targets the effectiveness of the radial velocity vetting will remain mostly unaffected by the brighter and closer stars but the effectiveness of the high resolution imaging will be significantly enhanced instead of probing the stars to within xmath92 au the imaging will be able to detect companion stars within xmath93 au of the stars as a result the fraction of
undetected companions will decrease significantly even for the kepler stars that undergo vetting via radial velocity and high resolutionimaging xmath94 of the companions remain undetected but for the stars that are 10 times closer that fraction decreases to xmath95 see figure fig logp this has the strong benefit of greatly reducing the mean correction factors for the stars that are observed by k2 and tess and are vetted for companions with radial velocity and high resolution imaging the mean correction factor for vetted k2tess like stars is only xmath96 the correction factor has a much flatter dependence on the primary star effective temperature because the majority of the possible stellar companions are detected by the vetting the coefficients for equation eq factor unvetted become xmath97 the mean radii correction factors for vetted k2tess planetary host stars correspond to a correction to the planetary radii uncertainties of only xmath75 in comparison to a correction of xmath98 if the k2tess stars remain unvetted for k2 and tess where the number of candidate planetary systems may outnumber the kois by an order of magnitude or more single epoch high resolution imaging may prove to be the most important observational vetting performed while the imaging will not reach the innermost stellar companions radial velocity observations require multiple visits over a baseline comparable to the orbital periods an observer is trying to sample in contrast the high resolution imaging requires a single visit or perhaps one per filter on a single night and will sample the majority of the expected stellar companion period distribution understanding the occurrence rates of the earth sized planets is one of the primary goals of the kepler mission and one of the uses of the koi list xcite it has been shown that the transition from rocky to non rocky planets occurs near a radius of xmath99 and the transition is very sharp xcite however the amplitude of the uncertainties resulting from undetected companions may be large enough to push planets across this boundary and affect our knowledge of the fraction of earth sized planets we have explored the possible effects of undetected companions on the derived occurrence rates the planetary radii can not simply be multiplied by a mean correction factor xmath0 as that factor is only a measure of the statistical uncertainty of the planetary radius resulting from assuming the stars are single and only a fraction of the stars are truly multiples instead a monte carlo simulation has been performed to assign randomly the effect of unseen companions on the kois the simulation was performed 10000 times for each koi for each realization of the simulation we have randomly assigned the star to be single binary or triple star via the 54 the 34 and the 12 fractions xcite if the koi is assigned to be a single star the mean correction factors for the planets in that system are unity xmath100 if the koi star is a multiple star system we have randomly assigned the stellar companion masses according to the masses available from the fitted isochrones and using the mass ratio distribution of xcite finally the planets are randomly assigned to the primary or to the companion stars ie 50 fractions for binary stars and 333 fractions for triple stars once the details for the system are set for a particular realization the final correction factor for the planets are determined from the polynomial fits for the individual multiplicity scenarios eg fig fig koi299 and fig koi1085 for each set of 
the simulations we compiled the fraction of planets within the following planet radii bins xmath101 xmath102 xmath103 corresponding to earth sized super earth mini neptune sized and neptune to jupiter sized planets the raw fractions directly from the koi list for these three categories of planets are 333 460 and 207 note that these are the raw fractions and are not corrected for completeness or detectability as must be done for a true occurrence rate calculation these fractions are necessary for comparing how unseen companions affect the determination of fractions finally we repeated the simulations but using the undetected multiple star fractions after vetting with radial velocity and high resolution imaging had been performed thus effectively increasing the fraction of stars with correction factors of unity the distributions of the change in the fractions of planets in each planet category compared to the raw koi fractions are shown in figure fig occurrence rates if the occurrence rates utilize the assumed single koi list ie unvetted then the earth sized planet fraction may be overestimated by as much as xmath7 and the giant planet fraction may be underestimated by as much as 30 interestingly the fraction of super earth mini neptune planets does not change substantially this is a result of smaller planets moving into this bin and larger planets moving out of the bin in contrast if all of the kois undergo vetting via radial velocity and high resolution imaging the fractional changes to these bin fractions are much smaller xmath104 for the earth sized planets and xmath105 for the neptune jupiter sizes planets we present an exploration of the effect of undetected companions on the measured radii of planets in the kepler sample we find that if stars are assumed to be single as they are in the current kepler objects of interest list and no companion vetting with radial velocity andor high resolution imaging is performed the planetary radii are underestimated on average by a factor of xmath106 corresponding to an overestimation of the planet bulk density by a factor of xmath107 because lower mass stars will have a smaller range of stellar companion masses than higher mass stars the planet radius mean correction factor has been quantified as a function of stellar effective temperature if the kois are vetted with radial velocity observations and high resolution imaging the planetary radius mean correction necessary to account for undetected companions is reduced significantly to a factor of xmath108 the benefit of radial velocity and imaging vetting is even more powerful for missions like k2 and tess where the targets are on average ten times closer than the kepler objects of interest with vetting the planetary radii for k2 and tess targets will only be underestimated on average by 10 given the large number of candidates expected to be produced by k2 and tess single epoch high resolution imaging may be the most effective and efficient means of reducing the mean planetary radius correction factor finally we explored the effects of undetected companions on the occurrence rate calculations for earth sized super earth mini neptune sized and neptune sized and larger planets we find that if the kepler objects of interest are all assumed to be single as they currently are in the koi list then the fraction of earth sized planets may be overestimated by as much as 15 20 and the fraction of large planets may be underestimated by as much as 30 the particular radial velocity observations or high resolution 
imaging vetting that any one koi may or may not have undergone differs from star to star companion vetting simulations presented here show that a full understanding and characterization of the planetary companions is dependent upon also understanding the presence of stellar companions but is also dependent upon understanding the limits of those observations for a final occurrence rate determination of earth sized planets and more importantly an uncertainty on that occurrence rate the stellar companion detections or lack thereof must be taken into account the authors would like to thank ji wang tim morton and gerard van belle for useful discussions during the writing of this paper this research has made use of the nasa exoplanet archive which is operated by the california institute of technology under contract with the national aeronautics and space administration under the exoplanet exploration program portions of this work were performed at the california institute of technology under contract with the national aeronautics and space administration
we present a study on the effect of undetected stellar companions on the derived planetary radii for the kepler objects of interest kois the current production of the koi list assumes that each koi is a single star not accounting for stellar multiplicity statistically biases the planets towards smaller radii the bias towards smaller radii depends on the properties of the companion stars and whether the planets orbit the primary or the companion stars defining a planetary radius correction factor xmath0 we find that if the kois are assumed to be single then on average the planetary radii may be underestimated by a factor of xmath1 if typical radial velocity and high resolution imaging observations are performed and no companions are detected this factor reduces to xmath2 the correction factor xmath3 is dependent upon the primary star properties and ranges from xmath4 for a and f stars to xmath5 for k and m stars for missions like k2 and tess where the stars may be closer than the stars in the kepler target sample observational vetting primarily imaging reduces the radius correction factor to xmath6 finally we show that if the stellar multiplicity rates are not accounted for correctly occurrence rate calculations for earth sized planets may overestimate the frequency of small planets by as much as xmath7
introduction effects of companions on planet radii possible companions from isochrones mean radii correction factors (@xmath0) effect of undetected companions on the derived occurrence rates summary
factor analysis is one of the most useful tools for modeling common dependence among multivariate outputs suppose that we observe data xmath0 that can be decomposed as xmath1 where xmath2 are unobservable common factors xmath3 are the corresponding factor loadings for variable xmath4 and xmath5 denotes the idiosyncratic component that cannot be explained by the static common component here xmath6 and xmath7 respectively denote the dimension and sample size of the data model eq11 has broad applications in the statistics literature for instance xmath8 can be expression profiles or blood oxygenation level dependent bold measurements for the xmath9th microarray proteomic or fmri image whereas xmath4 represents a gene or protein or a voxel see for example xcite the separations between the common factors and idiosyncratic components are carried out by the low rank plus sparsity decomposition see for example xcite the factor model eq11 has also been extensively studied in the econometric literature in which xmath10 is the vector of economic outputs at time xmath9 or excess returns for individual assets on day xmath9 the unknown factors and loadings are typically estimated by the principal component analysis pca and the separations between the common factors and idiosyncratic components are characterized via static pervasiveness assumptions see for instance xcite among others in this paper we consider the static factor model which differs from the dynamic factor model xcite xcite xcite the dynamic model allows more general infinite dimensional representations for this type of model the frequency domain pca xcite was applied on the spectral density the so called dynamic pervasiveness condition also plays a crucial role in achieving consistent estimation of the spectral density accurately estimating the loadings and unobserved factors is very important in statistical applications in calculating the false discovery proportion for large scale hypothesis testing one needs to adjust accurately the common dependence via subtracting it from the data in eq11 xcite in financial applications we would like to understand accurately how each individual stock depends on unobserved common factors in order to appreciate its relative performance and risks in the aforementioned applications dimensionality is much higher than sample size however the existing asymptotic analysis shows that the consistent estimation of the parameters in model eq11 requires a relatively large xmath7 in particular the individual loadings can be estimated no faster than xmath11 but large sample sizes are not always available even with the availability of big data heterogeneity and other issues make direct applications of eq11 with large xmath7 infeasible for instance in financial applications to maintain stationarity in model eq11 with time invariant loading coefficients a relatively short time series is often used to make the observed data less serially correlated monthly returns are frequently used to reduce the serial correlations yet monthly data over three consecutive years contain merely 36 observations to overcome the aforementioned problems and when relevant covariates are available it may be helpful to incorporate them into the model let xmath12 be a vector of xmath13 dimensional covariates associated with the xmath4th variable in the seminal papers by xcite and xcite the authors studied the following semi parametric factor model xmath14 where the loading coefficients in eq11 are modeled as xmath15 for some functions xmath16 for
instance in health studies xmath17 can be individual characteristics eg age weight clinical and genetic information in financial applications xmath17 can be a vector of firm specific characteristics market capitalization price earning ratio etc the semiparametric model eq12 however can be restrictive in many cases as it requires that the loading matrix be fully explained by the covariates a natural relaxation is the following semiparametric model xmath18 where xmath19 is the component of the loading coefficient that cannot be explained by the covariates xmath17 let xmath20 we assume that xmath21 have mean zero and are independent of xmath22 and xmath23 in other words we impose the following factor structure xmath24 which reduces to model eq12 when xmath25 and model eq11 when xmath26 when xmath17 genuinely explains a part of the loading coefficients xmath27 the variability of xmath28 is smaller than that of xmath27 hence the coefficient xmath19 can be more accurately estimated by using regression model eq13 as long as the functions xmath29 can be accurately estimated let xmath30 be the xmath31 matrix of xmath32 xmath33 be the xmath34 matrix of xmath35 xmath36 be the xmath37 matrix of xmath38 xmath39 be the xmath37 matrix of xmath19 and xmath40 be the xmath31 matrix of xmath5 then model eq14 can be written in a more compact matrix form xmath41 we treat the loadings xmath36 and xmath39 as realizations of random matrices throughout the paper this model is also closely related to the supervised singular value decomposition model recently studied by xcite the authors showed that the model is useful in studying the gene expression and single nucleotide polymorphism snp data and proposed an em algorithm for parameter estimation we propose a projected pca estimator for both the loading functions and factors our estimator is constructed by first projecting xmath30 onto the sieve space spanned by xmath22 then applying pca to the projected data or fitted values due to the approximate orthogonality condition of xmath42 xmath40 and xmath39 the projection of xmath30 is approximately xmath43 as the smoothing projection suppresses the noise terms xmath39 and xmath40 substantially therefore applying pca to the projected data allows us to work directly on the sample covariance of xmath43 which is xmath44 under normalization conditions this substantially improves the estimation accuracy and also facilitates the theoretical analysis in contrast the traditional pca method for factor analysis eg xcite xcite is no longer suitable in the current context moreover the idea of projected pca is also potentially applicable to the dynamic factor models of xcite by first projecting the data onto the covariate space the asymptotic properties of the proposed estimators are carefully studied we demonstrate that as long as the projection is genuine the consistency of the proposed estimator for latent factors and loading matrices requires only xmath45 and xmath7 does not need to grow which is attractive in the typical high dimension low sample size hdlss situations eg xcite in addition if both xmath6 and xmath7 grow simultaneously then with sufficiently smooth xmath29 using the sieve approximation the rate of convergence for the estimators is much faster than that of the existing results for model eq11 typically the loading functions can be estimated at a convergence rate xmath46 and the factors can be estimated at xmath47 throughout the paper xmath48 and xmath49 are assumed to be constant and do not grow
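a minimal numerical sketch of the construction described above, smooth each cross section of the data on a sieve basis of the covariate and run pca on the fitted values, is given below; the dimensions, the polynomial basis and the data generating functions are illustrative assumptions

```python
# projected pca sketch: regress y_t on a sieve basis of the covariates, then extract factors
# and loading functions from the smoothed data; the setup below is an illustrative simulation
import numpy as np

rng = np.random.default_rng(0)
p, T, K = 300, 30, 2                        # dimension, sample size, number of factors

X = rng.uniform(-1, 1, size=p)              # one observed covariate per outcome
G = np.column_stack([np.sin(np.pi * X), X ** 2 - 1.0 / 3.0])   # true loading functions g(X)
Gamma = 0.3 * rng.standard_normal((p, K))   # loading component not explained by X
F = rng.standard_normal((T, K))             # latent factors
U = rng.standard_normal((p, T))             # idiosyncratic errors
Y = (G + Gamma) @ F.T + U                   # p x T data matrix

Phi = np.column_stack([X ** j for j in range(5)])        # p x J polynomial sieve basis
P_mat = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)        # projection onto the sieve space
PY = P_mat @ Y                                           # projected (smoothed) data

# factors: sqrt(T) times the top-K eigenvectors of the T x T matrix (PY)'(PY)
eigval, eigvec = np.linalg.eigh(PY.T @ PY)
F_hat = np.sqrt(T) * eigvec[:, ::-1][:, :K]
G_hat = PY @ F_hat / T                                   # estimated g(X_i) under F'F/T = I

# crude check: canonical correlations between the true and estimated factor spaces
corr = np.linalg.svd(np.linalg.qr(F)[0].T @ np.linalg.qr(F_hat)[0], compute_uv=False)
print("canonical correlations:", np.round(corr, 3))
```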
xmath51 model eq13 implies a decomposition of the loading matrix xmath52 where xmath36 and xmath39 are orthogonal loading components in the sense that xmath53 we conduct two specification tests for the hypotheses xmath54 the first problem is about testing whether the observed covariates have explaining power on the loadings if the null hypothesis is rejected it gives us the theoretical basis to employ the projected pca as the projection is now genuine our empirical study on the asset returns shows that firm market characteristics do have explanatory power on the factor loadings which lends further support to our projected pca method the second tests whether covariates fully explain the loadings our aforementioned empirical study also shows that model eq12 used in the financial econometrics literature is inadequate and more generalized model eq15 is necessary as claimed earlier even if xmath55 does not hold as long as xmath56 the projected pca can still consistently estimate the factors as xmath45 and xmath7 may or may not grow our simulated experiments confirm that the estimation accuracy is gained more significantly for small xmath7 s this shows one of the benefits of using our projected pca method over the traditional methods in the literature in addition as a further illustration of the benefits of using projected data we apply the projected pca to consistently estimate the number of factors which is similar to those in xcite and xcite different from these authors our method applies to the projected data and we demonstrate numerically that this can significantly improve the estimation accuracy we focus on the case when the observed covariates are time invariant when xmath7 is small these covariates are approximately locally constant so this assumption is reasonable in practice on the other hand there may exist individual characteristics that are time variant eg see xcite we expect the conclusions in the current paper to still hold if some smoothness assumptions are added for the time varying components of the covariates due to the space limit we provide heuristic discussions on this case in the supplementary material of this paper xcite in addition note that in the usual factor model xmath50 was assumed to be deterministic in this paper however xmath50 is mainly treated to be stochastic and potentially depend on a set of covariates but we would like to emphasize that the results presented in section 1541512515 under the framework of more general factor models hold regardless of whether xmath50 is stochastic or deterministic finally while some financial applications are presented in this paper the projected pca is expected to be useful in broad areas of statistical applications eg see xcite for applications in gene expression data analysis throughout this paper for a matrix xmath57 let xmath58 and xmath59 xmath60 denote its frobenius spectral and max norms let xmath61 and xmath62 denote the minimum and maximum eigenvalues of a square matrix for a vector xmath63 let xmath64 denote its euclidean norm the rest of the paper is organized as follows section sec2 introduces the new projected pca method and defines the corresponding estimators for the loadings and factors sections 1541512515 and s4 provide asymptotic analysis of the introduced estimators section sec5 introduces new specification tests for the orthogonal decomposition of the semiparametric loadings section sec6 concerns about estimating the number of factors section sec7 presents numerical results finally section sec8 concludes 
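to make the mechanics of the projected pca outlined above concrete, the following is a minimal numpy sketch added by the editor. it is an illustration under simplifying assumptions: a crude additive polynomial sieve basis stands in for the b-spline, fourier or wavelet bases discussed later, and the function names, the factor normalization and the toy data-generating process are choices of this sketch rather than the paper's notation.

```python
# minimal sketch of projected pca: smooth the data onto the space spanned by
# covariates, then run pca on the projected (smoothed) data matrix
import numpy as np

def poly_sieve(X, J=4):
    """additive polynomial sieve basis: an intercept plus powers 1..J of each
    covariate (a crude stand-in for b-splines / fourier bases)."""
    p, d = X.shape
    cols = [np.ones((p, 1))]
    for k in range(d):
        for j in range(1, J + 1):
            cols.append(X[:, [k]] ** j)
    return np.hstack(cols)                                  # p x (1 + d*J)

def projected_pca(Y, X, K, J=4):
    """Y: p x T data matrix, X: p x d covariates, K: number of factors."""
    p, T = Y.shape
    Phi = poly_sieve(X, J)
    # projection matrix onto the sieve space spanned by the covariates
    P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)            # p x p
    PY = P @ Y                                               # projected data
    # factors: sqrt(T) times the top-K eigenvectors of the T x T matrix (PY)'(PY)
    w, V = np.linalg.eigh(PY.T @ PY)
    top = np.argsort(w)[::-1][:K]
    F_hat = np.sqrt(T) * V[:, top]                           # T x K, F'F/T = I_K
    G_hat = PY @ F_hat / T       # smooth loading component evaluated at X_i
    Lam_hat = Y @ F_hat / T      # total loadings
    Gam_hat = Lam_hat - G_hat    # loading component not explained by covariates
    return F_hat, G_hat, Gam_hat

# toy usage: loadings only partially explained by one covariate
rng = np.random.default_rng(0)
p, T, K = 300, 50, 2
X = rng.normal(size=(p, 1))
G = np.column_stack([np.sin(X[:, 0]), X[:, 0] ** 2 - 1.0])
Lam = G + 0.3 * rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = Lam @ F.T + rng.normal(size=(p, T))
F_hat, G_hat, Gam_hat = projected_pca(Y, X, K)
```

the key design point, as described in the text, is that the pca is taken of the projected data rather than of the raw data, so the idiosyncratic noise is largely removed by the smoothing step before the eigen-decomposition.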
all the proofs are given in the and the supplementary material xcite in the high dimensional factor model let xmath50 be the xmath37 matrix of loadings then the general model eq11 can be written as xmath65 suppose we additionally observe a set of covariates xmath66 the basic idea of the projected pca is to smooth the observations xmath67 for each given day xmath9 against its associated covariates more specifically let xmath68 be the fitted value after regressing xmath67 on xmath69 for each given xmath9 this results in a smooth or projected observation matrix xmath70 which will also be denoted by xmath71 the projected pca then estimates the factors and loadings by running the pca based on the projected data xmath70 here we heuristically describe the idea of projected pca rigorous analysis will be carried out afterward let xmath72 be a space spanned by xmath73 which is orthogonal to the error matrix xmath40 let xmath74 denote the projection matrix onto xmath72 whose formal definition will be given in eq25 below at the population level xmath74 approximates the conditional expectation operator xmath75 which satisfies xmath76 then xmath77 and xmath78 hence analyzing the projected data xmath79 is an approximately noiseless problem and the sample covariance has the following approximation xmath80 we now argue that xmath33 and xmath81 can be recovered from the projected data xmath70 under some suitable normalization condition the normalization conditions we impose are xmath82 under this normalization using eq21a xmath83 we conclude that the columns of xmath84 are approximately xmath85 times the first xmath86 eigenvectors of the xmath87 matrix xmath88 therefore the projected pca naturally defines a factor estimator xmath89 using the first xmath86 principal components of xmath90 the projected loading matrix xmath81 can also be recovered from the projected data xmath71 in two equivalent ways given xmath33 from xmath91 we see xmath92 alternatively consider the xmath93 projected sample covariance xmath94 where xmath95 is a remaining term depending on xmath96 right multiplying xmath81 and ignoring xmath95 we obtain xmath97 hence the normalized columns of xmath81 approximate the first xmath86 eigenvectors of xmath98 the xmath93 sample covariance matrix based on the projected data therefore we can either estimate xmath81 by xmath99 given xmath89 or by the leading eigenvectors of xmath98 in fact we shall see later that these two estimators are equivalent if in addition xmath100 that is the loading matrix belongs to the space xmath72 then xmath50 can also be recovered from the projected data the above arguments are the fundament of the projected pca and provide the rationale of our estimators to be defined in section sec23 we shall make the above arguments rigorous by showing that the projected error xmath101 is asymptotically negligible and therefore the idiosyncratic error term xmath40 can be completely removed by the projection step as one of the useful examples of forming the space xmath102 and the projection operator this paper considers model eq14 where xmath17 s and xmath32 s are the only observable data and xmath103 are unknown nonparametric functions the specific case eq12 with xmath104 was used extensively in the financial studies by xcite xcite and xcite with xmath17 s being the observed market characteristic variables we assume xmath86 to be known for now in section sec6 we will propose a projected eigenvalue ratio method to consistently estimate xmath86 when it is unknown we assume that 
xmath105 does not depend on xmath9 which means the loadings represent the cross sectional heterogeneity only such a model specification is reasonable since in many applications using factor models to pertain the stationarity of the time series the analysis can be conducted within each fixed time window with either a fixed or slowly growing xmath7 through localization in time it is not stringent to require the loadings be time invariant this also shows one of the attractive features of our asymptotic results under mild conditions our factor estimates are consistent even if xmath7 is finite to nonparametrically estimate xmath105 without the curse of dimensionality when xmath17 is multivariate we assume xmath29 to be additive for each xmath106 there are xmath107 nonparametric functions such that xmath108 each additive component of xmath109 is estimated by the sieve method define xmath110 to be a set of basis functions eg b spline fourier series wavelets polynomial series which spans a dense linear space of the functional space for xmath111 then for each xmath112 xmath113 here xmath114 are the sieve coefficients of the xmath115th additive component of xmath105 corresponding to the xmath116th factor loading xmath117 is a remaining function representing the approximation error xmath118 denotes the number of sieve terms which grows slowly as xmath45 the basic assumption for sieve approximation is that xmath119 as xmath120 we take the same basis functions in eq24 purely for simplicity of notation define for each xmath121 and for each xmath122 xmath123 then we can write xmath124 let xmath125 be a xmath126 matrix of sieve coefficients xmath127 be a xmath128 matrix of basis functions and xmath129 be xmath37 matrix with the xmath130th element xmath131 then the matrix form of eq23 and eq24 is xmath132 substituting this into eq15 we write xmath133 we see that the residual term consists of two parts the sieve approximation error xmath134 and the idiosyncratic xmath40 furthermore the random effect assumption on the coefficients xmath39 makes it also behave like noise and hence negligible when the projection operator xmath74 is applied based on the idea described in section sec21 we propose a projected pca method where xmath72 is the sieve space spanned by the basis functions of xmath42 and xmath74 is chosen as the projection matrix onto xmath72 defined by the xmath93 projection matrix xmath135 the estimators of the model parameters in eq15 are defined as follows the columns of xmath136 are defined as the eigenvectors corresponding to the first xmath86 largest eigenvalues of the xmath87 matrix xmath137 and xmath138 is the estimator of xmath36 the intuition can be readily seen from the discussions in section sec21 which also provides an alternative formulation of xmath139 as follows let xmath140 be a xmath141 diagonal matrix consisting of the largest xmath86 eigenvalues of the xmath93 matrix xmath142 let xmath143 be a xmath37 matrix whose columns are the corresponding eigenvectors according to the relation xmath144 described in section sec21 we can also estimate xmath36 or xmath81 by xmath145 we shall show in lemma la1add that this is equivalent to eq26 therefore unlike the traditional pca method for usual factor models eg xcite xcite the projected pca takes the principal components of the projected data xmath71 the estimator is thus invariant to the rotation transformations of the sieve bases the estimation of the loading component xmath39 that can not be explained by the covariates can be estimated as 
follows with the estimated factorsxmath89 the least squares estimator of loading matrix is xmath146 by using eq21 and eq22 therefore by eq15 a natural estimator of xmath147 is xmath148 consider a panel data model with time varying coefficients as follows xmath149 where xmath17 is a xmath13dimensional vector of time invariant regressors for individual xmath4 xmath150 denotes the unobservable random time effect xmath5 is the regression error term the regression coefficient xmath151 is also assumed to be random and time varying but is common across the cross sectional individuals the semiparametric factor model admits eq28 as a special case note that eq28 can be rewritten as xmath152 with xmath153 unobservable factors xmath154 and loading xmath155 the model eq14 being considered on the other hand allows more general nonparametric loading functions let us first consider the asymptotic performance of the projected pca in the conventional factor model xmath156 in the usual statistical applications for factor analysis the latent factors are assumed to be serially independent while in financial applications the factors are often treated to be weakly dependent time series satisfying strong mixing conditions we now demonstrate by a simple example that latent factors xmath33 can be estimated at a faster rate of convergence by projected pca than the conventional pca and that they can be consistently estimated even when sample size xmath7 is finite ex31 to appreciate the intuition let us consider a specific case in which xmath157 so that model eq14 reduces to xmath158 assume that xmath159 is so smooth that it is in fact a constant xmath160 otherwise we can use a local constant approximation where xmath161 then the model reduces to xmath162 the projection in this case is averaging over xmath4 which yields xmath163 where xmath164 xmath165 and xmath166 denote the averages of their corresponding quantities over xmath4 for the identification purpose suppose xmath167 and xmath168 ignoring the last two terms we obtain estimators xmath169 these estimators are special cases of the projected pca estimators to see this define xmath170 and let xmath171 be a xmath6dimensional column vector of ones take a naive basis xmath172 then the projected data matrix is in fact xmath173 consider the xmath87 matrix xmath174 whose largest eigenvalue is xmath175 from xmath176 we have the first eigenvector of xmath137 equals xmath177 hence the projected pca estimator of factors is xmath178 in addition the projected pca estimator of the loading vector xmath179 is xmath180 hence the projected pca estimator of xmath181 equals xmath182 these estimators match with e32 moreover since the ignored two terms xmath183 and xmath184 are of order xmath185 xmath186 and xmath187 converge whether or not xmath7 is large note that this simple example satisfies all the assumptions to be stated below and xmath188 and xmath189 achieve the same rate of convergence as that of theorem th41 we shall present more details about this example in appendix g in the supplementary material xcite we now state the conditions and results formally in the more general factor model eq31 recall that the projection matrix is defined as xmath190 the following assumption is the key condition of the projected pca ass31 there are positive constants xmath191 and xmath192 such that with probability approaching one as xmath193 xmath194 since the dimensions of xmath195 and xmath50 are respectively xmath196 and xmath37 assumption ass31 requires xmath197 which is reasonable since 
we assume xmath86 the number of factors to be fixed throughout the paper assumption ass31 is similar to the pervasive condition on the factor loadings xcite in our context this condition requires the covariates xmath42 have nonvanishing explaining power on the loading matrix so that the projection matrix xmath198 has spiked eigenvalues note that it rules out the case when xmath42 is completely unassociated with the loading matrix xmath50 eg when xmath42 is pure noise one of the typical examples that satisfies this assumption is the semiparametric factor model model eq14 we shall study this specific type of factor model in section s4 and prove assumption ass31 in the supplementary material xcite note that xmath33 and xmath50 are not separately identified because for any nonsingular xmath199 xmath200 therefore we assume the following ass32 almost surely xmath201 and xmath198 is a xmath141 diagonal matrix with distinct entries this condition corresponds to the pc1 condition of xcite which separately identifies the factors and loadings from their product xmath202 it is often used in factor analysis for identification and means that the columns of factors and loadings can be orthogonalized also see xcite ass33 i there are xmath203 and xmath204 so that with probability approaching one as xmath193 xmath205 ii xmath206 note that xmath207 and xmath208 is a vector of dimensionality xmath209 thus condition i can follow from the strong law of large numbers for instance xmath22 are weakly correlated and in the population level xmath210 is well conditioned in addition this condition can be satisfied through proper normalizations of commonly used basis functions such as b splines wavelets fourier basis etc in the general setup of this paper we allow xmath211 s to be cross sectionally dependent and nonstationary regularity conditions about weak dependence and stationarity are imposed only on xmath212 as follows we impose the strong mixing condition let xmath213 and xmath214 denote the xmath215algebras generated by xmath216 and xmath217 respectively define the mixing coefficient xmath218 ass34 i xmath219 is strictly stationary in addition xmath220 for all xmath221 xmath222 is independent of xmath223 strong mixing there exist xmath224 such that for all xmath225 xmath226 weak dependence there is xmath227 so that xmath228 exponential tail there exist xmath229 satisfying xmath230 and xmath231 such that for any xmath232 xmath122 and xmath233 xmath234 assumption ass34 is standard especially condition iii is commonly imposed for high dimensional factor analysis eg xcite which requires xmath235 be weakly dependent both serially and cross sectionally it is often satisfied when the covariance matrix xmath236 is sufficiently sparse under the strong mixing condition we provide primitive conditions of condition iii in the supplementary material xcite formally we have the following theorem th31 consider the conventional factor model eq31 with assumptions ass31ass34 the projected pca estimators xmath237 and xmath238 defined in section sec23 satisfy as xmath239 xmath240 may either grow simultaneously with xmath6 satisfying xmath241 or stay constant with xmath242 xmath243 to compare with the traditional pca method the convergence rate for the estimated factors is improved for small xmath7 in particular the projected pca does not require xmath244 and also has a good rate of convergence for the loading matrix up to a projection transformation hence we have achieved a finitexmath7 consistency which is particularly interesting 
in the high dimensional low sample size hdlss context considered by xcite in contrast the traditional pca method achieves a rate of convergence of xmath245 for estimating factors and xmath246 for estimating loadings see remarks re41 re42 below for additional details let xmath247 be the xmath93 covariance matrix of xmath248 convergence eq34add in theorem th31 also describes the relationship between the leading eigenvectors of xmath98 and those of xmath249 to see this let xmath250 be the eigenvectors of xmath249 corresponding to the first xmath86 eigenvalues under the pervasiveness condition xmath251 can be approximated by xmath50 multiplied by a positive definite matrix of transformation xcite in the context of projected pca by definition xmath252 here we recall that xmath253 is a diagonal matrix consisting of the largest xmath86 eigenvalues of xmath98 and xmath254 is a xmath37 matrix whose columns are the corresponding eigenvectors then eq34add immediately implies the following corollary which complements the pca consistency in spiked covariance models eg xcite and xcite th32 under the conditions of theorem th31 there is a xmath141 positive definite matrix xmath255 whose eigenvalues are bounded away from both zero and infinity so that as xmath193 xmath240 may either grow simultaneously with xmath6 satisfying xmath241 or stay constant with xmath242 xmath256in the semiparametric factor model it is assumed that xmath257 where xmath105 is a nonparametric smooth function for the observed covariates and xmath19 is the unobserved random loading component that is independent of xmath17 hence the model is written as xmath258 in the matrix form xmath259 and xmath36 does not vanish pervasive condition see assumption ass42 below the estimators xmath237 and xmath238 are the projected pca estimators as defined in section sec23 we now define the estimator of the nonparametric function xmath29 xmath260 in the matrix form the projected data has the following sieve approximated representation xmath261 where xmath262 is small because xmath39 and xmath40 are orthogonal to the function space spanned by xmath42 and xmath129 is the sieve approximation error the sieve coefficient matrix xmath263 can be estimated by least squares from the projected model eq41 ignore xmath264 replace xmath33 with xmath237 and solve eq41 to obtain xmath2651phi bx bywidehatbf we then estimate xmath29 by xmath266 where xmath267 denotes the support of xmath17 when xmath268 xmath36 can be understood as the projection of xmath50 onto the sieve space spanned by xmath42 hence the following assumption is a specific version of assumptions ass31 and ass32 in the current context ass41 i almost surely xmath201 and xmath269 is a xmath141 diagonal matrix with distinct entries ii there are two positive constants xmath191 and xmath192 so that with probability approaching one as xmath193 xmath270 in this section we do not need to assume xmath271 to be iid for the estimation purpose cross sectional weak dependence as in assumption ass42ii below would be sufficient the iid assumption will be only needed when we consider specification tests in section sec5 write xmath272 and xmath273 ass42 i xmath274 and xmath22 is independent of xmath275 ii xmath276 xmath277 and xmath278 the following set of conditions is concerned about the accuracy of the sieve approximation ass43 xmath279 i the loading component xmath280 belongs to a hlder class xmath281 defined by xmath282 for some xmath283 ii the sieve coefficients xmath284 satisfy for xmath285 as xmath286 
xmath287 where xmath288 is the support of the xmath115th element of xmath17 and xmath118 is the sieve dimension iii xmath289 condition ii is satisfied by common basis for example when xmath290 is polynomial basis or b splines condition ii is implied by condition i see eg xcite and xcite th41 suppose xmath241 under assumptions ass33 ass34 ass41ass43 as xmath291 xmath7 can be either divergent or bounded we have that xmath292 in addition if xmath244 simultaneously with xmath6 and xmath118 then xmath293 the optimal xmath294 simultaneously minimizes the convergence rates of the factors and nonparametric loading function xmath29 it also satisfies the constraint xmath295 as xmath296 with xmath297 we have xmath298 and xmath299 satisfies xmath300 some remarks about these rates of convergence compared with those of the conventional factor analysis are in order re41the rates of convergence for factors and nonparametric functions do not require xmath244 when xmath301 xmath302 the rates still converge fast when xmath6 is large demonstrating the blessing of dimensionality this is an attractive feature of the projected pca in the hdlss context as in many applications the stationarity of a time series and the time invariance assumption on the loadings hold only for a short period of time in contrast in the usual factor analysis consistency is granted only when xmath303 for example according to xcite lemma c1 the regular pca method has the following convergence rate xmath304 which is inconsistent when xmath7 is bounded re42when both xmath6 and xmath7 are large the projected pca estimates factors as well as the regular pca does and achieves a faster rate of convergence for the estimated loadings when xmath19 vanishes in this case xmath305 the loading matrix is estimated by xmath306 and xmath307 in contrast the regular pca method as in xcite yields xmath308 comparing these rates we see that when xmath29 s are sufficiently smooth larger xmath309 the rate of convergence for the estimated loadings is also improved the loading matrix always has the following orthogonal decomposition xmath310 where xmath39 is interpreted as the loading component that can not be explained by xmath42 we consider two types of specification tests testing xmath311 and xmath312 the former tests whether the observed covariates have explaining powers on the loadings while the latter tests whether the covariates fully explain the loadings the former provides a diagnostic tool as to whether or not to employ the projected pca the latter tests the adequacy of the semiparametric factor models in the literature testing whether the observed covariates have explaining powers on the factor loadings can be formulated as the following null hypothesis xmath314 due to the approximate orthogonality of xmath42 and xmath39 we have xmath315 hence the null hypothesis is approximately equivalent to xmath316 this motivates a statistic xmath317 for a consistent loading estimator xmath318 normalizing the test statistic by its asymptotic variance leads to the test statistic xmath319 where the xmath141 matrix xmath320 is the weight matrix the null hypothesis is rejected when xmath321 is large the projected pca estimator is inappropriate under the null hypothesis as the projection is not genuine we therefore use the least squares estimator xmath322 leading to the test statistic xmath323 here we take xmath324 as the traditional pca estimator the columns of xmath325 are the first xmath86 eigenvectors of the xmath87 data matrix xmath326 connor hagmann and linton 
xcite applied the semiparametric factor model to analyzing financial returns who assumed that xmath328 that is the loading matrix can be fully explained by the observed covariates it is therefore natural to test the following null hypothesis of specification xmath329 recall that xmath330 so that xmath331 therefore essentially the specification testing problem is equivalent to testing xmath332 that is we are testing whether the loading matrix in the factor model belongs to the space spanned by the observed covariates a natural test statistic is thus based on the weighted quadratic form xmath333 for some xmath334 positive definite weight matrix xmath335 where xmath237 is the projected pca estimator for factors and xmath336 to control the size of the test we take xmath337 where xmath338 is a diagonal covariance matrix of xmath339 under xmath340 assuming that xmath341 are uncorrelated we replace xmath342 with its consistent estimator let xmath343 define xmath344 then the operational test statistic is defined to be xmath345 the null hypothesis is rejected for large values of xmath346 for the testing purpose we assume xmath347 to be iid and let xmath348 simultaneously the following assumption regulates the relation between xmath7 and xmath6 ass51 suppose i xmath349 are independent and identically distributed xmath350 and xmath351 xmath118 and xmath309 satisfy xmath352 and xmath353 condition ii requires a balance of the dimensionality and the sample size on one hand a relatively large sample size is desired xmath354 so that the effect of estimating xmath342 is negligible asymptotically on the other hand as is common in high dimensional factor analysis a lower bound of the dimensionality is also required condition xmath350 to ensure that the factors are estimated accurately enough such a required balance is common for high dimensional factor analysis eg xcite xcite and in the recent literature for pca eg xcite xcite the iid assumption of covariates xmath17 in condition i can be relaxed with further distributional assumptions on xmath356 eg assuming xmath356 to be gaussian the conditions on xmath118 in condition iii is consistent with those of the previous sections we focus on the case when xmath357 is gaussian and show that under xmath358 xmath359 and under xmath55 xmath360 whose conditional distributions given xmath33 under the null are xmath361 with degree of freedom respectively xmath362 and xmath363 we can derive their standardized limiting distribution as xmath364 this is given in the following result th51 suppose assumptions ass33 ass34 ass42 ass51 hold then under xmath358 xmath365 where xmath48 and xmath49 in addition suppose assumptions ass41 and ass43 further hold xmath366 is iid xmath367 with a diagonal covariance matrix xmath338 whose elements are bounded away from zero and infinity then under xmath55 xmath368 in practice when a relatively small sieve dimension xmath118 is used one can instead use the upper xmath369quantile of the xmath370 distribution for xmath371 we require xmath5 be independent across xmath9 which ensures that the covariance matrix of the leading term xmath372 to have a simple form xmath373 this assumption can be relaxed to allow for weakly dependent xmath374 but many autocovariance terms will be involved in the covariance matrix one may regularize standard autocovariance matrix estimators such as xcite and xcite to account for the high dimensionality moreover we assume xmath338 be diagonal to facilitate estimating xmath342 which can also be weakened to allow for a 
nondiagonal but sparse xmath338 regularization methods such as thresholding xcite can then be employed though they are expected to be more technically involved we now address the problem of estimating xmath48 when it is unknown once a consistent estimator of xmath86 is obtained all the results achieved carry over to the unknown xmath86 case using a conditioning argument then argue that the results still hold unconditionally as xmath375 in principle many consistent estimators of xmath86 can be employed for example xcite xcite xcite xcite more recently xcite and xcite proposed to select the largest ratio of the adjacent eigenvalues of xmath326 based on the fact that the xmath86 largest eigenvalues of the sample covariance matrix grow as fast as xmath6 increases while the remaining eigenvalues either remain bounded or grow slowly we extend ahn and horenstein s xcite theory in two ways first when the loadings depend on the observable characteristics it is more desirable to work on the projected data xmath71 due to the orthogonality condition of xmath40 and xmath42 the projected data matrix is approximately equal to xmath43 the projected matrix xmath376 thus allows us to study the eigenvalues of the principal matrix component xmath377 which directly connects with the strengths of those factors since the nonvanishing eigenvalues of xmath376 and xmath378 are the same we can work directly with the eigenvalues of the matrix xmath379 second we allow xmath380 let xmath381 denote the xmath116th largest eigenvalue of the projected data matrix xmath137 we assume xmath382 which naturally holds if the sieve dimension xmath118 slowly grows the estimator is defined as xmath383 the following assumption is similar to that of xcite recall that xmath384 is a xmath31 matrix of the idiosyncratic components and xmath385 denotes the xmath386 covariance matrix of xmath339 ass61 the error matrix xmath40 can be decomposed as xmath387 where the eigenvalues of xmath338 are bounded away from zero and infinity xmath388 is a xmath7 by xmath7 positive semidefinite nonstochastic matrix whose eigenvalues are bounded away from zero and infinity xmath389 is a xmath31 stochastic matrix where xmath390 is independent in both xmath4 and xmath9 and xmath391 are iid isotropic sub gaussian vectors that is there is xmath392 for all xmath232 xmath393 there are xmath394 almost surely xmath395 this assumption allows the matrix xmath40 to be both cross sectionally and serially dependent the xmath87 matrix xmath388 captures the serial dependence across xmath9 in the special case of no serial dependence the decomposition eq51 is satisfied by taking xmath396 in addition we require xmath339 to be sub gaussian to apply random matrix theories of xcite for instance when xmath339 is xmath397 for any xmath398 xmath399 and thus condition iii is satisfied finally the almost surely condition of iv seems somewhat strong but is still satisfied by bounded basis functions eg fourier basis we show in the supplementary material xcite that when xmath338 is diagonal xmath5 is cross sectionally independent both the sub gaussian assumption and condition iv can be relaxed the following theorem is the main result of this section th61 under assumptions of theorem th41 and assumption ass61 as xmath400 if xmath118 satisfies xmath401 and xmath402 xmath118 may either grow or stay constant we have xmath403this section presents numerical results to demonstrate the performance of projected pca method for estimating loadings and factors using both real data and simulated 
data we collected stocks in sp 500 index constituents from crsp which have complete daily closing prices from year 2005 through 2013 and their corresponding market capitalization and book value from compustat there are xmath404 stocks in our data set whose daily excess returns were calculated we considered four characteristics xmath42 as in xcite for each stock size value momentum and volatility which were calculated using the data before a certain data analyzing window so that characteristics are treated known see xcite for detailed descriptions of these characteristics all four characteristics are standardized to have mean zero and unit variance note that the construction makes their values independent of the current data we fix the time window to be the first quarter of the year 2006 which contains xmath405 observations given the excess returns xmath406 and characteristics xmath17 as the input data and setting xmath407 we fit loading functions xmath408 for xmath409 using the projected pca method the four additive components xmath280 are fitted using the cubic spline in the r package gam with sieve dimension xmath410 all the four loading functions for each factor are plotted in figure fig gcurves the contribution of each characteristic to each factor is quite nonlinear
[caption of figure fig gcurves: loading curves xmath411 fitted from financial returns of 337 stocks in the sp 500 index taken as the true functions in the simulation studies in each panel fixed xmath115 the true and estimated curves for xmath412 are plotted and compared the solid dashed and dotted red curves are the true curves for the first second and third factors respectively and the blue curves are their estimates from one simulation of the calibrated model with xmath413 xmath414]
we now treat the estimated functions xmath280 as the true loading functions and calibrate a model for simulations the true model is calibrated as follows take the estimated xmath280 from the real data as the true loading functions for each xmath6 generate xmath366 from xmath415 where xmath416 is diagonal and xmath417 sparse generate the diagonal elements of xmath416 from gamma xmath418 with xmath419 xmath420 calibrated from the real data and generate the off diagonal elements of xmath417 from xmath421 with xmath422 xmath423 then truncate xmath417 by a threshold of correlation xmath424 to produce a sparse matrix and make it positive definite by r package nearpd generate xmath425 from the iid gaussian distribution with mean xmath426 and standard deviation xmath427 calibrated with real data generate xmath428 from a stationary var model xmath429 where xmath430 the model parameters are calibrated with the market data and listed in table table calibfactor finally generate xmath431 here xmath432 is a xmath433 correlation matrix estimated from the real data
[table calibfactor: calibrated parameters of the var model for the factors; the original column headers and row labels are not recoverable row 1: 0.9076 0.0049 0.0230 0.0371 0.1226 xmath434 row 2: 0.0049 0.8737 0.0403 0.2339 0.1060 xmath435 row 3: 0.0230 0.0403 0.9266 0.2803 0.0755 xmath436]
[displaced figure captions for figures fig calibg and fig calibf: estimation errors by projected pca p pca red solid and traditional pca dashed blue and xmath437 xmath438 by p pca over 500 repetitions left panel xmath439 right panel xmath440 and xmath441 over 500 repetitions by projected pca p pca solid red and traditional pca dashed blue]
we simulate the data from the calibrated model and estimate the loadings and factors for xmath442 and xmath443 with xmath6 varying from xmath444 through xmath445 the true and estimated loading curves are plotted in figure fig gcurves to demonstrate the performance of projected pca note that the true loading curves in the simulation are taken from the estimates
calibrated using the real data the estimates based on simulated data capture the shape of the true curve though we also notice slight biases at boundaries but in general projected pca fits the model well we also compare our method with the traditional pca method eg xcite the mean values of xmath446 xmath447 xmath448 and xmath449 are plotted in figures fig calibg and fig calibf where xmath450 see section design2 for definitions of xmath451 and xmath452 the breakdown error for xmath453 and xmath39 are also depicted in figure fig calibg in comparison projected pca outperforms pca in estimating both factors and loadings including the nonparametric curves xmath36 and random noise xmath39 the estimation errors for xmath36 of projected pca decrease as the dimension increases which is consistent with our asymptotic theory and xmath454 over 500 repetitions p pca pca and sls respectively represent projected pca regular pca and sieve least squares with known factors design 2 here xmath328 so xmath455 upper two panels xmath6 grows with fixed xmath7 bottom panels xmath7 grows with fixed xmath6 and xmath456 by projected pca solid red and pca dashed blue design 2 upper two panels xmath6 grows with fixed xmath7 bottom panels xmath7 grows with fixed xmath6 consider a different design with only one observed covariate and three factors the three characteristic functions are xmath457 with the characteristic xmath458 being standard normal generate xmath459 from the stationary var1 model that is xmath460 where xmath461 we consider xmath462 we simulate the data for xmath442 or xmath443 and various xmath6 ranging from xmath444 to xmath445 to ensure that the true factor and loading satisfy the identifiability conditions we calculate a transformation matrix xmath199 such that xmath463 xmath464 is diagonal let the final true factors and loadings be xmath465 xmath466 for each xmath6 we run the simulation for xmath445 times we estimate the loadings and factors using both projected pca and pc for projected pca as in our theorem we choose xmath467 with xmath468 and xmath469 to estimate the loading matrix we also compare with a third method sieve least squares sls assuming the factors are observable in this case the loading matrix is estimated by xmath470 where xmath471 is the true factor matrix of simulated data the estimation error measured in max and standardized frobenius norms for both loadings and factors are reported in figures fig simpleg and fig simplef the plots demonstrate the good performance of projected pca in estimating both loadings and factors in particular it works well when we encounter small xmath7 but a large xmath6 in this design xmath328 so the accuracy of estimating xmath472 is significantly improved by using the projected pca figure fig simplef shows that the factors are also better estimated by projected pca than the traditional one particularly when xmath7 is small it is also clearly seen that when xmath6 is fixed the improvement on estimating factors is not significant as xmath7 grows this matches with our convergence results for the factor estimator it is also interesting to compare projected pca with sls sieve least squares with observed factors in estimating the loadings which corresponds to the cases of unobserved and observed factors as we see from figure fig simpleg when xmath6 is small the projected pca is not as good as sls but the two methods behave similarly as xmath6 increases this further confirms the theory and intuition that as the dimension becomes larger the effects of 
estimating the unknown factors are negligible we now demonstrate the effectiveness of estimating xmath86 by the projected pc s eigenvalue ratio method the data are simulated in the same way as in design 2 xmath442 or xmath443 and we took the values of xmath6 ranging from xmath444 to xmath445 we compare our projected pca based on the projected data matrix xmath137 to the eigenvalue ratio test ah of xcite and xcite which works on the original data matrix xmath326 p pca and ah respectively represent the methods of projected pca and xcite left panel mean right panel standard deviation for each pair of xmath473 we repeat the simulation for xmath443 times and report the mean and standard deviation of the estimated number of factors in figure fig estimatek the projected pca outperforms ah after projection which significantly reduces the impact of idiosyncratic errors when xmath413 we can recover the number of factors almost all the time especially for large dimensions xmath474 on the other hand even when xmath475 projected pca still obtains a closer estimated number of factors we test the loading specifications on the real data we used the same data set as in section sec71 consisting of excess returns from 2005 through 2013 the tests were conducted based on rolling windows with the length of windows spanning from 10 days a month a quarter and half a year for each fixed window length xmath7 we computed the standardized test statistic of xmath321 and xmath476 and plotted them along the rolling windows respectively in figure fig testing in almost all cases the number of factors is estimated to be one in various combinations of xmath477 figure fig testing suggests that the semiparametric factor model is strongly supported by the data judging from the upper panel testing xmath478 we have very strong evidence of the existence of nonvanishing covariate effect which demonstrates the dependence of the market beta s on the covariates xmath42 in other words the market beta s can be explained at least partially by the characteristics of assets the results also provide the theoretical basis for using projected pca to get more accurate estimation from 20060103 to 20121130 the dotted lines are xmath479 in the bottom panel of figure fig testing testing xmath480 we see for a majority of periods the null hypothesis is rejected in other words the characteristics of assets can not fully explain the market beta as intuitively expected and model eq12 in the literature is inadequate however fully nonparametric loadings could be possible in certain time range mostly before financial crisis during 20082010 the market s behavior had much more complexities which causes more rejections of the null hypothesis the null hypothesis xmath328 is accepted more often since 2012 we also notice that larger xmath7 tends to yield larger statistics in both tests as the evidence against the null hypothesis is stronger with larger xmath7 after all the semiparametric model being considered provides flexible ways of modeling equity markets and understanding the nonparametric loading curves this paper proposes and studies a high dimensional factor model with nonparametric loading functions that depend on a few observed covariate variables this model is motivated by the fact that observed variables can explain partially the factor loadings we propose a projected pca to estimate the unknown factors loadings and number of factors after projecting the response variable onto the sieve space spanned by the covariates the projected pca yields a 
significant improvement on the rates of convergence than the regular methods in particular consistency can be achieved without a diverging sample size as long as the dimensionality grows this demonstrates that the proposed method is useful in the typical hdlss situations in addition we propose new specification tests for the orthogonal decomposition of the loadings which fill the gap of the testing literature for semiparametric factor models our empirical findings show that firm characteristics can explain partially the factor loadings which provide theoretical basis for employing projected pca method on the other hand our empirical study also shows that the firm characteristics can not fully explain the factor loadings so that the proposed generalized factor model is more appropriate throughout the proofs xmath45 and xmath7 may either grow simultaneously with xmath6 or stay constant for two matrices xmath481 with fixed dimensions and a sequence xmath482 by writing xmath483 we mean xmath484 in the regular factor model xmath485 let xmath486 denote a xmath141 diagonal matrix of the first xmath86 eigenvalues of xmath487 then by definition xmath488 let xmath489 then xmath490 where xmath491 still by the equality ea1add xmath501 hence this step is achieved by bounding xmath502 for xmath503 note that in this step we shall not apply a simple inequality xmath504 which is too crude instead with the help of the result xmath505 achieved in step 1 sharper upper bounds for xmath502 can be achieved we do so in lemma b2 in the supplementary material xcite consider the singular value decomposition xmath508 where xmath509 is a xmath93 orthogonal matrix whose columns are the eigenvectors of xmath98 xmath510 is a xmath511 matrix whose columns are the eigenvectors of xmath512 xmath513 is a xmath31 rectangular diagonal matrix with diagonal entries as the square roots of the nonzero eigenvalues of xmath98 in addition by definition xmath253 is a xmath141 diagonal matrix consisting of the largest xmath86 eigenvalues of xmath98 xmath254 is a xmath37 matrix whose columns are the corresponding eigenvectors the columns of xmath514 are the eigenvectors of xmath88 corresponding to the first xmath86 eigenvalues by assumption ass33 xmath528 xmath529 hence xmath530 by lemma b1 in the supplementary material xcite xmath531 similarly xmath532 using the inequality that for the xmath116th eigenvalue xmath533 we have xmath534 for xmath260 hence it suffices to prove that the first xmath86 eigenvalues of xmath320 are bounded away from both zero and infinity which are also the first xmath535 eigenvalues of xmath536 this holds under the theorem s assumption assumption ass31 thus xmath537 which also implies xmath523 fan j liao y and mincheva m 2013 large covariance estimation by thresholding principal orthogonal complements with discussion journal of the royal statistical society series b 75 603680
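as a small illustration of the normalization step used for the two specification tests of section sec5, the following sketch shows only that step: a statistic that is conditionally chi-square with a growing number of degrees of freedom is standardized toward a normal limit, or compared directly with a chi-square quantile when the sieve dimension is small. the construction of the quadratic-form statistics themselves involves weight matrices spelled out in the text and is not reproduced here; the helper names are illustrative assumptions.

```python
# standardization of an (approximately) chi-square_df test statistic, as used
# to reject for large values in the specification tests of section sec5
import math

def standardized_statistic(S, df):
    """(S - df) / sqrt(2*df): approximately standard normal when df is large."""
    return (S - df) / math.sqrt(2.0 * df)

def normal_pvalue(z):
    """one-sided p-value P(Z > z) for the standardized statistic."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```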
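the projected eigenvalue-ratio estimator of the number of factors described in section sec6 can likewise be sketched in a few lines; the cap kmax, the explicit basis argument and the small numerical guard below are assumptions of this sketch rather than part of the paper's definition.

```python
# eigenvalue-ratio estimator applied to the projected data matrix: the first K
# eigenvalues of (PY)'(PY) grow with the dimension while the rest stay small,
# so the largest ratio of adjacent eigenvalues points at K
import numpy as np

def estimate_num_factors(Y, Phi, kmax=10):
    """Y: p x T data matrix, Phi: p x m sieve basis evaluated at the covariates;
    returns the k in 1..kmax maximizing the ratio of adjacent eigenvalues."""
    P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)        # projection onto sieve space
    PY = P @ Y
    w = np.sort(np.linalg.eigvalsh(PY.T @ PY))[::-1]     # eigenvalues, descending
    kmax = min(kmax, len(w) - 1)
    ratios = w[:kmax] / np.maximum(w[1:kmax + 1], 1e-12)  # guard tiny denominators
    return int(np.argmax(ratios)) + 1
```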
this paper introduces a projected principal component analysis projected pca which applies principal component analysis to the data matrix after it has been projected smoothed onto a given linear space spanned by covariates when applied to high dimensional factor analysis the projection removes noise components we show that the unobserved latent factors can be estimated more accurately than by the conventional pca if the projection is genuine or more precisely when the factor loading matrices are related to the projected linear space when the dimensionality is large the factors can be estimated accurately even when the sample size is finite we propose a flexible semiparametric factor model which decomposes the factor loading matrix into a component that can be explained by subject specific covariates and an orthogonal residual component the effects of the covariates on the factor loadings are further modeled by an additive model via sieve approximations by using the newly proposed projected pca the rates of convergence of the smooth factor loading matrices are obtained and they are much faster than those of the conventional factor analysis the convergence is achieved even when the sample size is finite and is particularly appealing in the high dimension low sample size situation this leads us to develop nonparametric tests of whether observed covariates have explanatory power on the loadings and whether they fully explain the loadings the proposed method is illustrated by both simulated data and the returns of the components of the sp 500 index
introduction
projected principal component analysis
projected-pca in conventional factor models
projected-pca in semiparametric factor models
semiparametric specification test
estimating the number of factors from projected data
numerical studies
conclusions
proofs for section 3
the leptonic decays of a charged pseudoscalar meson xmath7 are processes of the type xmath8 where xmath9 xmath10 or xmath11 because no strong interactions are present in the leptonic final state xmath12 such decays provide a clean way to probe the complex strong interactions that bind the quark and antiquark within the initial state meson in these decays strong interaction effects can be parametrized by a single quantity xmath13 the pseudoscalar meson decay constant the leptonic decay rate can be measured by experiment and the decay constant can be determined by the equation ignoring radiative corrections xmath14 where xmath15 is the fermi coupling constant xmath16 is the cabibbo kobayashi maskawa ckm matrix xcite element xmath17 is the mass of the meson and xmath18 is the mass of the charged lepton the quantity xmath13 describes the amplitude for the xmath19 and xmath20quarks within the xmath21 to have zero separation a condition necessary for them to annihilate into the virtual xmath22 boson that produces the xmath12 pair the experimental determination of decay constants is one of the most important tests of calculations involving nonperturbative qcd such calculations have been performed using various models xcite or using lattice qcd lqcd the latter is now generally considered to be the most reliable way to calculate the quantity knowledge of decay constants is important for describing several key processes such as xmath23 mixing which depends on xmath24 a quantity that is also predicted by lqcd calculations experimental determination xcite of xmath24 with the leptonic decay of a xmath25 meson is however very limited as the rate is highly suppressed due to the smallness of the magnitude of the relevant ckm matrix element xmath26 the charm mesons xmath27 and xmath28 are better instruments to study the leptonic decays of heavy mesons since these decays are either less ckm suppressed or favored ie xmath29 and xmath30 are much larger than xmath31 thus the decay constants xmath32 and xmath33 determined from charm meson decays can be used to test and validate the necessary lqcd calculations applicable to the xmath34meson sector amongthe leptonic decays in the charm quark sector xmath35 decays are more accessible since they are ckm favored furthermore the large mass of the xmath11 lepton removes the helicity suppression that is present in the decays to lighter leptons the existence of multiple neutrinos in the final state however makes measurement of this decay challenging physics beyond the standard model sm might also affect leptonic decays of charmed mesons depending on the non sm features the ratio of xmath36 could be affected xcite as could the ratio xcite xmath37 any of the individual widths might be increased or decreased there is an indication of a discrepancy between the experimental determinations xcite of xmath33 and the most recent precision lqcd calculation xcite this disagreement is particularly puzzling since the cleo c determination xcite of xmath32 agrees well with the lqcd calculation xcite of that quantity some xcite conjecture that this discrepancy may be explained by a charged higgs boson or a leptoquark in this article we report an improved measurement of the absolute branching fraction of the leptonic decay xmath0 charge conjugate modes are implied with xmath1 from which we determine the decay constant xmath33 we use a data sample of xmath38 events provided by the cornell electron storage ring cesr and collected by the cleo c detector at the center of mass cm energy 
xmath39 mev near xmath3 peak production xcite the data sample consists of an integrated luminosity of xmath40 xmath41 containing xmath42 xmath3 pairs we have previously reported xcite measurements of xmath43 and xmath0 with a subsample of these data a companion article xcite reports measurements of xmath33 from xmath43 and xmath0 with xmath44 using essentially the same data sample as the one used in this measurement the cleo c detector xcite is a general purpose solenoidal detector with four concentric components utilized in this measurement a small radius six layer stereo wire drift chamber a 47layer main drift chamber a ring imaging cherenkov rich detector and an electromagnetic calorimeter consisting of 7800 csitl crystals the two drift chambers operate in a xmath45 t magnetic field and provide charged particle tracking in a solid angle of xmath46 of xmath47 the chambers achieve a momentum resolution of xmath48 at xmath49 gevxmath50 the main drift chamber also provides specific ionization xmath51 measurements that discriminate between charged pions and kaons the rich detector covers approximately xmath52 of xmath47 and provides additional separation of pions and kaons at high momentum the photon energy resolution of the calorimeter is xmath53 at xmath54 gev and xmath55 at xmath56 mev electron identification is based on a likelihood variable that combines the information from the rich detector xmath51 and the ratio of electromagnetic shower energy to track momentum xmath57 we use a geant based xcite monte carlo mc simulation program to study efficiency of signal event selection and background processes physics events are generated by evtgen xcite tuned with much improved knowledge of charm decays xcite and final state radiation fsr is modeled by the photos xcite program the modeling of initial state radiation isr is based on cross sections for xmath3 production at lower energies obtained from the cleo c energy scan xcite near the cm energy where we collect the sample the presence of two xmath58 mesons in a xmath3 event allows us to define a single tag st sample in which a xmath58 is reconstructed in a hadronic decay mode and a further double tagged dt subsample in which an additional xmath59 is required as a signature of xmath60 decay the xmath59 being the daughter of the xmath60 the xmath61 reconstructed in the st sample can be either primary or secondary from xmath62 or xmath63 the st yield can be expressed as xmath64 where xmath65 is the produced number of xmath3 pairs xmath66 is the branching fraction of hadronic modes used in the st sample and xmath67 is the st efficiency the xmath68 counts the candidates not events and the factor of 2 comes from the sum of xmath28 and xmath61 tags our double tag dt sample is formed from events with only a single charged track identified as an xmath69 in addition to a st the yield can be expressed as xmath70 where xmath71 is the leptonic decay branching fraction including the subbranching fraction of xmath1 decay xmath72 is the efficiency of finding the st and the leptonic decay in the same event from the st and dt yields we can obtain an absolute branching fraction of the leptonic decay xmath71 without needing to know the integrated luminosity or the produced number of xmath3 pairs xmath73 where xmath74 xmath75 is the effective signal efficiency because of the large solid angle acceptance with high segmentation of the cleo c detector and the low multiplicity of the events with which we are concerned xmath76 where xmath77 is the leptonic decay 
efficiency hence the ratio xmath78 is insensitive to most systematic effects associated with the st and the signal branching fraction xmath71 obtained using this procedure is nearly independent of the efficiency of the tagging mode to minimize systematic uncertainties we tag using three two body hadronic decay modes with only charged particles in the final state the three st modes and xmath79 are shorthand labels for xmath80 events within mass windows described below of the xmath81 peak in xmath82 and the xmath83 peak in xmath84 respectively no attempt is made to separate these resonance components in the xmath85 dalitz plot are xmath86 xmath79 and xmath87 using these tag modes also helps to reduce the tag bias which would be caused by the correlation between the tag side and the signal side reconstruction if tag modes with high multiplicity and large background were used the effect of the tag bias xmath88 can be expressed in terms of the signal efficiency xmath74 defined by xmath89 where xmath90 is the st efficiency when the recoiling system is the signal leptonic decay with single xmath59 in the other side of the tag as the general st efficiency xmath67 when the recoiling system is any possible xmath91 decays will be lower than the xmath90 sizable tag bias could be introduced if the multiplicity of the tag mode were high or the tag mode were to include neutral particles in the final state as shown in sec sec results this effect is negligible in our chosen clean tag modes the xmath92 decay is reconstructed by combining oppositely charged tracks that originate from a common vertex and that have an invariant mass within xmath93 mev of the nominal mass xcite we require the resonance decay to satisfy the following mass windows around the nominal masses xcite xmath94 xmath95 mev and xmath96 xmath97 mev we require the momenta of charged particles to be xmath56 mev or greater to suppress the slow pion background from xmath98 decays through xmath99 we identify a st by using the invariant mass of the tag xmath100 and recoil mass against the tag xmath101 the recoil mass is defined as xmath102 where xmath103 is the net four momentum of the xmath4 beam taking the finite beam crossing angle into account xmath104 is the four momentum of the tag with xmath105 computed from xmath106 and the nominal mass xcite of the xmath91 meson we require the recoil mass to be within xmath107 mev of the xmath108 mass xcite this loose window allows both primary and secondary xmath91 tags to be selected to estimate the backgrounds in our st and dt yields from the wrong tag combinations incorrect combinations that by chance lie within the xmath109 signal region we use the tag invariant mass sidebands we define the signal region as xmath110 mev xmath111 mev and the sideband regions as xmath112 mev xmath113 mev or xmath114 mev xmath115 mev where xmath116 is the difference between the tag mass and the nominal mass we fit the st xmath109 distributions to the sum of double gaussian signal function plus second degree chebyshev polynomial background function to get the tag mass sideband scaling factor the invariant mass distributions of tag candidates for each tag mode are shown in fig fig dm and the st yield and xmath109 sideband scaling factor are summarized in table table data single we find xmath117 summed over the three tag modes table data single summary of single tag st yields where xmath118 is the yield in the st mass signal region xmath119 is the yield in the sideband region xmath120 is the sideband scaling factor and 
xmath68 is the scaled sideband subtracted yield we considered six semileptonic decays xmath121 xmath122 xmath123 xmath124 xmath125 xmath126 and xmath127 as the major sources of background in the xmath128 signal region the second dominates the nonpeaking background and the fourth with xmath129 dominates the peaking background uncertainty in the signal yield due to nonpeaking background xmath130 is assessed by varying the semileptonic decay branching fractions by the precision with which they are known xcite imperfect knowledge of xmath131 gives rise to a systematic uncertainty in our estimate of the amount of peaking background in the signal region which has an effect on our branching fraction measurement of xmath132 we study differences in efficiency between data and mc events due to the extra energy requirement the extra track veto and the xmath133 requirement by using samples from data and mc events in which both the xmath134 and xmath2 satisfy our tag requirements ie double tag events we then apply each of the above mentioned requirements and compare the loss in efficiency between data and mc events in this way we obtain a correction of xmath135 for the extra energy requirement and systematic uncertainties on each of the three requirements of xmath136 all equal by chance the non xmath69 background in the signal xmath69 candidate sample is negligible xmath137 due to the low probability xmath138 per track that hadrons xmath139 or xmath140 are misidentified as xmath69 xcite uncertainty in these backgrounds produces a xmath141 uncertainty in the measurement of xmath142 the secondary xmath69 backgrounds from charge symmetric processes such as xmath143 dalitz decay xmath144 and xmath145 conversion xmath146 are assessed by measuring the wrong sign signal electron in events with xmath147 the uncertainty in the measurement from this source is estimated to be xmath148 other possible sources of systematic uncertainty include xmath68 xmath137 tag bias xmath149 tracking efficiency xmath148 xmath59 identification efficiency xmath150 and fsr xmath150 combining all contributions in quadrature the total systematic uncertainty in the branching fraction measurement is estimated to be xmath151 in summary using the sample of xmath152 tagged xmath28 decays with the cleo c detector we obtain the absolute branching fraction of the leptonic decay xmath153 through xmath154 xmath155 where the first uncertainty is statistical and the second is systematic this result supersedes our previous measurement xcite of the same branching fraction which used a subsample of the data used in this work the decay constant xmath33 can be computed using eq eq f with known values xcite xmath156 gevxmath157 xmath158 mev xmath159 mev and xmath160 s we assume xmath161 and use the value xmath162 given in ref we obtain xmath163 combining with our other determination xcite of xmath164 mev with xmath43 and xmath0 xmath165 decays we obtain xmath166 this result is derived from absolute branching fractions only and is the most precise determination of the xmath91 leptonic decay constant to date our combined result is larger than the recent lqcd calculation xmath167 mev xcite by xmath168 standard deviations the difference between data and lqcd for xmath33 could be due to physics beyond the sm xcite unlikely statistical fluctuations in the experimental measurements or the lqcd calculation or systematic uncertainties that are not understood in the lqcd calculation or the experimental measurements combining with our other determination xcite of xmath169 via
xmath44 we obtain xmath170 using this with our measurement xcite of xmath171 we obtain the branching fraction ratio xmath172 this is consistent with xmath173 the value predicted by the sm with lepton universality as given in eq eq f with known masses xcite we gratefully acknowledge the effort of the cesr staff in providing us with excellent luminosity and running conditions d cronin hennessy and a ryd thank the a p sloan foundation this work was supported by the national science foundation the us department of energy the natural sciences and engineering research council of canada and the uk science and technology facilities council c amsler et al particle data group phys lett b 667 1 2008 k ikado et al belle collaboration phys rev lett 97 251802 2006 b aubert et al babar collaboration phys rev d 77 011107 2008 a g akeroyd and c h chen phys rev d 75 075004 2007 a g akeroyd prog theor phys 111 295 2004 j l hewett arxiv hep-ph/9505246 w s hou phys rev d 48 2342 1993 e follana c t h davies g p lepage and j shigemitsu hpqcd collaboration phys rev lett 100 062002 2008 b i eisenstein et al cleo collaboration phys rev d 78 052003 2008 b a dobrescu and a s kronfeld phys rev lett 100 241802 2008 d cronin hennessy et al cleo collaboration arxiv 0801.3418 m artuso et al cleo collaboration phys rev lett 99 071802 2007 k m ecklund et al cleo collaboration phys rev lett 100 161801 2008 j p alexander et al cleo collaboration phys rev d 79 052001 2009 y kubota et al cleo collaboration nucl instrum methods phys res sect a 320 66 1992 d peterson et al nucl instrum methods phys res sect a 478 142 2002 m artuso et al nucl instrum methods phys res sect a 502 91 2003 s dobbs et al cleo collaboration phys rev d 76 112001 2007 j p alexander et al cleo collaboration phys rev lett 100 161804 2008 e barberio and z was comput phys commun 79 291 1994
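to make the single tag and double tag bookkeeping described above concrete the following minimal python sketch reproduces the yield relations xmath64 and xmath70 quoted in the text and solves them for the signal branching fraction the function names and all numerical inputs are illustrative placeholders and are not the collaboration s code or the measured values of this analysis

```python
# minimal sketch of the double-tag branching-fraction extraction described in
# the text: N_ST = 2 N_DD B_tag eps_ST and N_DT = 2 N_DD B_tag B_sig eps_DT,
# so B_sig = N_DT / (N_ST * (eps_DT / eps_ST)).  all numbers below are
# illustrative placeholders, not the measured values of this analysis.
import math

def branching_fraction(n_dt, n_st, eps_dt, eps_st):
    """absolute signal branching fraction from double-tag and single-tag yields"""
    eps_eff = eps_dt / eps_st   # effective signal efficiency; the tag side cancels
    return n_dt / (n_st * eps_eff)

def stat_uncertainty(n_dt, n_st, eps_dt, eps_st):
    """poisson-only statistical uncertainty, neglecting the small DT/ST correlation"""
    b = branching_fraction(n_dt, n_st, eps_dt, eps_st)
    return b * math.sqrt(1.0 / n_dt + 1.0 / n_st)

if __name__ == "__main__":
    n_st, n_dt = 30000.0, 200.0        # placeholder yields
    eps_st, eps_dt = 0.50, 0.40        # placeholder efficiencies
    b = branching_fraction(n_dt, n_st, eps_dt, eps_st)
    db = stat_uncertainty(n_dt, n_st, eps_dt, eps_st)
    print(f"B_sig = {b:.4%} +/- {db:.4%} (stat, placeholder inputs)")
```

the cancellation of the tag side efficiency in the ratio of double tag to single tag efficiencies is what makes the result nearly independent of the tagging mode as stated in the text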
we have studied the leptonic decay xmath0 via the decay channel xmath1 using a sample of tagged xmath2 decays collected near the xmath3 peak production energy in xmath4 collisions with the cleo c detector we obtain xmath5 and determine the decay constant xmath6 mev where the first uncertainties are statistical and the second are systematic
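as a companion sketch the decay constant determination mentioned above can be illustrated with the textbook standard model rate for a pseudoscalar leptonic decay gamma = (g_f^2 / 8 pi) f^2 m_lepton^2 m_meson (1 - m_lepton^2 / m_meson^2)^2 |v_cs|^2 which is assumed here to be the content of the eq f referred to in the text the constants in the snippet are rounded purely illustrative values and not the inputs quoted in the article

```python
# sketch: decay constant from a measured leptonic branching fraction, assuming
#   Gamma(P -> l nu) = (G_F^2 / 8 pi) f^2 m_l^2 M_P (1 - m_l^2 / M_P^2)^2 |V_cs|^2
# all constants are rounded, illustrative values, not the paper's inputs.
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV s, converts a lifetime into a width
G_F = 1.166e-5           # fermi constant in GeV^-2

def decay_constant(b_lnu, lifetime_s, m_lepton, m_meson, v_cs):
    """return f in GeV for a pseudoscalar meson decaying to lepton + neutrino"""
    width = b_lnu * HBAR_GEV_S / lifetime_s                   # partial width in GeV
    helicity = (1.0 - (m_lepton / m_meson) ** 2) ** 2         # helicity suppression factor
    denom = (G_F ** 2 / (8.0 * math.pi)) * m_lepton ** 2 * m_meson * helicity * v_cs ** 2
    return math.sqrt(width / denom)

if __name__ == "__main__":
    f = decay_constant(b_lnu=0.0059,       # placeholder branching fraction
                       lifetime_s=5.0e-13,
                       m_lepton=0.1057,    # muon mass in GeV
                       m_meson=1.968,      # D_s mass in GeV
                       v_cs=0.974)
    print(f"f = {1000.0 * f:.0f} MeV (illustrative inputs)")
```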
[sec:introduction]introduction [sec:detector]data and the cleo-c detector [sec:analysys]analysis method [sec:conclusion]summary
dependence logic xcite is an extension of first order logic which adds dependence atoms of the form xmath0 to it with the intended interpretation that the value of the term xmath1 is a function of the values of the terms xmath2 the introduction of such atoms is roughly equivalent to the introduction of non linear patterns of dependence and independence between variables of branching quantifier logic xcite or independence friendly logic xcite for example both the branching quantifier logic sentence xmath3 and the independence friendly logic sentence xmath4 correspond in dependence logic to xmath5 in the sense that all of these expressions are equivalent to the skolem formula xmath6 as this example illustrates the main peculiarity of dependence logic compared to the other above mentioned logics lies in the fact that in dependence logic the notion of dependence and independence between variables is explicitly separated from the notion of quantification this makes it an eminently suitable formalism for the formal analysis of the properties of dependence itself in a first order setting and some recent papers xcite explore the effects of replacing dependence atoms with other similar primitives such as independence atoms xcite multivalued dependence atoms xcite or inclusion or exclusion atoms xcite branching quantifier logic independence friendly logic and dependence logic as well as their variants are called logics of imperfect information indeed the truth conditions of their sentences can be obtained by defining for every model xmath7 and sentence xmath8 an imperfect information semantic game xmath9 between a verifier also called eloise and a falsifier also called abelard and then asserting that xmath8 is true in xmath7 if and only if the verifier has a winning strategy in xmath9 as an alternative to this non compositional game theoretic semantics which is an imperfect information variant of hintikka s game theoretic semantics for first order logic xcite hodges introduced in xcite team semantics also called trump semantics a compositional semantics for logics of imperfect information which is equivalent to game theoretic semantics over sentences and in which formulas are satisfied or not satisfied not by single assignments but by sets of assignments called teams in this work we will be mostly concerned with team semantics and some of its variants we refer the reader to the relevant literature for example to xcite and xcite for further information regarding these logics in the rest of this section we will content ourselves with recalling the definitions and results which will be useful for the rest of this work let xmath7 be a first order model and let xmath10 be a finite set of variables then an assignment over xmath7 with domain xmath10 is a function xmath11 from xmath10 to the set xmath12 of all elements of xmath7 furthermore for any assignment xmath11 over xmath7 with domain xmath10 any element xmath13 and any variable xmath14 not necessarily in xmath10 we write xmath15 for the assignment with domain xmath16 such that xmath17(w) = \left\{ \begin{array}{l l} m & \mbox{if } w = v \\ s(w) & \mbox{if } w \in V \backslash \{v\} \end{array} \right. for all xmath18 let xmath7 be a first order model and let xmath10 be a finite set of variables then a team xmath19 over xmath7 with domain xmath20 is a set of assignments from xmath10 to xmath7 let xmath19 be a team over xmath7 and let xmath10 be a finite set of variables and let xmath21 be a finite tuple of variables in its domain then xmath22 is the relation xmath23 furthermore we write xmath24 for xmath25 as is often
the case for dependence logic we will assume that all our formulas are in negation normal form let xmath26 be a first order signature then the set of all dependence logic formula with signature xmath26 is given by xmath27 where xmath28 ranges over all relation symbols xmath29 ranges over all tuples of terms of the appropriate arities xmath30 range over all terms and xmath14 ranges over the set xmath31 of all variables the set xmath32 of all free variables of a formula xmath8 is defined precisely as in first order logic with the additional condition that all variables occurring in a dependence atom are free with respect to it dl ts let xmath7 be a first order model let xmath19 be a team over it and let xmath8 be a dependence logic formula with the same signature of xmath7 and with free variables in xmath33 then we say that xmath19 satisfies xmath8 in xmath7 and we write xmath34 if and only if ts lit xmath8 is a first order literal and xmath35 for all xmath36 ts dep xmath8 is a dependence atom xmath37 and any two assignments xmath38 which assign the same values to xmath2 also assign the same value to xmath1 tsxmath39 xmath8 is of the form xmath40 and there exist two teams xmath41 and xmath42 such that xmath43 xmath44 and xmath45 tsxmath46 xmath8 is of the form xmath47 xmath48 and xmath49 tsxmath50 xmath8 is of the form xmath51 and there exists a function xmath52 such that xmath53 psi where xmath54 sfsv s in x tsxmath55 xmath8 is of the form xmath56 and xmath57 psi where xmath58 sm v s in x m in textttdomm the disjunction of dependence logic does not behave like the classical disjunction for example it is easy to see that xmath59 is not equivalent to xmath60 as the former holds for the team xmath61 and the latter does not however it is possible to define the classical disjunction in terms of the other connectives defin classicor let xmath62 and xmath63 be two dependence logic formulas and let xmath64 and xmath65 be two variables not occurring in them then we write xmath66 as a shorthand for xmath67 propo classicor for all formulas xmath62 and xmath63 all models xmath7 with at least two elements whose signature contains that of xmath62 and xmath63 and all teams xmath19 whose domain contains the free variables of xmath62 and xmath63 xmath68 the following four proportions are from xcite propo emptyteam for all models xmath7 and dependence logic formulas xmath8 xmath69 if xmath34 and xmath70 then xmath71 if xmath34 and xmath72 then xmath73 dltosigma let xmath74 be a dependence logic formula with free variables in xmath21 then there exists a xmath75 sentence xmath76 such that xmath77 for all suitable models xmath7 and for all nonempty teams xmath19 furthermore in xmath76 the symbol xmath28 occurs only negatively as proved in xcite there is also a converse for the last proposition sigmatodl let xmath76 be a xmath75 sentence in which xmath28 occurs only negatively then there exists a dependence logic formula xmath74 where xmath78 is the arity of xmath28 such that xmath77 for all suitable models xmath7 and for all nonempty teams xmath19 whose domain contains xmath21 because of this correspondence between dependence logic and existential second order logic it is easy to see that dependence logic is closed under existential quantification for all dependence logic formulas xmath79 over the signature xmath80 there exists a dependence logic formula xmath81 over the signature xmath26 such that xmath82 for all models xmath7 with domain xmath26 and for all teams xmath19 over the free variables of xmath8 
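since the team semantics clauses above are entirely combinatorial they are easy to prototype the following python sketch is an illustration and not part of the original text it represents a team as a list of assignments implements the dependence atom check of clause ts dep and the subteam split used for the disjunction by brute force and reproduces the observation made above that xmath59 and xmath60 are not equivalent

```python
# illustrative prototype of the team semantics clauses above (not from the paper):
# a team is a list of assignments (dicts), a dependence atom =(xs; y) holds when
# y is functionally determined by xs over the team, and the tensor disjunction
# looks for a split of the team into two subteams satisfying the two disjuncts.
from itertools import combinations

def dependence_atom(team, xs, y):
    """clause ts dep: any two assignments agreeing on xs also agree on y"""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if key in seen and seen[key] != s[y]:
            return False
        seen[key] = s[y]
    return True

def tensor(team, phi, psi):
    """disjunction clause: team = y1 union y2 with phi on y1 and psi on y2"""
    team = list(team)
    for r in range(len(team) + 1):
        for picked in combinations(range(len(team)), r):
            y1 = [team[i] for i in picked]
            y2 = [team[i] for i in range(len(team)) if i not in picked]
            if phi(y1) and psi(y2):
                return True
    return False

if __name__ == "__main__":
    team = [{"x": 0, "y": 1}, {"x": 1, "y": 2}]
    print(dependence_atom(team, ["x"], "y"))   # True: y is a function of x here
    team2 = [{"x": 0, "y": 1}, {"x": 0, "y": 2}]
    print(dependence_atom(team2, [], "y"))     # False: the constancy atom =(y) fails
    const_y = lambda t: dependence_atom(t, [], "y")
    print(tensor(team2, const_y, const_y))     # True: =(y) v =(y) holds after a split
```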
therefore in the rest of this work we will add second order existential quantifiers to the language of dependence logic and we will write xmath81 as a shorthand for the corresponding dependence logic expression game logics are logical formalisms for reasoning about games and their properties in a very general setting whereas the game theoreticsemantics approach attempts to use game theoretic techniques to interpret logical systems game logics attempt to put logic to the service of game theory by providing a high level language for the study of games they generally contain two different kinds of expressions 1 game terms which are descriptions of games in terms of compositions of certain primitive atomic games whose interpretation is presumed fixed for any given game model 2 formulas which in general correspond to assertions about the abilities of players in games in this subsection we are going to summarize the definition of a variant of dynamic game logic xcite from our formalism in this we follow xcite then in the next subsection we will discuss a remarkable connection between first order logic and dynamic game logic discovered by johan van benthem in xcite one of the fundamental semantic concepts of dynamic game logic is the notion of forcing relation let xmath83 be a nonempty set of states a forcing relation over xmath83 is a set xmath84 where xmath85 is the powerset of xmath83 in brief a forcing relation specifies the abilities of a player in a perfect information game xmath86 if and only if the player has a strategy that guarantees that whenever the initial position of the game is xmath11 the terminal position of the game will be in xmath19 a two player game is then defined as a pair of forcing relations satisfying some axioms let xmath83 be a nonempty set of states a game over xmath83 is a pair xmath87 of forcing relations over xmath83 satisfying the following conditions for all xmath88 all xmath89 and all xmath90 monotonicity if xmath91 and xmath92 then xmath93 consistency if xmath94 and xmath95 then xmath96 non triviality xmath97 determinacy if xmath98 then xmath99 where xmath100 this implies that the other player can force it to belong to the complement of xmath19 let xmath83 be a nonempty set of states let xmath101 be a nonempty set of atomic propositions and let xmath102 be a nonempty set of atomic game symbols then a game model over xmath83 xmath101 and xmath102 is a triple xmath103 where xmath104 is a game over xmath83 for all xmath105 and where xmath10 is a valutation function associating each xmath106 to a subset xmath107 the language of dynamic game logic as we already mentioned consists of game terms built up from atomic games and of formulas built up from atomic proposition the connection between these two parts of the language is given by the test operation xmath108 which turns any formula xmath8 into a test game and the diamond operation which combines a game term xmath109 and a formula xmath8 into a new formula xmath110 which asserts that agent xmath111 can guarantee that the game xmath109 will end in a state satisfying xmath8 let xmath101 be a nonempty set of atomic propositions and let xmath102 be a nonempty set of atomic game formulas then the sets of all game terms xmath109 and formulas xmath8 are defined as xmath112 for xmath113 ranging over xmath101 xmath114 ranging over xmath102 and xmath111 ranging over xmath115 we already mentioned the intended interpretations of the test connective xmath108 and of the diamond connective xmath110 the interpretations of the 
other game connectives should be clear xmath116 is obtained by swapping the roles of the players in xmath109 xmath117 is a game in which the existential player xmath118 chooses whether to play xmath119 or xmath120 and xmath121 is the concatenation of the two games corresponding to xmath119 and xmath120 respectively let xmath122 be a game model over xmath83 xmath102 and xmath101 then for all game terms xmath109 and all formulas xmath8 of dynamic game logic over xmath102 and xmath101 we define a game xmath123 and a set xmath124 as follows dgl atomic game for all xmath105 xmath125 dgl test for all formulas xmath8 xmath126 where xmath127 iff xmath128 and xmath36 xmath129 iff xmath130 or xmath36 for all xmath89 and all xmath19 with xmath131 dgl concat for all game terms xmath119 and xmath120 xmath132 where for all xmath88 and for xmath133 xmath134 xmath135 if and only if there exists a xmath136 such that xmath137 and for each xmath138 there exists a set xmath139 satisfying xmath140 such that xmath141 dglxmath142 for all game terms xmath119 and xmath120 xmath143 where xmath127 if and only if xmath144 or xmath145 and xmath129 if and only if xmath146 and xmath147 where as before xmath133 and xmath134multiblock footnote omitted dgl dual if xmath148 then xmath149 dglxmath150 xmath151 dgl atomic pr xmath152 dglxmath153 xmath154 dglxmath39 xmath155 dglxmath156 if xmath148 then for all xmath8 xmath157 if xmath128 we say that xmath8 is satisfied by xmath11 in xmath158 and we write xmath35 we will not discuss here the properties of this logic or the vast amount of variants and extensions of it which have been developed and studied it is worth pointing out however that xcite introduced a concurrent dynamic game logic that can be considered one of the main sources of inspiration for the transition logic that we will develop in subsection subsect tdl in this subsection we will briefly recall a remarkable result from xcite which establishes a connection between dynamic game logic and first order logic in brief as the following two theorems demonstrate either of these logics can be seen as a special case of the other in the sense that models and formulas of the one can be uniformly translated into models of the other in a way which preserves satisfiability and truth theo repfo1 let xmath122 be any game model let xmath8 be any game formula for the same language and let xmath89 then it is possible to uniformly construct a first order model xmath159 a first order formula xmath160 and an assignment xmath161 of xmath159 such that xmath162 theo repfo2 let xmath7 be any first order model let xmath8 be any first order formula for the signature of xmath7 and let xmath11 be an assignment of xmath7 then it is possible to uniformly construct a game model xmath163 a game formula xmath164 and a state xmath165 such that xmath166 we will not discuss here the proofs of these two results their significance however is something about which is necessary to spend a few words in brief what this back and forth representation between first order logic and dynamic game logic tells us is that it is possible to understand first order logic as a logic for reasoning about determined games in the next sections we will attempt to develop a similar result for the case of dependence logic we will now define a variant of dynamic game logic which we will call transition logic it deviates from the basic framework of dynamic game logic in two fundamental ways 1 it considers one player games against nature instead of two player games as is usual 
in dynamic game logic 2 it allows for uncertainty about the initial position of the game hence transition logic can be seen as a decision theoretic logic rather than a game theoretic one transition logic formulas as we will see correspond to assertions about the abilities of a single agent acting under uncertainty instead of assertions about the abilities of agents interacting with each other in principle it is certainly possible to generalize the approach discussed here to multiple agents acting in situations of imperfect information and doing so might cause interesting phenomena to surface but for the time being we will content ourselves with developing this formalism and discussing its connection with dependence logic our first definition is a fairly straightforward generalization of the concept of forcing relation let xmath83 be a nonempty set of states a transition system over xmath83 is a nonempty relation xmath167 satisfying the following requirements downwards closure if xmath168 and xmath169 then xmath170 monotonicity if xmath168 and xmath171 then xmath172 non creation xmath173 for all xmath174 non triviality if xmath175 then xmath176 informally speaking a transition system specifies the abilities of an agent for all xmath90 such that xmath168 the agent has a strategy which guarantees that the output of the transition will be in xmath177 whenever the input of the transition is in xmath19 the four axioms which we gave capture precisely this intended meaning as we will see a decision game is a triple xmath178 where xmath83 is a nonempty set of states xmath118 is a nonempty set of possible decisions for our agent and xmath179 is an outcome function from xmath180 to xmath85 if xmath181 we say that xmath182 is a possible outcome of xmath11 under xmath183 if xmath184 we say that xmath183 fails on input xmath11 let xmath178 be a decision game and let xmath90 then we say that xmath102 allows the transition xmath185 and we write xmath186 if and only if there exists a xmath187 such that xmath188 for all xmath36 that is if and only if our agent can make a decision which guarantees that the outcome will be in xmath177 whenever the input is in xmath19 a set xmath167 is a transition system if and only if there exists a decision game xmath178 such that xmath189 let xmath167 be any transition system let us enumerate its elements xmath190 and let us consider the game xmath191 where xmath192 suppose that xmath168 if xmath193 then xmath194 follows at once by definition if instead xmath175 by non triviality we have that xmath177 is nonempty too and furthermore xmath195 for some xmath196 then xmath197 for all xmath198 as required now suppose that xmath186 then there exists a xmath196 such that xmath199 for all xmath36 if xmath175 this implies that xmath200 and xmath201 hence by monotonicity and downwards closure xmath168 as required if instead xmath193 then by non creation we have again that xmath168 conversely consider a decision game xmath178 then the set of its abilities satisfies our four axioms downwards closure suppose that xmath202 and that xmath169 by definition there exists a xmath187 such that xmath188 for all xmath36 but then the same holds for all xmath203 and hence xmath204 monotonicity suppose that xmath202 and that xmath171 by definition there exists a xmath187 such that xmath188 for all xmath36 but then for all such xmath11 xmath205 too and hence xmath206 non creation let xmath174 and let xmath187 be any possible decision then trivially xmath188 for all xmath207 and hence xmath208 non 
triviality let xmath209 and suppose that xmath186 then there exists a xmath183 such that xmath188 for all xmath36 and hence in particular xmath210 therefore xmath177 is nonempty what this theorem tells us is that our notion of transition system is the correct one it captures precisely the abilities of an agent making choices under imperfect information and attempting to guarantee that if the initial state is in a set xmath19 the outcome will be in a set xmath177 let xmath83 be a nonempty set of states a trump over xmath83 is a nonempty downwards closed family of subsets of xmath83 whereas a transition system describes the abilities of an agent to transition from a set of possible initial states to a set of possible terminal states a trump describes the agent s abilities to reach some terminal state from a set of possible initial states let xmath211 be a transition system and let xmath212 then xmath213 forms a trump conversely for any trump xmath214 over xmath83 there exists a transition system xmath211 such that xmath215 for any nonempty xmath174 let xmath211 be a transition system then if xmath168 and xmath169 by downwards closure we have at once that xmath170 furthermore xmath173 for any xmath177 hence xmath216 is a trump as required conversely let xmath217 be a trump and let us enumerate its elements as xmath218 then define xmath211 as xmath219 it is easy to see that xmath211 is a transition system and by construction for xmath220 we have that xmath221 where we used the fact that xmath214 is downwards closed we can now define the syntax and semantics of transition logic let xmath101 be a set of atomic propositional symbols and let xmath222 be a set of atomic transition symbols then a transition model is a tuple xmath223 where xmath83 is a nonempty set of states xmath224 is a transition system over xmath83 for any xmath225 and xmath10 is a function sending each xmath106 into a trump of xmath83 let xmath101 be a set of atomic propositions and let xmath222 be a set of atomic transitions then the transition terms and formulas of our language are defined respectively as xmath226 where xmath227 ranges over xmath222 and xmath113 ranges over xmath101 let xmath228 be a transition model let xmath229 be a transition term and let xmath90 then we say that xmath229 allows the transition from xmath19 to xmath177 and we write xmath230 if and only if tl atomic tr xmath231 for some xmath225 and xmath232 tl test xmath233 for some transition formula xmath8 such that xmath234 in the sense described later in this definition and xmath92 tlxmath235 xmath236 and xmath237 for two xmath238 and xmath239 such that xmath240 and xmath241 tlxmath242 xmath243 xmath244 and xmath245 tl concat xmath246 and there exists a xmath247 such that xmath248 and xmath249 analogously let xmath8 be a transition formula and let xmath250 then we say that xmath19 satisfies xmath8 and we write xmath234 if and only if tlxmath251 xmath252 tl atomic pr xmath253 for some xmath106 and xmath254 tlxmath39 xmath255 and xmath256 or xmath257 tlxmath46 xmath258 xmath256 and xmath257 tlxmath156 xmath259 and there exists a xmath177 such that xmath230 and xmath260 for any transition model xmath261 transition term xmath229 and transition formula xmath8 the set xmath262 is a transition system and the set xmath263 is a trump by induction we end this subsection with a few simple observations about this logic first of all we did not take the negation as one of the primitive connectives indeed transition logic much like dependence logic has an intrinsically 
existential character it can be used to reason about which sets of possible states an agent may reach but not to reason about which ones such an agent must reach there is of course no reason in principle why a negation could not be added to the language just as there is no reason why a negation can not be added to dependence logic thus obtaining the far more powerful team logic xcite however this possible extension will not be studied in this work the connectives of transition logic are for the most part very similar to those of dynamic game logic and their interpretation should pose no difficulties the exception is the tensor operator xmath264 which substitutes the game union operator xmath117 and which while sharing roughly the same informal meaning behaves in a very different way from the semantic point of view for example it is not in general idempotent the decision game corresponding to xmath264 can be described as follows first the agent chooses an index xmath265 then he or she picks a strategy for xmath266 and plays accordingly however the choice of xmath111 may be a function of the initial state hence the agent can guarantee that the output state will be in xmath177 whenever the input state is in xmath19 only if he or she can split xmath19 into two subsets xmath238 and xmath239 and guarantee that the state in xmath177 will be reached from any state in xmath238 when xmath267 is played and from any state in xmath239 when xmath268 is played it is also of course possible to introduce a true choice operator xmath269 with semantical condition tlxmath142 xmath270 iff xmath244 or xmath245 but we will not explore this possibility any further in this work nor we will consider any other possible connectives such as for example the iteration operator tlxmath271 xmath272 iff there exist xmath273 and xmath274 such that xmath275 xmath276 and xmath277 for all xmath278 this subsection contains the central result of this work that is the analogues of theorems theo repfo1 and theo repfo2 for dependence logic and transition logic representing dependence logic models and formulas in transition logic is fairly simple defin dl2tl mod let xmath7 be a first order model then xmath279 is the transition model xmath280 such that xmath83 is the set of all teams over xmath7 the set of all atomic transition symbols is xmath281 and hence xmath222 is xmath282 for any variable xmath14 xmath283 subseteq y and xmath284 subseteq y for any first order literal or dependence atom xmath285 xmath286 defin dl2tl form let xmath8 be a dependence logic formula then xmath287 is the transition term defined as follows 1 if xmath8 is a literal or a dependence atom xmath288 2 if xmath255 xmath289 3 if xmath258 xmath290 4 if xmath291 xmath292 5 if xmath293 xmath294 theo tl rep1 for all first order models xmath7 teams xmath19 and formulas xmath8 the following are equivalent xmath34 xmath295 xmath296 xmath297 we show by structural induction on xmath8 that the first condition is equivalent to the last one the equivalences between the last one and the second and third ones are then trivial 1 if xmath8 is a literal or a dependence atom xmath298 if and only if xmath299 that is if and only if xmath34 2 xmath300 if and only if xmath237 for two xmath301 such that xmath302 and xmath303 by induction hypothesis this can be the case if and only if xmath304 and xmath305 that is if and only if xmath306 3 xmath307 if and only if xmath308 and xmath309 that is by induction hypothesis if and only if xmath310 4 xmath311 if and only if there exists a 
xmath177 such that xmath312 for some xmath313 and xmath314 by induction hypothesis and downwards closure this can be the case if and only if xmath53 psi for some xmath313 that is if and only if xmath315 5 xmath316 if and only if xmath317 for some xmath318 that is if and only if xmath57 psi that is if and only if xmath319 one interesting aspect of this representation result is that dependence logic formulas correspond to transition logic transitions not to transition logic formulas this can be thought of as one first hint of the fact that dependence logic can be thought of as a logic of transitions and in the later sections we will explore this idea more in depth representing transition models game terms and formulas in dependence logic is somewhat more complex let xmath320 be a transition model furthermore for any xmath225 let xmath321 and for any xmath106 let xmath322 then xmath323 is the first order model with domain for the disjoint union of the sets xmath324 and xmath325 xmath326 whose signature contains for every xmath225 a ternary relation xmath327 whose interpretation is xmath328 for every xmath106 a binary relation xmath329 whose interpretation is xmath330 for any transition formula xmath8 and variable xmath331 the dependence logic formula xmath332 is defined as 1 xmath333 is xmath251 2 for all xmath106 xmath334 is xmath335 3 xmath336 is xmath337 where xmath338 is the classical disjunction introduced in definition defin classicor 4 xmath339 is xmath340 5 xmath341 is xmath342 where for any transition term xmath229 variable xmath331 and unary relation symbol xmath343 xmath344 is defined as 1 for all xmath225 xmath345 is xmath346 2 for all formulas xmath8 xmath347 is xmath348 3 xmath349 4 xmath350 5 xmath351 for a new and unused variable xmath352 theo tl rep2 for all transition models xmath320 transition terms xmath229 transition formulas xmath8 variables xmath331 sets xmath353 and teams xmath19 over xmath323 with xmath354 is a set of states of the transition model xmath355 and xmath356 the proof is by structural induction on terms and formulas let us first consider the cases corresponding to formulas 1 for all teams xmath19 xmath357 and xmath358 as requiredsuppose that xmath359 then there exists a xmath360 such that xmath361 vpj x hence we have that xmath362 and by downwards closure this implies that xmath363 and hence that xmath364 as required conversely suppose that xmath364 then xmath363 and hence xmath365 for some xmath366 then we have by definition that xmath361 vpj x and finally that xmath367 3 by proposition propo classicor xmath368 if and only if xmath369 or xmath370 by induction hypothesis this is the case if and only if xmath371 or xmath372 that is if and only if xmath373 4 xmath374 if and only if xmath369 and xmath375 that is by induction hypothesis if and only if xmath376 xmath377 if and only if there exists a xmath343 such that xmath378 and xmath379 lnot py vee psidly by induction hypothesis the first condition holds if and only if xmath380 as for the second one it holds if and only if xmath381 y1 cup y2 for two xmath41 xmath42 such that xmath382 and xmath383 but then we must have that xmath384 and that xmath385 therefore by downwards closure xmath386 and finally xmath387 conversely suppose that there exists a xmath343 such that xmath380 and xmath388 then by induction hypothesis we have that xmath389 and that xmath379 lnot py vee psidlx and hence xmath390 now let us consider the cases corresponding to transition terms 1 suppose that xmath391 if xmath193 then xmath392 and 
hence by non creation we have that xmath393 as required let us assume instead that xmath175 then by hypothesis there exists a xmath360 such that there exists a xmath313 such that xmath394f y rti x y xmath394tdly lnot rti x y vee py from the first condition it follows that for every xmath395 there exists a xmath396 such that xmath397 therefore by the definition of xmath327 every such xmath113 must be in xmath398 from the second condition it follows that whenever xmath397 and xmath399 xmath400 and since xmath401 this implies that xmath402 by the definition of xmath327 hence by monotonicity and downwards closure we have that xmath403 and that xmath404 as required conversely suppose that xmath405 for some xmath406 if xmath392 then xmath193 and hence by proposition propo emptyteam we have that xmath407 as required otherwise by non triviality xmath408let now xmath409 be any of its elements and let xmath410 for all xmath411 then xmath412f y rti x y as any assignment of this team sends xmath331 to some element of xmath398 and xmath352 to xmath413 furthermore let xmath414 and let xmath396 be such that xmath415 then xmath416 and hence xmath412tdly lnot rti x y vee py so in conclusion xmath417 as required 2 xmath418 if and only if xmath419 and xmath420 that is if and only if xmath421 3 xmath422 if and only if xmath237 for two xmath423 such that xmath237 and therefore xmath424 xmath425 that is by induction hypothesis xmath426 xmath427 that is by induction hypothesis xmath428 hence if xmath429 then xmath430 conversely if xmath431 for two xmath324 xmath325 such that xmath432 and xmath433 let xmath434 clearly xmath237 and furthermore by induction hypothesis xmath425 and xmath427 hence xmath429 as required xmath435 if and only if xmath436 and xmath437 that is by induction hypothesis if and only if xmath438 5 xmath439 if and only if there exists a xmath440 such that xmath441 and there exists a xmath442 such that xmath443 by downwards closure if this is the case then xmath444 too and hence xmath445 as required conversely suppose that there exists a xmath440 such that xmath441 and xmath444 then by induction hypothesis xmath446 and furthermore xmath381 can be split into xmath447 sy not in q and xmath448 sy in q it is trivial to see that xmath449 and furthermore since xmath450 and xmath444 by induction hypothesis we have that xmath451 thus xmath379 forall y lnot qy vee tau2dlyp and finally xmath452 and this concludes the proof hence the relationship between transition logic and dependence logic is analogous to the one between dynamic game logic and first order logic in the next sections we will develop variants of dependence logic which are syntactically closer to transition logic while still being first order as we will see the resulting frameworks are expressively equivalent to dependence logic on the level of satisfiability but can be used to represent finer grained phenomena of transitions between sets of assignments now that we have established a connection between dependence logic and a variant of dynamic game logic it is time to explore what this might imply for the further development of logics of imperfect information if as theorems theo tl rep1 and theo tl rep2 suggest dependence logic can be thought of as a logic of imperfect information decision problems perhaps it could be possible to develop variants of dependence logic in which expressions can be interpreted directly as transition systems in what follows we will do exactly that first with transition dependence logic a variant of dependence logic 
expressively equivalent to it which is also a quantified version of transition logic and then with dynamic dependence logic in which all expressions are interpreted as transitions but why would we interested in such variants of dependence logic one possible answer which we will discuss in this subsection is that transitions between teams are already a central object of study in the field of dependence logic albeit in a non explicit manner after all the semantics of dependence logic interprets quantifiers in terms of transformations of teams and disjunctions in terms of decompositions of teams into subteams this intuition is central to the study of issues of interdefinability in dependence logic and its variants like for example the ones discussed in xcite as a simple example let us recall definition defin classicor xmath453 where xmath64 and xmath65 are new variables as we said in proposition propo classicor xmath454 if and only if xmath48 or xmath49 we will now sketch the proof of this result and as we will see this proof will hinge on the fact that the above expression can be read as a specification of the following algorithm 1 choose an element xmath455 and extend the team xmath19 by assigning xmath456 as the value of xmath64 for all assignments 2 choose an element xmath457 and further extend the team by assigning xmath458 as the value of xmath65 for all assignments 3 split the resulting team into two subteams xmath41 and xmath42 such that 1 xmath62 holds in xmath41 and the values of xmath64 and xmath65 coincide for all assignments in it 2 xmath63 holds in xmath42 and the values of xmath64 and xmath65 differ for all assignments in it since the values of xmath64 and xmath65 are chosen to always be respectively xmath456 and xmath458 one of xmath41 and xmath42 is empty and the other is of the form xmath459 and since xmath64 and xmath65 do not occur in xmath62 or xmath63 the above algorithm can succeed for some choice of xmath456 and xmath458 only if xmath48 or xmath49 as another slightly more complicated example let us consider the following problem given four variables xmath460 xmath461 xmath462 and xmath463 let xmath464 be an exclusion atom holding in a team xmath19 if and only if for all xmath38 xmath465 that is if and only if the sets of the values taken by xmath466 and by xmath467 in xmath19 are disjoint by theorem sigmatodl we can tell at once that there exists some dependence logic formula xmath468 such that for all suitable xmath7 and xmath19 xmath469 if and only if xmath470 but what about the converse for example can we find an expression xmath471 in the language of first order logic augmented with these exclusion atoms but with no dependence atoms such that for all suitable xmath7 and xmath19 xmath472 if and only if xmath473 as discussed in xcite in a more general setting the answer is positive and one such xmath471 is xmath474 where xmath475 is some variable other than xmath331 and xmath352 in the second disjunct can be removed but for simplicity we will keep it why is this the case well let us consider any team xmath19 with domain containing xmath331 and xmath352 and let us evaluate xmath476 over it as shown graphically in figure fig f1 the transitions between teams occurring during the evaluation of the formula correspond to the following algorithm 1 first assign all possible values to the variable xmath475 for all assignments in xmath331 thus obtaining xmath477 sm z s in x m in textttdomm 2 then remove from xmath477 all assignments xmath11 for which xmath478 keeping only the 
ones for which xmath479 3 then verify that for any possible fixed value of xmath331 the possible values of xmath352 and xmath475 are disjoint this algorithm succeeds only if xmath352 is a function of xmath331 indeed suppose that instead there are two assignments xmath38 such that xmath480 xmath481 and xmath482 for three xmath483 with xmath484 now we have that xmath485 sc z sb z sc z subseteq xm z and since xmath484 we have that the assignments xmath486 and xmath487 are not removed from the team in the second step of the proof but then xmath486xz a c sb zxy and therefore it is not true that xmath488 and conversely if in the team xmath19 the value of xmath352 is a function of the value of xmath331 then by splitting xmath477 into the two subteams xmath489 s in x sy sz and xmath490 sy not sz we have that xmath491 xmath492 and xmath493 since for all xmath494 xmath495 on the other hand one dependence logic expression corresponding to xmath464 is xmath496 where xmath497 xmath498 xmath64 and xmath65 are new variable we encourage the interested reader to verify that this is the case by examining the transitions between teams corresponding to the formula in brief the intuition is that first we extend our team by picking all possible pairs of values for xmath497 and xmath498 then for any such pair we flag through our choice of xmath64 and xmath65 whether xmath499 is different from xmath466 or from xmath467 this implies that no such pair is equal to both xmath466 and xmath467 or in other words that xmath466 and xmath467 have no value in common more and more complex examples of definability results of this kind can be found in xcite but what we want to emphasize here is that all these examples like the one we discussed in depth here have a natural interpretation in terms of algorithms which transform teams and apply simple tests to them as the above one hence we hope that the development of variants of dependence logic in which these transitions are made explicit might prove itself useful for the further study of this interesting class of problems as stated we will now define a variant of dependence logic which can also be seen as a quantified variant of transition logic we will then prove that the resulting transition dependence logic is expressively equivalent to dependence logic in the sense that any dependence logic formula is equivalent to some transition dependence logic formula and vice versa let xmath26 be a first order signature then the sets of all transition terms and of all formulas of dependence transition logic are given by the rules xmath500 where xmath14 ranges over all variables in xmath31 xmath28 ranges over all relation symbols of the signature xmath29 ranges over all tuples of terms of the required arities xmath501 ranges over xmath502 and xmath30 range over the terms of our signature let xmath7 be a first order model let xmath229 be a first order transition term of the same signature and let xmath19 and xmath177 be teams over xmath7 then we say that the transition xmath503 is allowed by xmath229 in xmath7 and we write xmath504 if and only if tdlxmath50 xmath229 is of the form xmath505 for some xmath506 and there exists a xmath313 such that xmath507subseteq y tdlxmath55 xmath229 is of the form xmath508 for some xmath506 and xmath509 subseteq y tdl test xmath229 is of the form xmath108 xmath34 in the sense given later in this definition and xmath92 tdlxmath235 xmath229 is of the form xmath264 and xmath237 for some xmath238 and xmath239 such that xmath510 and xmath511 tdlxmath242 
xmath229 is of the form xmath512 xmath513 and xmath514 tdl concat xmath229 is of the form xmath515 and there exists a team xmath136 such that xmath516 and xmath517 similarly if xmath8 is a formula and xmath19 is a team with domain xmath31 then we say that xmath19 satisfies xmath8 in xmath7 and we write xmath34 if and only if tdl lit xmath8 is a first order literal and xmath35 in the usual first order sense for all xmath36 tdl dep xmath8 is a dependence atom xmath37 and any two xmath38 which assign the same values to xmath2 also assign the same value to xmath1 tdlxmath39 xmath8 is of the form xmath518 and xmath519 or xmath520 tdlxmath46 xmath8 is of the form xmath521 xmath519 and xmath520 tdlxmath156 xmath8 is of the form xmath522 and there exists a xmath177 such that xmath504 and xmath71 as the next theorem shows in this semantics formulas and transitions are interpreted in terms of trumps and transition systems for all transition dependence logic formulas xmath8 all models xmath7 and all teams xmath19 and xmath177 we have that downwards closure if xmath34 and xmath70 then xmath73 empty team property xmath69 furthermore for all transition dependence logic transition terms xmath229 all models xmath7 and all teams xmath19 xmath177 and xmath136 downwards closure if xmath504 and xmath523 then xmath524 monotonicity if xmath504 and xmath525 then xmath526 non creation for all xmath177 xmath527 non triviality if xmath175 then xmath528 the proof is by structural induction over xmath8 and xmath229 and presents no difficulties whatsoever also it is not difficult to see on the basis of the results of the previous section that this new variant of dependence logic is equivalent to the usual one for every dependence logic formula xmath8 there exists a transition dependence logic transition term xmath529 such that xmath530 for all first order models xmath7 and teams xmath19 xmath529 is defined by structural induction on xmath8 as follows 1 if xmath8 is a first order literal or a dependence atom then xmath531 2 if xmath8 is xmath518 then xmath532 3 if xmath8 is xmath521 then xmath533 4 if xmath8 is xmath51 then xmath534 5 if xmath8 is xmath56 then xmath535it is then trivial to verify again by induction on xmath8 that xmath34 if and only if xmath536 as required this representation result associates dependence logic formulas to transition dependence logic transition terms this fact highlights the dynamical nature of dependence logic operators which we discussed in the previous subsection in this framework quantifiers describe transformations of teams the dependence logic connectives are operations over games and the literals are interpreted as tests in fact one might wonder what is the purpose of transition dependence logic formulas could we do away with them altogether and develop a variant of transition dependence logic in which all formulas are transitions later we will explore this idea further but first let us verify that transition dependence logic is no more expressive than dependence logic for every transition dependence logic formula xmath8there exists a dependence logic formula xmath537 such that xmath538 for all first order models xmath7 and teams xmath19 furthermore for every transition dependence logic transition term xmath229 and dependence logic formula xmath211 there is a dependence logic formula xmath539 such that xmath540 again for all first order models xmath7 and teams xmath19 we prove the two claims together by structural induction over xmath8 and xmath229 first let us consider the cases 
corresponding to formulas 1 if xmath8 is a first order literal or a dependence atom let xmath537 be xmath8 itself as the interpretation of these expressions is the same in dependence logic and in transition dependence logic there is nothing to prove 2 if xmath8 is of the form xmath40 let xmath537 be xmath541 this expression holds in a team if and only if xmath542 or xmath543 hold that is by induction hypothesis if and only if xmath62 or xmath63 do if xmath8 is of the form xmath47 let xmath537 be xmath544 then xmath537 holds if and only if xmath62 and xmath63 do that is if and only if xmath8 does 4 if xmath8 is of the form xmath522 let xmath21 be the tuple of all variables occurring in xmath545 let xmath28 be a new xmath78ary relation and let xmath537 be xmath546 indeed suppose that xmath547 then for some relation xmath28 there exists a xmath177 such that xmath504 and xmath548 furthermore xmath549 and therefore for the set xmath550 we have that xmath551 but then by downwards closure and locality xmath552 and therefore xmath553 conversely suppose that xmath554 then there exists a xmath177 such that xmath504 and xmath71 now let xmath28 be xmath555 clearly xmath556 since xmath557 and furthermore xmath558 by locality and by the fact that by induction hypothesis xmath552 now let us consider the cases corresponding to transitions 1 if xmath229 is of the form xmath505 for some variable xmath14 let xmath559 be xmath560 indeed suppose that xmath561 then xmath53 theta for some xmath313 and by choosing xmath562 we have that xmath563 and xmath564 as required conversely suppose that for some xmath177 xmath563 and xmath564 then for some xmath313 xmath507 subseteq y and by downwards closure we have that xmath53 theta 2 if xmath229 is of the form xmath508 for some variable xmath14 let xmath559 be xmath565 indeed suppose that xmath566 then xmath57 theta and if we choose xmath567 we have at once that xmath568 and xmath564 conversely if for some xmath177 xmath568 and xmath564 then xmath509 subseteq y and by downwards closure xmath57 theta 3 if xmath229 is of the form xmath108 let xmath559 be xmath569 indeed suppose that xmath570 then by induction hypothesis xmath34 and for xmath571 we have that xmath572 furthermore xmath564 as required conversely suppose that for some xmath177 xmath572 and xmath564 then xmath34 and therefore xmath547 and furthermore xmath92 and hence by downwards closure xmath573 hence xmath574 4 if xmath229 is of the form xmath264 and xmath21 is the tuple of all free variables of xmath211 then let xmath559 be xmath575 where xmath28 is a new xmath576ary relation symbol indeed suppose that xmath577 then there exists a relation xmath28 and two subteams xmath238 and xmath239 of xmath19 such that xmath237 xmath578 and xmath579 hence there are two teams xmath41 and xmath42 such that xmath580 xmath581 xmath582 and xmath583 now let xmath177 be xmath584 by monotonicity we have that xmath510 and xmath511 and furthermore xmath557 too that is for all xmath585 xmath586 is in xmath28 since xmath587 this implies that xmath564 by locality and downwards closure conversely suppose that there is a xmath177 such that xmath588 and xmath564 then let xmath28 be xmath589 now xmath237 for two xmath238 andxmath239 such that xmath510 and xmath511 and by induction hypothesis we have that xmath590 and xmath591 but then xmath592 and furthermore by locality we have that xmath587 hence xmath593 as required if xmath229 is of the form xmath512 and xmath21 is the tuple of all variables of xmath211 then let xmath559 be 
xmath594 indeed suppose that xmath577 then for some relation xmath28 by induction hypothesis there exist teams xmath41 and xmath42 such that xmath595 xmath596 xmath582 and xmath583 now let xmath177 be xmath584 as before by monotonicity we have that xmath513 and xmath597 and hence xmath598 finally since xmath587 we have that xmath599 as required conversely suppose that there is a xmath177 such that xmath598 and xmath564 since xmath598 xmath513 and xmath597 now let xmath28 be xmath555 by induction hypothesis xmath600 and xmath601 and furthermore since xmath564 we have that xmath587 if xmath229 is of the form xmath515 let xmath559 be xmath602 indeed xmath603 if and only if there is a xmath177 such that xmath604 and xmath605 that is if and only if there are a xmath177 and a xmath136 such that xmath513 xmath606 and xmath607 however in a sense transition dependence logic allows one to consider subtler distinctions than dependence logic does the formula xmath608 for example could be translated as any of xmath609 xmath610 xmath611 xmath612 the intended interpretations of these formulas are rather different even though they happen to be satisfied by the same teams and for this reason transition dependence logic may be thought of as a proper refinement of dependence logic even though it has exactly the same expressive power dynamic semantics is the name given to a family of semantical frameworks which subscribe to the following principle xcite the meaning of a sentence does not lie in its truth conditions but rather in the way it changes the representation of the information of the interpreter in various forms this intuition can be found prefigured in some of the later work of ludwig wittgenstein as well as in the research of philosophers of language such as austin grice searle strawson and others xcite but its formal development can be traced back to the work of groenendijk and stokhof about the proper treatment of pronouns in formal linguistics xcite we refer to xcite for a comprehensive analysis of the linguistic issues which caused such a development as well as for a description of the ways in which this framework was adapted in order to model presuppositions questions answers and other phenomena here we will only present a formulation of dynamic predicate semantics the alternative semantics for first order logic which was developed in the above mentioned paper by groenendijk and stokhof let xmath8 be a first order formula let xmath7 be a suitable first order model and let xmath11 and xmath182 be two assignments then we say that the transition from xmath11 to xmath182 is allowed by xmath8 in xmath7 and we write xmath613 if and only if dpl atom xmath8 is an atomic formula xmath614 and xmath35 in the usual sense dplxmath153 xmath8 is of the form xmath615 xmath616 and for all assignments xmath617 xmath618 dplxmath46 xmath8 is of the form xmath47 and there exists an xmath617 such that xmath619 and xmath620 dplxmath39 xmath8 is of the form xmath40 xmath614 and there exists an xmath617 such that xmath619 or xmath621 dplxmath622 xmath8 is of the form xmath623 xmath614 and for all xmath617 it holds that xmath624 dplxmath50 xmath8 is of the form xmath625 and there exists an element xmath626 such that xmath627 rightarrow s psi dplxmath55 xmath8 is of the form xmath628 xmath614 and for all elements xmath626 there exists an xmath617 such that xmath627 rightarrow h psi a formula xmath8 is satisfied by an assignment xmath11 if andonly if there exists an assignment xmath182 such that xmath613 in this case we 
will write xmath35 we will discuss neither the formal properties of this formalism nor its linguistic applications here all that is relevant for our purposes is that according to it formulas are interpreted as transitions from assignments to assignments and furthermore that the rule for conjunction allows us to bind occurrences of a variable of the second conjunct to quantifiers occurring in the first one by the rules given it is easy to see that xmath629 if and only if xmath630 that is if and only if xmath631 differently from the case of tarski s semantics the similarity between this semantics and our semantics for transition terms should be evident hence it seems natural to ask whether we can adopt for a suitable variant of dependence logic the following variant of groenendijk and stokhof s motto the meaning of a formula does not lie in its satisfaction conditions but rather in the team transitions it allows from this point of view transition terms are the fundamental objects of our syntax and formulas can be removed altogether from the language although of course the tests corresponding to literals and dependence formulas should still be available as in groenendijk and stokhof s logic satisfaction becomes then a derived concept in brief a team xmath19 can be said to satisfy a term xmath229 if and only if there exists a xmath177 such that xmath229 allows the transition from xmath19 to xmath177 or in other words if and only if some set of non losing outcomes can be reached from the set xmath19 of initial positions in the game corresponding to xmath229 in the next section we will make use of these intuitions to develop another terser version of dependence logic and finally we will discuss some implications of this new version for the further developments and for the possible applications of this interesting logical formalism we will now develop a formula free variant of transition dependence logic along the lines of groenendijk and stockhof s dynamic predicate logic let xmath26 be a first order signature the set of all formulas of dynamic dependence logic over xmath26 is given by the rules xmath632 where as usual xmath28 ranges over all relation symbols of our signature xmath29 ranges over all tuples of terms of the required lengths xmath501 ranges over xmath502 xmath30 range over all terms and xmath14 ranges over xmath31 the semantical rules associated to this language are precisely as one would expect ddl tts let xmath7 be a first order model let xmath229 be a dynamic dependence logic formula over the signature of xmath7 and let xmath19 and xmath177 be two teams over xmath7 with domain xmath31 then we say that xmath229 allows the transition xmath185 in xmath7 and we write xmath504 if and only if ddl lit xmath229 is a first order literal xmath633 in the usual first order sense for all xmath36 and xmath92 ddl dep xmath229 is a dependence atom xmath37 xmath92 and any two assignments xmath38 which coincide over xmath2 also coincide over xmath1 ddlxmath50 xmath229 is of the form xmath505 for some xmath506 and xmath507 subseteq y for some xmath52 ddlxmath55 xmath229 is of the form xmath508 for some xmath506 and xmath509 subseteq y ddlxmath235 xmath229 is of the form xmath264 and xmath237 for two teams xmath238 and xmath239 such that xmath510 and xmath511 ddlxmath242 xmath229 is of the form xmath512 xmath513 and xmath597 ddl concat xmath229 is of the form xmath515 and there exists a xmath136 such that xmath516 and xmath517 a formula xmath229 is said to be satisfied by a team xmath19 in a model 
xmath7 if and only if there exists a xmath177 such that xmath504 and if this is the case we will write xmath634 it is not difficult to see that dynamic dependence logic is equivalent to transition dependence logic and therefore to dependence logic let xmath8 be a dependence logic formula then there exists a dynamic dependence logic formula xmath635 which is equivalent to it in the sense that xmath636 for all suitable teams xmath19 and models xmath7 we build xmath635 by structural induction 1 if xmath8 is a literal or a dependence atom then xmath637 2 if xmath8 is xmath40 then xmath638 3 if xmath8 is xmath47 then xmath639 4 if xmath8 is xmath625 then xmath640 5 if xmath8 is xmath628 then xmath641 let xmath229 be a dynamic dependence logic formula then there exists a transition dependence logic transition term xmath642 such that xmath643 for all suitable xmath19 xmath177 and xmath7 and hence such that xmath644 we build xmath642 by structural induction 1 if xmath229 is a literal or dependence atom then xmath645 2 if xmath229 is of the form xmath505 or xmath508 then xmath646 3 if xmath229 is of the form xmath264 then xmath647 4 if xmath229 is of the form xmath512 then xmath648 5 if xmath229 is of the form xmath515 then xmath649 dynamic dependence logic is equivalent to transition dependence logic and to dependence logic this follows from the two previous results and from the equivalence between dependence logic and transition dependence logic in this work we established a connection between a variant of dynamic game logic and dependence logic and we used it as the basis for the development of variants of dependence logic in which it is possible to talk directly about transitions from teams to teams this suggests a new perspective on dependence logic and team semantics one which allows us to study them as a special kind of algebra of nondeterministic transitions between relations one of the main problems that is now open is whether it is possible to axiomatize these algebras in the same sense in which in xcite allen mann offers an axiomatization of the algebra of trumps corresponding to if logic or equivalently to dependence logic furthermore we might want to consider different choices of connectives like for example ones related to the theory of database transactions the investigation of the relationships between the resulting formalisms is a natural continuation of the currently ongoing work on the study of the relationship between various extensions of dependence logic and promises to be of great utility for the further development of this fascinating line of research the author wishes to thank johan van benthem and jouko väänänen for a number of useful suggestions and insights furthermore he wishes to thank the reviewers for a number of highly useful suggestions and comments hintikka j and g sandu 1989 informational independence as a semantic phenomenon in j fenstad i frolov and r hilpinen eds logic methodology and philosophy of science elsevier pp 571-589 kontinen j and v nurmi 2009 team logic and second order logic in h ono m kanazawa and r de queiroz eds logic language information and computation vol 5514 of lecture notes in computer science springer berlin heidelberg pp 230-241 parikh r 1985 the logic of games and its applications in selected papers of the international conference on foundations of computation theory on topics in the theory of computation new york ny usa pp 111-139 väänänen j 2007b team logic in j van benthem d gabbay and b löwe eds interactive logic selected papers from the 7th
augustus de morgan workshop amsterdam university press pp
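to make the team transition reading sketched above more tangible, here is a minimal illustrative example in python (an assumption of this note, not part of the paper's formalism): teams are represented as lists of assignments, the dependence atom is a test on a team, and the existential quantifier is read as a nondeterministic transition from a team to one of its possible successor teams; the variable names, the two element domain and the strict choice of one value per assignment are choices made only for the example.

from itertools import product

def satisfies_dependence(team, xs, y):
    # dependence atom =(xs, y): any two assignments that agree on all
    # variables in xs must also agree on y
    for s, t in product(team, repeat=2):
        if all(s[v] == t[v] for v in xs) and s[y] != t[y]:
            return False
    return True

def exists_transitions(team, var, domain):
    # all teams reachable from 'team' by the transition for 'exists var':
    # every assignment is extended with one chosen value for var
    successors = []
    for choice in product(domain, repeat=len(team)):
        new_team = []
        for s, value in zip(team, choice):
            t = dict(s)
            t[var] = value
            new_team.append(t)
        successors.append(new_team)
    return successors

# usage: a team over x, y in which y is functionally determined by x
X = [{"x": 0, "y": 0}, {"x": 1, "y": 1}]
print(satisfies_dependence(X, ["x"], "y"))        # True
print(len(exists_transitions(X, "z", [0, 1])))    # 4 candidate successor teams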
we examine the relationship between dependence logic and game logics a variant of dynamic game logic called transition logic is developed and we show that its relationship with dependence logic is comparable to the one between first order logic and dynamic game logic discussed by van benthem this suggests a new perspective on the interpretation of dependence logic formulas in terms of assertions about reachability in games of imperfect information against nature we then capitalize on this intuition by developing expressively equivalent variants of dependence logic in which this interpretation is taken to the foreground
introduction transition logic dynamic variants of dependence logic further work acknowledgements
the properties of the relativistic fermi gas rfg model of the nucleus xcite have inspired the idea of superscaling in the rfg model the responses of the system to an external perturbationare related to a universal function of a properly defined scaling variable which depends upon the energy and the momentum transferred to the system the adjective universal means that the scaling function is independent on the momentum transfer this is called scaling of first kind and it is also independent on the number of nucleons and this is indicated as scaling of second kind the scaling function can be defined in such a way to result independent also on the specific type of external one body operator this feature is usually called scaling of zeroth kind xcite one has superscaling when the three kinds of scaling are verified this happens in the rfg model the theoretical hypothesis of superscaling can be empirically tested by extracting response functions from the experimental cross sections and by studying their scaling behaviors inclusive electron scattering data in the quasi elastic region have been analyzed in this way xcite the main result of these studies is that the longitudinal responses show superscaling behavior the situation for the transverse responses is much more complicated the presence of superscaling features in the data is relevant not only by itself but also because this property can be used to make predictions in effect from a specific set of longitudinal response data xcite an empirical scaling function has been extracted xcite and has been used to obtain neutrino nucleus cross sections in the quasi elastic region xcite we observe that the empirical scaling function is quite different from that predicted by the rfg model this indicates the presence of physics effects not included in the rfg model but still conserving the scaling properties we have investigated the superscaling behavior of some of these effects they are the finite size of the system its collective excitations the meson exchange currents mec and the final state interactions fsi the inclusion of these effects produce scaling functions rather similar to the empirical one our theoretical universal scaling functions xmath3 and the empirical one xmath4 have been used to predict electron and neutrino cross sections the definitions of the scaling variables and functions have been presented in a number of papers xcite therefore we do not repeat them here the basic quantities calculated in our work are the electromagnetic and the weak nuclear response functions we have studied their scaling properties by direct numerical comparison for a detailed analysis see ref xcite we present in fig fig fexp the experimental longitudinal and transverse scaling function data for the xmath0c xmath2ca and xmath5fe nuclei given in ref xcite for three values of the momentum transfer we observe that the xmath6 functions scale better than the xmath7 ones the xmath7 scaling functions of xmath0c especially for the lower xmath8 values are remarkably different from those of xmath2ca and xmath5fe the observation of the figure indicates that the scaling of first kind independence on the momentum transfer and of zeroth kind independence on the external probe are not so well fulfilled by the experimental functions these observations are in agreement with those of refs xcite and transverse xmath7 scaling functions obtained from the experimental electromagnetic responses of ref xcite the numbers in the panels indicate the values of the momentum transfer in 
mev c the full circles refer to xmath0c the white squares to xmath2ca and the white triangles to xmath5fe the thin black line in the xmath6 panel at 570 mev c is the empirical scaling function obtained from a fit to the data the thick lines show the results of our calculations when all the effects beyond the rfg model have been considered the full lines have been calculated for xmath0c the dotted lines for xmath1o and the dashed lines for xmath2ca the dashed thin lines show the rfg scaling functions to quantify the quality of the scaling between a set of xmath9 scaling functions each of them known on a grid of xmath10 values of the scaling variable xmath11 we define the two indexes xmath12 eq delta and xmath13 eq erre where xmath14 is the largest value of the xmath15 the two indexes give complementary information the xmath16 index is related to a local property of the functions the maximum distance between the various curves since the value of this index could be misleading if the responses have sharp resonances we have also used the xmath17 index which is instead sensitive to global properties of the differences between the functions since we know that the functions we want to compare are roughly bell shaped we have inserted the factor xmath18 to weight more the region of the maxima of the functions than that of the tails tab rdelta lists the values of the xmath16 and xmath17 indexes for the experimental scaling functions of fig fexp calculated by comparing the experimental scaling functions of the various nuclei at fixed value of the momentum transfer we consider that the scaling between a set of functions is fulfilled when xmath19 0.096 and xmath20 0.11 these values have been obtained by adding the uncertainty to the values of xmath17 and xmath16 for xmath6 at 570 mev c from a best fit of this last set of data we extracted an empirical universal scaling function xcite represented by the thin full line in the lowest left panel of fig fexp this curve is rather similar to the universal empirical function given in ref xcite let us consider now the scaling of the theoretical functions the thin dashed lines of fig fexp show the rfg scaling functions the thick lines show the results of our calculations when various effects beyond the rfg are introduced ie nuclear finite size collective excitations final state interactions and in the case of the xmath7 functions meson exchange currents we have studied the effects of the nuclear finite size by calculating scaling functions within a continuum shell model at q 700 mev c these scaling functions are very similar to those of the rfg model at lower values of the momentum transfer the shell model scaling functions show sharp peaks produced by the shell structure not present in the rfg model we found that shell model scaling functions fulfill the scaling of first kind the one most likely to be violated down to 400 mev c we have estimated the effects of the collective excitations by doing continuum rpa calculations with two different residual interactions xcite the rpa effects become smaller the larger is the value of the momentum transfer at xmath21 600 mev c the rpa effects are negligible if calculated with a finite range interaction collective excitations break scaling properties but we found that scaling of first kind is satisfied down to about 500 mev c the presence of the mec violates the scaling of
the transverse responses we included the mec by using the model of ref xcite in our calculations only one pion exchange diagrams are considered including those with virtual excitation of the xmath22 in our model mec effects start to be relevant for xmath23 600 mev c we found that mec do not destroy scaling in the kinematic range of our interest the main modifications of the shell model scaling functions are produced by the fsi which we have considered by using the model developed in ref xcite we obtained scaling functions very different from those predicted by the rfg model and rather similar to the empirical ones in any case the fsi do not heavily break the scaling properties we found that the scaling of first kind is conserved down to xmath8 450 mev c the same type of scaling analysis applied to xmath24 reaction leads to very similar results xcite to investigate the prediction power of the superscaling hypothesis we compared responses and cross sections calculated by using rpa fsi and eventually mec with those obtained by using xmath3 and xmath25 we show in fig eexsect double differential electron scattering cross sections calculated with the complete model full lines and those obtained with xmath3 dashed lines and xmath26 dotted lines these results are compared with the data of refs xcite the c data xcite have been measured at a scattering angle of xmath27 37.5xmath28 the xmath1o data xcite at xmath27 32.0xmath28 and the xmath2ca data xcite at xmath27 45.5xmath28 the full lines show the results of our complete calculations the cross sections obtained by using xmath3 are shown by the dashed lines and those obtained with xmath4 by the dotted lines the excellent agreement between the results of the full calculations and those obtained by using xmath3 indicates the validity of the scaling approach in this kinematic region where the xmath8 values are larger than 500 mev c the differences with the cross sections obtained by using the empirical scaling functions reflect the differences between the various scaling functions shown in fig fexp the disagreement with the experimental data is probably due to the fact that our models do not consider the excitation of the real xmath22 resonance and the pion production mechanism in all the panels the full lines show the result of our complete calculation the dashed dotted lines the result obtained with our universal empirical scaling function the results shown in panels a b and c have been obtained for neutrino energy of 300 mev panel a double differential cross sections calculated for the scattering angle of 30xmath28 as a function of the nuclear excitation energy panel b cross sections integrated over the scattering angle always as a function of the nuclear excitation energy panel c cross sections integrated over the nuclear excitation energy as a function of the scattering angle panel d total cross sections as a function of the neutrino energy the situation for the double differential cross sections is well controlled since all the kinematic variables beam energy scattering angle energy of the detected lepton are precisely defined and consequently also energy and momentum transferred to the target nucleus this situation changes for the total cross sections which are of major interest for the neutrino physics the total cross sections are functions only of the energy of the incoming lepton therefore they include all the scattering angles and all the possible values of the energy and momentum transferred to the nucleus with the only limitation of the global energy
and momentum conservations this means that in the total cross sections kinematic situations where the scaling is valid and also where it is not valid are both present we show in the first three panels of fig fig nue various differential charge exchange cross sections obtained for 300 mev neutrinos on xmath1o target in the panel a we show the double differential cross sections calculated for a scattering angle of 30xmath28 as a function of the nuclear excitation energy the values of the momentum transfer vary from about 150 to 200 mev c this is not the quasi elastic regime where the scaling is supposed to hold and this explains the large differences between the various cross sections the cross sections integrated on the scattering angle are shown as a function of the nuclear excitation energy in the panel b of the figure while the cross sections integrated on the excitation energy as a function of the scattering angle are shown in the panel c the first three panels of the figure illustrate in different manner the same physics issue the calculation with the scaling functions fails in reproducing the results of the full calculation in the region of low energy and momentum transfer where surface and collective effects are important this is shown in panel b by the bad agreement between the three curves in the lower energy region and in panel c at low values of the scattering angle where the xmath8 valued are minimal total charge exchange neutrino cross sections are shown in panel d as a function of the neutrino energy xmath29 the scaling predictions for neutrino energies up to 200 mev are unreliable these total cross sections are dominated by the giant resonances and more generally by collective nuclear excitation we have seen that these effects strongly violate the scaling at xmath29 200 mev the cross section obtained with our universal function is still about 20 larger than those obtained with the full calculation this difference becomes smaller with increasing energy and is about the 7 at xmath29 300 mev this is an indication that the relative weight of the non scaling kinematic regions becomes smaller with the increasing neutrino energy
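as a rough illustration of the two indexes used above to quantify the quality of scaling (a local index related to the maximum distance between the curves, and a global index weighted towards the region of the maxima), the following python sketch computes analogous quantities for a set of sampled scaling functions; the exact definitions and normalisations of eqs. delta and erre in the paper are not reproduced, so the weighting and normalisation below are assumptions made for illustration only.

import numpy as np

def scaling_indexes(f):
    # f: array of shape (M, N) -- M scaling functions sampled on N values of psi
    fmax = f.max()                              # largest value among all curves
    spread = f.max(axis=0) - f.min(axis=0)      # pointwise spread between the curves
    delta = spread.max() / fmax                 # local index: worst-case distance
    weight = f.mean(axis=0) / fmax              # weight the maxima more than the tails
    r = np.sqrt(np.sum(weight * spread**2) / np.sum(weight)) / fmax
    return delta, r

# usage with three mock bell-shaped curves
psi = np.linspace(-2.0, 2.0, 81)
curves = np.array([(0.7 + 0.05 * k) * np.exp(-psi**2) for k in range(3)])
print(scaling_indexes(curves))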
superscaling analysis of electroweak nuclear response functions is done for momentum transfer values from 300 to 700 mev c some effects absent in the relativistic fermi gas model where the superscaling holds by construction are considered from the responses calculated for the xmath0c xmath1o and xmath2ca nuclei we have extracted a theoretical universal superscaling function similar to that obtained from the experimental responses theoretical and empirical universal scaling functions have been used to calculate electron and neutrino cross sections these cross sections have been compared with those obtained with a complete calculation and for the electron scattering case with the experimental data
introduction superscaling beyond rfg model superscaling predictions
the black hole binary grs 1915 105 is highly variable in x rays belloni et al 2000 and references therein still even its hardest spectra are relatively soft consisting of a blackbody like component and a high energy tail vilhu et al they are softer than those of other black hole binaries in the hard state which xmath1 spectra peak at xmath2 kev eg cyg x1 gierliski et al 1997 and are similar to their soft state eg cyg x1 gierliski et al 1999 hereafter g99 lmc x1 lmc x3 wilms et al 2001 the blackbody component arises most likely in an optically thick accretion disk on the other hand there is no consensus at present regarding the origin of the tail all three main models proposed so far involve comptonization of the blackbody photons by high energy electrons they differ however in the distribution and location of the electrons which are assumed to be either thermal maxwellian non thermal close to a power law or in a free fall onto the black hole a discussion of these models is given in zdziarski 2000 who shows that the thermal and free fall models of the soft state of black hole binaries can be ruled out mostly by the marked absence of a high energy cutoff around 100 kev in the cgro data grove et al 1998 g99 tomsick et al 1999 mcconnell et al the present best soft state model appears to involve electron acceleration out of a maxwellian distribution ie a non thermal process which leads to a hybrid electron distribution consisting of both thermal and non thermal parts zdziarski lightman macioek niedwiecki 1993 poutanen coppi 1998 g99 coppi 1999 in this letter we present all osse observations of grs 1915 105 we then choose two osse spectra corresponding to the lowest and highest x ray flux and fit them together with spectra from simultaneous rxte pointed observations the spectra showing extended power laws without any cutoff up to at least 600 kev provide strong evidence for the presence of non thermal comptonization more extensive presentation of the combined x ray osse data will be given elsewhere table 1 gives the log of the 9 osse observations together with results of power law fits and basic data about the corresponding x ray and radio states the osse instrument accumulated spectra in a sequence of 2min measurements of the source field alternated with 2min offset pointed measurements of background the background spectrum for each source field was derived bin by bin with a quadratic interpolation in time of the nearest background fields see johnson et al figure fig osse shows the osse spectra including standard energy dependent systematic errors which were fitted up to energies at which the source signal was still detected the uncertainty for a fitted parameter corresponds hereafter to 90 confidence xmath3 we see that the source went through wide ranges of radio and x ray fluxes and types of x ray variability during those observations in spite of that variety 8 out of 9 osse spectraare best fitted by a power law with a photon index of xmath4 and the flux varying within a factor of 2 the only exception is the osse spectrum corresponding to the highest x ray flux measured by the asm 1999 april 2127 which is much harder xmath5 and has a much lower flux we then consider the osse spectra corresponding to the extreme x ray fluxes measured by the rxteasm ie from 1997 may 1420 vp 619 and 1999 april2127 vp 813 we fit them together with spectra from the pointed rxte observations of 1997 may 15 and 1999 april 23 the observation ids are 20187 02 02 00 40403 01 07 00 1 systematic error is added to the pca 
data with the responses of 2001 february these pca data correspond to the variability classes belloni et al 2000 of xmath6 and xmath7 in which the variability is moderate and the source spends most of the time in two basic low xmath8 and high xmath9 x ray flux state respectively we fit the data with the xspec arnaud 1996 model eqpair coppi 1999 g99 which calculates self consistently microscopic processes in a hot plasma with electron acceleration at a power law rate with an index xmath10 in a background thermal plasma with a thomson optical depth of ionization electrons xmath11 the electron temperature xmath12 is calculated from the balance of compton and coulomb energy exchange as well as xmath13 pair production yielding the total optical depth of xmath14 is taken into account the last two processes depend on the plasma compactness xmath15 where xmath16 is a power supplied to the hot plasma xmath17 is its characteristic size and xmath18 is the thomson cross section we then define a hard compactness xmath19 corresponding to the power supplied to the electrons and a soft compactness xmath20 corresponding to the power in soft seed photons irradiating the plasma which are assumed to be emitted by a blackbody disk with the maximum temperature xmath21 the compactnesses corresponding to the electron acceleration and to a direct heating ie in addition to coulomb energy exchange with non thermal xmath13 and compton heating of the thermal xmath13 are denoted as xmath22 and xmath23 respectively and xmath24 details of the model are given in g99 87 cm we also take into account compton reflection with a solid angle of xmath25 magdziarz zdziarski 1995 and an fe kxmath26 emission from an accretion disk assumed to extend down to xmath27 which results in a relativistic smearing the equivalent width xmath28 with respect to the scattered spectrum only is tied to xmath25 via xmath29 ev george fabian 1991 the elemental abundances of anders ebihara 1982 an absorbing column of xmath30 xmath31 dickey lockman 1990 vilhu et al 2001 and an inclination of xmath32 are assumed as discussed in g99 xmath33 depends weakly on xmath20 in a wide range of this parameter an increase of xmath20 leads to increasing xmath13 pair production which then leads to an annihilation feature around 511 kev the presence of such a feature is compatible with the osse data fig fig osse but only very weakly constrained g99 found that xmath34 provides a good fit to cyg x1 data here we find that a good fit is provided with xmath35 compatible with the high luminosity of grs 1915 105 for example for 12 of the eddington luminosity xmath36 and spherical geometry the size of the plasma corresponds then to xmath37 this model provides very good description of our two broad band spectra as well as of other rxteosse spectra s v vadawale et al in preparation for vp 619 we assume a free relative normalization of the hexte and osse spectra with respect to those from the pca on the other hand the hexte spectrum for vp 813 has relatively few counts at its highest energies and thus we use the actual osse normalization in that fit table 2 gives the fit results and figure f data shows the spectra 82 cm our model predicts the power law emission extending with no cutoff well above 1 mev and a weak annihilation feature with the plasma allowed to be pair dominated ie with xmath38 for vp 619 those predictions can be tested by future soft xmath7ray detectors more sensitive than the osse we note that the comptel has already detected a power law tail up to xmath39510 
mev in the soft state of cyg x1 mcconnell et al 2000 figure f modela shows the spectral components of the fit to the vp 619 spectrum compton reflection with xmath40 is detected at a very high significance xmath41 gives xmath42 corresponding to the chance appearance of reflection of xmath43 from the xmath44 test and it is responsible for the convex curvature in the xmath39 10-100 kev spectrum figure f modela also shows that the scattered component has a spectral break at xmath2 kev but continues as a power law with addition of the broad annihilation feature above it due to the domination of non thermal scattering at those energies comptonization by the thermal electrons dominates at energies close to the blackbody component and thus pca data of grs 1915 105 can be reasonably modeled up to 60 kev by thermal comptonization of a disk blackbody vilhu et al when the osse data are included the probability that non thermal electrons are not present ie xmath45 is only xmath46 the best fit thermal compton model shown by the long dashes in figure f modela strongly underestimates the flux above 100 kev the statistical significance of the presence of non thermal electrons can be further increased by fitting the rxte spectrum together with the average spectrum from osse which has virtually identical shape to that of vp 619 but much better statistics then allowing for xmath47 reduces xmath48 from xmath49 to xmath50 which corresponds to the chance probability of xmath51 thus we strongly rule out the pure thermal comptonization model during vp 813 compton reflection is statistically not required as indeed expected at the large xmath52 of the scattering medium covering the disk which would completely smear out any disk reflection and fluorescence features thus we set xmath41 in the fit the presence of non thermal electrons is now required at an extremely high significance xmath53 due to the presence of the very distinct hard high energy tail above the thermally cut off spectrum see figures f data f modelb on the other hand shrader titarchuk 1998 have fitted a model of bulk motion comptonization of blackbody photons to rxte and batse data from a hard state of grs 1915 105 similar to that of vp 619 we fit their model bmc in xspec at a free xmath54 to our broad band spectra and find it is completely unacceptable statistically with xmath55 and xmath56 for vp 619 and 813 respectively however the specific feature of bulk motion comptonization is a high energy cutoff at xmath57 kev due to the effects of compton recoil and gravitational redshift close to the black hole horizon eg laurent titarchuk 1999 such a cutoff is not included in the bmc model and its inclusion would further worsen the fits above thus we use monte carlo results of laurent titarchuk 1999 which include the cutoff to test whether the osse data regardless of the data at lower energies are compatible with its presence we find that their theoretical spectrum for the accretion rate of xmath58 matches well the slope of the average osse spectrum at low energies the histogram in fig f bmc the monte carlo spectrum can then be very well reproduced by a power law times a step function convolved with a gaussian model plabsstep in xspec with the cutoff energy of 150 kev and the gaussian width of 35 kev the solid curve in fig f bmc we see that the osse average spectrum lies well above that model at xmath57 kev quantitatively the bulk motion compton model yields xmath59 in comparison the power law and eqpair models yield xmath60 and xmath61 respectively
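for concreteness, the phenomenological shape quoted above for the monte carlo bulk motion spectrum, a power law multiplied by a step function convolved with a gaussian with cutoff energy 150 kev and gaussian width 35 kev, can be sketched as follows; the convolution of a step with a gaussian is an error function roll over, and the photon index and normalisation used here are placeholders rather than fitted values.

import numpy as np
from math import erf

def plabsstep(energy_kev, norm=1.0, photon_index=2.7, e_cut=150.0, sigma=35.0):
    # photon flux ~ E**(-photon_index) times a step at e_cut smoothed by a
    # gaussian of width sigma (error-function roll-over)
    e = np.atleast_1d(np.asarray(energy_kev, dtype=float))
    rollover = 0.5 * (1.0 - np.array([erf((x - e_cut) / (np.sqrt(2.0) * sigma)) for x in e]))
    return norm * e ** (-photon_index) * rollover

# usage: evaluate the model over roughly 50-800 kev
energies = np.logspace(1.7, 2.9, 50)
model_flux = plabsstep(energies)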
thus the bulk compton model is completely ruled out further problems with that model are discussed in zdziarski 2000 70 cm also the commonly used phenomenological models of disk blackbody and either a power law or an e folded power law give very bad fits to our data the latter yields xmath62 xmath63 for vp 619 813 respectively in fact even the thermal part of the vp813 spectrum is very poorly described by a disk blackbody with xmath64 for a fit to the 3520 kev pca data whereas the same data are well modeled by thermal comptonization xmath65 with neither model including a high energy tail thus we find physical models of the spectra of grs 1915 105 in terms of thermal and non thermal comptonization and in some cases compton reflection to be vastly superior to any other model proposed so far we have found that broad band spectra of grs 1915 105 in its two main spectral states xmath9 xmath8 are very well fitted by comptonization of disk blackbody photons in a hybrid plasma with both electron heating and acceleration the presence of strong reflection indicates the plasma is located in coronal regions possibly magnetic flares above the disk this physical model is the same as that fitted to the soft state of cyg x1 by g99 differences in the variability properties of cyg x1 and grs 1915 105 are likely to be due to the disk being stable in the former case and unstable in the latter most likely due to its much higher xmath66 the corona is unstable in both cases the comptonizing medium in the highxmath67 state xmath9 is thomson thick and it can represent the surface layer of an overheated disk accreting at a super eddington rate beloborodov 1998 an issue to be addressed by future research is the origin of the hardening of the tail in this state as compared to other ones fig fig osse our model predicts broad annihilation features although their strength depends on the unknown size of the plasma grs 1915 105 was in the power law xmath7ray state in the classification of grove et al 1998 during the 9 osse observations this state usually corresponds to the high soft x ray state indeed x ray spectra observed so far from grs 1915 105vilhu et al 2001 are substantially softer than those with xmath68 and a sharp thermal cutoff at xmath69 kev characteristic to the hard state of other black hole binaries we thank p coppi and m gierliski for their work on the eqpair model w n johnson for help with the osse data reduction and ph laurent for supplying his monte carlo results this research has been supported by grants from kbn 2p03d00614 and 2p03c00619p12 the foundation for polish science aaz and the swedish natural science research council and the anna greta and holger crafoord fund jp jp and aaz acknowledge support from the royal swedish academy of sciences the polish academy of sciences and the indian national science academy through exchange programs zdziarski a a 2000 in iau symp 195 highly energetic physical processes and mechanisms for emission from astrophysical plasmas ed martens s tsuruta m a weber san francisco asp 153 astro ph0001078
grs 1915 105 was observed by the cgroosse 9 times in 1995 2000 and 8 of those observations were simultaneous with those by rxte we present an analysis of all of the osse data and of two rxteosse spectra with the lowest and highest x ray fluxes the osse data show a power law like spectrum extending up to xmath0 kev without any break we interpret this emission as strong evidence for the presence of non thermal electrons in the source the broad band spectra can not be described by either thermal or bulk motion comptonization whereas they are well described by comptonization in hybrid thermal non thermal plasmas
introduction osse and _rxte_ spectra discussion and conclusions
most fundamental physical stellar parameters of field white dwarfs such as effective temperature surface gravity and magnetic field strength can directly be measured with high precision from spectroscopic observations assuming a mass radius relation both mass and radius may be inferred independently of the distance determining these properties also for the accreting white dwarfs in cataclysmic variables cvs is a relatively new research field essential not only for testing stellar binary evolution theory but for understanding the physics of accretion in this whole class of binaries the last years saw a rapid growth of identified polars cvs containing a synchronously rotating magnetic white dwarf despite the large number of known systems xmath6 rather little is known about the temperatures of the accreting white dwarfs in these systems the main reasons for this scarcity are twofold a in the easily accessible optical wavelength band the white dwarf photospheric emission is often diluted by cyclotron radiation from the accretion column below the stand off shock by emission from the secondary star and by light from the accretion stream even when the accretion switches off almost totally and the white dwarf becomes a significant source of the optical flux eg schwope et al 1993 the complex structure of the zeeman split balmer lines and remnant cyclotron emission complicate a reliable temperature determination b at ultraviolet wavelengths the white dwarf entirely dominates the emission of the system during the low state and may be a significant source even during the high state however the faintness of most polars requires time consuming space based observations eg stockman et al iue observations of rxj13132xmath03259 henceforth rxj1313 were carried out in march 1996 one swp 1150xmath01980 and one lwp 1950xmath03200 low resolution spectrum were obtained on march 2 and march 6 respectively table 1 the lwp image was taken prior to the failure of gyro 5 read out of the image had to await that control over the spacecraft was re established both observations were taken through the large aperture resulting in a spectral resolution of xmath7 because of the faintness of rxj1313 the exposure time of the swp spectrum was chosen roughly equal to the orbital period the spectra have been processed through the iuenewsips pipeline yielding flux and wavelength calibrated spectra the swp spectrum is shown in fig f swp it is a blue continuum with a flux decline below xmath8 due to the long exposure time the spectrum is strongly affected by cosmic ray hits some emission of civ xmath9 1550 and heii xmath9 1640 may be present in the spectrum of rxj1313 but from the present data no secure detection of line emission can be claimed the absence or weakness of emission lines strongly indicates that the iue observations were taken during a period of very low accretion activity the broad flux turnover below xmath8 is reminiscent of the photospheric absorption observed during low states eg in amher xcite or dpleo xcite our first approach was thus to fit the swp data with non magnetic pure hydrogen white dwarf model spectra xcite however none of the models could satisfactorily describe the observed spectrum while the continuum requires a rather low temperature xmath10k the steep slope in the narrow core of the absorption xmath11 is in disagreement with the very broad line of such low temperature models

table 1: log of the iue observations
image no.    start ut                exp time (sec)
swp56879l    02 mar 1996 08:01:49    13800
lwp32069l    06 mar 1996 18:20:31    2100

the analysis of low state ultraviolet
spectroscopy of other polars taught us that the white dwarfs often have a non uniform temperature distribution over their surface xcite possibly due to heating by low level accretion xcite we therefore fitted the iue data of rxj1313 with a two temperature model using again our non magnetic pure hydrogen model spectra and leaving four free parameters the temperatures and scaling factors of both components the best fit is achieved by a white dwarf with a base temperature of xmath1k and a spot temperature of xmath2k figf swp for a distance xmath12pc as derived by thomas et al xcite the white dwarf radius resulting from the scaling factors is xmath13 cm assuming the hamada salpeter 1961 mass radius relation for a carbon core the corresponding mass is xmath14 which is consistent with the mass derived by thomas et al xcite because the iue swp observation represents the orbital mean of the ultraviolet emission of rxj1313 the spot size can not be directly estimated assuming that the ultraviolet bright spot shows a similar variation as the x ray spot observed with rosat xcite we estimate a fractional area xmath3 for a somewhat larger spot the temperature would be correspondingly lower figf overall shows the iue swp and lwp spectra along with an average optical low state spectrum as well as the two component model the flux of the lwp spectrum is somewhat lower than predicted by the model which could be due either to heavy underexposure table1 or to the fact that the lwp spectrum covers only xmath15 of the orbital period possibly resulting in a lower spot contribution than in the orbital averaged swp spectrum or both the agreement of the model spectra with observed optical flux is reasonably good especially when considering that only the 12251900 range was used for the fit and that the ultraviolet and optical spectra were taken at different epochs the summed spectrum of the white dwarf model and a red dwarf matching the red end of the rxj1313 spectrum has xmath16 which is in agreement with the observed low state magnitude of the system xcite during the low state the optical and ultraviolet emission of rxj1313 is hence dominated by its two stellar components for completeness we mention that an additional possible source of absorption is the interstellar gas we computed the interstellar profile for the absorption column derived from the x ray data xmath17 xcite the width of this line is smaller than the geocoronal emission in the swp spectrum thus interstellar absorption can not explain the narrow core observed in the iue spectrum a major uncertainty in the computation of realistic hydrogen line profiles in magnetic atmospheres is the treatment of the stark broadening in the presence of a magnetic field the stark broadening of the individual zeeman components is smaller than that of the entire transition in the non magnetic case but no detailed calculations are available this uncertainty can be taken into account by treating the amount of the stark broadening as a free parameter in the model atmosphere calculation and calibrating it with observations xcite for this approach is however difficult on one hand there are only few single magnetic white dwarfs for which good ultraviolet spectroscopy has been obtained on the other hand the three zeeman components of become visible as individual absorption features only for fields xmath18 mg for lower field strengths the profileis still dominated by the stark effect and the zeeman shifts introduce only an additional broadening which is again difficult to 
quantify an additional problem in the computation of synthetic profiles arises for low temperature atmospheres xmath19k in ultraviolet observations of non magnetic white dwarfs in this temperature range quasi molecular absorption of xmath20 andxmath21 produces strong absorption features at xmath8 and xmath22 respectively which are overlayed on the red wing of xcite calculations of these transitions in the presence of a strong magnetic field have not yet been approached we have retrieved the iue spectra available for all magnetic white dwarfs listed by jordan xcite and find that in at best two of them the xmath20 feature can be identified bpm25114 xmath23 mg and kuv23162xmath01230 xmath24 mg also none of the accreting magnetic white dwarfs in polars with xmath19k observed in the ultraviolet display noticeable xmath20 absorption xcite from figf swp it is apparent that also the spectrum of rxj1313 is devoid of noticeable absorption at 1400 and 1600 in summary observations indicate that the xmath20 and xmath21 quasi molecular absorption lines may be weaker in a strongly magnetic atmosphere than in a non magnetic one assuming a magnetic field strength of xmath25 mg for rxj1313 as derived by thomas et al xcite from the cyclotron emission the expected shift of the xmath26 components is xmath27 causing the centre of the xmath28 component to coincide with the steepest slope of the profile while the zeeman effect may broaden the observed profile the reduced stark broadening will cause an opposite effect we estimate that the use of non magnetic model spectra in the analysis of the profile may cause a temperature error of a about xmath29k we conclude that the theoretical uncertainties in the stark broadening do presently not warrant the use of magnetic model spectra the narrow core in the broad absorption observed in rxj1313 can not be produced by magnetic effects supporting our interpretation of a rather cool white dwarf with a localized hot region it is well established that the white dwarfs in cvs tend to be hotter than single white dwarfs this observational result suggests that accretional heating takes place in addition to the secular core cooling of the white dwarfs in cvs eg sion 19911999 furthermore the white dwarfs in cvs below the period gap are on average cooler than those in cvs above the gap gnsicke 19971998 sion 19911999 a combination of two effects is thought to be responsible for this difference i the average age of the short period cvs below the period gap is about an order of magnitude larger than that of cvs above the gap xcite and core cooling of their white dwarfs has progressed correspondingly ii the average accretion rate in short period cvs is about an order of magnitude lower than in long period cvs xcite resulting in reduced accretional heating warner xcite shows admittedly only for a small sample of cvs that the expected correlation between accretion rate and white dwarf temperature does in fact exist rxj1313 is the polar with the fourth longest period it is therefore expected to be rather young to experience a comparatively high time averaged accretion rate and to have a correspondingly hot white dwarf contrary to these expectations however it harbours the coldest white dwarf of all the cvs above the period gap in fact the temperature of the white dwarf in rxj1313 is comparable to the average white dwarf temperature in short period cvs we suggest two possible scenarii which can explain the atypically low white dwarf temperature a rxj1313 has only recently developed from a 
detached pre cataclysmic binary into the semi detached state mass transfer is in the process of turning on and substantial heating of the white dwarf has not yet taken place in this case the observed effective temperature of the white dwarf allows to estimate a lower limit on the cooling age and thereby on the time elapsed since the system emerged from the common envelope the time scale for the turn on of the mass transfer is xmath30yrs which is short compared to the xmath31yrs that a cv spends above the gap xcite the probability of finding a cv in this stage of its evolution is rather small but non zero b rxj1313 is a normal long period cv but has more recently experienced a low accretion rate for a sufficiently long time interval xmath5yrs which allowed its white dwarf to cool down to its current temperature long term xmath32yrs fluctuations of the accretion rate about the secular mean predicted from angular momentum loss by magnetic braking xcite are consistent with the large range of observed accretion rates at a given orbital period xcite two possible explanation for these fluctuations have been suggested b1 a limit cycle in the secondary s radius driven by irradiation from the hot primary king et al 19951996 which causes a corresponding variation in the mass transfer rate b2 cvs possibly enter a prolonged phase of low or zero xmath33 following a classical nova eruption referred to as hibernation xcite rxj1313 may be such a hibernating cv however the low temperature of the white dwarf in rxj1313 argues against a very recent nova explosion in v1500cyg the white dwarf cooled from the nuclear burning regime ie several xmath34k in 1975 to xmath35k in 1992 xcite on the theoretical side prialnik 1986 shows in a simulation of a 125 classical nova that the white dwarf reaches its minimum temperature xmath36yrs after the nova explosion two observational tests could help to decide whether rxj1313 is a rather fresh post nova with its secondary only marginally filling its roche lobe 1 a nova eruption may break synchronization as observed in v1500cyg xcite causing the orbit to widen and the secondary to retreat from its roche lobe the resynchronization of the white dwarf spin with the orbital period occurs in v1500cyg apparently on a time scale of a few hundred years xcite more accurate ephemerides of rxj1313 than presently available thomas et al 1999 are necessary to test for a small remnant asynchronism of the white dwarf spin 2 the nova eruption may contaminate the binary system with material processed in the thermonuclear event resulting in deviations from the typical populationi abundances found in most cvs xcite anomalous ultraviolet emission line ratios similar to those observed in post novae have been found in the asynchronous polar bycam xcite the white dwarf in bycam has xmath37k xcite which is in rough in agreement with the temperature expected a few 1000 years after a nova explosion bycam contains also a slightly asynchronously rotating white dwarf which leaves the question of the expected time scale for resynchronization somewhat unsettled high state ultraviolet observations of the cno lines in rxj1313 do not exist so far and will serve to test the post nova hypothesis if no evidence for a nova event should be found rxj1313 is either very young as a cv or experiences a prolonged low state in some kind of mass transfer cycle before we discuss the temperature of the white dwarf photosphere 15000k we comment on the warm ultraviolet bright spot with xmath38k rxj1313 is yet another polar 
in which the white dwarf appears to have a non uniform temperature distribution other examples are amher xcite dpleo xcite or qstel xcite these hot spots are best explained by the localized irradiation of the photosphere with cyclotron and x ray photons from the accretion funnel which is continuously fed at a low rate in the case of rxj1313 we estimate the luminosity of the ultraviolet bright spot to be xmath39 corresponding to an accretion rate of xmath40 which is consistent with the low state accretion rate xmath41 derived by thomas et al xcite we now discuss accretion heating of the white dwarf in rxj1313 as shown by giannone weigert xcite and by sion xcite this is an inherently time dependent process accretion compresses the outer non degenerate layers of the white dwarf which heat up approximately adiabatically if the accretion rate is high the core suffers some compression too which heats primarily the non degenerate ions for intermittent accretion the thermal inertia of the deep heating produces a time delay which causes an enhanced luninosity long after accretion stopped for very low accretion rates xmath42 on the other hand prolonged accretion may lead to a quasi stationary state in which the energy loss balances compressional heating xcite and the temperature profile of the envelope remains stationary the simplest way to viewcompressional heating is to consider the energy released when the accreted mass is added since the envelope mass is small and represents a practically constant fraction of the white dwarf mass xmath33 increases the mass of the degenerate core of mass xmath43 radius and temperature the energy released per unit time is gxmath331 1 of which some fraction feeds the initial degeneracy of the electrons reaching apart from a factor of order unity this energy equals the work performed by compression xmath44dt with xmath45 the pressure and xmath46 the density integrated over the envelope note that this energy release is different from that freed at the surface which equals gxmath33 in an am her star and represents the additional energy released by compression of the envelope of the white dwarf between radii and if compression is adiabatic the work performed is used to increase the internal energy of the gas as prescribed by the first law of thermodynamics in the isothermal case the released energy would completely appear as radiative loss we consider here the case of slow compression and assume that the energy released by accretion at a rate xmath33 equals the increment in luminosity xmath47 where g is the gravitational constant and we estimate that xmath48 is between 05 and 10 core heating is a minor effect and adds only xmath49 to xmath50 hence the compressional energy is primarily released in the envelope at an approximately constant rate per radius interval in equilibrium accretion at a rate xmath33 can maintain an effective temperature of the white dwarf defined by even if the white dwarf had cooled to a substantially lower temperature before the onset of accretion in their discovery paper thomas et al 1999 derive a mass of the white dwarf in rxj1313 of xmath51 and a secondary mass of xmath52 with uncertainties of about 010 there was some concern about the mass ratio which should be xmath53xmath54 for stable mass transfer for definiteness we assume here xmath55 thomas et al 1999 also derived a mass accretion rate which is very low compared to other long period cvs they observed the system over seven years and found that it hovers most of the time at low accretion 
luminosities corresponding to xmath42 only twice was the system found in an intermediate state with an accretion rate of xmath56 during the rosat all sky survey and in a subsequent optical follow up observation in february 1993 it was never observed at an accretion rate of xmath57 the typical value of cvs with 45 h xcite to be sure the derived accretion rates depend i on the adopted white dwarf mass and ii on the soft x ray temperature and the bolometric fluxes of the quasi blackbody source and the quoted rates are probably uncertain by a factor of xmath58 a white dwarf of 05 has a core radius xmath59 cm and a radius xmath60 cm at 15000k for these parameters we find an equilibrium temperature from compressional heating alone of xmath61 k where xmath62 k is the accretion rate in units of xmath63 the observed temperature of 15000k can be maintained by an accretion rate of xmath64 for xmath65 which is within the observed range of mass transfer rates since the internal energy source of the white dwarf will contribute to the observed luminosity the actual accretion rate required to maintain the photosphere at 15000 k may be somewhat lower alternatively the efficiency xmath48 of converting the compressional energy release into the observed luminosity may be lower within the uncertainties however it is also possible that the present temperature is almost entirely due to compressional heating and that the white dwarf had cooled to a temperature substantially below 15000 k prior to the onset of mass transfer in any case the cooling age of xmath66yrs to reach 15000 k xcite is a lower limit to the actual pre cv age of the white dwarf since the kelvin helmholtz time scale of the envelope is roughly of the order of xmath67yrs the low temperature of the white dwarf requires xmath33 to have been low for a comparable length of time if rxj1313 is a cv in the process of turning on mass transfer we would expect that the accretion rate in rxj1313 would ultimately reach xmath57 at which time the white dwarf has been compressionally heated to xmath68k the temperature typically observed in cvs with xmath69h xcite we conclude that the low temperature of the white dwarf in rxj1313 is consistent with compressional heating by mass accretion at a rate substantially lower than the expected for a long period cv the system has not passed through a phase of high accretion rate within at least the last xmath67yrs which is the approximate kelvin helmholtz time scale for the envelope there are three possible previous histories of rxj1313 a it is a young cv in the process of turning on the mass transfer b1 it is in a long lasting phase of low accretion within an irradiation driven limit cycle b2 it has passed through a nova outburst shutting off mass transfer for a prolonged period we can not presently distinguish between cases a and b1 while observational tests of b2 have been suggested above we thank klaus reinsch for the optical spectrum of rxj1313 and for useful comments on the manuscript and stefan jordan for discussions on magnetic model atmospheres this research was supported by the dlr under grant 50or96098 and 50or99036
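as a back of the envelope companion to the compressional heating argument above, the sketch below equates a released power of order eta g m mdot (1/r_core - 1/r) to the surface luminosity 4 pi r^2 sigma t^4 and solves for the equilibrium effective temperature; the masses, radii, efficiency and accretion rates used are placeholder values rather than the paper's numbers, so the example only illustrates the t proportional to mdot^(1/4) scaling and the rough order of magnitude.

from math import pi

# rough, order-of-magnitude sketch of the compressional heating equilibrium (cgs units);
# all numerical inputs below are placeholder assumptions, not the paper's values
G_CGS    = 6.674e-8      # gravitational constant
SIGMA_SB = 5.670e-5      # stefan-boltzmann constant
M_SUN    = 1.989e33      # solar mass in g
YEAR     = 3.156e7       # seconds per year

def equilibrium_teff(mdot_msun_per_yr, mass=0.5 * M_SUN,
                     r_core=1.00e9, r_wd=1.05e9, eta=0.75):
    # power released by compressing accreted matter between r_core and r_wd,
    # assumed to be radiated from the full white dwarf surface
    mdot = mdot_msun_per_yr * M_SUN / YEAR
    lum = eta * G_CGS * mass * mdot * (1.0 / r_core - 1.0 / r_wd)
    return (lum / (4.0 * pi * r_wd**2 * SIGMA_SB)) ** 0.25

# usage: the equilibrium temperature scales as mdot**0.25
for mdot in (1e-11, 1e-10, 1e-9):
    print(mdot, equilibrium_teff(mdot))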
we present low state iue spectroscopy of the rosat discovered polar rxj13132xmath03259 the swp spectrum displays a broad absorption profile which can be fitted with a two temperature model of a white dwarf of xmath1k with a hot spot of xmath2k which covers xmath3 of the white dwarf surface the white dwarf temperature is atypically low for the long orbital period 418h of rxj13132xmath03259 this low temperature implies either that the system is a young cv in the process of switching on mass transfer or that it is an older cv found in a prolonged state of low accretion rate much below that predicted by standard evolution theory in the first case we can put a lower limit on the life time as pre cv of xmath4yrs in the second case the good agreement of the white dwarf temperature with that expected from compressional heating suggests that the system has experienced the current low accretion rate for an extended period xmath5yrs a possible explanation for the low accretion rate is that rxj13132xmath03259 is a hibernating post nova and observational tests are suggested
introduction observations analysis and results a note on the use of non-magnetic model spectra discussion conclusion
nanoscopic physics has been a subject of increasing experimental and theoretical interest for its potential applications in nanoelectromechanical systems nemsxcite the physical properties of these devices are of crucial importance in improving our understanding of the fundamental science in this area including many body phenomenaxcite one of the most striking paradigms exhibiting many body effects in mesoscopic science is quantum transport through single electronic levels in quantum dots and single moleculesxcite coupled to external leads realizations of these systems have been obtained using semiconductor beams coupled to single electron transistors set s and superconducting single electron transistors ssetsxcite carbon nanotubesxcite and most recently suspended graphene sheetsxcite such systems can be used as a direct measure of small displacements forces and mass in the quantum regime the quantum transport properties of these systems require extremely sensitive measurement that can be achieved by using set s or a resonant tunnel junction and sset s in this context nems are not only interesting devices studied for ultrasensitive transducers but also because they are expected to exhibit several exclusive features of transport phenomena such as avalanche like transport and shuttling instabilityxcite the nanomechanical properties of a resonant tunnel junction coupled to an oscillatorxcite or a setxcite coupled to an oscillator are currently playing a vital role in enhancing the understanding of nems the nanomechanical oscillator coupled to a resonant tunnel junction or set is a close analogue of a molecule being used as a sensor whose sensitivity has reached the quantum limitxcite the signature of quantum states has been predicted for the nanomechanical oscillator coupled to the setsxcite and ssetsxcite in these experiments it has been confirmed that the nanomechanical oscillator is strongly affected by the electron transport in the circumstances where we are also trying to explore the quantum regime of nems in this system electrons tunnel from one of the leads to the isolated conductor and then to the other lead phonon assisted tunneling of non resonant systems has mostly been shown by experiments on inelastic tunneling spectroscopy its with the advancement of modern technology as compared to its scanning tunneling spectroscopy sts and scanning tunneling microscopy stm have proved more valuable tools for the investigation and characterization of molecular systemsxcite in the conduction regime in sts experiments significant signatures of the strong electron phonon interaction have been observedxcite beyond the established perturbation theory hence a theory beyond master equation approach or linear response is necessary most of the theoretical work on transport in nems has been done within the scattering theory approach landauer but it disregards the contacts and their effects on the scattering channel as well as effect of electrons and phonons on each otherxcite very recently the non equilibrium green s function negf approachxcite has been growing in importance in the quantum transport of nanomechanical systemsxcite an advantage of this method is that it treats the infinitely extended reservoirs in an exact wayxcite which may lead to a better understanding of the essential features of nems negf has been applied in the study of shot noise in chain modelsxcite and disordered junctionsxcite while noise in coulomb blockade josephson junctions has been discussed within a phase correlation theory 
approachxcite in the case of an inelastic resonant tunneling structure in which strong electron phonon coupling is often considered a very strong source drain voltage is expected for which coherent electron transport in molecular devices has been considered by some workersxcite within the scattering theory approach inelastic effects on the transport properties have been studied in connection with nems and substantial work on this issue has been done again within the scattering theory approachxcite recently phonon assisted resonant tunneling conductance has been discussed within the negf technique at zero temperaturexcite to the best of our knowledge in all these studies time dependent quantum transport properties of a resonant tunnel junction coupled to a nanomechanical oscillator have not been discussed so far the development of time dependent quantum transport for the treatment of nonequilibrium system with phononic as well as fermionic degree of freedom has remained a challenge since the 1980sxcite generally time dependent transport properties of mesoscopic systems without nanomechanical oscillator have been reportedxcite and in particular sudden joining of the leads with quantum dot molecule have been investigatedxcite for the case of a noninteracting quantum dot and for a weakly coulomb interacting molecular system strongly interacting systems in the kondo regime have been investigatedxcite more recentlyxcite the transient effects occurring in a molecular quantum dot described by an anderson holstein hamiltonian has been discussed to this end we present the following study in the present work we shall investigate the time evolution of a quantum dot coupled to a single vibrational mode as a reaction to a sudden joining to the leads we employ the non equilibrium green s function method in order to discuss the transient and steady state dynamics of nems this is a fully quantum mechanical formulation whose basic approximations are very transparent as the technique has already been used to study transport properties in a wide range of systems in our calculationinclusion of the oscillator is not perturbative as the sts experimentsxcite are beyond the perturbation theory so a non perturbative approach is required beyond the quantum master equationxcite or linear response hence our work provides an exact analytical solution to the current voltage characteristics including coupling of leads with the system very small chemical potential difference and both the right and left fermi level response regimes for simplicity we use the wide band approximationxcite where the density of states in the leads and hence the coupling between the leads and the dot is taken to be independent of energy although the method we are using does not rely on this approximation this provides a way to perform transient transport calculations from first principles while retaining the essential physics of the electronic structure of the dot and the leads another advantage of this method is that it treats the infinitely extended reservoirs in an exact way in the present system which may give a better understanding of the essential features of nems in a more appropriate quantum mechanical picture we consider a single quantum dot connected to two identical metallic leads a single oscillator is coupled to the electrons on the dot and the applied gate voltageis used to tune the single level of the dot in the present system we neglect the spin degree of freedom and electron electron interaction effects and consider the simplest 
possible model system we also neglect the effects of finite electron temperature of the lead reservoirs and damping of the oscillator our model consists of the individual entities such as the single quantum dot and the left and right leads in their ground states at zero temperature the hamiltonian of our simple systemxcite isxmath0 c0dagc0hbaromegabdaggerbmathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12 label1 where xmath1 is the single energy level of electrons on the dot with xmath2 the corresponding creation and annihilation operators the coupling strength xmath3 with xmath4 is the electrostatic field between electrons on the dot and an oscillator seen by the electrons due to the charge on the oscillator xmath5 is the zero point amplitude of the oscillator xmath6 is the frequency of the oscillator and xmath7 are the raising and lowering operator of the phonons the remaining elements of the hamiltonian are xmath8 where we include time dependent hopping xmath9 to enable us to connect the leads xmath10 to the dot at a finite time for the time dependent dynamics we shall focus on sudden joining of the leads to the dot at xmath11 which means xmath12 where xmath13 is the heaviside unit step function xmath14 is the total number of states in the lead and xmath15 represents the channels in one of the leads for the second leadthe hamiltonian can be written in the same way the total hamiltonian of the system is thus xmath16 we write the eigenfunctions of xmath17 asxmath18operatornamehmlkexpmathrmi kx0 label4psink x00anexptextstylefracl2k22operatornamehnlk label5 for the occupied xmath19 and unoccupied xmath20 dot respectively where xmath21 is the shift of the oscillator due to the coupling to the electrons on the dot where xmath22 xmath23 and xmath24 are the usual hermite polynomials herewe have used the fact that the harmonic oscillator eigenfunctions have the same form in both real and fourier space xmath25 in order to transform between the representations for the occupied and unoccupied dot we require the matrix with elements xmath26 which may be simplifiedxcite asxmath27 for xmath28 where xmath29 and xmath30 are the associated laguerre polynomials note that the integrand is symmetric in xmath31 and xmath32 but the integral is only valid for xmath28 clearlythe result for xmath33 is obtained by exchanging xmath31 and xmath32 in equation 7 to obtain xmath34maxn mexpleft textstylefrac14x2right left mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmixright m noperatornamelminn mm nleft textstylefrac12x2right label8 in order to calculate the analytical solutions and to discuss the numerical results of the transient and steady state dynamics of the nanomechanical systems our focus in this section is to derive an analytical relation for the time dependent effective self energy and the green s functions in obtaining these resultswe use the wide band approximation only for simplicity although the method we are using does not rely on this approximation where the retarded self energy of the dot due to each lead is given byxcite xmath35 where xmath10 represent the left and right leads and the green s function in the leads for the uncoupled system is xmath36 with the fact that xmath37 where xmath15 stands for every channel in each lead and xmath38 is the constant number density of the leads now using the uncoupled green s function into equation 9 the retarded self energy may be written as xmath39valphat2label10 
mathrminalphavalphaastt1valphat2thetat1t2oversetinftyundersetinftydisplaystyleint mathrmdvarepsilonalphaexpmathrmivarepsilonalphat1t2nonumber mathrminalphavalphaastt1valphat2thetat1t2times2pideltat1t2 label11 now we use the fact that xmath40 xmath41 then the above expression can be written as xmath42 where xmath43 is the damping factor xmath44 similarly xmath45ast mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmigammaalpha xmath46 we solve dyson s equation using xmath47 as a perturbation in the presence of the oscillator the retarded and advanced green s functions on the dot with the phonon states in the representation of the unoccupied dot may be written as xmath48 where xmath49 is the retarded advanced green s function on the occupied dot coupled to the leads may be written asxmath50text t10 label14 operatornamegkat2tprime mathrmithetat2tprimetimesexpmathrmivarepsilon kmathrmigammat2tprimetext t20 label15 with xmath51 xmath52 and xmath53 the above eqs 12 13 14 15 will be the starting point of our examination of the time dependent response of the coupled system these functions are the essential ingredients for theoretical considerations of such diverse problems as low and high voltage coupling of electron and phonons transient and steady state phenomena the density matrix is related to the dot population through xmath55 where the density matrix xmath56 for xmath57 and xmath58 xmath59 is the lesser green s functionxcite on the dot including all the contributions from the leads the lesser green s function for the dot in the presence of the nanomechanical oscillator is given byxmath60 whereas for xmath61 and xmath62 the xmath59 is equal to zero and xmath63 includes all the information of the nanomechanical oscillator and electronic leads of the system and xmath64 are the oscillator indices the lesser self energy xmath65 contains electronic and oscillator contributions the electronic contributions are non zero only when xmath66 and xmath67 as the oscillator is initially in its ground state only the xmath68 term gives a non zero contribution to the lesser self energy the lesser self energy for the dot may be written as xmath69 with xmath70 where xmath71 is the fermi distribution functions of the left and right leads which have different chemical potentials under a voltage bias for the present case of zero temperature the lesser self energy may be recast in terms of the heaviside step function xmath72 as xmath73 label17 where xmath74 are all non zeroonly when both times xmath75 are positive xmath76 and xmath77 is the fermi energy on each of leads the density matrix xmath78 can be calculated by using eqs 12 13 14 15 17 in eq 16 at xmath57 and xmath58 as xmath79 timesmathrmigammadisplaystyleintinftyepsilonmathrmfalpha fracmathrmdvarepsilonalpha2piexpmathrmivarepsilonalphat1t2phi0kphin kastexpmathrmivarepsilonkmathrmigamma t2tendaligned although xmath80 is non zero for xmath81 it is never required due to the way it combines with xmath74 by carrying out the time integrations the resulting expression is written asxmath82expmathrmivarepsilonalphavarepsilonkmathrmigammatexp mathrmivarepsilonalphavarepsilonmmathrmigammatrightendaligned the integral over the energy in the above equation is carried outxcite the final result for the density matrix is written asxmath83 label18 where we have added the contribution from the right and the left leads which can be written in terms ofxmath84 as xmath85right left lnepsilonmathrmfalphavarepsilonkmathrmigamma 
lnepsilonmathrmfalphavarepsilonmmathrmigamma right left1expmathrmivarepsilonkvarepsilonm2mathrmigammatright timesleftmathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12fraclnepsilonmathrmfalpha varepsilonk2gamma2lnepsilonmathrmfalpha varepsilonm2gamma2mathrmilefttan1leftfracvarepsilonfalphavarepsilonkgammarighttan1leftfracvarepsilonfalphavarepsilonmgammarightpirightright zalphamk expmathrmivarepsilonkvarepsilonm2mathrmigamma tleftoperatornameeimathrmiepsilonmathrmfalphavarepsilonkmathrmigammatoperatornameeimathrmiepsilonmathrmfalphavarepsilon mmathrmigammatright leftoperatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammatoperatornameeimathrmiepsilonmathrmfalphavarepsilon kmathrmigammatrightendaligned with xmath86 being the right and the left fermi levels and xmath87 the exponential integral function. special care is required in evaluating xmath87 to choose the correct riemann sheets, so that these functions are consistent with the initial conditions xmath88 and are continuous functions of time and chemical potential; the same applies to the complex logarithms in the first, apparently simpler, form for xmath89. now using equation 18 the dot population may be written as xmath90. the particle current xmath92 into the interacting region from the lead is related to the expectation value of the time derivative of the number operator xmath93 as xcite xmath94right label19 and the final result for the current through each of the leads is written as (see appendix appa) xmath95 where xmath96fracpi2right mathrmileft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammatright i2alpham left 1exp2gamma tright lefttan1leftfracvarepsilonfalphavarepsilonmgammarightfracpi2right mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmiexp2gamma tleft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat right mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmileft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat rightendaligned where in calculating the left current we need xmath97 and both the contributions xmath98 and xmath99, while for the right current xmath97 is replaced by xmath100. as before, special care is required in evaluating xmath87 on the correct riemann sheets so that these functions are consistent with the initial conditions xmath101 and are continuous functions of time and chemical potential. to calculate the energy transferred from the electrons to the nanomechanical oscillator we return to the density matrix xmath78 given in eq 18; we may use the lesser green s function, or density matrix, to calculate the energy transferred to the oscillator as xmath102. note that the normalisation in equation 21 is required because the bare density matrix contains both electronic and oscillator contributions; the trace eliminates the oscillator part, leaving the electronic part. in order to further characterize the state of the nanomechanical oscillator we investigate the fano factor for the change of the average occupation number xmath103 as a function of time. the corresponding relation for the fano factor is given by xcite as xmath104, where xmath105 and xmath106, with the averages evaluated using the diagonal elements of the density matrix on the quantum dot.
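as a concrete illustration of this last step, the short python sketch below evaluates the fano factor from diagonal density-matrix elements; the array p_n of phonon-level occupations is a placeholder for the quantity computed from eq 18, and the poissonian test case is included only as a sanity check, not as output of the model.

import numpy as np
from scipy.stats import poisson

def fano_factor(p_n):
    # fano factor of the phonon occupation number, computed from the
    # diagonal density-matrix elements p_n (index n = oscillator level)
    p = np.asarray(p_n, dtype=float)
    p = p / p.sum()                      # normalisation, as in eq 21
    n = np.arange(p.size)
    mean_n = np.sum(n * p)
    var_n = np.sum(n**2 * p) - mean_n**2
    return var_n / mean_n

# sanity check: a poissonian occupation distribution gives a fano factor close to 1
p_test = poisson.pmf(np.arange(40), mu=2.0)
print(fano_factor(p_test))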
the dot population, the net current through the system, the total current into the system, the average energy and the fano factor of a resonant tunnel junction coupled to a nanomechanical oscillator are shown graphically as functions of time for different values of the coupling strength, the tunneling rate and the voltage bias. the following parameters xcite were employed: the single energy level of the dot xmath107 and the characteristic frequency of the oscillator xmath108; these parameters remain fixed for all further discussion and have the same dimensions as xmath109. we are interested in small and large values of the tunneling from the leads, different values of the coupling strength between the electrons and the nanomechanical oscillator, and different values of the left chemical potential xmath110. fig fig1 caption: time dependent dot population xmath54 against time for different pairs of the right and the left fermi energies (0,0), (0,1) and (1,1); the dotted line corresponds to the empty, the dashed line to the half full, and the solid line to the almost full state of the dot; parameters xmath111; all parameters have the same dimensions as xmath109. fig fig2 caption: total current xmath112 flowing onto the dot as a function of time for fixed values of xmath113; this current (solid line) is equivalent to the rate of change of the dot population xmath114 (dashed line) for the same parameters, and the two lines have the same values at all points. the nanomechanical oscillator induced resonance effects are clearly visible in the numerical results. it must be noted that we have obtained these results in the regimes of both strong and zero or weak coupling between the nanomechanical oscillator and the electrons on the dot. the tunneling of electrons between the leads and the dot is taken to be symmetric, xmath115, and we assume that the leads have a constant density of states. the dot population is shown in fig fig1 as a function of time. in order to see the transient and steady state dynamics of the system we consider empty, half full and occupied states of the system for fixed values of xmath116 and xmath117, choosing the right and left fermi level pairs (0,0), (0,1) and (1,1) respectively. firstly, when both fermi levels are below the dot energy, the dot population rises initially for a short time and at long times settles at a small but finite value; this is not quite empty because the finite xmath118 allows some tunneling onto the dot. secondly, when the left fermi level is above the dot energy, the dot population settles in a partially full, roughly half full, state. thirdly, when both fermi levels are above the dot energy, the dot is completely full for a short time but at long times is not quite full, again due to the coupling of the dot with the leads. these results are consistent with the particle hole symmetry of the system: the empty state is not completely empty, the occupied state is not completely full, and the partially full state is roughly half full. in fig fig2 we show the total current flowing onto the dot as a function of time for fixed values of xmath119 and a left fermi level of 1. this current (solid line) is equivalent to the rate of change of the dot population (dashed line) for the same parameters; in this figure the solid and dashed lines cannot be distinguished, which confirms that our analytical results are consistent with the equation of continuity xmath120 and hence with the conservation laws for all parameters.
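the continuity check just described is easy to automate. the sketch below compares a population curve with a current trace on the same time grid, using a toy exponential charging curve in place of the model output of eqs 18 and 20, so the arrays and parameter values here are illustrative only.

import numpy as np

def continuity_residual(t, n_t, i_t):
    # largest deviation between dN/dt and the total current flowing onto the dot
    dn_dt = np.gradient(n_t, t)
    return np.max(np.abs(dn_dt - i_t))

# toy illustration: a charging-type population n(t) = n_inf*(1 - exp(-2*gamma*t)),
# whose exact rate of change stands in for the total current of the model
gamma, n_inf = 0.5, 0.45
t = np.linspace(0.0, 20.0, 4001)
n_t = n_inf * (1.0 - np.exp(-2.0 * gamma * t))
i_t = 2.0 * gamma * n_inf * np.exp(-2.0 * gamma * t)
print(continuity_residual(t, n_t, i_t))   # small, limited only by the finite-difference error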
fig fig3a and fig3b caption: net current xmath121 flowing through the system as a function of both time and the left fermi level for two different values of the coupling strength xmath122, namely xmath123 (fig fig3a) and xmath124 (fig fig3b); parameters xmath125; all parameters have the same dimensions as xmath109. fig fig4 caption: net current xmath121 flowing through the system as a function of time for two different values of the coupling strength xmath122, xmath123 (dotted line) and xmath126 (solid line); all parameters have the same dimensions as xmath109. fig fig5 caption: net current xmath121 flowing through the system as a function of time for the same two values of the coupling strength, xmath123 (dotted line) and xmath126 (solid line), and xmath128; all other parameters are the same as in fig fig4 and have the same dimensions as xmath109. in fig fig3 we show the net current xmath121 flowing through the system as a function of both time and the left fermi level for coupling strengths xmath122 to xmath129 and for small and large values of xmath118. we observe simple oscillations in the net current for weak coupling strength and weak tunneling; with increasing coupling strength the structure of the oscillations becomes more complicated, as shown in fig fig3b. in order to interpret this complicated structure we proceed in two steps. firstly, we have plotted the net current as a function of time in fig fig4 for fixed values of the fermi level xmath130, xmath131, tunneling energy xmath132 and for different values of the coupling strength xmath122 and xmath133. in this figure, in the limit of weak coupling the oscillations are again simple, while in the strong coupling limit there is a beating pattern in the oscillations. we note that the frequency of the simple oscillations is xmath134 and that these oscillations are present even in the limit of weak coupling, so we conclude that this is a purely electronic process (plasmon oscillations). it is also clear from the figure that the strong coupling case contains two beating frequencies, which we therefore interpret as a mixture of electronic and mechanical frequencies. secondly, in fig fig5 we have plotted the net current for fixed values of xmath130, xmath135, tunneling energy xmath136 and for different values of the coupling strength xmath122 and xmath133. we find that in the regime xmath137 the effects of the oscillator are not apparent and the period of the nanomechanical oscillator cannot be resolved. why can the period of the oscillator not be resolved by the electrons in this limit? in this regime electrons spend less time on the dot than one period of the oscillator, so they do not resolve it. we will therefore focus on the regime of small tunneling, xmath138, for the further discussion, in order to analyze the dynamics of the nanomechanical oscillator and the effects of the coupling between the electrons and the nanomechanical oscillator.
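one simple way to make the distinction between a single electronic oscillation and the two-frequency beating discussed above quantitative is to fourier transform the computed current trace and read off its dominant components. the snippet below does this for a synthetic two-frequency signal, since the actual current of fig fig4 is not reproduced here; the frequencies and amplitudes are placeholders.

import numpy as np

def dominant_frequencies(t, signal, n_peaks=2):
    # return the strongest frequency components (in cycles per unit time)
    # of a current trace, e.g. to separate a single electronic oscillation
    # from a beating pattern mixing electronic and mechanical frequencies
    dt = t[1] - t[0]
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, dt)
    strongest = np.argsort(spectrum)[::-1][:n_peaks]
    return freqs[strongest]

# synthetic beating trace with two components chosen to sit exactly on fft bins
t = np.linspace(0.0, 200.0, 8000, endpoint=False)
i_t = np.cos(2.0 * np.pi * 0.15 * t) + 0.4 * np.cos(2.0 * np.pi * 0.02 * t)
print(dominant_frequencies(t, i_t))   # approximately [0.15, 0.02]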
fig fig6a and fig6b caption: average energy transferred to the oscillator as a function of time and of the left fermi level for fixed values of xmath139 and for two different values of the coupling strength xmath122, xmath123 (fig fig6a) and xmath126 (fig fig6b); all parameters have the same dimensions as xmath109. next we show the average energy of the nanomechanical oscillator as a function of time and of the left fermi energy in fig fig6, for fixed values of the tunneling xmath132, xmath131 and for different values of the coupling strength xmath140 and xmath129. we find damped oscillations at short times and a constant energy at long times, and this constant average energy increases with increasing fermi level. why do we find this particular type of structure? the nanomechanical oscillator potential seen by the electrons on the dot is independent of time when the oscillator is in any of its pure eigenstates; otherwise, when the oscillator is not in a pure state, the potential seen by the electrons is time dependent. in the former case the electrons are scattered elastically by the time independent potential, while in the latter case the scattering is inelastic because the time dependent potential allows the transfer of energy between the two. we also observe that the constant average energy has steps as a function of the left fermi level, which become more pronounced with increasing coupling strength. hence the oscillatory part of the behavior of the mechanical oscillator is damped by the coupling to the electrons on the dot, but the constant part is not: the damping mechanism in the transient dynamics is the transfer of energy from the nanomechanical oscillator to the electrons on the dot, whereas when the oscillator is in any of its pure eigenstates there is no mechanism for the transfer of energy between the two. the same physical phenomenon applies to the net current flowing through the dot as well. this appears to be a new quantum phenomenon in the study of nanomechanical systems. fig fig7 caption: average energy transferred to the oscillator as a function of xmath141 for fixed values of xmath142 and xmath122; all parameters have the same dimensions as xmath109. can we compare this quantum phenomenon with a classical mechanical oscillator? the nanomechanical oscillator approaches the classical regime in the limit of small xmath143, so we study the dynamics of the quantum oscillator in the classical limit in which xmath143 in the mechanical oscillator part of the hamiltonian given in eq 1 goes to zero, where xmath144. to see this we have plotted the average energy as a function of xmath145 in the nanomechanical part of the system in fig fig7, for fixed values of the tunneling xmath146, xmath147 and coupling strength xmath148. we find that the average energy of the quantum nanomechanical oscillator scales as xmath149. taking the limit xmath150 of the average energy shows what happens to the system at long times: in this limit the energy transferred to the nanomechanical oscillator vanishes, and hence we conclude that at long times the energy of the classical mechanical oscillator is always zero. fig fig8 caption: fano factor as a function of time for two different values of the coupling strength xmath122, xmath123 (dotted line) and xmath126 (solid line); parameters xmath151; all parameters have the same dimensions as xmath109. finally, in fig fig8 we show the fano factor as a function of time for two different values of xmath152 and for fixed values of xmath153. in the limit of
weak coupling the nanomechanical oscillator shows thermal like behavior and poissonian statistics while in the limit of strong coupling its dynamics is non thermal which leads to super poissonian statistics in this figure the short time behavior is always thermal but this is trivial as the nanomechanical oscillator is initially in its ground state in conclusion we have found mixed and pure states in our results which confirm the quantum dynamics of our model with the following justifications in a classical mechanical oscillator modelxcite all states give rise to a time dependent potential hence all states of the classical mechanical oscillator are damped thus we confirm the new quantum dynamics of the nanomechanical oscillator that will be helpful for further experiments beyond the classical limit to develop better understanding of nems devices in this work we analyzed the time dependent quantum transport of a resonant tunnel junction coupled to a nanomechanical oscillator by using the nonequilibrium green s function approach without treating the electron phonon coupling as a perturbation we have derived an expression for the full density matrix or the dot population and discuss it in detail for different values of the coupling strength and the tunneling rate we derive an expression for the current to see the effects of the coupling of the electrons to the oscillator on the dot and the tunneling rate of electrons to resolve the dynamics of the nanomechanical oscillator this confirms that electrons resolve the dynamics of nanomechanical oscillator in the regime xmath154 while they do not in the opposite case xmath155 furthermore we discuss the average energy transferred to oscillator as a function of time we also discuss the fano factor as a function of time which shows thermal behavior and poissonian to non thermal and super poissonian behavior we have found new dynamics of the nanomechanical oscillator pure and mixed states which are never present in a classical oscillator these results suggest further experiments for nems to go beyond the classical dynamics the particle current xmath92 into the interacting region from the lead is related to the expectation value of the time derivative of the number operator xmath156 asxcitexmath157rangle label23 ialphat fracehbaroperatornameg0alphat tvalpha0tv0alphaasttoperatornamegalpha0t t label24 where we have the following relationsxmath158 where xmath159 refers to the unperturbed states of the leads and given asxmath160 with the fact that xmath161 with xmath38 being the constant number density of the leads and other uncoupled green s function in the leads arexmath162 operatornamegalphaalphat tprime frac1nundersetjdisplaystylesum operatornamefalphavarepsilonalphagalpha jt tprimeoverset inftyundersetinftydisplaystyleint mathrmdvarepsilonalphaoperatornamefalphavarepsilonalphamathrminalphaexpmathrmivarepsilonalphat tprimeendaligned now using equations 25 26 in the equation 24 of current through lead xmath91 asxmath163 using the fact that xmath164 we can simplify the above equation asxmath165right label28 where xmath166 are non zero only when both the times xmath167 are positive xmath168 although xmath169 is non zero for xmath81 it is never required due to the way it combines with xmath166 herewe note that we require xmath169 from eq 14 15 for positive times only xmath170 the first integral on right hand side of eq 28 may be solved by using eq 13 14 17 asxmath171expmathrmivarepsilonalphatprimetnonumber 
fracmathrmigamma2pisummphi0mphi0mastdisplaystyleintlimitsinftyepsilonmathrmfalpha mathrmdvarepsilonalphaleftfrac1 expmathrmivarepsilonalphavarepsilonmmathrmigammatvarepsilonalphavarepsilonmmathrmigammarightnonumber fracmathrmigamma2pisummphi0mphi0mastleftlnepsilonmathrmfalphavarepsilonmmathrmigamma operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammatright label29 where the final result is obtained using standard integralsxcite we note once again that special care is required in evaluating the xmath172 and xmath87 to choose the correct riemann sheets in order to make sure that these functions are consistent with the initial conditions and are continuous functions of time and chemical potential this statement will also apply to all further discussions the second third integral on right hand side of eq 28 are written asxmath173 this integral can be solved in the same way as for the dot population the final result is written asxcitexmath174rightlefttan1leftfracepsilonmathrmfalphavarepsilonmgammarightfracpi2rightnonumber mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmiexp2gamma tleftoperatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammatrightnonumber mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmileft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammatrightbiggr label30 and the fourth integral on right hand side of equation 28 can be solved by using eq 13 15 17 asxmath175exp mathrmivarepsilonalphat tprimenonumber fracmathrmigamma2pisummphi0mphi0mastdisplaystyleintlimitsinftyepsilonmathrmfalpha mathrmdvarepsilonalphaleftfrac1expmathrmivarepsilonalphavarepsilon mmathrmigammatvarepsilonalphavarepsilonmmathrmigammarightnonumber fracmathrmigamma2pisummphi0mphi0mastleft lnepsilonmathrmfalphavarepsilonmmathrmigamma operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat right label31 using equations 29 30 31 in eq 28 the final expression for the current is written asxmath176 where components of current are written asxmath96fracpi2right mathrmileft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat right i2alpham left1exp2gamma tright lefttan1leftfracepsilonmathrmfalphavarepsilonmgammarightfracpi2right mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmiexp2gamma tleft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat right mathchoicetextstylefrac12textstylefrac12scriptstyle12scriptscriptstyle12mathrmileft operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat operatornameeimathrmiepsilonmathrmfalphavarepsilonmmathrmigammat rightendaligned where in calculating the left current we need xmath97 together with both xmath98 and xmath99 whereas for the right current xmath97 is replaced by xmath100 a schliesser et al nature physics 5 509 2009 k l ekinci and m l roukes review of scientific instruments 76 061101 2005 k l ekinci small 20051 no 8 9 786 797 m l roukes technical digest of the 2000 solid state sensor and actuator workshop nanoelectromechanical systems h g craighead science 290 1532 2000 p kim and c m lieber science 126 2148 1999 s d bennett and aa clerk phys b 78 165328 2008 s akita y nakayama s mizooka y takano t okawa y miyatake s yamanaka m tsuji and t nosaka appl 79 1691 
2001 a m fennimore t d yuzvlnsky w q han m s fuhrer j cummings and a zettl nature 424 408 2003 j kinaret t nord and s viefers appl 82 1287 2003 ke and h d espinosa appl 85 681 2004 v sazonova y yaish h stnel d roundy t arias and p mceuen nature 431 284 2004 h park et al nature london 407 57 2000j koch and f von oppen phys 94 206804 2005 j koch m e raikh and f von oppen ibid 95 056801 2005 j koch f von oppen and a v andreev phys rev b 74 205438 2006 m a reed c zhou c j muller t p burgin and j m tour science 278 252 1997 r h m smit y noat c untiedt n d lang m c van hemert and j m van ruitenbeek ibid 419 906 2002 l h yu z k keane j w ciszek l cheng m p stewart j m tour and d natelson phys rev 93 266802 2004 l h yu and d natelson nano lett 4 79 2004 m elbing r ochs m koentopp m fischer c von hnisch f weigend f evers h b weber and m mayor proc 102 8815 2005 m poot e osorio k oneill j m thijssen d vanmaekelbergh c a van walree l w jenneskens and h s j van der zant nano lett 6 1031 2006 e a osorio k oneill n stuhr hansen o f nielsen t bjrnholm and h s j van der zant adv weinheim ger 19 281 2007 e lrtscher h b weber and h riel phys 98 176807 2007 j repp g meyer s m stojkovi a gourdon and c joachim phys 94 026803 2005 j repp g meyer s paavilainen f e olsson and m persson phys 95 225503 2005 a shimizu and m ueda phys 69 1403 1992 o l bo and yu galperin phys b 55 1696 1997 b dong h l cui x l lei and n j m horing phys rev b 71 045331 2005 y c chen and m di ventra phys lett 95166802 2005 n nishiguchi phys 89 066802 2002 a a clerk and s m girvin phys rev b 70 121303r 2004 t novotn a donarini c flindt and a pjauho phys 92 248302 2004 a d armour and a mackinnon phys b 66 035333 2002 c flindt t novotny and a p jauho phys rev b 70 205334 2004 j wabnig d v khomitsky j rammer and a l shelankov phys rev b 72165347 2005 s dallakyan and s mazumdar appl 82 2488 2003 k walczak phys status solidi b 241 2555 2004 y c chen and m di ventra phys rev b 67 153304 2003 j lagerqvist y c chen and m di ventra nanotechnology 15 s459 2004 v aji j e moore and c m varma arxiv cond mat0302222 unpublished dmitry a ryndyk and gianaurelio cuniberti phys b 76 155430 2007 j x zhu and a v balatsky phys rev b 67 165326 2003 m tahir and a mackinnon phys rev b 77 224305 2008 v moldoveanu v gudmundsson and a manolescu phys b 76 085330 2007 j maciejko j wang and h guo phys rev b 74 085324 2006 y wei and j wang phys rev b 79 195315 2009 p myhnen a stan g stefanucci and r van leeuwen phys b 80 115107 2009 a r hernndez f a pinheiro c h lewenkopf and er mucciolo phys b 80 115311 2009 h hbener and t brandes phys b 80 155437 2009 s ramakrishnan y gulak and h benaroya phys rev b 78 174304 2008 g kiesslich e schll t brandes f hohls and r j haug phys rev lett 99 206602 2007 h hbener and t brandes phys 99 247206 2007
we present a theoretical study of time dependent quantum transport in a resonant tunnel junction coupled to a nanomechanical oscillator within the non equilibrium green s function technique. an arbitrary voltage is applied to the tunnel junction and the electrons in the leads are taken to be at zero temperature. both the transient and the steady state behavior of the system are considered in order to explore the quantum dynamics of the oscillator as a function of time. the properties of the phonon distribution of the nanomechanical oscillator strongly coupled to the electrons on the dot are investigated using a non perturbative approach. we consider both the energy transferred from the electrons to the oscillator and the fano factor as functions of time, and we discuss the quantum dynamics of the nanomechanical oscillator in terms of pure and mixed states. we have found a significant difference between a quantum and a classical oscillator: in particular, the energy of a classical oscillator will always be dissipated by the electrons, whereas the quantum oscillator remains in an excited state. this will provide useful insight for the design of experiments aimed at studying the quantum behavior of an oscillator.
introduction model calculations time-dependent dot population @xmath54 time-dependent current from lead @xmath91 average energy and fano factor discussion of results summary appendix a
recent investigations of the large scale distribution of galaxies in the sloan digital sky survey sdss xcite have revealed a complex relationship between the properties of galaxies such as color luminosity surface brightness and concentration and their environments xcite these and other investigations using the sdss xcite and the two degree field galaxy redshift survey xcite have found that galaxy clustering is a function both of star formation history and of luminosity for low luminosity galaxies clustering is a strong function of color while for luminous galaxies clustering is a strong function of luminosity for red galaxies clustering is a non monotonic function of luminosity peaking at both high and low luminosities although galaxy clustering correlates also with surface brightness and concentration xcite and xcite show that galaxy environment is independent of these properties at fixed color and luminosity thus color and luminosity measures of star formation history appear to have a more fundamental relationship with environment than do surface brightness and concentration measures of the distribution of stars within the galaxy some of the investigations above have explored the scale dependence of these relationships studies of the correlation function such as xcite and xcite can address this question but do not address directly whether the density on large scales is related to galaxy properties independent of the relationships with density on small scales if only the masses of the host halos of galaxies strongly affect their properties then we expect no such independent relationship between galaxy properties and the large scale density field thus it is important to examine this issue in order to test the assumptions of the halo model description of galaxy formation and of semi analytic models that depend only on the properties of the host halo eg xcite recent studies of this question have come to conflicting conclusions for example xcite have concluded from their analysis of sdss and 2dfgrs galaxies that the equivalent width of hxmath4 is a function of environment measured on scales of 11 xmath2 mpc and 55 xmath2 mpc independently of each other on the other hand xcite find that at fixed density at scales of 1 xmath2 mpc the distribution of d4000 a measure of the age of the stellar population is not a strong function of density on larger scales here we address the dependence on scale of the relative bias of sdss galaxies section data describes our data set section results explores how the relationship between the color luminosity and environments of galaxies depends on scale section bluefrac resolves the discrepancy noted in the previous paragraph between xcite and xcite finding that only small scales are important to the recent star formation history of galaxies section summary summarizes the results where necessary we have assumed cosmological parameters xmath5 xmath6 and xmath7 km sxmath8 mpcxmath8 with xmath9 the sdss is taking xmath10 ccd imaging of xmath11 of the northern galactic sky and from that imaging selecting xmath12 targets for spectroscopy most of them galaxies with xmath13 eg automated software performs all of the data processing astrometry xcite source identification deblending and photometry xcite photometricity determination xcite calibration xcite spectroscopic target selection xcite spectroscopic fiber placement xcite and spectroscopic data reduction an automated pipeline called idlspec2d measures the redshifts and classifies the reduced spectra schlegel et al in 
preparation the spectroscopy has small incompletenesses coming primarily from 1 galaxies missed because of mechanical spectrograph constraints 6 percent which leads to a slight under representation of high density regions and 2 spectra in which the redshift is either incorrect or impossible to determine xmath14 percent in addition there are some galaxies xmath15 percent blotted out by bright galactic stars but this incompleteness should be uncorrelated with galaxy properties for the purposes of computing large scale structure and galaxy property statistics we have assembled a subsample of sdss galaxies known as the nyu value added galaxy catalog nyu vagc xcite one of the products of that catalog is a low redshift catalog here we use the version of that catalog corresponding to the sdss data release 2 dr2 the low redshift catalog has a number of important features which are useful in the study of low luminosity galaxies most importantly 1 we have checked by eye all of the images and spectra of low luminosity xmath16 or low redshift xmath17 galaxies in the nyu vagc most significantly we have trimmed those which are flecks incorrectly deblended out of bright galaxies for some of these cases we have been able to replace the photometric measurements with the measurements of the parents for a full description of our checks see xcite 2 for galaxies which were shredded in the target version of the deblending the spectra are often many arcseconds away from the nominal centers of the galaxy in the latest version of the photometric reductions we have used the new version of the deblending to decide whether these otherwise non matched spectra should be associated with the galaxy in the best version we have estimated the distance to low redshift objects using the xcite model of the local velocity field using xmath18 and propagated the uncertainties in distance into uncertainties in absolute magnitude for the purposes of our analysis below we have matched this sample to the results of xcite who measured emission line fluxs and equivalent widths for all of the sdss spectra below we use their results for the hxmath4 equivalent width the range of distances we include is xmath19 xmath2 mpc making the sample volume limited for galaxies with xmath20 the total completeness weighted effective area of the sample excluding areas close to tycho stars is 22209 square degrees the catalog contains 28089 galaxies xcitehave investigated the luminosity function surface brightness selection effects and galaxy properties in this sample we will be studying the environments of galaxies as a function of their luminosity and color below to give a sense of the morphological properties of galaxies with various luminosities and colors figure colormag shows galaxies randomly selected in bins of color and luminosity each image is 40 xmath2 kpc on a side red high luminosity galaxies are classic giant ellipticals lower luminosity red galaxies tend to be more flattened and less concentrated blue high luminosity galaxies have well defined spiral structure and dust lanes lower luminosity blue galaxies have less well defined bulges and fewer spiral features in order to evaluate the environments of galaxies in our sample we perform the following procedure first for each given galaxy in the sample we count the number of other galaxies xmath21 with xmath22 outside a projected radius of 10 xmath2 kpc and within some outer radius xmath23 which we will vary below and within xmath24 km sxmath8 in the redshift direction this trace catalog is 
volume limited within xmath25. in order to make a more direct comparison to xcite we will also use a trace catalog containing only galaxies with xmath26. second, we calculate the mean expected number of galaxies in that volume as xmath27, where xmath28 is the sampling fraction of galaxies in the right ascension xmath4 and declination xmath29 direction of each point within the volume. we perform this integral using a monte carlo approach, distributing random points inside the volume with a density modulated by the sampling fraction xmath28. in order to calculate the mean density around galaxies in various classes we simply calculate xmath30 as the density with respect to the mean. when one calculates the mean density around galaxies it is necessary to have a fair sample of the universe. for the most luminous galaxies in our sample xmath31 the sample is volume limited out to our redshift limit of xmath32 and constitutes the equivalent of a 60 xmath2 mpc radius sphere, which is a fair sample for many purposes; xmath33cdm predicts the variance in such a sphere to be about 0.13. however, the lower luminosity galaxies can only be seen in the fraction of this volume which is nearby, and below a certain luminosity the sample is no longer fair. for example, consider figure checkrhoconverge, which shows the cumulative mean density around galaxies with xmath34 in spheres of larger and larger radius around the milky way; the mean overdensity does not converge until a volume which corresponds to approximately xmath35. thus it is not really safe to evaluate the mean density around galaxies that are too low luminosity to be observed out that far in redshift, which is to say less luminous than xmath36. however, for the moment let us consider figure bidenall: the greyscale and contours show the mean density relative to the mean as a function of color and luminosity using a projected radius of xmath37 xmath2 mpc. the mean is calculated in a sliding box with the width shown; if the sliding box contains fewer than 20 galaxies the result is ignored and colored pure white. here we show the results for the entire sample. our statistical uncertainties are well behaved down to about xmath38, but we are likely to be cosmic variance limited for xmath39, as indicated by the solid vertical line; thus the apparent decline in the mean overdensity for red galaxies less luminous than xmath40 is probably spurious. despite that limitation, we note that there is a strong relationship between environment and color even at xmath41. we note in passing that we can still use the variation of the density within xmath42 to study the properties of galaxies as a function of density down to low luminosity: just because the mean density of galaxies in that volume has not converged does not imply that there is insufficient variation of density to study the variation of galaxy properties with environment. for our fair sample of galaxies with xmath43, figure bidenscales shows the dependence of overdensity on luminosity and color for six different projected radii: 0.2, 0.5, 1, 2, 4 and 6 xmath2 mpc. we only show results for xmath44, for which we have a fair sample. obviously the density contrast decreases with scale; on the other hand, the qualitative form of the plot does not change, and our results remain similar to those shown in xcite and xcite. the results here demonstrate that the environments of low luminosity red galaxies continue to become denser as absolute magnitude increases, down to absolute magnitudes of xmath45, about two magnitudes less luminous than explored by our previous work.
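for readers who want to reproduce the environment measure described above, the following is a minimal numpy sketch of a counts-in-cylinders overdensity estimate; it works in comoving cartesian coordinates with a constant sampling fraction and a fixed line-of-sight half-depth, so the survey-geometry monte carlo of the text is replaced by an analytic expected count, and the tracer density, radii and depth used here are placeholder values rather than the ones adopted in the paper.

import numpy as np

def overdensity_cylinder(gal_xyz, tracer_xyz, r_outer, r_inner=0.01,
                         half_depth=6.0, mean_density=0.01, completeness=1.0):
    # count tracers in a projected annulus r_inner < r_p < r_outer (h^-1 mpc)
    # and within +/- half_depth along the line of sight (taken as the z axis),
    # then divide by the count expected for a uniform tracer field
    d = tracer_xyz - gal_xyz
    r_p = np.hypot(d[:, 0], d[:, 1])
    in_cyl = (r_p > r_inner) & (r_p < r_outer) & (np.abs(d[:, 2]) < half_depth)
    n_obs = np.count_nonzero(in_cyl)
    volume = np.pi * (r_outer**2 - r_inner**2) * 2.0 * half_depth
    n_exp = mean_density * completeness * volume
    return n_obs / n_exp

# toy usage: for a uniform random tracer field the estimate scatters around 1
rng = np.random.default_rng(0)
tracers = rng.uniform(0.0, 100.0, size=(10000, 3))
print(overdensity_cylinder(np.array([50.0, 50.0, 50.0]), tracers, r_outer=6.0))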
figure bidenratios shows the ratio of the overdensity xmath29 at each scale relative to that at the largest scale of xmath46 xmath2 mpc. this ratio is a measure of the steepness of the cross correlation between galaxies of a given color and absolute magnitude and all galaxies in our volume limited sample xmath47. interestingly, the contours in steepness are qualitatively similar to the contours in overdensity in figure bidenscales; this similarity implies that, for each class of galaxy, a stronger correlation on large scales is always associated with a steeper correlation function. another way of looking at similar results is to ask, as a function of environment, what fraction of galaxies are blue. we split the sample into red and blue galaxies using the luminosity dependent cut xmath48; blue galaxies thus have xmath49. we then sort all the galaxies with xmath50 into bins of density on three different scales, xmath51, xmath52 and xmath53 xmath2 mpc, and in each bin we calculate the fraction of blue galaxies. figure lowzfracblue shows this blue fraction as a function of density: in all cases the blue fraction declines as a function of density, as one would expect based on figure bidenscales above and from the astronomical literature (a highly abridged list of relevant work would include xcite). if we divide the sample into bins of luminosity we find that higher luminosities have smaller blue fractions, of course, but that the dependence of the blue fraction on density does not change. the question naturally arises: which scales are important to the process of galaxy formation? is the local environment within 0.5 xmath2 mpc the only important consideration, or is the larger scale environment also important? for example, consider figure denvden, which shows the conditional dependence of the three density estimators at the three scales on each other. the diagonal plots simply show the distribution within our sample of each density estimator; the off diagonal plots show the conditional distribution of the quantity on the xmath54 axis given the quantity on the xmath55 axis; as an example, the lower right panel shows xmath56. figure lowzfracblue205 10 shows the fraction of blue galaxies as a function of two density estimates, one with xmath51 xmath2 mpc and one with xmath57 xmath2 mpc. in this case it is clear that the blue fraction is a function of both densities; that is, even at a fixed density on scales of xmath37 xmath2 mpc the density outside that radius matters to the blue fraction, and in addition, at a fixed density on scales of xmath52 xmath2 mpc the distribution of galaxies within that radius appears to affect the blue fraction as well. on the other hand, consider figure lowzfracblue210 60, which is the same as figure lowzfracblue205 10 but now showing the densities at scales of xmath57 and xmath53 xmath2 mpc. in figure lowzfracblue210 60 the contours are vertical, indicating that the density between xmath52 and xmath53 xmath2 mpc has very little effect on galaxy properties at a fixed value of the density at the smaller scale: the larger scale environment appears to be of little importance. xcite found that these contours were not vertical when they looked at the fraction of galaxies for which the hxmath4 equivalent width was xmath58; their result appears to be in conflict with that of the previous paragraph. on the other hand, the emission lines measure a more recent star formation rate than does the color, and it is possible in principle that the more recent star formation rate depends more strongly on the large scale environment.
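a compact version of the blue-fraction measurement just described is sketched below; the colour cut coefficients are placeholders standing in for the luminosity dependent cut quoted in the text (whose exact coefficients are hidden behind the math placeholders), and the mock catalogue is there only to make the snippet runnable.

import numpy as np

def blue_fraction_vs_density(color, mag, overdensity, n_bins=8,
                             cut0=0.65, cut_slope=-0.03):
    # classify galaxies as blue with a luminosity dependent colour cut
    # color < cut0 + cut_slope*(mag + 20), then measure the blue fraction
    # in bins of log overdensity
    is_blue = color < cut0 + cut_slope * (mag + 20.0)
    logd = np.log10(overdensity)
    edges = np.linspace(logd.min(), logd.max(), n_bins + 1)
    which = np.digitize(logd, edges[1:-1])
    centers = 0.5 * (edges[:-1] + edges[1:])
    frac = np.array([is_blue[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(n_bins)])
    return centers, frac

# runnable example on a mock catalogue
rng = np.random.default_rng(1)
mag = rng.uniform(-22.0, -17.0, 5000)
color = rng.normal(0.7, 0.15, 5000)
dens = rng.lognormal(0.0, 0.8, 5000)
print(blue_fraction_vs_density(color, mag, dens)[1])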
to rule out this possibility, figure lowzfrachalpha210 60 shows the same result as figure lowzfracblue210 60, but now for the fraction of galaxies with hxmath4 equivalent widths, as measured by xcite, greater than 4. again, for the strong emission line fraction as for the blue fraction, the smaller scales are important but the 6 xmath2 mpc scales are not, in contradiction with xcite. why then did xcite conclude that large scales were important? there are a number of differences between our study and theirs. first, their contouring method differs: instead of measuring the blue fraction in bins of fixed size at each point, they measure the star forming fraction among the nearest 500 galaxies in the plane of xmath59 and xmath60. we have found that this procedure creates a slight bias in the contouring, in the sense that near the edges of the distribution vertical contours will become diagonal; however, this effect is not strong enough to explain the differences between our results and those of xcite. second, to estimate the density in their sample they used a spherical gaussian filter, whereas here we use the overdensity in cones; we have not investigated what effect this difference has. finally, they use tracer galaxies with a considerably lower mean density than ours: their effective absolute magnitude limit is xmath61, and such galaxies have a mean density of xmath62 xmath63 mpcxmath64, while our tracers xmath22 have a mean density of xmath65 xmath63 mpcxmath64, almost six times higher. figure lowzfrachalpha2m20510 60 shows our results when we restrict our tracer sample to xmath61; the contours in this figure are very diagonal, similar to the results of xcite. this result suggests that one of two possible mechanisms is causing the differences between our results and those of xcite: first, the higher luminosity galaxies with xmath61 might be yielding fundamentally different information about the density field than our lower luminosity tracers; second, the lower mean density of the galaxies with xmath61 might effectively be introducing noise into the measurement on small scales. remember that the large scale and small scale densities are intrinsically correlated, so if the small scale measurement is noisy enough, the higher signal to noise ratio large scale measurement could actually be adding extra information about the environment on small scales; such an effect would make the contours in figure lowzfrachalpha2m20510 60 diagonal. we have performed a simple test to distinguish these possibilities, which is to remake figure lowzfrachalpha210 60 using the low luminosity tracers xmath66 but subsampling them to the same mean density as the high luminosity tracers xmath67. this test yields diagonal contours, meaning that one can understand the diagonal contours of figure lowzfrachalpha2m20510 60 and of xcite as simply reflecting the low signal to noise ratio of the density estimates. we explore the relative bias between galaxies as a function of scale, finding the following: 1 the dependence of mean environment on color persists to the lowest luminosities we explore xmath68; red low luminosity galaxies tend to be in overdense regions down at least to xmath69; 2 this result extends those found by xcite and xcite towards lower luminosities by about 2 magnitudes; 3 at any given point in color and luminosity, a correlation function with a stronger amplitude implies a correlation function with a steeper slope; 4 in regions of a given overdensity on small scales xmath70 xmath2 mpc, the overdensity on large scales xmath46 xmath2 mpc does not appear to relate to the recent star formation
history of the galaxies the last point above deserves elaboration first it contradicts the results of xcite we have found that their results are probably due to the low mean density of the tracers they used this explanation underscores the importance of taking care when using low signal to noise quantities galaxy environments are difficult to measure in the sense we use tracers that do not necessarily trace the environment perfectly meaning neither with low noise nor necessarily in an unbiased manner we claim here that our higher density of tracers marks an improvement over previous work but it is worth noting the limitations of assuming that the local galaxy density fairly and adequately represents whatever elements of the environment affect galaxy formation second if the galaxy density field is an adequate representation of the environment the result has important implications regarding the physics of galaxy formation in simulationswhose initial conditions are constrained by cosmic microwave background observations and galaxy large scale structure observations virialized dark matter halos do not extend to sizes much larger than xmath71xmath72 xmath2 mpc thus our results are consistent with the notion that only the masses of the host halos of the galaxies we observe are strongly affecting the star formation of the galaxies in addition xcite find that only the star formation histories not the azimuthally averaged structural parameters are directly related to environment for these reasons it is likely that we can understand the process of galaxy formation by only considering the properties of the host dark matter halos our results therefore encourage the halo model description of galaxy formation and the pursuit of semi analytic models which depend only on the properties of the host halo eg xcite thanks to eric bell and george lake for useful discussions during this work thanksto guinevere kauffmann for encouraging us to pursue this question thanks to christy tremonti and jarle brinchmann for the public distribution of their measurements of sdss spectra funding for the creation and distribution of the sdss has been provided by the alfred p sloan foundation the participating institutions the national aeronautics and space administration the national science foundation the us department of energy the japanese monbukagakusho and the max planck society the sdss web site is httpwwwsdssorg the sdss is managed by the astrophysical research consortium arc for the participating institutions the participating institutions are the university of chicago fermilab the institute for advanced study the japan participation group the johns hopkins university los alamos national laboratory the max planck institute for astronomy mpia the max planck institute for astrophysics mpa new mexico state university university of pittsburgh princeton university the united states naval observatory and the university of washington m eke v miller c lewis i bower r couch w nichol r bland hawthorn j baldry i k baugh c bridges t cannon r cole s colless m collins c cross n dalton g de propris r driver s p efstathiou g ellis r s frenk c s glazebrook k gomez p gray a hawkins e jackson c lahav o lumsden s maddox s madgwick d norberg p peacock j a percival w peterson b a sutherland w taylor k 2004 348 1355
we investigate the relationship between the colors luminosities and environments of galaxies in the sloan digital sky survey spectroscopic sample using environmental measurements on scales ranging from xmath0 to xmath1 xmath2 mpc we find 1 that the relationship between color and environment persists even to the lowest luminosities we probe xmath3 2 at luminosities and colors for which the galaxy correlation function has a large amplitude it also has a steep slope and 3 in regions of a given overdensity on small scales 1 xmath2 mpc the overdensity on large scales 6 xmath2 mpc does not appear to relate to the recent star formation history of the galaxies of these results the last has the most immediate application to galaxy formation theory in particular it lends support to the notion that a galaxy s properties are related only to the mass of its host dark matter halo and not to the larger scale environment
motivation data dependence of mean density on color and luminosity blue fraction as a function of environment summary and discussion
because many natural systems are organized as networks in which the nodes be they cells individuals populations or web servers interact in a time dependent fashion the study of networks has been an important focus in recent research one of the particular points of interest has been the question of how the hardwired structure of a network its underlying graph affects its function for example in the context of optimal information storage or transmission between nodes along time it has been hypothesized that there are two key conditions for optimal function in such networks a well balanced adjacency matrix the underlying graph should appropriately combine robust features and random edges and well balanced connection strengths driving optimal dynamics in the system however only recently has mathematics started to study rigorously through a combined graph theoretical and dynamic approach the effects of configuration patterns on the efficiency of network function by applying graph theoretical measures of segregation clustering coefficient motifs modularity rich clubs integration path length efficiency and influence node degree centrality various studies have been investigating the sensitivity of a system s temporal behavior to removing adding nodes or edges at different places in the network structure and have tried to relate these patterns to applications to natural networks brain functioning is one of the most intensely studied contexts which requires our understanding of the tight inter connections between system architecture and dynamics the brain is organized as a dynamic network self interacting in a time dependent fashion at multiple spatial and temporal scales to deliver an optimal range for biological functioning the way in which these modules are wired together in large networks that control complex cognition and behavior is one of the great scientific challenges of the 21st century currently being addressed by large scale research collaborations such as the human connectome project graph theoretical studies of empirical data support certain generic topological properties of brain architecture such as modularity small worldness the existence of hubs and rich clubs xcite in order to explain how connectivity patterns may affect the system s dynamics eg in the context of stability and synchronization in networks of coupled neural populations and thus the observed behavior a lot of effort has thus been invested towards formal modeling approaches using a combination of analytical and numerical methods from nonlinear dynamics and graph theory in both biophysical models xcite and simplified systems xcite these analyses revealed a rich range of potential dynamic regimes and transitions xcite shown to depend as much on the coupling parameters of the network as on the arrangement of the excitatory and inhibitory connections xcite the construction of a realistic data compatible computational model has been subsequently found to present many difficulties that transcend the existing methods from nonlinear dynamics and may in fact require 1 new analysis and book keeping methods and 2 a new framework that would naturally encompass the rich phenomena intrinsic to these systems both of which aspects are central to our proposed work in a paper with dr verduzco flores xcite one of the authors of this paper first explored the idea of having network connectivity as a bifurcation parameter for the ensemble dynamics in a continuous time system of coupled differential equations we used configuration
dependent phase spaces and our probabilistic extension of bifurcation diagrams in the parameter space to investigate the relationship between classes of system architectures and classes of their possible dynamics and we observed the robustness of the coupled dynamics to certain changes in the network architecture and its vulnerability to others as expected when translating connectivity patterns to network dynamics the main difficulties were raised by the combination of graph complexity and the system s intractable dynamic richness in order to break down and better understand this dependence we started to investigate it in simpler theoretical models where one may more easily identify and pair specific structural patterns to their effects on dynamics the logistic family is historically perhaps the most studied family of maps in nonlinear dynamics whose behavior is by now relatively well understood therefore we started by looking in particular at how dynamic behavior depends on connectivity in networks with simple logistic nodes this paper focuses on definitions concepts and observations in low dimensional networks future work will address large networks and different classes of maps dynamic networks with discrete nodes and the dependence of their behavior on connectivity parameters have been previously described in several contexts over the past two decades for example in an early paper wang considered a simple neural network of only two excitatory inhibitory neurons and analyzed it as a parameterized family of two dimensional maps proving existence of period doubling to chaos and strange attractors in the network xcite masoller atay et al have found that in networks of delay coupled logistic maps synchronization regimes and formation of anti phase clusters depend on coupling strength xcite and on the edge topology characterized by the spectrum of the graph laplacian xcite yu has constructed and studied a network wherein the undirected edges symbolize the nodes relation of adjacency in an integer sequence obtained from the logistic mapping and the top integral function xcite in our present work we focus on investigating in the context of networked maps extensions of the julia and mandelbrot sets traditionally defined for single map iterations for three different model networks we use a combination of analytical and numerical tools to illustrate how the system behavior measured via topological properties of the julia sets changes when perturbing the underlying adjacency graph we differentiate between the effects on dynamics of different perturbations that directly modulate network connectivity increasing decreasing edge weights and altering edge configuration by adding deleting or moving edges we discuss the implications of considering a rigorous extension of fatou julia theory known to apply for iterations of single maps to iterations of ensembles of maps coupled as nodes in a network the logistic map is historically perhaps the best known family of maps in nonlinear dynamics iterations of one single quadratic function have been studied starting in the early 20th century with the work of fatou and julia the prisoner set of a map xmath3 is defined as the set of all points in the complex dynamic plane whose orbits are bounded the escape set of a complex map is the set of all points whose orbits are unbounded the julia set of xmath3 is defined as their common boundary xmath4 the filled julia set is the union of prisoner points with their boundary xmath4 for polynomial maps it has been shown
that the connectivity of a map s julia set is tightly related to the structure of its critical orbits ie the orbits of the map s critical points due to extensive work spanning almost one century from julia xcite and fatou xcite until recent developments xcite we now have the following fatou julia theorem for a polynomial with at least one critical orbit unbounded the julia set is totally disconnected if and only if all the bounded critical orbits are aperiodic for a single iterated logistic map xcite the fatou julia theorem implies that the julia set is either connected for values of xmath5 in the mandelbrot set ie if the orbit of the critical point 0 is bounded or totally disconnected for values of xmath5 outside of the mandelbrot set ie if the orbit of the critical point 0 is unbounded in previous work the authors showed that this dichotomy breaks in the case of random iterations of two maps xcite in our current work we focus on extensions for networked logistic maps although julia and mandelbrot sets have been studied somewhat in connection with coupled systems xcite none of the existing work seems to address the basic problems of how these sets can be defined for networks of maps how different aspects of the network hardwiring affect the topology of these sets and whether there is any fatou julia type result in this context these are some of the questions addressed in this paper which is organized as follows in section logisticmaps we introduce definitions of our network setup as well as of the extensions of mandelbrot and julia sets that we will be studying in order to illustrate some basic ideas and concepts we concentrate on three examples of 3dimensional networks which differ from each other in edge distribution and whose connectivity strengths are allowed to vary in section complexmaps we focus on the behavior of these 3dimensional models when we consider the nodes as complex iterated variables we analyze the similarities and differences between node wise behavior in each case and we investigate the topological perturbations in one dimensional complex slices of the mandelbrot and julia sets as the connectivity changes from one model to the next through intermediate stages in section realmaps we address the same questions for real logistic nodes with the advantage of being able to visualize the entire network mandelbrot and julia sets as 3dimensional real objects in both sections we conjecture weaker versions of the fatou julia theorem connecting points in the mandelbrot set with connectivity properties of the corresponding julia sets finally in section discussion we interpret our results both mathematically and in the larger context of network sciences we also briefly preview future work on high dimensional networks and on networks with adaptable nodes and edges we consider a set of xmath6 nodes coupled according to the edges of an oriented graph with adjacency matrix xmath7 on which one may impose additional structural conditions related to edge density or distribution in isolation each node xmath8 xmath9 functions as a discrete nonlinear map xmath10 changing at each iteration xmath11 as xmath12 when coupled as a network with adjacency xmath13 each node will also receive contributions through the incoming edges from the adjacent nodes throughout this paper we will consider an additive rule of combining these contributions for a couple of reasons first summing weighted incoming inputs is one simple yet mathematically nontrivial way to introduce the cross talk between nodes
second summing weighted inputs inside a nonlinear integrating function is reminiscent of certain mechanisms studied in the natural sciences such as the integrate and fire neural mechanism studied in our previous work in the context of coupled dynamics the coupled system will then have the following general form xmath14 where xmath15 are the weights along the adjacency edges one may view this system simply as an iteration of an xmath6dimensional map xmath16 with xmath17 in the case of real valued nodes or respectively xmath18 in the case of complex valued nodes the new and exciting aspect that we are proposing in our work is to study the dependence of the coupled dynamics on the parameters in particular on the coupling scheme adjacency matrix viewed itself as a system parameter to fix these ideas we focused first on defining these questions and proposing hypotheses for the case of quadratic node dynamics the logistic family is one of the most studied family of maps in the context of both real and complex dynamics of a single variable it was also the subject of our previous modeling work on random iterations in this paper in particular we will work with quadratic node maps with their traditional parametrization xmath19 with xmath20 and xmath21 for the complex case and xmath22 and xmath23 for the real case the network variable will be called respectively xmath24 in the case of complex nodes and xmath25 in the case of real nodes we consider both the particular case of identical quadratic maps equal xmath5 values and the general case of different maps attached to the nodes throughout the network in both cases we aim to study the asymptotic behavior of iterated node wise orbits as well as of the xmath6dimensional orbits which we will call multi orbits as in the classical theory of fatou and julia we will investigate when orbits escape to infinity or remain bounded and how much of this information is encoded in the critical multi orbit of the system for the following definitions fix the network ie fix the adjacency xmath13 and the edge weights xmath26 to avoid redundancy we give definitions for the complex case but they can be formulated similarly for real maps for a fixed parameter xmath27 we call the filled multi julia set of the network the locus of xmath28 which produce a bounded multi orbit in xmath29 we call the filled uni julia set the locus of xmath30 so that xmath31 produces a bounded multi orbit the multi julia set or the multi j set of the network is defined as the boundary in xmath29 of the filled multi julia set similarly one defines the uni julia set or uni j set of the network as the boundary in xmath32 of its filled counterparts we define the multi mandelbrot set or the multi m set of the network the parameter locus of xmath27 for which the multi orbit of the critical point xmath33 is bounded in xmath29 we call the equi mandelbrot set or the equi m set of the network the locus of xmath21 for which the critical multi orbit is bounded for equi parameter xmath34 we call the xmath35th node equi m set the locus xmath21 such that the component of the multi orbit of xmath33 corresponding to the xmath35th node remains bounded in xmath32 we study using a combination of analytical and numerical methods how the structure of the julia and mandelbrot sets varies under perturbations of the node wise dynamics ie under changes of the quadratic multi parameter xmath36 and under perturbations of the coupling scheme ie of the adjacency matrix xmath13 and of the coupling weights xmath26 in this paper 
we start with investigating these questions in small 3dimensional networks with specific adjacency configurations in a subsequent paper we will move to investigate how similar phenomena may be quantified and studied analytically and numerically in high dimensional networks in both cases we are interested in particular in observing differences in the effects on dynamics of three different aspects of the network architecture 1 increasing decreasing edge weights 2 increasing decreasing edge density 3 altering edge configuration by adding deleting or moving edges while a desired objective would be to obtain general results for all network sizes since many natural networks are large we start by studying simple low dimensional systems in this study we focus on simple networks formed of three nodes connected by different network geometries and edge weights to fix our ideas we will follow and illustrate three structures in particular also see figure 3dnet 1 two input nodes xmath37 and xmath38 are self driven by quadratic maps and the single output node xmath39 is driven symmetrically by the two input nodes xmath37 additionally communicates with xmath38 via an edge of variable weight xmath40 which can take both positive and negative values we will call this the simple dual model 2 in addition to the simple dual scheme the output node xmath39 is also self driven ie there is a self loop on xmath39 of weight xmath41 which can be positive or negative we will call this the self drive model 3 in addition to the self driven model there is also feedback from the output node xmath39 into the node xmath38 via a new edge of variable weight xmath3 we will call this the feedback model unless specified edges have positive unit weight notice that the same effect as negative feed forward edges from xmath37 and xmath38 into xmath39 can be obtained by changing the sign of xmath41 etc the three connectivity models we chose to study and compare are described by the equations below simple dual model xmath42 self drive model xmath43 feedback model xmath44 a short illustrative code sketch of these update rules is included at the end of this section for a fixed multi parameter xmath45 for example one can see all three systems as generated by a network map xmath46 xmath47 xmath48 defined as xmath49 for any xmath50 we try to classify and understand the effects that coupling changes have on the topology of multi j and multi m sets for both complex and real networked maps we do not expect all classical topology results on the julia and mandelbrot sets for single maps eg fatou julia theorem or connectivity of the mandelbrot set to carry over to networks of coupled maps however since the topology of the full sets in xmath51 is somewhat harder to inspect we study as a first step their equi slices and node wise equi slices which are objects in xmath32 we will track and compare in particular the differences between the three models but also the geometric and topological changes produced on the equi slices within each one model for different values of the parameters xmath40 xmath41 and xmath3 none of these results however can be directly extrapolated to similar conclusions on the full sets to offer some insight into the latter we study the multi m and multi j sets in the context of real maps for which these objects can be visualized in xmath52
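as a concrete illustration the following python sketch implements one iteration of the three models under the additive coupling rule described above each node applies the quadratic map xmath19 to the weighted sum of its incoming states the routine names and the exact placement of the weights xmath40 xmath41 and xmath3 written here as a d and fb are our reading of the verbal description of the models since the displayed equations are not recoverable in this text and should be treated as an assumption rather than the authors exact formulation the same routine also gives a numerical membership test for the multi m and equi m sets via boundedness of the critical multi orbit

import numpy as np

def f(z, c):
    # quadratic node map f_c(z) = z**2 + c
    return z * z + c

def step(z, c, a=0.0, d=0.0, fb=0.0):
    # one network iteration for the three 3-node models (assumed weight placement):
    #   a  : cross-talk weight from input node 1 into input node 2 (xmath40)
    #   d  : self-drive weight on the output node (xmath41)
    #   fb : feedback weight from the output node back into node 2 (xmath3)
    # d = fb = 0 gives the simple dual model, fb = 0 the self-drive model.
    z1, z2, z3 = z
    return np.array([
        f(z1, c[0]),                     # input node 1: self-driven quadratic map
        f(z2 + a * z1 + fb * z3, c[1]),  # input node 2: self + cross-talk + feedback
        f(z1 + z2 + d * z3, c[2]),       # output node: symmetric feed-forward + self-drive
    ])

def critical_orbit_bounded(c, a=0.0, d=0.0, fb=0.0, n_iter=100, radius=4.0):
    # escape-time test of the critical multi-orbit (started at the origin);
    # True means c is numerically accepted into the multi-M set
    z = np.zeros(3, dtype=complex)
    for _ in range(n_iter):
        z = step(z, c, a, d, fb)
        if np.max(np.abs(z)) > radius:
            return False
    return True

# equi-parameter test value (arbitrary, for illustration only)
print(critical_orbit_bounded(np.full(3, -0.5 + 0.0j), a=0.2, d=-0.5))

sampling the equi parameter over a grid of complex values and recording where the test returns true produces pixel images of the equi m sets of the kind discussed next the iteration count and escape radius above are illustrative defaults and not the values used for the figures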
a first intuitive question is when the nodes of the network have similar behavior and whether if one node wise orbit is bounded the others will remain bounded this relationship is trivial to establish in some cases such as for example in the simple dual model with independent input nodes ie xmath53 indeed in this model for any fixed xmath21 the origin s orbit in xmath51 under xmath54 can be described as xmath55 the projection of the orbit in any of the three components only depends on the previous states of xmath37 and xmath38 and these three sequences are simultaneously bounded in xmath32 hence the node specific equi mandelbrot sets are all identical with the traditional mandelbrot set some basic connections between node wise equi m sets in each of the three models are stated below we will prove these incrementally recall that the dual model is a particular case of self drive for xmath56 and the self drive is a particular case of feedback model with xmath57 (figure nodedifferences caption fragment ... the equi mandelbrot sets for the nodes xmath38 and xmath39 are identical red but different from the set for the node xmath37 blue b for the self drive model with negative feedback xmath58 and xmath59 the equi mandelbrot sets for the three nodes xmath37 xmath38 and xmath39 shown respectively in blue green and red are all different c for the feedback model with negative feedback xmath58 and xmath59 xmath60 the equi mandelbrot sets for the nodes xmath38 and xmath39 are identical red but different from the set for the node xmath37 blue in all panels the computations were generated based on xmath61 iterations and for a test radius of xmath62) in the simple dual model the node wise equi m sets for the nodes xmath38 and xmath39 are identical subsets of the traditional mandelbrot set which is the equi m set for node xmath37 propsimpledual an additional self drive xmath63 applied to the output node changes the balance of inputs to xmath39 in the following sense in the self drive model the node wise equi m sets of xmath38 and xmath39 remain subsets of the standard mandelbrot set but the equi m set of xmath39 is strictly contained in the equi m set of xmath38 figure nodedifferencesb propselfdrive finally introducing any arbitrary feedback xmath64 re couples the behavior of nodes xmath38 and xmath39 producing a common equi mandelbrot set largely shrunk from the simple dual version in the feedback model with xmath63 and xmath64 the node wise equi m sets for the nodes xmath38 and xmath39 are again identical subsets of the traditional mandelbrot set figure nodedifferencesc propfeedback for the rest of the section the term equi m set will refer to the equi mandelbrot set of the network which is the intersection of the three node specific sets we illustrate the equi m set for the three models and for different levels of cross talk xmath40 xmath41 and xmath3 between nodes starting with the simple dual input version of the model we show in figure dualinput the effects of changing the level xmath40 of cross talk between the input nodes on the shape of the equi m set it is not surprising that in both positive and negative xmath40 ranges increasing xmath65 gradually shrinks the equi mandelbrot set this can be motivated intuitively by the fact that an additional contribution to the node xmath38 may cause the critical orbit to increase faster in the xmath38 and subsequently the xmath39 components hence points in the traditional set will no longer be included in the mutants for xmath66 as xmath40 increases in the positive range we noticed that the network m sets form nested subsets which is not true for the negative range that they remain connected for all values of xmath40 and that the hausdorff dimension of the boundary increases with xmath40 in figure dualinput notice an increased wrinkling of the boundary as
xmath40 takes larger positive values and an increased smoothing as xmath40 takes negative values with increasing absolute value perturbations of xmath40 in the positive range seem to have a much more substantial contribution to the size of the equi m set while perturbations of xmath40 in the negative range have a lesser influence on the size and affect mostly the region close to the boundary of the equi m set and the boundary topological details we will track the same changes in xmath40 in the other network models and investigate if this trend is consistent (figure dualinput caption fragment ... increases a xmath58 b xmath67 c xmath53 the traditional mandelbrot set d xmath68 e xmath69) figure equimandselfdrive illustrates the evolution of the equi m set in the case of the model with self drive for a grid of positive and negative values of the input connectivity xmath40 and of the self drive xmath41 below are some simple visual observations based on our numerical computations to be addressed analytically in future work decreasing xmath41 in the negative range produces no alteration of the m sets when xmath70 however it induces dramatic changes in shape and connectivity when xmath71 if for xmath72 relatively large increasing xmath40 only slightly alters the shape of the set for small xmath72 the size of the set is also altered with increasing xmath40 generating smaller and smaller subsets and the complexity of its boundary also seems to increase the effects of varying xmath71 for a fixed value of xmath41 become more dramatic with decreasing xmath41 in the negative range these effects include changes in shape and topology with the region xmath71 and xmath73 allowing the m set to break into multiple connected components (figure equimandselfdrive caption fragment ... xmath40 and xmath41 the rows show from bottom to top increasing values of the self drive xmath74 xmath75 xmath76 xmath56 this row representing the simple dual model as shown in figure dualinput xmath59 xmath77 xmath78 the columns show from left to right increasing values of cross talk between the two input nodes xmath58 xmath67 xmath53 xmath68 and xmath69 all the equi m sets were generated from xmath61 iterations and plotted at the same scale in the complex square xmath79) in this section we will track the changes in the uni julia set when the parameters of the system change one of our goals is to test first in the case of equi parameters xmath21 then for general parameters in xmath51 if a fatou julia type theorem applies in the case of our three networks first we try to establish a hypothesis for connectedness of uni j sets by addressing numerically and visually questions such as is it true that if xmath5 is in the equi m set of a network then the uni julia set is connected is it true that if xmath5 is not in the equi m set of the network then the uni julia set is totally disconnected clearly this is not simply a xmath51 version of the traditional fatou julia theorem but rather a slightly different result involving the projection of the julia set onto a uni slice notice that a connected uni j set in xmath32 may be obtained from a disconnected xmath51 network julia set and conversely that a disconnected uni j projection may come from a connected julia in xmath51 we will further discuss xmath51 versions of these objects in the context of iterations of real variables where one can visualize the full mandelbrot and julia sets for the network as subsets of xmath52 here we will first investigate uni j sets for equi parameters xmath80 with a particular focus on tracking the topological changes of the uni j set as the system approaches the boundary of the equi m set and leaves the equi m set
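the uni j computations described here can be reproduced with a straightforward escape time scan the sketch below reuses the step function from the earlier sketch and assumes following our reading of the definition of the filled uni julia set given earlier that a point of the uni slice is an initial condition in which all three node states are set to the same complex value the grid window resolution iteration count and the test value of the equi parameter are illustrative placeholders and not the settings used for the figures

import numpy as np  # step() is assumed to be defined as in the earlier sketch

def filled_uni_julia(c, a=0.0, d=0.0, fb=0.0, n_iter=100, radius=4.0,
                     extent=1.6, res=200):
    # boolean mask of the filled uni-J set: keep a grid point z0 if the
    # multi-orbit of the "diagonal" initial condition (z0, z0, z0) stays bounded
    xs = np.linspace(-extent, extent, res)
    ys = np.linspace(-extent, extent, res)
    mask = np.zeros((res, res), dtype=bool)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = np.full(3, x + 1j * y)
            bounded = True
            for _ in range(n_iter):
                z = step(z, c, a, d, fb)
                if np.max(np.abs(z)) > radius:
                    bounded = False
                    break
            mask[i, j] = bounded
    return mask

# self-drive network with an arbitrary equi-parameter, purely for illustration
mask = filled_uni_julia(np.full(3, -0.1 - 0.7j), d=-0.5)

the uni j set itself is then approximated by the boundary pixels of the returned mask for example via a standard contouring routine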
(figure unijulia1 caption fragment ... and xmath74 for different values of the equi parameter xmath5 marked with colored dots on the equi m template in upper left xmath81 red xmath82 green xmath83 blue xmath84 orange xmath85 purple all sets were based on xmath86 iterations both equi m and uni j sets coincide in this case with the traditional mandelbrot and julia sets for single map iterations) (figure unijulia2 caption fragment ... and xmath76 for different values of the equi parameter xmath5 marked with colored dots on the equi m template in upper left xmath87 red xmath88 green xmath84 orange xmath89 blue xmath90 dark purple xmath91 cyan xmath92 magenta for the first four panels xmath5 is in the equi m set for the last two xmath5 is outside of the equi m set all sets were based on 100 iterations) (figure unijuliamodelcomparison caption fragment ... as the network profile is changed from a simple dual with xmath93 to b self drive with additional xmath74 to c feedback with additional xmath94) first we fix the network type and the connectivity profile ie the parameters xmath40 xmath41 and xmath3 and we observe how the uni j set evolves as the equi parameter xmath5 changes in figures unijulia1 and unijulia2 we illustrate this for two examples of self driven models one with xmath74 and xmath53 the other with xmath76 and xmath58 as the parameter point xmath95 approaches the boundary of the equi m set the topology of the uni j set is affected with its connectivity breaking down around the boundary second we look at the dependence of uni julia sets on the coupling profile network type as an example we fixed the equi parameter xmath96 and we first considered a simple dual network with negative feed forward and small cross talk xmath97 we then added self drive xmath74 to the output node then additionally introduced a small negative feedback xmath98 the three resulting uni julia sets are shown in figure unijuliamodelcomparison notice that a very small degree of feedback xmath3 produces a more substantial difference than a significant change in the self drive xmath41 third one can study the dependence of uni julia sets on the strength of specific connections within the network as a simple illustration of how complex this dependence may be we show in figures c2julia and c3julia the effects on the uni j sets of slight increases in the cross talk parameter xmath40 for two different values of the equi parameter xmath5 an immediate observation is that the dichotomy from traditional single map iterations no longer stands uni j sets can be connected totally disconnected but also disconnected into a finite or infinite number of connected components without being totally disconnected based on our illustrations we can further conjecture in the context of our three models a description of connectedness for uni j sets as follows (figure c2julia caption fragment ... and equi parameter xmath99 as the input cross talk xmath40 is increased the panels show left to right xmath100 xmath93 xmath53 xmath101 and xmath102) (figure c3julia caption fragment ... and equi parameter xmath96 as the input cross talk xmath40 is increased the panels show left to right xmath103 xmath104 xmath105 xmath97 xmath53 xmath106 and xmath107 xmath108) for any of the three models described and for any equi parameter xmath21 the uni j set is connected only if xmath5 is in the equi m set of the network and it is totally disconnected only if xmath5 is not in the equi m set of the network the conjecture implies a looser dichotomy regarding connectivity of uni j sets than that
delivered by the traditional fatou julia result for single maps if xmath5 is in the equi m set of the network then the uni j set is either connected or disconnected without being totally disconnected if xmath5 is not in the equi m set of the network then the uni j set is disconnected allowing in particular the case of totally disconnected finally we want to remind the reader that uni julia sets can be defined for general parameters xmath45 as shown in figure generalunijulia with xmath109 xmath110 and xmath111 (figure generalunijulia caption fragment the panels represent uni j sets for a self drive network with xmath74 as the cross talk xmath40 changes from a xmath53 to b xmath112 to c xmath113) (figure realmand caption fragment ... center self drive network with xmath114 and xmath78 right self drive network with xmath115 and xmath78 plots were generated with xmath116 iterations and in resolution xmath117) the same definitions apply for iterations of real quadratic maps with the real case presenting the advantage of easy visualization of full julia and mandelbrot sets rather than having to consider equi slices as we did in the complex case in figures realmand and realjulia we illustrate a few multi m and multi j sets respectively for some of the same networks considered in our complex case moving to illustrate the relationship between the multi m and the multi j set in this case consider for example the self drive real network with xmath115 and xmath78 for different parameters xmath118 while more computationally intensive higher resolution figures would be necessary to establish the geometry and fractality of these sets one may already notice basic properties for example figure realjulia shows that if one were to consider complex equi parameters the multi julia set may not only be connected figure realjuliaa or totally disconnected not shown but may also be broken into a number of connected components figures realjuliab and c (figure realjulia caption fragment ... and xmath78 with equi parameters respectively a xmath119 b xmath120 c xmath96 plots were generated with xmath116 iterations and in resolution xmath117) this remained true if we returned to our restriction of having real parameters once we allow arbitrary that is not necessarily equi parameters the panels of figure realcomp show the multi j sets for two different but close parameters xmath121 and xmath122 respectively both of which are not in the multi m set the figures suggest a disconnected although not totally disconnected multi j set in the first case and a connected multi j set in the second case this implies that in this case the fatou julia dichotomy fails in its traditional form and that the statement relating boundedness of the critical orbit with connectedness of the multi j set does not hold for real networks more precisely we found parameters for which the multi julia set appears to be connected although the critical multi orbit is unbounded on the other hand the counterpart of the theorem may still hold in the following form the multi j set is connected if the parameter belongs to the multi m set part of our current work consists in optimizing the numerical algorithm for multi m and j sets in real networks with high enough resolution to allow 1 observation of possible fractal properties of multi j sets and of multi m sets boundaries and 2 computation of the genus of the filled multi j sets in an attempt to phrase a topological extension of the theorem that takes into account the number of handles and tunnels that open up in these sets as their connectivity breaks down when leaving the mandelbrot set
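a brute force version of such an algorithm can be sketched as follows it reuses the step function from the first sketch now acting on real arrays scans a three dimensional grid of real parameters and keeps a parameter triple whenever the critical multi orbit started at the origin stays within a test radius the window resolution radius and iteration count are placeholders and the cost grows with the cube of the resolution which is exactly why the optimization mentioned above matters

import numpy as np  # step() as in the first sketch, applied here to real-valued arrays

def real_multi_M(a=0.0, d=0.0, fb=0.0, n_iter=100, radius=4.0,
                 lo=-2.0, hi=0.5, res=40):
    # crude voxel scan of the multi-M set for the real networked maps:
    # return the real parameter triples (c1, c2, c3) whose critical multi-orbit
    # remains within `radius` for `n_iter` iterations
    cs = np.linspace(lo, hi, res)
    kept = []
    for c1 in cs:
        for c2 in cs:
            for c3 in cs:
                c = np.array([c1, c2, c3])
                z = np.zeros(3)
                for _ in range(n_iter):
                    z = step(z, c, a, d, fb)
                    if np.max(np.abs(z)) > radius:
                        break
                else:                      # no escape: keep this parameter voxel
                    kept.append((c1, c2, c3))
    return np.array(kept)

# self-drive real network, illustrative weights only
points = real_multi_M(d=-0.5)

plotting the returned points as a 3d scatter gives a low resolution picture of the multi m set and the same loop with the roles of parameters and initial conditions exchanged approximates the filled multi j set for a fixed parameter triple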
(figure realcomp caption fragment ... and xmath78 the two multi parameters xmath121 left panels and xmath122 right panels are not in the mandelbrot set for the network the top row shows the 3dimensional julia sets the bottom panels show top views of the same sets plots were generated with xmath116 iterations and in resolution xmath117) in this paper we used a combination of analytical and numerical approaches to propose possible extensions of fatou julia theory to networked complex maps we started by showing that even in networks where all nodes are identical maps their behavior may not be synchronized the node wise mandelbrot sets may be identical in some cases while in others they may differ substantially depending on the coupling pattern we then investigated how specific changes in the network hard wiring trigger different effects on the structure of the network mandelbrot and julia sets focusing in particular on observing topological properties connectivity and fractal behavior hausdorff dimension we found instances in which small perturbations in the strength of one single connection may lead to dramatic topological changes in the asymptotic sets and instances in which these sets are robust to much more significant changes more generally our paper suggests a new direction of study with potentially tractable although complex mathematical questions while existing results do not apply to networks in their traditional form it appears that connectivity of the newly defined uni julia sets may still be determined by the behavior of the critical orbit we conjectured a weaker extension of the fatou julia theorem which was based only on numerical inspection and which remains subject to a rigorous study that would support or refute it there are a few interesting aspects which we aim to address in our future work on iterated networks for example we are interested in studying the structure of equi m and uni j sets for larger networks and in understanding the connection between the network architecture and its asymptotic dynamics this direction can lead to ties and applications to understanding functional networks that appear in the natural sciences which are typically large the authors previous work has addressed some of these aspects in the context of continuous dynamics and coupled differential equations however when translating network architectural patterns into network dynamics the great difficulty arises from a combination of the graph complexity and the system s intractable dynamic richness addressing the question at the level of low dimensional networks can help us more easily identify and pair specific structural patterns to their effects on dynamics and thus better understand this dependence the next natural step is to return to the search for a similar classification in high dimensional networks where specific graph measures or patterns eg node degree distribution graph laplacian presence of strong components cycles or motifs may help us independently or in combination classify the network s dynamic behavior (figures 4dimmand and 10dimmand caption fragments the systems in each figure are described schematically on the left together with their adjacency matrices both systems have connectivity parameters xmath123 xmath124 xmath125 xmath126 xmath127 xmath128 xmath129 xmath130 in the first figure and xmath123 xmath124 xmath125 xmath126 xmath127 xmath128 xmath131 xmath130 in the second)
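the kind of single edge experiment referenced in these figures can be phrased for an arbitrary adjacency matrix the sketch below assumes consistently with the additive coupling rule used throughout that node xmath8 applies the quadratic map to the weighted sum of the states of its in neighbours row xmath8 of a weight matrix and then compares the escape time mask of an equi m slice before and after one edge is added the matrices weights window and resolution are illustrative and are not the settings used for figures 4dimmand or 10dimmand

import numpy as np

def step_general(z, c, W):
    # general networked quadratic map under the assumed additive coupling:
    # node i maps the weighted sum of its inputs, z -> (W @ z)**2 + c
    return (W @ z) ** 2 + c

def equi_M_mask(W, n_iter=50, radius=4.0, extent=2.0, res=300):
    # vectorized escape-time mask of an equi-M slice for coupling matrix W
    # (the pixel-grid version of step_general with the same parameter at every node)
    xs = np.linspace(-extent, extent, res)
    X, Y = np.meshgrid(xs, xs)
    C = X + 1j * Y
    n = W.shape[0]
    Z = np.zeros((n,) + C.shape, dtype=complex)
    mask = np.ones(C.shape, dtype=bool)
    for _ in range(n_iter):
        Z = np.tensordot(W, Z, axes=1) ** 2 + C
        escaped = np.max(np.abs(Z), axis=0) > radius
        mask &= ~escaped
        Z[:, escaped] = 0.0          # reset escaped pixels so values stay finite
    return mask

W0 = np.array([[1.0, 0.0, 0.0],      # illustrative 3-node weights (simple dual flavor)
               [0.3, 1.0, 0.0],
               [1.0, 1.0, 0.0]])
W1 = W0.copy()
W1[1, 2] = 0.1                       # add a single (feedback) edge
changed = equi_M_mask(W0) ^ equi_M_mask(W1)
print("pixels that change equi-M membership:", int(changed.sum()))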
of high interest are methods that can identify robust versus vulnerable features of the graph from a dynamics standpoint as figures 4dimmand and 10dimmand show it is clear that a small perturbation of the graph eg adding a single edge has the potential even for higher dimensional networks to produce dramatic changes in the asymptotic dynamics of the network and readily lead to substantially different m and j sets however this is not consistently true we would like to understand whether a network may have a priori knowledge of which structural changes are likely to produce large dynamic effects this is a real possibility in large natural learning networks including the brain where such knowledge probably affects decisions of synaptic restructuring and temporal evolution of the connectivity profile (caption fragment for the two clique figures nodes formed of two cliques xmath132 and xmath133 with xmath134 nodes in each the adjacency matrix is therefore similar to those in figure 4dimmand with square blocks xmath135 xmath136 and xmath137 of size xmath134 the densities number of ones in each block ie number of xmath132 to xmath133 and respectively xmath133 to xmath132 connecting edges were taken in each panel to be out of the total of xmath138 a xmath139 b xmath140 xmath141 c xmath142 d xmath143 in all cases the connectivity parameters ie edge weights were xmath144 and xmath145) our future work includes understanding and interpreting the importance of this type of result in the context of networks from natural sciences one potential view proposed by the authors in their previous joint work is to interpret iterated orbits as describing the temporal evolution of an evolving system eg copying and proofreading
dna sequences or learning in a neural network along these lines an initial xmath146 which escapes to xmath147 under iterations may represent a feature of the system which becomes in time unsustainable while an initial xmath146 which is attracted to a simple periodic orbit may represent a feature which is too simple to be relevant or efficient for the system then the points on the boundary between these two behaviors ie the julia set may be viewed as the optimal features allowing the system to perform its complex function we study how this optimal set of features changes when perturbing its architecture once we gain enough knowledge of networked maps for fixed nodes and edges and we formulate which applications this framework may be appropriate to address symbolically we will allow the nodes dynamics as well as the edge weights and distribution to evolve in time together with the iterations this process may account for phenomena such as learning or adaptation a crucial aspect that needs to be understood about systems this represents a natural direction in which to extend existing work by the authors on random iterations in the one dimensional case the work on this project was supported by the suny new paltz research scholarship and creative activities program we additionally want to thank sergio verduzco flores for his programing suggestions and mark comerford for the useful mathematical discussions 10 rdulescu a verduzco flores s 2015 nonlinear network dynamics under perturbations of the underlying graph chaos an interdisciplinary journal of nonlinear science 251 013116 gray rt robinson pa 2009 stability and structural constraints of random brain networks with excitatory and inhibitory neural populations journal of computational neuroscience 271 81101 siri b quoy m delord b cessac b berry h 2007 effects of hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons journal of physiology paris 1011 136148 brunel n 2000 dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons journal of computational neuroscience 83 183208 bullmore e sporns o 2009 complex brain networks graph theoretical analysis of structural and functional systems nature reviews neuroscience 103 186198 sporns o 2002 graph theory methods for the analysis of neural connectivity patterns neuroscience databases a practical guide 171186 sporns o 2011 the non random brain efficiency economy and complex dynamics frontiers in computational neuroscience 5 5 julia g mmoire sur litration des fonctions rationnelles journal de mathmatiques pures et appliques 47246 1918 fatou p sur les quations fonctionnelles bulletin de la socit mathmatique de france 47 161271 1919 branner b hubbard jh the iteration of cubic polynomials part ii patterns and parapatterns acta mathematica 1691 229325 1992 qiu wy yin yc proof of the branner hubbard conjecture on cantor julia sets science in china series a mathematics 521 4565 2009 carleson l gamelin tw complex dynamics volume 69 springer science business media 1993 devaney rl look dm a criterion for sierpinski curve julia sets in topology proceedings volume 30 163179 2006 fatihcan m atay jrgen jost and andreas wende delays connection topology and synchronization of coupled chaotic maps 9214144101 2004 c hauptmann h touchette and mc mackey information capacity and pattern formation in a tent map network featuring statistical periodicity 672026217 2003 ob isaeva sp kuznetsov and ah osbaldestin phenomena of complex analytic dynamics in 
the systems of alternately excited coupled non autonomous oscillators and self sustained oscillators 2010 cm marcus and rm westervelt dynamics of iterated map neural networks 401501 1989 cristina masoller and fatihcan m atay complex transitions to synchronization in delay coupled networks of logistic maps 621119126 2011 anca rdulescu and ariel pignatelli symbolic template iterations of complex quadratic maps 1 18 2016 xiaoling yu zhen jia and xiangguo jian logistic mapping based complex network modeling 4111558 2013 wang xin period doublings to chaos in a simple neural network an analytical proof 54 425444 1991 the figures show four uni j sets for xmath148 and xmath149 nodes the equi parameters adjacency matrices and connectivity parameters of each network are given below from left to right
many natural systems are organized as networks in which the nodes interact in a time dependent fashion the object of our study is to relate connectivity to the temporal behavior of a network in which the nodes are real or complex logistic maps coupled according to a connectivity scheme that obeys certain constraints but also incorporates random aspects we investigate in particular the relationship between the system architecture and possible dynamics in the current paper we focus on establishing the framework terminology and pertinent questions for low dimensional networks a subsequent paper will further address the relationship between hardwiring and dynamics in high dimensional networks for networks of both complex and real node maps we define extensions of the julia and mandelbrot sets traditionally defined in the context of single map iterations for three different model networks we use a combination of analytical and numerical tools to illustrate how the system behavior measured via topological properties of the julia sets changes when perturbing the underlying adjacency graph we differentiate between the effects on dynamics of different perturbations that directly modulate network connectivity increasing decreasing edge weights and altering edge configuration by adding deleting or moving edges we discuss the implications of considering a rigorous extension of fatou julia theory known to apply for iterations of single maps to iterations of ensembles of maps coupled as nodes in a network real and complex behavior for networks of coupled logistic maps anca rădulescu xmath0 ariel pignatelli xmath1 xmath2 department of mathematics suny new paltz ny 12561 xmath1 department of mechanical engineering suny new paltz ny 12561
introduction our models of networked logistic maps complex coupled maps real case discussion acknowledgements appendix a: uni-j sets for higher dimensional networks
a latin bitrade xmath5 is a pair of partial latin squares which are disjoint occupy the same set of non empty cells and whose corresponding rows and columns contain the same set of entries one of the earliest studies of latin bitrades appeared in xcite where they are referred to as exchangeable partial groupoids latin bitrades are prominent in the study of critical sets which are minimal defining sets of latin squares xcite xcite xcite and the intersections between latin squares xcite we write xmath6 when symbol xmath7 appears in the cell at the intersection of row xmath8 and column xmath9 of the partial latin square xmath10 a xmath3homogeneous bitrade has xmath3 elements in each row xmath3 elements in each column and each symbol appears xmath3 times cavenagh xcite obtained the following theorem using combinatorial methods as a corollary to a general classification result on xmath3homogeneous bitrades theorem3homtransversals let xmath5 be a xmath3homogeneous bitrade then xmath10 can be partitioned into three transversals in this paper we provide an independent and geometric proof of cavenagh s result in doing so we provide a framework for studying bitrades as tessellations in spherical euclidean or hyperbolic space in particular bitrades can be thought of as finite representations of certain triangle groups we let permutations act on the right in accordance with computer algebra systems such as sage xcite graphs in this paper may contain loops or multiple edges otherwise our notation is standard and we refer the reader to diestel xcite some basic topological terms will be used for these we refer the reader to stillwell xcite finally a good reference for hypermaps and graphs on surfaces is xcite a partial latin square xmath11 of order xmath12 is an xmath13 array where each xmath14 appears at most once in each row and at most once in each column a latin square xmath15 of order xmath12 is an xmath13 array where each xmath14 appears exactly once in each row and exactly once in each column it is convenient to use setwise notation to refer to entries of a partial latin square and we write xmath16 if and only if symbol xmath7 appears in the intersection of row xmath8 and column xmath9 of xmath11 in this manner xmath17 for finite sets xmath18 each of size xmath19 it is also convenient to interpret a partial latin square as a multiplication table for a binary operator xmath20 writing xmath6 if and only if xmath21 defnbitradea123 let xmath10 xmath22 be two partial latin squares then xmath5 is a bitrade if the following three conditions are satisfied xmath23 for all xmath24 and all xmath25 xmath26 xmath27 there exists a unique xmath28 such that xmath29 and xmath30 for all xmath31 and all xmath25 xmath26 xmath27 there exists a unique xmath32 such that xmath29 and xmath30 conditions r2 and r3 imply that each row column of xmath10 contains the same subset of xmath33 as the corresponding row column of xmath34 a xmath7homogeneous bitrade xmath5 has xmath7 entries in each row of xmath10 xmath7 entries in each column of xmath10 and each symbol appears xmath7 times in xmath10 by symmetry the same holds for xmath34 a set xmath35 is a transversal if xmath36 intersects each row of xmath10 in precisely one entry each column in precisely one entry and if the number of symbols appearing in xmath36 is equal to xmath37 the latter condition can be written as xmath38 a bitrade xmath5 is primary if whenever xmath39 is a bitrade such that xmath40 and xmath41 then xmath42 bijections xmath43 for xmath44 xmath45 xmath3 give an isotopic bitrade and permuting each xmath18 gives an autotopism
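the defining properties above can be checked mechanically for small examples the python sketch below tests the informal characterization stated at the start of this section disjointness equality of filled cells and equality of row and column contents together with the partial latin square property the triples and labels form a generic intercalate chosen only for illustration and are not necessarily the labelling used in the examples later in this section

from collections import defaultdict

def is_partial_latin_square(P):
    # each cell holds at most one symbol and each symbol appears
    # at most once in each row and in each column
    return (len({(r, c) for r, c, s in P}) == len(P)
            and len({(r, s) for r, c, s in P}) == len(P)
            and len({(c, s) for r, c, s in P}) == len(P))

def is_bitrade(T, T_star):
    # informal characterization: disjoint, same filled cells,
    # and matching row contents and column contents
    if T & T_star:
        return False
    cells = lambda P: {(r, c) for r, c, s in P}
    if cells(T) != cells(T_star):
        return False
    def contents(index, P):
        d = defaultdict(set)
        for triple in P:
            d[triple[index]].add(triple[2])
        return dict(d)
    return (contents(0, T) == contents(0, T_star)
            and contents(1, T) == contents(1, T_star))

# a generic intercalate on rows {0, 1}, columns {0, 1}, symbols {0, 1}
T      = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
T_star = {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}
assert is_partial_latin_square(T) and is_partial_latin_square(T_star)
assert is_bitrade(T, T_star)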
in xcite drapal gave a representation of bitrades in terms of three permutations xmath46 acting on a finite set for xmath47 define the map xmath48 where xmath49 if and only if xmath50 and xmath51 for xmath52 by definition defnbitradea123 each xmath53 is a bijection then xmath54 are defined by xmath55 we refer to xmath56 as the xmath46 representation we write xmath57 for the set of points that the finite permutation xmath58 acts on defnt1234 let xmath59 xmath60 xmath61 be finite permutations and let xmath62 define four properties 1 xmath63 2 if xmath64 is a cycle of xmath46 and xmath65 is a cycle of xmath66 then xmath67 for any xmath68 3 each xmath46 is fixed point free 4 the group xmath69 is transitive on xmath70 by letting xmath18 be the set of cycles of xmath46 drapal obtained the following theorem which relates definition defnbitradea123 and defnt1234 theoremdrapaltaustructure a bitrade xmath5 is equivalent up to isotopism to three permutations xmath59 xmath60 xmath61 acting on a set xmath70 satisfying t1 t2 and t3 if t4 is also satisfied then the bitrade is primary to construct the xmath46 representation for a bitrade we simply evaluate equation in the reverse direction we have the following construction constructiontautobitrade let xmath59 xmath60 xmath61 be permutations satisfying conditions t1 t2 and t3 let xmath62 define xmath71 for xmath44 xmath45 xmath3 now define two arrays xmath10 xmath34 xmath72 by theorem theoremdrapaltaustructure xmath5 is a bitrade exampleintercalaterep the smallest bitrade xmath5 is the intercalate which has four entries the bitrade is shown below xmath73 the xmath46 representation is xmath74 xmath75 xmath76 where we have written xmath77 for xmath78 to make the presentation of the xmath46 permutations clearer by construction constructiontautobitrade with xmath79 we can convert the xmath46 representation to a bitrade xmath80 xmath81 in this way we see that row xmath82 of xmath10 corresponds to row xmath83 of xmath84 which is the cycle xmath85 of xmath59 and so on for the columns and symbols ex3hom the following xmath3homogeneous bitrade is pertinent to the proof of the main result of this paper xmath86 writing xmath87 for xmath88 the xmath46 representation is xmath89 the bitrade has four rows so xmath59 has four cycles similarly xmath60 and xmath61 each have four cycles in general a bitrade can have a different number of row column and symbol cycles using construction constructiontautobitrade the cell at row xmath90 column xmath91 will contain the symbol xmath92 since these cycles intersect in xmath93 before showing how a bitrade can be represented as a graph embedded in a surface we briefly review the theory of hypermaps a combinatorial hypermap xmath94 is made up of three permutations xmath95 xmath96 xmath97 and a finite set xmath70 such that xmath98 and xmath99 acts transitively on xmath70 the following construction takes a combinatorial hypermap to a hypermap which is a bipartite graph embedded in a surface for a proof of correctness see chapter 1 of xcite and references therein and for further examples see chapters 1 and 2 of xcite the representation of hypermaps as bipartite graphs was given by walsh xcite constrpairtobipartitegraph let xmath94 be a combinatorial hypermap on the finite set xmath70 create vertex sets xmath100 xmath101 and undirected edges xmath102 xmath103 colour the vertices of xmath100 black denoted xmath104 and those of xmath101 white denoted xmath105 when drawing the graph
we usually label an edge xmath106 with xmath107 to save space suppose that xmath108 is a cycle of xmath95 and let xmath109 be the associated black vertex with adjacent edges xmath110 for xmath111 then order the edges adjacent to xmath109 as xmath112 xmath113 xmath114 in the anticlockwise direction apply the same process to each xmath115 this defines a rotation scheme for the vertices of the bipartite graph and hence an embedding in a surface exbipartitegraph let xmath116 and define xmath117 and xmath118 then there are two black vertices two white vertices and four edges xmath119 xmath120 and xmath121 the graph embedding with anticlockwise orientation is shown in figure figbipartiteembedding given a bipartite graph embedding we often move to the canonical triangulation as described in the following construction constrfundamentaltriangulation let xmath122 be a hypermap place a new vertex xmath123 in each face of the hypermap connect this new vertex to each vertex that lies on the border of the face using dotted edges to xmath104 vertices and dashed edges to xmath105 vertices the surface is now subdivided into triangles each triangle has three types of vertices xmath104 xmath105 and xmath123 each triangle has three types of sides a solid dashed or dotted line from the inside of a triangle we view its vertices according to the order xmath104 xmath105 xmath123 xmath104 and if we turn in the anticlockwise direction then the triangle is positive otherwise it is negative we shade the positive triangles (figure figcanonical caption fragment vertices are labelled from the set xmath124 to aid in identification with the intercalate bitrade and the action of xmath95 is shown on a particular shaded triangle) since each shaded triangle is adjacent to precisely one solid edge in the canonical triangulation we can identify the action of xmath95 as the rotation of shaded triangles around their black vertex in an anticlockwise direction as shown in figure figcanonical also see xcite in general the action of xmath96 and xmath97 corresponds to rotations around white and star vertices as indicated in figure figfundtriangulation (figure figfundtriangulation caption fragment ... xmath96 and xmath97 on certain shaded triangles is shown) the canonical triangulation of the bipartite graph embedding of example exbipartitegraph is shown in figure figcanonical writing xmath77 for the shaded triangle with vertex labels xmath8 xmath9 xmath7 on black white and star vertices respectively we see that xmath125 xmath126 and xmath127 as expected the action of xmath95 xmath96 and xmath97 in figure figcanonical is exactly the same as xmath59 xmath60 and xmath61 of example exampleintercalaterep applying euler s formula leads to the genus formula for hypermaps xmath128 where xmath129 denotes the number of cycles of the permutation xmath58 lemmatorus a xmath3homogeneous bitrade xmath5 defines a tessellation of shaded and unshaded triangles in the euclidean plane each shaded triangle is edge wise adjacent only to unshaded triangles and vice versa shaded and unshaded triangles correspond to the entries of xmath10 and xmath34 respectively black white and star vertices correspond to row column and symbol labels of xmath10 let xmath5 be a xmath3homogeneous bitrade and let xmath56 be the xmath46 representation by conditions t1 and t4 we see that xmath59 xmath60 and xmath61 satisfy the properties to be a combinatorial hypermap let xmath94 be the combinatorial hypermap given by xmath59 xmath60 and xmath61 and construct the associated hypermap using construction constrpairtobipartitegraph apply construction constrfundamentaltriangulation so that the hypermap consists of shaded
and unshaded triangles since xmath130 and xmath131 it follows that xmath132 so the underlying surface is the torus the fundamental group of the torus is xmath133 so the covering surface is the euclidean plane by construction constrfundamentaltriangulation each shaded triangle is adjacent edge wise to precisely one unshaded triangle and vice versa the permutation xmath95 acts on shaded triangles while xmath59 acts on elements of xmath10 by equation we set xmath134 so shaded triangles correspond to elements of xmath10 and unshaded triangles correspond to elements of xmath34 black vertices correspond to cycles of xmath59 which in turn correspond to row labels of xmath10 and similar for white and star vertices figure fig3homtessellation shows the tessellation for the xmath3homogeneous bitrade of example ex3hom identifying opposite sides of the parallelogram marked by thick grey lines gives the torus with regards to theorem theorem3homtransversals we can partition xmath10 into three transversals xmath135 where xmath136 these transversals may be located geometrically in figure fig3homtessellation xmath137 is made up of shaded triangles located directly above a xmath104 vertex xmath138 is made up of shaded triangles located directly to the lower left of a xmath104 vertex and xmath139 is made up of shaded triangles located directly to the lower right of a xmath104 vertex identifying opposite sides of the solid grey parallelogram gives a torus fig3homtessellation in this section we provide the geometric proof of theorem theorem3homtransversals let xmath5 be a xmath3homogeneous bitrade apply lemma lemmatorus to obtain the labelled tessellation of the euclidean plane for xmath5 without loss of generality let the triangles have unit length sides let xmath140 be an unshaded triangle in the tessellation define three actions xmath64 on xmath140 xmath141 rotates xmath140 by angle xmath142 anticlockwise around its xmath104 vertex xmath143 rotates xmath140 by angle xmath142 anticlockwise around its xmath105 vertex xmath144 rotates xmath140 by angle xmath142 anticlockwise around its xmath123 vertex the plane is tessellated by hexagons similar to those in figure fig3homtessellation since xmath5 is xmath3homogeneous it follows that there are three shaded triangles at each vertex so xmath145 for xmath146 and xmath147 these xmath64 induce a triangle group xmath148 which acts on the set of equilateral triangles xmath149 of the tessellation xmath150 if xmath151 is the xmath46 representation for the bitrade in question then we define the cartographic group xmath152 by xmath153 note that xmath148 is an infinite group acting on the tessellation of the euclidean plane while xmath152 is a finite permutation group acting on the corresponding triangles on the identified surface the torus the group xmath152 has all of the defining relations for xmath148 so it is natural to define a group homomorphism xmath154 that sends xmath64 to xmath46 and the empty word xmath155 to the identity xmath156 we then extend xmath154 to an arbitrary word by xmath157 where xmath158 for xmath159 to relate the group actions xmath160 and xmath161 we form a map xmath162 fix a shaded triangle xmath163 and an entry xmath164 and set xmath165 then use xmath154 to extend xmath166 to any xmath167 by defining xmath168 where xmath169 for some xmath170 let xmath166 be defined as above for some fixed xmath171 xmath172 first we check that xmath166 is a well defined map namely that the choice of xmath173 for xmath169 does not matter suppose that xmath174 
then xmath175 so xmath176 where xmath177 we ca nt assume that xmath178 is the identity in xmath152 only that it fixes xmath172 then xmath179 so xmath180 takes the same value whether xmath181 or xmath182 was chosen hence xmath166 is well defined the tessellation lies on the euclidean plane and we are free to place the xmath107 xmath191 axes as we wish we will choose one of three placements that shown in figure fig3homxyaxes or the rotation of those axes by angle xmath142 or xmath192 suppose for contradiction that there exists a triple xmath197 such that xmath198 since xmath59 has no fixed pointcondition t3in the definition of a bitrade there must be a xmath3cycle xmath199 in xmath59 for some xmath200 xmath201 if xmath202 for some shaded triangle xmath203 then it must be that xmath204 and xmath205 recalling that xmath141 is rotation about a xmath104 vertex in the anticlockwise direction the labelled tessellation must have hexagons like those shown in figure fig3homconsistency the hexagon xmath206 has xmath207 while the hexagon xmath7 has xmath208 the order of xmath25 xmath200 xmath140 is forced by xmath166 let each side of a triangle in the tessellation have unit length without loss of generality we can place the xmath107 xmath191 axes on the tessellation as in figure fig3homxyaxes so that the xmath104 vertex of xmath206 is at xmath209 and the xmath104 vertex of xmath7 is at xmath210 then we have a euclidean distance xmath211 which we assume to be minimal we then show that there exists another pair of inconsistently labelled hexagons xmath212 and xmath213 such that xmath214 except for a few cases in which contradictions arise with respect to the bitrade itself in the limiting casewe get xmath215 which implies that xmath59 has a fixed point xmath25 there are four main cases to check each with three subcases a b and c each of the a and b cases cover an infinite part of the plane so we use various constructions to find xmath212 xmath213 such that xmath214 the c cases are finite and provide the required local contradictions suppose that xmath218 and consider the action of xmath219 on the triangles labelled xmath140 as shown in figure figcase11 recall that xmath141 is rotation to the next shaded triangle in an anticlockwise direction around a xmath104 vertex and xmath143 is rotation around a xmath105 vertex we find xmath220 and factor out the xmath221 term xmath222 now xmath223 since xmath217 so xmath214 the last step is to observe that there must be a cycle xmath224 in xmath59 as shown above then xmath225 so xmath212 and xmath213 are a closer pair of inconsistent hexagons this completes case 1a suppose that xmath228 and consider the action of xmath229 on the triangles labelled xmath25 as in figure figcase12 we calculate the distance between xmath212 and xmath213 xmath230 we need xmath231 which simplifies to xmath232 by assumption xmath233 and this completes case 1b here we deal with local cases which give rise to contradictions note that some xmath235 do not correspond to valid hexagon positions eg there is no hexagon centred at xmath236 the valid cases are as follows i xmath237 xmath238 in this case xmath206 and xmath7 are the same hexagon implying that xmath239 so xmath59 has a fixed pointwhich contradicts t3in the definition of a bitrade ii xmath237 xmath240 in this case xmath61 has a fixed pointxmath25 contradicting t3see figure figcase13 iii xmath241 xmath242 here we find a fixed pointof xmath60 contradicting t3see figure figcase14 iv xmath241 xmath243 if xmath228 then the action of 
xmath229 on the triangles labelled xmath25 shows that xmath59 has a fixed pointas shown in figure figcase15 cases 2 3 and 4 are very similar for each sub case of type a and b we state the xmath244 word along with the corresponding action as xmath245 each sub case of type c gives an immediate contradiction in the form of a fixed point for some xmath59 xmath60 or xmath61 suppose that there exists xmath197 such that xmath265 since xmath59 is fixed point freeit must contain a xmath3cycle xmath199 for some xmath200 xmath201 then the inconsistent hexagons are as shown in figure fig3homhexagon2 we see that xmath266 so by lemma 3hompartition12 we have a contradiction the case where xmath267 is similar by corollary 3hompartition123 the xmath195 sets are mutually disjoint by assumption the group xmath269 acts transitively on xmath10 the group homomorphism xmath154 given by xmath270 is actually a group epimorphism with lemma 3homcommutes it follows that each xmath197 will be an element of some xmath195 set hence xmath271 is a partition of xmath10 in lemma 3hompartition12 and corollary corpartition we constructed the partition by dividing up elements of xmath10 around a xmath104 vertex so it is impossible for any xmath195 to have more than one element from a row of xmath10 in particular this would imply a fixed point of xmath59 conversely suppose that a cycle xmath199 exists in xmath60 and is labelled as shown in figure fig3homrowcol now xmath272 xmath273 xmath274 according to the labelling induced around xmath104 vertices if another hexagon centred at a xmath105 vertex was inconsistently labelled then we would have an inconsistent labelling around xmath104 vertices contradicting corollary corpartition similarly labellings around xmath123 vertices are consistent corollary corpartition and lemma 3homtransversal give the main result theorem theorem3homtransversals we note that in general the covering surface will be spherical euclidean or hyperbolic most large bitrades will be hyperbolic and we expect that future work will derive combinatorial properties of hyperbolic bitrades from their geometrical representation
A latin bitrade xmath0 is a pair of partial latin squares that defines the difference between two arbitrary latin squares xmath1 and xmath2 of the same order. A xmath3homogeneous bitrade xmath0 has three entries in each row, three entries in each column, and each symbol appears three times in xmath4. Cavenagh xcite showed that any xmath3homogeneous bitrade may be partitioned into three transversals. In this paper we provide an independent proof of Cavenagh's result using geometric methods. In doing so we provide a framework for studying bitrades as tessellations in spherical, euclidean or hyperbolic space. Additionally, we show how latin bitrades are related to finite representations of certain triangle groups.
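As a companion to the last sentence of the abstract, the sketch below checks, for permutations supplied as dicts, the two relations that make the representation of a xmath3homogeneous bitrade a finite quotient of the (3,3,3) triangle group: each generator cubes to the identity and the product of the three generators is the identity. This is an illustrative check under our own naming and composition conventions, not code from the paper; if the product check fails for a given source, the opposite composition order may be the one intended.

```python
def compose(p, q):
    """x -> p(q(x)) for permutations stored as dicts on the same support."""
    return {x: p[q[x]] for x in q}

def is_identity(p):
    return all(p[x] == x for x in p)

def power(p, k):
    out = {x: x for x in p}
    for _ in range(k):
        out = compose(p, out)
    return out

def is_333_quotient(tau1, tau2, tau3):
    """For a 3-homogeneous bitrade each tau_i should cube to the identity
    (three entries per row, column and symbol) and the three together
    should multiply to the identity, so the cartographic group is a
    finite quotient of the triangle group Delta(3, 3, 3).  Both
    composition orders are tried because conventions differ."""
    cubes_ok = all(is_identity(power(t, 3)) for t in (tau1, tau2, tau3))
    product_ok = (is_identity(compose(tau1, compose(tau2, tau3)))
                  or is_identity(compose(tau3, compose(tau2, tau1))))
    return cubes_ok and product_ok
```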
introduction; latin bitrades; bitrades as graphs on surfaces; the geometric proof
hamilton equations of motion constitute a system of ordinary first order differential equations xmath3 where the xmath4 denotes differentiation with respect to time xmath5 and xmath6 they can be viewed as the characteristic equations of the partial differential equation xmath7 with xmath8 the first order differential operator xmath9 generating a flow on phase space if xmath10 does not depend explicitly on xmath5 a formal solution of hamiltonianflow is xmath11 in most cases this expression remain just formal but one may often split the hamiltonian into two parts xmath12 with a corresponding splitting xmath13 such that the flows generated by xmath14 and xmath15 separately are integrable one may then use the cambell baker hausdorff formula to approximate the flow generated by xmath8 one obtains the strang splitting formula xcite xmath16rightcdots which shows that time stepping this expression with a timestep xmath1 provides an approximation with relative accuracy of order xmath0 exactly preserving the symplectic property of the flow this corresponds to the symplectic splitting scheme of iterating the process of solving xmath17 here the last part of one iteration may be combined with the first part of the next unless one deals with time dependent systems or wants to register the state of the system at the intermediate times from a practical point of view the most interesting property of this formulationis that it can be interpreted directly in terms of physical processes for instance for hamiltonians xmath18 a standard splitting scheme is to choose xmath19 and xmath20 in that case symplecticsplitting corresponds to a collection of freely streaming particles receiving kicks at regular time intervals xmath1 these kicks being dependent of the positions xmath21 of the particles ie we may think of the evolution as a collection of kicks and moves xcite it is not clear that this is the best way to approximate or model the exact dynamics of the real system for instance why should the motion between kicks be the free streaming generated by xmath22 there are more ways to split the hamiltonian into two integrable parts xcite the best splitting is most likely the one which best mimics the physics of equation hamiltonequation further since this equation is not solved exactly by symplecticsplitting for any finite value of xmath1 we need not necessarily choose xmath23 to be exactly xmath24 as long as it approaches this quantity sufficiently fast as xmath25 we will exploit this observation to improve the accuracy of the splitting scheme symplecticsplitting in a systematic manner we are of course not the first trying to improve on the strmer verlet splitting scheme an accessible review of several earlier approaches can be found in reference xcite neri xcite has provided the general idea to construct symplectic integrators for hamiltonian systems forest and ruth xcite discussed the explicit fouth order method for the integration of hamiltonian equations for the simplest non trivial case yoshida xcite worked out a symplectic integrator for any even order and suzuki xcite presented the idea of how recursive construction of successive approximants may be extended to other methods for a simple illustration of our idea consider the hamiltonian xmath26 whose exact evolution over a time interval xmath1 is xmath27 ptexte endpmatrix beginpmatrixr cos tau sin tau sin tau costau endpmatrix beginpmatrix qpendpmatrix labelh0a compare this with a kick move kick splitting scheme over the same time interval with xmath28 and 
xmath29 where xmath30 and xmath31 may depend on xmath1 one full iteration gives xmath32 1frac12m ktau2 m tau 04ex 1frac14 kmtau 2 ktau 1frac12 k mtau2 endpmatrix beginpmatrixq p endpmatrix we note that by choosing xmath33 labelharmonicoscillatorcorrection1ex k frac2tau tan fractau2 1frac112tau2frac1120tau4 frac1720160tau6 cdotsnonumberendaligned the exact evolution is reproduced if we instead choose a move kick move splitting scheme with xmath34 and xmath35 one iteration gives xmath36 ptextsendpmatrix beginpmatrixc 1frac12barm barktau2 1frac14barmbarkbarm tau 05ex barktau 1frac12 barkbarmtau2 endpmatrix beginpmatrixq p endpmatrix which becomes exact if we choose xmath37 it should be clear that this idea works for systems of harmonic oscillators in general ie for quadratic hamiltonians of the form xmath38 where xmath39 and xmath40 are symmetric matrices for a choosen splitting scheme and step interval xmath1there are always modified matrices xmath41 and xmath42 which reproduces the exact time evolution for systems where xmath39 and xmath40 are too large for exact diagonalization but sparse a systematic expansion of xmath43 and xmath44 in powers of xmath0 could be an efficient way to improve the standard splitting schemes for a more general treatment we consider hamiltonians of the form xmath45 a series solution of the hamilton equations in powers of xmath1 is xmath46 here we have introduced notation to shorten expressions xmath47 15ex d equiv pa partialaquad bard equiv partiala vpartialanonumberendaligned where we employ the einstein summation convention an index which occur twice once in lower position and once in upper position are implicitly summed over all available values ie xmath48 we will generally use the matrix xmath39 to rise an index from lower to upper position the corresponding result for the kick move kick splitting scheme is xmath49 as expected it differs from the exact result in the third order but the difference can be corrected by introducing second order generators xmath50 to be used in respectively the move and kick steps specialized to a one dimensional system with potential xmath51 this agrees with equation harmonicoscillatorcorrection with this correctionthe kick move kick splitting scheme agrees with the exact solution to xmath52 order in xmath1 but differ in the xmath53terms we may correct the difference by introducing fourth order generators xmath54 1ex v4 frac1480 bard2 vtau4nonumberendaligned specialized to a one dimensional system with potential xmath51 this agrees with equation harmonicoscillatorcorrection with this correctionthe kick move kick splitting scheme agrees with the exact solution to xmath55 order in xmath1 but differ in the xmath56terms we may correct the difference by introducing sixth order generators xmath57 1ex v6 frac1161280left 17 bard3 10bard3rightvtau6nonumberendaligned where we have introduced xmath58 specialized to a one dimensional system with potential xmath51 this agrees with equation harmonicoscillatorcorrection with this correctionthe kick move kick splitting scheme agrees with the exact solution to xmath59 order in xmath1 but differ in the xmath60terms one may continue the correction process but this is probably well beyond the limit of practical use already addition of extra potential terms xmath61 is in principle unproblematic for solution of the kick steps the equations xmath62 can still be integrated exactly preserving the symplectic structure the situation is different for the kinectic term xmath63 since it now leads to 
equations xmath64 which is no longer straightforward to integrate exactly although the problematic terms are small one should make sure that the move steps preserve the symplectic structure exactly let xmath65 denote the positions and momenta just before the move step and xmath66 the positions and momenta just after we construct a generating function xcite xmath67 with xmath68 this preserves the symplectic structure we just have to construct xmath69 to represent the move step sufficiently accurately consider first the case without the correction terms the choice xmath70 gives xmath71 which is the correct relation now add the xmath72term to the move step to order xmath73 the exact solution of equation movesteps becomes xmath74 labelexactmove 15ex pa pa frac112 partiala d2 vtau3 frac124partiala d3 vtau4nonumberendaligned compare this with the result of changing xmath75 where xmath76 the solution of equation canonicaltransformation change from the relations simplekick to xmath77 since xmath78 is linear in xmath79 equation pequation constitute a system of third order algebraic equation which in general must be solved numerically this should usually be a fast process for small xmath1 an exact solution of this equation is required to preserve the symplectic structure but this solution should also agree with the exact solution of movesteps to order xmath73 this may be verified by perturbation expansion in xmath1 a perturbative solution of equation pequation is xmath80 which inserted into qequation reproduces the full solution exactmove to order xmath73 this process can be systematically continued to higher orders we write the transformation function as xmath81 and find the first terms in the expansion to be xmath82it remains to demonstrate that our algorithms can be applied to real examples we have considered the hamiltonian xmath83 with initial condition xmath84 xmath85 the exact motion is a nonlinear oscillation with xmath10 constant equal to xmath86 and period xmath87 here xmath88 is the beta function in figure energypreservation we plot the behaviour of xmath89 during the last half of the xmath90 oscillation for various values of xmath1 and corrected generators up to order xmath91 corresponding to xmath92 we have shown that it is possible to systematically improve the accuracy of the usual symplectic integration schemes for a rather general class of hamilton equations the process is quite simple for linear equations where it may be useful for sparse systems for general systemsthe method requires the solution of a set of nonlinear algebraic equations at each move step to which extent an higher order method is advantageous or not will depend on the system under analysis and the wanted accuracy as always with higher order methods the increased accuracy per stepmay be countered by the higher computational cost per step xcite d cohen t jahnke k lorenz and c lubich numerical integrator for highly oscillatory hamiltonian systems a review in analysis modelling and simulations of multiscale problems springer verlag 2006 553576
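To illustrate the harmonic-oscillator correction discussed above, the following sketch implements one kick-move-kick step for the unit oscillator with the modified kick strength (2/τ)tan(τ/2) quoted in the text; the accompanying move coefficient sin(τ)/τ is our own reading of the displayed one-step matrix rather than a value quoted verbatim, so treat it as an assumption. With these two choices a single step should reproduce the exact phase-space rotation to machine precision.

```python
import math

def corrected_kmk_step(q, p, tau):
    """One kick-move-kick step for H = (p^2 + q^2)/2 with modified
    generators: kick strength k = (2/tau) tan(tau/2) (from the text) and
    move coefficient m = sin(tau)/tau (our inferred value).  Together
    they make the single-step map equal to the exact rotation
    (q, p) -> (q cos tau + p sin tau, -q sin tau + p cos tau)."""
    k = (2.0 / tau) * math.tan(tau / 2.0)
    m = math.sin(tau) / tau
    p = p - 0.5 * k * tau * q      # half kick with modified strength
    q = q + m * tau * p            # move with modified coefficient
    p = p - 0.5 * k * tau * q      # second half kick
    return q, p

if __name__ == "__main__":
    tau, q, p = 0.7, 1.3, -0.4
    q1, p1 = corrected_kmk_step(q, p, tau)
    q_ex = q * math.cos(tau) + p * math.sin(tau)
    p_ex = -q * math.sin(tau) + p * math.cos(tau)
    print(abs(q1 - q_ex), abs(p1 - p_ex))   # both at the rounding level
```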
We show how the standard Störmer-Verlet splitting method for the differential equations of Hamiltonian mechanics, with accuracy of order xmath0 for a timestep of length xmath1, can be improved in a systematic manner without using the composition method. We give the explicit expressions which increase the accuracy to order xmath2, and demonstrate that the method works on a simple anharmonic oscillator. Keywords: splitting method; Hamilton equations; higher order accuracy; symplecticity.
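For comparison with the corrected scheme sketched above, here is the plain, uncorrected kick-move-kick step that the abstract starts from, applied to an anharmonic oscillator. The potential V(q) = q^4/4, the initial data and the step sizes are placeholders of our own choosing, not the paper's test case; the point is only that the energy error at a fixed final time shrinks roughly quadratically as the step τ is reduced.

```python
def kick_move_kick(q, p, dVdq, tau, nsteps, m=1.0):
    """Plain Stormer-Verlet (kick-move-kick) stepping for H = p^2/(2m) + V(q).
    Each step is symplectic and second-order accurate in tau."""
    for _ in range(nsteps):
        p -= 0.5 * tau * dVdq(q)   # half kick
        q += tau * p / m           # full move (free streaming)
        p -= 0.5 * tau * dVdq(q)   # half kick
    return q, p

if __name__ == "__main__":
    # Hypothetical test problem: anharmonic oscillator V(q) = q^4 / 4.
    dVdq = lambda q: q ** 3
    energy = lambda q, p: 0.5 * p ** 2 + 0.25 * q ** 4
    q0, p0, t_final = 1.0, 0.0, 10.0
    e0 = energy(q0, p0)
    for tau in (0.1, 0.05, 0.025):
        q, p = kick_move_kick(q0, p0, dVdq, tau, int(round(t_final / tau)))
        # the energy error should drop by roughly a factor of 4 per halving of tau
        print(tau, abs(energy(q, p) - e0))
```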
introduction; harmonic oscillators; nonlinear systems; solving the _move_ steps; explicit computations; conclusion
star and planet formation are connected through disks disk formation long thought to be a trivial consequence of angular momentum conservation during core collapse and star formation eg bodenheimer1995 turned out to be much more complicated than originally envisioned the complication comes from magnetic fields which are observed in dense star forming cores of molecular clouds see crutcher2012 for a recent review the field can strongly affect the angular momentum evolution of core collapse and disk formation through magnetic braking there have been a number of studies aiming at quantifying the effects of magnetic field on disk formation in the ideal mhd limit both analytic considerations and numerical simulations have shown that the formation of a rotationally supported disk rsd hereafter is suppressed by a realistic magnetic field corresponding to a dimensionless mass to flux ratio of xmath4 a few trolandcrutcher2008 during the protostellar mass accretion phase in the simplest case of a non turbulent core with the magnetic field aligned with the rotation axis allen2003 galli2006 pricebate2007 mellonli2008 hennebellefromang2008 dappbasu2010 seifried2011 santos lima2012 the suppression of rsds by excessive magnetic braking is termed magnetic braking catastrophe in star formation rotationally supported disks are routinely observed however around evolved class ii young stellar objects see williamscieza2011 for a review and increasingly around class i eg jorgensen2009 lee2011 takakuwa2012 and even one class 0 source tobin2012 when and how such disks form in view of the magnetic braking catastrophe is unclear the current attempts to overcome the catastrophic braking fall into three categories 1 non ideal mhd effects including ambipolar diffusion ohmic dissipation and hall effect 2 misalignment between magnetic and rotation axes and 3 turbulence ambipolar diffusion does not appear to weaken the braking enough to enable large scale rsd formation under realistic conditions krasnopolskykonigl2002 mellonli2009 duffinpudritz2009 li2011 ohmic dissipation can produce small au scale rsd in the early protostellar accretion phase machida2010 dappbasu2010 dapp2012 tomida2013 larger xmath5scale rsds can be produced if the resistivity or the hall coefficient of the dense core is much larger than the classical microscopic value krasnopolsky2010krasnopolsky2011 see also braidingwardle2012a braidingwardle2012b xcite explored the effects of tilting the magnetic field away from the rotation axis on disk formation see also machida2006 pricebate2007 hennebelleciardi2009 they concluded that keplerian disks can form for a mass to flux ratio xmath6 as low as xmath7 as long as the tilt angle is close to xmath2 see their fig the effects of turbulence were explored by xcite who concluded that a strong enough turbulence can induce enough magnetic diffusion to enable the formation of a xmath5scale rsd xcite andxcite considered supersonically turbulent massive cores they found rotationally dominated disks around low mass stars although in both cases the turbulence induced rotation is misaligned with the initial magnetic field by a large angle which may have contributed to the disk formation see also joos2013 the goal of this paper is to revisit the role of magnetic field rotation misalignment in disk formation the misalignment is expected if the angular momenta of dense cores are generated through turbulent motions eg burkertbodenheimer2000 myers2012 it is also inferred from the misalignment between the field direction 
traced by polarized dust emission and the outflow axis which is taken as a proxy for the direction of rotation hull2012 indeed in the carma sample of xcite the distribution of the angle xmath8 between the magnetic field and jet rotation axis is consistent with being random if true it would indicate that in half of the sources the two axes are misaligned by a large angle of xmath9 see however chapman2013 and discussion in disk such a large misalignment would be enough to allow disk formation in dense cores magnetized to a realistic level with xmath6 of a few trolandcrutcher2008 according to xcite if the alignment angle xmath8 is indeed random and xcite s conclusions are generally true the magnetic braking catastrophe would be largely solved given their far reaching implications it is prudent to check xcite s conclusions using a different numerical code it is the task of this paper we carry out numerical experiments of disk formation in dense cores with misaligned magnetic and rotation axes using non ideal mhd code zeus tw that includes self gravity we find that a large misalignment angle does indeed enable the formation of rsds in weakly magnetized dense cores with dimensionless mass to flux ratios xmath1 but not in dense cores magnetized to higher more typical levels our conclusion is that while the misalignment helps with disk formation especially in relatively weakly magnetized cores it may not provide a complete resolution to the magnetic braking catastrophe by itself the rest of the paper is organized as follows in setup we describe the model setup the numerical results are described in misalignment and strong we compare our results to those of xcite and discuss their implications in discussion and conclude with a short summary in summary we follow xcite and xcite and start our simulations from a uniform spherical core of xmath10 and radius xmath11 in a spherical coordinate system xmath12 the initial density xmath13 corresponds to a molecular hydrogen number density of xmath14 we adopt an isothermal equation of state with a sound speed xmath15 below a critical density xmath16 and a polytropic equation of state xmath17 above it at the beginning of the simulation we impose a solid body rotation of angular speed xmath18 on the core with axis along the north pole xmath19 it corresponds to a ratio of rotational to gravitational binding energy of 0025 which is typical of the values inferred for nhxmath20 cores goodman1993 the initial magnetic field is uniform tilting away from the rotation axis by an angle xmath8 we consider three values for the initial field xmath21 xmath22 and xmath23 corresponding to dimensionless mass to flux ratio in units of xmath24 xmath25 xmath26 and xmath27 respectively for the core as a whole the mass to flux ratio for the central flux tube xmath28 is higher than the global value xmath6 by xmath29 so that xmath30 xmath31 and xmath32 for the three cases respectively the effective mass to flux ratio xmath33 should lie between these two limits if the star formation efficiency per core is xmath34 eg alves2007 then one way to estimate xmath33 is to consider the cylindrical magnetic flux surface that encloses xmath35 of the core mass which yields xmath36 corresponding to xmath37 xmath38 and xmath39 for the three cases respectively the fraction xmath35 is also not far from the typical fraction of core mass that has accreted onto the central object at the end of our simulations see table 1 for the tilt angle we also consider three values xmath40 xmath41 and xmath2 the 
xmath40 corresponds to the aligned case with the magnetic field and rotation axis both along the xmath42axis xmath19 the xmath43 corresponds to the orthogonal case with the magnetic field along the xmath44axis xmath45 xmath46 models with these nine combinations of parameters are listed in table 1 additional models are discussed below lllllll a 972 137 0xmath47 1 024 no b 486 685 0xmath47 1 022 no c 292 412 0xmath47 1 033 no d 972 137 45xmath47 1 021 yes porous e 486 685 45xmath47 1 035 no f 292 412 45xmath47 1 027 no g 972 137 90xmath47 1 038 yes robust h 486 685 90xmath47 1 046 yes porous i 292 412 90xmath47 1 047 no m 972 137 90xmath47 0 010 yes robust n 972 137 90xmath47 01 026 yes robust p 403 568 90xmath47 1 025 yes porous q 344 485 90xmath47 1 014 no as in xcite we choose a non uniform grid of xmath48 in the radial direction the inner and outer boundaries are located at xmath49 and xmath11 respectively the radial cell size is smallest near the inner boundary xmath50 or xmath51 it increases outward by a constant factor xmath52 between adjacent cells in the polar direction we choose a relatively large cell size xmath53 near the polar axes to prevent the azimuthal cell size from becoming prohibitively small it decreases smoothly to a minimum of xmath54 near the equator where rotationally supported disks may form the grid is uniform in the azimuthal direction the boundary conditions in the azimuthal direction are periodic in the radial direction we impose the standard outflow boundary conditions material leaving the inner radial boundary is collected as a point mass protostar at the center it acts on the matter in the computational domain through gravity on the polar axes the boundary condition is chosen to be reflective although this is not strictly valid we expect its effect to be limited to a small region near the axis we initially intended to carry out simulations in the ideal mhd limit so that they can be compared more directly with other work especially xcite however ideal mhd simulations tend to produce numerical hot zones that force the calculation to stop early in the protostellar mass accretion phase a tendency we noted in our previous 2d mellonli2008 and 3d simulations krasnopolsky2012 to lengthen the simulation we include a small spatially uniform resistivity xmath55 we have verified that in the particular case model g xmath56 and xmath43 this resistivity changes the flow structure little compared to either the ideal mhd model m before the latter stops or model n where the resistivity is reduced by a factor 10 to xmath57 to illustrate the effect of the misalignment between the magnetic field direction and rotation axis we first consider an extreme case where the magnetic field is rather weak with a mass to flux ratio xmath56 for the core as a whole and xmath58 for the inner xmath35 of the core mass in this case a well formed rotationally supported disk is present in the orthogonal case with xmath43 model g in table 1 such a disk is absent in the aligned case with xmath40 model a the contrast is illustrated in fig contrast where we plot snapshots of the aligned and orthogonal cases at a representative time xmath59 when a central mass of xmath60 and xmath61 respectively has formed the flow structures in the two cases are very different in both the equatorial panels a and b and meridian panels c and d plane in the equatorial plane the aligned case has a relatively large with radius xmath62 over dense region where material spirals rapidly inward on the smaller scale of xmath63 
the structure is dominated by expanding low density lobes they are the decoupling enabled magnetic structures dems for short that have been studied in detail by xcite and xcite no rotationally supported disk is evident the equatorial structure on the xmath64 scale in the orthogonal case is dominated by a pair of spirals instead the spirals merge on the xmath63 scale into a more or less continuous rapidly rotating structure a rotationally supported disk clearly the accretion flow in the orthogonal case was able to retain more angular momentum than in the aligned case why is this the case a clue comes from the meridian view of the two cases panels c and d of fig contrast in the aligned case there is a strong bipolar outflow extending beyond xmath65 at the relatively early time shown the outflow forces most of the infalling material to accrete through a flattened equatorial structure an over dense pseudodisk gallishu1993 see panel a for a face on view of the pseudodisk noting the difference in scale between panel c and a it is the winding of the magnetic field lines by the rotating material in the pseudodisk that drives the bipolar outflow in the first place the wound up field lines act back on the pseudodisk material braking its rotation it is the efficient magnetic braking in the pseudodisk that makes it difficult for rotationally supported disks to form in the aligned case the prominent bipolar outflow indicative of efficient magnetic braking is absent in the orthogonal case as was emphasized by xcite it is replaced by a much smaller shell like structure inside which the xmath63scale rotationally supported disk is encased panel d to understand this difference in flow structure pictorially we plot in fig 3d the three dimensional structure of the magnetic field lines on the scale of xmath66 or xmath67 which is xmath29 larger than the size of panels a and b of fig contrast but half of that of panels c and d clearly in the aligned case the relatively weak initial magnetic field corresponding to xmath68 has been wound up many turns by the material in the equatorial pseudodisk building up a magnetic pressure in the equatorial region that is released along the polar directions see the first panel of fig the magnetic pressure gradient drives a bipolar outflow which is evident in panel c and in many previous simulations of magnetized core collapse including the early ones such as xcite and xcite in contrast in the orthogonal case the equatorial region is no longer the region of the magnetically induced pseudodisk in the absence of rotation along the xmath42axis the dense core material would preferentially contract along the field lines that are initially along the xmath44axis to form a dense sheet in the xmath69xmath42 plane that passes through the origin the twisting of this sheet by rotation along the xmath42axis produces two curved curtains that spiral smoothly into the disk at small distances somewhat analogous in shape to two snail shells see the second panel of fig 3d the snail shaped dense curtain in the orthogonal case naturally explains the morphology of the density maps shown in panels b and d of fig contrast first the two prominent spiral arms in the panel a are simply the equatorial xmath44xmath69 cut of the curved curtains an interesting feature of the spirals and the snail shaped dense curtain as a whole is that they are the region where the magnetic field lines change directions sharply this is illustrated in fig pinch which is similar to panel b of fig contrast except that the 
magnetic vectors rather than velocity vectors are plotted on top of the density map clearly the spirals separate the field lines rotating counter clock wise lower right part of the figure from those rotating close wise upper left the sharp kink is analogous to the well known field line kink across the equatorial pseudodisk in the aligned case where the radial component of the magnetic field changes direction it supports our interpretation of the spirals and by extension the curtain as a magnetically induced feature as is the case of pseudodisk in other words the spirals are not produced by gravitational instability in a rotationally supported structure they are pseudospirals in the same sense as the pseudodisks of xcite the field line kinks are also evident across the dense curtain in the 3d structure shown in the second panel of fig 3d the 3d topology of the magnetic field and the dense structures that it induces lie at the heart of the difference in the magnetic braking efficiency between the aligned and orthogonal case in particular a flattened rotating equatorial pseudodisk threaded by an ordered magnetic field with an appreciable vertical component along the rotation axis is more conducive to driving an outflow than a warped curtain with a magnetic field predominantly tangential to its surface the outflow plays a key role in angular momentum removal and the suppression of rotationally supported disks as we demonstrate next to quantify the outflow effect we follow xcite xcite see also joos2012 and compare the rates of angular momentum change inside a finite volume xmath70 through its surface xmath71 due to infall and outflow to that due to magnetic torque the total magnetic torque relative to the origin from which a radius vector xmath72 is defined is xmath73dv where the integration is over the volume xmath70 typically the magnetic torque comes mainly from the magnetic tension rather than pressure force the dominant magnetic tension term can be simplified to a surface integral matsumototomisaka2004 xmath74 over the surface xmath71 of the volume this volume integrated magnetic torque is to be compared with the rate of angular momentum advected into the volume through fluid motion xmath75 which will be referred to as the advective torque below since the initial angular momentum of the dense core is along the xmath42axis we will be mainly concerned with the xmath42component of the magnetic and advective torque which for a spherical volume inside radius xmath76 are given by xmath77 and xmath78 the advective torque consists of two parts the rates of angular momentum advected into and out of the sphere by infall and outflow respectively xmath79 and xmath80 an example of the magnetic and advective torques is shown in fig torque the torques are evaluated on spherical surfaces of different radii at the representative time xmath59 for the aligned case the net torque close to the central object is nearly zero up to a radius of xmath81 indicating that the angular momentum advected inward is nearly completely removed by magnetic braking there at larger distances between xmath82 and xmath62 the net torque xmath83 is negative indicating that the angular momentum of the material inside a sphere of radius in this range decreases with time this is in sharp contrast with the orthogonal case where the net torque is positive in that radial range with the angular momentum there increasing rather than decreasing with time one may think that the difference is mainly due to a significantly larger magnetic 
torque xmath84 in the aligned case than in the orthogonal case although this is typically the case at early times the magnetic torques in the two cases become comparable at later times see the lowest solid lines in the two panels of fig torque a movie of the torques is available on request from the authors a bigger difference comes from the total or net angular momentum xmath85 advected inward which is substantially smaller in the aligned case than in the orthogonal case see the uppermost solid lines in the two panels the main reason for the difference is that a good fraction of the angular momentum advected inward by infall xmath86 is advected back out by outflow xmath87 in the former but not the latter this is helped by the fact that xmath86 is somewhat smaller in the aligned case to begin with compare the dotted lines in the two panels the lack of appreciable outward advection of angular momentum by outflow which is itself a product of field winding and magnetic braking in the aligned case appears to be the main reason for the orthogonal case to retain more angular momentum at small radii and form a rotationally supported disk in this particular case of relatively weak magnetic field the formation of a rotationally supported disk can be seen most clearly in fig rotation where we plot the infall and rotation speed as well as specific angular momentum as a function of radius along 4 xmath88 and xmath89 directions in the equatorial plane in the orthogonal case the infall and rotation speeds display the two tell tale signs of rotationally supported disks 1 a slow subsonic although nonzero infall speed much smaller than the free fall value and 2 a much faster rotation speed close to the keplerian value inside a radius of xmath90 the absence of a rotationally supported disk in the aligned case is just as obvious it has a rotation speed well below the keplerian value and an infall speed close to the free fall value especially at small radii up to xmath90 this corresponds to the region dominated by the low density strongly magnetized expanding lobes ie dems see panel a of fig contrast where the angular momentum is almost completely removed by a combination of magnetic torque and outflow see the third panel in fig rotation also evident from the panelis that the specific angular momentum of the equatorial inflow drops significantly twice near xmath91 and xmath62 respectively the former corresponds to the dems dominated region and the latter the pseudodisk see panel a of fig contrast the relatively slow infall inside the pseudodisk allows more time for magnetic braking to remove angular momentum it is the pseudodisk and its associated outflow working in tandem with the dems that suppresses the formation of a rotationally supported disk in the aligned case interestingly there is a bump near xmath62 for the specific angular momentum of the orthogonal case indicating that the angular momentum in the equatorial plane is transported radially outward along the spiraling field lines from small to large distances see fig pinch the spiraling equatorial field lines in the orthogonal model g have an interesting property they consist of two strains of opposite polarity as the strains get wound up more and more tightly by rotation at smaller and smaller radii field lines of opposite polarity are pressed closer and closer together creating a situation that is conducive to reconnection either of physical or numerical origin see the first panel of fig magnetic model g contains a small but finite resistivity 
xmath55 it does not appear to be responsible for the formation and survival of the keplerian disk because a similar disk is also formed at the same relatively early time for a smaller resistivity of xmath92 model n and even without any explicit resistivity model m numerical resistivity may have played a role here but it is difficult to quantify at the moment in any case the magnetic field on the keplerian disk appears to be rather weak as can be seen from the second panel of fig magnetic where the plasma xmath93 is plotted along 4 xmath88 and xmath89 directions in the equatorial plane on the keplerian disk in the orthogonal model g inside xmath90 xmath93 is of order xmath94 or more indicating that there is more matter accumulating in the disk than magnetic field either because the matter slides along the field lines into the disk increasing density but not the field strength or because of numerical reconnection that weakens the field or both this situation is drastically different from the aligned case where the inner xmath95 region is heavily influenced by the magnetically dominated low density lobes we have seen from the preceding section that in the weakly magnetized case of xmath56 the xmath5scale inner part of the protostellar accretion flow is dominated by two very different types of structures a weakly magnetized dense rotationally supported disk rsd in the orthogonal case xmath43 model g and magnetically dominated low density lobes or dems in the aligned case xmath40 model a at least at the relatively early time discussed in misalignment when the central mass reaches xmath96 this dichotomy persists to later times for these two models and for other models as well as illustrated by fig all where we plot models a i at a time when the central mass reaches xmath97 it is clear that the rsd for model g becomes even more prominent at the later time although a small magnetically dominated low density lobe is evident close to the center of the disk it is a trapped dems that is too weak to disrupt the disk in this case the identification of a robust rsd is secure at even later times up to the end of the simulation when the central mass reaches xmath98 or xmath99 of the initial core mass in the aligned case model a the inner accretion flow remains dominated by the highly dynamic dems at late times with no sign of rsd formation for the intermediate tilt angle case of xmath100 model d the inner structure of the protostellar accretion flow is shaped by the tussle between rsd and dems all shows that at the plotted time model d has several spiral arms that appear to merge into a rotating disk there are however at least three low density holes near the center of the disk they are the magnetically dominated dems movies show that the highly variable dems are generally confined close to the center although they can occasionally expand to occupy a large fraction of the disk surface overall the circumstellar structure in model d is more disk like than dems like we shall call it a porous disk to distinguish it from the more filled in more robust disk in the orthogonal model g even though most of the porous disk has a rotation speed dominating the infall speed the infall is highly variable and often supersonic the rotation speed also often deviates greatly from the keplerian value such an erratic disk is much more dynamic than the quiescent disks envisioned around relatively mature eg class ii ysos the intermediate tilt angle case drives a powerful bipolar outflow unlike the orthogonal case but similar to 
the aligned case this is consistent with the rate of angular momentum removal increasing with decreasing tilt angle ie from xmath2 to xmath41 see also ciardihennebelle2010 as the strength of the initial magnetic field in the core increases the dems becomes more dominant this is illustrated in the middle column of fig all where the three cases with an intermediate field strength corresponding to xmath101 and xmath102 are plotted in model h where the magnetic and rotation axes are orthogonal a relatively small with radius of xmath103 rotationally dominated disk is clearly present at the time shown as in the weaker field case of model g it is fed by prominent pseudo spirals which are part of a magnetically induced curtain in 3d see the second panel of fig 3d compared to model g the curtain here is curved to a lesser degree which is not surprising because the rotation is slower due to a more efficient braking and the stronger magnetic field embedded in the curtain is harder to bend the disk is also smaller less dense and more dynamic it is more affected by dems which occasionally disrupt the disk although it always reforms after disruption overall the circumstellar structure in model h is more rsd like than dems like as in model d we classify it as a porous disk as the tilt angle decreases from xmath2 to xmath41 model e and further to xmath104 model b the rotationally dominated circumstellar structure largely disappears it is replaced by dems dominated structures even though there is still a significant amount of rotation in the accretion flow a dense coherent diskis absent we conclude that for a moderately strong magnetic field of xmath101 the formation of rsd is suppressed if the tilt angle is moderate in the cases of the strongest magnetic field corresponding to xmath105 and xmath106 the formation of rsd is suppressed regardless of the tilt angle as can be seen from the last column of fig all for the orthogonal case model i the prominent pseudo spirals in the weaker field cases of model g xmath56 and h xmath101 are replaced by two arms that are only slightly bent they are part of a well defined pseudodisk that happens to lie roughly in the xmath107 or xmath44xmath42 plane see the second panel of fig 3db1e5 in the absence of any initial rotation one would expect the pseudodisk to form perpendicular to the initial field direction along the xmath44axis ie in the xmath108 or xmath69xmath42 plane over the entire course of core evolution and collapse the rotation has rotated the expected plane of the pseudodisk by nearly xmath2 nevertheless at the time shown when the central mass reaches xmath97 there is apparently little rotation left inside xmath109 to warp the pseudodisk significantly except for the orientation this pseudodisk looks remarkably similar to the familiar one in the aligned case model c see the first panel of fig 3db1e5 in particular there are low density holes in the inner part of both pseudodisks which are threaded by intense magnetic fields and surrounded by dense filaments they are the dems in the intermediate tilt angle case of xmath41 model f not shown in the 3d figure the pseudodisk is somewhat more warped than the two other cases and its inner part is again dominated by dems it is clear that for a magnetic field of xmath6 of a few the inner circumstellar structure is dominated by the magnetic field with rotation playing a relatively minor role the rsd remains suppressed despite the misalignment to better estimate the boundary between the cores that produce rsds and those 
that do not we carried out two additional simulations with xmath43 models p and q in table 1 we found a porous disk in model p xmath110 and xmath111 as in the weaker field case of model h but no disk in model q xmath112 and xmath113 as in the stronger field case of model i from this we infer that the boundary lies approximately at xmath114 or xmath115 our most important qualitative result is that the misalignment between the magnetic field and rotation axis tends to promote the formation of rotationally supported disks especially in weakly magnetized dense cores this is in agreement with the conclusion previously reached by xcite xcite jhc12 hereafter using a different numerical code and somewhat different problem setup their calculations were carried out using an adaptive mesh refinement amr code in the cartesian coordinate system with the central object treated using a stiffening of the equation of state whereas ours were done using a fixed mesh refinement fmr code in the spherical coordinate system with an effective sink particle at the origin despite the differences these two distinct sets of calculations yield qualitatively similar results the case for the misalignment promoting disk formation is therefore strengthened quantitatively there appears to be a significant discrepancy between our results and theirs according to their fig 14 a keplerian disk is formed in the relatively strongly magnetized case of xmath116 if the misalignment angle xmath43 formally this case corresponds roughly to our model i xmath105 and xmath43 for which we can rule out the formation of a rotationally supported disk with confidence see fig all we believe that the discrepancy comes mostly from the initial density profile adopted which affects the degree of magnetization near the core center for a given global mass to flux ratio xmath6 xcite adopted a centrally condensed initial mass distribution xmath117 with the characteristic radius xmath118 set to xmath35 of the core radius xmath119 so that the central to edge density contrast is 10 see also ciardihennebelle2010 it is easy to show that for this density profile and a uniform magnetic field the mass to flux ratio for the flux tube passing through the origin is xmath120 where xmath121 and xmath6 is the global mass to flux ratio for the core as a whole for xmath122 we have xmath123 in other words the central part of their core is substantially less magnetized relative to mass than the core as a whole due to the initial mass condensation for the xmath116 case under consideration we have xmath124 which makes the material on the central flux tube rather weakly magnetized relative to mass the magnetization of the central region is important because the central part is accreted first and the star formation efficiency in a core may not be 100 efficient eg alves2007 if we define as in setup an effective mass to flux ratio for the cylindrical magnetic flux surface that encloses xmath35 of the core mass then xmath125 for the density distribution adopted by jhc12 equation profile it is significantly different from the effective mass to flux ratio of xmath36 for the uniform density that we adopted the above difference in xmath33 makes our strongest field case of xmath105 xmath106 more directly comparable to jhc12 s xmath126 xmath127 case there is agreement that in both cases the formation of a rsd is suppressed even when the tilt angle xmath43 these results suggest that rsd formation is suppressed when the effective mass to flux ratio xmath128 independent of the degree 
of field rotation misalignment consistent with the conclusion we reached toward the end of strong based on models p and q similarly jhc12 s xmath116 xmath129 models may be more directly comparable to our xmath101 xmath102 models our calculations show that a more or less rotationally supported disk is formed in the extreme xmath43 case for xmath102 model h see fig all but not in the intermediate tilt angle xmath100 case model e this is consistent with their fig 14 where a keplerian disk is formed for xmath43 for xmath129 but not for xmath100 in the latter case jhc12 found a disk like structure with a flat rather than keplerian rotation profile they attributed the flat rotation to additional support from the magnetic energy which dominates the kinetic energy at small radii we believe that their high magnetic energy comes from strongly magnetized low density lobes ie dems which are clearly visible in our model e see fig all although there is still significant rotation on the xmath5 scale in model e we find the morphology and kinematics of the circumstellar structure too disorganized to be called a disk we conclude that for a moderate field strength corresponding to xmath130xmath131 a rotationally supported disk does not form except when the magnetic field is tilted nearly orthogonal to the rotation axis our weak field case of xmath56 xmath58 can be compared with jhc12 s xmath132 xmath133 case in both cases a well defined rotationally supported disk is formed when xmath43 see our fig all and their fig 12 for the intermediate tilt angle xmath100 jhc12 obtained a disk like structure with a flat rotation curve with the magnetic energy dominating the kinetic energy at small radii this is broadly consistent with our intermediate tilt angle model d where a highly variable porous disk is formed with the central part often dominated by strongly magnetized low density lobes there is also agreement that the disk formation is suppressed if xmath134 even for such a weakly magnetized case it therefore appears that for the weak field case of xmath135 a rotationally supported disk can be induced by a relatively moderate tilt angle of xmath136 the result that a misalignment between the magnetic field and rotation axis helps disk formation by weakening magnetic braking may be counter intuitive xcite showed analytically that for a uniform rotating cylinder embedded in a uniform static external medium magnetic braking is much more efficient in the orthogonal case than in the aligned case this analytic result may not be directly applicable to a collapsing core however as emphasized by jhc12 the collapse drags the initially uniform rotation aligned magnetic field into a configuration that fans out radially jhc12 estimated analytically that the collapse induced field fan out could in principle make the magnetic braking in the aligned case more efficient than in the orthogonal case the analytical estimate did not however take into account of the angular momentum removal by outflow which as we have shown in torque is a key difference between the weak field xmath56 xmath137 aligned case model a where disk formation is suppressed and its orthogonal counterpart model g that does produce a rotationally supported disk see figs contrast and torque and also ciardihennebelle2010 the generation of a powerful outflow in the weak field aligned case model a is facilitated by the orientation of its pseudodisk which is perpendicular to the rotation axis see fig 3d this configuration is conducive to both the pseudodisk winding up 
the field lines and the wound up field escaping above and below the pseudodisk which drives a bipolar outflow when the magnetic field is tilted by xmath2 away from the rotation axis the pseudodisk is warped by rotation into a snail shaped curtain that is unfavorable to outflow driving see fig the outflow makes it more difficult to form a rotationally supported disk in the aligned case disk formation is further hindered by magnetically dominated low density lobes dems which affect the inner part of the accretion flow of the aligned case more than that of the perpendicular case at least when the field is relatively weak see fig contrast and all for more strongly magnetized cases the dems becomes more dynamically important close to the central object independent of the tilt angle xmath8 see fig 3db1e5 dems like structureswere also seen in some runs of jhc12 eg the case of xmath126 and xmath40 see their fig 19 but were not commented upon as stressed previously by xcite and xcite and confirmed by our calculations the dems presents a formidable obstacle to the formation and survival of a rotationally supported disk while there is agreement between jhc12 and our calculations that misalignment between the magnetic field and rotation axis is beneficial to disk formation it is unlikely that the misalignment alone can enable disk formation around the majority of young stellar objects the reason is the following xcite obtained a mean dimensionless mass to flux ratio of xmath138 through oh zeeman observations for a sample of dense cores in nearby dark clouds near the center relative to that in the envelope for such a small xmath33 disk formation is completely suppressed even for the case of maximum misalignment of xmath43 xcite argued however that there is a flat distribution of the total field strength in dense cores from xmath139 to some maximum value xmath140 the latter corresponds to xmath141 so that the mean xmath142 stays around 2 if this is the case some cores could be much more weakly magnetized than others and disks could form preferentially in these cores however to form a rotationally supported disk the core material must have 1 an effective mass to flux ratio xmath33 greater than about 5 and 2 a rather large tilt angle see discussion in the preceding section if one assumes that the core to core variation of xmath33 comes mostly from the field strength rather than the column density as done in crutcher2010 then the probability of a core having xmath143 or xmath144 is xmath145 since a large tilt angle of xmath146is required to form a rsd for xmath130xmath131 the chance of disk formation is reduced from xmath145 by at least a factor of 2 assuming a random orientation of the magnetic field relative to the rotation axis to xmath147 or less the above estimate is necessarily rough and can easily be off by a factor of two in either direction it is however highly unlikely for the majority of the cores to simultaneously satisfy the conditions on both xmath33 and xmath8 for disk formation the condition xmath143 is especially difficult to satisfy because as noted earlier it implies that the dense cores probed by oh observations must have a total field strength xmath148 less than or comparable to the well defined median field strength inferred by xcite for the much more diffuse cold neutral atomic medium cnm it is hard to imagine a reasonable scenario in which the majority of dense cores have magnetic fields weaker than the cnm we note that xcite independently estimated a range of xmath149xmath29 for 
the above estimate is necessarily rough and can easily be off by a factor of two in either direction it is however highly unlikely for the majority of the cores to simultaneously satisfy the conditions on both xmath33 and xmath8 for disk formation the condition xmath143 is especially difficult to satisfy because as noted earlier it implies that the dense cores probed by oh observations must have a total field strength xmath148 less than or comparable to the well defined median field strength inferred by xcite for the much more diffuse cold neutral atomic medium cnm it is hard to imagine a reasonable scenario in which the majority of dense cores have magnetic fields weaker than the cnm we note that xcite independently estimated a range of xmath149 xmath29 for the fraction of dense cores that would produce a keplerian disk based on fig 14 of jhc12 their lower limit of xmath147 is in agreement with our estimate in both cases the fraction is dominated by weakly magnetized cores that have mass to flux ratios greater than xmath150 and moderately large tilt angles their upper limit of xmath151 is much higher than our estimate mainly because it includes rather strongly magnetized cores with mass to flux ratios as small as xmath152 since our calculations show that such strongly magnetized cores do not produce rotationally supported disks even for large tilt angles we believe that this upper limit may be overly generous whether dense cores have large tilt angles xmath8 between the magnetic field and rotation axis that are conducive to disk formation is unclear xcite measured the field orientation on the xmath109 scale for a sample of 16 sources using the millimeter interferometer carma they found that the field orientation is not tightly correlated with the outflow axis indeed the angle between the two is consistent with being random if the outflow axis is aligned with the core rotation axis and if the field orientation is the same on the core scale as on the smaller xmath109 scale then xmath8 would be randomly distributed between xmath153 and xmath2 with half of the sources having xmath154 however the outflow axis may not be representative of the core rotation axis this is because the fast outflow is thought to be driven magnetocentrifugally from the inner part of the circumstellar disk on the au scale or less shu2000 koniglpudritz2000 a parcel of core material would have lost most of its angular momentum on the way to the outflow launching location the torque most likely magnetic or gravitational that removes the angular momentum may also change the direction of the rotation axis similarly the field orientation on the xmath109 scale may not be representative of the initial field orientation on the larger core scale the magnetic field on the xmath109 scale is more prone to distortion by collapse and rotation than that on the core scale indeed xcite found that the field orientation on the core scale measured using the single dish telescope cso is within xmath155 of the outflow axis for 3 of the 4 sources in their sample the larger angle measured in the remaining source may be due to projection effects because its outflow axis is close to the line of sight if the result of xcite is valid in general and if the outflow axis reflects the core rotation axis then dense cores with large tilt angle xmath8 would be rare in this case disk formation would be rare according to the calculations presented in this paper and in jhc12 even in the unlikely event that the majority of dense cores are as weakly magnetized as xmath143 rotationally supported disks are observed however routinely around evolved class ii ysos see williamscieza2011 for a recent review and increasingly around younger class i eg jorgensen2009 lee2011 takakuwa et al 2012 and even class 0 tobin2012 sources when and how such disks form remain unclear our calculations indicated that the formation of large observable xmath5scale rotationally supported disks is difficult even in the presence of a large tilt angle xmath8 during the early protostellar accretion class 0 phase eg alves2007 small disks may be needed to drive fast outflows during the class 0 phase and may form through non ideal mhd effects machida2010 dappbasu2010 dapp2012 this may not contradict the available observations yet maury2010 since only one keplerian
disk is found around a late class 0 source so far l1527 tobin2012 it could result from a rare combination of an unusually weak magnetic field and a large tilt angle xmath8 if it turns out through future observations using alma and jvla that large scale keplerian disks are prevalent around class 0 sources then some crucial ingredients must be missing from the current calculations possible candidates include non ideal mhd effects and turbulence the existing calculations indicate that realistic levels of the classical non ideal mhd effects do not weaken the magnetic braking enough to enable large scale disk formation mellonli2009 li2011 krasnopolsky2012 see also krasnopolskykonigl2002 and braidingwardle2012a braidingwardle2012b although misalignment has yet to be considered in such calculations supersonic turbulence was found to be conducive to disk formation santos lima2012santos lima2013 seifried2012 myers2012 joos2013 although dense cores of low mass star formation typically have subsonic non thermal line width and it is unclear whether subsonic turbulence can enable disk formation in dense cores magnetized to a realistic level if on the other hand it turns out that large scale keplerian disks are rare among class 0 sources then the question of disk growth becomes paramount how do the mostly undetectable class 0 disks become detectable in the class i and ii phase if the magnetic braking plays a role in keeping the early disk undetectable then its weakening at later times may promote rapid disk growth one possibility for the late weakening of magnetic braking is the depletion of the protostellar envelope either by outflow stripping mellonli2008 or accretion machida2011 it deserves to be better quantified we carried out a set of mhd simulations of star formation in dense cores magnetized to different degrees and with different tilt angles between the magnetic field and the rotation axis we confirmed the qualitative result of xcite that misalignment between the magnetic field and rotation axis tends to weaken magnetic braking and is thus conducive to disk formation quantitatively we found however that the misalignment enables the formation of a rotationally supported disk only in dense cores where the star forming material is rather weakly magnetized with a dimensionless mass to flux ratio xmath1 large misalignment in such cores allows the rotation to wrap the equatorial pseudodisk in the aligned case into a curved curtain that hinders outflow driving and angular momentum removal making disk formation easier in more strongly magnetized cores disk formation is suppressed independent of the misalignment angle because the inner part of the protostellar accretion flow is dominated by strongly magnetized low density regions if dense cores are as strongly magnetized as inferred by xcite xcite with a mean mass to flux ratio xmath3 it would be difficult for the misalignment alone to enable disk formation in the majority of them we conclude that how protostellar disks form remains an open question we thank mark krumholz for useful discussion this work is supported in part by nasa grant nnx10ah30 g and the theoretical institute for advanced research in astrophysics tiara in taiwan through the charms project allen a li z y shu f h 2003 599 363 alves j lombardi m lada c j 2007 462 17 bodenheimer p 1995 33 199 braiding c r wardle m 2012 422 261 braiding c r wardle m 2012 427 3188 burkert a bodenheimer p 2000 543 822 chapman n et al 2013 submitted ciardi a hennebelle p 2010 409 l39 crutcher r m 2012 50 29 
crutcher r m wandelt b heiles c falgarone e troland t h 2010 725 466 dapp w b basu s 2010 521 56 dapp w b basu s kunz m w 2012 541 a35 duffin d f pudritz r e 2009 706 l46 galli d lizano s shu f h allen a 2006 647 374 galli d shu f h 1993 417 243 goodman a a benson p fuller g a myers p c 1993 406 528 heiles c troland t h 2005 624 773 hennebelle p ciardi a 2009 506 l29 hennebelle p fromang s 2008 477 9 hull c l h et al 2012 arxiv12120540 joos m hennebelle p ciardi a 2012 543 a128 joos m hennebelle p ciardi a fromang s 2013 submitted arxiv13013004 jrgensen j k van dishoeck e f visser r bourke t l wilner d j lommen d hogerheijde m r myers p c 2009 507 861 knigl a pudritz r 2000 in protostars and planets iv eds v mannings et al univ of arizona press 759 krasnopolsky r knigl a 2002 580 987 krasnopolsky r li z y shang h 2010 716 1541 krasnopolsky r li z y shang h 2011 733 54 krasnopolsky r li z y shang h zhao b 2012 757 77 krumholz m r crutcher r m hull c l h 2013 submitted lee c f 2011 741 62 li z y krasnopolsky r shang h 2011 738 180 machida m n inutsuka s matsumoto t 2010 arxiv10092140 machida m n inutsuka s matsumoto t 2011 63 555 machida m n matsumoto t hanawa t tomisaka k 2006 645 1227 matsumoto t tomisaka k 2004 616 266 maury a j andr p hennebelle p motte f stamatellos d bate m belloche a duchne g whitworth a 2010 512 a40 myers a t mckee c f cunningham a j klein r i krumholz m r 2012 arxiv12113467 submitted to mellon r r li z y 2008 681 1356 mellon r r li z y 2009 698 922 mouschovias t ch paleologou e v 1979 230 204 price d j bate m r 2007 311 75 santos lima r de gouveia dal pino e m lazarian a 2012 747 21 santos lima r de gouveia dal pino e m lazarian a 2013 in press seifried d banerjee r klessen r s duffin d pudritz r e 2011 417 1054 seifried d banerjee r pudritz r klessen r s 2012 423 40 shu f h najita j r shang h li z y 2000 in protostars and planets iv eds v mannings et al univ of arizona press 789 takakuwa s saito m lim j saigo k sridharan t k patel n a 2012 754 52 tobin j j hartmann l chiang h f wilner d j looney l w loinard l calvet n dalessio p 2012 492 83 tomida k tomisaka k matsumoto t hori y okuzumi s machida m n saigo k 2013 763 6 tomisaka k 1998 502 l163 troland t h crutcher r m 2008 680 457 williams j p cieza l a 2011 49 67 zhao b li z y 2013 763 7 zhao b li z y nakamura f krasnopolsky r shang h 2011 742 10
stars form in dense cores of molecular clouds that are observed to be significantly magnetized in the simplest case of a laminar non turbulent core with the magnetic field aligned with the rotation axis both analytic considerations and numerical simulations have shown that the formation of a large xmath0scale rotationally supported protostellar disk is suppressed by magnetic braking in the ideal mhd limit for a realistic level of core magnetization this theoretical difficulty in forming protostellar disks is termed magnetic braking catastrophe a possible resolution to this problem proposed by xcite and xcite is that misalignment between the magnetic field and rotation axis may weaken the magnetic braking enough to enable disk formation we evaluate this possibility quantitatively through numerical simulations we confirm the basic result of xcite that the misalignment is indeed conducive to disk formation in relatively weakly magnetized cores with dimensionless mass to flux ratio xmath1 it enabled the formation of rotationally supported disks that would otherwise be suppressed if the magnetic field and rotation axis are aligned for more strongly magnetized cores disk formation remains suppressed however even for the maximum tilt angle of xmath2 if dense cores are as strongly magnetized as indicated by oh zeeman observations with a mean dimensionless mass to flux ratio xmath3 it would be difficult for the misalignment alone to enable disk formation in the majority of them we conclude that while beneficial to disk formation especially for the relatively weak field case the misalignment does not completely solve the problem of catastrophic magnetic braking in general
introduction problem setup weak-field case: disk formation enabled by field-rotation misalignment moderately strong field cases: difficulty with disk formation discussion
the geodesics in a kerr metric are considered in the classical books on general relativity 13 some recent papers are devoted to more detailed study of geodesics on kerr s black hole with the aim to elucidate the mechanism of jet formation 4 and to analyze the possibility of particle acceleration to arbitrary high energy 5 the complete sets of analytic solutions of the geodesic equation in axially symmetric space time are given in 6 however the description of particle motion by geodesics is restricted to a spinless particle the motion of a spinning test particle is described by the mathisson papapetrou dixon equations 79 xmath2 xmath3 where xmath4 is the particle s 4velocity xmath5 is the tensor of spin xmath6 and xmath7 are respectively the mass and the covariant derivative with respect to the particle s proper time xmath8 xmath9 is the riemann curvature tensor units xmath10 are used it is necessary to add a supplementary condition to eqs 1 2 in order to choose an appropriate trajectory of the particle s center of mass most often the conditions xmath11 and xmath12 are used where xmath13 is the 4momentum in practice the condition for a spinning test particle xmath14 must be taken into account 10 where xmath15 is the absolute value of spin xmath16 is the coordinate distance of the particle from the massive body after 79 eqs 1 2 were obtained in many papers by different approaches 11 also this subject is of importance in some recent publications 12 in general the solutions of eqs 1 2 under conditions 3 and 4 are different however in the post newtonian approximation these solutions coincide with high accuracy 13 just as in the case if the spin effects can be described by a convergent in spin series as some corrections to the corresponding geodesic expressions 14 therefore instead of rigorous mpd eqs 1 their linear spin approximation xmath17 is often considered in this approximation condition 4 coincides with 3 the effect of spin on the particle s motion in kerr s field has been studied since the 1970s 101516 in the past 1012 yearsthis subject gives rise to renewed interest 1722 particularly in the context of investigations of the possible chaotic motions 1719 also these references provide a good introduction concerning the mpd equations the purpose of this paper is to investigate more carefully the world lines and trajectories of a spinning particle moving relative to a kerr source with the velocity close to the velocity of light we focus on the circular and close to circular orbits because just on these orbits one can expect the significant effects of the spin gravity interaction 102325 indeed these orbits are of interest in the context of investigations of the nongeodesic synchrotron electromagnetic radiation of highly relativistic protons and electrons near black holes besides it is known that the highly relativistic circular orbits of a spinless particle are of importance in the classification of all possible geodesic orbits in a kerr spacetime naturally the circular highly relativistic orbits of a spinning particle are of importance in the classification of all possible significantly nongeodesic orbits in this spacetime as well also these orbits are exclusive in the sense that they are described by the strict analytical solutions of the mpd equations the main features of the spin gravity interaction that are revealed on the circular and close to circular orbits will be a good reference to further investigations of most general motions of a spinning particle in a kerr spacetime we stress 
that mpd equations are the classical limit of the general relativistic dirac equation 26 new results in this context are presented in 27 therefore highly relativistic solutions of the mpd equations stimulate the corresponding investigations of the fermion s interaction with the strong gravitational field the paper is organized as follows sec 2 deals with the relationships following from the mpd equations for the highly relativistic equatorial circular orbits of a spinning particle in kerr s field in the boyer lindquist coordinates the linear spin mpd equations for any motions of a spinning particle are considered in sec the results of computer integration of these equations for some significantly nongeodesic motions are presented in sec 4 we conclude in sec 5 following 3 in this paper xmath0 notes the radial coordinate of the photon circular orbits in the case of the counter rotation in practical calculations it is convenient to represent the mpd equations through the spin 3vector xmath18 instead of the 4tensor xmath5 where by definition xmath19 where xmath20 is the determinant of xmath21 xmath22 is the levi civita symbol here and in the following latin indices run 1 2 3 and greek indices 1 2 3 4 unless otherwise specified 2 can be written as 23 xmath23 upi urho gammarhopi4 uisk uk xmath24 upi 0 where a dot denotes differentiation with respect to the proper time xmath8 and square brackets denote antisymmetrization of indices xmath25 are the christoffel symbols 7 in terms of xmath18 is xmath26 xmath27 in many papers the 4vector of spin xmath28 is considered where xmath29 the following relationship holds xmath30 let us consider eqs 9 10 for the kerr metric using the boyer lindquist coordinates xmath31 then the nonzero components of xmath21 are xmath32 xmath33 xmath34 where xmath35 in the following we shall put xmath36 without any loss in generality the metric signature is it is easy to check that three equations from 9 have a partial solution with xmath37 xmath38 xmath39 and the relationship for the nonzero component of the spin 3vector xmath40 is xmath41 where xmath42 is the constant of integration the physical meaning of this constant is the same as in the general integral of the mpd equations 15 xmath43 we stress that relationship 12 is valid for any equatorial motions xmath37 with the spin orthogonal to the motion plane xmath44 the possible equatorial orbits of a spinning particle are described by eq first we shall consider the case of the circular orbits with xmath45 investigating the conditions of existence of the equatorial circular orbits for a spinning particle in kerr s field we use eqs 10 12 and xmath46 it is known that from the geodesic equations in this field the algebraic relationship that follows determines the dependence of the velocity of a spinless particle on the radial coordinate xmath16 of the equatorial circular orbit similarly from the first equation of set 10 using eq 14 we obtain the relationship for the equatorial circular orbits of a spinning particle in kerr s field as follows xmath47 xmath480 by eq 14 other equations of set 10 are automatically satisfied taking into account eq 12 and the explicit expressions for xmath49 and xmath9 see eg 18 from eq 15 we obtain xmath50 xmath510 the 4velocity component xmath52 can be expressed through xmath53 from the condition xmath46 as follows xmath54 just the sign at the radical in eq 17 ensures the positive value of xmath52 inserting expression 17 into eq 16 and eliminating the radical by raising to the second power we get 
xmath55 12varepsilon mar4delta9varepsilon2m2r4delta xmath56 xmath57 where as in eq 6 xmath58 without any loss in generality we put xmath59 then by eq 12 xmath60 so the particle s angular velocity xmath61 on the circular orbit with the radial coordinate xmath16 must satisfy eq let us show that eq 18 provides known solutions in the partial case of a spinless particle xmath62 from eq 18 we have xmath63 it follows from eq 19 that the velocity of such a particle on the circular orbit is highly relativistic if the expression xmath64 is close to xmath65 this fact is known from the analysis of the geodesic orbits in a kerr field as well as that just the roots of the equation xmath66 determine the values of xmath16 for the photon orbits if xmath67 and the absolute value of the expression xmath68 in the factor of xmath69 in eq 18 is much greater than xmath70 then it is easy to verify that the corresponding roots of eq 18 describe the circular orbits with the angular velocity which is close to the angular velocity of the corresponding geodesic orbits due to xmath71 more exactly in this case we have xmath72 that is not described in the literature namely it is not difficult to calculate that for xmath67 eqs 16 18 have the solutions which describe the highly relativistic circular orbits with the values of xmath16 that is equal or close to xmath0 ie to the radial coordinate of the counter rotation photon circular orbits for example in the case of the maximum kerr field xmath73 the orbits with xmath74 where xmath75 are highly relativistic both for positive and negative xmath76 if xmath77 according to eqs 16 18 the values of xmath53 on these orbits are determined by the expression xmath78 the choice of the sign in eq 21 xmath79 is dictated by the necessity to satisfy both eq 18 and 16 eq 18 as compared to 16 has additional roots because of the operation of raising to the second power similarly as in the case of eq 20 it is easy to check that if xmath80 it follows from eqs 16 18 the expression for xmath53 which in the main term coincides with the known analytic solution for the corresponding geodesic circular orbit it follows from eqs 16 18 at xmath1 for any xmath81 that xmath82 it is known from the geodesic equations that the values of xmath0 increase monotonically from xmath83 at xmath84 to xmath85 at xmath73 thus according to eq 22 the expression for the angular velocity xmath61 in the main term is proportional to xmath86 whereas the angular velocity in eq 20 at xmath87 xmath88 is proportional to xmath89 further details appear below in this sec using eq 22 we can estimate the value of the lorentz xmath90factor corresponding to the 4velocity component xmath53 for different xmath91 more exactly we shall calculate the lorentz xmath90factor from the point of view of an observer which is at rest relative to a kerr mass according to the general expression for the 3velocity componentsxmath92 we write 1 xmath93 and for the second power of the velocity absolute value xmath94 we have xmath95 where xmath96 is the 3space metric tensor the relationship between xmath96 and xmath21is as follows xmath97 for the circular motions we have xmath98 and according to eq 23 xmath99 by eqs 2426 and the condition xmath46 for the xmath90factor we write xmath100 inserting the value xmath53 from eq 22 into eq 27 we find in the corresponding spin approximation xmath101 it follows from eq 28 that xmath102 ie the under consideration circular motions are highly relativistic fig 1 shows the dependence xmath103 on xmath104 where xmath105 
is the xmath90factor for xmath106 that is at xmath84
fig 1 caption ratio of the xmath90factor at different xmath91 from xmath107 to the xmath90factor at xmath84 vs radial coordinate
it is known that the important characteristics of the particle s motion in the kerr spacetime are its energy and angular momentum let us estimate the conserved values of the energy xmath108 and angular momentum xmath109 of a spinning particle on the above considered highly relativistic circular orbits with eqs 21 22 the expressions for these quantities are 1628 xmath110 xmath111 by eqs 21 29 for xmath74 xmath77 we obtain xmath112 the energy of a spinless particle on the geodesic circular orbit with xmath74 xmath113 for xmath73 is equal to xmath114 it follows from eqs 31 32 that xmath115 at the same time according to 31 32 for xmath116 we have xmath117 that is the values of energy of the spinning and spinless particles on the highly relativistic circular orbits with the same xmath74 in the maximal kerr field can differ significantly it is easy to show that a similar situation takes place for all values xmath81 with xmath87 in addition one can estimate for these circular orbits that according to eq 30 xmath118 as a result the relationships xmath119 and xmath118 following from eqs 29 30 show clearly that the corresponding solutions of the mpd equations can not be obtained in the framework of the analytic perturbation approach to the dynamics of a classical spinning particle developed in 2022 we stress that in this sec above we have considered the new highly relativistic circular solutions of the approximate mpd eqs 2 7 let us show that these solutions satisfy the rigorous mpd eqs 1 2 indeed it follows from eqs 1 that their terms which were neglected in eqs 7 in the case of the circular equatorial motions are present in the first eq of set 1 only in metric 11 these terms can be written as xmath120 where xmath121 xmath122 xmath123 xmath124 xmath125 first we point out that according to eq 36 in the case of a schwarzschild s field for the circular orbit with xmath126 all the coefficients xmath127 are equal to 0 therefore in this case expression 35 is equal to 0 identically independently of the explicit expressions for xmath53 xmath52 it means that the highly relativistic circular orbit of a spinning particle with xmath126 in a schwarzschild s field is a common strict solution both of the approximate mpd equations 2 7 and the rigorous mpd equations 1 2 second it is not difficult to check that applying eqs 17 22 to expression 35 at xmath128 yields xmath65 in the main terms ie within the accuracy of order xmath129 thus in this sec we dealt with new partial solutions of the mpd equations in a kerr spacetime with eqs 12 14 22 all highly relativistic orbits of a spinning particle described by these solutions are circular and located in the space region with xmath87 xmath75 to study highly relativistic orbits beyond this region it is necessary to carry out the corresponding computer calculations this is the aim of secs 3 and 4 now the point of interest is the noncircular highly relativistic motions of a spinning particle that starts near xmath0 in a kerr field in particular we shall consider the effect of the 3vector xmath18 inclination to the equatorial plane xmath37 on the particle s trajectory in this case eq 9 can not be integrated separately from eqs 7 for computer integration it is necessary to write the explicit form of eqs 7 9 in metric 11 it is convenient to use the dimensionless quantities xmath130 where by definition xmath131 xmath132
xmath133 and xmath134 in contrast to xmath129 from 6 that depends on xmath16 here xmath135 is defined to be xmath136 then from eqs 7 9 one obtains the set of 11 first order differential equations xmath137 xmath138 xmath139 where a dot denotes differentiation with respect to xmath140 and xmath141 xmath142 are some functions which depend on xmath143 because the expressions for xmath144 in the general case of any xmath91 are too lengthy here we write xmath144 for the much simpler case xmath84 xmath145 xmath146 xmath147 xmath148 xmath149 xmath150 xmath151 xmath152 xmath153 xmath154fracy9y8a4 xmath155 xmath156 xmath157 xmath158 xmath159 xmath160 xmath161 xmath162 xmath163fracy11y8a4 where xmath164 according to eqs 13 38 the expression for xmath165 can be written as xmath166 xmath167 xmath168 eqs 39 41 are aimed at computer integration we present here the results of computer integration of eqs 39 with 40 41 all plots below are restricted to the domains of validity of the linear spin approximation that is in these domains the neglected terms of the rigorous mpd equations are much less than the linear spin terms we monitor errors of computing using the conserved quantities the absolute value of spin the energy and angular momentum see eqs 13 29 30
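a minimal sketch of how such an integration might be organized assuming scipy is given below it integrates a generic system of 11 first order equations and monitors the drift of a conserved quantity to control the numerical error the right hand side and the conserved function are placeholders only since the explicit expressions xmath144 entering eqs 39 41 are not reproduced in this excerpt

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, y):
    # placeholder right-hand side of the 11 first-order equations (cf. eqs 39-41);
    # the explicit functions of the coordinates, 4-velocity and spin are not reproduced here
    return np.zeros_like(y)

def spin_magnitude(y):
    # placeholder conserved quantity (cf. eq 13); assumes the last three entries hold the spin 3-vector
    return float(np.dot(y[8:11], y[8:11]))

y0 = np.zeros(11)
y0[8] = 1e-2                     # illustrative nonzero spin component
s_span = (0.0, 100.0)            # interval of dimensionless proper time

sol = solve_ivp(rhs, s_span, y0, method="DOP853", rtol=1e-10, atol=1e-12, dense_output=True)

# monitor the relative drift of the conserved quantity to bound the integration error
s0 = spin_magnitude(y0)
drift = abs(spin_magnitude(sol.y[:, -1]) - s0) / max(abs(s0), 1e-30)
print(sol.status, drift)
```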
figs 2 9 correspond to the case of schwarzschild s field all plots start with the same initial values of the coordinates xmath169 namely at xmath170 and xmath65 correspondingly we do not vary the initial values of the 4velocity components xmath171 and xmath53 either more exactly we put xmath172 and xmath173 where expression 42 is the solution of eq 18 at xmath84 rigorous in xmath135 that is the initial values of xmath171 and xmath53 are the same as for the equatorial circular orbit with xmath126 it is easy to check that expression 42 coincides with 22 in the corresponding spin approximation however we vary the initial inclination angle of the spin 3vector to the plane xmath174 without change of the absolute value of spin figs 2 6 and add the small perturbation by the radial velocity figs 7 9 without any loss in generality we put xmath175 for comparison we present the corresponding solutions of the geodesic equations that start with the same initial values of the coordinates and velocity as the solutions of the equations for a spinning particle in all cases as a typical value we put xmath176 figs 2 9 let us compare the world lines and trajectories of the spinning and spinless particles in schwarzschild s field figs 2 6 7 9 exhibit the significant repulsive effects of the spin gravity interaction on the spinning particle due to the repulsive action the spinning particle falls on the horizon surface during a longer time as compared to the spinless particle fig 2 moreover according to figs 6 9 the considerable space separation takes place within a short time ie within the time of the spinless particle s fall on the horizon the point of interest is the phenomenon when a spinning particle orbits below the equatorial plane for some revolutions figs 3 8 we recall that according to the geodesic equations a similar situation is impossible for a spinless particle it is easy to calculate that for xmath177 a spinning particle can orbit above the equatorial plane we stress that all graphs in figs 2 9 are interrupted beyond the domain of validity of the linear spin approximation by figs 3 4 within the time of validity of this approximation the period of the xmath178oscillation coincides with the period of the particle s revolution by xmath179 whereas on this interval the value of the spin component xmath180 is xmath136 fig 5 just as the components xmath181 and xmath182 the corresponding graphs are not presented here we point out that this situation differs from the corresponding case of the circular motions of a spinning particle that are not highly relativistic indeed then the nonzero radial spin component is not xmath136 but oscillates with the period of the particle s revolution by xmath179 besides in the last case the mean level of xmath178 coincides with xmath183 whereas in figs 3 8 the mean values of xmath178 are above xmath183 according to figs 3 8 the amplitude of the xmath178oscillation increases with the inclination angle however xmath184 is small even for the inclination angle that is equal to xmath183 in the context of figs 3 8 we point out an interesting analytical result following from eqs 39 41 namely it is not difficult to check that these equations are satisfied if xmath185 xmath186 xmath187 xmath188 xmath189 where xmath190 and xmath191 are some small constant values such that xmath192 xmath193 according to eq 41 the relationship between xmath190 xmath191 and xmath129 from eq 6 is as follows xmath194 by notation 37 we conclude that the partial solution of eqs 39 40 that is presented in eq 43 describes the highly relativistic nonequatorial circular motion with xmath195 that is eq 43 shows the possibility of a spinning particle solution that orbits permanently above or below the equatorial plane however with the small value xmath196 only one can verify that due to the small values xmath190 xmath191 solution 43 is an approximate solution of the rigorous mpd equations just as in the case of schwarzschild s field the set of eqs 39 with the corresponding expressions for xmath144 can be integrated numerically in kerr s field fig 10 as an analogue of fig 2 shows the plots of xmath197 for some values of the inclination angle at xmath73 fig 11 shows the dependence of the amplitude and period of the xmath178oscillation on the kerr parameter xmath91 the main features of figs 4 9 are peculiar to the corresponding plots for kerr s field that are not presented here for brevity according to figs 2 6 10 when the inclination angle is equal to xmath183 ie when spin lies in the equatorial plane so that spin gravity coupling is equal to xmath65 the corresponding plots coincide with the geodesics this is evidence of the correct transition from solutions of the mpd equations to geodesics fig 12 illustrates some typical cases of the equatorial motions of a particle with fixed initial values of its coordinates and velocity but with different absolute value of spin the inclination angle is equal to xmath198 in kerr s field at xmath73 all curves start from xmath199 with zero radial velocity and the tangential velocity which at xmath176 is needed for the circular motion with xmath199 this circular motion is shown by the horizontal line whereas other curves represent the noncircular motions at xmath200 with the same particle s initial values of coordinates and velocity according to fig 12 the circular orbit of a spinning particle with xmath199 monotonically trends to the corresponding noncircular geodesic orbit if xmath135 tends to 0 ie the limiting transition xmath201 is correct
fig 2 caption radial coordinate vs proper time for the inclination angle xmath198 horizontal line xmath126 xmath202 dash and dot line xmath203 dash line and xmath183 solid line at xmath84 xmath176 the dot line corresponds to the geodesic motion with the same initial values of the coordinates and velocity
fig 3 caption graphs of the angle xmath178 vs proper time for the inclination angle xmath198 horizontal line xmath204 xmath202 solid line xmath203 dash line and xmath183 dash and dot line at xmath84 xmath176
fig 4 caption graphs of the angle xmath179 vs proper time at xmath84 xmath176 for different values of the inclination angle practically coincide with the corresponding geodesic plot the same feature takes place for the corresponding graphs t vs s that are not presented here for brevity
fig 5 caption graphs of xmath180 vs proper time for the inclination angle xmath202 dash line xmath203 dash and dot line and xmath183 solid line at xmath84 xmath176
fig 6 caption trajectories of the spinning particle in the polar coordinates for the inclination angle xmath198 circle xmath126 xmath202 dash line xmath203 dash and dot line and xmath183 solid line at xmath84 xmath176 the dot line corresponds to the geodesic motion with the same initial values of the coordinates and velocity the circle xmath205 corresponds to the horizon line
fig 7 caption radial coordinate vs proper time for the inclination angle xmath198 dash and dot line xmath202 dash line and xmath203 solid line at xmath84 xmath176 and the nonzero radial velocity xmath206 the dot line corresponds to the geodesic motion with the same initial values of the coordinates and velocity
fig 8 caption graphs of the angle xmath178 vs proper time for the inclination angle xmath198 horizontal line xmath202 dash line and xmath203 solid line at xmath84 xmath176 and xmath206
fig 9 caption trajectories of the spinning particle in the polar coordinates for the inclination angle xmath198 dash and dot line xmath202 dash line and xmath203 solid line at xmath84 xmath176 and xmath207 the dot line corresponds to the geodesic motion with the same initial values of the coordinates and velocity
fig 10 caption radial coordinate vs proper time for the inclination angle xmath198 horizontal line xmath199 xmath202 dash and dot line xmath203 dash line and xmath183 solid line at xmath73 xmath176 the dot line corresponds to the geodesic motion with the same initial values of the coordinates and velocity
fig 11 caption graphs of the angle xmath178 vs proper time for the inclination angle xmath202 at xmath84 dash line xmath208 dash and dot line xmath73 solid line xmath176
fig 12 caption radial coordinate vs proper time for the equatorial motions in kerr s field at xmath73 with different absolute value of spin and the same common initial values of the coordinates and velocity the circular orbit with xmath199 at xmath176 is shown by the long dash line the solid dash and dot and dash lines describe the cases when xmath135 is equal to xmath209 xmath210 and xmath211 correspondingly the dot line represents the geodesic motion with the same initial values of the coordinates and velocity
finally we remark on a simple conclusion following from the nongeodesic curves of a spinning particle presented in part in figs 2 12 let us consider any point on the trajectory of a spinning particle corresponding to its proper time xmath212 we recall that all curves in figs 2 12 begin at xmath213 then the geodesic curve can be calculated which starts just at this point with the velocity that is equal to the velocity of the spinning particle at the same point also it is not difficult to estimate the deviation of the pointed out nongeodesic curve from this geodesic in principle it means that in our comparison of the corresponding geodesic and nongeodesic curves we are not restricted to the trajectories of a spinning particle which start in the small neighborhood of the value xmath1
only naturally here we are restricted to the domain of validity of the linear spin approximation this conclusion may be useful for the generalization of the results obtained in this sec to other motions of a spinning particle in this paper using the linear spin approximation of the mpd equations we have studied the significantly nongeodesic highly relativistic motions of a spinning particle starting near xmath1 in kerr s field some of these motions namely circular are described by the analytical relationships following directly from the mpd equations in the boyer lindquist coordinates others noncircular and nonequatorial are calculated numerically for realization of these motions the spinning particle must possess the orbital velocity corresponding to the relativistic lorentz xmath90factor of order xmath86 all considered cases of the spinning particle motion are within the framework of validity of the test particle approximation when xmath214 the situation with a macroscopic test particle moving relative to a massive body with xmath215 is not realistic however the highly relativistic values of the lorentz xmath90factor are usual in astrophysics for elementary particles for example if xmath216 is equal to three solar masses as for a black hole then xmath135 for an electron is of order xmath217 and from eq 28 we have the xmath90factor of order xmath218 similarly for a neutrino with the mass xmath219 we have xmath220 we can expect the effects of the significant space separation of some highly relativistic particles with different orientation of spin indeed the effects considered in this paper exhibit the strong repulsive action of the spin gravity interaction for another correlation of signs of the spin and the particle s orbital velocity this interaction acts as an attractive force in general the last case is beyond the validity of the linear spin approximation the nonlinear spin effects will be investigated in another paper also we plan to show that according to the mpd equations significantly nongeodesic orbits of a highly relativistic spinning particle with the xmath90factor of order xmath86 exist for a much wider space region of the initial values of the particle s coordinates in a kerr spacetime than the orbits considered above however the corresponding calculations are very complicated because this result follows from the rigorous mpd equations 1 2 only and is not common for the approximate equations 2 7 l d landau and e m lifshitz the classical theory of fields addison wesley reading massachusetts 1971 c w misner k s thorne and j a wheeler gravitation freeman san francisco 1973 s chandrasekhar the mathematical theory of black holes oxford university press oxford 1983 j gariel m a h maccallum g marcilhacy and n o santos astron and astrophys 515 a15 2010 m banados j silk and s m west phys 103 111102 2009 e berti v cardoso l gualtieri f pretorius and u sperhake phys lett 103 239001 2009 t jacobson and t sotiriou phys 104 021101 2010 e hackmann v kagramanova j kunz and c lammerzahl europhys lett 88 30008 2009 m mathisson acta phys polon 6 163 1937 a papapetrou proc a 209 248 1951 w g dixon proc a 314 499 1970 gen gravitation 4 199 1973 philos a 277 59 1974 acta phys suppl 1 27 2008 r wald phys d 6 406 1972 w tulczyjew acta phys pol 18 393 1959 a taub j math phys 5 112 1964 p bartrum proc r soc a 284 204 1965 h p kunzle j math 13 739 1972 m omote progr theor 49 1559 1973 s hojman phys d 18 2741 1978 a balachandran g marmo b skagerstam and a stern phys b 89 199 1980 j natario commun 281
387 2008 e barausse e racine and a buonanno phys d 80 104025 2009 j steinhoff and g schfer europhys lett 87 50004 2009 b m barker and r f oconnell gen 4 193 1973 a n aleksandrov kinem 7 13 1991 b mashhoon j math 12 1075 1971 k p tod f de felice and m calvani nuovo cim b 34 365 1976 s suzuki and k maeda phys d 58 023005 1998 o semerak mon not r astron soc 308 863 1999 m hartl phys d 67 024005 2003 phys d 67 104023 2003 c chicone b mashhoon and b punsly phys a 343 1 2005 b mashhoon and d singh phys d 74 124006 2006 d singh phys d 78 104028 2008 r plyatsko phys d 58 084031 1998 r plyatsko and o bilaniuk class quantum grav 18 5187 2001 r plyatsko class quantum grav 22 1545 2005 s wong int phys 5 221 1972 l kannenberg ann ny 103 64 1977 r catenacci and m martellini lett nuovo cimento 20 282 1977 j audretsch j phys a 14 411 1981 a gorbatsievich acta phys b 17 111 1986 a barut and m pavsic class quantum grav 4 41 1987 f cianfrani and g montani europhys lett 84 30008 2008 int j mod a 23 1274 2008 yu obukhov a silenko and o teryaev phys d 80 064044 2009 r micoulaut z phys 206 394 1967
using the mathisson papapetrou dixon mpd equations we investigate the trajectories of a spinning particle starting near xmath0 in a kerr field and moving with the velocity close to the velocity of light xmath0 is the boyer lindquist radial coordinate of the counter rotation circular photon orbits first as a partial case of these trajectories we consider the equatorial circular orbit with xmath1 this orbit is described by the solution that is common for the rigorous mpd equations and their linear spin approximation then different cases of the nonequatorial motions are computed and illustrated by the typical figures all these orbits exhibit the effects of the significant gravitational repulsion that are caused by the spin gravity interaction possible applications in astrophysics are discussed
introduction highly relativistic equatorial circular orbits of spinning particles in a kerr field according to approximate and rigorous mpd equations equations (7), (9) for any motions in a kerr field numerical results conclusions
a coamoeba is the image of a subvariety of a complex torus under the argument map to the real torus coamoebae are cousins to amoebae which are images of subvarieties under the coordinatewise logarithm map xmath0 amoebae were introduced by gelfand kapranov and zelevinsky in 1994 xcite and have subsequently been widely studied xcite coamoebae were introduced by passare in a talk in 2004 and they appear to have many beautiful and interesting properties for example coamoebae of xmath1 discriminants in dimension two are unions of two non convex polyhedra xcite and a hypersurface coamoeba has an associated arrangement of codimension one tori contained in its closure xcite bergman xcite introduced the logarithmic limit set xmath2 of a subvariety xmath3 of the torus as the set of limiting directions of points in its amoeba bieri and groves xcite showed that xmath2 is a rational polyhedral complex in the sphere logarithmic limit sets are now called tropical algebraic varieties xcite for a hypersurface xmath4 the logarithmic limit set xmath5 consists of the directions of non maximal cones in the outer normal fan of the newton polytope of xmath6 we introduce a similar object for coamoebae and establish a structure theorem for coamoebae similar to that of bergman and of bieri and groves for amoebae let be the coamoeba of a subvariety xmath3 of xmath7 with ideal xmath8 the phase limit set of xmath3 is the set of accumulation points of arguments of sequences in xmath3 with unbounded logarithm for xmath9 the initial variety xmath10 is the variety of the initial ideal of xmath8 the fundamental theorem of tropical geometry asserts that xmath11 exactly when the direction of xmath12 lies in xmath2 we establish its analog for coamoebae in theorem t one the closure of xmath13 is xmath14 and xmath15 johansson xcite used different methods to prove this when xmath3 is a complete intersection the cone over the logarithmic limit set admits the structure of a rational polyhedral fan xmath16 in which all weights xmath17 in the relative interior of a cone xmath18 give the same initial scheme xmath19 thus the union in theorem t one is finite and is indexed by the images of these cones xmath20 in the logarithmic limit set of xmath3 the logarithmic limit set or tropical algebraic variety is a combinatorial shadow of xmath3 encoding many properties of xmath3 while the coamoeba of xmath3 is typically not purely combinatorial see the examples of lines in xmath21 in section s lines the phase limit set does provide a combinatorial skeleton which we believe will be useful in the further study of coamoebae we give definitions and background in section s defs and detailed examples of lines in three dimensional space in section s lines these examples are reminiscent of the concrete descriptions of amoebae of lines in xcite we prove theorem t one in section s phase as a real algebraic group the set xmath22 of invertible complex numbers is isomorphic to xmath23 under the map xmath24 here xmath25 is the set of complex numbers of norm 1 which may be identified with xmath26 the inverse map is xmath27 let xmath28 be a free abelian group of finite rank and xmath29 its dual group we use xmath30 for the pairing between xmath28 and xmath31 the group ring xmath32 is the ring of laurent polynomials with exponents in xmath28 it is the coordinate ring of a torus xmath33 which is identified with xmath34 the set of group homomorphisms xmath35 there are natural maps xmath36 and xmath37 which are induced by the maps xmath38 and xmath39 maps xmath40 of free abelian groups induce
corresponding maps xmath41 of tori and also of xmath42 and xmath43 if xmath44 is the rank of xmath31 we may identify xmath31 with xmath45 which identifies xmath33 with xmath46 xmath43 with xmath47 and xmath42 with xmath48 the amoeba of a subvariety xmath49 is its image under the map xmath50 and the coamoeba of xmath3 is the image of xmath3 under the argument map xmath51 an amoeba has a geometric combinatorial structure at infinity encoded by the logarithmic limit set xcite coamoebae similarly have phase limit sets which have a related combinatorial structure that we define and study in section s phase if we identify xmath52 with xmath53 then the map xmath54 given by xmath55 is a real algebraic map thus coamoebae as they are the image of a real algebraic subset of the real algebraic variety xmath33 under the real algebraic map xmath56 are semialgebraic subsets of xmath43 xcite it would be very interesting to study them as semi algebraic sets in particular what are the equations and inequalities satisfied by a coamoeba when xmath3 is a grassmannian such a description would generalize richter gebert s five point condition for phirotopes from rank two to arbitrary rank xcite similarly we may replace the map xmath57 in the definition of amoebae by the map xmath58 to obtain the algebraic amoeba of xmath3 which is a subset of xmath59 the algebraic amoeba is a semi algebraic subset of xmath59 and we also ask for its description as a semi algebraic set example ex linep2 let xmath60 be defined by xmath61 the coamoeba xmath62 is the set of points of xmath63 of the form xmath64 for xmath65 if xmath66 is real then these points are xmath67 xmath68 and xmath69 if xmath66 lies in the intervals xmath70 xmath71 and xmath72 respectively for other values consider the picture below in the complex plane xmath73 for xmath74 fixed xmath75 can take on any value strictly between xmath76 for xmath17 near xmath77 and xmath78 for xmath66 near xmath78 and thus xmath62 consists of the three points xmath79 xmath80 and xmath81 and the interiors of the two triangles displayed below in the fundamental domain xmath82 2 subset mathbb r2 of xmath63 this should be understood modulo xmath83 so that xmath84 xmath85 the coamoeba is the complement of the region xmath86 2 alphabeta leq piarg1 together with the three images of real points xmath67 xmath68 and xmath69 given a general line xmath87 with xmath88 we may replace xmath66 by xmath89 and xmath90 by xmath91 to obtain the line xmath92 with coamoeba this transformation rotates the coamoeba by xmath93 horizontally and xmath94 vertically
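a small numerical sketch of this example is given below it assumes for concreteness that the line of example ex linep2 is defined by x + y + 1 = 0 a standard choice for this kind of example the actual equation behind xmath61 is not reproduced in this excerpt sampling points on the line and recording the pairs of arguments fills the two open triangles described above while the real points of the line account for the three isolated vertices

```python
import numpy as np
import matplotlib.pyplot as plt

# assumed concrete line: x + y + 1 = 0 in the torus (C*)^2, so y = -1 - x;
# away from the real axis the pairs (arg x, arg y) fill two open triangles,
# and the real points of the line map to the three isolated vertices
rng = np.random.default_rng(0)
x = rng.normal(size=200_000) + 1j * rng.normal(size=200_000)
x = x[(np.abs(x.imag) > 1e-6) & (np.abs(x) > 1e-6)]   # keep points in the torus, off the real axis
y = -1.0 - x

alpha = np.angle(x)   # argument of the first coordinate, in (-pi, pi]
beta = np.angle(y)    # argument of the second coordinate

plt.figure(figsize=(4, 4))
plt.plot(alpha, beta, ",")
plt.xlabel("arg x")
plt.ylabel("arg y")
plt.title("sampled coamoeba of the line x + y + 1 = 0")
plt.show()
```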
let xmath95 be a polynomial with support xmath96 xmath97 where we write xmath98 for the element of xmath32 corresponding to xmath99 given xmath100 let xmath101 be the minimum of xmath102 for xmath103 then the initial form xmath104 of xmath6 with respect to xmath100 is the polynomial xmath105 defined by xmath106 given an ideal xmath107 and xmath100 the initial ideal with respect to xmath17 is xmath108 lastly when xmath8 is the ideal of a subvariety xmath3 the xmath109 is defined by the initial ideal xmath110 the sphere xmath111 is the set of directions in xmath42 write xmath112 for the projection the logarithmic limit set of a subvariety xmath3 of xmath33 is the set of accumulation points in xmath113 of sequences xmath114 where xmath115 is an unbounded set a sequence xmath116 is unbounded if its sequence of logarithms xmath117 is unbounded a rational polyhedral cone xmath118 is the set of points xmath100 which satisfy finitely many inequalities and equations of the form xmath119 where xmath120 the dimension of xmath20 is the dimension of its linear span and faces of xmath20 are proper subsets of xmath20 obtained by replacing some inequalities by equations the relative interior of xmath20 consists of its points not lying in any face also xmath20 is determined by xmath121 which is a finitely generated subsemigroup of xmath31 a fan xmath16 is a collection of rational polyhedral cones in xmath42 in which every two cones of xmath16 meet along a common face theorem t fttg the cone in xmath42 over the negative xmath122 of the logarithmic limit set of xmath3 is the set of xmath100 such that xmath123 equivalently it is the set of xmath100 such that for every xmath95 lying in the ideal xmath8 of xmath3 xmath104 is not a monomial this cone over xmath122 admits the structure of a rational polyhedral fan xmath16 with the property that if xmath124 lie in the relative interior of a cone xmath20 of xmath16 then xmath125 it is important to take xmath122 this is correct as we use the tropical convention of minimum which is forced by our use of toric varieties to prove theorem t one in section s tropicalcompact we write xmath126 for the initial ideal defined by points in the relative interior of a cone xmath20 of xmath16 the fan structure xmath16 is not canonical for it depends upon an identification xmath127 moreover it may be the case that xmath128 but xmath129 bergman xcite defined the logarithmic limit set of a subvariety of the torus xmath33 and bieri and groves xcite showed it was a finite union of convex polyhedral cones the connection to initial ideals was made more explicit through work of kapranov xcite and the above form is adapted from speyer and sturmfels xcite the logarithmic limit set of xmath3 is now called the tropical algebraic variety of xmath3 and this latter work led to the field of tropical geometry we consider coamoebae of lines in three dimensional space we will work in the torus xmath130 of xmath131 which is the quotient of xmath132 by the diagonal torus xmath133 and similarly in xmath134 the quotient of xmath135 by the diagonal xmath136 by coordinate lines and planes in xmath134 we mean the images in xmath134 of lines and planes in xmath135 parallel to some coordinate plane let xmath137 be a line in xmath131 not lying in any coordinate plane then xmath137 has a parameterization xmath138 longmapsto ell_0(s,t) ell_1(s,t) ell_2(s,t) ell_3(s,t) where xmath139 are non zero linear forms which do not all vanish at the same point for xmath140 let xmath141 be the zero of xmath142 the configuration of these zeroes determines the coamoeba of xmath143 which we will simply write as xmath62 suppose that two zeroes coincide say xmath144 then xmath145 for some xmath146 and so xmath137 lies in the translated subtorus xmath147 and its coamoeba xmath62 lies in the coordinate subspace of xmath148 defined by xmath149 in fact xmath62 is pulled back from the coamoeba of the projection of xmath137 to the xmath150 plane it follows that if there are only two distinct roots among xmath151 then xmath62 is a coordinate line of xmath148 if three of the roots are distinct then up to a translation the projection of the coamoeba xmath62 to the xmath150 plane looks like that of example ex linep2 so that xmath62 consists of two triangles lying in a coordinate plane for each xmath140 define a function depending upon a point xmath152 in mathbb p1 and xmath153 by xmath154 for each xmath140 let xmath155 be the image in xmath134 of xmath25 under the map xmath156 for each xmath140 xmath155 is a coordinate line in xmath134 that consists of accumulation points of xmath62 this follows from theorem t one for the main idea note that xmath157 for xmath153 is a curve in xmath134
whose hausdorff distance to the line xmath155 approaches 0 as xmath158 the phase limit set of xmath137 is the union of these four lines lemma l constant suppose that the zeroes xmath159 are distinct then xmath160 is constant along each arc of the circle in xmath161 through xmath159 after changing coordinates in xmath161 and translating in xmath162 rotating coordinates we may assume that these roots are xmath163 and so the circle becomes the real line choosing affine coordinates we may assume that xmath164 xmath165 and xmath166 so that we are in the situation of example ex linep2 then the statement of the lemma is the computation there for xmath66 real in which we obtained the coordinate points xmath79 xmath80 and xmath81 lemma l disjoint the phase limit lines xmath167 xmath168 xmath169 and xmath170 are disjoint if and only if the roots xmath151 do not all lie on a circle suppose that two of the limit lines meet say xmath167 and xmath168 without loss of generality we suppose that we have chosen coordinates on xmath135 and xmath161 so that xmath171 and xmath172 for xmath140 then there are points xmath173 such that xmath174 comparing the last two components we obtain xmath175 and so the zeroes xmath151 have the configuration below xmath176
figure omitted showing the zeroes zeta0 zeta1 zeta2 zeta3 and the angle theta
but then xmath151 are cocircular conversely if xmath151 lie on a circle xmath177 then by lemma l constant the lines xmath155 and xmath178 meet only if xmath179 and xmath180 are the endpoints of an arc of xmath181 lemma l immersion if the roots xmath151 do not all lie on a circle then the map xmath182 is an immersion let xmath183 which we consider to be a real two dimensional manifold after possibly reordering the roots the circle xmath184 containing xmath185 meets the circle xmath186 containing xmath187 transversally at xmath66 under the derivative of the map xmath188 tangent vectors at xmath66 to xmath184 and xmath186 are taken to nonzero vectors xmath189 and xmath190 in the tangent space to xmath135 furthermore as the four roots do not all lie on a circle we can not have both xmath191 and xmath192 and so this derivative has full rank two at xmath66 as a map from xmath193 which proves the lemma by these lemmas there is a fundamental difference between the coamoebae of lines when the roots of the lines xmath142 are cocircular and when they are not we examine each case in detail first choose coordinates so that xmath194 after dehomogenizing and separately rescaling each affine coordinate eg identifying xmath134 with xmath195 and applying phase shifts to each coordinate xmath196 of xmath195 we may assume that the map parametrizing xmath137 is xmath197 suppose first that the four roots are cocircular as xmath198 the other three lie on a real line in xmath199 which we may assume is xmath200 that is if the four roots are cocircular then up to coordinate change we may assume that the line xmath137 is real and the affine parametrization is also real for this reason we will call such lines xmath137 real we first study the boundary of xmath62 suppose that xmath66 lies on a contour xmath177 in the upper half plane as in figure f contour that contains semicircles of radius xmath204 centered at each root and a semicircle of radius xmath205 centered at 0 but otherwise lies along the real axis for xmath204 a sufficiently small positive number then xmath206 is constant on the four segments of xmath177 lying along xmath200 with
respective values xmath207 moving from left to right on the semicircles around xmath201 xmath202 and xmath203 two of the coordinates are essentially constant but not quite equal to either 0 or xmath208 while the third decreases from xmath208 to 0 finally on the large semicircle the three coordinates are nearly equal and increase from xmath209 to xmath210 the image xmath211 can be made as close as we please to the quadrilateral in xmath195 connecting the points of in cyclic order when xmath204 is sufficiently small thus the image of the upper half plane under the map xmath212 is a relatively open membrane in xmath195 that spans the quadrilateral it lies within the convex hull of this quadrilateral which is computed using the affine structure induced from xmath213 by the quotient xmath214 for this observe that its projection in any of the four coordinate directions parallel to its edges is one of the triangles of the coamoeba of the projected line in xmath215 of example ex linep2 and the convex hull of the quadrilateral is the intersection of the four preimages of these triangles because xmath137 is real the image of the lower half plane is isomorphic to the image of the upper half plane under the map xmath216 and so the coamoeba is symmetric in the origin of xmath195 and consists of two quadrilateral patches that meet at their vertices here are two views of the coamoeba of the line where the roots are xmath217 xmath218
two images omitted
now suppose that the roots xmath151 do not all lie on a circle by lemma l disjoint the four phase limit lines xmath219 are disjoint and the map from xmath137 to the coamoeba is an immersion figure f symmetric shows two views of the coamoeba in a fundamental domain of xmath134 when the roots are xmath220 where xmath221 is a primitive third root of unity this and other pictures of coamoebae of lines are animated on the webpage xcite the projection of this coamoeba along a coordinate direction parallel to one of the phase limit lines xmath155 gives a coamoeba of a line in xmath222 as we saw in example ex linep2 the line xmath155 is mapped to the interior of one triangle and the vertices of the triangles are the images of line segments lying on the coamoeba these three line segments come from the three arcs of the circle through the three roots other than xmath179 the root corresponding to xmath155 proposition p linesegments the interior of the coamoeba of a general line in xmath130 contains xmath223 line segments in triples parallel to each of the four coordinate directions the symmetric coamoeba we show in figure f symmetric has six additional line segments two each coming from the three longitudinal circles through a third root of unity and xmath224 two such segments are visible as pinch points in the leftmost view in figure f symmetric we ask what is the maximal number of line segments on a coamoeba of a line in xmath130 the phase limit set of a complex subvariety xmath225 is the set of all accumulation points of sequences xmath226 where xmath227 is an unbounded sequence for xmath228 xmath229 is the possibly empty initial scheme of xmath3 whose ideal is the initial ideal xmath110 where xmath8 is the ideal of xmath3 our main result is that the phase limit set of xmath3 is the union of the coamoebae of all its initial schemes this is a finite union by theorem t fttg xmath230 is non empty only when xmath17 lies in the cone over the logarithmic limit set xmath2 which can be given the structure of a finite union of rational
polyhedral cones such that any two points in the relative interior of the same cone xmath20 have the same initial scheme if we write xmath231 for the initial scheme corresponding to a cone xmath20 then the torus xmath232 acts on xmath233 by translation see eg corollary c inigeometry here xmath234 is the span of xmath121 a free abelian group of rank xmath235 this implies that xmath236 is a union of orbits of xmath237 and thus that xmath238 we review the standard dictionary relating initial ideals to toric degenerations in the context of subvarieties of xmath33 let xmath49 be a subvariety with ideal xmath107 we study xmath110 and the initial schemes xmath241 for xmath228 since xmath242 so that xmath243 we may assume that xmath244 as xmath31 is the lattice of one parameter subgroups of xmath33 xmath17 corresponds to a one parameter subgroup written as xmath245 define xmath246 by xmath247 the fiber of xmath248 over a point xmath249 is xmath250 let be the closure of xmath248 in xmath251 and set to be the fiber of xmath252 over xmath253 we first describe the ideal xmath255 of xmath248 for xmath99 the element xmath256 takes the value xmath257 on the element xmath258 and so if xmath259 then xmath98 takes the value xmath260 on xmath261 given a polynomial xmath95 of the form xmath262 define the polynomial xmath263m by xmath264 then xmath265 so xmath255 is generated by the polynomials xmath266 for xmath267 a general element of xmath255 is a linear combination of translates xmath268 of such polynomials for xmath269 if we set xmath101 to be the minimal exponent of xmath270 occurring in xmath266 then xmath271 and xmath272 this shows that xmath273m is generated by polynomials xmath274 where xmath267 since xmath275 andthe remaining terms are divisible by xmath270 we see that the ideal of xmath276 is generated by xmath277 which completes the proof we use proposition p initialscheme to prove one inclusion of theorem t one that xmath278 fix xmath279 and let xmath248 xmath252 and xmath280 be as in proposition p initialscheme and let xmath281 we show that xmath282 since xmath283 there is an irreducible curve xmath284 with xmath285 the projection of xmath286 to xmath52 is dominant so there exists a sequence xmath287 that converges to xmath288 with each xmath289 real and positive then xmath290 is the limit of the sequence xmath291 for each xmath292 set xmath293 since xmath289 is positive and real every component of xmath294 is positive and real and so xmath295 thus xmath290 is the limit of the sequence xmath296 since xmath297 converges to xmath298 and xmath289 converges to xmath78 the sequence xmath299 is unbounded which implies that xmath290 lies in the phase limit set of xmath3 this proves we complete the proof of theorem t one by establishing the other inclusion xmath300 suppose that xmath301 is an unbounded sequence to study the accumulation points of the sequence xmath302 we use a compactification of xmath3 that is adapted to its inclusion in xmath33 suitable compactifications are tevelev s tropical compactifications xcite for in these the boundary of xmath3 is composed of initial schemes xmath19 of xmath3 in a manner we describe below by theorem t fttg the cone over the logarithmic limit set xmath2 of xmath3 is the support of a rational polyhedral fan xmath16 whose cones xmath20 have the property that all initial ideals xmath110 coincide for xmath17 in the relative interior of xmath20 recall the construction of the toric variety xmath303 associated to a fan xmath16 xcite ch 6 for xmath18 set xmath304 set 
xmath305 and xmath306 which is naturally isomorphic to xmath307 where xmath308 is the subgroup generated by xmath121 the map xmath309 determines a comodule map xmath310tomathbb csigmaveeotimesmathbb cm which induces the action of the torus xmath33 on xmath311 its orbits correspond to faces of the cone xmath20 with the smallest orbit xmath312 corresponding to xmath20 itself the inclusion xmath313 is split by the semigroup map xmath314 which induces a mapxmath32twoheadrightarrowmathbb csigmaperp and thus we have the xmath33equivariant split inclusion xmath315 on orbits xmath316 in xmath311 the map xmath317 is simply the quotient by xmath318 if xmath319 with xmath320 then xmath321 so xmath310supsetmathbb ctauvee and so xmath322 since the quotient fields of xmath310 and xmath32 coincide these are inclusions of open sets and these varieties xmath311 for xmath18 glue together along these natural inclusions to give the toric variety xmath303 the torus xmath33 acts on xmath303 with an orbit xmath312 for each cone xmath20 of xmath16 since xmath323 xmath303 contains xmath33 as a dense subset and thus xmath3 is a non closed subvariety let xmath324 be the closure of xmath3 in xmath303 as the fan xmath16 is supported on the cone over xmath2 xmath324 will be a tropical compactification of xmath3 and xmath324 is complete 23 to understand the points of xmath325 we study the intersection xmath326 which is defined by xmath327 as well as the intersection xmath328 which is defined in xmath329 by the image xmath330 of xmath327 under the map xmath310twoheadrightarrowmathbb csigmaperp induced by let xmath267 since xmath20 is a cone in xmath16 we have that xmath332 for all xmath17 in the relative interior of xmath20 thus for xmath333 the function xmath334 on exponents of monomials of xmath6 is minimized on a superset of the support of xmath335 and if xmath17 lies in the relative interior of xmath20 then the minimizing set is the support of xmath335 multiplying xmath6 if necessary by xmath336 where xmath337 is some monomial of xmath338 we may assume that for every xmath333 the linear function xmath334 is nonnegative on the support of xmath6 so that xmath339 and the function is zero on the support of xmath335 furthermore if xmath17 lies in the relative interior of xmath20 then it vanishes exactly on the support of xmath335 thus xmath340 which completes the proof let xmath343 be a point in the phase limit set of xmath3 then there exists an unbounded sequence xmath301 with xmath344 since xmath324 is compact the sequence xmath345 has an accumulation point xmath66 in xmath324 as the sequence is unbounded xmath346 andso xmath347 thus xmath66 is a point of xmath328 for some cone xmath348 of xmath16 replacing xmath349 by a subsequence we may assume that xmath350 because the map xmath317 is continuous and is the identity on xmath312 we have that xmath351 converges to xmath352 and thus xmath353 corollary c inigeometry implies that xmath354 as xmath355 recall that on xmath356 xmath317 is the quotient by xmath318 thus we conclude from that xmath357 which completes the proof of theorem t one as xmath358 for any xmath17 in the relative interior of xmath20 in xcite the closure of a hypersurface coamoeba xmath359 for xmath95 was shown to contain a finite collection of these are translates of codimension one subtori xmath360 for xmath20 a cone in the normal fan of the newton polytope of xmath6 corresponding to an edge by theorem t one these translated tori are that part of the phase limit set of xmath3 corresponding to the cones 
xmath20 dual to the edges specifically xmath236 since xmath20 has dimension xmath361 the torus xmath362 acts with finitely many orbits on xmath233 which is therefore a union of finitely many translates of xmath362 thus xmath236 is a union of finitely many translates of xmath360 the logarithmic limit set xmath363 of a curve xmath364 is a finite collection of points in xmath113 each point gives a ray in the cone over xmath363 and the components of xmath365 corresponding to a ray xmath20 are finitely many translations of the dimension one subtorus xmath360 of xmath43 which we referred to as lines in section s lines these were the lines lying in the boundaries of the coamoebae xmath62 of the lines xmath137 in xmath366 and xmath367
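the cocircularity condition that controls whether the phase limit lines are disjoint can be checked numerically: four points of the complex plane lie on a common circle or line exactly when their cross ratio is real. the python sketch below implements this test and samples an argument map of the schematic form t -> (arg(t - z1), arg(t - z2), arg(t - z3)) over the upper half plane, in the spirit of the affine parametrization described above; the helper names, the particular roots and the sampling window are illustrative assumptions, since the exact expressions sit behind the suppressed xmath tokens.

import numpy as np

def cross_ratio(z0, z1, z2, z3):
    # cross ratio (z0, z1; z2, z3); it is real (or infinite) exactly when the four
    # points are cocircular or collinear
    return ((z0 - z2) * (z1 - z3)) / ((z0 - z3) * (z1 - z2))

def cocircular(z0, z1, z2, z3, tol=1e-9):
    # numerical version of the condition appearing in the disjointness lemma above
    return abs(cross_ratio(z0, z1, z2, z3).imag) < tol

def coamoeba_samples(z1, z2, z3, n=300):
    # sample the argument map t -> (arg(t - z1), arg(t - z2), arg(t - z3)) over the
    # upper half plane (an illustrative affine chart for the parametrized line)
    xs = np.linspace(-4.0, 4.0, n)
    ys = np.linspace(1e-3, 4.0, n)
    t = (xs[None, :] + 1j * ys[:, None]).ravel()
    return np.stack([np.angle(t - z) for z in (z1, z2, z3)], axis=1)

if __name__ == "__main__":
    # four real (hence cocircular) roots versus a non-cocircular configuration built
    # from a primitive third root of unity
    w = np.exp(2j * np.pi / 3)
    print(cocircular(0.0, 1.0, 2.0, 5.0))        # True
    print(cocircular(0.0, 1.0, w, np.conj(w)))   # False
    pts = coamoeba_samples(1.0, w, np.conj(w))
    print(pts.shape, pts.min(), pts.max())       # angle coordinates lie in (-pi, pi]

plotting the three angle coordinates returned by coamoeba_samples, for instance pairwise and modulo 2 pi, gives a quick numerical picture of the membranes and the boundary lines discussed above.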
a coamoeba is the image of a subvariety of a complex torus under the argument map to the real torus we describe the structure of the boundary of the coamoeba of a variety which we relate to its logarithmic limit set detailed examples of lines in three dimensional space illustrate and motivate these results
introduction coamoebae, tropical varieties and initial ideals lines in space structure of the phase limit set
one of the crucial properties of the dark matter dm is the feebleness of its coupling to the electromagnetic field the early decoupling of dm from the baryon photon fluid is a basic ingredient of the current picture of structure formation and various direct dm detection experiments set stringent limits on the coupling of dm with ordinary matter the phenomenological possibilities of a charged xcite or a milli charged xcite dm species or that of dm featuring an electric or magnetic dipole moment xcite were considered in several recent studies all pointing towards a severe suppression of any coupling of the dm with photons significant absorption or scattering of photons by dm appears to ruled out perhaps implying that dm does not cast shadows in this analysis we investigate the possibility that while the typical scattering cross section of dm with photons is very small photons with the right energy can resonantly scatter off dm particles we show that this resonant scattering might result in peculiar absorption features in the spectrum of distant sources this effect can occur if the extension of the standard model of particle physics required to accommodate a neutral dm particle candidate xmath0 also encompasses 1 a second heavier neutral particle xmath1 and 2 an electric andor magnetic transition dipole moment which couples the electromagnetic field to xmath0 and xmath1 we also assume for definiteness that xmath0 and xmath1 are fermionic fields in this setting there exists a special photon energy xmath2 where the scattering cross section of photons by dm is resonantly enhanced to the unitarity limit if the resonance is broad enough and the cross section and dm column number density are large enough the spectrum of distant photon sources might in principle feature a series of absorption lines corresponding to dm halos at different redshifts if such anomalous absorption features exist not only would they provide a smoking gun for the particle nature of dm but they could also potentially give information about the distribution of dm in the universe we adopt here a completely model independent setting where we indicate with xmath4 the masses of xmath5 and consider the effective interaction lagrangian xmath6 in the rest frame of xmath0 the photon dm scattering mediated through an xmath7channel xmath1 exchange see fig fig feyna is resonant at the photon energy xmath8 for xmath9 the xmath10dm scattering cross section can be approximated with the relativistic breit wigner bw formula xmath11 where xmath12 indicates the modulus of the center of mass momentum xmath7 is the center of mass energy squared xmath13 is the total decay width of xmath1 and xmath14 is the decay width of xmath1 into xmath0 and a photon the value of xmath15 at xmath16 saturates the unitarity limit provided xmath17 under this assumption even if xmath0 and xmath1 featured interactions different in their detailed microscopic nature from those described by eq eq lagr such as a transition milli charge or fermion sfermion loops in neutralino dm models the maximal resonant xmath10dm scattering cross section would always be given by xmath18 for xmath19 from eq eq lagr we compute xmath20 in the remainder of this study for conciseness we shall denote xmath21 and in order to maximize the scattering rate of photons by dm we will assume a model with xmath22 all the quantities above can be trivially rephrased in terms of xmath23 and of the two ratios xmath24 and xmath25 as xmath26 nonumber erm resgamma m2 frac1r22r quad sigmagammachi1erm 
resgamma frac8pileft1r2right2frac1m2203 cm nonumber sigmagammachi1tilde eequiv fracegammam2frac2pim22fracr2tilde ertilde e2fractildegamma2leftr2 2rtilde e1right2tildegamma2endaligned where xmath27 let us now turn to the effects of the resonant scattering of photons emitted by a distant source the mean specific intensity at the observed frequency xmath28 as seen by an observer at redshift xmath29 from the direction xmath30 is given by xmath31epsilonnu zpsietaurm eff where xmath32 xmath33 is the emissivity per unit comoving volume and xmath34 is the effective opacity the latter can be cast as xmath35 where xmath36 is the dm density to get a numerical feeling of whether the resonant scattering of photons leads to a sizable effect we define xmath37 where xmath38 indicates an effective dm surface density associated with the integral along the line of sight of the dm density when the quantity xmath39 in eq eq tau for xmath40 we expect a significant absorption for photon energies xmath19 once a photon from a background source scatters off an intervening dm particle the flux from the source itself is attenuated as long as the photon is diffused into an angle larger than the angular resolution of the instrument the kinematics of the process closely resembles that of the relativistic compton scattering xcite or thompson scattering at lower energies roughly speaking the relevant quantity can be cast as the fraction xmath41 of scattered photons which end up being scattered into an angle smaller than the instrumental angular resolution xmath42 over the total number of scattered photons for an order of magnitude estimateit is easy to show that apart from the details of the dm distribution geometry xmath41 depends on the two variables xmath42 and xmath43 making simple assumptions we estimate the values of xmath41 for an instrument featuring an angular resolution of one degree over the range xmath44 fall within xmath45 we therefore can safely assume that if a photon scatters off dm it is effectively lost ie the flux of photons from the background source is effectively depleted by photon dm scattering processes since the lagrangian eq lagr effectively couples the electromagnetic field to xmath0 and xmath1 depending upon the size of the coupling and the mass of the two particles xmath5 the constraints that apply to a milli charged particle xmath30 eg neutrinos featuring a small electric charge xmath46 xcite or the paratons of ref xcite will also be relevant for the present setting the parameter space of the model we consider here consists of the parameters xmath23 xmath47 and xmath48 for future convenience we choose to represent the viable range of parameters on the xmath49 plane at fixed representative values of xmath50 and 099 fig fig fig2 to translate the constraints from the milli charged scenario in the present language we need to compute the cross section xmath51 and compare it to the standard xmath52 cross section where xmath53 we find xmath54 a first simple astrophysical constraint on the model is based on avoiding excessive energy losses in stars that can produce xmath55 pairs by various reactions in particular through plasma decay processes the most stringent limits come from avoiding an unacceptable delay of helium ignition in low mass red giants the relevant energy scale in the process is the plasma frequency xmath56 kev and the limit applies roughly to masses xmath57 constraining xcite xmath58 while the lower limit stems from the energy losses argument the upper limit comes from the requirement 
that the mean free path of xmath0 is smaller than the physical size of the stellar core if the xmath0 particles get trapped the impact on the stellar evolution through energy transfer would in any case be negligible compared to other mechanisms xcite at such low masses however constraints from large scale structure and namely from lymanxmath3 forest data on the smallest possible mass for the dm particle force xmath59 kev xcite this bound corresponds to the left most horizontal lines in fig fig fig2 for a narrow range of effective xmath60 couplings the cooling limit discussed above can be applied to the sn 1987a data for a sn core plasma frequency xmath61 mev values of xmath57 can be ruled out in the range of couplings xcite xmath62 the limits from sn 1987a show up on the xmath49 plane as the rectangular regions of parameter space shown in fig fig fig2 if xmath5 reach thermal equilibrium in the early universe before big bang nucleosynthesis bbn they contribute to the energy density and thus to the expansion rate translating in the present language the constraints from bbn found in ref xcite if xmath63 then xmath64 the bbn limit corresponds to the central horizontal lines shown in fig fig fig2 as pointed out in xcite a strong constraint on the size of dm magnetic or electric dipole momentsis related to the size of the photon transverse vacuum polarization tensor see fig fig feyn b xmath65 the strongest constraint derived from eq vacu comes from the effect of the running of the fine structure constant for momenta ranging up to the xmath66 mass on the relationship between xmath67 xmath68 and xmath69 using eq lagr we computed xmath70 finding xmath71endaligned the theoretically computed standard model values and the experimental inputs yield a limit on extra contributions to the running of xmath3 namely xmath72 at 95 cl xcite with particle massesxmath73 eq eq loop reduces to xmath74 implying xmath75 for consistency with electroweak precision observables the limits from eq eq loop rule out the region below the line labeled as vacpolew precision in fig fig fig2 lastly high energy accelerator experiments also put constraints on particles with an effective coupling to photons such particles could have been seen in free quark searches xcite at the anomalous single photon asp detector at the slac storage ring pep xcite designed to look for events in the form xmath76 weakly interacting particles and in beam dump experiments from vector meson decays and direct drell yan production xcite the combination of all accelerator constraints rules out the relatively massive and strongly coupled models lying below the upper right curvy lines on the xmath49 plane shown for three values of xmath48 in fig fig fig2 this completes our discussion of the constraint on the parameter space of the model under consideration here the viable parameter space for a given xmath48 lies above the lines shown in fig fig fig2 while the portions of parameter space that are ruled out correspond to the regions of the plot below the various constraint lines since dm particles live in halos characterized by a velocity dispersion xmath77 which depending upon the mass of the dm halo can take values from roughly xmath78km s to over xmath79km s the momentum distribution of the dm particles approximately follows a maxwell boltzmann distribution xmath80 an incoming photon will therefore scatter off dm particles with the above momentum distribution and the effective scattering cross section will be given by the following average xmath81 where 
xmath82 where xmath83 is the cosine of the incident dmxmath10 angle and where the center of mass energy and momentum squared read xmath84 and xmath85 the integral in eq eq anginte can be solved analytically and we report the result in the appendix as a result of the averaging procedure of eq eq convolution the maximum of the effective cross section is no longer the peak value xmath86 but will be a non trivial combination of the latter xmath13 xmath77 and xmath87 we illustrate an instance of the result of the broadening of the bw cross section in fig fig resonance given an instrument with an energy resolution xmath88 defined as the relative energy resolution ie the ratio of the energy resolution at the energy xmath89 over the energy xmath89 itself we require the width xmath90 of the resonance which we define for convenience to be the range of values of xmath91 where xmath92 to be at least as large as xmath93 to a good approximation the solution to the equation xmath94 is independent of xmath47 since xmath95 for xmath96 also since xmath97 see eq eq distribution at fixed xmath48 and small xmath13 the ratio xmath98 is independent of xmath23 as well we therefore plot in fig fig sigmat curves at constant values of xmath98 on the xmath99 plane as clear from eq eq distribution the larger the value of the velocity dispersion xmath77 the larger xmath90 from eq eq array we also understand that as xmath100 xmath101 explaining why arbitrarily large values of xmath98 can be obtained for large xmath48 see the upper part of fig fig sigmat how would the spectrum of a background source look like after photons have resonantly scattered off dm we address this question in fig fig flux we assume for definiteness our results do nt critically depend upon the particular spectral shape a power law spectrum of the form xmath102 we consider a setup where xmath103 mev and xmath104 mev and as an example we focus on the case of a source located behind or at the center of a cluster with features similar to those of the coma cluster making use of the estimates provided in ref xcite we consider a dm surface density integrating the d05 dm profile xcite along the direction of the center of the cluster within one virial radius of the cluster center of xmath105 also we assume a velocity dispersion of xmath106km s notice that the redshift of the coma cluster xmath107 is small enough that the effect of photon redshift on the shape and location of the absorption feature is completely negligible making use of these estimates the effect on the background source spectrum depends entirely upon the value of xmath13 for large values of the latter quantity the dm halo is opaque to photons with energies around xmath2 we show in fig fig flux how the spectrum defined in eq eq spectru is affected by setups with various different values of xmath13 for xmath108 ev orange line the absorption is almost complete around xmath2 smaller values of xmath13 imply only a partial deformation of the spectrum and a reduced energy range where absorption effectively takes place for xmath109 ev the absorption feature would be almost invisible we summarize our results on the xmath110 plane in fig fig fig6 for the same reference values we employed in fig fig sigmat ie xmath111 and xmath106km s for this choice of parameters xmath112 the area shaded in yellow at the bottom right of the plot is ruled out by the various constraints discussed in the previous section the green dashed lines correspond to fixed values for xmath38 such that xmath39 in eq eq tau in 
units of xmath113 blue solid line for dm surface densitiesxmath114 absorption is possible for dm particle masses xmath115xmath116 mev the absorption feature in this plot is predicted according to eq eq eres to occur at a photon energy xmath117 henceforth in the range above we predict xmath118xmath119 mev the analogue of fig fig fig6 for other values of xmath77 and xmath48 can be directly read out of our results shown in fig fig sigmat taking into account the constraints shown in fig fig fig2 and the fact that the values of xmath38 such that xmath39 in eq eq tau scale approximately linearly with xmath48 for instance again for dm surface densities xmath120 we would predict a dm particle mass xmath121 mev for xmath122 and around 150 mev for xmath123 analogously the location of the absorption feature is predicted in the range xmath124 1 mev to 150 gev for xmath122 and at xmath124 10500 kev for xmath123 in the present setup therefore for reasonable dm surface densities the location of the absorption feature varies in a wide range of photon energies from tens of kev up to several gev photons from background sources will in general pass through various dm halos at all intermediate redshifts resulting in a cumulative cosmological effect leading in principle to a broadening and modulation of the absorption feature described above in ref xcite for instance an analogous computation was carried out for the monochromatic photon emission from dm pair annihilations into two photons the detailed setup here is however different as the effect depends linearly rather than quadratically on the dark matter density distribution a similar cosmological broadening was also discussed for the case of resonant xmath125 high energy neutrino absorption eg in ref the spatial homogeneity of the cosmic neutrino background results however in a completely different column density structure than in the present case the detailed computation of this cumulative cosmological effect depends on several assumptions about the distribution and nature of dm structures in the universe on the presence of dm clumps or other substructures xcite and on the assumed halo density profiles and velocity distributions xcite we leave the detailed analysis of this effect to a future study in passing we note that thermally produced dm candidates with masses in the tens of kev to the mev range often referred to as warm dm candidates exhibit potentially interesting features in structure formation suppressing through free streaming small scale structures and partly alleviating the cusp problem of cold dm models see eg ref xcite and references therein depending on the details of the particle physics model constraints on such warm dm candidates might be used to constrain our scenario closing the photon line in fig fig feyn a into a loop generates radiative corrections to xmath126 if the latter are too large the values we employed must be corrected accordingly and small values of xmath126 might not be theoretically allowed we can estimate the size of these corrections as xmath127 radiative corrections are therefore smaller than xmath128 provided xmath129 a condition which is always widely satisfied in the parameter space under consideration here a mass mixing term would also be generated by the interaction responsible for the effective lagrangian eq lagr in principle one should then rotate eq eq lagr to the proper mass eigenstate basis however the relative size of the induced xmath130 mixing is also very suppressed as it roughly scales as xmath131 and 
can be thus safely neglected here in the scenario we are discussing here the xmath0 particles can also pair annihilate into two photons through a xmath1 xmath132 or xmath133 channel exchange the resulting cross section can be estimated as xmath134 pair annihilation of xmath0 s into photons can a priori be the process through which dm annihilates in the early universe and potentially this could thermally produce the amount of dm inferred in the current cosmological standard model in the range of couplings and masses we obtain here the above mentioned annihilation channel is insufficient to produce a large enough pair annihilation rate in the early universe in order to get the required dm abundance xmath135 other channels otherwise irrelevant for the present discussion and compatible with the present setting can however contribute to give the xmath0 particles the right pair annihilation rate the same diagram discussed above and the same pair annihilation cross section intervene in the pair annihilation rate of xmath0 s today into monochromatic photons of energy xmath136 the flux of photons per unit solid angle from monochromatic pair annihilations of xmath0 s can be written as xmath137 where in this instance the quantity xmath138 refers to the following line of sight integral along the direction xmath139 averaged over the solid angle xmath140 xmath141 when xmath142 we can derive an upper limit to the monochromatic photon flux which is independent of xmath4 namely xmath143 taking an angular region xmath144 sr in the direction of the galactic center the range of values which xmath138 can take for various viable dm halo models is xmath145xmath146 this means that one expects a flux of monochromatic photons in the range xmath147xmath148 the diffuse gamma ray flux in the galactic center region as measured by comptel and egret xcite is at the level of 001 xmath149 at a gamma ray energy of 1 3 mev extrapolating to smaller energies we expect an even larger flux at energies around or smaller than 100 kev this makes it extremely hard to reconstruct a would be annihilation signal from the galactic background dedicated searches for line emissions show that instruments such as integral spi also fail to achieve the sensitivity required here xcite on the other hand this also means that the class of models discussed above is not currently constrained by monochromatic photon emissions furthermore observations of objects where the diffuse gamma ray background is expected to be suppressed such as the nearby dwarf galaxies xcite can potentially lead to constraints or even to the detection of the monochromatic emission line predicted here if as we describe here photons scatter off dm at significant rates one might also expect other associated features besides the absorption lines and the monochromatic emissions described above scattering off dm might generate an effective index of refraction in the photon propagation possibly inducing eg time delays in transient sources at different frequencies or frequency dependent distortions of the photon paths for steady sources a detailed discussion of these effects lies however beyond the scopes of the present analysis neutralino dm in the context of the minimal supersymmetric extension of the standard model mssm can in principle produce an effective lagrangian setup as that in eq eq lagr for instance through fermion sfermion loops coupling two different neutralinos xmath150 and xmath151 xmath152 from the discussion above however it is clear that supersymmetric dm can not 
produce any sizable photon absorption first the lightest supersymmetric particle lsp in any viable low energy supersymmetry setup is typically heavier than at least a few gev for exceptions eg in the next to mssm see xcite this implies as can be read off fig fig fig6 very large values of xmath38 to get xmath39 in eq eq tau secondly the assumptions we made at the beginning that xmath22 does not hold in general in the mssm the radiative xmath153 decay can be the dominant mode only in restricted regions of parameter space eg when phase space suppresses other three body decay modes the resulting effective xmath47 in the notation set above is in any case limited from above by xmath154 requiring xmath155 implies xmath156 and typically xmath157 furthermore since in the mssm when two neutralinos are quasi degenerate the lightest chargino is also quasi degenerate with them lep2 limits on the chargino mass xcite force xmath158 gev xmath157 also implies xmath159 gev these values for the model parameters imply a small relative widths and b too large dm surface densities for the absorption feature to be detectable relaxing the requirement that xmath22 would not help anyway since the cross section eq xsec receives the large suppression factor xmath160 and the photon absorption process is again suppressed one can envision however various particle physics scenarios where the phenomenology described above can take place for instance a concrete particle physics setup which can explain at once the dm abundance neutrino masses and mixing the baryon asymmetry of the universe and potentially inflation is the so called xmath161msm xcite or one of its extensions xcite these models feature a light quasi stable sterile neutrino with a mass in the tens of kev xcite up to xmath16210 mev xcite range and heavier majorana neutrinos with a mass at the gev scale extending this class of models with an effective interaction of the form of our eq eq lagr gives rise to the phenomenology described above and hence to possible resonant photon scattering we have shown that photons can in principle resonantly scatter off dm through an effective lagrangian featuring a dipole transition moment coupling photons the dm particle xmath0 and a heavier neutral particle xmath1 we discussed the constraints on the model from stellar energy losses data from sn 1987a the lymanxmath3 forest big bang nucleosynthesis electro weak precision measurements and accelerator searches the effective resonant absorption cross section is broadened by the effect of the momentum distribution of dm particles in dm halos we showed that dm particles in the tens of kev to a few mev range can lead to resonant photon scattering resulting in absorption lines which can lie between tens of kev up to tens of gev provided the dm surface mass density is at least of xmath163 we also pointed out that typical supersymmetric dm the weak scale neutralino does not cast any shadows ie it does not absorb photons while photon absorption can take place in other particle physics setups which can explain various pieces of physics beyond the standard model we thank john beacom and christopher hirata for insightful comments on an earlier draft of this manuscript we thank vincenzo cirigliano shane davis mikhail gorshteyn tesla jeltema marc kamionkowski and enrico ramirez ruiz for related discussions sp is supported in part by doe grants de fg03 92er40701 and de fg02 05er41361 and nasa grant nng05gf69 g ks is supported by nasa through hubble fellowship grant hst hf0119101a awarded by 
the space telescope science institute which is operated by the association of universities for research in astronomy inc for nasa under contract nas 5 26555 the angular integral xmath164 can be computed analytically with the result xmath165times03 cm nonumberleft 2 fracegammapleftcff1f2cg grightright endaligned where xmath166 xmath167 xmath16803 cm nonumberfrac 2m12 delta m2m2gammachi2delta m22 m22gammachi22 endaligned xmath169 endaligned xmath170 endaligned and xmath17103 cm nonumber arctanleftfracdelta m2 2egamma leftpsqrtm1 2p2rightm2gammachi2rightendaligned s davidson s hannestad and g raffelt jhep 0005 2000 003 arxiv hep ph0001179 s l dubovsky d s gorbunov and g i rubtsov jetp lett 79 2004 1 pisma zh 79 2004 3 arxiv hep ph0311189 s colafrancesco s profumo and p ullio astron astrophys 455 2006 21 arxiv astro ph0507575 j diemand m zemp b moore j stadel and m carollo mon not 364 2005 665 arxiv astro ph0504215 s profumo k sigurdson and m kamionkowski phys 97 2006 031301 arxiv astro ph0603373 p bode j p ostriker and n turok astrophys j 556 2001 93 arxiv astro ph0010389 b moore t quinn f governato j stadel and g lake mon not soc 310 1147 1999 arxiv astro ph9903164 v avila reese p colin o valenzuela e donghia and c firmani astrophys j 559 2001 516 arxiv astro ph0010525 a w strong h bloemen r diehl w hermsen and v schoenfelder arxiv astro ph9811211 a w strong i v moskalenko and o reimer astrophys j 537 2000 763 erratum ibid 541 2000 1109 arxiv astro ph9811296 a w strong i v moskalenko and o reimer astrophys j 613 2004 962 arxiv astro ph0406254
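as a numerical illustration of the velocity broadening and opacity estimates discussed above, the python sketch below convolves a breit wigner line shape with a gaussian (maxwell boltzmann) line of sight velocity distribution, which for nonrelativistic dm reduces to a voigt profile, and converts a dm column number density into an effective optical depth and a transmitted flux fraction. the resonance energy used below is the standard s-channel kinematic result for a photon hitting a particle at rest, consistent with the definitions in the text; the masses, width, velocity dispersion and column density are illustrative assumptions, not values taken from the paper, and the cross section is normalized to its (unitarity limited) peak value.

import numpy as np
from scipy.special import voigt_profile

# illustrative parameters (assumptions of this sketch, not values taken from the text)
m1, m2  = 1.0e6, 2.0e6      # chi_1 and chi_2 masses in ev, so e_res lands in the mev range
Gamma   = 1.0               # total chi_2 width in ev
sigma_v = 1.0e3 / 3.0e5     # one dimensional velocity dispersion, ~1000 km/s in units of c
column  = 1.0e4             # dm column number density in units where the peak cross section is 1

E_res = (m2**2 - m1**2) / (2.0 * m1)   # standard s-channel resonance energy for a photon on chi_1 at rest

def sigma_eff(E):
    # thermally averaged cross section in units of the peak value: for nonrelativistic dm the
    # breit wigner in s becomes a lorentzian in the line of sight velocity, and averaging over a
    # gaussian velocity distribution gives a voigt profile (doppler broadening of the resonance)
    gamma_beta = m2 * Gamma / (2.0 * E * m1)   # lorentzian half width expressed in beta_z
    beta_star  = 1.0 - E_res / E               # line of sight velocity needed to hit the resonance
    return np.pi * gamma_beta * voigt_profile(beta_star, sigma_v, gamma_beta)

if __name__ == "__main__":
    for x in (0.99, 0.995, 1.0, 1.005, 1.01):
        s_eff = sigma_eff(x * E_res)
        tau = s_eff * column                   # effective optical depth along the dm column
        print(f"E/E_res = {x:5.3f}   sigma_eff/sigma_peak = {s_eff:.2e}   "
              f"transmitted fraction = {np.exp(-tau):.3f}")

with these illustrative numbers the transmitted flux drops to the percent level at the resonance energy and recovers within about a percent of e_res, mimicking the kind of velocity broadened absorption line discussed in the text.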
we carry out a model independent study of resonant photon scattering off dark matter dm particles the dm particle xmath0 can feature an electric or magnetic transition dipole moment which couples it with photons and a heavier neutral particle xmath1 resonant photon scattering then takes place at a special energy xmath2 set by the masses of xmath0 and xmath1 with the width of the resonance set by the size of the transition dipole moment we compute the constraints on the parameter space of the model from stellar energy losses data from sn 1987a the lymanxmath3 forest big bang nucleosynthesis electro weak precision measurements and accelerator searches we show that the velocity broadening of the resonance plays an essential role for the possibility of the detection of a spectral feature originating from resonant photon dm scattering depending upon the particle setup and the dm surface mass density the favored range of dm particle masses lies between tens of kev and a few mev while the resonant photon absorption energy is predicted to be between tens of kev and few gev
introduction the model constraints on the parameter space resonant dm-photon scattering discussion conclusions the angular integral xmath164
in lattice qcd the finite lattice spacing and finite lattice volume effects on the gluon propagator can be investigated with the help of lattice simulations at several lattice spacings and physical volumes herewe report on such a calculation for details on the lattice setupsee xcite in figure fig gluevol we show the renormalized gluon propagator at xmath0 gev for all lattice simulations note that we compare our data with the large volume simulations performed by the berlin moscow adelaide collaboration xcite see xcite for details in each plotwe show data for a given value of xmath1 ie data in the same plot has the same lattice spacing the plots show that for a given lattice spacing the infrared gluon propagator decreases as the lattice volume increases for larger momenta the lattice data is less dependent on the lattice volume indeed for momenta above xmath2900 mev the lattice data define a unique curve we can also investigate finite volume effects by comparing the renormalized gluon propagator computed using the same physical volume but different xmath1 values we are able to consider 4 different sets with similar physical volumes see figure fig gluespac although the physical volumes considered do not match perfectly one can see in figure fig gluespac that for momenta above xmath2 900 mev the lattice data define a unique curve this means that the renormalization procedure has been able to remove all dependence on the ultraviolet cut off xmath3 for the mid and high momentum regions however a comparison between figures fig gluevol and fig gluespac shows that in the infrared region the corrections due to the finite lattice spacing seem to be larger than the corrections associated with the finite lattice volume in particular figure fig gluespac shows that the simulations performed with xmath4 ie with a coarse lattice spacing underestimate the gluon propagator in the infrared region in this sense the large volume simulations performed by the berlin moscow adelaide collaboration provide a lower bound for the continuum infrared propagator we also aim to study how temperature changes the gluon propagator at finite temperature the gluon propagator is described by two tensor structures xmath5 where the transverse and longitudinal projectors are defined by xmath6 the transverse xmath7 and longitudinal xmath8 propagators are given by xmath9 xmath10 on the lattice finite temperature is introduced by reducing the temporal extent of the lattice ie we work with lattices xmath11 with xmath12 the temperature is defined by xmath13 in table tempsetup we show the lattice setup of our simulation simulations in this section have been performed with the help of chroma library xcite for the determination of the lattice spacing we fit the string tension data in xcite in order to have a function xmath14 note also that we have been careful in the choice of the parameters in particular we have only two different spatial physical volumes xmath15 and xmath16 this allows for a better control of finite size effects lattice setup used for the computation of the gluon propagator at finite temperature colsoptionsheader tempsetup figures fig transtemp and fig longtemp show the results obtained up to date we see that the transverse propagator in the infrared region decreases with the temperature moreover this component shows finite volume effects in particular the large volume data exhibits a turnover in the infrared not seen at the small volume data the longitudinal component increases for temperatures below xmath17 then the 
data exhibits a discontinuity around xmath18 and the propagator decreases for xmath19 the behaviour of the gluon propagator as a function of the temperature can also be seen in the 3d plots shown in figure fig3dtemp as shown above data for different physical spatial volumes exhibits finite volume effects this can be seen in more detail in figure fig finvoltemp where we show the propagators for two volumes at t 324 mev moreover we are also able to check for finite lattice spacing effects at t 305 mev where we carried out two different simulations with similar physical volumes and temperatures but different lattice spacings for this case it seems that finite lattice spacing effects are under control with the exception of the zero momentum for the transverse component see figure fig lattspactemp our results show that a better understanding of lattice effects is needed before we can reach our ultimate goal which is the modelling of the propagators as a function of momentum and temperature paulo silva is supported by fct under contract sfrh bpd409982007 work supported by projects cern fp1236122011 cern fp1236202011 and ptdc fis1009682008 projects developed under initiative qren financed by ue feder through programme compete
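the transverse/longitudinal decomposition used above can be made concrete with a short python sketch, assuming the standard euclidean finite temperature conventions (heat bath at rest, landau gauge): the transverse projector is purely spatial and orthogonal to the spatial momentum, the longitudinal projector is the remainder of the full four dimensional transverse projector, and the form factors follow by contracting the colour averaged propagator with the two projectors and dividing by their traces. the example momentum and propagator matrix are made up inputs.

import numpy as np

def projectors(q):
    # transverse and longitudinal projectors for a euclidean four momentum q = (qx, qy, qz, q4),
    # with the heat bath singling out the temporal direction; the transverse projector is purely
    # spatial and orthogonal to the spatial momentum, the longitudinal one is the remainder of
    # the full four dimensional transverse (landau gauge) projector
    q = np.asarray(q, dtype=float)
    qs, q2 = q[:3], q @ q
    PT = np.zeros((4, 4))
    if qs @ qs > 0.0:   # the zero spatial momentum point needs separate treatment
        PT[:3, :3] = np.eye(3) - np.outer(qs, qs) / (qs @ qs)
    PL = np.eye(4) - np.outer(q, q) / q2 - PT
    return PT, PL

def decompose(D, q):
    # extract the form factors d_t and d_l from a colour averaged 4x4 propagator matrix
    # by contracting with the projectors and dividing by their traces (2 and 1)
    PT, PL = projectors(q)
    return np.einsum('mn,mn->', PT, D) / 2.0, np.einsum('mn,mn->', PL, D)

if __name__ == "__main__":
    q = np.array([0.3, 0.0, 0.0, 0.5])   # made-up momentum with spatial and matsubara components
    PT, PL = projectors(q)
    D = 1.7 * PT + 0.9 * PL              # fake propagator with known form factors
    print(decompose(D, q))               # recovers (1.7, 0.9)

the zero spatial momentum point, where the transverse projector is not defined, has to be treated separately, which matches the remark above about the zero momentum transverse component.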
we study the landau gauge gluon propagator at zero and finite temperature using lattice simulations particular attention is given to the finite size effects and to the infrared behaviour
the gluon propagator at zero temperature the gluon propagator at finite temperature acknowledgments
cosmological inflation xcite xcite xcite xcite was proposed to address the horizon problem flatness problem and monopole problem in the context of big bang cosmology by postulating that in the early universe there was a brief period of rapid exponential expansion one can explain without fine tuning the observed facts that the universe is the same in different regions which are causally disconnected the horizon problem the universe appears to be spatially flat the flatness problem and that there appears to be a much lower density of grand unified monopoles than onewould naively expect however the inflation hypothesis itself has several unanswered questions i what is the detailed mechanism for inflation ii what precedes the inflationary phase or how does inflation turn on iii how does the universe make a graceful exit from this early inflationary phase to standard friedman robertson walker frw radiation dominated expansion ie how does inflation turn off in many of the original models xcite xcite xcite inflationary expansionwas driven by a phase transition at the grand unified scale the mechanism for inflation we propose hereis based on particle creation from the gravitational field and it need not occur at the same time energy scale compared to the canonical examples of inflationary mechanisms specifically we focus on particle creation connected with the hawking like radiation that occurs in frw space time this is similar to black hole evaporation but time reversed for an astrophysical size black hole hawking radiationis at first a very weak channel for mass energy loss for the black hole as the black hole decreases in mass due to loss from hawking radiation it gets hotter andevaporates at a faster rate beyond some size hawking radiationbecomes very strong so that near the end stages of evaporation the black hole will radiate explosively however near the end stages of evaporation one can no longer trust the semi classical calculation xcite leading to hawking radiation one common speculation is that near the end stages of evaporation where quantum gravity should become important that hawking radiation will turn off one concrete proposal along these lines is the suggestion that in the quantum gravity regime space time becomes non commutative which leads naturally to a turning off of hawking radiation in the late stages of black hole evaporation xcite applying these ideas to frw space timeleads to a time reversed version of black hole evaporation during the very earliest stages of the universe when the energy density is large so that one is in the quantum gravity regime the hawking radiation from the frw would be turned off until the universe expanded to the point when quantum gravity started to give way to semi classical gravity at this point the hawking radiation of frw space time would turn on and as we show below would drive a period of exponential expansion as the universe expanded the hawking temperature of the frw universe would decrease until the universe becomes dominated by ordinary radiation rather than hawking radiation at this pointthe universe would make a gracefully transition from inflationary expansion to the power law expansion associated with a universe dominated by ordinary radiation already in the 1930s schrdinger xcite put forward the idea that particle creation can influence cosmological evolution more recently parker xcite and others xcitexcite have followed this early work of schrdinger with studies of how particle creation can affect the structure of cosmological space 
times as pointed out in xcitethere are two points about cosmological history which are well addressed by these particle creation models first one can explain very well the enormous entropy production in the early universe via the irreversible energy flow from the gravitational field to the created particles second since the matter creation is an irreversible process one avoids the initial singularity in cosmological space times xcite in this modelthe universe begins from an instability of the vacuum instead of a singularity the universe then rapidly moves through an inflationary phase followed by a radiation dominated era and finally followed by a matter dust dominated era our particle creation hawking radiation model for inflation is closely tied to thermodynamics in a given space time so we begin by collecting together some thermodynamic results the first law of thermodynamics reads xmath0 where xmath1 is the heat flow into out of the system during some interval of cosmic time from xmath2 to xmath3 xmath4 is the energy density xmath5 is the volume and xmath6 is the thermodynamic pressure dividing this equation by xmath7 gives the following differential form for the first law of thermodynamics xmath8 for most cosmological models the assumption is made that the universe is a closed adiabatic system which means xmath9 with this assumption the second law of thermodynamics xmath10 leads to a non change in the entropy ie xmath11 during the cosmic time interval xmath7 this line of reasoning contradicts the observed fact that the universe has an enormous entropy this contradiction can be addressed by having irreversible particle creation from the gravitational field ie hawking radiation from an frw space time this irreversible particle production leads to entropy production the change in heat xmath1 is now completely due to the change of the number of particles coming from particle creation therefore there is a transfer of energy from the gravitational field to the created matter and the universe is treated like an open adiabatic thermodynamic system xcite we review the relevant parts of the frw space time the standard frw metric is xmath12 labelfrwendaligned where xmath13 is the scale factor and xmath14 is the spatial curvature of the universe xmath15 is flat xmath16 is open and xmath17 is closed the einstein field equations xmath18 for this metric have a time time xmath19 component and space space xmath20 component given respectively by xmath21 in the above equations xmath4 is the energy density and xmath6 is pressure of the matter source fluid field combining these two equations gives the standard conservation relationship xmath22 which clearly describes the universe as a closed adiabatic system with xmath9 as mentioned above this leads to xmath11 which then seems to contradict the very large observed entropy of the universe allowing for matter creation alters things first in the presence of matter creation the equations are altered the first equation on the left of remains the same but the second equation is altered and one has an additional equation for the time rate of change of particle number density these modified and additional equations are xcite xmath23 the overdot implies a time derivative xmath24 is particle number density xmath25 is the matter creation rate and xmath26 is the pressure due to matter creation the matter creation rate and the matter creation pressure are connected by the following relationship xcite xmath27 if one assumes that xmath4 and xmath6 describe a normal 
fluid so that one has the energy condition xmath28 assuming that xmath29 this condition is known as the weak energy condition xcite and in addition that the matter creation rate is positive xmath30 one can see that xmath26 of is positive and thus contributes a negative pressure to such negative pressures can drive accelerated expansion such as during the early inflationary phase of the universe or during the current dark energy dominated era of the universe it would be economical if this negative pressure that occurs due to hawking radiation in frw space time could drive both the inflationary era and the present accelerated phase of the universe which is normally attributed to dark energy we will show that while this particle creation pressure can drive inflation it can not drive the present accelerated expansion in the form one could easily explain both inflation and the current accelerated expansion by simply choosing a matter creation rate xmath31 to produce whatever acceleration if xmath30 or deceleration if xmath32 one wants for example if one wants exponential expansion xmath33 one should choose xmath34 xcite however this choice has very little physical motivation beyond giving one the result one wanted in advance the strength of our proposal is that the particle creation comes from a specific mechanism hawking radiation in frw space time and as such leads to definite predictions which allow the model to be verified or ruled out we will see that our mechanism does in fact lead to a particle production rate xmath35 we now move on to a discussion of hawking radiation and associated temperature in frw space time since the frw space time is dynamical the definition of the cosmological event horizon is subtle however one can define the apparent horizon knowing the local properties of the space time in order to dothis one can rewrite in the following form xcite xmath36 where xmath37 and xmath38 diagxmath39 and xmath40 the position of the apparent horizon is given by the root xmath41 of the equation xmath42 expanding this equation over xmath43 sector and simplifying we get the position of the apparent horizon xmath44 xcite xmath45tildertildera 0 implies tilde ra fraccsqrth2frackc2a2 labelrah using the above one can find the hawking temperature of the apparent horizon xcite xmath46 the first equality above is the standard relationship between the hawking temperature and surface gravity xmath47 at the horizon of a given space time for frw space time the surface gravity is xmath48 thus in general the temperature xmath49 depends on both xmath41 and its time derivative xmath50 however during an inflationary phase the universe s scale factor takes the form xmath51 so that xmath52 if xmath53 satisfies xmath54 later we show this is the case for our model of inflation we have from xmath55 and xmath56 thus the temperature in simplifies toxcite xmath57 in the final approximation we are again assuming xmath54 which as mentioned above we will justify later before moving to a detailed calculation of how the particle creation pressure affects the evolution of the early universe in the case when this pressure comes from the particle creation from hawking radiation we give some numerical comparisons which show that this mechanism is of the correct order of magnitude to explain inflation considering xmath58 to be inverse of the planck time xmath59 gives from xmath60 on the other hand at planck energy xmath61 gives a planck temperature of xmath62 thus the hawking temperature of frw space time at very early is 
around the planck temperature this large temperature associated with hawking radiation of frw space time in the early universe is a good indication that our proposed mechanism has the proper order of magnitude to be a major factor in the early evolution of the universe our proposed hawking radiation mechanism for inflation is the inverse of black hole evaporate for astrophysical black holesthe evaporation process begins very weakly for a black hole having the mass of our sun the temperature of the black body radiation emitted is xmath63 however at the end stages of evaporation when the black holes has a small mass the evaporation will proceed explosively at this pointone is not justified in using the approximations that led to hawking radiation as a thermal spectrum and it is said that one must have in hand a quantum theory of gravity to understand these end stages of black hole evaporation for frw space time oneis not justified in using hawking radiation results at the very early stage of the universe one should have a theory of quantum gravity to understand this regime as the universe expands there will be a point at which the approximations leading to hawking radiation from frw space time become valid it is at this point that our hawking radiation from frw space time mechanism for inflation turns on and inflation begins as the universe inflates further the hawking temperature naturally decreases and our inflation mechanism will automatically turn off one can see that our proposed process is the inverse of black hole evaporation since the direction of radiation flux of the hawking radiation for the apparent horizon in a frw space time is the opposite from that of a schwarzschild black hole event horizon for black holes the created particles escape outside the event horizon towards asymptotic infinity while for the apparent horizon of frw space time the created particles come inward from the horizon due to the isotropy of frw space time the radiation is isotropic from all directions the net result is an effective power gain in the universe given by the stephan boltzmann s b radiation law in summarythe difference in the radiation direction from a schwarzschild black hole and from the frw space time is as follows for black holes the time rate of energy change xmath64 is negative ie they lose power during hawking evaporation while for the frw space time the time rate of energy change xmath64 is positive ie the universe gains energy according to the stephan boltzmann radiation law the time rate of energy gain due to hawking radiation is xmath65 where xmath66 is the s b constant and xmath67 is the area of apparent horizon now one can substitute into but in that case the right hand side of which is the rate of change in energy flux through the apparent horizon has to be evaluated at xmath68 so that xmath69tildertilderasigma ah leftfrachbar h2pi kbright4 labelgenrl where we have used to calculate the left hand side we first consider the volume of a sphere of arbitrary radius by ignoring the curvature term ie we take xmath15 and the volume is given by xmath70 note that here we take the radius at arbitrary xmath71 only after performing the xmath2 derivative in weset xmath72 on the other hand for the right hand side which represents the flow of energy across the apparent horizon we take xmath73 where xmath74 comes from using these expressions for the area and volume in yields a modified continuity equation xmath75if one ignored the effect of the hawking radiation particle creation term on the right 
hand side of by setting xmath76 in then becomes xmath77 which is the usual continuity equation in the absence of particle creation using xmath15 and xmath78 we now rewrite using the first equation in as xmath79 where we have taken the equation of state for ordinary matter as xmath80 and the time dependent equation of state due to particle creation is xmath81 the equation of state parameter due to particle creation is xmath82 the constant xmath83 above is essentially the inverse of the planck energy density xmath84 as we will show later it is this constant xmath83 sets the time and length scale for our inflation mechanism this may also be different from the usual scale of inflation which is set by the grand unified scale moving the xmath85 term in from the right hand side to the left hand side one can see that this particle creation term acts like a negative pressure for the present universethis term is negligible the present value of the energy density of the universe is xmath86 so that xmath87 term on the right hand side of is effectively zero thus this effective negative pressure can not explain the current accelerated expansion of the universe one still needs dark energy however in the early universe xmath4 can be large enough so that the particle creation pressure on right hand side of dominates and as we will see this can drive inflation and also give a natural turn off for inflation at this pointit should be mentioned that does not violate wald s first axiom xcite on the energy momentum tensor which is nothing but the usual conservation equation xmath88 xcite to see this we note that in the absence of particle creation the right hand side of vanishes and the energy momentum tensor has the form xmath89 which satisfies the conservation equation however in the presence of particle creation the above definition of xmath90 fails to simultaneously describe the conservation law and particle creation in order to take both features into account oneneeds to consider a modification xmath91 which can deal with particle creation such a scenario is normally discussed in relationship to particle creation from black holes since under the appropriate choice of vacuum state ie the unruh vacuum black holes emit real particles in the form of thermal radiation so that there is a power loss associated with hawking radiation it may appear that wald s first axiom is violated however as demonstrated in xcite for such cases it is the regularized energy momentum tensor xmath92 which satisfies the conservation equation xmath93 for the unruh vacuumthe regularized energy momentum tensor is xmath94 thus it is clear that when one is dealing with particle creation it is the regularized modified energy momentum tensor that satisfies wald s axioms this is exactly the picture in our case looking into the relation one can see that the conservation equation in our case is given by xmath95 where the modified energy momentum tensor has the form xmath96 in the above relation xmath97 xmath98 is independent of time and given by whereas the remaining part xmath99 only contains the contribution from xmath100 the particle creation pressure due to hawking radiation in addition to the negative pressure associated with particle creation due to hawking radiation one can also calculate the effective particle creation rate xmath101 and compare with general result given in usingthe equation of state xmath102 one can re write as xmath103 equating with gives the time dependent matter creation rate associated with particle creation due to 
hawking radiation in frw space time xmath104 recall that in order to have exponential expansion xmath105 one needs the creation rate from to be xmath106 xcite thus from in order to have exponential expansion ie inflation one needs xmath107 to be approximately the same size as xmath108 ie one needs xmath109 if one assumes the equation of state for ordinary radiation ie xmath110 since xmath111 where xmath112 this equality ie xmath109 will occur when the density xmath113 which is approximately the planck density this density corresponds to the density in the early universe thus the rough calculations again point toward there being a large enough matter creation rate xmath114 in the early universe to drive inflationary expansion however as the universe expands and xmath115 drops the creation rate xmath114 will decrease and this hawking radiation driven mechanism for inflation will turn off we now give a detailed calculation of inflation driven by hawking radiation inserting xmath116 into one can integrate the resulting equation to find the energy density xmath4 as a function of scale factor xmath117 xmath118 where xmath119 is a constant and in the last equality we have taken the equation of state of the ordinary matter to be that of radiation ie xmath120 since we want the early hawking radiation inflation phase to be followed by a universe dominated by ordinary radiation the dimensions of xmath119 depend on the value of the equation of state parameter xmath121 note in the classical limit xmath122 xmath123 the frw hawking radiation effect turns off and gives xmath124 which is the well known result for a universe dominated by ordinary matter with an equation of state xmath80 there are two limits of this xmath4 from i xmath125 so that xmath126 and the hawking radiation effect dominates ii xmath127 so that xmath128 which is the energy density of an ordinary radiation dominated universe in case i the energy density is constant so that one has an effective cosmological constant which as shown below leads to exponential inflationary expansion in both cases i and ii the universe is radiation dominated but for case i this means hawking radiation of an frw space time and in case ii this means ordinary radiation as one can see from the two limiting case behaviors of xmath4 these two types of radiation result in very different evolution we now want to find the time dependence of the scale factor xmath13 we begin by substituting xmath4 from into the first equation in to get a differential equation for xmath117 as a function of xmath2 recall we are assuming that xmath129 in is zero or negligible compared to the other terms it is possible to integrate the resulting equation for xmath117 to obtain xmath130 $\frac{8}{3}\sqrt{\frac{2\pi g\, d}{c^{2}}}\; t + k_{1}\sqrt{\alpha d}$ (soln) we have written the integration constant as xmath131 where xmath132 is some positive number greater than xmath133 this will make it easier to write out some of the later formulas one important point to make about the scale factor xmath13 in is that it has an early exponential expansion phase coming from the second logarithm term on the left hand side which naturally transitions to a power law expansion coming from the first power law term on the left hand side we will discuss these two regimes in more detail in the following subsections the fact that these two phases come out naturally from the proposed inflation mechanism without the need for fine tuning some inflaton potential is a very attractive feature in the following three subsections we will analyze the early time exponential behavior
the later time power law behavior and then the possible values of xmath119 and xmath132 we first examine the limit of in the very early universe where xmath13 is of a size such that one has the limit xmath125 in this limit becomes xmath134 thus in this limit we find exponential expansion inflation with a hubble constant given by xmath135 at this point we can return and justify some of our earlier assumptions and approximations first after we assumed that xmath136 is valid for xmath137 near the planck size or larger eg for xmath138 for xmath13 of the planck scale one has xmath139 as compared to xmath140 from eq second we assumed that xmath56 this is also justified since from xmath141 and during the inflationary phase xmath58 is approximately constant with its value given by thus xmath142 during inflation the standard lore is that the radius of the universe should increase by a factor of xmath143 thus we need xmath144 where xmath145 with xmath146 being the end and beginning time for this hawking radiation driven inflation from we have xmath147 sec xmath148 so we find that gives xmath149 sec note that if one took the ratio in to be 10 orders of magnitude larger ie xmath150 this would yield xmath151 sec in other words the time scale for the length of this inflation is set by xmath58 in and independent of xmath119 and xmath132 in because xmath58 in is so large one does not need a very long time xmath152 in order to inflate the universe by many orders of magnitude a rough numerical illustration of this estimate is sketched at the end of this article in contrast to the above mechanism of inflation which is driven by near planck scale physics the standard picture of inflation is that it is driven by physics at the grand unified scale ie by a grand unified phase transition in this standard scenario inflation is thought to go from xmath153 sec until xmath154 sec or xmath155 sec thus for inflation driven by a phase transition at the grand unified scale one has xmath156 sec is plotted with respect to xmath2 in units of planck time xmath157 using equation in a we fix xmath158 and in b xmath159 in both cases we take xmath160 xmath161 in this range of time xmath13 increases exponentially from planck size to about xmath162 following the equation because of an extremely large value of the hubble constant the lifetime of this inflation is very small this is the reason why in b apparently time is not changing along the xmath163 axis in fact the change in xmath2 takes place after the eighth decimal place and thus does not appear in the plot in we show two plots of xmath164 from for the early inflationary part of in this figure we have set xmath165 and two different values of xmath132 are shown this value of xmath119 is justified in a subsequent section from the two different values of xmath132 we see that this parameter controls when inflation starts but it does not influence how long inflation lasts which in this model is xmath166 sec the scale function xmath13 given in will leave the regime where the very early universe approximation in is valid and then at some time will reach the point where xmath167 after this intermediate stage xmath13 from will continue to increase until the regime is reached where xmath127 in this limit gives xmath168 furthermore if the above condition is satisfied in a manner that xmath169 one finally finds xmath170 this is the usual xmath171 power law expansion for a radiation dominated universe thus after the inflationary stage given by the solution given in transitions into radiation dominated expansion given by is plotted with respect to xmath2 in
units of planck time xmath157 again by using in a we consider xmath158 and in b xmath159 as before we take xmath172 and time changes after the eighth decimal place in b both figures show that at these intermediate values of xmath13 the inflationary behavior naturally makes a transition to an ordinary radiation dominated era for xmath173 these figures nicely capture the end of inflation and the beginning of ordinary radiation domination in we show two plots of xmath117 vs xmath2 from which shows the beginning of the transition from exponential inflation to xmath171 power law expansion again in this figure we have set xmath165 and have the same values of xmath132 as in again the two different values of xmath132 control when inflation starts but they do not influence its duration in this subsection we want to investigate possible values of the integration constants xmath119 and xmath132 xmath119 can be set from the late time energy density of radiation from xcite one finds that xmath174 which is the ratio of the radiation energy density to the critical energy density using the value of the critical energy density xmath175 we get xmath176 for the present radiation energy density equating this with xmath177 and taking xmath178 as the present scale factor of the universe yields xmath165 thus the amplitude of is xmath179 since for our inflationary phase scale factor xmath13 given by we require xmath125 which because of the xmath180 power can translate to xmath181 we see that in this picture inflation stops at a scale of xmath182 rather than xmath183 however given the uncertainty in when exactly inflation ends this is not a fatal problem the scale of the universe is still inflated by the same orders of magnitude it just starts inflating at a smaller scale and ends at a smaller scale moving on to the constant xmath132 one can see from figs 1 and 2 that this constant sets the time scale for when inflation starts in plots 1a and 2a where xmath132 is chosen to be xmath184 we find that inflation starts at xmath2 a few times larger than the planck time xmath157 from the figures we see that for xmath158 inflation starts at xmath185 to xmath186 on the other hand from plots 1b and 2b we see that for xmath159 inflation starts at about xmath187 sec this start time corresponds to the standard picture where inflation is driven by a grand unified phase transition note that even though xmath132 can shift the starting time of inflation it can not control the duration which is fixed at xmath188 sec in the previous section we sketched a model for inflation driven by hawking radiation of frw space time which has a natural turn off or graceful exit from inflation we now offer speculation that this model of inflation driven by hawking radiation may also have a natural turn on or entrance to inflation as already noted the process for inflation suggested here is the reverse of black hole evaporation during post inflation ie the late stage the hawking radiation of frw space time will be a weak minor effect just as hawking radiation is a weak minor effect at the beginning ie early stage of black hole evaporation during inflation ie the very early stage described in the section above the frw hawking radiation effect is dominant just as during the end ie very late stage of black hole evaporation the hawking radiation is dominant during the very late stages of evaporation of a black hole there are speculations that quantum gravity effects will turn off hawking radiation one particularly concrete example of this is in the non commutative
geometry scenario xcite where as the planck scale is approached space time becomes non commutative xmath189 $i\theta^{\mu\nu}$ where xmath190 is an anti symmetric rank 2 tensor which has the dimensions of distance squared as a result of this non commutativity black holes can not evaporate to arbitrarily small size due to the implied uncertainty relationship between spatial coordinates xmath191 for example xmath192 a black hole can not shrink to zero size since then one would have xmath193 in violation of this uncertainty relationship detailed analysis xcite shows that as a black hole evaporates in the non commutative space time characterized by it reaches some maximum temperature after which the black hole temperature will decrease as the black hole continues to evaporate at some point the hawking temperature of the black hole goes to zero the evaporation process stops and one is left with a non radiating remnant xcite applying this picture to the frw hawking radiation model of inflation one would find that in the very early universe as during the late stages of black hole evaporation the size of the universe would be small and the frw hawking temperature would be zero thus at this early stage there would be no inflation since the frw hawking radiation would be turned off the universe would expand normally according to a power law like that given in at some point the universe would reach a size large enough not to be dominated by the uncertainty relationship coming from the non commutative space time relationship of at this point the frw hawking radiation would turn on and drive inflation until the universe transitioned from the regime xmath125 to the regime xmath127 when the universe entered this regime ie xmath127 it would undergo the power law type of expansion given in rather than the inflationary expansion of in this paper we have proposed a mechanism for inflation based on the particle creation due to hawking radiation in an frw space time this mechanism differs from the model of inflation driven by some phase transition at the grand unified scale this can be seen in the different time scales inflation driven by a grand unified phase transition is thought to start at xmath153 sec and last until xmath194 sec thus having xmath195 sec because of the large value of xmath58 in or alternatively the small value of xmath83 in the time scale of our proposed mechanism for inflation is xmath196 sec which is different from the standard time for inflation there are two constants xmath119 and xmath132 which arise in the solution of the scale factor the constant xmath119 is determined by matching the theoretical late time energy density xmath128 with the observed value of the present day radiation energy density xmath197 and the present day value of xmath178 in this way we obtain xmath165 we also get the amplitude of the inflationary period expression for xmath13 as given in namely xmath198 this means that this model of inflation ends when xmath199 this is six orders of magnitude smaller than the standard picture of inflation which ends at xmath200 however the scale factor in our model still inflates in size by a factor of xmath143 in this picture inflation exits at a smaller scale factor than in the canonical picture the other constant xmath132 simply shifts when inflation starts but does not control the duration from figs 1 and 2 one can see that for xmath201 inflation starts near the planck time while for xmath202 inflation starts near xmath203 sec the standard starting time for inflation driven by a grand unified phase
transition because for some values of xmath132 the starting time of inflation can be near the planck time one should worry for these values of xmath132 about the validity of the calculation of the hawking radiation for one thing near the planck scale the constants xmath204 xmath205 and xmath206 could be different from the present day values in particular since xmath83 in and therefore xmath58 in depend on xmath204 to the seventh power having a different value of xmath204 at these early near planck times by even one order of magnitude would greatly change the scale of the hawking radiation driven inflation mechanism proposed here if xmath204 were one order of magnitude smaller in these very early times the energy scale of the hawking radiation driven inflation would shift to be more in line with that of the grand unified phase transition mechanism for inflation in this paper we simply stick to the simplest assumption that xmath204 xmath205 and xmath206 have the present constant values even at these early near planck times we hope later to investigate the possibility that xmath204 xmath205 and or xmath206 have different values at these early times in this picture of hawking radiation driven inflation the time scale is set by xmath83 in setting aside this definite scale prediction for a moment and allowing for an arbitrary scale xmath83 we note that one might regard and the resulting scale factor xmath13 in as a good phenomenological model for the time development of the size of the universe which naturally combines exponential expansion with power law expansion in a single expression the inflation mechanism presented here is the time reversal of black hole evaporation for a black hole in the early stages of evaporation via hawking radiation the radiation is a weak effect barely changing the mass and space time of the black hole for an frw universe in its late stages the hawking radiation is a weak effect having essentially no influence on the expansion rate of the universe for a black hole in the late stages of evaporation via hawking radiation the radiation is a dominant effect which plays a significant role in the change of the black hole s mass and the structure of the space time for an frw universe in its early stages the hawking radiation is a huge effect and leads to an enormous expansion rate for the universe in the very late stages of black hole evaporation it is postulated that quantum gravity effects will shut off hawking radiation for an frw space time we postulate that in the very early stages quantum gravity effects will shut off hawking radiation and the associated exponential expansion there have been other works that have studied the role of particle creation in the evolution of the universe xcite xcite the present proposal is similar to the work of xcite which views particle creation as an irreversible process of energy transfer and entropy production from the gravitational field to the particles the difference in the present work is that we have proposed a very specific particle creation mechanism namely the hawking radiation associated with frw space time the frw hawking radiation gives rise to an effective negative pressure in the evolution equation for the energy density xmath4 the resulting xmath4 given in leads to a time dependent scale factor xmath13 given in which has two regimes one where xmath125 with the resulting xmath13 being exponential inflationary expansion as given in and one where xmath127 with the resulting xmath13 being power law expansion as given in there is a natural transition
from inflationary expansion to power law expansion so that this model for inflation has a graceful exit from inflationary behavior finally based on the inverse similarity between black hole evaporation and this frw hawking radiation model of the evolution of the scale factor xmath13 where the period of late time black hole evaporation corresponds to the early period of the universe and vice versa we have given some speculation as to how the frw hawking radiation mechanism for inflation may turn on due to non commutative space time effects thus the frw hawking radiation picture for the evolution of xmath13 provides not only a graceful exit from inflation but also a possible graceful entrance one final comment this inflation mechanism has a feedback mechanism which forces the scale factor xmath13 to be uniform for example if one assumed that the scale factor also had a dependence on xmath207 ie xmath208 the hawking radiation inflation mechanism would tend to erase this xmath207 dependence if xmath208 were smaller for some xmath207 this would imply a higher hawking temperature and more rapid expansion this would push those regions of xmath207 with smaller scale factor xmath117 to expand more rapidly until they were the same as the scale factor in other regions if xmath208 were larger for some xmath207 this would imply a lower hawking temperature and less rapid expansion this would push those regions of xmath207 with larger scale factor xmath117 to expand less rapidly until they were the same as the scale factor in other regions
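as a rough numerical illustration of the time scales discussed above, the following short python sketch integrates the flat friedmann equation for a toy energy density that is constant near the planck density at small scale factor and falls off as the fourth inverse power of the scale factor at large scale factor. the interpolating form rho(a) = rho_planck / (1 + (a/a_c)**4), the transition scale a_c and the required expansion factor of 1e28 are illustrative assumptions and not the exact expressions or values of this paper. the sketch also prints the duration ln(factor)/h needed to grow the scale factor by a given factor at a constant hubble rate, which is the kind of estimate made in the text.

import math

# physical constants in si units
G = 6.674e-11            # gravitational constant
c = 2.998e8              # speed of light
hbar = 1.055e-34         # reduced planck constant

# planck mass density, which sets the scale of the hawking-driven inflation era
rho_planck = c**5 / (hbar * G**2)      # ~ 5.2e96 kg/m^3

def hubble(rho):
    """hubble rate from the flat friedmann equation h^2 = 8*pi*G*rho/3 (rho is a mass density)."""
    return math.sqrt(8.0 * math.pi * G * rho / 3.0)

# duration needed to grow the scale factor by 'factor' at a constant hubble rate
h_inf = hubble(rho_planck)
factor = 1e28                          # assumed required expansion factor
print("hubble rate during inflation ~", h_inf, "1/s")
print("inflation duration ~", math.log(factor) / h_inf, "s")

# toy interpolating energy density: constant for a << a_c, falling as a^-4 for a >> a_c
a_c = 1.6e-35                          # assumed transition scale, of order the planck length
def rho(a):
    return rho_planck / (1.0 + (a / a_c) ** 4)

# explicit euler integration of da/dt = a*h(a): exponential growth followed by ~t^(1/2) growth
a, t, dt = 1e-36, 0.0, 1e-46
for _ in range(200000):
    a += a * hubble(rho(a)) * dt
    t += dt
print("scale factor after", t, "s :", a)

with these assumed numbers the printed duration is of order 1e-42 s, so even a tiny fraction of a second is ample for the required number of e folds, and the integrated scale factor shows the same two regime behavior (exponential then power law) described above.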
we present a model for cosmological inflation which has a natural turn on and a natural turn off mechanism in our model inflation is driven by the hawking like radiation that occurs in friedmann robertson walker frw space time this hawking like radiation results in an effective negative pressure fluid which leads to a period of rapid expansion in the very early universe as the universe expands the frw hawking temperature decreases and the inflationary expansion turns off and makes a natural transition to the power law expansion of a radiation dominated universe the turn on mechanism is more speculative but is based on the common hypothesis that in a quantum theory of gravity at very high temperatures and densities hawking radiation will stop applying this speculation to the very early universe implies that the hawking like radiation of the frw space time will be turned off and therefore the inflation driven by this radiation will also be turned off
introduction thermodynamics and particle creation in frw space-time graceful entrance to inflation summary
in a bidirectional relay network two users exchange information via a relay node xcite several protocols have been proposed for such a network under the practical half duplex constraint ie a node can not transmit and receive at the same time and in the same frequency band the simplest protocol is the traditional two way relaying protocol in which the transmission is accomplished in four successive point to point phases user 1 to relay relay to user 2 user 2 to relay and relay to user 1 in contrast the time division broadcast tdbc protocol exploits the broadcast capability of the wireless medium and combines the relay to user 1 and relay to user 2 phases into one phase the broadcast phase xcite thereby the relay broadcasts a superimposed codeword carrying information for both user 1 and user 2 such that each user is able to recover its intended information by self interference cancellation another existing protocol is the multiple access broadcast mabc protocol in which the user 1 to relay and user 2 to relay phases are also combined into one phase the multiple access phase xcite in the multiple access phase both user 1 and user 2 simultaneously transmit to the relay which is able to decode both messages generally for the bidirectional relay network without a direct link between user 1 and user 2 six transmission modes are possible four point to point modes user 1 to relay user 2 to relay relay to user 1 relay to user 2 a multiple access mode both users to the relay and a broadcast mode the relay to both users where the capacity region of each transmission mode is known xcite xcite using this knowledge a significant research effort has been dedicated to obtaining the achievable rate region of the bidirectional relay network xcite xcite specifically the achievable rates of most existing protocols for two hop relay transmission are limited by the instantaneous capacity of the weakest link associated with the relay the reason for this is the fixed schedule of using the transmission modes which is adopted in all existing protocols and does not exploit the instantaneous channel state information csi of the involved links for one way relaying an adaptive link selection protocol was proposed in xcite where based on the instantaneous csi in each time slot either the source relay or relay destination links are selected for transmission to this end the relay has to have a buffer for data storage this strategy was shown to achieve the capacity of the one way relay channel with fading xcite moreover in fading awgn channels power control is necessary for rate maximization the highest degree of freedom that is offered by power control is obtained for a joint average power constraint for all nodes any other power constraint with the same total power budget is more restrictive than the joint power constraint and results in a lower sum rate therefore motivated by the protocols in xcite and xcite our goal is to utilize all available degrees of freedom of the three node half duplex bidirectional relay network with fading via an adaptive mode selection and power allocation policy in particular given a joint power budget for all nodes we find a policy which in each time slot selects the optimal transmission mode from the six possible modes and allocates the optimal powers to the nodes transmitting in the selected mode such that the sum rate is maximized adaptive mode selection for bidirectional relaying was also considered in xcite and xcite however the selection policy in xcite does not use all possible modes ie it
only selects from two point to point modes and the broadcast mode and assumes that the transmit powers of all three nodes are fixed and identical although the selection policy in xcite considers all possible transmission modes for adaptive mode selection the transmit powers of the nodes are assumed to be fixed ie power allocation is not possible interestingly mode selection and power allocation are mutually coupled and the modes selected with the protocol in xcite for a given channel are different from the modes selected with the proposed protocol power allocation can considerably improve the sum rate by optimally allocating the powers to the nodes based on the instantaneous csi especially when the total power budget in the network is low moreover the proposed protocol achieves the maximum sum rate in the considered bidirectional network hence the sum rate achieved with the proposed protocol can be used as a reference for other low complexity suboptimal protocols simulation results confirm that the proposed protocol outperforms existing protocols finally we note that the advantages of buffering come at the expense of an increased end to end delay however with some modifications to the optimal protocol the average delay can be bounded as shown in xcite which causes only a small loss in the achieved rate the delay analysis of the proposed protocol is beyond the scope of the current work and is left for future research in this section we first describe the channel model then we provide the achievable rates for the six possible transmission modes we consider a simple network in which user 1 and user 2 exchange information with the help of a relay node as shown in fig we assume that there is no direct link between user 1 and user 2 and thus user 1 and user 2 communicate with each other only through the relay node we assume that all three nodes in the network are half duplex furthermore we assume that time is divided into slots of equal length and that each node transmits codewords which span one time slot or a fraction of a time slot as will be explained later we assume that the user to relay and relay to user channels are impaired by awgn with unit variance and block fading ie the channel coefficients are constant during one time slot and change from one time slot to the next moreover in each time slot the channel coefficients are assumed to be reciprocal such that the user 1 to relay and the user 2 to relay channels are identical to the relay to user 1 and relay to user 2 channels respectively let xmath3 and xmath4 denote the channel coefficients between user 1 and the relay and between user 2 and the relay in the xmath16th time slot respectively furthermore let xmath17 and xmath18 denote the squares of the channel coefficient amplitudes in the xmath16th time slot xmath19 and xmath20 are assumed to be ergodic and stationary random processes with means xmath21 and xmath22 respectively where xmath23 denotes expectation and where the time index is dropped in expectations for notational simplicity since the noise is awgn in order to achieve the capacity of each mode the nodes have to transmit gaussian distributed codewords therefore the transmitted codewords of user 1 user 2 and the relay are comprised of symbols which are gaussian distributed random variables with variances xmath24 and xmath7 respectively
where xmath25 is the transmit power of node xmath26 in the xmath16th time slot for ease of notation we define xmath27 in the following we describe the transmission modes and their achievable rates in the considered bidirectional relay network only six transmission modes are possible cf fig the six possible transmission modes are denoted by xmath28 and xmath29 denotes the transmission rate from node xmath30 to node xmath31 in the xmath16th time slot let xmath8 and xmath9 denote two infinite size buffers at the relay in which the received information from user 1 and user 2 is stored respectively moreover xmath32 denotes the amount of normalized information in bits symbol available in buffer xmath33 in the xmath16th time slot using this notation the transmission modes and their respective rates are presented in the following xmath34 user 1 transmits to the relay and user 2 is silent in this mode the maximum rate from user 1 to the relay in the xmath16th time slot is given by xmath35 where xmath36 the relay decodes this information and stores it in buffer xmath8 therefore the amount of information in buffer xmath8 increases to xmath37 xmath38 user 2 transmits to the relay and user 1 is silent in this mode the maximum rate from user 2 to the relay in the xmath16th time slot is given by xmath39 where xmath40 the relay decodes this information and stores it in buffer xmath9 therefore the amount of information in buffer xmath9 increases to xmath41 xmath42 both users 1 and 2 transmit to the relay simultaneously for this mode we assume that multiple access transmission is used see xcite thereby the maximum achievable sum rate in the xmath16th time slot is given by xmath43 where xmath44 since user 1 and user 2 transmit independent messages the sum rate xmath45 can be decomposed into two rates one from user 1 to the relay and the other one from user 2 to the relay moreover these two capacity rates can be achieved via time sharing and successive interference cancellation thereby in the first xmath46 fraction of the xmath16th time slot the relay first decodes the codeword received from user 2 and considers the signal from user 1 as noise then the relay subtracts the signal received from user 2 from the received signal and decodes the codeword received from user 1 a similar procedure is performed in the remaining xmath47 fraction of the xmath16th time slot but now the relay first decodes the codeword received from user 1 and treats the signal of user 2 as noise and then decodes the codeword received from user 2 therefore for a given xmath48 we decompose xmath45 as xmath49 and the maximum rates from users 1 and 2 to the relay in the xmath16th time slot are xmath50 and xmath51 respectively xmath52 and xmath53 are given by xmath54 the relay decodes the information received from user 1 and user 2 and stores it in its buffers xmath8 and xmath9 respectively therefore the amounts of information in buffers xmath8 and xmath9 increase to xmath37 and xmath41 respectively xmath55 the relay transmits the information received from user 2 to user 1 specifically the relay extracts the information from buffer xmath9 encodes it into a codeword and transmits it to user 1 therefore the transmission rate from the relay to user 1 in the xmath16th time slot is limited by both the capacity of the relay to user 1 channel and the amount of information stored in buffer xmath9 thus the maximum transmission rate from the relay to user 1 is given by xmath56 where xmath57 therefore the amount of information in buffer xmath9
decreases to xmath58 xmath59 this mode is identical to xmath55 with user 1 and 2 switching places the maximum transmission rate from the relay to user 2 is given by xmath60 where xmath61 and the amount of information in buffer xmath8 decreases to xmath62 xmath63 the relay broadcasts to both user 1 and user 2 the information received from user 2 and user 1 respectively specifically the relay extracts the information intended for user 2 from buffer xmath8 and the information intended for user 1 from buffer xmath9 then based on the scheme in xcite it constructs a superimposed codeword which contains the information from both users and broadcasts it to both users thus in the xmath16th time slot the maximum rates from the relay to users 1 and 2 are given by xmath64 and xmath65 respectively therefore the amounts of information in buffers xmath8 and xmath9 decrease to xmath62 and xmath58 respectively our aim is to develop an optimal mode selection and power allocation policy which in each time slot selects one of the six transmission modes xmath28 and allocates the optimal powers to the transmitting nodes of the selected mode such that the average sum rate of both users is maximized to this end we introduce six binary variables xmath66 where xmath67 indicates whether or not transmission mode xmath68 is selected in the xmath16th time slot in particular xmath69 if mode xmath68 is selected and xmath70 if it is not selected in the xmath16th time slot furthermore since in each time slot only one of the six transmission modes can be selected only one of the mode selection variables is equal to one and the others are zero ie xmath71 holds in the proposed framework we assume that all nodes have full knowledge of the csi of both links thus based on the csi and the proposed protocol cf theorem 2 each node is able to individually decide which transmission mode is selected and adapt its transmission strategy accordingly in this section we first investigate the achievable average sum rate of the network then we formulate a maximization problem whose solution is the sum rate maximizing protocol we assume that user 1 and user 2 always have enough information to send in all time slots and that the number of time slots xmath72 satisfies xmath73 therefore using xmath67 the user 1 to relay user 2 to relay relay to user 1 and relay to user 2 average transmission rates denoted by xmath74 xmath75 xmath76 and xmath77 respectively are obtained as $\bar r_{1r}=\frac{1}{n}\sum_{i=1}^{n}(\cdots)$ $\bar r_{2r}=\frac{1}{n}\sum_{i=1}^{n}(\cdots)$ $\bar r_{r1}=\frac{1}{n}\sum_{i=1}^{n}(\cdots)c_{r1}(i)$ $\bar r_{r2}=\frac{1}{n}\sum_{i=1}^{n}(\cdots)c_{r2}(i)$ (ratreg123) the average rate from user 1 to user 2 is the average rate that user 2 receives from the relay ie xmath77 similarly the average rate from user 2 to user 1 is the average rate that user 1 receives from the relay ie xmath76 in the following theorem we introduce a useful condition for the queues in the buffers of the relay leading to the optimal mode selection and power allocation policy theorem 1 the maximum average sum rate xmath78 for the considered bidirectional relay network is obtained when the queues in the buffers xmath8 and xmath9 at the relay are at the edge of non absorption more precisely the following conditions must hold for the maximum sum rate $\bar r_{1r}=\bar r_{r2}=\frac{1}{n}\sum_{i=1}^{n}(\cdots)c_{r2}(i)$ and $\bar r_{2r}=\bar r_{r1}=\frac{1}{n}\sum_{i=1}^{n}(\cdots)c_{r1}(i)$ where xmath74 and xmath75 are given by ratreg123a and ratreg123b respectively please refer to appendix a using this theorem in the following we derive the optimal transmission mode selection and power allocation policy the available degrees of freedom in the considered network in each time slot are the mode selection
variables the transmit powers of the nodes and the time sharing variable for multiple access herein we formulate an optimization problem which gives the optimal values of xmath67 xmath25 and xmath48 for xmath79 xmath80 and xmath81 such that the average sum rate of the users is maximized the optimization problem is as follows (adaptprob) maximize $\bar r_{1r}+\bar r_{2r}$ subject to $\bar r_{1r}=\bar r_{r2}$ $\bar r_{2r}=\bar r_{r1}$ $\bar p_1+\bar p_2+\bar p_r\le p_t$ $\sum_{k=1}^{6}q_k(i)=1\;\forall i$ $q_k(i)\left[1-q_k(i)\right]=0\;\forall i,k$ $p_j(i)\ge 0\;\forall i,j$ $0\le t(i)\le 1\;\forall i$ where xmath82 is the total average power constraint of the nodes and xmath83 and xmath84 denote the average powers consumed by user 1 user 2 and the relay respectively and are given by $\bar p_1=\frac{1}{n}\sum_{i=1}^{n}\left[q_1(i)+q_3(i)\right]p_1(i)$ and analogous expressions for user 2 and the relay in the optimization problem given in adaptprob constraints xmath85 and xmath86 are the conditions for sum rate maximization introduced in theorem 1 constraints xmath87 and xmath88 are the average total transmit power constraint and the power non negativity constraint respectively moreover constraints xmath89 and xmath90 guarantee that only one of the transmission modes is selected in each time slot and constraint xmath91 specifies the acceptable interval for the time sharing variable xmath48 furthermore we maximize xmath92 since according to theorem 1 and constraints xmath85 and xmath86 xmath93 and xmath94 hold in the following theorem we introduce a protocol which achieves the maximum sum rate theorem 2 assuming xmath73 the optimal mode selection and power allocation policy which maximizes the sum rate of the considered three node half duplex bidirectional relay network with awgn and block fading is given by $q_{k^{*}}(i)=1$ and $q_k(i)=0$ for $k\ne k^{*}(i)$ where $k^{*}(i)$ is the index of the transmission mode with the largest selection metric xmath95 in the xmath16th time slot the selection metric is given by (selecmet) $\lambda_1(i)=(1{-}\mu_1)\,c_{1r}(i)-\gamma p_1(i)\big|_{p_1(i)=p_1^{(1)}(i)}$ $\lambda_2(i)=(1{-}\mu_2)\,c_{2r}(i)-\gamma p_2(i)\big|_{p_2(i)=p_2^{(2)}(i)}$ $\lambda_3(i)=(1{-}\mu_1)\,c_{12r}(i)+(1{-}\mu_2)\,c_{21r}(i)-\gamma\left[p_1(i)+p_2(i)\right]\big|_{p_2(i)=p_2^{(3)}(i),\,p_1(i)=p_1^{(3)}(i)}$ $\lambda_6(i)=\mu_1\,c_{r2}(i)+\mu_2\,c_{r1}(i)-\gamma p_r(i)\big|_{p_r(i)=p_r^{(6)}(i)}$ where xmath96 denotes the optimal transmit power of node xmath30 for transmission mode xmath68 in the xmath16th time slot and is given by (optpower) $p_1^{(1)}(i)=\left[\,\cdots\,\right]^{+}$ $p_2^{(2)}(i)=\left[\,\cdots\,\right]^{+}$ $p_1^{(3)}(i)=\left[\,\cdots\,\right]^{+}$ $p_2^{(3)}(i)=\left[\,\cdots\,\right]^{+}$ $p_r^{(6)}(i)=\left[\,\cdots\,\right]^{+}$ where xmath97 $\max\{x,0\}$ xmath98 and xmath99 the thresholds xmath100 and xmath101 ie $\mu_1$ and $\mu_2$ above are chosen such that constraints xmath85 and xmath86 in adaptprob hold and threshold xmath102 ie $\gamma$ is chosen such that the total average transmit power satisfies xmath87 in adaptprob the optimal value of xmath48 in xmath52 and xmath53 is given by $t(i)\in\{0,1\}$ where the choice between the two boundary values is determined by the comparison of $\mu_1$ and $\mu_2$ please refer to appendix appkkt a simplified numerical illustration of this per slot selection rule is given at the end of this article we note that the optimal solution utilizes neither modes xmath13 and xmath14 nor time sharing for any channel statistics and channel realizations the mode selection metric xmath95 introduced in selecmet has two parts the first part is the instantaneous capacity of mode xmath68 and the second part is the allocated power with negative sign the capacity and the power terms are linked via thresholds xmath100 and/or xmath101 and xmath102 we note that thresholds xmath100 xmath101 and xmath102 depend only on the long term statistics of the channels hence these thresholds can be obtained offline and used as long as the channel statistics remain unchanged to find the optimal values for the thresholds xmath100 xmath101 and xmath102 we need a three dimensional search where xmath103 and xmath104 adaptive mode selection for bidirectional relay networks under the assumption that the powers of the nodes are fixed is considered in xcite based on the average and instantaneous qualities of the links all of the six possible transmission modes are selected in the protocol in xcite however in the proposed protocol modes xmath13 and xmath14 are not selected at all moreover the protocol in xcite utilizes a coin flip for implementation
therefore a central node must decide which transmission mode is selected in the next time slot however in the proposed protocol all nodes can find the optimal mode and powers based on the full csi in this section we evaluate the average sum rate achievable with the proposed protocol in the considered bidirectional relay network in rayleigh fading thus the channel gains xmath19 and xmath20 follow exponential distributions with means xmath105 and xmath106 respectively all of the presented results were obtained for xmath107 and xmath108 time slots in fig we illustrate the maximum achievable sum rate obtained with the proposed protocol as a function of the total average transmit power xmath82 in this figure to have a better resolution for the sum rate at low and high xmath82 we show the sum rate for both log scale and linear scale xmath109 axes respectively the lines without markers in fig represent the achieved sum rates with the proposed protocol for xmath110 we observe that as the quality of the user 1 to relay link increases ie xmath105 increases the sum rate increases too however for large xmath105 the bottleneck link is the relay to user 2 link and since it is fixed the sum rate saturates fig sum rate versus the total average transmit power xmath82 for different protocols as performance benchmarks we consider in fig the sum rates of the tdbc protocol with and without power allocation xcite and the buffer aided protocols presented in xcite and xcite respectively for clarity for the benchmark schemes we only show the sum rates for xmath143 for the tdbc protocol without power allocation and the protocol in xcite all nodes transmit with equal powers ie xmath144 for the buffer aided protocol in xcite we adopt xmath145 and xmath146 is chosen such that the average total power consumed by all nodes is xmath82 we note that since xmath143 and xmath147 the protocol in xcite only selects modes xmath12 and xmath15 moreover since xmath143 we obtain xmath148 in the proposed protocol thus considering the optimal power allocation in optpowerc and optpowerd we obtain that either xmath149 or xmath150 is zero therefore for the chosen parameters only modes xmath151 and xmath15 are selected ie the same modes as used in xcite the protocol in xcite is optimal for given fixed node transmit powers hence we can see how much gain we obtain due to the adaptive power allocation by comparing our result with the results for the protocol in xcite on the other hand the gain due to the adaptive mode selection can be evaluated by comparing the sum rate of the proposed protocol with the result for the tdbc protocol with power allocation from the comparison in fig we observe that for high xmath82 a considerable gain is obtained by the protocols with adaptive mode selection ours and that in xcite compared to the tdbc protocol which does not apply adaptive mode selection around xmath152 db gain however for high xmath82 power allocation is less beneficial and
therefore the sum rates obtained with the proposed protocol and that in xcite converge on the other hand for low xmath82 optimal power allocation is crucial and therefore a considerable gain is achieved by the protocols with adaptive power allocation ours and tdbc with power allocation we have derived the maximum sum rate of the three node half duplex bidirectional buffer aided relay network with fading links the protocol which achieves the maximum sum rate jointly optimizes the selection of the transmission mode and the transmit powers of the nodes the proposed optimal mode selection and power allocation protocol requires the instantaneous csi of the involved links in each time slot and their long term statistics simulation results confirmed that the proposed selection policy outperforms existing protocols in terms of average sum rate in this appendix we solve the optimization problem given in adaptprob we first relax the binary condition for xmath67 ie xmath153 to xmath154 and later in appendix appbinrelax we prove that the binary relaxation does not affect the maximum average sum rate in the following we investigate the karush kuhn tucker kkt necessary conditions xcite for the relaxed optimization problem and show that the necessary conditions result in a unique sum rate and thus the solution is optimal to simplify the usage of the kkt conditions we formulate a minimization problem equivalent to the relaxed maximization problem in adaptprob as follows (adaptprobmin) minimize $-\left(\bar r_{1r}+\bar r_{2r}\right)$ subject to $\bar r_{1r}-\bar r_{r2}=0$ $\bar r_{2r}-\bar r_{r1}=0$ $\bar p_1+\bar p_2+\bar p_r-p_t\le 0$ $\sum_{k=1}^{6}q_k(i)-1=0\;\forall i$ $q_k(i)-1\le 0\;\forall i,k$ $-q_k(i)\le 0\;\forall i,k$ $-p_j(i)\le 0\;\forall i,j$ $t(i)-1\le 0\;\forall i$ $-t(i)\le 0\;\forall i$ the lagrangian function for the above optimization problem is provided in kkt function at the top of the next page where xmath155 and xmath156 are the lagrange multipliers corresponding to constraints xmath157 and xmath158 respectively the kkt conditions include the following the lagrangian has the form $\mathcal{l}=-\left(\bar r_{1r}+\bar r_{2r}\right)+\mu_1\left(\bar r_{1r}-\bar r_{r2}\right)+\mu_2\left(\bar r_{2r}-\bar r_{r1}\right)+\gamma\left(\bar p_1+\bar p_2+\bar p_r-p_t\right)+\frac{1}{n}\sum_{i=1}^{n}\alpha(i)\left[\sum_{k=1}^{6}q_k(i)-1\right]+\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{6}\beta_k(i)\left[q_k(i)-1\right]-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{6}\theta_k(i)\,q_k(i)+\frac{1}{n}\sum_{i=1}^{n}\eta_1(i)\left[t(i)-1\right]-\frac{1}{n}\sum_{i=1}^{n}\eta_0(i)\,t(i)+(\cdots)$ where the remaining terms couple multipliers $\nu_j(i)$ to the power non negativity constraints 1 stationary condition the differentiation of the lagrangian function with respect to the primal variables xmath159 and xmath160 is zero for the optimal solution ie $\frac{\partial\mathcal l}{\partial q_k(i)}=0\;\forall i,k$ $\frac{\partial\mathcal l}{\partial p_j(i)}=0\;\forall i,j$ $\frac{\partial\mathcal l}{\partial t(i)}=0\;\forall i$ 2 primal feasibility condition the optimal solution has to satisfy the constraints of the primal problem in adaptprobmin 3 dual feasibility condition the lagrange multipliers for the inequality constraints have to be non negative ie $\gamma\ge 0$ $\beta_k(i)\ge 0$ $\theta_k(i)\ge 0\;\forall i,k$ $\nu_j(i)\ge 0\;\forall i,j$ $\eta_0(i)\ge 0$ $\eta_1(i)\ge 0\;\forall i$ 4 complementary slackness if an inequality is inactive ie the optimal solution is in the interior of the corresponding set the corresponding lagrange multiplier is zero thus we obtain $\beta_k(i)\left[q_k(i)-1\right]=0\;\forall i,k$ $\theta_k(i)\,q_k(i)=0\;\forall i,k$ $\gamma\left(\bar p_1+\bar p_2+\bar p_r-p_t\right)=0$ $\nu_j(i)\,p_j(i)=0\;\forall i,j$ $\eta_1(i)\left[t(i)-1\right]=0\;\forall i$ $\eta_0(i)\,t(i)=0\;\forall i$ a common approach to find a set of primal variables ie xmath161 and lagrange multipliers ie xmath162 which satisfy the kkt conditions is to start with the complementary slackness conditions and see if the inequalities are active or not combining these results with the primal feasibility and dual feasibility conditions we obtain various possibilities then from these possibilities we obtain one or more candidate solutions from the stationary conditions and the optimal solution is surely one of these candidates in the following subsections with this approach we find the optimal values of xmath163 and xmath164 in order to determine the optimal selection policy xmath165 we must calculate the derivatives in stationary condition a this leads to
$-(1{-}\mu_1)\,c_{1r}(i)+\alpha(i)+\beta_1(i)-\theta_1(i)+\gamma p_1(i)=0$ $-(1{-}\mu_2)\,c_{2r}(i)+\alpha(i)+\beta_2(i)-\theta_2(i)+\gamma p_2(i)=0$ $-(1{-}\mu_1)\,c_{12r}(i)-(1{-}\mu_2)\,c_{21r}(i)+\alpha(i)+\beta_3(i)-\theta_3(i)+\gamma\left[p_1(i)+p_2(i)\right]=0$ $-\mu_2\,c_{r1}(i)+\alpha(i)+\beta_4(i)-\theta_4(i)+\gamma p_r(i)=0$ $-\mu_1\,c_{r2}(i)+\alpha(i)+\beta_5(i)-\theta_5(i)+\gamma p_r(i)=0$ $-\mu_1\,c_{r2}(i)-\mu_2\,c_{r1}(i)+\alpha(i)+\beta_6(i)-\theta_6(i)+\gamma p_r(i)=0$ (stationary mode) without loss of generality we first obtain the necessary condition for xmath166 and then generalize the result to xmath167 if xmath168 from constraint xmath89 in adaptprobmin the other selection variables are zero ie xmath169 furthermore from complementary slackness we obtain xmath170 and xmath171 by substituting these values into stationary mode we obtain the six relations $\alpha(i)=\lambda_k(i)+(\cdots)$ $k=1,\dots,6$ (met) with $\lambda_1(i)=(1{-}\mu_1)\,c_{1r}(i)-\gamma p_1(i)$ $\lambda_2(i)=(1{-}\mu_2)\,c_{2r}(i)-\gamma p_2(i)$ $\lambda_3(i)=(1{-}\mu_1)\,c_{12r}(i)+(1{-}\mu_2)\,c_{21r}(i)-\gamma\left[p_1(i)+p_2(i)\right]$ $\lambda_4(i)=\mu_2\,c_{r1}(i)-\gamma p_r(i)$ $\lambda_5(i)=\mu_1\,c_{r2}(i)-\gamma p_r(i)$ $\lambda_6(i)=\mu_1\,c_{r2}(i)+\mu_2\,c_{r1}(i)-\gamma p_r(i)$ where xmath95 is referred to as selection metric by subtracting meta from the rest of the equations in met we obtain $\lambda_1(i)-\lambda_k(i)=(\cdots)$ $k=2,3,4,5,6$ (eq21) from the dual feasibility conditions given in dual feasibility condition a and dual feasibility condition b we have xmath172 by inserting xmath172 in eq21 we obtain the necessary condition for xmath166 as $\lambda_1(i)\ge\lambda_k(i)$ $k=2,3,4,5,6$ repeating the same procedure for xmath167 we obtain a necessary condition for selecting transmission mode xmath173 in the xmath16th time slot as follows $\lambda_{k^{*}}(i)\ge\lambda_{k}(i)\;\forall k$ (optmet) where the lagrange multipliers xmath174 and xmath102 are chosen such that xmath175 and xmath87 in adaptprobmin hold and the optimal value of xmath48 in xmath52 and xmath53 is obtained in the next subsection we note that if the selection metrics are not equal in the xmath16th time slot only one of the modes satisfies optmet therefore the necessary conditions for the mode selection in optmet is sufficient moreover in appendix appbinrelax we prove that the probability that two selection metrics are equal is zero due to the randomness of the time continuous channel gains therefore the necessary condition for selecting transmission mode xmath68 in optmet is in fact sufficient and is the optimal selection policy in order to determine the optimal xmath25 we have to calculate the derivatives in stationary condition b this leads to (stationary power) xmath177 $+\,\gamma\frac{1}{n}\left[q_1(i)+q_3(i)\right]-\nu_1(i)=0$ $\frac{\partial\mathcal l}{\partial p_2(i)}=-\frac{1}{n\ln 2}\Big\{\big[(1{-}\mu_2)\,q_2(i)+(1-t(i))(\cdots)\,q_3(i)\big]\frac{s_2(i)}{1+p_2(i)s_2(i)}+\big[t(i)(\cdots)\big]q_3(i)\,\frac{s_2(i)}{1+p_1(i)s_1(i)+p_2(i)s_2(i)}\Big\}+\gamma\frac{1}{n}\left[q_2(i)+q_3(i)\right]-\nu_2(i)=0$ $\frac{\partial\mathcal l}{\partial p_r(i)}=-\frac{1}{n\ln 2}\Big[\mu_2\big(q_4(i)+q_6(i)\big)\frac{s_1(i)}{1+p_r(i)s_1(i)}+\mu_1\big(q_5(i)+q_6(i)\big)\frac{s_2(i)}{1+p_r(i)s_2(i)}\Big]+\gamma\frac{1}{n}\left[q_4(i)+q_5(i)+q_6(i)\right]-\nu_r(i)=0$ the above conditions allow the derivation of the optimal powers for each transmission mode in each time slot for instance in order to determine the transmit power of user 1 in transmission mode xmath10 we assume xmath166 from constraint xmath89 in adaptprobmin we obtained that the other selection variables are zero and therefore xmath178 moreover if xmath10 is selected then xmath179 and thus from complementary slackness d we obtain xmath180 substituting these results in stationary power a we obtain $p_1^{(1)}(i)=\left[\,\cdots\,\right]^{+}$ (eq11) where xmath97 $\max\{0,x\}$ in a similar manner we obtain the optimal powers for user 2 in mode xmath11 and the optimal powers of the relay in modes xmath13 and xmath14 as follows $p_2^{(2)}(i)=\left[\,\cdots\,\right]^{+}$ $p_r^{(4)}(i)=\left[\,\cdots\,\right]^{+}$ $p_r^{(5)}(i)=\left[\,\cdots\,\right]^{+}$ (p245) in order to obtain the optimal powers of user 1 and user 2 in mode xmath12 we assume
xmath181 from xmath89 in adaptprobmin we obtain that the other selection variables are zero and therefore xmath182 and xmath183 we note that if one of the powers of user 2 and user 1 is zero mode xmath12 is identical to modes xmath10 and xmath11 respectively and for that case the optimal powers are already given by eq11 and p245a respectively for the case when xmath179 and xmath184 we obtain xmath180 and xmath185 from complementary slackness d furthermore for xmath181 we will show in appendix appkktc that xmath48 can only take the boundary values ie zero or one and can not be in between hence if we assume xmath186 from stationary power a and stationary power b we obtain $(\cdots)=0$ and $(\cdots)=0$ (powerm3) by substituting powerm3a in powerm3b we obtain xmath187 and then we can derive xmath188 from powerm3a this leads to $p_1^{(3)}(i)=p_1^{(1)}(i)\,(\cdots)$ $p_2^{(3)}(i)=p_2^{(2)}(i)\,(\cdots)$ (pm3t0) similarly if we assume xmath189 we obtain $p_1^{(3)}(i)=p_1^{(1)}(i)\,(\cdots)$ $p_2^{(3)}(i)=p_2^{(2)}(i)\,(\cdots)$ (pm3t1) we note that when xmath190 we obtain xmath191 which means that mode xmath12 is identical to mode xmath10 thus there is no difference between both modes so we select xmath10 in figs a and b the comparison of xmath192 and xmath193 is illustrated in the space of xmath194 moreover the shaded area represents the region in which the powers of users 1 and 2 are zero for xmath151 and xmath12 for mode xmath15 we assume xmath209 from constraint xmath89 in adaptprobmin we obtain that the other selection variables are zero and therefore xmath210 and xmath211 moreover if xmath209 then xmath212 and thus from complementary slackness d we obtain xmath213 using these results in stationary power c we obtain $\mu_2\,\frac{s_1(i)}{1+p_r(i)s_1(i)}+\mu_1\,\frac{s_2(i)}{1+p_r(i)s_2(i)}=\gamma\ln 2$ (powerm6) the above equation is a quadratic equation and has two solutions for xmath7 however since we have xmath214 we can conclude that the left hand side of powerm6 is monotonically decreasing in xmath7 thus if xmath215 we have a unique positive solution for xmath7 which is the maximum of the two roots of powerm6 thus we obtain $p_r^{(6)}(i)=(\cdots)$ (pm6) where xmath216 and xmath99 in fig c the comparison between the selection metrics xmath217 and xmath218 is illustrated in the space of xmath194 we note that xmath219 and xmath220 hold and the inequalities hold with equality if xmath221 and xmath222 respectively which happen with zero probability for time continuous fading to prove xmath219 from met we obtain $\lambda_6(i)=\mu_1 c_{r2}(i)+\mu_2 c_{r1}(i)-\gamma p_r(i)\big|_{p_r(i)=p_r^{(6)}(i)}\ge\mu_1 c_{r2}(i)+\mu_2 c_{r1}(i)-\gamma p_r(i)\big|_{p_r(i)=p_r^{(4)}(i)}\ge\mu_2 c_{r1}(i)-\gamma p_r(i)\big|_{p_r(i)=p_r^{(4)}(i)}=\lambda_4(i)$ where xmath223 follows from the fact that xmath224 maximizes xmath218 and xmath225 follows from xmath226 the two inequalities xmath223 and xmath225 hold with equality only if xmath221 which happens with zero probability in time continuous fading or if xmath227 however in appendix appmuregion xmath227 is shown to lead to a contradiction therefore the optimal policy does not select xmath13 and xmath14 and selects only modes xmath228 and xmath15 to find the optimal xmath48 we assume xmath230 and calculate the stationary condition in stationary condition c this leads to $(\mu_1-\mu_2)(\cdots)+\eta_1(i)-\eta_0(i)=0$ (stationary t) now we investigate the following possible cases for xmath229 case 1 if xmath231 then from complementary slackness e and complementary slackness f we have xmath232 therefore from stationary t and xmath233 we obtain
xmath148 then from stationary power a and stationary power b we obtain the two conditions $(\cdots)=0$ and $(\cdots)=0$ (contradict) in appendix appmuregion we show that xmath234 therefore the above conditions can be satisfied simultaneously only if xmath235 which considering the randomness of the time continuous channel gains occurs with zero probability hence the optimal xmath48 takes the boundary values ie zero or one and not values in between case 2 if xmath236 then from complementary slackness e we obtain xmath237 and from dual feasibility condition e we obtain xmath238 combining these results into stationary t the necessary condition for xmath186 is obtained as xmath239 case 3 if xmath240 then from complementary slackness f we obtain xmath241 and from dual feasibility condition e we obtain xmath242 combining these results into stationary t the necessary condition for xmath189 is obtained as xmath243 we note that if xmath148 we obtain either xmath244 or xmath245 therefore mode xmath12 is not selected and the value of xmath48 does not affect the sum rate moreover from the selection metrics in met we can conclude that xmath207 and xmath208 correspond to xmath246 and xmath247 respectively therefore the optimal value of xmath48 is given by $t(i)=0$ or $t(i)=1$ with the choice between the two boundary values determined by the comparison of $\mu_1$ and $\mu_2$ cf cases 2 and 3 above now the optimal values of xmath159 and xmath160 are derived based on which theorem 2 can be constructed this completes the proof in this appendix we prove that the optimal solution of the problem with the relaxed constraint xmath248 selects the boundary values of xmath67 ie zero or one therefore the binary relaxation does not change the solution of the problem if one of the xmath249 adopts a non binary value in the optimal solution then in order to satisfy constraint xmath89 in adaptprob there has to be at least one other non binary selection variable in that time slot assuming that the mode indices of the non binary selection variables are xmath250 and xmath251 in the xmath16th time slot we obtain xmath252 from complementary slackness a and xmath253 and xmath254 from complementary slackness b then by substituting these values into stationary mode we obtain $\alpha(i)=\lambda_{k}(i)+(\cdots)$ for $k$ equal to xmath250 xmath251 and each of the remaining modes (binrelax) from binrelaxa and binrelaxb we obtain xmath255 and by subtracting binrelaxa and binrelaxb from binrelaxc we obtain $(\cdots)\;\forall k$ and $(\cdots)\;\forall k$ from the dual feasibility condition given in dual feasibility condition b we have xmath256 which leads to xmath257 however as a result of the randomness of the time continuous channel gains xmath258 holds for some transmission modes xmath259 and xmath260 if and only if we obtain xmath261 or xmath262 which leads to a contradiction as shown in appendix appmuregion this completes the proof in this appendix we find the intervals which contain the optimal value of xmath100 and xmath101 we note that for different values of xmath100 and xmath101 some of the optimal powers derived in eq11 p245 pm3t0 pm3t1 and pm6 are zero for all channel realizations for example if xmath263 we obtain xmath264 from eq11 fig illustrates the set of modes that can take positive powers with non zero probability in the space of xmath174 in the following we show that any values of xmath100 and xmath101 except xmath265 and xmath266 can not lead to the optimal sum rate or violate constraints xmath85 or xmath86 in adaptprobmin case 1 sets xmath267 and xmath268 lead to selection of either the transmission from the users to the relay or the transmission from the relay to the users respectively for all time slots this leads to violation of constraints xmath85 and xmath86
in adaptprobmin and thus the optimal values of xmath100 and xmath101 are not in this region case 2 in set xmath269 both modes xmath13 and xmath15 need the transmission from user 2 to the relay which can not be realized in this set thus this set leads to violation of constraint xmath86 in adaptprobmin similarly in set xmath270 both modes xmath14 and xmath15 require the transmission from user 1 to the relay which can not be selected in this set thus this region of xmath100 and xmath101 leads to violation of constraint xmath85 in adaptprobmin case 3 in set xmath271 there is no transmission from user 2 to the relay therefore the optimal values of xmath100 and xmath101 have to guarantee that modes xmath13 and xmath15 are not selected for any channel realization however from met we obtain $(\cdots)$ where xmath223 follows from the fact that xmath224 maximizes xmath218 and xmath225 follows from xmath272 the two inequalities xmath223 and xmath225 hold with equality only if xmath222 which happens with zero probability for time continuous fading or xmath273 which is not included in this region therefore mode xmath15 is selected in this region which leads to violation of constraint xmath86 in adaptprobmin a similar statement is true for set xmath274 thus the optimal values of xmath100 and xmath101 can not be in these two regions $(\cdots)$ where inequality xmath223 comes from the fact that xmath276 and the equality holds when xmath221 which happens with zero probability or xmath227 inequality xmath225 holds since xmath277 maximizes xmath278 and holds with equality only if xmath279 and consequently xmath227 if xmath280 mode xmath15 is not selected and there is no transmission from the relay to user 2 therefore the optimal values of xmath100 and xmath101 have to guarantee that modes xmath10 and xmath12 are not selected for any channel realization thus we obtain xmath263 which is not contained in this region if xmath227 from met we obtain $(\cdots)$ where both inequalities xmath223 and xmath225 hold with equality only if xmath281 if xmath282 modes xmath13 and xmath15 are not selected thus there is no transmission from the relay to the users which leads to violation of xmath85 and xmath86 in adaptprobmin if xmath281 we obtain xmath283 thus mode xmath11 can not be selected and either xmath244 or xmath284 thus mode xmath12 can not be selected either since both modes xmath13 and xmath15 require the transmission from user 2 to the relay and both modes xmath11 and xmath12 are not selected constraint xmath86 in adaptprobmin is violated and xmath227 and xmath281 can not be optimal a similar statement is true for set xmath285 therefore the optimal values of xmath100 and xmath101 are not in this region

s j kim, n devroye, p mitran and v tarokh, achievable rate regions and performance comparison of half duplex bi directional relaying protocols, ieee trans. inf. theory, vol. 57, pp. 6405-6418, oct. 2011

n zlatanov and r schober, capacity of the state dependent half duplex relay channel without source destination link, submitted to ieee transactions on information theory, 2013, online available http://arxiv.org/abs/1302.3777

v jamali, n zlatanov, a ikhlef and r schober, adaptive mode selection in bidirectional buffer aided relay networks with fixed transmit powers, submitted in part to eusipco 2013, online available http://arxiv.org/abs/1303.3732
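as referenced after theorem 2 above, the following short python sketch is a minimal per time slot illustration of a selection rule of the kind derived in this paper: compute a candidate power and a metric of the form weighted instantaneous capacity minus gamma times power for each mode, and select the mode with the largest metric. the water filling form max(0, w/(gamma*ln2) - 1/s) for the candidate powers, the crude treatment of the broadcast mode, the omission of the multiple access mode and all numerical threshold values are simplifying assumptions for illustration and not the exact expressions of this paper.

import math
import random

# illustrative thresholds; in the paper the corresponding thresholds are found offline
# from the long term channel statistics (here they are just assumed values)
MU1, MU2, GAMMA = 0.4, 0.4, 0.5

def waterfill(weight, s, gamma=GAMMA):
    """candidate transmit power for one link, assumed water-filling form [weight/(gamma*ln2) - 1/s]^+."""
    return max(0.0, weight / (gamma * math.log(2)) - 1.0 / s)

def capacity(p, s):
    """awgn capacity log2(1 + p*s) in bits/symbol."""
    return math.log2(1.0 + p * s)

def select_mode(s1, s2):
    """evaluate simplified selection metrics for modes m1, m2 and m6 and pick the largest
    (the multiple access mode and the two single-user relay modes are omitted for brevity)."""
    p1 = waterfill(1.0 - MU1, s1)            # user 1 -> relay
    p2 = waterfill(1.0 - MU2, s2)            # user 2 -> relay
    pr = waterfill(MU1 + MU2, min(s1, s2))   # crude stand-in for the relay broadcast power
    metrics = {
        "m1": (1.0 - MU1) * capacity(p1, s1) - GAMMA * p1,
        "m2": (1.0 - MU2) * capacity(p2, s2) - GAMMA * p2,
        "m6": MU1 * capacity(pr, s2) + MU2 * capacity(pr, s1) - GAMMA * pr,
    }
    best = max(metrics, key=metrics.get)
    powers = {"m1": p1, "m2": p2, "m6": pr}
    return best, powers[best], metrics

# rayleigh block fading: exponentially distributed channel gains with unit mean
random.seed(1)
for i in range(5):
    s1, s2 = random.expovariate(1.0), random.expovariate(1.0)
    mode, power, metrics = select_mode(s1, s2)
    print(f"slot {i}: s1={s1:.2f} s2={s2:.2f} -> {mode} with power {power:.2f}")

in the actual protocol the thresholds would be tuned, eg via the three dimensional search mentioned above, so that the long term rate balance and average power constraints of the optimization problem are met; the sketch only illustrates the per slot structure of metric evaluation and mode selection.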
in this paper we consider the problem of sum rate maximization in a bidirectional relay network with fading hereby user 1 and user 2 communicate with each other only through a relay ie a direct link between user 1 and user 2 is not present in this network there exist six possible transmission modes four point to point modes user 1to relay user 2to relay relay to user 1 relay to user 2 a multiple access mode both users to the relay and a broadcast mode the relay to both users most existing protocols assume a fixed schedule of using a subset of the aforementioned transmission modes as a result the sum rate is limited by the capacity of the weakest link associated with the relay in each time slot motivated by this limitation we develop a protocol which is not restricted to adhere to a predefined schedule for using the transmission modes therefore all transmission modes of the bidirectional relay network can be used adaptively based on the instantaneous channel state information csi of the involved links to this end the relay has to be equipped with two buffers for the storage of the information received from users 1 and 2 respectively for the considered network given a total average power budget for all nodes we jointly optimize the transmission mode selection and power allocation based on the instantaneous csi in each time slot for sum rate maximization simulation results show that the proposed protocol outperforms existing protocols for all signal to noise ratios snrs specifically we obtain a considerable gain at low snrs due to the adaptive power allocation and at high snrs due to the adaptive mode selection
introduction system model joint mode selection and power allocation simulation results conclusion proof of theorem 2 (mode selection protocol) proof of optimality of binary relaxation threshold regions
the arecibo l band feed array zone of avoidance alfa zoa survey searches for 21cm line emission from neutral hydrogen h in galaxies behind the disk of the milky way the survey uses the alfa receiver on the 305m arecibo radio telescope this region of the sky is termed the zone of avoidance by extragalactic astronomers because of its low galaxy detection rate extragalactic observations at visual wavelengths struggle with high extinction levels near and far infrared observations suffer confusion with galactic stars dust and gas 21cm line observations are sensitive to late type galaxies in general and are not affected by extinction as a spectral line survey we generally only have confusion with galactic h within approximately xmath9100 xmath0 the alfa zoa survey is sensitive to galaxies behind the milky way that go undetected at other wavelengths it has been suggested by loeb and narayan 2008 that undiscovered mass behind the milky way may explain the discrepancy between the cosmic microwave background dipole and what is expected from the gravitational acceleration imparted on the local group by matter in the local universe erdogdu et al 2006 two large area h zoa surveys have preceded alfa zoa the dwingeloo obscured galaxies survey and the hi parkes zone of avoidance survey hizoa the dwingeloo survey detected 43 galaxies in the northern hemisphere within xmath10 of the galactic plane it was sensitive only to nearby massive objects because of its relatively high noise level of 40 mjy beamxmath11 with velocity resolution of 4 km sxmath11 henning et al more recently hizoa covered decl 90 to 25 at 6 mjy beam rms with velocity resolution of 27 km s and detected about 1000 galaxies donley et al 2005 henning et al 2000 2005 shafi 2008 the alfa zoa survey is being conducted in two phases a shallow and a deep phase the shallow phase rms 5 mjy with velocity resolution of 10 km s covers 900 square degrees through the inner galaxy xmath12 xmath13 and is expected to detect 500 galaxies hundreds of galaxies have been detected so far and data reduction and analysis are ongoing this is complemented by a deep survey xmath12 xmath14 xmath15 5 times more sensitive in which we expect to detect thousands of galaxies based on the himf of davies et al 2011 but for which observations are not yet complete this paper presents the discovery and the results from follow up observations of a nearby galaxy alfa zoa j1952 1428 section 2 describes the discovery and follow up with the arecibo radio telescope section 3 describes follow up observations with the expanded very large array evla section 4 describes ongoing optical follow up with the 09m southeastern association for research in astronomy sara telescope section 5 discusses the results from these observations alfa zoa j1952 1428 was initially detected with the shallow portion of the alfa zoa survey observations were taken with the mock spectrometer covering 300 mhz bandwidth in two 170 mhz sub bands of 8192 channels each giving a hanning smoothed velocity resolution of 10 xmath0 at z 0 the survey uses a meridian nodding mode observation technique the telescope slews up and down in zenith angle along the meridian for an effective 8 second integration time per beam giving rms 5 mjy per beam observations were taken in 2008 and 2009 the angular resolution of the survey is 34xmath16 more details of the alfa zoa survey techniques are presented by henning et al 2010 in order to confirm this detection it was followed up with the l band wide receiver on the arecibo telescope 
for 180 seconds of integration time using a total power on off observation data were taken with the wapp spectrometer with 4096 channels across a bandwidth of 25 mhz giving a velocity resolution of 13 kmsxmath11 and rms 25 mjy the spectrum from the follow up observation can be seen in figure 1 the velocity width at 50 peak flux is xmath17 xmath9 2 xmath0 the heliocentric velocity measured at the mid point of the velocity width is xmath18 xmath0 the integrated flux density is xmath19 094 xmath9 007 jy xmath0 errors were calculated as in henning et al 2010 following the methods of koribalski et al alfa zoa j1952 1428 has no cataloged counterparts within xmath20 two arecibo half power beamwidths in the nasa extragalactic database ned follow up c configuration evla observations were carried out to obtain high resolution h imaging of alfa zoa j1952 1428 the observations were scheduled dynamically for 3 xmath21 1 hour sessions and observed on december 3rd and 4th 2010 we utilized the widar correlator with 2 mhz bandwidth over 256 spectral channels resulting in 78 khz 16 xmath0 channel width the on source integration time was two hours the source 3c48 was used to calibrate the flux density scale and the source j1925 2106 xmath22 from the target source was used to calibrate the complex gains the editing calibration deconvolution and processing of the data were carried out in aips line free channels were extracted from the spectral line data cube and averaged to image the continuum in the field of the h source and to refine the phase and amplitude calibration the resulting phase and amplitude solutions were applied to the spectral line data set and a continuum free uv data cube was constructed by subtracting the continuum emission we then created a total intensity stokes i h image cube that was cleaned using natural weighting giving a synthesized beamwidth of xmath23 and an rms noise level of 26 mjy beamxmath11 channelxmath11 moment 0 h flux density moment 1 velocity field and moment 2 velocity dispersion maps were produced from the h image cube by smoothing across 3 velocity channels 5 xmath0 and 5 pixels spatially xmath24 at xmath25 per pixel and clipping at 26 mjy the 1xmath26 level of the unsmoothed cube these maps can be seen in figure 2 the angular extent of the h out to 1 mxmath6 pcxmath27 is xmath28 the h flux density shows a main peak and a secondary peak xmath29 away that overlaps a region of high velocity as well as significant velocity dispersion the velocity field shows structure but non uniform rotation the integrated flux from the arecibo and the evla spectra are 094 xmath9 007 jy km sxmath11 and 080 xmath9 013 jy km sxmath11 respectively the evla recovered all integrated flux to within 1xmath26 a comparison of the h profile between arecibo and the evla can be seen in figure 1 digitized sky survey dss images show what looks to be a very faint uncataloged galaxy that may be the optical counterpart the dss magnitudes of this object from supercosmos are mxmath1 175 xmath9 03 mag mxmath30 170 xmath9 03 mag the extinction in the area is relatively low for the zoa with values estimated to be axmath1 11 and axmath30 07 from the dirbexmath31 extinction maps schlegel finkbeiner davis 1998 though these values are somewhat uncertain at such low galactic latitudes applying extinction corrections gives xmath32 01 mag in order to obtain more accurate photometry alfa zoa j1952 1428 was observed using a bessell xmath2 band filter on april 12 2011 with the 09m sara telescope at kitt peak national 
observatory using an apogee alta u 42 2048 xmath21 2048 ccd the field of view was 138xmath33 xmath21 138xmath33 giving a plate scale of xmath34 pixelxmath11 the source was low on the horizon with an average airmass of 17 and an average seeing of xmath35 nine 5minute exposures were taken on source for a total exposure time of 45 minutes and calibration was done using the equatorial standard star pg1657 078a landolt 1992 the ccd images were bias subtracted dark corrected flat fielded and co added in iraf the image can be seen in figure 2 the apphot package was used for standard star photometry the reduced image reached a 1xmath26 surface brightness level of 25 mag arcsecxmath27 astrometric calibration and aperture photometry of alfa zoa j1952 1428 were carried out interactively with the graphical astronomy and image analysis tool gaia flux from the galaxy could be recovered out to a radius of xmath36 reaching a surface brightness of 235 mag arcsecxmath27 after which stellar contamination became significant the recovered flux within this radius was mxmath1 xmath37 magnitudes optical follow up observations with the sara telescope are ongoing xmath2 xmath38 and xmath3 band and h alpha observations are planned over the coming months xmath2 xmath38 and xmath3 band observations are being taken to a 1xmath26 surface brightness of 265 mag arcsecxmath27 this should be sufficient to see low surface brightness features that correspond to the faint outer parts of a normal spiral and will allow us to measure the galaxy s diameter at the standard 25 mag arcsecxmath27 level alfa zoa j1952 1428 has a heliocentric velocity of xmath39 279 xmath0 solving for its local group centered velocity using derivations of the solar motion with respect to the local group by courteau xmath40 van den bergh 1999 gives xmath41 491 xmath0 using hubble s law with xmath42 70 xmath43 puts this source at a distance of 7 mpc however hubble s law is not a reliable distance indicator here because the dispersion of peculiar velocities in the local universe xmath44 xmath0 is xmath45 km sxmath11 masters 2008 the galaxy is probably not closer than 3 mpc as the h linear size at this distance would be smaller than most compact galaxies containing h huchtmeier et al 2007 for the following analysis we take the distance to be 7 mpc although future observations may well revise this number as can be seen in the evla and sara images the h peak is slightly offset xmath46 from the optical emission indicating either a false counterpart or a disturbed h distribution the offset is xmath47300 pc at 7 mpc which is not uncommon even for isolated galaxies cf xmath47400 pc offset in vv 124 bellazzini et al this could conceivably be a pair of low surface brightness dwarf galaxies cf hizss 3 with separation of xmath47900 pc begum et al 2005 but there is no evidence for a second peak in the high signal to noise h spectrum shown in figure 1 further alfa zoa j1952 1428 has half the velocity width that the pair in hizss 3 appeared to have wxmath48 55 km sxmath11 for hizss 3 henning et al 2000 compared to wxmath48 28 km sxmath11 here any second galaxy would have to be much closer both spatially and in velocity than the pair in hizss 3 in order to escape detection deeper interferometric observations would be needed to be entirely conclusive it is possible that alfa zoa j1952 1428 is a high velocity cloud hvc co incident with an optical source cf hipass j1328 30 grossi et al 2007 though this is unlikely as there is strong evidence that alfa zoa j1952 1428 is not 
an hvc its recessional velocity does not lie near hvcs in this part of the sky cf figure 3a in morras et al the nearest population of hvcs is the smith cloud which lies xmath49 and 170 xmath0 away lockman et al 2008 at its nearest point if alfa zoa j1952 1428 were an hvc it would be a remarkable outlier furthermore the velocity field of alfa zoa j1952 1428 shows a gradient ten times larger than those of hvcs begum et al 2010 alfa zoa j1952 1428 appears to be a dwarf galaxy judging from its gaussian h profile and low h mass at a distance of 7 mpc xmath50 xmath51 mxmath6 which is significantly lower than the gaseous content of spiral type galaxies roberts and haynes 1994 also its low luminosity xmath7 xmath8 lxmath6 at 7 mpc h content and blue colors are strong evidence that it is not an early type galaxy there is no possible counterpart visible in 2mass archive images or listed within 8xmath33 in the 2mass extended source catalog jarrett et al we plan follow up nir observations later this year with the 14m infrared survey facility in sutherland south africa we will use observations by its main instrument sirius which has three detectors that operate simultaneously xmath52 xmath53 xmath54 with a field of view of 78 xmath21 78 arcmin table 1 summarizes the observational data and derived quantities columns 1 and 2 give equatorial coordinates j2000 for the h peak columns 3 and 4 give the galactic coordinates column 5 gives the heliocentric velocity from the mid point of the velocity width at 50 peak flux column 6 gives the velocity width at 50 peak flux column 7 gives the integrated flux column 8 gives the mxmath4lxmath1 ratio using the mxmath1 calculated from the sara telescope the error on lxmath1 is dominated by the unknown uncertainty in axmath1 thus we do not quote an error on mxmath4lxmath1 column 9 gives the angular size of the h out to the 1 mxmath6 pcxmath27 level the last two columns give values as a function of the distance to the galaxy in mpc column 10 gives the linear size of the h at its largest extent column 11 gives total h mass cols compared to other dwarf galaxies roberts haynes 1994 oneil et al 2000 alfa zoa j1952 1428 is not particularly gas rich xmath55 03 mxmath6lxmath6 but it is very small and blue with an h linear size of 14 xmath21 13 kpc at 7 mpc and xmath2 xmath3 01 mag the h mass xmath56 ratio blue optical colors and linear size of alfa zoa j1952 1428 are similar to those of blue compact dwarf bcd galaxies huchtmeier et al bcds are small blue irregular dwarf galaxies which have low surface brightness features ongoing star formation and higher metallicities than typical dwarf galaxies the velocity field of alfa zoa j1952 1428 shows structure but non uniform rotation which is common in blue compact dwarf galaxies ramya et al the velocity dispersion map shown in figure 2 shows a significant amount of dispersion around the stellar looking object on the left side of the galaxy this could be an ionized hydrogen region h alpha observations with the sara telescope will examine this and quantify the star formation in this system deep xmath2 xmath38 and xmath3 band observations with the sara telescope will reveal whether there are low surface brightness features in alfa zoa j1952 1428 alternatively there is evidence for the existence of blue metal poor gas rich dwarf galaxies on the margins of galaxy groups grossi et al these dwarfs are old 2 10 gyrs but have had remarkably little star formation in their history they are thought to be galaxies in transition between dwarf 
irregular and dwarf spheroidal galaxies alfa zoa j1952 1428 differs from the grossi et al galaxies because it has a lower xmath56 ratio and appears to be a field galaxy though it may be a part of a group behind the milky way that has not yet been discovered there is a recently discovered local group galaxy vv124 bellazzini et al 2011 which is similar to alfa zoa j1952 1428 in size h mass and xmath55 ratio this galaxy is isolated as alfa zoa j1952 1428 appears to be there is only one galaxy 38xmath57 away with xmath58 xmath0 within 10xmath57 of alfa zoa j1952 1428 in ned though this is not unusual in the zoa vv124 also shows an offset between the h and the optical counterpart as well as a velocity field with structure but non uniform rotation vv124 is considered to be a precursor of modern dwarf spheroidal galaxies that did not undergo an interaction driven evolutionary path follow up observations will reveal whether alfa zoa j1952 1428 has metallicity and star formation rates similar to a vv124type galaxy it is possible that alfa zoa j1952 1428 is a local group galaxy but it appears unlikely alfa zoa j1952 1428 does not follow the relationship between radial velocity and angle from the solar apex that most other local group members do as can be seen in figure 3 courteau xmath40 van den bergh 1999 further the linear size of the h would be 210 pc at 1 mpc which is 4 times smaller than the smallest compact dwarfs huchtmeier et al 2007 there are no known galaxy groups within xmath59 with xmath60 1000 xmath0 fouqu et al 1992 making alfa zoa j1952 1428 either a field galaxy or a member of an undiscovered nearby group continued analysis of the alfa zoa data may clarify this begum a chengalur j n karachentsev i d sharina m e 2005 mnras 359 53 henning p a staveley smith l ekers r d green a j haynes r f juraszek s kesteven m j koribalski b kraan korteweg r c price r m sadler e m schrder a 2000 aj 119 2686 henning p a springob c m minchin r f momjian e catinella b mcintyre t p day f muller e koribalski b rosenberg j l schneider s staveley smith l van driel w 2010 aj 139 2130
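As a small worked illustration of the distance, mass, and size estimates discussed above (not part of the original analysis), the sketch below combines quantities quoted in the text (local-group-centered velocity ~491 km/s, H0 = 70 km/s/Mpc, Arecibo integrated flux 0.94 Jy km/s) with the standard single-dish relation M_HI = 2.356e5 D^2 S_int in solar masses. The HI angular extent used here (0.7 arcmin) is an assumed value, back-solved from the ~1.4 kpc linear size at 7 Mpc, since the measured extent is given only as an elided placeholder (xmath28) in the text.

```python
import math

H0 = 70.0                 # Hubble constant, km/s/Mpc
v_lg = 491.0              # local-group-centered velocity, km/s (from the text)
s_int = 0.94              # integrated flux density, Jy km/s (Arecibo value)
ang_size_arcmin = 0.7     # assumed HI angular extent (illustrative only)

d_mpc = v_lg / H0                                   # ~7 Mpc; unreliable at low v, see text
m_hi = 2.356e5 * d_mpc**2 * s_int                   # ~1e7 M_sun at 7 Mpc
size_kpc = d_mpc * 1e3 * math.radians(ang_size_arcmin / 60.0)  # linear HI size, kpc

print(f"D ~ {d_mpc:.1f} Mpc, M_HI ~ {m_hi:.2e} M_sun, HI size ~ {size_kpc:.2f} kpc")
```

With these inputs the numbers come out near 7 Mpc, 1e7 solar masses, and 1.4 kpc, consistent with the values quoted in the abstract; rescaling the distance rescales the mass as D^2 and the size as D, as in the last two columns of table 1.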
the arecibo l band feed array zone of avoidance alfa zoa survey has discovered a nearby galaxy alfa zoa j1952 1428 at a heliocentric velocity of 279 xmath0 the galaxy was discovered at low galactic latitude by 21cm emission from neutral hydrogen h we have obtained follow up observations with the evla and the 09m sara optical telescope the h distribution overlaps an uncataloged potential optical counterpart the h linear size is 14 kpc at our adopted distance of d 7 mpc but the distance estimate is uncertain as hubble s law is unreliable at low recessional velocities the optical counterpart has mxmath1 169 mag and xmath2 xmath3 01 mag these characteristics including mxmath4 10xmath5 mxmath6 and xmath7 xmath8 lxmath6 if at 7 mpc indicate that this galaxy is a blue compact dwarf but this remains uncertain until further follow up observations are complete optical follow up observations are ongoing and near infrared follow up observations have been scheduled
introduction arecibo observations and results evla observations, data reduction, and analysis optical observations, data reduction, and analysis discussion acknowledgments
among the great variety of the works devoted to random motions at finite speed in the euclidean spaces xmath8 see xcite xcite xcite xcite for the markovian case and xcite xcite for different non markovian cases the markov random flight in the three dimensional euclidean space xmath1 is undoubtedly the most difficult and hard to study while in the low even dimensional spaces xmath9 and xmath10 the distributions of the motions were obtained in explicit form see xcite xcite and xcite respectively in the important three dimensional case only a few results are known the absolutely continuous part of the transition density of the symmetric markov random flight with unit speed in the euclidean space xmath1 was presented in formulas 13 and 421 therein it has an extremely complicated form of an integral with variable limits whose integrand involves the inverse hyperbolic tangent function this formula has so complicated a form that it can not even be evaluated by means of standard computer environments moreover the lack of the speed parameter in this formula somewhat impoverishes the model because it does not allow one to study the limiting behaviour of the motion under various scaling conditions under kac s condition for example the presence of both parameters ie the speed and the intensity of switchings in any process of markov random flight makes it undoubtedly the most adequate and realistic model for describing the finite velocity diffusion in the euclidean spaces these parameters can not be considered independent because they are connected with each other through the time namely the speed is the distance passed per unit of time and the intensity is the mean number of switchings per unit of time another question concerning the density presented in xcite is the infinite discontinuity at the origin xmath11 while the infinite discontinuity of the transition density on the border of the diffusion area is quite a natural property in some euclidean spaces of low dimensions see xcite for the euclidean plane xmath12 and the second term of formulas 13 and 421 xcite formula 312 in the space xmath1 the discontinuity at the origin looks somewhat strange and hard to explain the difficulty of analysing the three dimensional markov random flight and on the other hand the great theoretical and applied importance of the problem of describing the finite velocity diffusion in the space xmath1 suggest looking for other methods of studying this model that is why various asymptotic theorems yielding a good approximation would be a fairly desirable aim of the research such asymptotic results could be obtained by using the characteristic functions technique in the case of the three dimensional symmetric markov random flight some important results for its characteristic functions were obtained in particular the closed form expression for the laplace transform of the characteristic function was obtained by different methods in formulas 16 and 58 for unit speed and in formula 45 xcite for arbitrary speed a general relation for the conditional characteristic functions of the three dimensional symmetric markov random flight conditioned by the number of changes of direction was given in formula 38 the key point in these formulas is the possibility of evaluating the inverse laplace transforms of the powers of the inverse tangent functions in the complex right half plane this is the basic idea of deriving the series representations of the conditional characteristic functions corresponding to two and three changes of direction
given in section 3 based on these representations an asymptotic formula as time xmath5 for the unconditional characteristic function is obtained in section 4 and the error in this formula has the order xmath6 the inverse fourier transformation of the unconditional characteristic function yields an asymptotic formula for the transition density of the process which is presented in section 5 this formula shows that the density is discontinuous on the border but it is continuous at the origin xmath11 as it must be the unexpected and interesting peculiarity is that the conditional density corresponding to two changes of direction contains a term having an infinite discontinuity on the border of the diffusion area from this fact it follows that such conditional density is discontinuous itself on the border and this differs the 3d model from its 2d counterpart where only the conditional density of the single change of direction has an infinite discontinuity on the border the error in the obtained asymptotic formula has the order xmath6 in section 6we estimate the accuracy of the asymptotic formula and show that it gives a good approximation on small time intervals whose lengths depend on the intensity of switchings finally in appendices we prove a series of auxiliary lemmas that have been used in our analysis consider the stochastic motion of a particle that at the initial time instant xmath13 starts from the origin xmath14 of the euclidean space xmath1 and moves with some constant speed xmath15 note that xmath15 is treated as the constant norm of the velocity the initial direction is a random three dimensional vector with uniform distribution on the unit sphere xmath16 the motion is controlled by a homogeneous poisson process xmath17 of rate xmath3 as follows at each poissonian instant the particle instantaneously takes on a new random direction distributed uniformly on xmath18 independently of its previous motion and keeps moving with the same speed xmath15 until the next poisson event occurs then it takes on a new random direction again and so on let xmath19 be the particle s position at time xmath20 which is referred to as the three dimensional symmetric markov random flight at arbitrary time instant xmath20 the particle with probability 1 is located in the closed three dimensional ball of radius xmath21 centred at the origin xmath22 xmath23 consider the probability distribution function xmath24 of the process xmath4 where xmath25 is the infinitesimal element in the space xmath1 for arbitrary fixed xmath20 the distribution xmath26 consists of two components the singular component corresponds to the case when no poisson events occur on the time interval xmath27 and it is concentrated on the sphere xmath28 in this case at time instant xmath29 the particle is located on the sphere xmath30 and the probability of this event is xmath31 if at least one poisson event occurs on the time interval xmath32 then the particle is located strictly inside the ball xmath33 and the probability of this event is xmath34 the part of the distribution xmath26 corresponding to this case is concentrated in the interior xmath35 of the ball xmath33 and forms its absolutely continuous component let xmath36 be the density of distribution xmath37 it has the form xmath38 where xmath39 is the density in the sense of generalized functions of the singular component of xmath26 concentrated on the sphere xmath30 and xmath40 is the density of the absolutely continuous component of xmath26 concentrated in xmath41 the singular part 
of density struc2 is given by the formula xmath42 where xmath43 is the dirac delta function the absolutely continuous part of density struc2 has the form xmath44 where xmath45 is some positive function absolutely continuous in xmath41 and xmath46 is the heaviside unit step function given by xmath47 asymptotic behaviour of the transition density struc2 on small time intervals is the main subject of this research since its singular partis explicitly given by denss then our efforts are mostly concentrated on deriving the respective asymptotic formulas for the absolutely continuous component densac of the density our main tool is the characteristic functions technique because as it was mentioned above some closed form expressions for the characteristic functions both conditional and unconditional ones of the three dimensional symmetric markov random flight xmath4 are known in this section we obtain the series representations of the conditional characteristic functions corresponding to two and three changes of direction these formulas are the basis for our further analysis leading to asymptotic relations for the unconditional characteristic function and the transition density of the three dimensional symmetric markov random flight xmath4 on small time intervals the main result of this section is given by the following theorem theorem 1 the conditional characteristic functions xmath48 and xmath49 corresponding to two and three changes of direction are given respectively by the formulas xmath50 xmath51 xmath52 where xmath53 is bessel function xmath54 is the generalized hypergeometric function given by hypergeom54 see below and the coefficients xmath55 are given by the formula xmath56 02 cm proof it was proved in formula 38 that for arbitrary xmath20 the characteristic function xmath57 that is fourier transform xmath58 with respect to spatial variable xmath59 of the conditional density xmath60 of the three dimensional markov random flight xmath4 corresponding to xmath61 changes of directions is given by the formula xmath62boldsymbolalpha fracntn cvertboldsymbolalphavertn1 mathcal ls1 left left textarctg fraccvertboldsymbolalphaverts rightn1 rightt xmath63 where xmath64 is the inverse laplace transformation with respect to complex variable xmath65 and xmath66 is the right half plane of the complex plane xmath67 in particular in the case of two changes of directions xmath68 formula eq1 yields xmath69boldsymbolalpha frac2t2 cvertboldsymbolalphavert3 mathcal ls1 left left textarctg fraccvertboldsymbolalphaverts right3 rightt qquad boldsymbolalphainbbb r3 quad sinbbb c applying lemma b3 of the appendix b to the power of inverse tangent function in eq2 we obtain xmath70t frac2sqrtpi t2 sumk0infty fracgammaleft kfrac12 rightk 2k1 cvertboldsymbolalphavert2k qquad times 5f4left 111k kfrac12 kfrac12 kfrac12 frac32 2 1 right mathcal ls1 biggl frac1left s2 cvertboldsymbolalphavert2 rightk32 biggrt endaligned note that evaluating the inverse laplace transformation of each term of the series separately is justified because it converges uniformly in xmath65 everywhere in xmath71 and the complex functions xmath72 are holomorphic and do not have any singular points in this half plane moreover each of these functions contains the inversion complex variable xmath73 in a negative power and behaves like xmath74 as xmath75 and therefore all these complex functions rapidly tend to zero at infinity according to table 84 1 formula 57 we have xmath76t fracsqrtpigammaleft kfrac32 right left fract2cvertboldsymbolalphavert 
rightk1 jk1ctvertboldsymbolalphavert substituting this into eq3 after some simple calculations we obtain char2 for xmath77 formula eq1 yields xmath78boldsymbolalpha frac3t3 cvertboldsymbolalphavert4 mathcal ls1 left left textarctg fraccvertboldsymbolalphaverts right4 rightt qquad boldsymbolalphainbbb r3 quad sinbbb c applying lemma b4 of the appendix b to the power of inverse tangent function in eq4 and taking into account that xmath79t fracsqrtpik1 left fract2cvertboldsymbolalphavert rightk32 jk32ctvertboldsymbolalphavert we obtain xmath80t 3pi32 sumk0infty fracgammak ctvertboldsymbolalphavertk322k32 k1 jk32ctvertboldsymbolalphavert endaligned where the coefficients xmath55 are given by coef1 the theorem is proved xmath81 the series in formulas char2 and char3 are convergent for any fixed xmath20 however this convergence is not uniform in xmath82 therefore we can not invert each term of these series separately moreover one can see that the inverse fourier transform of each term does not exist for xmath83 thus while there exist the inverse fourier transforms of the whole series char2 and char3 it is impossible to invert their terms separately and therefore we can not obtain closed form expressions for the respective conditional densities these formulas can nevertheless be used for obtaining the important asymptotic relations and this is the main subject of the next sections using the results of the previous section we can now present an asymptotic relation on small time intervals for the characteristic function xmath84 of the three dimensional symmetric markov random flight where xmath85 are the conditional characteristic functions corresponding to xmath86 changes of direction this result is given by the following theorem theorem 2 for the characterictic function xmath87 of the three dimensional markov random flight xmath4 the following asymptotic formula holds xmath88 qquadqquadqquad fraclambda2 tcvertboldsymbolalphavert j1ctvertboldsymbolalphavert fraclambda3 sqrtpi t322 cvertboldsymbolalphavert32 j32ctvertboldsymbolalphavertbiggr ot3 endaligned xmath52 where xmath89 and xmath90 are the incomplete integral sine and cosine respectively given by the formulas xmath91 02 cm proof we have xmath92 since all the conditional characteristic functions are uniformly bounded in both variables that is xmath93 then xmath94 and therefore xmath95 in view of char2 we have xmath96 endaligned from the asymptotic formula xmath97 we get xmath98 and therefore xmath99 thus we obtain the following asymptotic relation xmath100 similarly according to char3 we have xmath101 endaligned in view of asbes we have xmath102 and therefore xmath103 thus taking into account that xmath104 see coef1 we arrive at the formula xmath105 since see formula 311 xmath106 and xmath107 that is characteristic function of the uniform distribution on the surface of the three dimensional sphere of radius xmath21 then by substituting these formulas as well as eq8 and eq9 into eq7 we finally obtain asymptotic relation eq6 the theorem is completely proved asymptotic formula eq6 for the unconditional characteristic function enables us to obtain the respective asymptotic relation for the transition density of the process xmath4 this result is given by the following theorem theorem 3 for the transition density xmath108 of the three dimensional markov random flight xmath4 the following asymptotic relation holds xmath109 thetactvertbold xvert ot3 endaligned xmath110 02 cm proof applying the inverse fourier transformation xmath111 to both sides of 
eq6 we have xmath112bold x qquadquad mathcal fboldsymbolalpha1 biggl fraclambdac2 t vertboldsymbolalphavert2 biggl sinctvertboldsymbolalphavert textsi2ctvertboldsymbolalphavert cosctvertboldsymbolalphavert textci2ctvertboldsymbolalphavert biggr biggrbold x qquadquad mathcal fboldsymbolalpha1 biggl fraclambda2 tcvertboldsymbolalphavert j1ctvertboldsymbolalphavert biggrbold x qquadquad mathcal fboldsymbolalpha1 biggl fraclambda3 t2cvertboldsymbolalphavert2 left fracsinctvertboldsymbolalphavertctvertboldsymbolalphavert cosctvertboldsymbolalphavert right biggrbold x biggr ot3 endaligned note that here we have used the fact that due to the continuity of the inverse fourier transformation the asymptotic formula xmath113bold x ot3 holds let us evaluate separately the inverse fourier transforms on the right hand side of dens2 the first one is well known see xcite xmath114bold x frac14pi ct2 deltac2t2vertbold xvert2 that is the uniform density concentrated on the surface of the sphere xmath115 of radius xmath21 centred at the origin xmath11 the second fourier transform on the right hand side of dens2 is also well known see the theorem or formulas 311 and 312 xmath116bold x hskip 4 cm fraclambda4pi c2 t vertbold xvert lnleft fracctvertbold xvertctvertbold xvert right thetactvertbold xvert endaligned applying the hankel inversion formula we have for the third fourier transform on the right hand side of dens2 xmath117bold x fraclambda2 tc 2pi32 vertbold xvert12 int0infty j12vertbold xvert xi xi32 xi1 j1ctxi dxi taking into account that xmath118 and applying formula 212152 we have xmath119bold x fraclambda2 t2pi2 c vertbold xvert int0infty sinvertbold xvert xi j1ctxi dxi fraclambda2 t2pi2 c vertbold xvert c2t2vertbold xvert212 left fracvertbold xvertct right thetactvertbold xvert fraclambda22pi2 c2 sqrtc2t2vertbold xvert2 thetactvertbold xvert endaligned this is a fairly unexpected result showing that the conditional density xmath120 corresponding to two changes of direction has an infinite discontinuity on the border of the three dimensional ball xmath33 this property is similar to that of the conditional density xmath121 corresponding to the single change of direction for the respective joint density see dens4 applying the hankel inversion formula and taking into account bessin we have for the fourth term on the right hand side of dens2 xmath122bold x fraclambda3 sqrtpi t322 c32 2pi32 vertbold xvert12 int0infty j12vertbold xvert xi xi32 xi32 j32ctxi dxi fraclambda3 sqrt2 t328c32 pisqrtpi vertbold xvert int0infty xi12 sinvertbold xvert xi j32ctxi dxi endaligned using formula 66991 we obtain xmath123bold x fraclambda3 sqrt2 t328c32 pisqrtpi vertbold xvert frac212 sqrtpi vertbold xvert ct32gamma1 thetactvertbold xvert fraclambda38pi c3 thetactvertbold xvert endaligned substituting now dens3 dens4 dens5 and dens6 into dens2 we arrive at dens1 the theorem is proved xmath81 at instant xmath124 xmath125 for xmath126 on the interval xmath127width377height302 1 cm the shape of the absolutely continuous part of density dens1 at time instant xmath124 for xmath128 on the interval xmath127 is plotted in fig the error in these calculations does not exceed 0001 we see that the density increases slowly as the distance xmath129 from the origin xmath11 grows while near the border this growth becomes explosive from this factit follows that for small time xmath29 the greater part of the density is concentrated outside the neighbourhood of the origin xmath11 and this feature of the three dimensional markov random flight 
is quite similar to that of its two dimensional counterpart the infinite discontinuity of the density on the border xmath130 is also similar to the analogous property of the two dimensional markov random flight see for comparison formula 20 and figure 2 therein notethat density dens1 is continuous at the origin as it must be remark 2 using dens1 we can derive an asymptotic formula as xmath5 for the probability of being in a subball xmath131 of some radius xmath132 centred at the origin xmath11 applying formula 4642 and formula 15131 we have xmath133 this series can be expressed through the special lerch xmath134function applying again formula 4642 we get xmath135 where we have used the easily checked equality xmath136 then by integrating the absolutely continuous part of dens1 over the ball xmath137 and taking into account dens7 and dens8 we have for arbitrary xmath132 xmath138 elambda t biggl fraclambda4pi c2 t 8pi r ct sumk1infty frac14k2 1 left fracr2c2t2 rightk qquadqquad fraclambda22pi2 c2 biggl 2pi ct2 arcsinleft fracrct right 2pi r sqrtc2t2r2 biggr fraclambda38pi c3 frac43 pi r3 biggr endaligned and after some simple computations we finally arrive at the following asymptotic formula for xmath132 xmath139 qquad tto 0 endaligned the error in asymptotic formula dens1 has the order xmath6 this means that for small xmath29 this formula yields a fairly good accuracy to estimate it let us integrate the function in square brackets of dens1 over the ball xmath33 for the first term in square brackets of dens1 we have xmath140 because the second integrand is the conditional density corresponding to the single change of direction see the theorem or formula 312 and therefore the second integral is equal to 1 applying formula 4642 we have for the second term in square brackets of dens1 xmath141 for the third term in square brackets of dens1 we get xmath142 hence in view of est1 est2 and est3 the integral of the absolutely continuous part in asymptotic formula dens1 is xmath143 dx1 dx2 dx3 elambda t left lambda t fraclambda2t22 fraclambda3 t36 right endaligned note that est4 can also be obtained by passing to the limit as xmath144 in asymptotic formula dens9 on the other hand according to struc1 and densac the integral of the absolutely continuous part of the transition density of the three dimensional markov random flight xmath4 is xmath145 the difference between the approximating function xmath146 and the exact function xmath147 given by est4 and est5 enables us to estimate the value of the probability generated by all the terms of the density aggregated in the term xmath6 of asymptotic relation dens1 the shapes of functions xmath147 and xmath146 on the time interval xmath148 for the values of the intensity of switchings xmath149 are presented in figures 2 and 3 we see that for xmath150 the function xmath146 yields a very good coincidence with function xmath147 on the subinterval xmath151 fig 2 left while for xmath152 fig 2 right such coincidence is good only on the subinterval xmath153 the same phenomenon is also clearly seen in figure 3 where for xmath154 the function xmath146 yields a very good coincidence with function xmath147 on the subinterval xmath155 fig 3 left while for xmath156 such good coincidence takes place only on the subinterval xmath157 fig 3 right thus we can conclude that the greater is the intensity of switchings xmath7 the shorter is the subinterval of coincidence this fact can easily be explained really the greater is the intensity of switchings xmath7 the shorter is the 
time interval on which no more than three changes of directions can occur with big probability this means that for increasing xmath7 the asymptotic formula dens1 yields a good accuracy on more and more small time intervals however for arbitrary fixed xmath7 there exists some xmath158 such that formula dens1 yields a good accuracy on the time interval xmath159 and the error of this approximation does not exceed xmath160 this is the essence of the asymptotic formula dens1 appendices in the following appendices we establish some lemmas that have been used in the proofs of the above theorems note that some of them are of a separate mathematical interest because no similar results can be found in the mathematical handbooks lemma a1 for arbitrary integer xmath161 and for arbitrary real xmath162 the following formula holds xmath163 xmath164 02 cm proof using the well known relations for pochhammer symbol xmath165 and the formula for euler gamma function xmath166 we can easily check that the sum on the left hand side of appa1 is xmath167 where xmath168 is the generalized hypergeometric function according to item 744 page 539 formula 88 xmath169 substituting this into appa3 we obtain appa1 the lemma is proved in this appendix we derive series representations for some powers of the inverse tangent function that have been used in the proofs of the above theorems moreover these results are of a more general mathematical interest because to the best of the author s knowledge there are no series representations similar to appb2 appb4 and appb6 see below in mathematical handbooks including xcite xcite xcite xmath184 substituting these coefficients into appb3 we obtain appb2 the uniform convergence of the series in formula appb2 can be established similarly to that of lemma b1 this completes the proof of the lemma xmath81 lemma b3 for arbitrary xmath185 the following series representation holds xmath186 where xmath187 is the generalized hypergeometric function the series in appb4 is convergent uniformly in xmath172 02 cm proof from appb1 and appb2 it follows that xmath188 where the coefficients xmath55 are given by xmath189 applying apa3 appa2 and the formula xmath190 after some simple computations we arrive at the relation xmath191 substituting these coefficients into appb5 we obtain appb4 the lemma is proved xmath81 lemma b4 for arbitrary xmath177 the following series representation holds xmath192 where the coefficients xmath55 are given by the formula xmath193 the series in appb6 is convergent uniformly in xmath172 proof according to lemma b2 we have xmath194 where the coefficients xmath195 are xmath196 frac2k2 suml0k fracl k ll1 gammaleft lfrac32 right gammaleft k lfrac32 right endaligned substituting this into appb7 we get the statement of the lemma xmath81
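Because the process is defined constructively (uniform initial direction on the unit sphere, constant speed, a new uniform direction at each epoch of a homogeneous Poisson process of rate lambda), the small-time asymptotics derived above can also be checked numerically by direct Monte Carlo simulation. The sketch below is an illustrative simulation of X(t) and of the probability of lying in a subball of the diffusion ball; it is not code from the paper, and the parameter values are arbitrary.

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on the unit sphere S^2."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def sample_position(t, c, lam, rng):
    """One sample of the 3D Markov random flight X(t)."""
    x = [0.0, 0.0, 0.0]
    d = random_unit_vector(rng)
    elapsed = 0.0
    while True:
        tau = rng.expovariate(lam)          # time to the next direction switch
        dt = min(tau, t - elapsed)
        x = [xi + c * dt * di for xi, di in zip(x, d)]
        elapsed += dt
        if elapsed >= t:
            return x
        d = random_unit_vector(rng)         # new uniform direction

def prob_in_ball(r, t, c=1.0, lam=1.0, n=200000, seed=1):
    """Monte Carlo estimate of P(|X(t)| < r), for comparison with the asymptotics."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = sample_position(t, c, lam, rng)
        if math.dist(x, (0.0, 0.0, 0.0)) < r:
            hits += 1
    return hits / n

if __name__ == "__main__":
    # a small time and a radius inside the diffusion ball of radius c*t
    print(prob_in_ball(r=0.3, t=0.5, c=1.0, lam=2.0))
```

Such a simulation gives an independent check of the accuracy estimates of section 6: for small lambda*t the empirical probability should agree with the asymptotic expression up to terms of the stated order.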
we consider the markov random flight xmath0 in the three dimensional euclidean space xmath1 with constant finite speed xmath2 and the uniform choice of the initial and each new direction at random time instants that form a homogeneous poisson flow of rate xmath3 series representations for the conditional characteristic functions of xmath4 corresponding to two and three changes of direction are obtained based on these results an asymptotic formula as xmath5 for the unconditional characteristic function of xmath4 is derived by inverting it we obtain an asymptotic relation for the transition density of the process we show that the error in this formula has the order xmath6 and therefore it gives a good approximation on small time intervals whose lengths depend on xmath7 an estimate of the accuracy of the approximation is analysed asymptotic relation for the transition density of the three dimensional markov random flight on small time intervals alexander d kolesnik institute of mathematics and computer science academy street 5 kishinev 2028 moldova e mail kolesnik@math.md keywords markov random flight persistent random walk conditional density fourier transform characteristic function asymptotic relation transition density small time intervals ams 2010 subject classification 60k35 60k99 60j60 60j65 82c41 82c70
introduction description of the process and structure of distribution conditional characteristic functions asymptotic formula for characteristic function asymptotic relation for the transition density estimate of the accuracy auxiliary lemma powers of the inverse tangent function
the dimensionality of the 115 materials cerhinxmath1 ceirinxmath1 and cecoinxmath1 appears to be related to their superconducting transition temperature the material with the highest txmath2 cecoinxmath0 has the most 2d like fermi surface fs of the three xcite cerhinxmath0 has a high txmath2 xmath321 k but only under a pressure of xmath316 kbar at ambient pressures cerhinxmath0 is an anti ferromagnet the fs of cerhinxmath0 was the subject of one of our recent publications xcite in order to confirm the link between the superconducting state and fs dimensionality the fs as a function of pressure in cerhinxmath0 should be measured if the fs becomes more 2d like as the critical pressure is approached then this will be evidence for making a connection in these materials it seems that superconductivity does not appear until the overlap between the f electron wavefunctions is sufficient to allow band like behavior measurements of the fs as a function of pressure should show this increasing overlap as a change in topography here we present measurements up to 79 kbar about half the critical pressure for cerhinxmath1 we have designed and built small pressure cells capable of running in a dilution refrigerator and in a rotator measuring torque inside a pressure cell is impossible so we have made small compensated pickup coils which fit into the cell each coil has four to five thousand turns the filling factor approaches unity because we are able to situate the coil along with the sample inside the cell a small coil is wound on the exterior of the cell to provide an ac modulation of the applied field we have measured the fs of cerhinxmath0 under several pressures at each pressure we measure fs frequencies and their amplitude dependence as a function of temperature from this we can extract information about how the effective mass of the quasiparticles is changing as the pressure is increased the figures show the fourier spectra of cerhinxmath0 under xmath379 kbar the crystal was oriented so that the a b axis plane is perpendicular to the applied field comparison of the data at xmath379 kbar with the ambient pressure data measured in the pressure cell prior to pressurization reveals little that is suggestive of change we show the 79 kbar data compared with two sets of data taken at ambient pressure in fig highfft the fs at 79 kbar is compared with the ambient data taken with a torque cantilever the same data reported in xcite because the modulation field for the ac measurements in the pressure cell was so small the lowest frequencies can be ignored notice that the 1411 t fxmath4 the designation given in ref xcite and 1845 t peaks are reproduced exactly in the ambient and the pressure data sets the 1845 t peak was not included in ref xcite because of its small amplitude in ambient pressure torque measurements the 3600 t fxmath5 and 6120 t fxmath6 peaks are present in both data sets however the fxmath5 appears to have split and the fxmath6 appears to have shifted down in frequency such changes could be explained as slight differences in sample alignment with respect to the applied field between the torque measurement and the pressure cell measurement three other frequencies 2076 t 2710 t and 4613 t emerge in the pressure data which are close to some reported in ref xcite to be observed only at the lowest temperatures 25 mk all but the first of these frequencies are seen also in ambient pressure data taken with the sample in the pressure cell prior to pressurization as shown in fig lowfft thus assuming the differences in frequency
between the torque measurements and pressure cell measurements are due to differences in alignment we can make frequency assignments that follow ref xcite also shown in fig lowfft the relative increase in amplitude with increasing pressure of these three peaks could be a result of the increase of the coupling factor between the sample and the coil as the two are compressed together the lack of any clear differences in the fs up to 79 kbar suggests that if the fs changes then such change is not a linear function of pressure nor is there a compelling reason to think that it should be a linear function possibly at some pressure closer to the critical pressure the transition to f electron itinerant behavior will take place leading to more noticeable changes in the fs the fs of cerhinxmath1 appears to remain topographically stable under the application of pressure up to 79 kbar additional measurements which approach the critical pressure xmath316 kbar are of prime importance this work was performed at the national high magnetic field laboratory which is supported by nsf cooperative agreement no dmr9527035 and by the state of florida work at los alamos was performed under the auspices of the u s dept of energy hall d palm e c murphy t p tozer s w miller ricci e peabody l quay c li h alver u goodrich r g sarrao j l pagliuso p g wills j m and fisk z 2001 phys rev b 64 064506 cond mat0011395
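For readers unfamiliar with the analysis behind figures such as fig highfft and fig lowfft, the sketch below shows a generic de Haas-van Alphen workflow (not the authors' code): interpolate the oscillatory signal onto a uniform grid in 1/B, Fourier transform to locate frequency peaks such as the 1411 T branch, and fit the temperature dependence of a peak amplitude with the standard Lifshitz-Kosevich reduction factor R_T = X/sinh(X), X = 14.69 m* T/B (m* in electron masses, T in kelvin, B in tesla) to extract the effective mass. The data below are synthetic and the numerical details are illustrative only.

```python
import numpy as np

def dhva_spectrum(field, signal, n_points=4096):
    """Return (dHvA frequency axis in tesla, |FFT|) of a signal sampled vs field."""
    inv_b = 1.0 / np.asarray(field)
    order = np.argsort(inv_b)
    inv_b, sig = inv_b[order], np.asarray(signal)[order]
    grid = np.linspace(inv_b[0], inv_b[-1], n_points)        # uniform grid in 1/B
    sig_u = np.interp(grid, inv_b, sig)
    sig_u -= np.polyval(np.polyfit(grid, sig_u, 2), grid)     # remove slow background
    window = np.hanning(n_points)
    amp = np.abs(np.fft.rfft(sig_u * window))
    freqs = np.fft.rfftfreq(n_points, d=grid[1] - grid[0])    # periodicity in 1/B -> tesla
    return freqs, amp

def lk_mass_factor(temperature, m_star, b_mean):
    """Lifshitz-Kosevich temperature factor R_T = X/sinh(X), X = 14.69 m* T / B."""
    x = 14.69 * m_star * np.asarray(temperature) / b_mean
    return x / np.sinh(x)

# usage sketch with synthetic data containing a single 1411 T orbit
b = np.linspace(10.0, 18.0, 8000)
sig = np.cos(2 * np.pi * 1411.0 / b)
f, a = dhva_spectrum(b, sig)
print("peak near", f[np.argmax(a[1:]) + 1], "tesla")
```

Fitting the measured peak amplitude at several temperatures to lk_mass_factor (times a temperature-independent prefactor) is the standard way the quasiparticle effective mass is tracked as the pressure is increased.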
measurements of the de haas van alphen effect have been carried out on the heavy fermion anti ferromagnet cerhinxmath0 at temperatures between 25 mk and 500 mk under pressure we present some preliminary results of our measurements to track the evolution of the fermi surface as the pressure induced superconducting transition is approached keywords de haas van alphen heavy fermions superconductivity high pressure
introduction results discussion conclusions
the presence of a surfactant film at a fluid fluid interface alters the dynamics of the interface this is manifested in behavior of the interfacial waves induced either externally or by thermal fluctuations xcite the interfacial dynamics can be probed by measuring the light scattered on such surface waves see the review by earnshaw xcite the scattering of light on surface waves is a powerful tool for probing the properties of surfactant films at fluid interfaces xcite and a variety of systems have been recently investigated using this method eg refs xcite see also the review by cicuta and hopkinson xcite recently the application of surfactant films to modify the interfacial properties has been extended to the systems in which one of the fluids is in liquid crystalline phase eg liquid crystal colloids xcite the presence of a liquid crystal as one of the fluids complicates the problem of probing the interfacial properties by studying the dynamics of the surface waves for the following reasons firstly there are additional degrees of freedom in the bulk of the liquid crystal phase due to its anisotropy secondly the interaction with the surfactant film is more complicated due to anisotropic anchoring finally the surfactant film in the anisotropic field created by the neighboring liquid crystal can itself show anisotropic behavior even if it behaves as an two dimensional isotropic fluid at the boundary between isotropic fluids a promising new direction for chemical and biological sensing devices has recently emerged which utilizes the properties of surfactant films self assembled on the interface between water and a nematic liquid crystal the surfactant film induces preferred orientation of the nematic director xcite the adsorption of chemical or biological molecules at such interface can then lead to reorientation of the nematic director enabling detection by an imaging system xcite in these methods easy detection is limited to the systems in which adsorption changes anchoring properties of the interface with respect to the adjacent liquid crystal phase quite considerably namely the equilibrium anchoring angle should change in magnitude the range of application of these systems could be made significantly broader however if a method were used that was sensitive to changes in the anchoring properties of the interface that did not necessarily result in nematic director reorientation for example the anchoring orientation may remain unchanged xcite the adsorption only changing the strength of the anchoring if a small amount of an analyte is present in the water it may be adsorbed at the surfactant layer provided the surfactant molecules possess appropriate chemical properties generally such adsorption will result in a change in the elastic and viscous properties of the interface hence sensitive experiments which are able to determine the interfacial properties will allow much more detailed experimental insight into the properties of the interaction between the surfactants and the analyte than has hitherto been available and experimental study of surface waves is a possible technique for this purpose the theoretical description of surface waves at interfaces between nematic and isotropic liquids was made back in 1970s xcite the results demonstrated that the spectrum of surface waves has a more complicated structure than in the isotropic case and allows the use surface scattering experiments to determine properties of nematic interfaces xcite since then several theoretical and experimental advances 
have been made and presently these systems remain a subject of investigation xcite the present paper presents a theoretical study of the dispersion of the surface waves at a monomolecular surfactant film between an isotropic liquid eg water and a nematic liquid crystalthe main distinguishing features of such interfaces are i the anchoring induced by the surfactant layer ii the curvature energy of the interface iii reduction of surface tension due to surfactant and iv the anisotropy of the surface viscoelastic coefficients we base our treatment on the mechanical model for anisotropic curved interfaces by rey xcite which takes into account anchoring and bending properties of the surfactant we consider the case of the insoluble surfactant film that is in its most symmetric phase isotropic two dimensional fluid and induces homeotropic normal to the surface orientation of the director the paper is organized as follows the continuum model used in the rest of the paperis set up in section sec model in section sec dispersion the dispersion relation for surface waves is derived in section sec modes the numerical solution of the dispersion relation is solved with typical values of material parameters and dispersion laws for different surface modes are analyzed in absence of the external magnetic field and the influence of the magnetic field is discussed in section sec field the explicit form of the dispersion relation is written in appendix app dispersion in this section we formulate the model of the surfactant laden interface between an isotropic liquid and a nematic liquid crystal used in the present paper and write down the governing equations we base our treatment upon the models of the nematic isotropic interface by rey xcite and well known hydrodynamic description of isotropic liquids xcite and nematic liquid crystals xcite we consider the case when the surfactant film induces homeotropic normal to the surface orientation of the nematic director which is usually true in a range of the surfactant concentrations xcite this case is the simplest to analyze and at the same time the most important for biosensing applications where the direct change in anchoring angle can not be always observed we include optional external magnetic field in our study and limit our analysis by considering the direction of the magnetic field that does not change equilibrium orientation of the nematic director we assume that the system is far enough from any phase transitions both in the surfactant film xcite and in the nematic phase xcite thus we avoid complications related to the fluctuations of the nematic and surfactant order parameters and the divergence of viscoelastic parameters near phase transitions the surfactant films can exhibit rich phase behavior xcite and the form of the surface stress tensor depends upon the symmetry of the interface however this does not normally influence much the dispersion laws of the surface modes compared to the isotropic case xcite in the present paperwe assume that the surfactant film is in the most symmetric phase isotropic two dimensional fluid although the symmetry of the film should break in presence of the adjacent liquid crystalline bulk phase the film remains isotropic in equilibrium if the anchoring of the nematic is homeotropic and symmetry breaking can occur only due to fluctuations of the director field if we introduce the order parameter for the film the corresponding anisotropic contributions to the interfacial stress tensor would be of higher order in the 
fluctuations of the dynamic variables than is required in our linearized treatment so such contributions can be omitted we consider a surfactant layer at an interface between nematic and isotropic liquids to be macroscopically infinitely thin we assume that the surfactant film is insoluble and newtonian this means that the model is applicable to systems in which the interchange of surfactant molecules between the interface and adjacent bulk fluids is small and the relaxation of the orientation of surfactant molecules is fast compared to relaxation of surface waves we also assume heat diffusion to be sufficiently fast so that the system is in thermal equilibrium we do not consider systems where other effects such as polarity are important we shall choose coordinate system in such a way that the unperturbed interface lies at a plane xmath0 the half space xmath1 is occupied by the uniaxial nematic liquid crystal and the half space xmath2 is filled by the isotropic liquid other details of the geometry used in the present paper are summarized in appendix app geometry the central equations in the present section are the conditions for the balance of forces eq eq forcebalance and torques eq eq torquebalance at the interface the explicit form of these equations depends upon the chosen macroscopic model and the rest of this section is devoted to formulation of the model used in the present paper the interfacial force balance equation is the balance between the interfacial force and the bulk stress jump xmath3 here xmath4 is the force per unit area exerted by the interfacial stress xmath5 xmath6 is the force per unit area exerted by the isotropic fluid xmath7 is the force per unit area exerted by the nematic liquid crystal the subscript xmath8 indicates that the bulk stress fields in the isotropic liquid xmath9 and in the nematic xmath10 are evaluated at the interface xmath11 is the unit vector normal to the interface and directed into the isotropic liquid the interfacial torque balance equation can be cast as xmath12 where xmath13 is the interfacial torque arising due to surface interactions xmath14 is the torque exerted upon the interface by the adjacent nematic liquid crystal the explicit model for surface and bulk stresses and torques that enter eqs eq forcebalance and eq eq torquebalance is expanded in the remainder of this section in this and the following subsections we summarize the equations for the surface stress tensor xmath5 and surface torque vector xmath13 we represent these quantities as a sum of corresponding non dissipative elastic and dissipative viscous contributions xmath15 xmath16 to describe the non dissipative contributions in the surface stress tensor xmath17 and surface torque vector xmath18 we use the equilibrium model proposed by rey xcite which is summarized below rey considered the interface with the helmholtz free energy per unit mass xmath19 of the form xmath20 where xmath21 is the surface mass density xmath22 is the second fundamental tensor of the interface see appendix app geometry the corresponding differential was written as xmath23 where xmath24mathbf kmathbf b is the interfacial tension xmath25 is the tangential component of the capillary vector xmath26 is the surface projector and xmath27 is the bending moment tensor the elastic surface stress tensor was found to be xmath28 where the tangential surface molecular field is given by xmath29 xmath30 is surface gradient operator xmath31 denotes variational derivative with respect to xmath11 the elastic contribution 
to surface torque was written as xmath32 where xmath33 is the surface couple stress xmath34 is the levi civita tensor and xmath35 is the surface alternator tensor the viscous properties of interfaces between an isotropic fluid and a nematic liquid crystal were considered in detail by rey xcite and the results are summarized below the forces and fluxes that contribute to the dissipation function xmath36 were identified as follows xmath37 where xmath38 and xmath39 are correspondingly symmetric and antisymmetric parts of the surface viscous stress tensor xmath40 xmath41 and xmath42 are the components of the surface viscous molecular field tangential and normal to the surface xmath43 is the surface rate of deformation tensor xmath44 denotes the transposed tensor xmath45 is the surface vorticity tensor xmath46 is the surface velocity xmath47 and xmath48 are the total time derivatives of the components xmath49 and xmath50 of the nematic director field xmath51 tangential and normal to the surface correspondingly generally the presence of the surfactant film at the interface complicates the form of the entropy production due to additional internal degrees of freedom of the surfactant and to the anisotropy of the adjacent nematic liquid however if the surfactant film is in its isotropic liquid phase and favors homeotropic anchoring of the nematic the resulting anisotropic terms in the entropy production introduce corrections to the hydrodynamic equations of higher order than linear and therefore can be neglected in the linearized treatment since this is the case we are considering we shall adopt the form of the entropy production eq entropyproduction in our model and use the form of the viscous contribution to the surface stress tensor derived by rey xcite which is given by xmath52endaligned where xmath53 is the surface jaumann corotational derivative xcite of the tangential component of the director xmath54 and xmath55 xmath56 are nine independent surface viscosity coefficients in the isotropic case xmath57 the expression for the surface viscous stress tensor reduces to the viscous stress tensor of the boussinesq scriven surface fluid xcite with the interfacial shear viscosity xmath58 given by xmath59 and dilatational viscosity xmath60 given by xmath61 the surface viscous torque corresponding to eq eq entropyproduction is given by xcite xmath62 where the surface viscous molecular field xmath63 is xmath64 the viscosity coefficients xmath65 can be expressed in terms of the quantities xmath66 we shall need only the expression for the tangential rotational viscosity xmath67 to calculate explicitly the interfacial tension xmath68 eq eq def tau the tangential component of the capillary vector xmath69 eq eq def xi and the bending moment tensor xmath70 eq eq def m we need to know the dependence of the surface free energy xmath19 on the orientation of the interface given by the unit normal vector xmath11 and on its curvature described by the second fundamental tensor xmath22 for small deviations of xmath11 and xmath22 from equilibrium we can expand the free energy in powers of these quantities and truncate the series the result can be represented as xmath71 each of the contributions is described below the contribution xmath72 corresponds to the surface tension xmath73 of the equilibrium interface flat interface adjacent nematic director normal to the interface xmath74 the anchoring contribution to the surface free energy density xmath75 describes the energetics of the preferred alignment direction of the nematic director
relative to the interface for the homeotropic equilibrium anchoring it can be written in terms of xmath54 as follows xmath76 such expansion applied to the widely used rapini papoular form of the anchoring free energy density xcite xmath77 shows that these definitions of the anchoring strength coefficient have opposite signs xmath78 we shall use xmath79 as the anchoring strength coefficient to ensure that it is positive in the case of the homeotropic anchoring being considered the third contribution to the surface free energy density xmath80 is caused by finite interface thickness and is related to the difference of the curvature of a surfactant film from the locally preferred spontaneous value the widely used form of this contribution is the helfrich curvature expansion xcite xmath81 here the geometry of the interface is described by the mean curvature xmath82 and the gaussian curvature xmath83 and the material parameters characterizing the interface are the bending rigidity xmath84 the saddle splay or gaussian rigidity xmath85 and the spontaneous curvature xmath86 the term xmath87 guarantees that the curvature energy of a flat interface xmath88 xmath89 is zero to complete the description of the interface we need the continuity equation for the surfactant concentration xmath90 for insoluble surfactants the continuity equation reads xmath91 we shall extend the description of the dependence of the interfacial tension upon the concentration of surfactant presented by buzza xcite to other parameters characterizing the interface surface tension xmath73 anchoring strength xmath79 bending rigidity xmath84 saddle splay rigidity xmath85 spontaneous curvature xmath86 and surface viscosities xmath66 xmath92 for small deviation xmath93 of the surfactant concentration xmath90 from its equilibrium value xmath94 these coefficients can be written in form xmath95 and similarly for other quantities casting surface velocity xmath46 as the time derivative of the small surface displacement xmath96 xmath97 we obtain from the continuity equation eq eq continuitys that xmath98 this allows us to represent the material parameters of the interface as xmath99 xmath100 xmath101 xmath102 in these formulas xmath103 xmath104 xmath105 xmath106 are correspondingly the interfacial tension anchoring strength bending rigidity and spontaneous curvature in the unperturbed interface xmath107 is the static dilatational elasticity xmath108 xmath109 and xmath110 are coefficients in the first order term of the expansion of anchoring strength bending rigidity and spontaneous curvature in powers of xmath111 there are similar expansions for gaussian rigidity xmath85 and surface viscosities xmath66 xmath92 magnetic field xmath112 in the isotropic and nematic regions satisfies maxwell equations xcite xmath113 xmath114 neglecting magnetization of the interface the boundary conditions read xmath115 xmath116 here the magnetization of the isotropic liquid is xmath117 where xmath118 is the magnetic permeability of the isotropic liquid the magnetization of the uniaxial nematic liquid crystal is xcite xmath119 where xmath120 is the difference of the longitudinal and transversal magnetic permeabilities of the nematic xmath121 we assume both the isotropic liquid and the nematic liquid crystal are incompressible so that their densities xmath122 and xmath123 are constant the linearized equations for the incompressible isotropic liquid are well known xcite they are the continuity equation xmath124 and navier stokes equations xmath125 where the 
hydrodynamic stress tensor is given by xmath126 where xmath127 is the shear viscosity of the isotropic liquid xmath128 is the unit tensor xmath129 is the strain rate tensor we assume the non slip boundary condition for the velocities of bulk fluids adjacent to the interface which means the equality of the velocity of surfactant xmath46 and that of the bulk fluids at an interface xmath130 xmath131 to describe the dynamics of the nematic liquid crystal that is far from the isotropic nematic transition and has small deviations from its equilibrium state we shall use the linearized form of the eriksen leslie theory xcite the linearized equations for the incompressible nematic liquid crystal are the continuity equation eq continuity the equation for the velocity xmath132 and the equation for the director xmath133 here xmath134 is the antisymmetric vorticity tensor xmath135 is the reactive material parameter xmath136 is orientational viscosity xmath137 is the molecular field which assuming frank form of the elastic free energy of a nematic liquid crystal in magnetic field xcite xmath138 2 frack32leftmathbf ntimesleftnablatimesmathbf nrightright2 frac12chiamathbf ncdotboldsymbolmathcal h2endaligned has the linearized form xmath139 where xmath140right n0timesleftmathbf n0times leftnablatimesdeltamathbf nrightrightright chiamathbf ncdotboldsymbolmathcal hboldsymbolmathcal hendaligned xmath141 xmath142 and xmath143 are the splay twist and bend frank elastic constants correspondingly the stress tensor can be represented as a sum of reactive and viscous dissipative contributions xmath144 the linearized form of the reactive part is xmath145 the linearized viscous stress tensor of incompressible nematic is xmath146 the quantities xmath147 xmath148 xmath149 xmath136 and xmath135 can be expressed through more commonly used leslie viscosity coefficients xcite note that equating xmath150 recovers the viscous stress tensor xmath151 of the isotropic incompressible fluid last term in eq eq sigmai the aim of this section is to construct the dispersion relation for the surface waves on the basis of the model set up above we consider a surface wave with frequency xmath152 and wavevector xmath153 propagating along xmath154 axis and solve force balance equation eq eq forcebalance and torque balance equation eq eq torquebalance using linearized form of the hydrodynamic equations written in section sec model in order to linearize the hydrodynamic equations we represent pressure xmath155 and the nematic director xmath156 where xmath157 is the position in space xmath158 is time in form xmath159 xmath160 where xmath161 and xmath162 are the deviations of pressure and director from their equilibrium values xmath163 and xmath164 correspondingly the velocity xmath165 is itself the deviation from zero equilibrium velocity homeotropic anchoring corresponds to xmath166 for small deviations from the equilibrium we shall use the hydrodynamic equations linearized in xmath167 xmath161 and xmath162 we shall assume these quantities to be independent of the coordinate xmath168 xmath169 and vanish at xmath170 the magnetic field can be also represented as xmath171 where xmath172 is the equilibrium value and the deviation xmath173 can be found from the linearized form of the maxwell equations eq rotmaxwell eq divmaxwell the terms in the final equations containing xmath173 are of higher order than linear so we shall use only the equilibrium value and skip the 0 subscript so that xmath174 substituting the interfacial free energy density 
eq fs into eqs eq def tau eq def xi eq def m and eq def hse we find the contributions up to the first order in xmath96 and its derivatives and xmath54 into surface tension xmath175 bending moment tensor xmath176mathbf is barkappamathbf b tangential component of the capillary vector xmath177 and tangential surface molecular field xmath178 nablascdotleftbarkappamathbf bright the non vanishing components of the surface viscous stress tensor eq sigmasv are xmath179 the total interfacial force xmath180 can be found by substituting eqs eq sigmas eq sigmase and eq expl taueq expl sigmasv into eq eq def forces and has components xmath181 where xmath182 to write the explicit form of the force balance equations eq forcebalance we also need the expressions for the components of the force eq def forcei exerted by the isotropic fluid xmath183 and the components of the force eq def forces exerted by the nematic liquid crystal xmath184z0 fnyleftfrac1lambda2hynu3partialzvyrightz0 fnzleftp2nu1partialzvzrightz0endaligned the hydrodynamic fields xmath167 xmath185 xmath51 in the bulk isotropic and nematic liquids are found by solution of the hydrodynamic expressions the explicit formulas are presented in appendices app isotropic and app nematic next we introduce fourier transforms in the xmath154 coordinate and in time as xmath186 xmath187 xmath188 for brevity we shall henceforth omit arguments of the transformed functions performing fourier transform of the force balance equation eq forcebalance and substituting xmath189 we obtain balance equations for the force components in form xmath190cinvert nonumber iomegafrac1lambda2 sumi13leftk3leftminvertright2k1q2chiamathcal h2right bivert cinvert labeleq balancevx 0quadendaligned xmath191 bibot cinbot labeleq balancevy 0endaligned xmath192 where xmath193 is the complex dilatational modulus xmath194 is defined in appendix app isotropic by eq eq mi and the quantities xmath195 xmath196 xmath197 xmath198 xmath199 xmath200 and xmath201 are defined in appendix app nematic by eqs eq mvert eq mbot eq cvert eq cbot eq bvert eq bbot and eq a correspondingly to write the interfacial torque balance equation eq torquebalance we cast the torque exerted upon the interface by the nematic liquid crystal xmath14 and the interfacial torque arising due to surface interactions xmath13 entering the interfacial torque balance equation eq torquebalance in form xmath202 and xmath203 where the molecular field from the bulk xmath204s has linearized components xmath205 xmath206 and the surface molecular field xmath207 can be represented as a sum of elastic xmath208 and viscous xmath63 contributions xmath209 given by eqs eq def hse and eq def hsv correspondingly and can be represented in components as xmath210 xmath211 xmath212 xmath213 then the surface torque balance equations can be written as xmath214 xmath215 or substituting the expressions eq solvz eq solnx and eq solny xmath216cnverti0endaligned xmath217 the interfacial force balance equations eqs eq balancevxeq balancevz and the interfacial torque balance equations eqs eq balancenxeq balanceny form with account of eqs eq cbot and eq cvert a homogeneous system of linear algebraic equations in xmath218 xmath219 xmath220 xmath221 and xmath222 the dispersion relation is obtained from the condition of existence of a solution to these equations ie the requirement for the determinant xmath223 of the matrix of coefficients for this system to be zero xmath224 the equations eq balancevy and eq balanceny in xmath219 and xmath222 decouple from the 
equations eq balancevx eq balancevz and eq balancenx in xmath218 xmath220 and xmath225 therefore the matrix of coefficients is block diagonal and the dispersion relation eq eq d is equivalent to a pair of relations for xmath226 and xmath168 directions xmath227 xmath228 where xmath229 is the determinant of the xmath230 matrix xmath231 of coefficients for the equations eq balancevx eq balancevz and eq balancenx and xmath232 is the determinant of the xmath233 matrix of coefficients for the equations eq balancevy and eq balanceny the explicit form of the dispersion relations is presented in appendix app dispersion and can be readily used for the numerical analysis of surface modes in this section the dispersion equation which is presented in appendix app dispersion is solved numerically and surface modes of different types are analyzed for simplicity we assume the density of the isotropic liquid xmath122 to be small enough to be neglected eg nematic surfactant air interface we also assume that the magnetic field is absent the surface modes can be easily classified at low wavevectors xmath234 expansion of the dispersion relation in powers of the wavevector xmath234 is a straightforward exercise in algebra and the resulting modes are described below firstly there is a transverse capillary mode which has a dispersion law similar to that in the case of an isotropic liquid liquid interface xcite xmath235 the principal contribution to this mode at large wavelengths arises due to the restoring influence of surface tension xmath236 and the predominant motion is in the direction normal to the interface xmath237 the differences from the isotropic case related to anisotropy of viscous dissipation in the nematic appear in higher orders in xmath234 the dilatational or compressional mode with predominant motion in the direction along wave propagation xmath154 arises in the presence of a surfactant layer due to the restoring force provided by the dilatational elastic modulus xmath238 the dispersion law for this mode can be written as xmath23913 oleftq43right where the miesowicz viscosity xmath240 is given by xcite xmath241 the difference from the dispersion law for the dilatational mode in the case of a surfactant film at the interface between isotropic fluids given by xcite xmath242 arises due to anisotropy of viscous dissipation in the nematic a new mode specific to the nematic is driven by relaxation of the director field to equilibrium due to anchoring at the interface and has the dispersion law xmath243 such relaxation is present even in the absence of motion of the interface eg when the interface is solid so that xmath244 does not vanish at xmath245 for nematic isotropic interfaces the corresponding motion of the interface is induced by backflow effects finally the behavior of the in plane shear mode with motion in the xmath168 direction is also governed by relaxation of the nematic director due to anchoring the corresponding dispersion law xmath246 appears to be different from the isotropic case where the damping of the in plane shear mode in the absence of anchoring is governed by the surface viscosity xmath58 xcite gravity xmath247 so far neglected in our analysis becomes important at wavevectors xmath248 and can be taken into account by adding the hydrostatic pressure term xmath249 to eq eq m22 which corresponds to the additional contribution xmath250 to the vertical component of the force eq eq fsz the resulting dispersion law for the transversal mode is given by the expression xmath251 which describes the well known gravity waves
xcite in the opposite case of large wavevectors the curvature energy becomes important analysis of eqs eq fsz and eq tsx yields the characteristic values of xmath234 xmath252 and xmath253 below which one can neglect in the dispersion relation the terms containing bending rigidity xmath84 and its derivatives with respect to surfactant concentration given by xmath254 usually xmath255 and the range of xmath234 in which both gravity and curvature contributions become small given by xmath256 is rather wide for typical values xmath257kg mxmath258 xmath259m sxmath260 xmath261j mxmath260 xmath262j the equation eq good q reads xmath263xmath264xmath265 which includes the range of wavevectors typically probed by surface light scattering experiments to obtain the dispersion laws for surface modes at larger values of the wavevector xmath234 the dispersion equation must be solved numerically the numerical solution presented below uses the following typical values of the material parameters when it is not indicated otherwise for the nematic liquid crystal we use the parameters of 4xmath266pentyl4cyanobiphenyl 5cb at 26xmath267c xcite the density xmath268kg mxmath269 the elastic constants xmath270n xmath271n xmath272n the leslie viscosities xmath273kgmxmath274s xmath275kgmxmath274s xmath276kgmxmath274s xmath277kgmxmath274s xmath278kgmxmath274s xmath279kgmxmath274s the viscosity coefficient used in the present paper can be calculated from the leslie equations xcite and equal xmath280kgmxmath274s xmath281kgmxmath274s xmath282kgmxmath274s xmath283kgmxmath274s xmath284 we use the value of the bending rigidity xmath285j which is typical for surfactant layers xcite for other parameters we use the following typical values xmath286kg s xmath287 xmath288n m xmath289n m xmath290j mxmath291 xmath292j mxmath291 dispersion law xmath293 for different surface modes in absence of gravity obtained by solution of the dispersion relation eq dvert with the values of the parameters given in the text numbers 1 2 3 denote transverse dilatational and nematic director relaxation modes correspondingly prime and double prime denote real solid line and imaginary dashed line parts of xmath152 correspondingly the dispersion law xmath293 for different surface modes in absence of gravity obtained by solution of the dispersion relation eq dvert with the values of the parameters given above is presented in figure fig nogravity at low xmath234 the dispersion of for modes 1 2 3 as denoted figure fig nogravity is in good agreement with approximate formulas eq omegac eq omegad and eq omegan correspondingly the noticeable discrepancy in behavior of capillary and dilatational modes appears at xmath294xmath265 and the damping of surface waves becomes large at larger xmath234 which is qualitatively similar to the case of the interface between isotropic liquids the results presented in figure fig nogravity suggest that in the typical range of xmath234 probed by surface light scattering experiments xmath295xmath296xmath265 the approximate expressions eq omegac eq omegad do not describe well the dispersion curves and accurate solution of the dispersion equation should be used instead dispersion law xmath293 for different surface modes in presence of gravity xmath297m sxmath260 obtained by solution of the dispersion relation eq dvert with the values of the parameters given in the text numbers 1 2 3 denote transverse dilatational and nematic director relaxation modes correspondingly prime and double prime denote real solid line and imaginary dashed 
line parts of xmath152 correspondingly vertical dotted line corresponds to the value of xmath298 given by eq eq qg figure fig gravity presents the dispersion law xmath293 for different surface modes obtained by solution of the dispersion relation eq dvert in presence of gravity xmath297m sxmath260 in agreement with the discussion above the influence of gravity on the dispersion laws is small at xmath299 where xmath298 is given by eq eq qg dependence of the real solid line and imaginary dashed line parts of the frequency of the mode 1 as defined on figure fig nogravity upon the bending rigidity xmath300 calculated at xmath301xmath265 in absence of gravity vertical line corresponds to the value of xmath300 that satisfies eq eq qkappa if the bending rigidity xmath84 is large its influence becomes noticeable as it is demonstrated in figure fig bending for xmath302 typical for surfactant films the value of xmath303 given by eq eq qkappa corresponds to wavelength close to atomic scales and curvature energy can be neglected in typical surface light scattering experiments in agreement with the discussion above dependence of the real solid line and imaginary dashed line parts of the frequency xmath152 of the mode 3 as defined on figure fig nogravity normalized by xmath304 see eq eq omegan upon the anchoring strength xmath305 calculated at xmath306xmath265 in absence of gravity dependence of the real solid line and imaginary dashed line parts of the frequency xmath152 of the in plane shear mode normalized by xmath304 see eq eq omegas upon the anchoring strength xmath305 calculated at xmath306xmath265 in absence of gravity the dispersion law for the modes governed by relaxation of the nematic director field in xmath154 and xmath168 directions due to anchoring of the nematic director at the interface obtained by numerical solution of the dispersion equation with the values of the parameters given above are well described by the equations eq omegan and eq omegas however as the anchoring strength becomes smaller other mechanisms start to take over as demonstrated in figures fig xanchoring and fig yanchoring in this section we discuss how the surface modes described in section sec modes are altered in presence of the external magnetic field directed normally to the surface along xmath237 axis the external magnetic field effectively acts on the nematic molecules as an additional molecular field see eq eq h star and the primary counteracting mechanism is provided by orientational shear relaxation thus we may expect the influence of the magnetic field become noticeable at xmath307 dependence of the real solid lines and imaginary dashed lines parts of the frequencies of the modes 1 and 2 as defined on figure fig nogravity upon the magnetic field calculated at xmath308xmath265 in absence of gravity vertical dotted line corresponds to xmath309 see eq eq hstar the results of the numerical solution of the dispersion equation in presence of magnetic field presented in figure fig field confirm that noticeable change in dispersion of capillary and dilatational modes arises only around the value of the field given by eq eq hstar the change due to magnetic field in modes governed by anchoring is found to be negligibly small at low xmath234the dispersion of a capillary mode in strong magnetic field is different from the law eq omegac and is given by xmath310 the frequency of this mode becomes sensitive to the anchoring properties of the interface because the nematic director tends to be oriented along the field rather 
than to be advected with the nematic liquid the practical use of this effect is however limited because at short wavelengths an extremely large magnetic field is required and at long wavelengths gravity becomes dominant eq eq omegag in principle the magnetic field can also influence surface waves through a change in the properties of the interface eg surface tension due to the magnetization of the surfactant a separate study is required to estimate the magnitude of this effect we have obtained the dispersion relation for the surface waves at a surfactant laden nematic isotropic interface for the case when the surfactant film induces homeotropic normal to the surface orientation of the director and the surfactant film is in the isotropic two dimensional fluid phase we have analyzed the dispersion law of different surface modes analytically in the long wavelength limit and numerically in a broader range of wave vectors using typical values of the material parameters at long wavelengths the dispersion of capillary dilatational or compression in plane shear and director relaxation modes is described by equations eq omegac or eq omegag eq omegad eq omegas and eq omegan correspondingly at smaller wavelengths the solution of the full dispersion relation should be used gravity influences the transversal mode at small wavevectors eq eq qg and the curvature energy of the surfactant can be neglected if the wavevector is not too large eq eq qkappa for all modes the influence of the external magnetic field directed normally to the interface is small the influence of the magnetic field should be more pronounced if the direction of the field does not coincide with the equilibrium nematic director in this case the dispersion law for surface modes may be expected to be quantitatively different due to anisotropy of viscous dissipation in the nematic and the different anchoring energy the results of the present paper can be readily extended to the case of an arbitrary direction of the external field and to other types of nematic anchoring other possible developments which may increase the range of accessible systems and conditions are the extension of the results to a wider range of states of the surfactant film and the study of the effects which may be caused by the phase transitions in the surfactant film and bulk liquid crystal the dependence of the dispersion of the surface waves upon the parameters of the interface suggests surface light scattering on a surfactant laden nematic isotropic interface as a potential method for determining the properties of surfactant laden nematic isotropic interfaces and as a possible candidate for a chemical or biological sensing technique i thank prof c m care for fruitful discussion of the results and prof p d i fletcher for the discussion about surfactant laden nematic isotropic interfaces which instigated this work the geometrical description we use is similar to that presented in works xcite and xcite we choose the plane xmath0 to coincide with the unperturbed interface the half space xmath1 to be occupied by the uniaxial nematic liquid crystal and the half space xmath2 to be filled by the isotropic liquid let the position of a fluid particle at the interface be xmath311 where xmath312 is its position on the undeformed interface xmath0 and xmath313 is the displacement vector with components xmath314 we shall use xmath315 and xmath316 as surface coordinates and denote them as xmath317 xmath318 and other greek indices taking values 1 and 2 the position xmath319 of fluid particles at the interface in 3d space can be
cast as xmath320 the surface tangent base vectors xmath321 corresponding to the chosen surface coordinates can be written in terms of the components of the displacement vectors xmath322 and xmath323 the surface metric tensor xmath324 has determinant xmath325 the corresponding reciprocal base vectors xmath326 and metric tensor xmath327 take form xmath328xmath329 xmath330 the base and reciprocal base vectors satisfy xmath331 we write the unit vector xmath11 normal to the interface and directed into the isotropic liquid as xmath332 we shall also define the dyadic surface idem factor xmath333 the surface gradient operator xmath334 and the second fundamental tensor xmath335 the mean curvature xmath82 and gaussian curvature xmath83 are given by xmath336 xmath337 other useful identities include the surface projection xmath54 of a nematic director field xmath51 eqs eq n and eq n0 xmath338 and its surface divergence xmath339this appendix presents the solution to the linearized hydrodynamic equations in bulk isotropic liquid obtained by kramer xcite substitution of eq eq vfourier into eq eq continuity yields xmath340 substituting eqs eq vfourier and eq pfourier into eqs eq navier stokeseq s we obtain xmath341 tilde vxiqtilde p labeleq kramer vy leftiomegarhoietaleftq2partialz2rightright tilde vy0 labeleq kramer vz leftiomegarhoietaleftq2partialz2rightright tilde vzpartialztilde pendaligned where equation eq kramer vy is decoupled from other equations the general solution to eqs eq continuity fouriereq kramer vz vanishing at xmath342 can be written as xmath343 xmath344 xmath345 xmath346 with xmath347 the quantities xmath348 xmath349 and xmath350 are functions of xmath234 and xmath152 and are determined by the boundary conditions at the interface as follows xmath351 xmath352 xmath353 where the superscript xmath354 indicates that the values of the corresponding dynamic variables are taken at xmath355 in this appendix the solution is presented to the linearized hydrodynamic equations in bulk nematic liquid crystal for the equilibrium director along xmath237 axis eq eq n0 the fourier transform similar to equations eq vfouriereq nfourier of the linearized molecular field eq eq h xmath356 has non zero components xmath357 xmath358 substituting them into eqs eq eriksenleslie eq sigmaneq sigmanv we obtain the following linear differential equations xmath359tilde vxfrac1lambda2partialztilde hx iqtilde p xmath360tilde vy frac1lambda2partialztilde hy0 xmath361 tilde vz iqfrac1lambda2tilde hxpartialztilde p which are analogous to eqs eq kramer vxeq kramer vz for isotropic liquids equation eq dndt for the director after fourier transform gives two equations xmath362 and xmath363 where xmath364 and xmath365 are given by eqs eq hx and eq hy thus we have six linear differential equations eqs eq continuity fourier and eq kramer vxneq kramer nyn for six dynamic variables pressure three components of velocity and two components of director equations eq kramer vyn and eq kramer nyn for xmath366 and xmath367 decouple from the others their general solution vanishing at xmath368 can be cast as xmath369 xmath370 where xmath371 xmath372 and xmath373 xmath374 are the roots of the quadratic equation xmath375 where xmath376 xmath377 xmath378 the general solution to the equations eq continuity fourier eq kramer vxn eq kramer vzn and eq kramer nxn vanishing at xmath368 can be cast as xmath379 xmath380 xmath381 xmath382 where xmath383 bverti left leftiomegarhonnu3q2 left2nu1nu3rightleftmnvertiright2right rightendaligned xmath384 
xmath385 and xmath386 xmath387 are the roots of the cubic equation xmath388 where xmath389 xmath390 q2rightk3endaligned xmath391 nonumber leftiomegarhon leftfrac1lambda22gamma1 2leftnu1nu2nu3rightrightq2 right nonumber timesleftk1q2chiamathcal h2right leftiomegarhon leftfrac1lambda24gamma1nu3rightq2 rightk3q2endaligned xmath392leftk1q2chiamathcal h2rightq2endaligned the quantities xmath197 and xmath198 are functions of xmath234 and xmath152 and are determined by the boundary conditions at the interface as xmath393 xmath394deltaendaligned where xmath395 and expressions for xmath396 xmath397 and xmath398 are obtained from eqs eq cbot and eq cvert by cyclic permutation of subscript indices to write the explicit form of the dispersion relations eq dvert and eq dbot we recast equations eq cbot and eq cvert in form xmath399 xmath400 where xmath401 xmath402 is given by eq eq delta xmath403 and xmath404 are given by eqs eq bbot and eq bvert xmath196 and xmath195 are given by eqs eq mvert and eq mbot correspondingly then the dispersion relation eq dbot can be written as xmath405 where xmath406 is xmath407 matrix of coefficients for equations eq balancevy and eq balanceny xmath408 with the following components xmath409bbotirightlleftvyrightiendaligned xmath410bbotirightlleftnyrightiendaligned xmath411 xmath412 the dispersion relation eq dvert can be written as xmath413 where xmath414 is xmath415 matrix of coefficients for equations eq balancevx eq balancevz and eq balancenx xmath416 with the following components xmath417 lleftvxrighti iomegafrac1lambda2times timessumi13leftk3leftminvertright2k1q2chiamathcal h2rightbivert lleftvxrightiendaligned xmath418 lleftvzrighti iomegafrac1lambda2times timessumi13leftk3leftminvertright2k1q2chiamathcal h2rightbivert lleftvzrightiendaligned xmath419 lleftnxrighti nonumber iomegafrac1lambda2times timessumi13leftk3leftminvertright2k1q2chiamathcal h2rightbivert lleftnxrightiendaligned xmath420 xmath421 xmath422 xmath423rightlleftvxrightiendaligned xmath424rightlleftvzrightiendaligned xmath425rightlleftnxrightiendaligned note that gravity xmath247 has been incorporated into the dispersion relation by adding the hydrostatic pressure term xmath249 to xmath426 eq eq m22 by setting xmath427 and setting to zero quantities xmath141 xmath142 xmath143 xmath135 xmath136 xmath428 and xmath120 specific to nematic and neglecting curvature contributions by setting to zero xmath300 and xmath254 the dispersion relation is reduced to the well studied form for the case of isotropic liquids xcite
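for completeness we indicate how the determinant conditions eq dvert and eq dbot can be handled numerically in practice the following python fragment is a minimal sketch of a complex root search for the frequency at fixed wavevector together with a simple continuation in the wavevector it assumes a user supplied routine build_matrix(omega, q) that fills the matrix of coefficients from the material parameters of the previous sections the routine name the secant iteration and the continuation strategy are assumptions made purely for illustration and are not part of the original calculation

import numpy as np

def determinant(omega, q, build_matrix):
    # evaluate det D(omega, q) for the chosen block of the coefficient matrix
    return np.linalg.det(build_matrix(omega, q))

def solve_mode(q, omega_guess, build_matrix, tol=1e-10, max_iter=200):
    # secant iteration in the complex omega plane, started from omega_guess
    # (eg one of the long wavelength laws eq omegac, eq omegad, eq omegan)
    f = lambda w: determinant(w, q, build_matrix)
    w0, w1 = omega_guess, omega_guess * (1.0 + 1e-3)
    f0, f1 = f(w0), f(w1)
    for _ in range(max_iter):
        step = -f1 * (w1 - w0) / (f1 - f0)
        w0, f0 = w1, f1
        w1, f1 = w1 + step, f(w1 + step)
        if abs(step) < tol * abs(w1):
            break
    return w1

def trace_branch(q_values, omega_start, build_matrix):
    # follow one surface mode along a grid of wavevectors, reusing the
    # previous root as the starting guess for the next wavevector
    roots, guess = [], omega_start
    for q in q_values:
        guess = solve_mode(q, guess, build_matrix)
        roots.append(guess)
    return np.array(roots)

tracing each branch from its long wavelength guess in this way yields dispersion curves of the kind shown in figures fig nogravity and fig gravity provided build_matrix implements the coefficients listed above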
a theoretical study is presented of surface waves at a monomolecular surfactant film between an isotropic liquid and a nematic liquid crystal for the case when the surfactant film is in the isotropic two dimensional fluid phase and induces homeotropic normal to the interface orientation of the nematic director the dispersion relation for the surface waves is obtained and different surface modes are analyzed with account being taken of the anchoring induced by the surfactant layer the curvature energy of the interface and the anisotropy of the viscoelastic coefficients the dispersion laws for capillary and dilatational surface modes retain structure similar to that in isotropic systems but involve anisotropic viscosity coefficients additional modes are related to relaxation of the nematic director field due to anchoring at the interface the results can be used to determine different properties of nematic surfactant isotropic interfaces from experimental data on surface light scattering
[sec:introduction]introduction [sec:model]the model [sec:dispersion]dispersion relation [sec:modes]surface modes [sec:field]influence of magnetic field [sec:conclusion]conclusion [app:geometry]differential geometry of the interface [app:isotropic]bulk solution for isotropic liquid [app:nematic]bulk solution for nematic liquid crystal [app:dispersion]explicit form of dispersion relation
in the last few years a new phenomenon has attracted attention of the community of soft condensed matter physicists appearance of attraction between like charged macromolecules in solutions containing multivalent ions the problem is particularly fascinating because it contradicts our well established intuition that like charged entities should repel xcite the fundamental point however is that the electrolyte solutions are intrinsically complex systems for which many body interactions play a fundamental role the attraction between like charged macromolecules is important for many biological systems one particularly striking example is provided by the condensation of dna by multivalent ions such as xmath1 xmath2 and various polyamines xcite this condensation provides an answer to the long standing puzzle of how a highly charged macromolecule such as the dna can be confined to a small volume of viral head or nuclear zone in procaryotic cell evidently the multivalent ions serve as a glue which keeps the otherwise repelling like charged monomers in close proximity xcite in eukaryotic cells the cytosol is traversed by a network of microtubules and microfilaments rigid chains of highly charged protein f actin which in spite of large negative charge agglomerate to form filaments of cytoskeleton xcite the actin fibers are also an important part of the muscle tissue providing a rail track for the motion of molecular motor myosin although the nature of attraction between like charged macromolecules is still not fully understood it seems clear that the attractive force is mediated by the multivalent counterions xcite a strong electrostatic attraction between the polyions and the oppositely charged multivalent counterions produces a sheath of counterions around each macromolecule the condensed counterions can become highly correlated resulting in an overall attraction it is important to note that the complex formed by a polyion and its associated counterions does not need to be neutral for the attraction to arise under some conditionsthe correlation induced attraction can overcome the monopolar repulsion coming from the net charge of the complexes recently a simple model was presented to account for the attraction between two lines of charges xcite each line had xmath3 discrete uniformly spaced monomers of charge xmath4 and xmath5 condensed counterions of charge xmath6 free to move along the rod the net charge of such a polyion counterion complex is xmath7 nevertheless it was found that if xmath8 and xmath9 at sufficiently short distances the two like charged rods would attract xcite it was argued that the attraction resulted from the correlations between the condensed counterions and reached maximum at zero temperature if xmath10 the force was always found to be repulsive clearly a one dimensional line of charge is a dramatic oversimplification of the physical reality if we are interested in studying the correlation induced forces between real macromolecules their finite radius must be taken into account xcite thus a much more realistic model of a polyion is a cylinder with a uniformly charged backbone xcite or with an intrinsic charge pattern xcite as eg the helix structure of dna molecule furthermore the condensed counterions do not move along the line but on the surface of the cylinder unfortunately these extended models are much harder to study analytically in this paper we explore the effects of finite polyion diameter on the electrostatic interactions between the two polyions using monte carlo 
simulations we find that the finite diameter and the associated angular degrees of freedom of condensed counterions significantly modify the nature of attraction thus although there is still a minimum charge which must be neutralized by the counterions in order for the attraction to appear this fraction is no longer equal to xmath11 as was the case for the line of charge model we find that the critical fraction depends on the valence of counterions and is less than xmath11 for xmath9 for monovalent counterions no attraction is found the crystalline structure of the condensed counterions as first suggested by simulations of gronbech jensen et al xcite and refs xcite is also not very obvious in particular we find very similar distributions of condensed counterions in the regime of attractive and repulsive interactions the structure of this paper is as follows the model and the method of calculation are described in section model in section results we present the results of the simulations the conclusions are summarized in section summary the dna model considered here is an extension of the one proposed earlier by arenzon stilck and levin xcite a similar model has been recently discussed by solis and olvera de la cruz xcite the polyions are treated as parallel rigid cylinders of radius xmath12 and xmath3 ionized groups each of charge xmath4 uniformly spaced with separation xmath13 along the principal axis fig modelfig besides the fixed monomers each polyion has xmath5 condensed counterions with valence xmath14 and charge xmath6 which are constrained to move on the surface of the cylinder to locate a condensed counterion it is necessary to provide its longitudinal position xmath15 xmath16 and the transversal angle xmath17 xmath18 to simplify the calculations the angular and the longitudinal degrees of freedom are discretized see fig modelfig the surface of the cylinder is subdivided into xmath3 parallel rings with a charged monomer at the center of each ring each ring has xmath19 sites available to the condensed counterions see figs modelfig and rings the hardcore repulsion between the particles requires that a site is occupied by at most one condensed counterion the two polyions are parallel with the intermolecular space treated as a uniform medium of dielectric constant xmath20 we introduce occupation variables xmath21 for the two polyions so that xmath22 and xmath23 thus xmath24 if the xmath25th site of the xmath26th polyion is occupied by a particle of valence xmath27 negative core charge or counterion of valence xmath14 respectively otherwise xmath28 note that the core charge is always occupied while the counterions are free to move between the xmath29 ring sites of each polyion the hamiltonian for the interaction between the two polyions is xmath30 with the xmath31 all lengths are measured in units of xmath13 for dna xmath32 the dimensionless quantity xmath33 is the manning parameter which for dna is xmath34 the partition function is obtained by tracing over all the possible values of xmath21 consistent with the constraint of a fixed number of counterions per polyion xmath35 clearly this is a very crude model of the interaction between two macromolecules in a polyelectrolyte solution the molecular nature of the solvent is ignored also the number of condensed counterions is fixed instead of being dependent on the separation between the particles nevertheless we believe that this simple model can provide some useful insights into the mechanism of attraction in real polyelectrolyte solutions
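to make the discretized geometry and the monte carlo moves concrete the following python fragment sketches the reduced energy of a configuration and a single metropolis displacement of a condensed counterion it takes the hamiltonian eq hamiltonian to be the pairwise coulomb sum over all fixed monomers and condensed counterions in units where lengths are measured in xmath13 and the overall scale is set by the manning parameter this explicit form together with the parameter values and all names is an assumption made for illustration and does not reproduce the code used for the simulations reported below

import numpy as np

# illustrative parameters, not the values used in the actual runs
Z = 10        # rings (ionized groups) per polyion
NTHETA = 12   # angular sites per ring
A = 1.0       # cylinder radius in units of the monomer spacing
XI = 4.17     # manning parameter (dna like)
ALPHA = 2     # counterion valence
R = 3.0       # axis to axis separation of the two polyions

def monomer_position(p, ring):
    # fixed ionized group, placed on the axis of polyion p
    x0 = 0.0 if p == 0 else R
    return np.array([x0, 0.0, float(ring)])

def site_position(p, ring, slot):
    # surface site available to a condensed counterion
    phi = 2.0 * np.pi * slot / NTHETA
    x0 = 0.0 if p == 0 else R
    return np.array([x0 + A * np.cos(phi), A * np.sin(phi), float(ring)])

def total_energy(occ):
    # occ[p] is the list of (ring, slot) pairs occupied by counterions on
    # polyion p; monomers carry charge -1 and counterions charge +ALPHA,
    # and the reduced energy is XI times the sum over pairs of z_i z_j / r_ij
    charges, positions = [], []
    for p in (0, 1):
        for ring in range(Z):
            charges.append(-1.0)
            positions.append(monomer_position(p, ring))
        for ring, slot in occ[p]:
            charges.append(float(ALPHA))
            positions.append(site_position(p, ring, slot))
    energy = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            energy += charges[i] * charges[j] / np.linalg.norm(positions[i] - positions[j])
    return XI * energy

def metropolis_move(occ, rng):
    # displace one randomly chosen counterion to a vacant site on the same
    # polyion and accept the move with the metropolis probability
    p = int(rng.integers(2))
    k = int(rng.integers(len(occ[p])))
    old = occ[p][k]
    trial = (int(rng.integers(Z)), int(rng.integers(NTHETA)))
    if trial == old or trial in occ[p]:
        return occ
    e_old = total_energy(occ)
    occ[p][k] = trial
    if rng.random() >= np.exp(min(0.0, e_old - total_energy(occ))):
        occ[p][k] = old  # reject and restore the previous configuration
    return occ

recomputing the full pairwise sum at every step is of course wasteful in an actual simulation only the energy change of the displaced counterion needs to be evaluated but the sketch keeps the bookkeeping explicit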
we are interested in statistical averages of observables such as the energy and the force between the two polyions furthermore to understand the nature of the interaction between the two macromolecules it is essential to study the correlations between the condensed counterions on the two polyions the force is obtained from the partition function eq partition xmath36 from symmetry only the xmath37 component is different from zero for finite macromolecules the symmetry between the two polyions can not be broken xcite hence it is impossible to produce a true crystalline order in a finite system at non zero temperature since within our simplified model the two polyions have exactly the same number of condensed counterions the average angular counterion distribution xmath38 must be symmetric with respect to the mid plane xmath39 see fig modelfig the angle xmath40 labels the site xmath25 on polyion xmath26 see fig rings thus xmath41 denotes the occupation variable for the site 3 on the ring xmath15 located on polyion 2 with an angle of xmath42 indeed fig z20n7ocup shows that the density profiles are completely symmetric up to fluctuations in spite of this symmetry it is possible for the counterions on the two polyions to become highly correlated clearly the strength of these correlations will depend on the product xmath43 and the separation between the two macromolecules considering fig rings it is evident that if the site xmath44 on the first polyion is occupied the likelihood of occupation of the site xmath45 on the second polyion will be reduced to explore the nature of electrostatic correlations we define a counterion hole correlation function between the adjacent rings on the two polyions xmath46 rangle nonumber langle ni1ztheta1irangle langle left1nj2ztheta2jright rangle endaligned here xmath47 denotes the ensemble average this function should be non zero when sites on the two polyions are correlated that is if one is occupied by a condensed counterion there is an increased probability of the second being empty to calculate the force between the two polyions we have performed a standard monte carlo mc simulation with the usual metropolis algorithm xcite first one counterion on polyion 1 is randomly chosen and displaced to a vacant position on the same polyion this move is accepted or rejected according to the standard detailed balance criterion xcite we do not permit exchange of particles between the polyions next the same is done for polyion 2 in one monte carlo step mcs all xmath48 condensed counterions on the two polyions are permitted to attempt a move the long ranged nature of the coulomb interaction requires evaluation of all the pair interactions in eq hamiltonian at every mcs due to the limited computational power available to us we have confined our attention to relatively small systems with xmath49 and xmath50 we have checked however that for xmath50 the force has already reached the continuum limit and did not vary further with an increase of xmath19 also we note that the thermodynamic limit is reached reasonably quickly so that there is a good collapse of data already for xmath51 see fig z20force 2000 mcs served to equilibrate the system after which 500 samples were used to calculate the basic observables namely the mean force and energy to obtain the correlation functions 5000 samples were used with 5000 mcs for equilibration the simulations were performed for xmath34 and xmath52 relevant for dna for monovalent counterions the simulation results indicate that the force is purely
repulsive this is in complete agreement with the experiments xcite which do not find any indication of dna condensation for monovalent counterions for divalent counterions the force between the two complexes can become negative indicating the appearance of an effective attraction fig z20force the range of attraction is larger than was found for the one dimensional line of charge model ref xcite within the manning theory xcite xmath53 of the dna s charge is neutralized by the divalent counterions however there are indications that even a larger fraction of dna s charge can become neutralized by the multivalent ions if the counterion correlations are taken into account xcite in this case the interaction is purely attractive with the range of about xmath54 or xmath55 xmath0 surface to surface fig z20force a minimum number of condensed counterions is necessary for attraction to appear in fig d0 we present the surface to surface separation xmath56 below which the force between the two complexes becomes negative attractive as a function of the number of multivalent counterions for the case of dna with divalent counterions xmath57 the attraction appears only if xmath58 of the core charge is neutralized for xmath59 this fraction decreases to xmath60 furthermore a decrease in the value of the manning parameter xmath61 increases the minimum number of condensed counterions necessary for the attraction to appear this is fully consistent with the fact that the attraction is mediated by the correlations between the condensed counterions since a rise in temperature tends to disorganize the system the state of highest correlation between the condensed counterions corresponds to xmath62 or xmath63 the surface to surface distance at which the attraction first appears tends to zero as the number of condensed counterions is diminished we find xmath64 where the average counterion concentration is xmath65 and the critical fraction xmath66 depends on the valence of condensed counterions xmath14 from fig d0 it is evident that xmath67 this should be contrasted with the line of charge model ref xcite for which xmath68 in fig snaps we show two snapshots of the characteristic equilibrium configurations for a xmath69 and b xmath70 looking at these figures it is difficult to see anything that would distinguish between them both appear about the same there is no obvious crystallization or transversal polarization as suggested in previous studies xcite yet the case a corresponds to the repulsive while the case b corresponds to the attractive interaction between the polyions to further explore this point in figs d328 and d1665 we present the site site correlation function eq correl for macromolecules with xmath71 and xmath72 for xmath69 the surface to surface distance between the two polyions is sufficiently large for their condensed counterions to be practically uncorrelated fig d328 on the other hand for xmath73 strong correlations between the condensed counterions are evident fig d1665 fig d1665 shows that the sites two and three on the first polyion are strongly correlated with the sites seven and eight on the second polyion respectively it is these correlations between the adjacent sites on the two polyions which are responsible for the appearance of attraction between the two macromolecules when they are brought close together fig z20force we have presented a simple model for polyion polyion attraction inside a polyelectrolyte solution it is clear from our calculations that the attraction results from the
correlations between the condensed counterions and reaches a maximum for xmath62 the thermal fluctuations tend to diminish the correlations decreasing the amplitude of the attractive force consistent with the experimental evidence the attraction exists only in the presence of multivalent counterions our simulations demonstrate that a critical number of condensed counterions is necessary for the appearance of attraction the fraction of the bare charge that must be neutralized for the attraction to arise depends on the valence of counterions the larger the valence the smaller the fraction of the bare polyion charge that must be neutralized for the attraction to appear this result should be contrasted with the line of charge model xcite for which the critical fraction was found to be equal to xmath11 independent of the counterion charge we thank j j arenzon for helpful comments on simulations this work was supported by cnpq conselho nacional de desenvolvimento científico e tecnológico and finep financiadora de estudos e projetos brazil
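as a complement to the force curves of fig z20force the fragment below which builds on the sketch given after the model description indicates one possible way to estimate the mean force between the two complexes from stored monte carlo configurations namely as the ensemble average of the pairwise coulomb forces exerted by all charges of one polyion on all charges of the other a positive value corresponds to repulsion and a negative value to attraction this is only an illustrative evaluation of the derivative of the partition function eq partition and not the procedure used in the paper

import numpy as np

def mean_axial_force(samples):
    # samples is a list of occupation configurations stored during the run;
    # Z, ALPHA, XI, monomer_position and site_position are reused from the
    # sketch given after the model description
    forces = []
    for occ in samples:
        charges = {0: [], 1: []}
        for p in (0, 1):
            for ring in range(Z):
                charges[p].append((-1.0, monomer_position(p, ring)))
            for ring, slot in occ[p]:
                charges[p].append((float(ALPHA), site_position(p, ring, slot)))
        f = 0.0
        for z0, r0 in charges[0]:
            for z1, r1 in charges[1]:
                d = r1 - r0
                f += z0 * z1 * d[0] / np.linalg.norm(d) ** 3
        forces.append(XI * f)
    return float(np.mean(forces))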
a simple model is presented for the appearance of attraction between two like charged polyions inside a polyelectrolyte solution the polyions are modeled as rigid cylinders in a continuum dielectric solvent the strong electrostatic interaction between the polyions and the counterions results in counterion condensation if the two polyions are sufficiently close to each other their layers of condensed counterions can become correlated resulting in attraction between the macromolecules to explore the counterion induced attraction we calculate the correlation functions for the condensed counterions it is found that the correlations are of very short range for the parameters specific to the double stranded dna the correlations and the attraction appear only when the surface to surface separation is less than xmath0 2
introduction model and method results and discussion summary
non gaussianity from the simplest inflation models that are based on a slowly rolling scalar field is very small xcite however a very large class of more general models eg models with multiple scalar fields features in inflation potential non adiabatic fluctuations non canonical kinetic terms deviations from the bunch davies vacuum among others predict substantially higher level of primordial non gaussianity for a review and references therein primordial non gaussianity can be described in terms of the 3point correlation function of bardeen s curvature perturbations xmath4 in fourier space xmath5 depending on the shape of the 3point function ie xmath6 non gaussianity can be broadly classified into two classes xcite first the local squeezed non gaussianity where xmath7 is large for the configurations in which xmath8 second the non local equilateral non gaussianity where xmath7 is large for the configuration when xmath9 the local form arises from a non linear relation between inflaton and curvature perturbations xcite curvaton models xcite or the new ekpyrotic models xcite the equilateral form arises from non canonical kinetic terms such as the dirac born infeld dbi action xcite the ghost condensation xcite or any other single field models in which the scalar field acquires a low speed of sound xcite while we focus on the local form in this paper it is straightforward to repeat our analysis for the equilateral form the local form of non gaussianity may be parametrized in real space as xcite xmath10 where xmath0 characterizes the amplitude of primordial non gaussianity different inflationary models predict different amounts of xmath0 starting from xmath11 to xmath12 beyond which values have been excluded by the cosmic microwave background cmb bispectrum of wmap temperature data xmath13 at the xmath14 level xcite so far all the constraints on primordial non gaussianity use only temperature information of the cmb by also having the e polarization information together with cmb temperature information one can improve the sensitivity to the primordial fluctuations xcite although the experiments have already started characterizing e polarization anisotropies xcite the errors are large in comparison to temperature anisotropy the upcoming experiments such as planck satellite will characterize e polarization anisotropy to a higher accuracy it is very timely to develop the tools which can optimally utilize the combined cmb temperature and e polarization information to constrain models of the early universe throughout this paperwe use the standard lambda cdm cosmology with the following cosmological parameters xmath15 xmath16 xmath17 xmath18 xmath19 and xmath20 for all of our simulations we used healpix maps with xmath21 pixels in our recent paperxcite we described a fast cubic bispectrum estimator of xmath0 using a combined analysis of the temperature and e polarization observations the estimator was optimal for homogeneous noise where optimality was defined by saturation of the fisher matrix bound in this paperwe generalize our previous estimator of xmath0 to deal more optimally with a partial sky coverage and the inhomogeneous noise the generalization is done in an analogous way to how xcite generalized the temperature only estimator developed by xcite however the final result of xcite their eq 30 is off by a factor of two which results in the error in xmath0 that is much larger than the fisher matrix prediction as we shall show below the fast bispectrum estimator of xmath0 from the combined cmb 
temperature and e polarization data can be written as xmath22 where xcite xmath23 xmath24 xmath25 xmath26 xmath27 xmath28 and xmath29 is a fraction of the sky observed indices xmath30 and xmath31 can either be xmath32 or xmath33 here xmath34 is 1 when xmath35 6 when xmath36 and 2 otherwise xmath37 is the theoretical bispectrum for xmath38 xcite xmath39 is the power spectrum of the primordial curvature perturbations and xmath40 is the radiation transfer function of adiabatic perturbations it has been shown that the above mentioned estimator is optimal for the full sky coverage and homogeneous noise xcite to be able to deal with the realistic data the estimator has to be able to deal with the inhomogeneous noise and foreground masks the estimator can be generalized to deal with a partial sky coverage and the inhomogeneous noise by adding a linear term to xmath41 xmath42 for the temperature only case this has been done in xcite following the same argument we find that the linear term for the combined analysis of cmb temperature and polarization data is given by xmath43 where xmath44 and xmath45 are the xmath46 and xmath47 maps generated from monte carlo simulations that contain signal and noise and xmath48 denotes the average over the monte carlo simulations the generalized estimator is given by xmath49 which is the main result of this paper note that xmath50 and this relation also holds for the equilateral shape therefore it is straightforward to find the generalized estimator for the equilateral shape first find the cubic estimator of the equilateral shape xmath51 and take the monte carlo average xmath52 let us suppose that xmath51 contains terms in the form of xmath53 where xmath46 xmath47 and xmath54 are some filtered maps use the wick s theorem to re write the average of a cubic product as xmath55 finally remove the mc average from single maps and replace maps in the product with the simulated maps xmath56 this operation gives the correct expression for the linear term both for the local form and the equilateral form one can find the estimator of xmath0 from the temperature data only by setting xmath57 we have compared our formula in the temperature only limit with the original formula derived by xcite their eq 30 and found a discrepancy to see the discrepancy let us re write the estimator as xmath58 our formula gives xmath59 while eq 30 of xcite gives xmath60 to make sure that our normalization gives the minimum variance estimator we have done monte carlo simulations with varying xmath61 we find that xmath59 minimizes the variance as shown in fig normalization we conclude that the analysis given in xcite resulted in the larger than expected uncertainty in xmath0 because of this error in their normalization of the linear term the main contribution to the linear term comes from the inhomogeneous noise and sky cut for the temperature only case most of the contribution to the linear term comes from the inhomogeneous noise and the partial sky coverage does not contribute much to the linear term this is because the sky cut induces a monopole contribution outside the mask in the analysis one subtracts the monopole from outside the mask before measuring xmath41 which makes the linear contribution from the mask small xcite for a combined analysis of the temperature and polarization maps however the linear term does get a significant contribution from a partial sky coverage see the right panel of fig fnlstdev subtraction of the monopole outside of the mask is of no help for polarization as the 
monopole does not exist in the polarization maps by definition the lowest relevant multipole for polarization is xmath62 the estimator is still computationally efficient taking only xmath63 times the xmath31 sampling which is of order 100 operations in comparison to the full bispectrum calculation which takes xmath64 operations here xmath3 refers to the total number of pixels for planck xmath65 and so the full bispectrum analysis is not feasible while our analysis is in the left panel of figure fnlstdev we show the variance of xmath0 using the estimator with and without the linear term for the gaussian cmb simulations in the presence of inhomogeneous noise and partial sky coverage for this analysis we use the noise properties that are expected for the planck satellite assuming the cycloidal scanning strategy xcite the inhomogeneous nature of the noise is depicted in the lower map of figure noisesabb where we show the number of observations xmath66 for the different pixels in the sky as for the foreground masks we use the wmap kp0 intensity mask and the p06 polarization mask we find that with the inclusion of the linear term the variance reduces by more than a factor of 5 the linear term greatly reduces the variance approaching the fisher matrix bound however the estimator is close to but not exactly the same as the fisher variance prediction in the noise dominated regime nevertheless we do not observe an increase of the variance at higher xmath67 the variance becomes smaller as we include more multipoles this result is in contradiction with the results of xcite and xcite we attribute this discrepancy to the error in the normalization of the linear term in their formula in the right panel of figure fnlstdev we show the variance of xmath0 again using gaussian simulations but now in the presence of a flat sky cut and in the absence of any noise the purpose of the plot is to demonstrate as pointed out in the previous section that for the combined cmb temperature and polarization analysis the sky cut does contribute significantly to the linear term we find that the generalized estimator does a very good job in reducing the variance excess and the simulated variance of xmath0 does accurately saturate the fisher matrix bound can our estimator recover the correct xmath0 ie is our estimator unbiased we have tested our estimator against simulated non gaussian cmb temperature and e polarization maps the non gaussian cmb temperature and e polarization maps were generated using the method described in xcite we find that our estimator is unbiased ie we can recover the xmath0 value which was used to generate the non gaussian cmb maps the results for the unbiasedness of the estimator are shown in table nongfnl the analysis also shows the unbiasedness of the estimator described in xcite the caption of table nongfnl reads unbiasedness of the generalized estimator non gaussian cmb maps with xmath68 are used for xmath69 the standard deviation of xmath0 xmath70 was obtained using gaussian simulations figures sabb and noisesabb show the maps xmath71 and xmath72 which appear in the linear term eq slin of the estimator these maps are calculated using 100 monte carlo simulations of the data since the linear term contributes only in the presence of inhomogeneities we also show these maps calculated with noise only simulations ie no signal notice how these maps correlate with the inhomogeneous noise as shown in the lower map of figure noisesabb upcoming cmb experiments will provide a wealth of information about the cmb polarization anisotropies together with
temperature anisotropies the combined information from the cmb temperature and polarization data improves the sensitivity to primordial non gaussianity xcite the promise of learning about the early universe by constraining the amplitude of primordial non gaussianity is now well established in this paper we have generalized the bispectrum estimator of non gaussianity described in xcite to deal with the inhomogeneous nature of noise and incomplete sky coverage the generalization from xcite enables us to increase the optimality of the estimator significantly without compromising its computational efficiency the estimator is still computationally efficient scaling as xmath73 compared to the xmath74 scaling of the full bispectrum xcite calculation for sky maps with xmath3 pixels for the planck satellite this translates into a speed up by factors of millions reducing the required computing time from thousands of years to just hours and thus making xmath0 estimation feasible the speed of our estimator allows us to study its statistical properties using monte carlo simulations we have used gaussian and non gaussian simulations to characterize the estimator we have shown that the generalized fast estimator is able to deal with the partial sky coverage very well and in fact the variance of xmath0 saturates the fisher matrix bound in the presence of both realistic noise and a galactic mask we find that the generalized estimator greatly reduces the variance in comparison to the xcite estimator of non gaussianity using combined cmb temperature and polarization data since the estimator is able to deal with the partial sky coverage very effectively the estimator can also be used to constrain primordial non gaussianity using the data from ground and balloon based cmb experiments which observe only a small fraction of the sky the estimator also solves the problem xcite of non trivial polarization mode coupling due to foreground masks earlier this issue was dealt with by removing the most contaminated xmath75 modes from the analysis usually xmath76 the naive approach of using galactic masks to deal with the polarization contamination is to be refined both temperature and polarization foregrounds are expected to produce non gaussian signals some sources of non primordial non gaussianity are cmb lensing point sources and the sunyaev zeldovich effect understanding the non gaussianity from the polarization foreground sources and refining the estimator to be able to deal with it will be the subject of our future work some of the results in this paper have been derived using the cmbfast package by uros seljak and matias zaldarriaga xcite and the healpix package xcite this work was partially supported by the national center for supercomputing applications under tg mca04t015 and by the university of illinois we also utilized the teragrid cluster wwwteragridorg at ncsa bdw acknowledges the friedrich wilhelm bessel research award by the alexander von humboldt foundation bdw and apsy also thank the max planck institute for astrophysics for hospitality bdw and apsy are supported in part by nsf grant numbers ast 0507676 and 0708849 nasa jpl subcontract no ek acknowledges support from the alfred p sloan foundation
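the local parametrization quoted earlier xmath10 lends itself to a quick numerical illustration the following python fragment is a minimal toy sketch of our own not the authors pipeline showing how a local type non gaussian potential is built from a gaussian one on a pixel grid the grid size and the non gaussianity amplitude are purely illustrative choices

import numpy as np

# toy illustration of the local form phi_ng = phi_g + f_nl * (phi_g**2 - <phi_g**2>)
# the grid size and the f_nl value are illustrative and not taken from the paper
rng = np.random.default_rng(0)
f_nl = 50.0
phi_g = rng.standard_normal((256, 256))            # stand-in for the gaussian potential
phi_ng = phi_g + f_nl * (phi_g**2 - np.mean(phi_g**2))

in a realistic pipeline the quadratic correction would be applied to the primordial potential before convolving with the radiation transfer functions to obtain the simulated temperature and e polarization multipoles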
in our recent paper xcite we described a fast cubic bispectrum estimator of the amplitude of primordial non gaussianity of local type xmath0 from a combined analysis of the cosmic microwave background cmb temperature and e polarization observations in this paper we generalize the estimator to deal with a partial sky coverage as well as inhomogeneous noise our generalized estimator is still computationally efficient scaling as xmath1 compared to the xmath2 scaling of the brute force bispectrum calculation for sky maps with xmath3 pixels upcoming cmb experiments are expected to yield high sensitivity temperature and e polarization data our generalized estimator will allow us to optimally utilize the combined cmb temperature and e polarization information from these realistic experiments and to constrain primordial non gaussianity
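the wick recipe for the linear term described above can be made concrete with a short sketch the map names a b and c below are hypothetical stand ins for the filtered temperature and polarization maps that enter the cubic statistic and the normalization and sign conventions of the paper are omitted this is an illustration of the recipe not the actual pipeline code

import numpy as np

def linear_term(a_data, b_data, c_data, sims):
    # sims is a list of (a_sim, b_sim, c_sim) maps from signal plus noise monte carlo runs
    ab = np.mean([a * b for a, b, _ in sims], axis=0)    # <a b> over the simulations
    bc = np.mean([b * c for _, b, c in sims], axis=0)    # <b c>
    ac = np.mean([a * c for a, _, c in sims], axis=0)    # <a c>
    # wick pairing of the cubic product, with the unpaired factor taken from the data maps
    return float(np.sum(ab * c_data + bc * a_data + ac * b_data))

subtracting a term of this form from the cubic statistic is what removes the excess variance induced by inhomogeneous noise and the sky cut as quantified in figure fnlstdev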
introduction results conclusion and discussion
neutrino magnetic moments are no doubt among the theoretically best understood and most experimentally studied neutrino electromagnetic properties xcite as was shown long ago xcite in a wide set of theoretical frameworks the neutrino magnetic moment is proportional to the neutrino mass and is in general very small for instance for the minimally extended standard model the dirac neutrino magnetic moment is given by xcite xmath0 at the same time the magnetic moment of a hypothetical heavy neutrino with mass xmath1 is xmath2 xcite it should be noted here that much larger values for the neutrino magnetic moments are possible in various extensions of the standard model see for instance xcite constraints on the neutrino magnetic moment can be obtained in xmath3 scattering experiments from the observed lack of distortions of the recoil electron energy spectra recent reactor experiments provide us with the following upper bounds on the neutrino magnetic moment xmath4 munu collaboration xcite xmath5 texono collaboration xcite the gemma collaboration has obtained the world s best limit xmath6 xcite another kind of neutrino experiment the borexino solar neutrino scattering experiment has obtained a rather strong bound xmath7 xcite the best astrophysical constraint on the neutrino magnetic moment has been obtained from observations of red giant cooling xmath8 xcite as pointed out above the most stringent terrestrial constraints on neutrino effective magnetic moments have been obtained in antineutrino electron scattering experiments and the work to attain further improvements of the limits is in progress in particular it is expected that a new bound at the level of xmath9 can be reached by the gemma collaboration in a new series of measurements at the kalinin nuclear power plant with the detector placed much closer to the reactor which can significantly enhance the neutrino flux see xcite an attempt to reasonably improve the experimental bound on the neutrino magnetic moment was undertaken in xcite where it was claimed that accounting for the electron binding effect in atoms can significantly increase the electromagnetic contribution to the differential cross section with respect to the case when the free electron approximation is used in calculations of the cross section however as was shown in a series of papers xcite reactor neutrino experiments measuring the neutrino magnetic moment are not sensitive to the electron binding effect so that the free electron approximation can be used for them one may expect that neutrino electromagnetic properties can be much more easily visualized when a neutrino is propagating in external magnetic fields and dense matter neutrino propagation in matter is a rather longstanding research field which nevertheless still sees advances and yields many interesting predictions for various phenomena a convenient and elegant way to describe neutrino interaction processes in matter has recently been offered in a series of papers xcite the developed method is based on the use of solutions of the modified dirac equation for a neutrino in matter in feynman diagrams the method was developed earlier for studies of different processes in quantum electrodynamics and was called the method of exact solutions xcite the gain from the introduction of the method was demonstrated by the prediction and detailed quantum description of a new phenomenon the spin light of neutrino in matter the xmath10 first predicted in xcite within the quasi classical treatment of neutrino spin
evolution the essence of the xmath10 is the electromagnetic radiation emitted in the neutrino transition between two different helicity states in matter the simplifications of the process framework such as the use of uniform unpolarized and non moving matter and the neglect of the matter influence on the radiated photon keep the estimate of the real relevance of the process in astrophysical settings far from practical reach in this short paper we would like to take a step towards the completeness of the physical picture and to consider the at first glance puzzling question of the plasmon mass influence on the xmath10 the importance of plasma effects for the xmath10 in matter was first pointed out in xcite the investigations already carried out in this area xcite indicated that the plasmon emitted in the xmath10 has a considerable mass that can affect the physics of the process to see how the plasmon mass enters the xmath10 quantities we appeal to the method of exact solutions and carry out all the computations relevant to the xmath10 in this respect in order to allow a direct comparison we also set all the conditions of the task to be the same as in the corresponding studies of the xmath10 in particular we consider only the standard model neutrino interactions and take matter composed of electrons in the exact solutions method one starts with the modified dirac equation for the neutrino in matter in order to have initial and final neutrino states which would enter the process amplitude the equation reads as follows xcite xmath11 where in the case of neutrino motion through non moving and unpolarized matter xmath12 with xmath13 being the number density of matter electrons under these conditions the equation eq dirac has a plane wave solution determined by the 4 momentum xmath14 and the quantum numbers of helicity xmath15 and sign of energy xmath16 for the details of solving the equation and the exact form of the wave functions xmath17 the reader is referred to xcite and xcite here we cite only the expression for the neutrino energy spectrum xmath18 the s matrix of the process involves the usual dipole electromagnetic vertex xmath19 containing the spin structure i\gamma_5\boldsymbol{\Sigma} and for given spinors for the initial and final neutrino states xmath20 can be written as xmath21 here xmath22 is the photon polarization vector xmath23 is the transition magnetic moment and xmath24 is the normalization length the delta functions in front of the spinor convolution part lead to the conservation laws xmath25 with the energies for the initial and final neutrinos xmath26 taken in accordance with eq dispersion for the photon dispersion it is sufficient for the purpose of our study to use the simplest expression xmath27 as discussed in our previous studies of the xmath10 xcite the most appropriate conditions for the radiation to manifest its properties are met in dense astrophysical objects this is the setting we will use further for the process and in the case of cold plasma the plasmon mass should be taken as xmath28 the numerical evaluation at a typical density gives xmath29 while the density parameter xmath30 let us now consider the influence of dense plasma on the process of spin light of neutrino similarly to the original spin light calculation we consider the case of an initial neutrino possessing the helicity quantum number xmath31 and the corresponding final neutrino helicity is xmath32 using the neutrino energies eq dispersion with the corresponding helicities one can solve the equations eq conservation for the plasmon momentum which is not equal to its energy since we take into account
the dispersion of the emitted photon in plasma photon dispersion for convenience of the calculations it is possible to use the following simplification in most cases the neutrino mass appears to be the smallest parameter in the considered problem and it is several orders of magnitude smaller than any other parameter in the system so we can first examine our process in the approximation of zero neutrino mass though we should not forget that only a neutrino with non zero mass can naturally possess a magnetic moment this simplification should be considered only as a technical one it should be pointed out here that in order to obtain a consistent description of the xmath10 one should account for the effects of the neutrino mass in the dispersion relation and the neutrino wave functions from energy momentum conservation it follows xcite that taking account of the above mentioned simplification the process is kinematically possible only under the condition xmath33 provided with the plasmon momentum we proceed with the calculation of the xmath10 radiation rate and total power the exact calculation of the total rate is an intricate problem and the final expression is too large to be presented here however one can consider the most notable ranges of parameters to investigate some peculiarities of the rate behavior first of all we calculate the rate for the case of the xmath10 without plasma influence this can be done by choosing the limit xmath34 and the obtained result is in full agreement with xcite xmath35 from eq gammasl one easily derives the xmath10 rate for two important cases ie high and ultra high densities of matter just by choosing correspondingly xmath14 or xmath36 as the leading parameter in the brackets since the neutrino mass is the smallest quantity our system falls within the range of relativistic initial neutrino energies the corresponding expression for the total power also covers the high and ultra high density cases xcite as well as the intermediate area where the density parameter and the neutrino momentum are comparable xmath37 if we account for the plasma influence thus xmath38 on the xmath10 we can discuss two important situations one is the area of parameters near the threshold and the other is connected with the direct contribution of xmath39 to the radiation rate expression the latter case is particularly important for this study because it fulfills the aim of the present research in finding the conditions under which the plasmon mass cannot be neglected for physically realistic conditions the density parameter usually appears to be less than the plasmon mass which in its turn is less than the neutrino momentum xmath40 obviously the threshold condition eq threshold should be satisfied since we consider conditions similar to those in various astrophysical objects it is natural to consider high energy neutrinos using the series expansion of the total rate one can obtain the rate of the process in the following form xmath41 where xmath42 approaching the threshold xmath43 the expansion eq gammaslseries becomes inapplicable however it is correct in a rather wide range of parameters with xmath44 and xmath45 near the threshold the total rate can be presented in the form xmath46 but the exact coefficient is too unwieldy to be presented here concerning the power of the xmath10 with the plasmon one can use the expansion xmath47 the expression eq intensslseries is correct only if the system meets the requirement xmath48 otherwise one should use higher orders of the quantity xmath49 in the expansion to achieve a reliable value of the intensity
near the threshold the power has the same dependence on the distance from the threshold xmath50 as the rate of the process there is increasing interest in neutrino electromagnetic properties and in neutrino magnetic moments in particular this interest is stimulated first by the progress in experimental bounds on magnetic moments which has recently been achieved as well as by theoretical predictions of new processes emerging due to the neutrino magnetic moment such as the xmath10 and a belief in its importance for possible astrophysical applications further developing the theory of the spin light of neutrino we have explicitly shown that the influence of the plasmon mass becomes significant see eqs gammaslseries and intensslseries when the parameter xmath51 is comparable with xmath52 this corresponds to the system being near the threshold as long as xmath48 holds so that the system is far from the threshold one can use either the xmath10 radiation rate and total power from xcite or their rather compact generalizations eqs gammaslseries and intensslseries where the plasmon mass is accounted for as a minor adjustment since high energy neutrinos propagating in matter could be a rather typical situation in astrophysics for instance in neutron stars the influence of photon dispersion in plasma on the xmath10 process can be neglected and the threshold generated by the non zero plasmon mass need not be taken into account however the method of exact solutions of the modified dirac equation provides us with analytical expressions for the probability and intensity in the whole range of possible parameters one of the authors a s is thankful to giorgio bellettini giorgio chiarelli mario greco and gino isidori for the invitation to participate in les rencontres de physique de la vallee daoste on results and perspectives in particle physics
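as a rough numerical orientation for the plasmon mass scale invoked above one can evaluate the textbook cold plasma frequency omega_p^2 = 4 pi alpha n_e / m_e in natural units the short python fragment below is our own back of the envelope illustration the electron number density used is an assumed purely illustrative value and is not a number taken from the paper

import math

# textbook cold plasma frequency in natural units: omega_p^2 = 4*pi*alpha*n_e/m_e
# the electron number density below is an assumed illustrative value
alpha = 1.0 / 137.036
m_e_ev = 0.511e6                  # electron mass in eV
hbar_c_ev_cm = 1.9732705e-5       # hbar*c in eV*cm, converts a density in cm^-3 to eV^3

n_e_cm3 = 1.0e30                  # assumed electron number density in cm^-3
n_e_ev3 = n_e_cm3 * hbar_c_ev_cm**3
omega_p_ev = math.sqrt(4.0 * math.pi * alpha * n_e_ev3 / m_e_ev)
print(f"plasmon mass ~ {omega_p_ev:.2e} eV for n_e = {n_e_cm3:.1e} cm^-3")

for this illustrative density the resulting scale is of order tens of kev well below typical high energy neutrino momenta and it grows only as the square root of the density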
recent discussion of the possibility of obtaining more stringent bounds on the neutrino magnetic moment has stimulated new interest in possible effects induced by the neutrino magnetic moment in particular in this note after a short review of the neutrino magnetic moment we re examine the effect of the plasmon mass on neutrino spin light radiation in dense matter we track how the plasmon mass enters the process characteristics and find that its most substantial role is in the formation of the process threshold it is shown that far from this point the plasmon mass can be omitted in all the corresponding physical quantities and one can rely on the results of the massless photon spin light radiation theory in matter
neutrino magnetic moment magnetic moment and neutrino propagation in matter plasmon mass influence conclusion
the intensity anisotropy pattern of the cmbr has already been measured to an extraordinary precision which helped significantly to establish the current cosmological paradigm of a flat universe with a period of inflation in its first moments and the existence of the so called dark energy xcite the polarization anisotropies of the cmbr are an order of magnitude smaller than the intensity anisotropies and provide partly complementary information the polarization pattern is divided into two distinct components termed e and b modes which are scalar and pseudoscalar fields respectively the e modes originate from the dynamics due to the density inhomogeneities in the early universe the b modes are caused by lensing of the e modes by the matter in the line of sight and by gravitational waves in the inflationary period in the very early universe and are expected to be at least one order of magnitude smaller than the e modes the status of the e mode measurements is summarized in figure emodes from which it becomes obvious that the measurements are consistent with the theoretical model but do not yet give meaningful constraints of special importance and interest are the b modes expected from gravitational waves in the inflationary epoch since a detection would allow unique access to the very first moments of the universe the size of this contribution cannot be predicted by theory but is parametrized by the tensor to scalar ratio xmath1 xcite interesting inflationary energy scales of the order of the grand unified theory gut scale of 10xmath2 gev correspond to an xmath1 of xmath310xmath0 which would give rise to detectable signals of a few 10 nk the tiny signal requires unprecedented sensitivity and control of systematics and foregrounds by now receivers have reached sensitivities close to fundamental limits so that the sensitivity can only be increased further with the number of receivers recent developments at the jet propulsion laboratory jpl led to the successful integration of the relevant components of a polarization sensitive pseudo correlation receiver at 90 and 40 ghz in a small chip package this opened the way to future inexpensive mass production of large coherent receiver arrays and led to the formation of the q u imaging experiment quiet collaboration experimental groups from 12 international institutes have joined the experiment and are working on the first prototype arrays which are planned for deployment in 2008 in chile a w band 90 ghz array of 91 receivers and a q band 40 ghz array of 19 receivers will be deployed on new 14 m telescopes mounted on the existing platform of the cosmic background imager cbi in the atacama desert at an altitude of 5080 m it is foreseen to expand the arrays for a second phase of data taking 2010 to arrays with 1000 receivers for the expansion it is planned to mount more 14 m telescopes on the platform and relocate the 7 m crawford hill antenna from new jersey to chile to also access small angular scales a sketch of one receiver and its components can be seen in figure receiver the incoming radiation couples via a feedhorn to an orthomode transducer omt and from that to the two input waveguides of the chip package the chip contains a complete radiometer with high electron mobility transistors hemts implemented as monolithic microwave integrated circuits mmics phase shifters hybrid couplers and diodes the outputs of the four diodes of the radiometer provide measurements of the stokes parameters q and u and fast 4 khz phase switching reduces the effects of the 1/f drifts of the
amplifiers for 10xmath4 of the receivers the omt will be replaced by a magic tee assembled in such a way that the receivers measure temperature differences between neighbouring feeds the signals from the diodes are processed by a digital backend sampling at 800 khz with subsequent digital demodulation this allows unique monitoring of high frequency noise as well as the production of null data sets with out of phase demodulation giving a valuable check of possible subtle systematics the receiver arrays together with the feedhorns are assembled in large cryostats and the chip radiometers are kept at 20 k to ensure low noise from the hemts for a single element a bandwidth of 18 8 ghz and a noise temperature of 45 20 k is aimed for at 90 40 ghz leading to expected sensitivities in chile of 250 160 xmath5kxmath6 per element a prototype array of 7 elements with one omt mounted on top of one chip radiometer is shown on the right hand side of figure receiver the hexagonal prototype arrays of 91 and 19 elements are being assembled from similar subarrays the omts were built in a cost effective split block technique and the corrugated horn arrays were produced as platelet arrays where 100 plates with feed hole patterns are mounted together by diffusion bonding the increase in sensitivity is a necessary but not yet sufficient condition for the successful measurement of b modes as the signal of interest is smaller than the one from astrophysical foregrounds the diffuse emission synchrotron dust from our galaxy and extragalactic sources produces polarized signals of which the distribution and characteristics are not yet known to the precision required for a full removal multifrequency observations are mandatory to study the foreground behaviour and enable the clean extraction of the cmbr polarization anisotropies quiet in its observations will use two frequencies which frame the frequency where the contamination from foregrounds in polarization is expected to be minimal around 70 ghz also it will coordinate the patches to be observed with other polarization experiments to gain additional frequency information fields were selected in which minimal foreground contamination is expected the b modes from gravitational waves will suffer from yet another foreground which in itself is of scientific interest namely the lensing of e modes into b modes using the observations at small angular scales quiet will be able to determine a lensing correction and with that be able to remove that contribution properly while currently ongoing cmbr experiments bicep quad are running with tens of receivers all future experiments are aiming for large arrays with several hundreds of receivers all of them but quiet using bolometers figure comparison visualizes the main parameters of quiet in comparison to other ongoing and planned cmb experiments no interferometers are shown some of the experiments have their main focus on observations of the sunyaev zeldovich effect marked accordingly and not on polarization observations but may still upgrade their detector arrays for polarization sensitivity the parameters of the future experiments were taken from recent papers and talks about the various efforts but since some of the technologies are not yet fully established and not all of the experiments are completely funded it is clear that some of the parameters may change in the course of production both the left and middle plot display beam size versus frequency for the different experiments while the size of the squares indicates different
parameters of the experiments since some experiments quiet polarbear are planned to operate in different phases they have several squares at the same position in the left panel the square area is proportional to the total sensitivity of the experiments which means the smaller the square the more sensitive the experiment as can be seen the next generation of cmb experiments will achieve the desired level of a few nk sensitivity except for the space based mission planck and the balloon experiment spider all ground based experiments focus their sensitivity on small fractions of the sky in this way it is possible to avoid regions of high foreground contamination and also gain a higher signal to noise ratio in the maps which helps in characterizing foregrounds and systematics in order to compare the sensitivity on a map the middle figure displays squares which are in size proportional to the sensitivity in xmath5k square degree the right figure then shows the corresponding white noise level for the different polarization experiments as a function of multipole l in comparison to the different polarization power spectra as can be seen the white noise power is for planck a factor of 100 higher than for the ground based experiments which means the noise on a quiet map is about one order of magnitude lower than on the maps expected from planck the main sensitivity of planck for the measurement of b modes from gravitational waves comes from the reionization peak at low multipoles of l while the ground based experiments like quiet will constrain xmath1 from measuring at the maximum of the gravitational wave signal at xmath7 100 corresponding to an angular scale of 2 degrees quiet is complementary to other experiments in many different ways quiet is the only ground based effort using coherent receivers and thus dealing with different systematics than the bolometric systems it is the only experiment to measure the stokes parameters q and u simultaneously in one pixel which provides a good handle on several systematic effects the array at 40 ghz complements the high frequencies of the bolometer arrays and thus allows one to account for the contamination from synchrotron radiation which dominates at low frequencies by using different telescope sizes quiet will be able to measure both large and small angular scales with the same receivers the lensing contribution is not shown in the power spectrum figure already in phase i quiet will be able to measure the e mode spectrum to an unprecedented precision the expected e and b mode power spectra for the phase ii of quiet with 1000 elements are shown in figure powspec only the sensitivity of the w band 90 ghz arrays from the 14 m telescopes was used assuming that the q band sensitivity is used for foreground removal the results were derived by including several real data effects a realistic observing strategy has been simulated and the method used in capmap to remove ground pickup by mode removal in single scans has been applied xcite the simulations also incorporate effects from e b leakage where the b mode measurement is degraded due to the e mode signal leakage into the b mode spectrum due to the finite size of the observed patch xcite additionally the errors include a marginalization over the power in adjacent xmath7 bins and for b modes also over the e power the expected precision on cosmological parameters assuming initial adiabatic conditions is summarized in table cosmpar note that these estimates had been performed before the publication of wmap results but do agree well with the published
wmap parameter errors from the table one can see that quiet will improve the wmap parameter errors to a size competitive with the expected precision of planck adding the quiet measurements to planck will only bring a small improvement in most of the parameters however quiet will already in phase i be able to constrain the tensor to scalar ratio together with planck to a level significantly smaller than planck alone is expected to reach adding quiet phase ii will bring the limit on xmath1 down to the level of 10xmath0 table cosmpar

             a       b       c       d       e
xmath8       6       4       1       1       1
xmath9       8       7       4       2       2
xmath10      15      14      8       4       3
xmath11      34      23      14      7       6
xmath12      4       2       1       1       1
xmath13      135     0021    0009    0042    0009

we are entering an era where probing gut scale physics is possible a number of experiments are in preparation for seeing the signature of inflation in the b modes of these quiet is the only one using coherent detectors a convincing discovery of the tiny signal will need consistent measurements from complementary techniques and observing frequencies already within the next years quiet will reach the sensitivity to probe together with other experiments interesting levels of xmath1
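the quoted per element sensitivities can be turned into rough array sensitivities under the standard assumption of statistically independent receivers in which case the array noise improves as the square root of the number of elements the snippet below is our own illustration of that scaling not an official projection of the experiment

import math

# per-element sensitivities quoted above in uK*sqrt(s) and the prototype array sizes
bands = {"w band 90 ghz": (250.0, 91), "q band 40 ghz": (160.0, 19)}
for band, (net_per_element, n_elements) in bands.items():
    net_array = net_per_element / math.sqrt(n_elements)   # assumes uncorrelated receiver noise
    print(f"{band}: array sensitivity ~ {net_array:.0f} uK sqrt(s)")

this gives roughly 26 and 37 uk sqrt s for the 91 and 19 element prototype arrays respectively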
a major goal of upcoming experiments measuring the cosmic microwave background radiation cmbr is to reveal the subtle signature of inflation in the polarization pattern which requires unprecedented sensitivity and control of systematics since the sensitivity of single receivers has reached fundamental limits future experiments will take advantage of large receiver arrays in order to significantly increase the sensitivity here we introduce the q u imaging experiment quiet which will use hemt based receivers in chip packages at 90 40 ghz in the atacama desert data taking is planned for the beginning of 2008 with prototype arrays of 91 19 receivers an expansion to 1000 receivers is foreseen with the two frequencies and a careful choice of scan regions there is the promise of effectively dealing with foregrounds and reaching a sensitivity approaching 10xmath0 for the ratio of the tensor to scalar perturbations
the status of polarization measurements the quiet experiment conclusion
fig s1 caption fragment by xmath119 xcite the lines represent the numerical results for the delta function ie all nodes have the same activity potential and for power law activity distributions the arrows indicate xmath741 we set xmath120 and xmath121 we consider sis dynamics on a star graph with xmath7 leaves and derive xmath23 xmath25 xmath31 xmath32 and xmath33 let us denote the state of the star graph by xmath122 where xmath123 and xmath124 are the states of the hub and a specific leaf node respectively and xmath125 is the number of infected nodes in the other xmath126 leaf nodes although a general network with xmath127 nodes allows xmath128 states using this notation we can describe sis dynamics on a star graph by a continuous time markov process with xmath129 states xcite we denote the transition rate matrix of the markov process by xmath130 its element xmath131 is equal to the rate of transition from xmath132 to xmath133 the diagonal elements are given by xmath134 the rates of the recovery events are given by xmath135 the rates of the infection events are given by xmath136 the other elements of xmath130 are equal to xmath137 let xmath138 be the probability for a star graph to be in state xmath132 at time xmath21 because xmath139 where xmath140 is the xmath129 dimensional column vector whose elements are xmath138 we obtain xmath141 note that xmath23 and xmath25 are the probabilities with which xmath142 at time xmath6 when the initial state is xmath143 and xmath144 respectively and that xmath31 xmath32 and xmath33 are the probabilities that xmath145 at time xmath6 when the initial state is xmath144 xmath143 and xmath146 respectively therefore we obtain

\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix} = \begin{pmatrix} \sum_{y,z}\left[\exp(\bm{M}\tau)\right]_{(I,y,z),(I,S,0)} \\ \sum_{y,z}\left[\exp(\bm{M}\tau)\right]_{(I,y,z),(S,I,0)} \\ \sum_{x,z}\left[\exp(\bm{M}\tau)\right]_{(x,I,z),(S,I,0)} \\ \sum_{x,z}\left[\exp(\bm{M}\tau)\right]_{(x,I,z),(I,S,0)} \\ \sum_{x,z}\left[\exp(\bm{M}\tau)\right]_{(x,I,z),(S,S,1)} \end{pmatrix} \qquad \text{(eq cis)}

when xmath76 eq eq cis yields xmath148 and

c_2 = c_4 = \frac{e^{-\tau}}{2}\left[-e^{-\beta\tau}+e^{-\frac{1+\beta}{2}\tau}\left(\cosh\frac{\kappa\tau}{2}+\frac{1+3\beta}{\kappa}\sinh\frac{\kappa\tau}{2}\right)\right]

where xmath149 and xmath33 is not defined when xmath81 we can apply an individual based approximation xcite we assume that the state of each node is statistically independent of each other ie xmath150 where xmath151 for example is the probability that the hub takes state xmath123 we have suppressed xmath21 in eq eq independent assumption under the individual based approximation xmath123 and xmath124 obey bernoulli distributions with parameters xmath152 and xmath153 respectively and xmath125 obeys a binomial distribution with parameters xmath126 and xmath154 where xmath155 is given by xmath156 by substituting eq eq p in the time derivative of eq pmf we obtain xmath157 if xmath158 xmath159 obeys linear dynamics given by xmath160 where xmath161 in a similar fashion to the derivation of eq eq cis we obtain

\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix} \approx \begin{pmatrix} \left[\exp(\bm{M}_{\rm MF}\tau)\right]_{11} \\ \left[\exp(\bm{M}_{\rm MF}\tau)\right]_{12} \\ \left[\exp(\bm{M}_{\rm MF}\tau)\right]_{22} \\ \left[\exp(\bm{M}_{\rm MF}\tau)\right]_{21} \\ \frac{1}{m-1}\left[\exp(\bm{M}_{\rm MF}\tau)\right]_{23} \end{pmatrix} = e^{-\tau}\begin{pmatrix} \cosh(\beta\sqrt{m}\,\tau) \\ \frac{1}{\sqrt{m}}\sinh(\beta\sqrt{m}\,\tau) \\ 1+\frac{\cosh(\beta\sqrt{m}\,\tau)-1}{m} \\ \frac{1}{\sqrt{m}}\sinh(\beta\sqrt{m}\,\tau) \\ \frac{\cosh(\beta\sqrt{m}\,\tau)-1}{m} \end{pmatrix} \qquad \text{(eq cis for large m)}

we estimate the extent to which eq eq cis for large m is valid as follows first we need xmath81 because the initial condition xmath163 should satisfy xmath158 second xmath164 must satisfy xmath165 because xmath166 in eq eq pdotmf to satisfy xmath158 we need xmath167 these two conditions are sufficient for this approximation to be valid at the epidemic threshold the largest eigenvalue of xmath168 is equal to unity let
xmath169 be the corresponding eigenvector of xmath168 we normalize xmath170 such that xmath171 by substituting eq 7 in xmath172 we obtain the system of equations xmath173 equation v3 gives xmath174 where xmath175 by combining eqs v2 and eq vj we obtain xmath176v2endaligned where xmath177 because xmath170 is normalized we obtain xmath178left 11qsr rightrlangle arangle1qsleft qlangle aranglerightr fracleft 1fracqlangle arangle rightqrrlangle arangle1qsleft qlangle aranglerightr fracfracqlangle arangleleft 1fracqlangle arangle rightqrrlangle arangle1qsleft qlangle aranglerightr fracleftfracqlangle arangleright2left 1fracqlangle arangle rightqrrlangle arangle1qsleft qlangle aranglerightr vdots endpmatrix labeleq v equation v1 leads to xmath179v1 langle arangleleft ss 1qsu rightv2 labeleq v12endaligned where xmath180 by substituting eq eq v in eq eq v12 we obtain xmath181 which is eq 8 in the main text if all nodes have the same activity potential xmath20 eq eq f derivation is reduced to xmath182when xmath81 the epidemic threshold can be obtained by the individual based approximation xcite we assume that all nodes have the same activity potential xmath20 by substituting eq eq cis for large m in eq eq f for same activity we obtain xmath183 equation eq et for large m agrees with the value derived in xcite note that this approximation is valid only for small xmath6 xmath184 in the limit xmath185 we obtain xmath98 xmath186 for general activity distributions xmath187 leads to xmath188 where xmath1893left2m1langle aranglerightsleftfraclangle arangle1mlangle aranglerightnonumber mlangle arangleleft 1mlangle arangleright2left m1m2 1langle arangleright d m2langle arangle2left 1mlangle arangleright3left1m1langle aranglerightsleftfraclangle arangle1mlangle aranglerightnonumber m2langle arangle2left 1mlangle arangleright2 endaligned at xmath191 an infinitesimal increase in xmath6 from xmath137 to xmath192 does not change the xmath60 value for general activity distributions by setting xmath193 for xmath67 given by eq eq f derivation we obtain xmath194 eq eq tau star for clique and xmath103 respectively all nodes are assumed to have the same activity potential given by eq eq a for clique we set xmath116width326 we consider the case in which an activated node creates a clique a fully connected subgraph with xmath7 randomly chosen nodes instead of a star graph this situation models a group conversation among xmath127 people we only consider the case in which all nodes have the same activity potential xmath20 the mean degree for a network in a single time window is given by xmath195 the aggregate network is the complete graph we impose xmath196 so that cliques in the same time window do not overlap as in the case of the activity driven model we denote the state of a clique by xmath197 where xmath123 and xmath124 are the states of the activated node and another specific node respectively and xmath125 is the number of infected nodes in the other xmath126 nodes the transition rate matrix of the sis dynamics on this temporal network model is given as follows the rates of the recovery events are given by eqs eq recovery1 eq recovery2 and eq recovery3 the rates of the infection events are given by xmath198 we obtain xmath34 xmath99 from xmath130 in the same fashion as in the case of the activity driven model because of the symmetry inherent in a clique we obtain xmath199 and xmath200 therefore eq eq f for same activity is reduced to xmath201 calculations similar to the case of the activity driven model lead to 
xmath202 the phase diagram shown in fig fs2 is qualitatively the same as those for the activity driven model fig note that in fig fs2 we selected the activity potential value xmath20 to force xmath60 to be independent of xmath7 at xmath203 ie xmath204 although eq eq tau star for clique coincides with the expression of xmath117 for the activity driven model eq 10 xmath117 as a function of xmath7 is different between the activity driven model solid line in fig 3a and the present clique network model solid line in fig fs2 this is because the values of xmath20 are different between the two cases when xmath205
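the construction above a continuous time markov chain on the star graph whose transition probabilities are obtained from the matrix exponential of the rate matrix can be prototyped directly the sketch below is our own illustration not the authors code the state is encoded as x y z for the hub the tagged leaf and the number of infected nodes among the other m 1 leaves the recovery rate is set to one and the returned values mirror the five conditional probabilities c_1 to c_5 discussed above

import numpy as np
from scipy.linalg import expm

def star_sis_probabilities(m, beta, tau):
    # states (x, y, z): hub state, tagged-leaf state, number of infected other leaves
    # requires m >= 2 so that the initial state for c5 (another leaf infected) exists
    states = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in range(m)]
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (x, y, z), i in idx.items():
        # recovery events, rate 1 per infected node
        if x == 1:
            Q[i, idx[(0, y, z)]] += 1.0
        if y == 1:
            Q[i, idx[(x, 0, z)]] += 1.0
        if z > 0:
            Q[i, idx[(x, y, z - 1)]] += z
        # infection events, rate beta per link between an infected and a susceptible node
        if x == 1:
            if y == 0:
                Q[i, idx[(x, 1, z)]] += beta
            if z < m - 1:
                Q[i, idx[(x, y, z + 1)]] += beta * (m - 1 - z)
        elif y + z > 0:
            Q[i, idx[(1, y, z)]] += beta * (y + z)
        Q[i, i] = -Q[i].sum()
    P = expm(Q * tau)                          # P[i, j]: probability of state j at tau from i
    hub_inf = [idx[s] for s in states if s[0] == 1]
    leaf_inf = [idx[s] for s in states if s[1] == 1]
    c1 = P[idx[(1, 0, 0)], hub_inf].sum()      # hub infected at tau, only hub infected at 0
    c2 = P[idx[(0, 1, 0)], hub_inf].sum()      # hub infected at tau, only tagged leaf at 0
    c3 = P[idx[(0, 1, 0)], leaf_inf].sum()     # tagged leaf infected, tagged leaf at 0
    c4 = P[idx[(1, 0, 0)], leaf_inf].sum()     # tagged leaf infected, only hub at 0
    c5 = P[idx[(0, 0, 1)], leaf_inf].sum()     # tagged leaf infected, another leaf at 0
    return c1, c2, c3, c4, c5

print(star_sis_probabilities(m=5, beta=0.5, tau=1.0))

such a direct construction also provides a quick numerical cross check of the closed form expressions quoted above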
social contact networks underlying epidemic processes in humans and animals are highly dynamic the spreading of infections on such temporal networks can differ dramatically from spreading on static networks we theoretically investigate the effects of concurrency the number of neighbors that a node has at a given time point on the epidemic threshold in the stochastic susceptible infected susceptible dynamics on temporal network models we show that network dynamics can suppress epidemics ie yield a higher epidemic threshold when nodes concurrency is low but can also enhance epidemics when the concurrency is high we analytically determine different phases of this concurrency induced transition and confirm our results with numerical simulations introduction social contact networks on which infectious diseases occur in humans and animals or viral information spreads online and offline are mostly dynamic switching of partners and usually non markovian activity of individuals for example shape network dynamics on such temporal networks xcite better understanding of epidemic dynamics on temporal networks is needed to help improve predictions of and interventions in emergent infectious diseases to design vaccination strategies and to identify viral marketing opportunities this is particularly so because what we know about epidemic processes on static networks xcite is only valid when the timescales of the network dynamics and of the infectious processes are well separated in fact temporal properties of networks such as long tailed distributions of inter contact times temporal and cross edge correlation in inter contact times and entries and exits of nodes considerably alter how infections spread in a network xcite in the present study we focus on a relatively neglected component of temporal networks ie the number of concurrent contacts that a node has even if two temporal networks are the same when aggregated over a time horizon they may be different as temporal networks due to different levels of concurrency concurrency is a long standing concept in epidemiology in particular in the context of monogamy polygamy affecting sexually transmitted infections xcite modeling studies to date largely agree that a level of high concurrency eg polygamy as opposed to monogamy enhances epidemic spreading in a population however this finding while intuitive lacks theoretical underpinning first some models assume that the mean degree or equivalently the average contact rate of nodes increases as the concurrency increases xcite in these cases the observed enhancement in epidemic spreading is an obvious outcome of a higher density of edges rather than a high concurrency second other models that vary the level of concurrency while preserving the mean degree are numerical xcite in the present study we build on the analytically tractable activity driven model of temporal networks xcite to explicitly modulate the size of the concurrently active network with the structure of the aggregate network fixed with this machinery we show that the dynamics of networks can either enhance or suppress infection depending on the amount of concurrency that individual nodes have note that analysis of epidemic processes driven by discrete pairwise contact events which is a popular approach xcite does not address the problem of concurrency because we must be able to control the number of simultaneously active links possessed by a node in order to examine the role of concurrency without confounding with other aspects model we consider 
the following continuous time susceptible infected susceptible sis model on a discrete time variant of activity driven networks which is a generative model of temporal networks xcite the number of nodes is denoted by xmath0 each node xmath1 xmath2 is assigned an activity potential xmath3 drawn from a probability density xmath4 xmath5 activity potential xmath3 is the probability with which node xmath1 is activated in a window of constant duration xmath6 if activated node xmath1 creates xmath7 undirected links each of which connects to a randomly selected node fig f1 if two nodes are activated and send edges to each other we only create one edge between them however for large xmath0 and relatively small xmath3 such events rarely occur after a fixed time xmath6 all edges are discarded then in the next time window each node is again activated with probability xmath3 independently of the activity in the previous time window and connects to randomly selected nodes by xmath7 undirected links we repeat this procedure therefore the network changes from one time window to another and is an example of a switching network xcite a large xmath6 implies that network dynamics are slow compared to epidemic dynamics in the limit of xmath8 the network blinks infinitesimally fast enabling the dynamical process to be approximated on a time averaged static network as in xcite width326 for the sis dynamics each node takes either the susceptible or infected state at any time each susceptible node contracts infection at rate xmath9 per infected neighboring node each infected node recovers at rate xmath10 irrespectively of the neighbors states changing xmath6 to xmath11 xmath12 is equivalent to changing xmath9 and xmath10 to xmath13 and xmath14 respectively whilst leaving xmath6 unchanged therefore we set xmath15 without loss of generality analysis we calculate the epidemic threshold as follows for the sake of the analysis we assume that star graphs generated by an activated node which we call the hub are disjoint from each other because a star graph with hub node xmath1 overlaps with another star graph with probability xmath16 where xmath17 is the mean activity potential we impose xmath18 we denote by xmath19 the probability that a node with activity xmath20 is infected at time xmath21 the fraction of infected nodes in the entire network at time xmath21 is given by xmath22 let xmath23 be the probability with which the hub in an isolated star graph is infected at time xmath24 when the hub is the only infected node at time xmath21 and the network has switched to a new configuration right at time xmath21 let xmath25 be the probability with which the hub is infected at xmath24 when only a single leaf node is infected at xmath21 the probability that a hub with activity potential xmath20 is infected after the duration xmath6 of the star graph denoted by xmath26 is given by xmath27 in deriving eq eq rho1 we considered the situation near the epidemic threshold such that at most one node is infected in the star graph at time xmath21 and hence xmath28 the probability that a leaf with activity potential xmath20 that has a hub neighbor with activity potential xmath29 is infected after time xmath6 is analogously given by xmath30 where xmath31 xmath32 and xmath33 are the probabilities with which a leaf node with activity potential xmath20 is infected after duration xmath6 when only that leaf node the hub and a different leaf node is infected at time xmath21 respectively we derive formulas for xmath34 xmath35 in the supplemental 
material the probability that an isolated node with activity potential xmath20 is infected after time xmath6 is given by xmath36 by combining these contributions we obtain xmath37 to analyze eq eq rho further we take a generating function approach by multiplying eq eq rho by xmath38 and averaging over xmath20 we obtain xmath39gz labeleq thetaendaligned where xmath40 xmath41 xmath42 xmath43 xmath44 xmath45 is the probability generating function of xmath20 xmath46 and throughout the paper the superscript xmath47 represents the xmath48th derivative with respect to xmath49 we expand xmath19 as a maclaurin series as follows xmath50 let xmath51 be the fraction of initially infected nodes which are uniformly randomly selected independently of xmath20 we represent the initial condition as xmath52 epidemic dynamics near the epidemic threshold obey linear dynamics given by xmath53 by substituting xmath54 and xmath55 in eq eq theta we obtain xmath56 a positive prevalence xmath57 ie a positive fraction of infected nodes in the equilibrium state occurs only if the largest eigenvalue of xmath58 exceeds xmath59 therefore we get the following implicit function for the epidemic threshold denoted by xmath60 xmath61 where xmath62 xmath63 xmath64 xmath65 and xmath66 see supplemental material for the derivation note that xmath67 is a function of xmath9 xmath68 through xmath69 xmath70 xmath71 and xmath72 which are functions of xmath9 in general we obtain xmath60 by numerically solving eq eq implicit eq but some special cases can be determined analytically in the limit xmath73 eq eq implicit eq gives xmath741 which coincides with the epidemic threshold for the activity driven model derived in the previous studies xcite in fact this xmath60 value is the epidemic threshold for the aggregate and hence static network whose adjacency matrix is given by xmath75 xcite as demonstrated in fig s1 for general xmath6 if all nodes have the same activity potential xmath20 and if xmath76 we obtain xmath60 as the solution of the following implicit equation xmath77 nonumber etau1 2a0 labeleq et for 1 endaligned where xmath78 the theoretical estimate of the epidemic threshold eq eq implicit eq we use eq eq et for 1 in the case of xmath76 is shown by the solid lines in figs f2a and f2b it is compared with numerically calculated prevalence values for various xmath6 and xmath9 values shown in different colors equations eq implicit eq and eq et for 1 describe the numerical results fairly well when xmath76 the epidemic threshold increases with xmath6 and diverges at xmath79 fig f2a furthermore slower network dynamics ie larger values of xmath6 reduce the prevalence for all values of xmath9 in contrast when xmath80 the epidemic threshold decreases and then increases as xmath6 increases fig f2b the network dynamics ie finite xmath6 impact epidemic dynamics in a qualitatively different manner depending on xmath7 ie the number of concurrent neighbors that a hub has note that the estimate of xmath60 by the individual based approximation xcite see supplemental material for the derivation which may be justified when xmath81 is consistent with the numerical results and our theoretical results only at small xmath6 dashed lines in fig f2b a c and e and xmath80 b d and f in a and b all nodes have the same activity potential value xmath20 the solid lines represent the analytical estimate of the epidemic threshold eq eq implicit eq we plot eq eq et for 1 instead in a the dashed lines represent the epidemic threshold obtained from the individual 
based approximation supplementary material the color indicates the prevalence in c and d the activity potential xmath82 xmath83 obeys a power law distribution with exponent xmath84 in ad we set xmath85 and adjust the values of xmath20 and xmath86 such that the mean degree is the same xmath87 in the four cases in e and f the activity potential is constructed from workspace contact data obtained from the sociopatterns project xcite this data set contains contacts between pairs of xmath88 individuals measured every xmath89 seconds we calculate the degree of each node averaged over time denoted by xmath90 and define the activity potential as xmath91m we simulate the stochastic sis dynamics using the quasistationary state method xcite as in xcite and calculate the prevalence averaged over xmath92 realizations after discarding the first xmath93 time steps we set the step size xmath94 in ad and xmath95 in e and fwidth326 the results shown in figs f2a and f2b are qualitatively simillar when the activity potential xmath20 is power law distributed figs f2c and f2d and when xmath4 is constructed from empirical data obtained from the sociopatterns project xcite figs f2e and f2f to illuminate the qualitatively different behaviors of the epidemic threshold as xmath6 increases we determine a phase diagram for the epidemic threshold we focus our analysis on the case in which all nodes share the activity potential value xmath20 noting that qualitatively similar results are also found for power law distributed activity potentials fig f3b we calculate the two boundaries partitioning different phases as follows first we observe that the epidemic threshold diverges for xmath96 in the limit xmath97 infection starting from a single infected node in a star graph immediately spreads to the entire star graph leading to xmath98 xmath99 by substituting xmath98 in eq eq implicit eq we obtain xmath100 where xmath101 when xmath102 infection always dies out even if the infection rate is infinitely large this is because in a finite network infection always dies out after sufficiently long time due to stochasticity xcite second although xmath60 eventually diverges as xmath6 increases there may exist xmath103 such that xmath60 at xmath104 is smaller than the xmath60 value at xmath105 motivated by the comparison between the behaviour of xmath60 at xmath76 and xmath80 fig f2 we postulate that xmath103 xmath106 exists only for xmath107 then we obtain xmath108 at xmath109 the derivative of eq eq implicit eq gives xmath110 because xmath108 at xmath111 we obtain xmath112 which leads to xmath113 when xmath114 network dynamics ie finite xmath6 always reduce the prevalence for any xmath6 figs f2a f2c and f2e when xmath115 a small xmath6 raises the prevalence as compared to xmath105 ie static network but a larger xmath6 reduces the prevalence figs f2b f2d and f2f the phase diagram based on eqs eq tau star and eq mc is shown in fig f3a the xmath60 values numerically calculated by solving eq eq implicit eq are also shown in the figure it should be noted that the parameter values are normalized such that xmath60 has the same value for all xmath7 at xmath105 we find that the dynamics of the network may either increase or decrease the prevalence depending on the number of connections that a node can simultaneously have extending the results shown in fig f2 these results are not specific to the activity driven model the phase diagram is qualitatively the same for a different model in which an activated node induces a clique instead of a 
star (Fig. S2), modeling a group-conversation event, as some temporal network models do xcite. [Fig. 3 caption (fragment):] when the activity potential is (a) equal to xmath20 for all nodes or (b) obeys a power-law distribution with exponent xmath84 (xmath82). We set xmath116 at xmath76 and adjust the values of xmath20 and xmath86 such that xmath60 takes the same value for all xmath7 at xmath105. In the die-out phase, infection eventually dies out for any finite xmath9. In the suppressed phase, xmath60 is larger than the xmath60 value at xmath105. In the enhanced phase, xmath60 is smaller than the xmath60 value at xmath105. The solid and dashed lines represent xmath117 (eq eq tau star) and xmath103, respectively. The color bar indicates the xmath60 values; in the gray regions, xmath118.

Discussion: Our analytical method shows that the presence of network dynamics boosts the prevalence and decreases the epidemic threshold xmath60 when the concurrency xmath7 is large, and suppresses the prevalence and increases xmath60 when xmath7 is small, for a range of values of the network dynamic timescale xmath6. This result lends theoretical support to previous claims that concurrency boosts epidemic spreading xcite. The result may sound unsurprising, because a large xmath7 value implies that there exists a large connected component at any given time. However, our finding is not trivial, because a large component consumes many edges, such that other parts of the network at the same time, or the network at other times, would be more sparsely connected as compared to the case of a small xmath7. Our results confirm that monogamous sexual relationships, or a small group of people chatting face to face, as opposed to polygamous relationships or large conversation groups, hinder epidemic spreading, where we compare like with like by constraining the aggregate (static) network to be the same in all cases. For general temporal networks, immunization strategies that decrease concurrency (e.g., discouraging polygamy) may be efficient. Restricting the size of the concurrent connected component (e.g., the size of a conversation group) may also be a practical strategy. Another important contribution of the present study is the observation that infection dies out for a sufficiently large xmath6 regardless of the level of concurrency, as shown in Figs. 3 and S1. The transition to the die-out phase occurs at values of xmath6 that correspond to network dynamics and epidemic dynamics having comparable timescales. This is a stochastic effect and cannot be captured by existing approaches to epidemic processes on temporal networks that neglect stochastic dying out, such as differential-equation systems for pair formation/dissolution models xcite and individual-based approximations xcite. Our analysis methods explicitly consider such stochastic effects and are therefore expected to be useful beyond the activity-driven model (or the clique-based temporal networks analyzed in the Supplemental Material) and the SIS model.

We thank Leo Speidel for discussion. We thank the SocioPatterns collaboration (http://www.sociopatterns.org) for providing the data set. T.O. acknowledges the support provided through the JSPS Research Fellowship for Young Scientists. J.G. acknowledges the support provided through Science Foundation Ireland (grant numbers 15/SPP/E3125 and 11/PI/1026). N.M. acknowledges the support provided through JST CREST and the JST ERATO Kawarabayashi Large Graph Project.
Supplemental Material for "Concurrency-induced transitions in epidemic dynamics on temporal networks"
prevalence on the aggregate network
derivation of @xmath23, @xmath25, @xmath31, @xmath32, and @xmath33
derivation of eq. (8)
epidemic threshold under the individual-based approximation
derivation of @xmath117 for general activity distributions
derivation of @xmath190 for general activity distributions
temporal networks composed of cliques
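To make the generative model described above concrete, the following is a minimal simulation sketch of SIS dynamics on an activity-driven temporal network. The names (N, m, a, tau, beta, mu) are stand-ins for the symbols hidden behind the xmath placeholders, the numerical values are illustrative only, and a plain fixed-step update is used instead of the quasistationary-state method employed in the paper, so the infection can (and near the threshold often will) die out stochastically.

```python
# Minimal sketch (assumed parameter values) of SIS dynamics on an
# activity-driven temporal network: nodes activate with probability a_i per
# window of length tau, send m undirected edges to random nodes, and the
# whole edge set is discarded at the end of each window.
import numpy as np

rng = np.random.default_rng(0)

N    = 500          # number of nodes
m    = 3            # edges created by an activated node
tau  = 1.0          # duration of a time window
beta = 0.5          # infection rate per infected neighbour
mu   = 1.0          # recovery rate (the text fixes one rate by rescaling)
a    = np.full(N, 0.05)   # activity potentials; here identical for all nodes

def build_window(a, m, rng):
    """One time window: each node activates with probability a_i and sends
    m undirected edges to randomly chosen nodes (no duplicates, no self-loops)."""
    n = len(a)
    adj = [[] for _ in range(n)]
    for i in np.where(rng.random(n) < a)[0]:
        for j in rng.choice(n, size=m, replace=False):
            if j != i and j not in adj[i]:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def run_sis(a, m, tau, beta, mu, T=100.0, dt=0.01, rho0=0.05, rng=rng):
    """Return the fraction of infected nodes at time T (0 if infection died out)."""
    n = len(a)
    infected = rng.random(n) < rho0
    t = 0.0
    while t < T and infected.any():
        adj = build_window(a, m, rng)
        for _ in range(int(round(tau / dt))):
            n_inf = np.array([sum(infected[j] for j in adj[i]) for i in range(n)])
            new_inf = (~infected) & (rng.random(n) < 1.0 - np.exp(-beta * n_inf * dt))
            new_rec = infected & (rng.random(n) < 1.0 - np.exp(-mu * dt))
            infected = (infected | new_inf) & ~new_rec
        t += tau
    return infected.mean()

print("prevalence estimate:", run_sis(a, m, tau, beta, mu))
```

Scanning beta and tau with such a sketch gives a rough numerical picture of how the prevalence depends on the window duration; near the threshold, however, the stochastic die-out emphasized in the article makes the quasistationary-state method used by the authors the appropriate tool.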
the theoretical treatment of the longstanding problem of turbulent flows xcite has to relate dynamical systems theory with non equilibrium statistical physics xcite the central notion of physical turbulence theory is the concept of the energy cascade highlighting the fact that turbulent flows are essentially transport processes of quantities like energy or enstrophy in scale although well established theories due to richardson kolmogorov onsager heisenberg and others for reviews we refer the reader to xcite can capture gross features of the cascade process in a phenomenological way the dynamical aspects are by far less understood and usually are investigated by direct numerical simulations of the navier stokes equations an exception in some sense are inviscid fluid flows in two dimensions based on the work of helmholtz xcite it was kirchhoff xcite who pointed out that the partial differential equation can be reduced to a hamiltonian system for the locations of point vortices provided one considers initial conditions where the vorticity is a superposition of delta distributions we refer the reader to the works of aref xcite as well as the monographs xcite due to onsager xcite for a discussion we refer the reader to xcite a statistical treatment of point vortex dynamics is possible for equilibrium situations because of the hamiltonian character of the dynamics provided the ergodic hypothesis holds extensions to non equilibrium situations based on kinetic equations have been pursued eg by joyce and montgomery xcite lundgren and pointin xcite as well as more recently by chavanis xcite the purpose of the present article is to generalize kirchhoff s point vortex model to a rotor model that exhibits the formation of large scale vortical structures due to the formation of rotor clusters the existence of such a process in two dimensional flows where a large scale vorticity field spontaneously emerges from an initially random distribution of vortices was first predicted by kraichnan xcite and is termed an inverse cascade thereby the energy that is injected into the small scales is transfered to larger scales whereas the enstrophy follows a direct cascade from large to small scales it was also kraichnan xcite who gave an intuitive explanation of the possible mechanism of the cascade he considered a small scale axisymmetric vortical structure that is exposed to a large scale strain field eventually the vortex is elongated along the stretching direction of the strain ie to a first approximation drawn out into an elliptical structure this thinning mechanism induces relative motions between vortices that have been deformed under their mutual strain which leads to a decrease of the kinetic energy of the small scale motion and consequently to an energy transfer upscale more recently it has been pointed out numerically and experimentally by chen et al xcite that the effect of vortex thinning is indeed an important feature of the inverse cascade an appropriate vortex model for the inverse cascade therefore has to provide a mechanism similar to that identified in xcite although several point vortex models have been known for a long time to form large scale vortical structures from an initially random distribution of point vortices due to the events of vortex merging xcite or special forcing mechanisms xcite an explicit inclusion of the concept of vortex thinning never has been taken into account in our vortex model the small scale vortical structure is represented by a rotor consisting of two point vortices 
with equal circulation that are glued together by a nonelastic bond the main observation now is that the two co rotating point vortices mimic a far field that is similar to an elliptical vortex which makes the rotor sensitive to a large scale strain the model is motivated by a representation of the vorticity field as a superposition of vortices with elliptical gaussian shapes along the lines of melander styczek and zabusky xcite the nonelastic bond in a rotor can be considered as an over damped spring which models the influence of forcing and viscous damping however the main renewal in this model is not the mechanism of how the energy is injected into the system but how the energy is transfered upscale due to the strain induced relative motions between the rotors in the sense of vortex thinning the efficiency of the cascade in the rotor model is supported by the relatively fast demixing of the system as well as a kolmogorov constant of xmath0 that is within the range of accepted values xcite this paper is organized as follows first of all we consider a decomposition of the vorticity field into localized vortices with different shapes in section dec in section ans we make an ansatz for the shapes which corresponds to an elliptical distribution of the vorticity and discuss the interaction of two vortices with like signed circulation within the point vortex model the gaussian vortex model and the elliptical model it will explicitly be shown that the former two models do not lead to a relative motion between the vortices and that the thinning mechanism is only taken into account by the elliptical model a suitable forcing mechanism for the vorticity equationis introduced in section forcing and then used within our generalized vortex model presented in section modelsection as it is known from basic fluid dynamics the vorticity xmath1 only possesses one component in two dimensional flows and obeys the evolution equation xmath2 here the advecting velocity field is determined by biot savart s law according to xmath3 we consider the two dimensional vorticity equation in fourier space derived from equation omega in the appendix fourier vorticity according to xmath4 with xmath5 in the following the vorticity is decomposed into vortices xmath6 with the circulation xmath7 that are centered at xmath8 and that possess the shapes xmath9 namely xmath10 our ansatz thus reads xmath11 for xmath12 we recover the vorticity field xmath13 of point vortices xmath14 that are located at the positions xmath8 and that are a solution of the ideal vorticity equation xmath15 which conserves the vorticity along a lagrangian trajectory inserting the vorticity field from point into biot savart s law biot immediately yields the evolution equation for the point vortices xmath16 we now insert our ansatz ansatz into the vorticity equation and obtain xmath17 nonumber ibf k cdot sumj lgammaj gammal int textrmdbf k bf ubf k eibf kbf kcdot bf xjibf kcdot bf xl times ew jbf kbf ktwlbf ktendaligned the left hand side of this equation contains the sweeping dynamic of the vortices encoded in the temporal change of xmath8 as well as the temporal change of the shapes xmath9 due to shearing and vorticity in the inviscid case the entire dynamic of the xmath18th vortex is determined by the nonlinearity on the right hand side of equation evom which couples the different fourier modes of the vortices xmath19 as well as the self interaction term from xmath20 in a rather complicated manner nevertheless a separation of the effects becomes 
possible under the assumption that the overlap of the different vortex structures is negligible which is valid for widely separated vortices to this end we single out the terms in the summations over xmath18 and get xmath21 the sweeping dynamic can now be defined via the terms in the evolution equation evomneu which are proportional to xmath22 this immediately yields the evolution equations for the center of the vortices xmath23 where we have defined the velocity kernels xmath24 inserting the evolution equation of the vortex centers back into yields the evolution equations for the shapes xmath25 ewibf ktwlbf kt nonumber times left ewibf kbf ktwibf ktwibf kt1rightendaligned here the sum includes also the self interaction term with xmath20 the system of equations xj and shape is the extension of the set of evolution equations for the xmath26point vortices point and takes into account possible changes of the shapes xmath9 of each vortex it is important to stress that up to now we did not impose any restrictions on the shapes xmath27 the vorticity of an elliptical vortex with major and minor semi axes xmath28 and xmath29 can be written according to xmath30 where xmath31 is the symmetric matrix of the dyadic products of the semi axes a rotation of the coordinate system then turns elli into xmath32 where xmath33 is the is the ratio of the major to the minor semi axes the vorticity in fourier space thus reads xmath34 which again corresponds to an elliptical distribution of the vorticity an elliptical representation of the shapes can thus be obtained via the approximation xmath35 with the symmetric matrix xmath36 in approximating the last term on the right hand side of equation shape by xmath37endaligned we are able to derive an evolution equation for the matrix xmath36 namely xmath38 nonumber suml ne j gammal sjlbf xjbf xl cjcj sjlbf xjbf xlt suml gammal nabla bf uilbf xibf xl cici nabla bf uilbf xibf xlendaligned here we explicitly have introduced the matrix xmath39 and have singled out the term with xmath20 the velocity field is now determined from eq u up to the first order in xmath40 valid for widely separated vortices xmath41 fracbf r2pibf r2endaligned the evolution equation for the vortex centers then reads xmath42nablabf xj bf ez times fracbf xjbf xl 4pibf xjbf xl2endaligned a similar system of equations evolc and xi has been obtained by melander et al xcite by means of a truncation of the stream function within their second order moment model for the euler equations it is illustrative to consider the interaction of two vortices xmath43 and xmath44 at the positions xmath45 and xmath46 that possess equal circulation xmath47 in the realm of the different vortex models considered above namely the point vortex model the gaussian shape model and the elliptical gaussian shape model i point vortex model for the case where xmath48 we recover the evolution equations of two point vortices xmath49 where we made use of xmath50 with the inverse laplacian xmath51 the evolution equation for the relative coordinate xmath52 then reads xmath53 or xmath54 which is a circular motion of the point vortices around their center xmath55 with the angular velocity xmath56 gaussian shapes let us consider the case of gaussian shapes xmath57 and xmath58 the symmetry of the problem imposes that xmath59 and we arrive at the following evolution equations for the centers xmath60 in making use of xmath61 which is the velocity profile of a lamb oseen vortex the evolution equation for the relative coordinate reads xmath62 the 
evolution equation for the shapes xmath63 has to be evaluated in a similar fashion from eq evolc but for now we invoke the approximation xmath64 where we have neglected the interaction terms in evolc this yields the evolution equations for two lamb oseen vortices xmath65 in comparison to the angular velocity of the point vortex pair from above the angular velocity of the gaussian vortex pair is thus slowed down by viscosity however if we observe such two vortices in real flows we would see a deformation of the two vortices due to their mutual strain this deformation in turn leads to an attractive motion of the vortex centers and ultimately to a merging process of the two vortices at this point it is important to notice that a direct consequence of an axisymmetric vorticity profile is that xmath66 which means that no relative motion is induced furthermore in this context we want to mention that a recent investigation of the two point vorticity statistic in two dimensional turbulence within a gaussian approximation revealed the absence of an energy flux from smaller scales to larger scales xcite the emergence of deformable structures that induce such relative motions in the context of vortex thinning can thus be considered as an important feature of the inverse cascade iii elliptical shapes as we have discussed in ii the mutual interaction of gaussian vortices in real flows leads to deformations and subsequently attractive motions of the vortex centers such deformations can be considered in a first approximation as elliptical deformations therefore the interaction of two elliptical vortices should for the first time lead to non vanishing relative motions the evolution equation for two elliptically shaped vortices read xmath67 bf k dot bf x2 gamma int textrmdbf k bf ubf k ei bf k cdot bf x2bf x1 efrac12 bf k c1c2 bf kendaligned for widely separated vortices the evolution equation for the relative coordinate thus reads xmath68nablabf r right bf ez times fracbf r r2endaligned which can lead to contributions to the relative motion xmath69 provided that the matrices xmath70 and xmath71 do not reduce to diagonal matrices as in the case of gaussian shapes whether the motion is attractive or repulsive is to a far extend determined by the alignment angle xmath72 between xmath73 and the major semi axis xmath28 of the vortices which is explicitly derived for the interaction of two rotors in section inter for instance in eq ri as it can be seen from eq shape the viscous contributions causes the broadening of the shape of a vortex since this effect is more pronounced for smaller vortex structures thus larger values of xmath74 in evolc an appropriate forcing mechanism has to counteract this effect and provide an energy input at small scales the forcing mechanism we want to introduce consists in forcing the semi axes of each elliptical vortex and thus the whole shape of this vortex back to a fixed shape xmath75 it will be seen in section modelsection that the influence of this kind of forcing makes the two like signed point vortices of our rotor model to behave as if they were connected by an over damped spring the described forcing mechanism can now be introduced in the following way xmath76 nonumber labelmodelg3 suml gammal silbf xibf xl cici silbf xibf xltendaligned such type of forcing may be obtained from the vorticity equation vorticity by just adding a linear damping term xmath77 as well as the forcing term xmath78 xmath79 nonumber endaligned where the centers xmath80 as well as the shapes xmath81 
are close to the centers and the shapes of the elliptical vortices the first contribution in eq force leads to a modulation of the circulation the second term describes a shift of the rotor center and the third one corresponds to a modification of the width of the gaussian vortex shape that forces the elliptical vortex back to a certain shape xmath75 the stretching of the semi axes of the elliptical vortex due to viscous broadening represented by the first term on the right hand side in eq modelg3 is thus counteracted by the second term trying to contract the shape of the vortex back to xmath75 a striking analogy to this forcing mechanism can be found in the explanation of the magneto rotational instability xcite thereby two elements of an electrically conduct ing fluid that undergo a rotation around a fixed center are supposed to be connected by an elastic spring repre senting the magnetic field as a consequencethe angular momentum of the system is not a conserved quantity anymore and the fluid motion becomes unstable although the introduced forcing mechanism is an ad hoc forcing it emerges in a physically plausible way from the basic equations of the elliptical model evolc and xi furthermore it should be mentioned that the system of equations modelg1 can be obtained from the instanton equations of two dimensional turbulence by means of a variational ansatz with gaussian elliptical vortices xcite as we have seen in section models about the interaction between two point vortices with equal circulation compared to the interaction between two elliptical vortices with equal circulation the former model fails to describe a relative motion xmath82 in the direction of xmath73 the thinning mechanism mentioned in xciteis thus clearly neither captured by onsager s point vortex model nor by a gaussian distribution of the vorticity in analogy to xcite our vortex model is based on the observation that the point vortex couple considered in section models under i generates a far field that is similar to that of one elliptical vortex with circulation xmath83 we therefore consider point vortex couples with equal circulation xmath84 at the positions xmath85 and xmath86 as indicated in fig vector the center of this object that we want to term a rotor is then given by xmath87 in order to model a forcing and viscous damping mechanism similar to that mentioned in section forcing the two point vortices in a rotor are supposed to be glued together by an inelastic spring such that each rotor possesses an additional degree of freedom and that the size of a single rotor relaxes with relaxation time xmath88 to xmath89 our model then reads xmath90 nonumber dot bf yi fracgamma2 d0bf yibf xi bf ei gammai bf ubfyibf xi nonumber sumj gammaj bf ubf yibf yj bf ubf yibf xjendaligned where we have defined the unit vector xmath91 and the velocity field xmath92 is the velocity field of a point vortex centered at the origin xmath93 the first two terms on the right hand side of equation model describe the interaction within one rotor whereas the last two terms describe the interaction with the other rotors for vortices moving inside a closed regime the velocity field has to be changed based on the introduction of mirror vortices xcite it is important to stress that the above system is not a hamiltonian system anymore due to the inelastic coupling which mimics an energy input to the system on a scale xmath89 furthermore by the additional degree of freedom the rotor is sensitive with respect to a shear velocity field which can be 
seen from the multipole expansion of the relative coordinate xmath94 with respect to the leading terms in xmath95 derived in the appendix app xmath96 the influence of the forcing can be seen from the first term if a rotor is subjected to shear the spring between the point vortices in a rotor pulls back and the rotor relaxes to the size xmath89 the shear velocity in the last term is thereby generated by the other rotors in a similar way the multipole expansion of the center coordinate of the rotor in appendix app leads to the evolution equation xmath97 bf ubf rij the evolution equation is identical to equation xi provided that the matrix xmath98 can be written as xmath99 which corresponds to an infinitely thin elliptical vortex oriented in xmath100direction the relative distance xmath100 can thus be considered as an elliptical deformation of the velocity field that depends on the shear velocity field induced by the remaining vortices and the effect of the overdamped spring furthermore we again want to emphasize that the last term in eq locr induces relative motions between the rotors as we have seen in section models the usual point vortex dynamics solely represented by the first term on the right hand side of equation locr is thus extended to a dynamical system that is sensitive to the effect of vortex thinning we have numerically solved the dynamical system model in a square periodic domain xmath101 the temporal evolution of 200 rotors with an equal number of positive and negative circulations starting from a random initial condition exhibits the formation of a large scale vortical structure via the formation of rotor clusters a typical time series is exhibited in fig unequal for the parameter values xmath102 xmath103 xmath104 xmath105 xmath101 the boxes have been continued periodically with up to 5 layers of neighboring boxes which guarantees a sufficient degree of homogeneity the temporal evolution of the system can be quantified by the introduction of a characteristic time scale of the system which is given as the period that a rotor possesses at a fixed distance xmath89 and is in the following termed as one rotor turnover time xmath106 which follows from equation phi 1 as it can be seen from fig unequal the clustering of like signed rotors already occurs within the first 100 rotor turnover times which means that the separation of the rotors takes place on a relatively short time scale the temporal evolution of 200 rotors with identical circulations starting from random initial positions of the rotors is exhibited in fig lattice a fluctuating lattice of rotor clusters appears and after approximately 1500 rotor turnover times the system forms a monopole which attracts the remaining rotors we have calculated the kinetic energy spectra of the rotor system with xmath107 at different times in fig spec1 starting from 20 different initial configurations of the rotors we let the systems evolve in time and performed the ensemble average at a specific time xmath108 thereby the spectrum is calculated from the velocity field in eq biot that has been interpolated on a grid and then transformed into fourier space initially the rotors possess a clear point vortex spectrum following a power law xmath109 only at high values of xmath110 deviations due to the singular structure of the vorticity and corresponding discontinuities in the velocity field manifest themselves in an increase of xmath111 this effect can be observed in the following spectra too however after a few xmath112 rotor turnover times as 
the rotor clustering sets in a more universal energy spectrum can be observed due to an energy flux from smaller to larger scales the spectra begin to steepen for smaller xmath110values revealing a spectrum that is close to the predicted xmath113 as it can be seen from the compensated spectra in fig spec1 this slope remains constant for nearly xmath114 and an energy flux into the large scales takes place this is also in agreement with the time averaged spectral energy flux xmath115 depicted in fig flux the inlet plot in fig flux corresponds to the kinetic energy transfer rate xmath116 which is related to xmath115 according to xcite xmath117 it is obvious that energy accumulates at small xmath110values this is not surprising since the rotor model only provides an energy input on small scales and it will be a task for the future to extend the model in order to achieve a damping at small values of xmath110 and thus to extract energy at the integral scale we now turn to the determination of the kolmogorov constant of the energy spectrum from the binary rotor system xmath107 the spectrum as it was predicted by kraichnan xcite reads xmath118 where xmath119 is the energy dissipation rate in the following xmath119is determined from the time dependence of the total kinetic energy that shows up to be linear in time within xmath120 for the sake of completeness we have provided the corresponding plot in fig energy in the appendix the slope of the fitted line can thus be interpreted as the rate of energy input into the system and we obtain a value of xmath121 in order to make an estimate for xmath122 we take an average of the compensated spectra in fig spec1 of times between xmath123 and xmath124 which yields xmath125 the kolmogorov constant xmath126 of the rotor system for times t between xmath123 and xmath127 thus lies in the range xmath0 the high inaccuracy of our estimate is due to the estimation of xmath122 reported values from direct numerical simulations xcite and experiments xcite lie within the range from 58 to 70 the kolmogorov constant of the rotor system thus lies on the lower end of that range in comparison to the point vortex model of siggia and aref xcite who report a kolmogorov constant of xmath128 which is twice the accepted value the rotor model thus seems to provide an efficient mechanism for the energy transfer upscale due to the effect of vortex thinning another important way to determine the distribution and the occuring structures in the rotor modelwill be discussed in the following in order to quantify the emergence of the rotor clusters in fig unequal and lattice we make use of the radial distribution function xmath129 which can be considered as the probability of finding a like signed rotor at a distance xmath73 away from a reference rotor for further references see for instance xcite the radial distribution function is therefore given as xmath130 where xmath131 and the prime indicates that summation over xmath132 is left out the averaging is performed in such a way that the number of like signed rotors populating a concentric segment of radius xmath133 at a given radius r is divided by its area in the following the radial distribution functionis assumed to be isotropic so that xmath134 for a disordered state one expects the radial distribution function to be equal to 1 for every xmath135 as the formation of the rotor clusters sets in one should observe an increase of xmath136 for small xmath137 since the probability of finding a like signed rotor in the neighborhood of a 
reference rotor increases the radial distribution functions for the two time series are plotted in fig radialdis and one clearly observes an increase of xmath136 at small xmath137 in order to get smooth curves xmath136 was calculated in such a way that it shows no discontinuities for xmath138 due to a minimum distance between neighboring rotors the radial distribution function can thus be used as a qualitative measure for the formation of the clusters and their typical sizes furthermore the radial distribution function is related to the structure factor xmath139 in a way that xmath140 where xmath141 is the bessel function of order zero the structure factor xmath142 can thus be calculated via the hankel transform of xmath143 provided that the radial distribution function is isotropic the structure factors for the two system are plotted in fig structure for the case of the mixed system of fig unequal one observes an increase of xmath142 over time whether this increase is governed by a power law for intermediate xmath33 has to be evaluated within further simulations of the model equations model furthermore eq structurefactor is of great importance for the investigation of the rotor model since it relates macroscopic quantities on the left hand side to microscopic quantites such as the radial distribution function it is thus a good starting point for the interpretation of the fluctuations of the rotor clusters in the realm of phase transitions the growth rate of the rotor clusters can be determined from the time dependence of the structure factor the growth of the largest structures of the system is given by xmath144 in fig temp the temporal evolution of xmath144 is plotted for the two systems the fluctuating rotor lattice below exhibits a pronounced growth rate after xmath145 whereas the growth rate of the mixed system above already increases for xmath146 for comparison two power laws xmath147 and xmath148were plotted in the figures the growth rate of our rotor clusters can thus be considered as relatively strong compared to typical growth rates from pattern formation for instance compared to the growth rate of droplets in the cahn hilliard equation where xmath149 according to slyozov lifshitz theory xcite the fact that the rotor vortex system exhibits a pronounced inverse cascade already for moderate numbers of rotors 200 rotors have been used for the figures on a small time scale allows us to investigate the inverse cascade using methods of nonlinear dynamics although usual point vortex models such as xcite have been known for a long time to possess inverse energy cascades the present model incorporates the aspect of vortex thinning due to a possible change of the ellipticity of the rotor in much the same way as identified in the experiments of chen et al xcite hence it is a minimal dynamical model containing the mechanisms of the inverse cascade in the followingwe shall discuss the origin of the formation of clusters of rotors with like signed circulations in the following we consider the configuration of two rotors with circulations xmath84 and xmath7 depicted in fig vector which can be considered as the interaction of two infinitely thin elliptical vortices in the same manner as iii from section models it is straightforward to show that the center of vorticity xmath150 is a conserved quantity the distance vector xmath151 between the two rotors obeys the evolution equation xmath152 nonumber frac18left 2 fracbf rbf r4 bf rj2 4 fracbf rjbf r4bf rjcdot bf r8 fracbf rbf r6 bf rjcdot bf r2 
rightendaligned which follows from equation locr in calculating the corresponding velocity field gradients described in the appendix app for the following it is convenient to represent the unit vectors according to xmath153 as well as xmath154 which yields xmath155bf eicdot bf erfrac12 sin2 varphii varphir we obtain the equation for the relative distance xmath156endaligned the evolution equation for the relative coordinate of a rotor reads xmath157endaligned which follows from equation dipol1 and the calculation of the velocity field gradients performed in the appendix app we have to determine the quantities xmath158 xmath159 which are determined by the evolution equations xmath160 we can solve iteratively for small deviations of xmath161 from xmath89 xmath162 a similar treatment applies to xmath163 splitting the rotation into its fast xmath164 andslow varying parts xmath165 ie xmath166 we obtain after a partial integration xmath1671 nonumber times e2itilde varphiittilde varphirtdottilde varphiitdot tilde varphirt endaligned in order to proceed with the adiabatic approximation we neglect the second term in eq adiabatic since it contains time derivatives of the slowly varying parts of the rotations assuming that the damping constant xmath168 is large compared to the rotation frequency of the rotor we obtain xmath169 to lowest order in xmath170 we thus obtain xmath171 nonumber rj2 d0 2left1 fracgammaipi gamma r2 sin2 varphijvarphirrightendaligned here the last terms on the right hand side arise due to the change of the size of the rotors connected with a change of the far field induced by the mutually generated shear it thus mimics the mechanism of vortex thinning identified in xcite the relative motion of the rotors obeys the evolution equation xmath172 endaligned we now average the evolution equation with respect to the rotations of the vectors xmath173 and xmath174 taking into account that the averages xmath175 vanish furthermore the averages xmath176 are positive as a consequence the relative distance behaves according to xmath177 two rotors approach each other except for xmath178 it is important to stress that this attractive relative motion arises only if we include the irreversible effect of the strain induced stretching of the rotors furthermore the symmetry breaking of xmath179 in equation r can be considered as an important feature of the rotor model in comparison to the point vortex model which conserves this symmetry we have presented a generalized point vortex model a rotor model exhibiting an inverse cascade based on clustering of rotors we have discussed how this rotor model can be derived from the vorticity equation by an expansion of the vorticity field into a set of elliptical vortices at locations xmath180 and shapes xmath181 an important point has been the inclusion of a forcing term which prevents the elliptical far field of the rotors from diffusing away the added forcing term breaks the symmetry xmath182 xmath183 this symmetry breaking lies at the origin of cluster formation and the inverse cascade as can be seen from the two rotor interaction inducing in average a relative motion proportional to xmath184 the numerical simulations of the model equations model reveal the formation of rotor clusters on a short time scale in addition the calculated energy spectra and energy fluxes give strong evidence for the important role of vortex thinning during the cascade process in two dimensional turbulence the presented rotor model can be investigated by applying methods from 
dynamical systems theory like the evaluation of finite time ljapunov exponents and ljapunov vectors these and further dynamical aspects are the basis for future work and will be covered in a following paper the model system modelg1 may also be studied as a stochastic system by considering the velocity xmath185 to be a white noise force the corresponding fokker planck equation allows one to draw analogies with quantum mechanical many body problems furthermore we emphasize that a continuum version of the model equations modelg1 leads to a subgrid model exhibiting analogies with the work of eyink xcite it will be a task for the future to investigate the cluster formation from a statistical point of view based on the formulation of kinetic equations along the lines as has been performed for fully developed turbulence xcite and rayleigh bnard convection xcite in this respectwe hope to find a relation to the kinetic equation for the two point vorticity statistics recently derived on the basis of the monin lundgren novikov hierarchy taking conditional averages from direct numerical simulations xcite is very grateful for discussions with michael wilczek and frank jenko about the organization of this paper sadly rudolf friedrich 16th august 2012 unexpectedly passed away during this work he was as much an inspiring physicist as well as a caring father for the decomposition of the vorticity field into a discrete set of vortices with arbitrary shapes in section dec we have made use of the vorticity equation in fourier space vorticity this equation can be derived from the vorticity equation in real space omega in defining the vorticity and the velocity field in fourier space according to xmath186 and xmath187 the nonlinearity in eq omega can thus be expressed in terms of the convolution between xmath188 and xmath189 which yields xmath190 furthermore the velocity field in fourier space can be calculated from biot savart s law in eq biot according to xmath191 a substitution xmath192 in the last integral yields xmath193 where we have defined the inverse laplacian in xmath22space as xmath194 inserting the fourier space representation of xmath195 into om yields the evolution equation vorticity used for the decomposition of the vorticity field in section dec in this part we calculate the multipole expansion of a rotor defined by xmath85 and xmath86 in fig vector to this end we introduce relative and center coordinates according to xmath196 as well as the vector xmath197 in using equation model we obtain the evolution equation for the relative coordinate xmath198 a taylor expansion of the curled bracket yields xmath199 where we have only retained the leading terms in xmath95 the evolution equation for the center coordinate reads xmath200 again a taylor expansion yields xmath201 2 bf ri bf rj cdot nablabf rij 2 right bf u bf rij bigg nonumber 2 sumj gammaj bf u bf rij frac14 sumj gammaj bf ri cdot nablabf rij2 bf rj cdot nablabf rij2 bf u bf rijendaligned the gradients of the velocity fields are now calculated according to xmath202 which is needed in equation dipol1 and xmath203 now this is the counterpart of equation locr
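As a small illustration of the statement above that axisymmetric (point or Gaussian) vortices induce no relative motion, the following sketch integrates two like-signed point vortices with the standard two-dimensional law u(r) = Gamma/(2 pi) e_z x r/|r|^2 quoted in the text and checks that their separation stays constant. The circulation, separation and step size are illustrative values, not parameters taken from the paper.

```python
# Minimal sketch: two like-signed point vortices co-rotate about their midpoint
# with constant separation, i.e. the bare point-vortex model produces no
# relative (radial) motion of the kind needed for vortex thinning.
import numpy as np

Gamma = 1.0      # circulation of each vortex (illustrative)
d0    = 1.0      # initial separation (illustrative)
dt    = 1e-3
steps = 20000

def u_point(r, gamma):
    """Velocity induced at displacement r by a point vortex of circulation gamma."""
    return gamma / (2.0 * np.pi) * np.array([-r[1], r[0]]) / (r @ r)

x = np.array([[0.0, 0.0], [d0, 0.0]])   # positions of the two vortices

sep = []
for _ in range(steps):
    v = np.zeros_like(x)
    for i in range(2):
        for j in range(2):
            if i != j:
                v[i] += u_point(x[i] - x[j], Gamma)
    x = x + dt * v                       # explicit Euler step; adequate for a demo
    sep.append(np.linalg.norm(x[0] - x[1]))

print("separation drift:", max(sep) - min(sep))   # ~0 up to integration error
```

For two equal vortices a distance d0 apart, the standard result is co-rotation about the midpoint with angular velocity Gamma/(pi d0^2); it is only the elliptical-shape (thinning) corrections described above that turn this neutral rotation into an attractive relative motion between rotors.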
we generalize kirchhoff s point vortex model of two dimensional fluid motion to a rotor model which exhibits an inverse cascade by the formation of rotor clusters a rotor is composed of two vortices with like signed circulations glued together by an overdamped spring the model is motivated by a treatment of the vorticity equation representing the vorticity field as a superposition of vortices with elliptic gaussian shapes of variable widths augmented by a suitable forcing mechanism the rotor model opens up the way to discuss the energy transport in the inverse cascade on the basis of dynamical systems theory
introduction
decomposition of the vorticity field into vortices with arbitrary shapes
approximation via vortices with elliptical shapes
motion of vortices with equal circulation within the different models
the forcing mechanism
formulation of the rotor model
numerical results
interaction of two rotors
conclusions
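The cluster diagnostics used in the numerical-results section above, the radial distribution function of like-signed rotors and the structure factor obtained from it, can be sketched as follows. Because the paper's own normalisation is hidden behind placeholders, the standard two-dimensional relation S(k) = 1 + 2 pi rho Int (g(r) - 1) J0(k r) r dr is assumed; the function names, box size and random test configuration are illustrative, and in practice one would pass the like-signed rotor positions produced by the rotor-model simulation.

```python
# Sketch (assumed normalisation) of g(r) and S(k) for a 2D periodic box.
import numpy as np
from scipy.special import j0

def radial_distribution(pos, box, nbins=100):
    """g(r) for points in a square periodic box of side `box` (minimum image)."""
    n = len(pos)
    rho = n / box**2
    rmax = box / 2.0
    edges = np.linspace(0.0, rmax, nbins + 1)
    counts = np.zeros(nbins)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        counts += np.histogram(r[(r > 0) & (r < rmax)], bins=edges)[0]
    shell_area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    g = counts / (n * rho * shell_area)       # averaged over reference rotors
    return 0.5 * (edges[1:] + edges[:-1]), g

def structure_factor(r, g, rho, k):
    """Hankel transform of g(r) - 1 by simple quadrature (assumed 2D relation)."""
    dr = r[1] - r[0]
    return np.array([1.0 + 2.0 * np.pi * rho * np.sum((g - 1.0) * j0(kk * r) * r) * dr
                     for kk in k])

# usage on a disordered test configuration, for which g(r) ~ 1 and S(k) ~ 1:
rng = np.random.default_rng(1)
box, n_rotors = 2.0 * np.pi, 200
pos = rng.uniform(0.0, box, size=(n_rotors, 2))
r, g = radial_distribution(pos, box)
S = structure_factor(r, g, n_rotors / box**2, np.linspace(0.5, 20.0, 40))
```

Clustering of like-signed rotors then shows up as g(r) > 1 at small r and as growth of S(k) at small k, which is how the growth of the largest structures is tracked in the text.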
intermediate mass stars ims comprise objects with zams masses between 08 and 8 corresponding to spectral types between g2 and b2 the lower mass limit is the minimum value required for double shell h and he fusion to occur resulting in thermal pulsations during the asymptotic giant branch agb phase and eventually planetary nebula formation above the upper mass limit stars are capable of additional core burning stages and it is generally assumed that these stars become supernovae a salpeter 1955 imf can be used to show that ims represent about 4 of all stars above 008 but this may be a lower limit if the imf is flat at low stellar masses scalo 1998 ims evolution is an interesting and complex subject and the literature is extensive a good complete generally accessible review of the subject is given by iben 1995 shorter reviews focussing on the agb stage can be found in charbonnel 2002 and lattanzio 2002 i will simply summarize here intermediate mass stars spend about 10 20 of their nuclear lives in post main sequence stages schaller et al fresh off the main sequence a star s core is replete with h burning products such as 4 14 the shrinking core s temperature rises a h burning shell forms outward from the core and shortly afterwards the base of the outer convective envelope moves inward and encounters these h burning products which are then mixed outward into the envelope during what is called the first dredge up as a result envelope levels of 4 14 and rise externally the star is observed to be a red giant as the shrinking he core ignites the star enters a relatively stable and quiescent time during which it synthesizes and once core he is exhausted the star enters the agb phase characterized by a co core along with shells of h and he fusing material above it early in this phase for masses in excess of 4 second dredge up occurs during which the base of the convective envelope again extends inward this time well into the intershell region and dredges up h burning products increasing the envelope inventory of 4 14 and as before later in the agb phase however the he shell becomes unstable to runaway fusion reactions due to its thin nature and the extreme temperature sensitivity of he burning the resulting he shell flash drives an intershell convective pocket which mixes fresh outward toward the h shell but as the intershell expands h shell burning is momentarily quenched and once again the outer convective envelope extends down into the intershell region and dredges up the fresh into the envelope an event called third dredge up subsequently the intershell region contracts the h shell reignites and the cycle repeats during a succession of thermal pulses observational consequences of thermal pulsing and third dredge up include the formation of carbon stars mira variables and barium stars now in ims more massive than about 3 4 the base of the convective envelope may reach temperatures which are high enough xmath360 million k to cause further h burning via the cn cycle during third dredge up as a result substantial amounts of are converted to 14 in a process referred to as hot bottom burning renzini voli 1981 hbb hbb not only produces large amounts of 14 but also results in additional neutron production through the xmath4cxmath0nxmath5o reaction where extra mixing is required to produce the necessary xmath4c these additional neutrons spawn the production of s process elements which are often observed in the atmospheres of agb stars note that carbon star formation is precluded by hbb in those stars 
where it occurs other nuclei that are synthesized during thermal pulsing and hbb include xmath6ne xmath7 mg xmath8al xmath9na and xmath10li karakas lattanzio 2003 the thermal pulsing phase ends when the star loses most of its outer envelope through winds and planetary nebula pn formation and thus the main fuel source for the h shell and for the star is removed and evolution is all but over note that the pn contains much of the new material synthesized and dredged up into the atmosphere of the progenitor star during its evolution as this material becomes heated by photoionization it produces numerous emission lines whose strengths can be measured and used to infer physical and chemical properties of the nebula models of intermediate mass star evolution are typically synthetic in nature a coarse grid of models in which values for variable quantities are computed directly from fundamental physics is first produced then interpolation formulas are inferred from this grid which are subsequently used in a much larger run of models thus reducing the computation time requirements the models described below are of this type the major parameters which serve as input for ims models include stellar mass and metallicity the value of the mixing length parameter the minimum core mass required for hbb the formulation for mass loss and third dredge up efficiency the first substantial study of ims surface abundances using theoretical models was carried out by iben truran 1978 whose calculations accounted for three dredge up stages including thermal pulsing renzini voli 1981 rv introduced hot bottom burning and the reimers 1975 mass loss rate to their models and explicitly predicted pn composition and total stellar yields van den hoek groenewegen 1997 hg introduced a metallicity dependence heretofore ignored into their evolutionary algorithms along with an adjustment upwards in the mass loss rate the latter being a change driven by constraints imposed by the carbon star luminosity function see below finally boothroyd sackmann 1999 demonstrated effects of cool bottom processing on the ratio marigo bressan chiosi 1996 buell 1997 and marigo 2001 m01 employed the mass loss formalism of vassiliadis wood 1993 which links the mass loss rate to the star s pulsation period to predict yields of important cno isotopes and langer et al 1999 and meynet maeder 2002 studied the effects of stellar rotation on cno yields table t1 provides a representative sample of yield calculations carried out over the past two decades to the right of the author columnare columns which indicate the lower and upper limits of the mass and metallicity ranges considered an indication of whether hot bottom burning or cold bottom processing was included in the calculations yes or no the type of mass loss used r reimers 1975 vw vassiliadis wood 1993 an indication of whether the calculations included stellar rotation yes or no and some important nuclei whose abundances were followed during the calculations values for xmath12 in table t2 are in turn plotted against metallicity in figure p the figure legend identifies the correspondence between line type and yield source where the abbreviations are the same as those defined in the footnote to table t2 note that massive star integrated yields are indicated with bold lines while thin lines signify ims integrated yields for 4 note that the three ims yield sets predict similar results except at low metallicity where the m01 yields are higher according to m01 this difference is presumably due to the 
earlier activation and larger efficiency of third dredge up in her models it s also clear that ims contribute to the cosmic buildup of 4 at roughtly the 20 30 level the rv yields for tend to be less than those of hg while those of m01 are greater due to differences in onset time and average efficiency of third dredge up globally the role of ims in production is therefore ambiguous because it depends upon which set of massive star yields one uses to compare with the ims yields for example ims yields are comparable to the massive star yields of ww yet significantly less than those of portinari et al finally there is a significant difference between the three yields sets where 14 is concerned the lifetimes of the thermal pulses in the stars at the upper end of the mass range in m01 s calculations are largely responsible for her 14 yields being significantly less than the others on the other hand rv s lower mass loss rate lengthens a star s lifetime on the late agb and results in more 14 production when compared with massive star yields rv and hg predict that ims will produce several times more 14 particularly at lower z when compared to either the ww or p yields on the other hand m01 s models predict less 14 universally speaking then ims yield predictions indicate that these stars contribute significantly to 14 production moderately to productions and hardly at all to 4 production remember though that these conclusions are heavily based upon model predictions the strength of these conclusions is only as strong as the models are realistic the respective roles of ims and massive stars in galactic chemical evolution can be further assessed by confronting observations of abundance gradients and element ratio plots with chemical evolution models which employ the various yields to make their predictions because there is a time delay of at least 30 myr between birth and release of products by ims these roles may be especially noticeable in young systems whose ages are roughly comparable to such delay times or in systems which experienced a burst less that 30 myr ago henry edmunds kppen 2000 hek explored the c o vs o h and n o vs o h domains in great detail using both analytical and numerical models to test the general trends observed in a large and diverse sample of galactic and extragalactic h ii regions located in numerous spiral and dwarf irregular galaxies using the ims yields of hg and the massive star yields of maeder 1992 they were able to explain the broad trends in the data and in the end they concluded that while massive stars produce nearly all of the in the universe ims produce nearly all of the 14 they also illustrated the impact of the star formation rate on the age metallicity relation and the behavior of the n o value as metallicity increases in low metallicity systems in this conference moll gaviln buell 2003 report on their chemical evolution models which use the buell 1997 ims yields along with the ww massive star yields where the former employ the mass loss rate scheme of vassiliadis wood 1993 their model results confirm those of hek in terms of the star formation rate and the age metallicity relation recently pilyugin et al 2003 reexamined the issue of the origin of nitrogen and found that presently the stellar mass range responsible for this element can not be clearly identified because of limitations in the available data chiappini et al 2003 have explored the cno question using chemical evolution models to study the distribution of elements in the milky way disk as well as 
the disk of m101 and dwarf irregular galaxies using the hg and ww yields like hek they conclude that 14 is largely produced by ims however they find that by assuming that the ims mass loss rate varies directly with metallicity production in these stars is relatively enhanced at low z in the end they conclude that ims not massive stars control the universal evolution of in disagreement with hek figure chiappini is similar to fig 10 in their paper andis shown here to graphically illustrate the effects of ims on the chemical evolution of and 14 according to their models each panel shows logarithmic abundance as a function of galactocentric distance in kpc for the milky way disk besides the data points two model results are shown in each panel the solid line in each case corresponds to the best fit model in their paper while the dashed line is for the same model but with the ims contribution to nucleosynthesis turned off as can be seen ims make roughly a 05 dex and 1 dex difference in the case of c and n respectively ie their effects are sizeable finally the question of ims production of nitrogen has become entangled in the debate over the interpretation of the apparent bimodal distribution of damped lymanxmath0 systems dlas in the nxmath0xmath0h plane prochaska et al 2002 centurin et al 2003 represents elements such as o mg si and s whose abundances are assumed to scale in lockstep most dlas fall in the region of the primary plateau located at a nxmath0 value of xmath3 07 and between metallicities of 15 and 20 on the xmath0h axis however a few objects are positioned noticeably below the plateau by roughly 08 dex in nxmath0 although still within the same metallicity range as the plateau objects the prochaska group proposes that these low n objects ln dlas correspond to systems characterized by a top heavy initial mass function with a paucity of ims or in the same spirit a population of massive stars truncated below some threshold mass either possibility works through suppressing the ims contribution to nitrogen production by reducing the proportion of these stars in a system s stellar population the centurin group on the other hand suggests that ln dlas are less evolved than the plateau objects ie star formation occurred within them less than 30 myr ago so the ln dlas are momentarily pausing at the low n region until their slowly evolving ims begin to release their nitrogen the latter picture while not needing to invoke a non standard imf an action which causes great discomfort among astronomers does require that the time to evolve from the low n ledge to the plateau region be very quick otherwise their idea is inconsistent with the observed absence of a continuous trail of objects connecting these points this problem is bound to be solved when the number of dlas with measured nitrogen abundances increases but it nevertheless illustrates an important role that ims play in questions involving early chemical evolution in the universe intermediate mass stars play an important role in the chemical evolution of 14 and xmath14li as well as s process isotopes stellar models have gained in sophistication over the past two decades so that currently they include effects of three dredge up stages thermal pulsing and hot bottom burning on the agb metallicity and mass loss by winds and sudden ejection generally speaking yield predictions from stellar evolution modelsindicate that yields increase as metallicity declines as the mass loss rate is reduced and when rotation is included furthermore observational 
evidence supports the claim that the lower mass limit for hot bottom burning is between 3 and 4 solar masses integration of yields over a salpeter initial mass function shows clearly that ims have little impact on the evolution of 4 while at the same time playing a dominant role in the cosmic buildup of 14 the case of carbon is a bit more confused the issue of 14 production is particularly important in the current discussion of the distribution of damped lymanxmath0 systems in the nxmath0xmath0h plane finally what i believe is needed are grids of models which attempt to treat ims and massive stars in a consistent and seamless manner the role of each stellar mass range would be easier to judge if yield sets of separate origins did not have to be patched together in chemical evolution models otherwise it is not clear to what extent the various assumptions which are adopted by stellar evolution theorists impact and therefore confuse the analyses i d like to thank the organizing committee for inviting me to write this review and to present these ideas at the conference i also want to thank corinne charbonnel georges meynet francesca matteucci cristina chiappini jason prochaska john cowan and paulo molaro for clarifying my understanding on several topics addressed in this review finally i am grateful to the nsf for supporting my work under grant ast 98-19123 charbonnel, c. 2003, carnegie observatories astrophysics series, vol. 4: origin and evolution of the elements, ed. a. mcwilliam and m. rauch, pasadena: carnegie observatories, httpwwwociweduociwsymposiaseriessymposium4proceedingshtml karakas, a. i., and lattanzio, j. c. 2003, carnegie observatories astrophysics series, vol. 4: origin and evolution of the elements, ed. a. mcwilliam and m. rauch, pasadena: carnegie observatories, httpwwwociweduociwsymposiaseriessymposium4proceedingshtml mollá, m., gavilán, m., and buell, j. f. 2003, carnegie observatories astrophysics series, vol. 4: origin and evolution of the elements, ed. a. mcwilliam and m. rauch, pasadena: carnegie observatories, httpwwwociweduociwsymposiaseriessymposium4proceedingshtml
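as a concrete illustration of the imf weighted comparison made above, the short python sketch below integrates a yield set over a salpeter initial mass function and compares the 0.8 to 8 solar mass intermediate mass range with the massive star range. the yield function, the mass limits and the slope used here are illustrative assumptions only and do not correspond to any of the published grids discussed in this review.

import numpy as np
from scipy.integrate import quad

ALPHA, M_LO, M_HI = 2.35, 0.08, 100.0                          # assumed salpeter slope and mass limits
IMF_NORM = quad(lambda m: m * m ** (-ALPHA), M_LO, M_HI)[0]    # normalise the imf by mass

def imf(m):
    """salpeter initial mass function phi(m) ~ m^-2.35, normalised so that int m phi(m) dm = 1"""
    return m ** (-ALPHA) / IMF_NORM

def toy_n14_yield(m):
    """placeholder stellar yield p(m): mass of newly made 14n ejected by a star of initial mass m.
    purely illustrative numbers; a real study would interpolate a tabulated grid in m and z."""
    return 1.0e-3 * m if m < 8.0 else 1.0e-4 * m

def produced(yield_fn, m1, m2):
    """imf weighted production integral over the initial mass range [m1, m2]"""
    return quad(lambda m: yield_fn(m) * imf(m), m1, m2)[0]

ims = produced(toy_n14_yield, 0.8, 8.0)                        # intermediate mass stars
massive = produced(toy_n14_yield, 8.0, M_HI)                   # massive stars
print("ims fraction of total 14n production:", ims / (ims + massive))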
intermediate mass stars occupy the mass range between 0.8 and 8 solar masses in this contribution evolutionary models of these stars from numerous sources are compared in terms of their input physics and predicted yields in particular the results of renzini and voli, van den hoek and groenewegen, and marigo are discussed generally speaking it is shown that yields of 4 and 14 decrease with increasing metallicity reduced mass loss rate and increased rotation rate integrated yields and recently published chemical evolution model studies are used to assess the relative importance of intermediate mass and massive stars in terms of their contributions to universal element buildup intermediate mass stars appear to play a major role in the chemical evolution of 14 a modest role in the case of carbon and a small role for 4 furthermore the time delay in their release of nuclear products appears to play an important part in explaining the apparent bimodality in the distribution of damped lymanxmath0 systems in the nxmath0xmath0h plane
element yields of intermediate-mass stars
after carrying out a fourier transform and a multipole decomposition the radial and time parts of the retarded green function for linear fields on a schwarzschild black hole can be written as xmath15 where xmath16 xmath17 is the multipole number xmath18 xmath19 and xmath20 is the wronskian of the two functions xmath21 and xmath22 these functions are linearly independent solutions of the radial ode xmath23right psiell0 where xmath24 and xmath25 the solutions are uniquely determined when xmath26 by the boundary conditions xmath27 as xmath28 and xmath29 as xmath30 the behaviour of the radial potential at infinity leads to a branch cut in the radial solution xmath22 xcite the contour of integration in eqeq green can be deformed in the complexxmath3 plane xcite yielding a contribution from a high frequency arc a series over the residues the qnms and a contribution from the branch cut along the nia xmath31 where the bcms are xmath32 with xmath33 where xmath34 is the discontinuity of xmath22 across the branch cut we present here methods for the analytic calculation of the bcms we calculate xmath21 using the jaff series eq39 xcite the coefficients of this series which we denote by xmath35 satisfy a 3term recurrence relation we calculate xmath22 using the series in eq73 xcite which is in terms of the confluent hypergeometric xmath36function and the coefficients xmath35 this series has seldom been used and one must be aware that in order for xmath22 to satisfy the correct boundary condition we must set xmath37 which itself has a branch cut to find an expression for xmath38 on the nia we exploit this series by combining it with the known behavior of the xmath36function across its branch cut xmath39 where we are taking the principal branch both for xmath40 and for the xmath36function in order to check the convergence of this series we require the behaviour for largexmath41 of the coefficients xmath35 using the birkhoff series as in appb xcite we find the leading order xmath42 we have calculated up to four orders higher in xcite as xmath43 we note that this behaviour corrects leaver s eq46 xcite in the power xmath44 instead of xmath45 the integral test then shows that the series eq leaver liu series for deltagt converges for any xmath46 although convergent the usefulness of eq leaver liu series for deltagt at smallxmath47 is limited since convergence becomes slower as xmath47 approaches 0 while for largexmath47 xmath38 grows and oscillates for fixed xmath48 and xmath49 therefore we complement our analytic method with asymptotic results for small and large xmath8 the smallxmath50 asymptotics are based on an extension of the mst formalism xcite we start with the ansatz xmath51 imposing eqeq radial ode yields a 3term recurrence relation for xmath52 and requiring convergence as xmath53 yields an equation for xmath54 that may readily be solved perturbatively in xmath3 from starting values xmath55 and xmath56 likewise for the coefficients xmath52 taking xmath57we obtain xmath58 a2mu fracell1s2ell2s24ell12ell12ell32omega2oleftomega3right while xmath59 and xmath60 are given by the corresponding terms with xmath61 apparent possible singularities in these coefficients are removable the xmath62 term in eqeq f small nu corresponds to page s eqa9 xcite to obtain higher order aymptoticswe employ the barnes integral representation of the hypergeometric functions xcite which involves a contour in the complex xmath63plane from xmath64 to xmath65 threading between the poles of xmath66 xmath67 and xmath68 as xmath69 
double poles arise at the non negative integers from 0 to xmath70 however we may move the contour to the right of all these ambient double poles picking up polynomials in xmath48 with coefficients readily expanded in powers of xmath3 leaving a regular contour which admits immediate expansion in powers of xmath8 by the method of mst we can also construct xmath22 and hence determine xmath71 and xmath72 for compactness we only give the following smallxmath8 expressions for the case xmath10 cases xmath73 and xmath12 are presented in xcite xmath74 frac 1ell pi 22 ell1 leftfrac2ell1ell2 ell12right2 nu2ell3 leftfrac415ell2 15ell112 ell12ell1 2 ell3 leftln 2 nu hell4 h2 ell gammae rightright left 4 left8 hell2 8 hell3 hell2 2hinfty2 right frac512ell6 2016 ell5 1616 ell4 1472 ell3 1128 ell2 722 ell592 ell12 2 ell12 2 ell32right onu2ell3 where xmath75 is the xmath17th harmonic number of order xmath48 we note that the xmath76 term at second to leading order originates both in xmath71 and in xmath72 in fact both functions possess a xmath76 already at next to leading order for smallxmath47 but they cancel each other out in xmath77 similarly the coefficient of a potential term in xmath77 of order xmath78 is actually zero let us now investigate the branch cut contribution to the black hole response to an initial perturbation given by the field xmath79 and its time derivative xmath80 at xmath81 xmath82 we obtain the asymptotics of the response for late times xmath83 using eqs eq deltag in terms of deltag and eq f small nu q w2 s0 gral l we note the following features the orders xmath84 and xmath85 in the bcms xmath38 yield tail terms behaving like xmath86 and xmath87 respectively we have thus generalized leaver s eq 56 xcite to finite values of xmath48 furthermore eq 56 xcite is an expression containing the leading orders from xmath79 and from xmath80 however the next to leading order from xmath80 will be of the same order as the leading order from xmath79 in our approach above we consistently give a series in smallxmath47 thus obtaining the correct next to leading order term for largexmath88 in the power law tail importantly we also obtain the following two orders in the perturbation response xmath89 and xmath90 we note the interesting xmath89 behaviour which is due to the xmath91 term in eq q w2 s0 gral l to the best of our knowledge this is the first time in the literature that any of the above features has been obtained the logarithmic behaviour is not completely surprising given the calculations in xcite however one may be led to a wrong logarithmic behaviour xcite if the calculations are not performed in detail in order to exemplify our results we give the explicit asymptotic behaviour in the case xmath10 and xmath92 and initial data xmath93 and xmath94 with xmath95 the perturbation response due to the branch cut at xmath96 at late times is given by xmath97 t7 oleftt7right figure fig perturbation response shows that these asymptotics are in excellent agreement with a numerical solution of the wave equation for the gaussian described above [figure perturbation response: eq pertasymp compared to the late time asymptotics solid red numerical solution dashed black eq pertasymp lower curves numerical solution minus the first green first 2 blue and first 4 cyan terms in eq pertasymp]
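as a rough illustration of the kind of numerical comparison quoted above, the python sketch below evolves a gaussian pulse on a schwarzschild background with the standard regge wheeler potential and estimates the local power law index of the late time signal at a fixed observer. the grid, the observer location, the time symmetric initial data and the example values of the mass, spin and multipole are assumptions made only for this sketch, and the second order finite difference scheme is a generic one rather than the method used in the text.

import numpy as np

M, s, ell = 1.0, 1, 1                       # assumed example values: black hole mass, spin, multipole

def r_of_x(x, iters=60):
    """invert the tortoise coordinate x = r + 2M ln(r/(2M) - 1) by newton iteration on u = ln(r/(2M) - 1)"""
    u = np.where(x > 2.0 * M,
                 np.log(np.maximum(x, 2.1 * M) / (2.0 * M)),
                 (x - 2.0 * M) / (2.0 * M))
    for _ in range(iters):
        f = 2.0 * M * (1.0 + np.exp(u)) + 2.0 * M * u - x
        u = u - f / (2.0 * M * (np.exp(u) + 1.0))
    return 2.0 * M * (1.0 + np.exp(u))

x = np.linspace(-400.0, 400.0, 8001)        # domain large enough that boundary reflections never reach the observer
dx = x[1] - x[0]
r = r_of_x(x)
V = (1.0 - 2.0 * M / r) * (ell * (ell + 1) / r ** 2 + 2.0 * M * (1 - s ** 2) / r ** 3)  # regge wheeler potential

psi = np.exp(-(x - 50.0) ** 2 / (2.0 * 4.0 ** 2))   # gaussian, time symmetric initial data (an assumption)
psi_old = psi.copy()
dt = 0.5 * dx
i_obs = np.argmin(np.abs(x - 20.0))
times, signal = [], []

for n in range(6000):                       # evolve psi_tt = psi_xx - V psi with a leapfrog update
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx ** 2
    psi, psi_old = 2.0 * psi - psi_old + dt ** 2 * (lap - V * psi), psi
    times.append((n + 1) * dt)
    signal.append(psi[i_obs])

t = np.array(times)
a = np.abs(np.array(signal)) + 1e-300
late = t > 200.0                            # after the quasinormal ringing has decayed away
p = -np.gradient(np.log(a[late]), np.log(t[late]))
print("late time local power law index of the tail:", p[-10:].mean())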
at largexmath47 we obtain the asymptotics of xmath98 and of the radial solutions for the different spin cases given explicitly in eq largenu these asymptotics show a divergence in xmath99 when xmath100 they also lead to a divergence in the perturbation response at fixed xmath101 and xmath48 for a non compact gaussian as initial data both types of divergences are expected to cancel out with the other contributions to the green function we have thus provided a complete account of the bcms for all frequencies along the nia the behaviour is illustrated in fig deltag [figure deltag: xmath38 for s l2 and s0l1 as a function of xmath47 for xmath102 and xmath103 a using eq leaver liu series for deltagt dashed green xmath104 continuous blue xmath10 xmath92 dot dashed orange xmath105 note the interesting behaviour near the algebraically special frequency xcite at xmath106 for xmath107 b xmath10 xmath92 for small xmath47 continuous blue using eq leaver liu series for deltagt dashed red using eq q w2 s0 gral l to xmath108 see xcite c xmath10 xmath92 for large xmath47 continuous blue using eq leaver liu series for deltagt dashed red using the asymptotics of eq largenu] we present here an analysis for largexmath1 of the electromagnetic qnms we may find solutions of eq radial ode valid for fixed xmath109 as an expansion in powers of xmath110 as xmath111 xmath112 starting with the two independent solutions xmath113 and xmath114 we may express any higher order solution in terms of the xmath115 order green function as xmath116 fracpsiik1usqrtnu right left 2u12left4d2 2d lambdarightfracpsiik2unu right where xmath117 from this expression it follows that xmath118 where xmath119 in addition for xmath120 xmath121 and xmath122 are both real it follows that along xmath123 up to power law corrections xmath124 equating asymptotic expansions at xmath125 yields xmath126 xmath127 and also serves to determine xmath128 except when xmath129 for xmath130 which do not contribute to the qnm condition by matching the xmath134 to wkb solutions along xmath120 and xmath135 we are able to find largexmath8 asymptotics for xmath22 also we may use the exact monodromy condition xmath136 to obtain largexmath47 asymptotics for xmath21 the asymptotic qnm condition xmath137 in the 4th quadrant then becomes xmath138 it is straightforward to find the qnm frequencies to arbitrary order in xmath1 in terms of the xmath132 by systematically solving eq qnm cond explicitly using the values in eq alpha values we have xmath139 96n52 oleftn3right it is remarkable that the terms in the expansion show the behaviour xmath140 to all orders in fig qnm numeric closed form we compare these asymptotics with the numerical data in xcite in xcite we apply the method used to obtain eq qnm s1 to the cases xmath10 and xmath12 and we obtain the corresponding qnm frequencies up to order xmath141 and find agreement with xcite we are thankful to sam dolan and particularly to barry wardell for helpful discussions we also thank luis lehner and the perimeter institute for theoretical physics for hospitality and financial support m c is supported by an ircset marie curie international mobility fellowship in science engineering and technology ao acknowledges support from science foundation ireland under grant no 10rfp phy2847
linear field perturbations of a black hole are described by the green function of the wave equation that they obey after fourier decomposing the green function its two natural contributions are given by poles quasinormal modes and a largely unexplored branch cut in the complex frequency plane we present new analytic methods for calculating the branch cut on a schwarzschild black hole for arbitrary values of the frequency the branch cut yields a power law tail decay for late times in the response of a black hole to an initial perturbation we determine explicitly the first three orders in the power law and show that the branch cut also yields a new logarithmic behaviour xmath0 for late times before the tail sets in the quasinormal modes dominate the black hole response for electromagnetic perturbations the quasinormal mode frequencies approach the branch cut at large overtone index xmath1 we determine these frequencies up to xmath2 and formally to arbitrary order highly damped quasinormal modes are of particular interest in that they have been linked to quantum properties of black holes the retarded green function for linear field perturbations in black hole spacetimes is of central physical importance in classical and quantum gravity an understanding of the make up of the green function is obtained by performing a fourier transform thus yielding an integration just above the real frequency xmath3 axis in his seminal paper leaver xcite deformed this realxmath3 integration in the case of schwarzschild spacetime into a contour on the complexxmath3 plane he thus unraveled three contributions making up the green function 1 a high frequency arc 2 a series over poles of the green function quasinormal modes qnms and 3 an integral of modes around a branch cut originating at xmath4 and extending down the negative imaginary axis nia which we refer to as branch cut modes bcms the three contributions dominate the black hole response to an initial perturbation at different time regimes the high frequency arc yields a direct contribution which is expected to vanish after a certain finite time xcite the qnm contribution to the green function dominates the black hole response during intermediate times and it has been extensively investigated eg xcite for a review at late times the qnm contribution decays exponentially with a decay rate given by the overtone number xmath5 qnms have also triggered numerous interpretations in different contexts in classical and quantum physics ranging from astrophysical ringdown xcite to hawking radiation xcite the gauge gravity duality xcite for schwarzschild black holes which are asymptotically anti de sitter and xcite for asymptotically flat ones black hole area quantization xcite and structure of spacetime at the shortest length scales xcite the quantum interpretations are given in the highly damped limit ie for large xmath1 the highly damped qnm frequencies in schwarzschild have been calculated up to next to leading order in despite all the efforts the leading order of the real part of the frequencies for electromagnetic perturbations has remained elusive only in xcite they find numerical indications that it goes like xmath6 the contribution from the bcms on the other hand remains largely unexplored the technical difficulties of its analysis mean that most of the studies have been constrained to large radial coordinate as well as small xmath7 along the nia an exception is a largexmath8 asymptotic analysis of the bcms in xcite and near the algebraically special frequency 
in xcite solely for gravitational perturbations the smallxmath8 bcms are known to give rise to a power law tail decay at late times of an initial perturbation xcite in general however there is an appreciable time interval between when the qnm contribution becomes negligible and when the power law tail starts xcite the calculation of the bcms for general values of the frequency ie not in the asymptotically small nor large regimes to the best of our knowledge has only been attempted in xcite where the radial functions were calculated off the nia via a numerical integration of the radial ode eq radial ode followed by extrapolation to the nia and only for the gravitational case in this letter we present the following new results a new analytic method for the calculation of the bcms directly on the nia and valid for any value of xmath8 in particular this method provides analytic access for the first time to the midxmath8 regime a consistent expansion up to xmath9th order for smallxmath8 of the bcms for arbitrary value of the radial coordinate we explicitly derive a new logarithmic behaviour xmath0 at late times a largexmath8 asymptotic analysis of the bcms it shows a formal divergence which is expected to be cancelled out by the other contributions to the green function a new asymptotic analysis for largexmath1 of the electromagnetic qnms the analysis is formally valid up to arbitrary order in xmath1 we explicitly calculate the corresponding frequencies up to xmath2 methods in 13 provide the first full analytic account of the bcms and they are valid for any spin xmath10 scalar xmath11 electromagnetic and xmath12 gravitational of the field perturbation for the qnm calculation we focus on spin1 as this is the least well understood case we note that spin1 perturbations are acquiring increasing importance xcite although it is expected that only the lowest overtones of the qnms are astrophysically relevant we present details in xcite and xcite we take units xmath13 where xmath14 is the mass of the black hole
the green function & branch cut spin-1 quasinormal modes
intensity modulated radiation therapy imrt is usually used for head and neck cancer patients because it delivers highly conformal radiation doses to the target with reduction of toxicity to normal organs as compared with conventional radiation therapy techniques xcite volumetric modulated arc therapy vmat is a novel imrt technique vmat has less mu less treatment time high quality planning and more efficiency than static gantry angle imrt xcite during vmat the linear accelerator linac control system changes the dose rate and the multi leaf collimator mlc positions while gantry is rotating around the patient collimator angle is usually rotated in the plans of vmat to reduce radiation leakage between mlc leaves at a zero angle the leakage between mlc leaves accumulates during the gantry rotation and the summed leakage results in unwanted dose distributions which can not be controlled by optimization at different collimator angles the unwanted doses can be controlled by dose constraints in the optimization procedure so that we can reduce the unwanted doses the optimal collimator angle for vmat planis thus required to be determined there are several factors for consideration in the choice of the collimator angle of the vmat plan among themwe concentrated on the accuracy of the vmat delivery we studied the effect of the collimator angle on the results of dosimetric verifications of the vmat plan for nasopharyngeal cancer npc ten patients with late stage nasopharyngeal cancer were treated with concurrent chemo radiation therapy ccrt eight patients had stage iii disease and 2 patients had stage iv disease according to american joint committee on cancer staging system 7 nine patients were male and 1 patient was female one radiation oncologist delineated radiation targets and organs at risk oars the clinical target volume ctv included the primary nasopharyngeal tumor neck nodal region and subclinical disease considering the setup uncertainty margins ranging from 3 10 mm were added to each ctv to create a planning target volume ptv reduced field techniques were used for delivery of the 66 70 gy total dose the treatment plan course for each patient consisted of several sub plans in this study we selected the first plan with prescribed doses of 50 60 gy in 25 30 fractions to study the effect of the collimator angles on dosimetric verifications of the vmat the radiation treatment planning system eclipse v10042 varian medical systems usa was used to generate vmat plans the vmat rapidarc varian plans were generated for clinac ix linear accelerator using 6 mv photons the clinac ix is equipped with a millennium 120 mlc that has spatial resolution of 5 mm at the isocenter for the central 20 cm region and of 10 mm in the outer 2xmath110 cm region the maximum mlc leaf speed is 25 cm s and leaf transmission is 18 dosimetric leaf gap of the mlc was measured using the procedure recommended by varian medical systems the value of the dosimetric leaf gap was 1427 mm for 6 mv photons for volume dose calculation grid size of 25 mm inhomogeneiy correction the anisotropic analytical algorithm aaa v10028 and the progressive resolution optimizer pro v10028 were used in all plans vmat plans for npc patients were composed of 2 coplanar full arcs in 181 179 degree clockwise and 179 181 degree counterclockwise directions the 2 full arc delivery was expected to achieve better target coverage and conformity than the single arc xcite we generated 10 vmat plans plan set a with different collimator angles for each patient ten 
collimator angles for the first arc were 0 5 10 15 20 25 30 35 40 and 45 degrees for the second arc the collimator angle was selected explementary to the collimator angle of the first arc in the same plan ie the 2 collimator angles added up to 360 degree the average field size of vmat plans was 22 xmath1 22 xmath2 we used the same dose constraints for all the 10 vmat plans and optimization was conducted for each plan the maximum dose rate was 600 mu min the target coverage was aimed to achieve a 100 volume covered by 95 of prescribed dose optimization of each plan resulted in different fluences and different mlc motions for each plan therefore we had 2 variables ie the collimator angle and mlc motions to simplify the analysis we generated another set of 10 plans plan set b with the same mlc motions and different collimator angles for each patient the mlc motions were those of the plan with 30 degree collimator angle the plans in this set had different dose distributions and usually can not be used for treatment purposes excepting the plan with a 30 degree collimator angle we performed patient specific quality assurances qa of 2 sets of 10 vmat plans for each patient the measurements were made by the 2dimensional ion chamber array matrixx iba dosimetry germany xcite the matrixx has 1020 pixel ion chambers arranged in a 32xmath132 matrix covering 244xmath1244 xmath2 each ion chamber has the following dimensions 45 mm in diameter 5 mm in height and a sensitive volume of 008 xmath3 the distance between chambers is 7619 mm the matrixx has an intrinsic buildup and backscatter thicknesses of 03 mm and 35 mm respectively the matrixx was placed between solid water phantoms multicube iba dosimetry germany figure fig1 so that thickness of total buildup and backscatter was 5 cm figure fig2 the source to surface distance was 95 cm with the measurement plane of the matrixx at the isocenter of the linac measurement was done for each arc in the plan therefore we conducted 40 measurements for each patient and the total number of measurements was 400 the angular dependence of the matrixx was corrected after the measurements using the gantry angle sensor xcite iba dosimetry germany the comparison between the calculations and the measurements were made by xmath0index 22 mm 33 mm analysis xcite using omnipro imrt v17b iba dosimetry germany the xmath0index was calculated only for the regions that have dose values above 10 xcite in the measured area average xmath0index passing rates of patient specific qas were given in table table1 the results were averaged over the 2 arcs and 10 patients because the 2 arcs in each vmat plan rotated almost 360 degrees andthe measurement set up is mirror symmetric about the measurement plane xmath4 plane in figure fig2 of the matrixx detector and a vertical plane passing through the isocenter xmath5 plane in figure fig2 the arc with collimator angle xmath6 is symmetric to the arc with collimator angle xmath7 therefore we regarded the collimator angle of the second arc which was equal to 360 minus the collimator angle of the first arc as the collimator angle of the first arc in the analysis xmath0index passing rates of the patient specific qas as a function of the collimator angle cols table1 maximum difference between xmath0index 22 mm passing rates of plans in plan set a for each patient ranged from 283 to 1432 and the average value was 844xmath8424 using the 33 mm criteria the maximum difference ranged from 146 to 560 and the average value was 367xmath8129 maximum difference 
between xmath0index 22 mm passing rates of plans in plan set b for each patient ranged from 3.71 to 10.44 and the average value was 7.97xmath82.17 using the 33 mm criteria the maximum difference ranged from 1.46 to 7.23 and the average value was 4.69xmath82.51 2dimensional dose distributions calculated by the eclipse treatment planning system dose distributions measured by the matrixx detector and xmath0index 33 mm distributions of 1 patient for plans in the plan set a with collimator angle 5 and 35 degree are shown in figure fig3 1 and fig3 2 respectively the passing rate for the 35 degree collimator angle was less than the passing rate for the 5 degree collimator angle [figure fig3 1: plan in the plan set a for collimator angle 5 degree dose distributions and xmath0index 33 mm distributions in the xmath0index distributions red color indicates the region where the 33 mm criteria failed] [figure fig3 2: plan in the plan set a for collimator angle 35 degree the first panel is the 2dimensional dose distribution calculated by the eclipse treatment planning system the second is the dose distribution measured by the matrixx detector and the last is the xmath0index 33 mm distribution in the xmath0index distributions red color indicates the region where the 33 mm criteria failed] the increase in collimator angle resulted in decreased xmath0index passing rates as shown in figure fig4 in the figure passing rates were normalized to the value of 0 degree black and white squares indicate xmath0index 22 mm and 33 mm passing rates respectively averaged over plan set a black and white triangles indicate xmath0index 22 mm and 33 mm passing rates respectively averaged over plan set b [figure fig4: xmath0index passing rates 22 mm of patient specific delivery qas as a function of the collimator angle] there were statistically significant negative correlations between the collimator angle and the xmath0index passing rates pearson correlation coefficients for pair wise ratings of the xmath0index 22 mm and 33 mm passing rates of plans in the plan set a and b were -0.524 and -0.412 respectively with p values xmath9 0.001 for accuracy of vmat a smaller collimator angle is better and for mlc leakage a larger collimator angle is better we were thus required to make a compromise based on this study in our hospital the collimator angles of the vmat plans for head and neck patients range between 15 and 25 degrees because the average xmath0index passing rates were above or near to 90 for the 22 mm criteria and 97 for the 33 mm criteria as shown in the results of the passing rates for the plan set a table table1 in other hospitals these results can be somewhat different because they have different vmat delivery systems and different vmat planning systems we think that they can find optimal collimator angles by conducting the similar measurements described in this article although not included in this article we performed the patient specific qas for other treatment sites with smaller field sizes that are xmath9 13 xmath1 13 xmath2 the maximum difference of the passing rates for vmat plans with various collimator angles was xmath9 1.5 so the collimator angle does not affect the accuracy of the vmat delivery with small field sizes the accuracy of radiation delivery by the linac depends on geometrical accuracies such as gantry isocentricity collimator isocentricity and mlc position it was reported that leaf limiting velocity mlc position and mechanical isocenter varied at different collimator and gantry angles xcite this may explain the xmath0index passing rates dependence on the collimator angle further study is needed to investigate the origin of the collimator
angle dependence of the accuracy of vmat delivery the quality of the plan itself is another factor for consideration in the choice of the collimator angle of the vmat plan optimized dose distributions with the same dose constraints can vary according to the collimator angle of the vmat plan further study is needed to evaluate the quality of vmat plans with different collimator angles we found that the results of the patient specific qas for vmat plans using the 2dimensional ion chamber array matrixx are dependent on the collimator angle of the vmat plans the xmath0index 22 mm and 33 mm passing rates were negatively correlated with the collimator angle we showed that collimator angles of the vmat plans for head and neck cancer patients range between 15 25 degrees resulting in the average xmath0index passing rates above or near to 90 for the 22 mm criteria and 97 for the 33 mm criteria f k lee et al med 39 44 2014 m w k kan et al j appl 13 6 2012 s a syam kumar et al rep radiother 8 87 2013 a holt et al radiat 8 26 2013 j alvarez moret et al raiat oncol 5 110 2010 t lee et al j appl 12 4 2011 x jin et al med 38 418 2013 j herzen et al 52 1197 2007 l d wolfsberger et al j appl 11 1 2010 m rao et al med 37 1350 2010 s korreman et al acta oncologica 45 185 2009 m stasi et al 39 7626 2012 g a ezzell et al med 36 5359 2009 c c ling et al j radiat 722 575 2008 m okumura et al phys 55 3101 2010
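for reference, the python sketch below shows the kind of xmath0index comparison described above: a brute force global gamma evaluation of a measured against a calculated 2 dimensional dose grid, with the dose tolerance taken as a fraction of the maximum reference dose, a distance to agreement criterion in mm and the 10 percent low dose threshold used in the analysis. it is only an illustrative implementation under these assumptions and not the omnipro imrt software used for the measurements.

import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm,
                       dose_tol=0.02, dist_tol_mm=2.0,
                       search_mm=6.0, low_dose_cut=0.10):
    """global gamma index passing rate of dose_eval against dose_ref on the same 2d grid.
    dose_tol is a fraction of the maximum reference dose, dist_tol_mm is the distance to
    agreement criterion, and points below low_dose_cut times the maximum dose are excluded."""
    norm = dose_tol * dose_ref.max()
    reach = int(np.ceil(search_mm / spacing_mm))
    gamma2 = np.full(dose_ref.shape, np.inf)
    for dy in range(-reach, reach + 1):
        for dx in range(-reach, reach + 1):
            dist_mm = np.hypot(dy, dx) * spacing_mm
            if dist_mm > search_mm:
                continue
            # shift the evaluated grid; np.roll wraps at the edges, which is acceptable for a sketch
            shifted = np.roll(np.roll(dose_eval, dy, axis=0), dx, axis=1)
            cand = (dist_mm / dist_tol_mm) ** 2 + ((shifted - dose_ref) / norm) ** 2
            gamma2 = np.minimum(gamma2, cand)
    gamma = np.sqrt(gamma2)
    mask = dose_ref >= low_dose_cut * dose_ref.max()
    return 100.0 * np.mean(gamma[mask] <= 1.0)

# tiny synthetic example: a one pixel shifted gaussian "measurement" against a "calculation"
y, x = np.mgrid[0:64, 0:64].astype(float)
calc = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 200.0)
meas = np.exp(-((x - 33.0) ** 2 + (y - 32.0) ** 2) / 200.0)
print(gamma_passing_rate(calc, meas, spacing_mm=1.0))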
collimator angle is usually rotated when planning volumetric modulated arc therapy vmat due to the leakage of radiation between multi leaf collimator mlc leaves we studied the effect of the collimator angles on the results of dosimetric verification of the vmat plans for head and neck patients we studied vmat plans for 10 head and neck patients we made 2 sets of vmat plans for each patient each set was composed of 10 plans with collimator angles of 0 5 10 15 20 25 30 35 40 45 degrees plans in the first set were optimized individually and plans in the second set shared the 30 degree collimator angle optimization two sets of plans were verified using the 2dimensional ion chamber array matrixx iba dosimetry germany the comparison between the calculation and measurements were made by the xmath0index analysis the xmath0index 22 mm and 33 mm passing rates had negative correlations with the collimator angle maximum difference between xmath0index 33 mm passing rates of different collimator angles for each patient ranged from 146 to 560 with an average of 367 there were significant differences maximum 56 in the passing rates of different collimator angles the results suggested that the accuracy of the delivered dose depends on the collimator angle these findings are informative when choosing a collimator angle in vmat plans
introduction materials and methods results and discussion conclusions
this work was supported by the singapore mit alliance under the hpces program 10 c cercignani the boltzmann equation and its applications springer verlag new york 1988 g chen nanoscale energy transport and conversion oxford new york 2005 vl gurevich transport in phonon systems north holland new york 1986 m lundstrom fundamentals of carrier transport 2nd ed cambridge university press cambridge 2000 b davidson jb sykes neutron transport theory clarendon press 1957 m f modest radiative heat transfer academic press usa 2003 g chen ballistic diffusive heat conduction equations physical review letters 86 22973000 2001 a majumdar microscale heat conduction in dielectric thin films journal of heat transfer 115 716 1993 l l baker and n g hadjiconstantinou variance reduction for monte carlo solutions of the boltzmann equation physics of fluids 17 051703 2005 g a bird molecular gas dynamics and the direct simulation of gas flows clarendon press oxford 1994 n g hadjiconstantinou a l garcia m z bazant and g he statistical error in particle simulations of hydrodynamic phenomena journal of computational physics 187 274 297 2003 l l baker and n g hadjiconstantinou variance reduced particle methods for solving the boltzmann equation journal of computational and theoretical nanoscience 5 165174 2008 t m m homolle and n g hadjiconstantinou low variance deviational simulation monte carlo physics of fluids 19 041701 2007 t m m homolle and n g hadjiconstantinou a low variance deviational simulation monte carlo for the boltzmann equation journal of computational physics 226 2341 2358 2007 k xu a gas kinetic bgk scheme for the navier stokes equations and its connection with artificial dissipation and godunov method journal of computational physics 171 289335 2001 y sone kinetic theory and fluid dynamics birkhauser 2002 p bassanini c cercignani and c d pagani comparison of kinetic theory analyses of linearized heat transfer between parallel plates international journal of heat and mass transfer 10 447460 1967 n g hadjiconstantinou the limits of navier stokes theory and kinetic extensions for describing small scale gaseous hydrodynamics physics of fluids 18 111301 2006 c cercignani and a daneri flow of a rarefied gas between two parallel plates journal of applied physics 34 35093513 1963 g a radtke and n g hadjiconstantinou variance reduced particle simulation of the boltzmann transport equation in the relaxation time approximation to appear in physical review e
we present and discuss a variance reduced stochastic particle method for simulating the relaxation time model of the boltzmann transport equation the present paper focuses on the dilute gas case although the method is expected to directly extend to all fields carriers for which the relaxation time approximation is reasonable the variance reduction achieved by simulating only the deviation from equilibrium results in a significant computational efficiency advantage compared to traditional stochastic particle methods in the limit of small deviation from equilibrium more specifically the proposed method can efficiently simulate arbitrarily small deviations from equilibrium at a computational cost that is independent of the deviation from equilibrium which is in sharp contrast to traditional particle methods the boltzmann transport equation xmath0 textrmcoll where xmath1 is the single particle distribution function xcite xmath2 textrmcoll denotes the collision operator xmath3 is the acceleration due to an external field xmath4 is the position vector in physical space xmath5 is the molecular velocity vector and xmath6 is time is used to describe under appropriate conditions transport processes in a wide variety of fields xcite including dilute gas flow xcite phonon xcite electron xcite neutron xcite and photon transport xcite recently it has received renewed attention in connection to micro and nano scale science and technology where transport at lengthscales of the order of or smaller than the carrier mean free path is frequently considered eg nanoscale solid state heat transfer xcite numerical solution of the boltzmann equation remains a formidable task due to the complexity associated with the collision operator and the high dimensionality of the distribution function both these features have contributed to the prevalence of particle solution methods which are typically able to simulate the collision operator through simple and physically intuitive stochastic processes while employing importance sampling which reduces computational cost and memory usage xcite another contributing factor to the prevalence of particle schemes is their natural treatment of the advection operator which results in a numerical method that can easily handle and accurately capture traveling discontinuities in the distribution function xcite an example of a particle method is the direct simulation monte carlo dsmc xcite which has become the prevalent simulation method for dilute gas flow one of the most important disadvantages of particle methods for solving the boltzmann equation derives from their reliance on statistical averaging for extracting field quantities from particle data when simulating processes close to equilibrium thermal noise typically exceeds the available signal when coupled with the slow convergence of statistical sampling statistical error decreases with the square root of the number of samples this often leads to computationally intractable problems xcite for example to resolve a flow speed of the order of 1 m s to 1 statistical uncertainty in a dilute gas on the order of xmath7 independent samples are needed xcite in a recent paper baker and hadjiconstantinou have shown xcite that this rather severe limitation can be overcome with a form of variance reduction achieved by simulating only the deviation from equilibrium by adopting this approach it is possible to construct monte carlo simulation methods that can capture arbitrarily small deviations from equilibrium at a computational cost that is 
independent of the magnitude of this deviation this is in sharp contrast to regular monte carlo methods such as dsmc whose computational cost for the same signal to noise ratio increases sharply xcite as the deviation from equilibrium decreases the work in refs xcite focused on the boltzmann equation for hard spheres and the associated hard sphere collision operator the complexity associated with this collision operator as well as others in related fields has prompted scientists to search for simplified models one particularly popular model is the relaxation time approximation xcite xmath8 textrmcollfrac1tau leftf flocright where xmath9 is the local equilibrium distribution function and xmath10 is a relaxation time despite the approximation involved this collision model has enjoyed widespread application in a variety of disciplines concerned with transport processes xcite in response to this widespread use in the present paper we present a variance reduced particle method for simulating the boltzmann equation under the relaxation time approximation to focus the discussion we specialize our treatment to the dilute gas case however we hope that this exposition can serve as a prototype for development of similar techniques in all fields where the relaxation time approximation is applicable within the rarefied gas dynamics literature the relaxation time approximation is known as the bgk model xcite in the interest of simplicity in the present paper we assume xmath11 and that no external forces are present the first assumption can be easily relaxed as discussed below external fields also require relatively straightforward modifications to the algorithm presented below as discussed in previous work xcite a variance reduced formulation is obtained by simulating only the deviation xmath12 from an arbitrary but judiciously chosen underlying equilibrium distribution xmath13 in other words computational particles represent the deviation from equilibrium and as a result they may be positive or negative depending on the sign of the deviation from equilibrium at the location in phase space where they reside as in other particle schemes xcite in the interest of computational efficiency each computational deviational particle represents an effective number xmath14 of physical deviational particles a dilute gas in equilibrium is described by a maxwell boltzmann distribution leading to a local equilibrium distribution xmath15 that is parametrized by the local number density xmath16 the local flow velocity xmath17 and the most probable speed xmath18 based on local temperature xmath19 here xmath20 is boltzmann s constant and xmath21 is the molecular mass in the work that follows the underlying equilibrium distribution xmath13 will be identified with absolute equilibrium xmath22 where xmath23 is a reference equilibrium number density and xmath24 is the most probable molecular speed based on the reference temperature xmath25 this choice provides a reasonable balance between generality computational efficiency and simplicity other choices are of course possible and depending on the problem perhaps more efficient however care needs to be taken if a spatially varying or time dependent underlying equilibrium distribution is chosen since this results in a more complex algorithm xcite particle methods typically solve the boltzmann equation by applying a splitting scheme in which molecular motion is simulated as a series of collisionless advection and collision steps of length xmath26 in such a scheme the 
collisionless advection step integrates xmath27 by simply advecting particles for a timestep xmath26 while the collision step integrates xmath28 textrmcolllabelcollision by changing the distribution by an amount xmath2 textrmcollbf rbf ctdelta t spatial discretization is introduced by treating collisions as spatially homogeneous within small computational cells of volume xmath29 our approach retains this basic structure the particular form of these steps can be summarized as follows advection step it can be easily verified that when the underlying equilibrium distribution is not a function of space or time as is the case here the advection step for deviational particles is identical to that of physical particles ie xmath30 is also governed by equation advection during the advection step boundary condition implementation however differs somewhat because the mass flux to boundaries is now split into a deviational contribution and an equilibrium contribution a more extensive discussion as well as algorithmic details can be found in xcite collision step the variance reduced form of collision can be written as xmath8 textrmcoll frac1tau leftflocfrightfrac1tau fd labelstart within each computational cell we integrate equation start using a two part process this integration requires local cell values of various quantities denoted here by hats which are updated every timestep by sampling the instantaneous state of the gas in the first part we remove a random sample of particles by deleting particles with probability xmath31 to satisfy xmath32 in our implementation this is achieved through an acceptance rejection process which can also treat the case xmath33 in the second part we create a set of positive and negative particles using an acceptance rejection process to satisfy xmath34 delta t labeladd this step can be achieved by the following procedure let xmath35 be a positive value such that xmath36 is negligible for xmath37 where xmath38 is an xmath39norm furthermore let xmath40 bound xmath41 from above then repeat xmath42 times 1 generate uniformly distributed random velocity vectors xmath43 with xmath44 2 if xmath45 create a particle with velocity xmath46 at a randomly chosen position within the cell and sign xmath47 here xmath48 is a random number uniformly distributed on 01 to find xmath49 we note that the number of particles of all velocities and signs that should be generated in a cell to obtain the proper change in the distribution function is xmath50 where xmath51 is the cell volume the expected total number of particles ultimately generated by the above algorithm is xmath52 by equating the two expressions we obtain xmath53 we have verified the above algorithm using a variety of test cases some representative results are presented below figure heat shows a comparison between numerical solution of the bgk model of the boltzmann equation xcite and our simulation results for the heat flux xmath54 between two parallel infinite fully accommodating plates at slightly different temperatures xmath25 and xmath55 and a distance xmath56 apart the figure compares the heat flux normalized by the free molecular ballistic value xmath57 as a function of a knudsen number xcite xmath58 where xmath59 is the equilibrium gas pressure the agreement is excellent the simulations used approximately 50000 particles yielding a relative statistical uncertainty of less than 05 these simulations were performed at xmath60k although as shown below the cost is expected to be independent of the magnitude of xmath61 in the 
limit of small deviation from equilibrium we also performed dsmc simulations of the bgk model using xmath62k and otherwise identical discretization and sampling parameters xmath62k was chosen as a compromise between best performance and a deviation from equilibrium that is small enough for the linearized conditions and thus the benchmark results of xcite to be valid figure poi shows a comparison between a numerical solution of the linearized bgk model of the boltzmann equation xcite and our simulation results for pressure driven flow for small pressure gradients under linear conditions pressure driven flow can be described xcite by xmath63 textrmcoll kappa cz f where xmath64 is the normalized pressure gradient here assumed to be in the xmath65direction and xmath66 is the channel transverse direction for xmath67 the term xmath68 can be included in our formulation as a source of xmath69 positive and an equal number of negative deviational physical particles per unit volume drawn from the distribution xmath70 for xmath71 and xmath72 respectively figure poi shows the normalized flowrate xmath73 as a function of xmath74 where xmath75 is the gas density and xmath76 is the average flow velocity averaged across the channel width excellent agreement is observed as stated above and shown in xcite this class of deviational methods exhibit statistical uncertainties that scale with the local deviation from equilibrium thus allowing the simulation of arbitrarily low deviations from equilibrium at a cost that is independent of this deviation here we demonstrate this feature by studying the statistical uncertainty of the temperature in a problem involving heat transfer specifically figure fluct shows the relative statistical uncertainty in the temperature xmath77 as a function of the normalized wall temperature difference xmath78 in the heat transfer problem discussed above for xmath79 and xmath80k in evaluating xmath77 the characteristic value for temperature was taken to be the difference xmath81 the standard deviation is measured from two computational cells in the middle of the computational domain each containing approximately 950 particles the figure shows that for small xmath81 the relative statistical uncertainty remains independent of this quantity in sharp contrast to non deviational methods moreover the variance reduction achieved is such that significant computational savings are expected for xmath82 the algorithm described above imposes no restrictions on the magnitude of xmath30 although it is expected that the deviational approach will be significantly more efficient than traditional approaches when xmath30 is small if xmath30 is sufficiently small for linearization to be appropriate under some conditions significant gains in computational efficiency can be achieved by taking the following into consideration under linear conditions for the present model we can write xmath83 labellin where xmath84 and xmath85 this representation can be very useful for improving the computational efficiency of update add for example for isothermal constant density flows particles can be generated from a combination of a normal distribution and analytic inversion of the cumulative distribution function which is significantly more efficient than acceptance rejection alternatively lin provides a means of obtaining tight bounds for xmath86 and thus reducing the number of rejections if the acceptance rejection route is followed we have presented an efficient variance reduced particle method for solving the 
boltzmann equation in the relaxation time approximation the method combines simplicity with a number of desirable properties associated with particle methods such as robust capture of traveling discontinuities in the distribution function and efficient collision operator evaluation using importance sampling xcite without the high relative statistical uncertainty associated with traditional particle methods in low signal problems in particular the method presented here can capture arbitrarily small deviations from equilibrium for constant computational cost more sophisticated techniques with spatially variable underlying equilibrium distribution xcite are expected to increase computational efficiency by reducing the number of deviational particles required to describe the local state of the gas one such technique is described in xcite
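to make the two part collision update described above concrete, the following python sketch performs the deletion and acceptance rejection generation steps for the deviational particles of a single cell. the molecular constants, the cell parameters and the simple bound used for the deviation from equilibrium are assumptions made for the example, the advection step and boundary conditions are omitted, and the sketch is only an illustration of the procedure rather than the authors implementation.

import numpy as np

rng = np.random.default_rng(1)
kB, m_mol = 1.380649e-23, 6.63e-26            # boltzmann constant and an argon like molecular mass (assumed)

def maxwellian(c, n, u, T):
    """maxwell boltzmann velocity distribution with number density n, drift velocity u and temperature T"""
    c2 = 2.0 * kB * T / m_mol                  # square of the most probable speed
    return n / (np.pi * c2) ** 1.5 * np.exp(-np.sum((c - u) ** 2, axis=-1) / c2)

def collision_step(vel, sign, cell_volume, n_loc, u_loc, T_loc, n_ref, T_ref,
                   tau, dt, n_eff, c_max):
    """one relaxation time collision step for the deviational particles of a single cell:
    part 1 deletes existing particles with probability dt/tau, part 2 generates signed
    particles distributed as (f_loc - f_ref) dt/tau by acceptance rejection in |c_i| <= c_max"""
    keep = rng.random(len(sign)) >= dt / tau
    vel, sign = vel[keep], sign[keep]

    dev = lambda c: maxwellian(c, n_loc, u_loc, T_loc) - maxwellian(c, n_ref, np.zeros(3), T_ref)
    # crude upper bound on |f_loc - f_ref|: the larger of the two peak values (an assumption of this sketch)
    W = max(n_loc / (np.pi * 2.0 * kB * T_loc / m_mol) ** 1.5,
            n_ref / (np.pi * 2.0 * kB * T_ref / m_mol) ** 1.5)
    # number of trial velocities chosen so that the expected number of accepted particles is correct
    n_trials = int(np.ceil(W * (2.0 * c_max) ** 3 * cell_volume * dt / (tau * n_eff)))

    cand = (2.0 * rng.random((n_trials, 3)) - 1.0) * c_max
    d = dev(cand)
    accept = rng.random(n_trials) < np.abs(d) / W
    return np.vstack([vel, cand[accept]]), np.concatenate([sign, np.sign(d[accept])])

# illustrative call: an initially empty cell relaxing toward a slightly hotter local state
c0 = np.sqrt(2.0 * kB * 300.0 / m_mol)
vel, sign = np.empty((0, 3)), np.empty(0)
vel, sign = collision_step(vel, sign, cell_volume=1e-18,
                           n_loc=1e25, u_loc=np.zeros(3), T_loc=301.0,
                           n_ref=1e25, T_ref=300.0,
                           tau=1e-9, dt=1e-10, n_eff=1e6, c_max=4.0 * c0)
print(len(sign), "deviational particles generated, net sign", sign.sum())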
acknowledgements
it is well known that an essential characteristic of compounds forming liquid crystals is the rod like shape of their constituent molecules with an high length to breadth ratio therefore the molecules are supposed to be cylindrically symmetrical for example the ordering matrix which is often used to describe the partial alignment in a mesophase contains only one independent element and this can be determined by some techniques 1 the fact that the molecular cylindrical symmetry is assumed is appealing to a statistical mechanician because the pairwise anisotropic intermolecular potential required in any calculation is simple for such particles 2 however the molecules in fact are lath like and thus do not possess the high symmetry the ordering matrix has two principal components and therefore these components are required to describe the orientational order of a uniaxial mesophase composed of lath like molecules in this sense the deviation of this ordering matrix from cylindrical symmetry was found to be significant 3 the importance of deviations from cylindrical symmetry may be inferred from unambiguous determinations of the ordering matrix for rod like molecules such as xmath0 4 moreover it is found that these matrices are comparable to those estimated for a pure mesophase 3 there are some studies in which the consequences of deviations from molecular cylindrical symmetry are investigated it is shown that a system consisting of particles with a lower symmetry than xmath1 is capable of existing either as a uniaxial or a biaxial liquid crystal 5 the possible existence of a biaxial phase is studied in detail for a system of hard rectangular plates using a lattice model 6 the landau approach 7 and the molecular field approximation 8 the deviations of the ordering matrix from cylindrical symmetry is clearly determined by the molecular symmetry and the element of the ordering matrix for the long axis will also be influenced by the form of the pseudo intermolecular potential the calculations of the ordering matrix for an ensemble of hard rectangular particles is performed in 9 it must be emphasized that although these calculations are of some interest they may not be particularly realistic because some experiments indicate that dispersion forces may make a dominant contribution to the anisotropic intermolecular potential 9 considering the cases above luckhurst et al developed a theory 10 for non cylindrically symmetric molecules interacting via a completely general intermolecular potential within molecular field approximation for a decade nonextensive statistics has an increasing interest and recently tsallis thermostatistics tt has been applied to the nematic isotropic transition 11 13 as a nonextensive statistics in 11 the maier saupe mean field theory has been generalized within tt and applied to a nematic liquid crystal para azoxyanisole in the other study 12 the the effects of the nonextensivity on the dimerization process has been studied and finally the mean field theory of anisotropic potentail of rank xmath2 has been generalized within tt and the effect of the nonextensivity on the order parameters has been illustrated in 13 up to now the mean field theories for uniaxial nematogens formed by cylindrically symmetric molecules have been studied by using tt in this manner we aim in this study to enlarge the applications of tt to the liquid crystal systems and to handle luckhurst et als theory which considers the molecules to be non cylindrically symmetric in doing so we first give some 
essential properties of luckhurst et als theory then we mention on tt and its axioms finally we apply tt to the luckhurst et als theory and some possible concluding remarks are made we must emphasize that we would like to give only the possible contributions of the nonextensivity to the theory so we must keep in mind that since one relies on the generalized theory or not more extensional studies related with it must be performed in the nematic isotropic transition however we believe that this study is sufficient to give motivation for further applications of tt to the liquid crystals the intermolecular potential for particles of general shape is given by 10 xmath3 in a product basis of wigner rotation matrix 14 where xmath4 is the distance between molecules xmath5 and xmath6 the orientation of molecule xmath7 in a coordinate system containing the intermolecular vector as the xmath8 axis is denoted by xmath9 this potential energy is invariant under rotation of the coordinate system about xmath10 axis therefore the summation in eq1 can be restricted as follows 10 xmath11 in what follows the redundant subscripts on the coefficient xmath12 will be suppressed because it is convenient to define the molecular orientation in terms of a common coordinate system the potential energy xmath13 could be transformed to a laboratory frame the choice of this coordinate system is determined according to the symmetry of the liquid crystal phase so for a uniaxial mesophase the laboratory xmath10 axis can be taken to be parallel to the symmetry axis of the mesophase the transformation of xmath14 is carried out by performing the rotation from the intermolecular vector to the molecule in two steps using the relationship xmath15 where the subscript xmath16 is the rotation from the laboratory to the intermolecular frame xmath17 denotes that from the laboratory to the molecule coordinate system then the intermolecular potential can be written as xmath18 if the distribution function for the intermolecular vector is independent of orientation then one could use the orthogonality of the rotation matrices to evaluate the ensemble average xmath19 the partially averaged potential may then be written as xmath20 then now it is the time to average over the orientations adopted by particle xmath6 however one needs xmath21 this average is taken to be independent of the orientation of molecule xmath5 within the molecular field approximation and it is to be identified with the normal ensemble average since we only consider a uniaxial mesophase xmath22 vanishes when xmath23 and xmath24 is even in this manner the potential is reduced to the following form xmath25 where the summation is over xmath26 xmath24 even the scalar contribution could be ignored because we would only like to study the orientational properties of the mesophase the average of the orientational pseudo potential for molecule xmath5 is given by xmath27 the expansion coefficients in eq8 are defined by xmath28 where xmath29 is the scalar potential of the ensemble 1015 the pseudo potential is conveniently written as xmath30 where xmath31 with xmath32 and xmath33 is the euler angles and define the orientation of the director in the molecular coordinate system luckhurst et al made some comments in 10 on this form of the pseudo potential and the symmetry properties of the coefficients xmath34 because luckhurst et al investigated the influence of deviations from molecular cylindrical symmetry on various orientational order parameters for a uniaxial nematic mesophase 
they tried to minimize the number of the variables in the calculation without any loss of the essential physics as a first approximation they considered only those terms with xmath24 equal to xmath6 in the expansion of the pseudo potential the neglect of terms higher than quadratic could be a good approximation since similar assumptions for cylindrically symmetric molecules had let to results in reasonable accord with experiment 1016 as mentioned in 10 the number of expansion coefficients are reduced by appealing to some specific model for the interactions or by imposing symmetry restrictions on the molecules if one follows straley 9 and takes the molecules to be hard rectangular parallelopipeds then the coefficients xmath35 are independent of the sign of either xmath36 or xmath37 and zero if either of these subscripts is odd that is xmath38 3 c220 left l2bwright left b wright sqrt6 c222 lleft w bright 22endaligned where xmath24 is the length xmath39 the breadth and xmath40 the width of the molecule it must be noted that this parametrization is only approximate 9 and also anisotropic repulsive forces are probably not dominant in determining the behaviour of real nematics therefore assuming the molecules to interact via dispersion forces the expressions for xmath35 could be derived the similar restriction can be imposed on the expansion coefficients xmath41 without appealing to specific forms of the intermolecular interactions by using more formal arguments based on the molecular symmetry and its influence on the pair potential for example if each molecule has a centre of symmetry then 10 xmath42 and consequently xmath43 in addition if the molecules also possess a plane of symmetry orthogonal to their xmath10 axes then xmath44 and xmath45 however eqs1618 can only be consistent when both xmath36 and xmath37 have even values moreover the coefficients are independent of the sign of xmath36 and xmath37 then the pseudo potential may be written as xmath46 left d0jlbeta gamma d0jlbeta gamma right 1delta 0j1delta 0p according to the restrictions above if limiting the summation to those terms with xmath24 equal to xmath6 one has xmath47 d002beta nonumber 2left c200overlined0022c222overlined022 cos 2gamma right d022beta cos 2gamma endaligned where xmath48 is a reduced wigner rotation matrix 14 the second rank order parameters xmath49 and xmath50 in this expression are related to the principal elements of the ordering matrix by xmath51 and xmath52 the order parameter xmath53 indicates the deviation of the ordering matrix from cylindrical symmetry for simplicity it is convenient to write the pseudo potential as xmath54 where xmath55 kt and xmath56 kt the partition function for this single particle potential is then given by xmath57 sin beta dbeta dgamma the orientational molar potential energy and the molar entropy are given as follows xmath58 s rleft aoverlined002boverlined022cos 2gamma right rln zendaligned respectively then the orientational molar free energy is straightforward xmath59 rtln z the order parameters are calculated from the equations below xmath60 sin beta dbeta dgamma z and xmath61 sin beta dbeta dgamma z it is worthwhile to note that xmath49 in eq30 is simply xmath62 xmath63 is the second legendre polynomial the free energy xmath64 will be a minimum provided the consistency equations for these order parameters are satisfied with the aid of eqs2425 one can write the following equations xmath65 and xmath66 where xmath67 xmath68 in eq34 is the ratio xmath69 and it is a measure of the 
deviation from cylindrical symmetry if xmath68 is taken to be zero the theory reduced to the maier saupe mean field theory now we would like to give the description of tsallis thermostatistics with its axioms below tsallis thermostatistics tt has been extensively used to investigate the concepts of statistical mechanics and thermodynamics in this context it has been applied to various phenomena 17 and tt seems to be appropriate for the study of nonextensive systems as mentioned above we have choosen the nematic isotropic transition as an application area of tt in earlier studies 11 13 and the present study would be the extensional study to investigate the nematic liquid crystals in all of these earlier studies we consider that the uniaxial nematic liquid crystals are formed by cylindrically symmetric molecules however we wonder in this study that if the mesophase could be formed by non cylindrically symmetric molecules how does the nonextensivity affect the nematic isotropic transition therefore our starting point is luckhurst et als theory 10 which is a molecular field theory and assumes the molecules of the nematic liquid crystals to be non cylindrical and we examine the effects of the nonextensivity on the nematic isotropic transition by considering this molecular field theory the understanding of the basic principles and properties of tt has gain fundamental importance in this manner tsallis et al have studied the role of the constraint within tt tsallis proposed an entropy definition in 1988 18 xmath70 where xmath71 is a constant xmath72 is the probabilty of the system in the xmath73 microstate xmath40 is the total number of configurations and xmath74 is called the entropic index whose meaning will be given below in the limit xmath75 the entropy defined in eq35 reduced to the boltzmann gibbs bg entropy in other words tt contains boltzmann gibbs statistics as a special case with xmath76 as well known bg statistics is a powerful one to study a variety of the physical systems however it fails for the systems which xmath77 have long range interactions xmath78 have long range memory effects and xmath79 evolve in multi fractal space time the systems which has these properties is called nonextensive and if a system does obey these restrictions bg statistics seems to be inappropriate and one might need a nonextensive formalism to study on the physical system it will be useful to write the pseudo additivity entropy rule xmath80 which reflects the character of the entropic index xmath74 and also of the nonextensivity in this equation xmath81 and xmath39are two independent systems since all cases xmath82 which is called nonnegativity property xmath83 xmath76 and xmath84 correspond to superadditivity additivity and subadditivity respectively because of some unfamiliar consequences of first two energy constraints discussed in 1319 tsallis et al present the third internal energy constraint as follows xmath85 this choice is commonly considered to study physical systems and is denoted as the tsallis mendes plastino tmp choice the optimization of tsallis entropy given by eq35 according to the third choice of the energy constraint results in xmath86 frac11q zq3 with xmath87 frac11q therefore the xmath74expectation value of any observable is defined as xmath88 where xmath81 represents any observable quantity which commutes with hamiltonian as can be expected when xmath76 the xmath74expectation value of the observable reduces to the conventional one at this stage the important point is to solve eq40 
which is an implicit one and tsallis et al suggest two different approaches xmath89 and xmath900xb4 beginexpansion acute endexpansion transformation the nonextensivity in fact appears in systems where long range interactions andor fractality exist and such properties have been invoked in recent models of manganites 2021 as well as in the interpretation of experimental results these properties are also appear in 22 where the role of the competition between different phases to the physical properties of these materials is emphasized the formation of micro clusters of competing phases with fractal shapes randomly distributed in the material is considered in 2324 and the role of long range interactions to phase segregation 2526 if we look at the recent studies in which tt is employed we see also that the entropic index is frequently established from the dynamics of the system under consideration analytically some of such studies are boghosian et als study 27 for the incompressible navier stokes equation in boltzmann lattice models baldovin et als studies 2829 at the edge of chaos for the logistic map universality class oliveira et al studies 2021 on manganites according to these axioms of tt given above the order parameters are calculated by using eq40 instead of eqs30 and 31 the exponential form of any function say x is given by xmath91 frac11q in tsallis thermostatistics substituting this equation in the calculation eq40 is employed and then the self consistent equations are solved to obtain the order parameters the generalized free energy then follows within tt xmath92 rtln qzq where xmath93 now it is the time to give the results of application of tt to the luckhurst et als theory presented above the first stage in our calculation is the identification of the transition from the uniaxial nematic phase to the isotropic phase this may be accomplished for a given molecular symmetry by determining the value of xmath94 at which the orientational free energy vanishes provided there is no volume change at the transition the results of such calculations are listed tables 1 3 table 1 shows the transition temperatures the critical values of order parameters xmath49 and xmath95 at the transition temperature and the entropy changes at the transition concerning xmath96 for some values of xmath74 we can see immediately that increasing the entropic index increases xmath97 and so decreases the nematic isotropic transition temperature for constant xmath98 similar behaviour is also seen from tables 2 and 3 concerning xmath99 and xmath100 respectively also we see from tables 1 3 that increasing the entropic index increases the order parameter xmath101 at the transition for xmath102 and xmath100 however there is a somewhat suprising result that as xmath74 is increased the secondary order parameter xmath53 also increases at the transition for xmath103 while this is not the case for xmath96 that is the secondary order parameter decreases with increasing xmath74 for xmath104 we also observe from the tables 1 3 that the entropy changes at the transition increases with increasing xmath74 in fig1 we illustrate the dependence of the order parameter xmath105 and on reduced temperature for some xmath74 values with xmath106 only the results are plotted as a function of the reduced variable xmath107 which is identical to the reduced temperature xmath108 provided the coefficients xmath35 are themselves independent of temperature we observe that as xmath74 increased the transition becomes more markedly first order it is 
well known from [10] that the effect of introducing deviations from cylindrical symmetry is to lower the curves for xmath109 and xmath100, respectively, as well as to change their slopes. It is clear from Fig. 1 that a similar behaviour occurs as xmath74 is varied: the slope of the curve for xmath106 decreases with increasing entropic index xmath74, and the same behaviour can be expected for the other xmath68 values xmath110. The secondary order parameter xmath111 is plotted as a function of xmath49, that is xmath62, in Fig. 2 with xmath99 for various values of the entropic index. These results exhibit an unusual behaviour: the order parameter xmath53 is observed to increase with increasing xmath62, pass through a maximum and then decrease. It is interesting to note that as xmath74 is decreased the maximum value of the secondary order parameter becomes higher than that obtained for larger xmath74. Another important point is that the Maier-Saupe theory gives a universal value of xmath49 at the transition temperature, xmath112, whereas both Luckhurst et al.'s theory and the generalized form of the Maier-Saupe theory, including the present study, yield different values of this order parameter as the xmath68 and xmath74 parameters change. This fact is consistent with experiment, in that the experimental values of xmath49 for various nematics lie in the range xmath113 [30]. While xmath68 denotes the ratio xmath69, i.e. it accounts for the deviations from cylindrical symmetry, and xmath74 is a measure of the nonextensivity of the system, it is nevertheless interesting that the two parameters have a similar influence on the critical value of the order parameter xmath49 at the transition.
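To make the self-consistency procedure sketched above (the single-particle partition function, the order-parameter integrals of Eqs. (30)-(31) and the q-expectation of Eq. (40)) concrete, here is a minimal numerical sketch of how entries such as those in the tables below could be generated. It uses the standard rank-2 reduced Wigner functions, replaces the Boltzmann weight by the q-exponential and the ordinary averages by normalized (escort) q-expectations; the way the reduced couplings a and b are rebuilt from the order parameters and the ratio xmath68 is only schematic and is not taken from [10-13], and all names (exp_q, order_parameters, solve, T_red, lam, mix) are illustrative, not from the original papers.

import numpy as np

# Rank-2 reduced Wigner functions (standard closed forms).
def d2_00(beta):
    return 0.5 * (3.0 * np.cos(beta) ** 2 - 1.0)      # = P2(cos beta)

def d2_02(beta):
    return np.sqrt(3.0 / 8.0) * np.sin(beta) ** 2

def exp_q(x, q):
    """Tsallis q-exponential [1 + (1-q)x]^(1/(1-q)); ordinary exp as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 1e-300) ** (1.0 / (1.0 - q))

def order_parameters(a, b, q, n=400):
    """Normalized (escort) q-averages of d2_00 and d2_02*cos(2*gamma) for a
    single-particle pseudo-potential of the assumed form
    -u(beta, gamma) = a*d2_00(beta) + b*d2_02(beta)*cos(2*gamma)."""
    beta = np.linspace(0.0, np.pi, n)
    gamma = np.linspace(0.0, 2.0 * np.pi, n)
    B, G = np.meshgrid(beta, gamma, indexing="ij")
    u = a * d2_00(B) + b * d2_02(B) * np.cos(2.0 * G)
    f = exp_q(u, q)                        # unnormalized orientational distribution
    w = (f ** q) * np.sin(B)               # escort weight times solid-angle Jacobian
    S = np.sum(w * d2_00(B)) / np.sum(w)
    D = np.sum(w * d2_02(B) * np.cos(2.0 * G)) / np.sum(w)
    return S, D

def solve(T_red, lam, q, n_iter=300, mix=0.3):
    """Damped fixed-point iteration for (S, D) at reduced temperature T_red and
    biaxiality ratio lam; the coupling of (a, b) to (S, D, lam) below is schematic."""
    S, D = 0.6, 0.05                       # nematic starting guess
    for _ in range(n_iter):
        a = (S + 2.0 * lam * D) / T_red
        b = (lam * S + 2.0 * lam * lam * D) / T_red
        S_new, D_new = order_parameters(a, b, q)
        S, D = (1.0 - mix) * S + mix * S_new, (1.0 - mix) * D + mix * D_new
    return S, D

if __name__ == "__main__":
    for q in (0.99, 1.0, 1.01):
        print(q, solve(T_red=0.20, lam=0.2, q=q))

Since the grid spacing cancels in the ratios, plain Riemann sums are sufficient here; a full calculation would instead locate the transition by finding the temperature at which the generalized orientational free energy vanishes, as described in the text.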
Table 1 (xmath114 = 0.2):
q       k (xmath97)   xmath49   xmath95   S_c/Nk
0.99    4.335         0.332     0.0334    0.256
0.995   4.36          0.336     0.0335    0.265
1       4.386         0.341     0.0336    0.275
1.005   4.412         0.348     0.0337    0.301
1.01    4.438         0.354     0.0338    0.314

Table 2 (xmath114 = 0.3):
q       k (xmath97)   xmath49   xmath95   S_c/Nk
0.99    4.104         0.2       0.0399    0.101
0.995   4.132         0.203     0.0404    0.105
1       4.160         0.207     0.041     0.112
1.005   4.1894        0.212     0.0417    0.117
1.01    4.218         0.218     0.0424    0.126

Table 3

Many molecular theories assume that an essential characteristic of compounds forming liquid crystals is the rod-like shape of their constituent molecules, that is, a high length-to-breadth ratio. This assumption leads to the approximation that the molecules may be taken to be cylindrically symmetric. However, the molecules are in fact lath-like and thus do not possess this high symmetry. In this sense a molecular theory was developed [10] for an ensemble of such particles, based upon a general expansion of the pairwise intermolecular potential together with the molecular field approximation. Tsallis thermostatistics has been commonly employed as a nonextensive statistics in the study of physical systems, and many studies investigate nonextensive effects in different systems and phenomena. We have therefore taken up this molecular field theory [10] in the present study and investigated the effects of the nonextensivity for uniaxial nematics formed by non-cylindrically symmetric molecules. With this aim we have investigated the dependence of the long-range order parameter on reduced temperature and reported the variation of the critical values of the order parameters and of the entropy change at the transition temperature with the entropic index.

[1] A. Saupe, Z. Naturf. A 19 (1964) 161; P. Diehl and C. L. Khetrapal, NMR Basic Principles Prog. 1 (1969) 1.
[2] J. A. Pople, Proc. R. Soc. A 221 (1954) 498.
[3] R. Alben, J. R. McColl and C. S. Shih, Solid St. Commun. 11 (1972) 1081.
[4] W. Niederberger, P. Diehl and L. Lunazzi, Molec. Phys. 26 (1973) 571.
[5] M. J. Freiser, Phys. (1970) 1041; Liquid Crystals 3, Part 1, edited by G. H. Brown and M. M. Labes (1972), p. 281.
[6] C. S. Shih and R. Alben, J. Chem. 57 (1972) 3057.
[7] R. Alben, Phys. Rev. 30 (1973) 778.
[8] J. P. Straley, Phys. A 10 (1974) 1881.
[9] E. Sackman, P. Krebs, J. U. Rega, J. Voss and H. Möhwald, Molec. Crystals and Liq. Crystals 24 (1973) 283.
[10] G. R. Luckhurst, C. Zannoni, P. L. Nordio and U. Segre, Mol. Phys. 30 (1975) 1345.
[11] O. Kayacan, F. Büyükkılıç and D. Demirhan, Physica A 301 (2001) 255.
[12] O. Kayacan, Physica A 328 (2003) 205.
[13] O. Kayacan, Chem. Phys., in press.
[14] M. E. Rose, Elementary Theory of Angular Momentum, John Wiley & Sons, 1957.
[15] T. D. Shultz, Liquid Crystals 3, Part 1, edited by G. H. Brown and M. M. Labes (1972), p. 263.
[16] A. Saupe, Angew. Int. Edition 7 (1968) 97.
[17] See http://tsallis.cat.cbpf.br for an updated bibliography.
[18] C. Tsallis, J. Stat. Phys. 52 (1988) 479.
[19] C. Tsallis, R. S. Mendes and A. R. Plastino, Physica A (1998) 534.
[20] M. S. Reis, J. P. Araújo, V. S. Amaral, E. K. Lenzi and I. S. Oliveira, Phys. B 66 (2002) 134417.
[21] M. S. Reis, J. C. C. Freitas, M. T. D. Orlando, E. K. Lenzi and I. S. Oliveira, Europhys. 58 (2002) 42.
[22] E. Dagotto, T. Hotta and A. Moreo, Phys. 344 (2001) 1.
[23] M. Mayr, A. Moreo, J. A. Verges, J. Arispe, A. Feiguin and E. Dagotto, Phys. 86 (2001) 135.
[24] A. L. Malvezzi, S. Yunoki and E. Dagotto, Phys. B 59 (1999) 7033.
[25] A. Moreo, S. Yunoki and E. Dagotto, Science 283 (1999) 2034.
[26] J. Lorenzana, C. Castellani and C. D. Castro, Phys. B 64 (2001) 235127.
[27] B. M. Boghosian, P. J. Love, P. V. Coveney, I. V. Karlin, S. Succi and J. Yepez, Galilean-invariant lattice Boltzmann models with H-theorem, preprint (2002), e-print cond-mat/0211093.
[28] F. Baldovin and A. Robledo, Phys. E 66 (2002) 045104(R).
[29] F. Baldovin and A. Robledo, Nonextensive Pesin identity: exact renormalization group analytical results for the dynamics at the edge of chaos of the logistic map, preprint (2003), e-print cond-mat/0304410.
[30] A. Beguin, J. C. Dubois, P. Le Barny, J. Billard, F. Bonamy, J. M. Busisine and P. Cuvelier, Sources of Thermodynamic Data on Mesogens, Mol. Cryst. 115 (1984) 1.

Table 1. The order parameters and the entropy change at the nematic-isotropic transition with xmath96, for some xmath74 values.
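For reference, since the corresponding expressions above appear only as placeholders, we also record the standard nonextensive-statistics relations that the discussion relies on (the Tsallis entropy, its pseudo-additivity, the q-exponential and the normalized TMP q-expectation); these are the textbook forms from [17-19], written in an obvious notation rather than reconstructed from this paper:

\[
S_q \;=\; k\,\frac{1-\sum_{i=1}^{W} p_i^{\,q}}{q-1},
\qquad
\frac{S_q(A+B)}{k}=\frac{S_q(A)}{k}+\frac{S_q(B)}{k}+(1-q)\,\frac{S_q(A)}{k}\,\frac{S_q(B)}{k},
\]
\[
e_q(x)=\bigl[1+(1-q)\,x\bigr]^{1/(1-q)},
\qquad
\langle A\rangle_q=\frac{\sum_i p_i^{\,q}A_i}{\sum_i p_i^{\,q}},
\qquad
q\to 1:\;\; S_q\to -k\sum_i p_i\ln p_i,\;\; e_q(x)\to e^{x},\;\; \langle A\rangle_q\to\langle A\rangle .
\]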
Many molecular theories of nematic liquid crystals treat the constituent molecules as cylindrically symmetric. In many cases this approximation may be useful; however, the molecules of real nematics have lower symmetry. A theory was therefore developed [Mol. Phys. 30 (1975) 1345] for an ensemble of such particles, based upon a general expansion of the pairwise intermolecular potential together with the molecular field approximation. In this study we treat this molecular field theory by using Tsallis thermostatistics, which has been widely used for a decade to study physical systems. With this aim we investigate the dependence of the order parameters on temperature and report the variation of the critical values of the order parameters at the transition temperature with the entropic index. PACS numbers: 05.20.-y, 05.70.-a, 61.30.Cz, 61.30.Gd. Keywords: Tsallis thermostatistics; Maier-Saupe theory; nematic liquid crystals; non-cylindrically symmetrical molecules.
Introduction; Results and discussion; Conclusion; References; Figure and table captions
let xmath2 be the euclidean ball in xmath3 xmath4 centered at the origin with radius xmath5 let xmath6 xmath7 and xmath8 consider local minimizers of the dirichlet functional xmath9 over the closed convex set xmath10 ie functions xmath11 which satisfy xmath12 this problem is known as the boundary thin obstacle problem or the elliptic signorini problem it was shown in xcite that the local minimizers xmath13 are of class xmath14 besides xmath13 will satisfy xmath15 the boundary condition is known as the complementarity or signorini boundary condition one of the main features of the problem is that the following sets are apriori unknown xmath16 where by xmath17 we understand the boundary in the relative topology of xmath18 the free boundary xmath19 sometimes is said to be thin to indicate that it is expected to be of codimension two one of the most interesting questions in this problem is the study of the structure and the regularity of the free boundary xmath19 to put our results in a proper perspective below we give a brief overview of some of the known results in the literature the proofs can be found in xcite and in chapter 9 of xcite we start by noting that we can extend solutions xmath13 of the signorini problem to the entire ball xmath2 in two different ways either by even symmetry in xmath20 variable or by odd symmetry the even extension will be harmonic in xmath21 while the odd extension will be so in xmath22 in a sense those two extensions can be viewed as two different branches of a two valued harmonic function this gives a heuristic explanation for the monotonicity of almgren s frequency function xmath23 which goes back to almgren s study of multi valued harmonic functions xcite in particular the limiting value xmath24 for xmath25 turns out to be a very effective tool in classifying free boundary points by using the monotonicity of the frequency xmath26 it can be shown that the rescalings xmath27 converge over subsequences xmath28 to solutions xmath29 of the signorini problem in xmath30 such limits are known as blowups of xmath13 at xmath31 moreover it can be shown that such blowups will be homogeneous of degree xmath32 regardless of the sequence xmath33 it is readily seen from the the definition that the mapping xmath34 is upper semicontinuous on xmath19 furthermore it can be shown that xmath35 for every xmath25 and more precisely that the following alternative holds xmath36 this brings us to the notion of a regular point a point xmath37 is called regular if xmath38 by classifying all possible homogeneous solutions of homogeneity xmath39 the above definition is equivalent to saying that the blowups of xmath13 at xmath31 have the form xmath40 after a possible rotation of coordinate axes in xmath41 in what follows we will denote by xmath42 the set of regular free boundary points and call it the regular set of xmath13 xmath43 the upper semicontinuity of xmath44 and the gap of values between xmath39 and xmath45 implies that xmath42 is a relatively open subset of xmath19 besides it is known that xmath42 is locally a xmath46 regular xmath47dimensional surface in this paper we are interested in the higher regularity of xmath42 since the codimension of the free boundary xmath19 is two this question is meaningful only when xmath4 in fact in dimension xmath48 the complete characterization of the coincidence set and the free boundary was already found by lewy xcite xmath49 is a locally finite union of closed intervals we will use fairly standard notations in this paper by xmath3we denote 
the xmath50dimensional euclidean space of points xmath51 xmath52 xmath53 for any xmath54we denote xmath55 and xmath56 we also identify xmath57 with xmath58 thereby effectively embedding xmath41 into xmath3 similarly we identify xmath59 with xmath60 and xmath61 for xmath62 xmath63 if xmath31 is the origin we will simply write xmath64 xmath65 xmath66 and xmath67 let xmath68 be the euclidean distance between two sets xmath69 in this paper we are interested in local properties of the solutions and their free boundaries only near regular points and therefore without loss of generality we make the following assumptions we will assume that xmath13 solves the signorini problem in xmath70 and that all free boundary points in xmath71 are regular ie xmath72 furthermore we will assume that there exists xmath73 with xmath74 such that xmath75 next we assume xmath76 and that xmath77 moreover we will also assume the following nondegeneracy property for directional derivatives in a cone of tangential directions for any xmath78 there exist xmath79 and xmath80 such that xmath81 for any xmath82 where xmath83 is the unit normal in xmath41 to xmath19 at xmath31 outward to xmath49 and xmath84 for a unit vector xmath85 we explicitly remark that if xmath13 is a solution to the signorini problem then the assumptions hold at any regular free boundary point after a possible translation rotation and rescaling of xmath13 see eg xcite xcite following the approach of kinderlehrer and nirenberg xcite in the classical obstacle problem we will use the partial hodograph legendre transformation method to improve on the known regularity of the free boundary the idea is to straighten the free boundary and then apply the boundary regularity of the solution to the transformed elliptic pde this works relatively simply for the classical obstacle problem and allows to prove xmath86 regularity and even the real analyticity of the free boundary in the signorini problem the free boundary xmath19 is of codimension two and in order to straighten both xmath19 and xmath49 we need to make a partial hodograph transform in two variables namely for xmath13 satisfying the assumptions in section sec assumptions consider the transformation xmath87 consider also the associated partial legendre transform of xmath13 given by xmath88 formally the inverse of xmath89 is given by xmath90 and we can recover the free boundary in the following way xmath91 however we note that the mapping xmath89 is only xmath0 regular and even the local invertibility of such mapping is rather nontrivial besides even if one has a local invertibility of xmath89 the function xmath92 will satisfy a degenerate elliptic equation and apriori it is not clear if the equation will have enough structure to be useful concerning the first complication we make a careful asymptotic analysis based the precise knowledge of the blowups and this does allow to establish the local invertibility of xmath89 thm t invert let xmath13 be a solution of the signorini problem in xmath70 satisfying the assumptions in section sec assumptions then there exists a small xmath93 such that the partial hodograph transformation xmath89 in is injective in xmath94 then via an asymptotic analysis of xmath92 at the straightened free boundary points we observe that the fully nonlinear degenerate elliptic equation for xmath92 has a subelliptic structure which can be viewed as a perturbation of the baouendi grushin operator then using the xmath1 theory for the baouendi grushin operator and a bootstrapping argument we 
obtain the smoothness and even the real analyticity of xmath92 thm fb regul let xmath13 be as in theorem thm t invert and xmath92 be given by then exists xmath95 such that the mapping xmath96 is real analytic on xmath97 in particular xmath98 is locally an analytic surface the signorini problem is just one example of a problem with thin free boundaries many problems with thin free boundaries arise when studying problems for the fractional laplacian and using the caffarelli silvestre extension xcite to localize the problem at the expense of adding an extra dimension which makes the free boundary thin thus the signorini problem can be viewed as an obstacle problem for half laplacian see xcite we hope that the methods in this paper can be used to study the higher regularity of the free boundary in many such problems a different approach to the study of the higher regularity of thin free boundaries is being developed by de silva and savin xcite in particular in xcite they prove the xmath86 regularity of xmath99 free boundaries in the thin analogue of alt caffarelli minimization problem xcite their method is based on schauder type estimates rather than hodograph legendre transform used in this paper at about the same time as we completed this work de silva and savin xcite extended their approach to include the signorini problem as well as lowered the initial regularity assumption on the free boundary to xmath46 the main ingredient in their proof is an interesting new higher order boundary harnack principle applicable to regular as well as slit domains see also xcite the paper is organized as follows in section sec nond prop we study the so called xmath39homogeneous blowups of the solutions near regular points this is achieved by a combination a boundary hopf type principle in domains with xmath46 slits as well as weiss and monneau type monotonicity formulas in section sec hodograph we introduce the partial hodograph transformation and show it is a homeomorphism in a neighborhood of the regular free boundary points this is achieved by using the precise behavior of the solutions near regular free boundary points established in section sec nond prop in section sec legendre funct nonl we consider the corresponding legendre transform xmath92 and show some basic regularity of xmath92 inherited from xmath13 in section sec smoothness we study the fully nonlinear pde satisfied by xmath92 which is the transformed pde of xmath13 we show the linearization of the pde is a perturbation of the baouendi grushin operator using the xmath1 estimates available for this operator and a bootstrapping argument we obtain the smoothness of xmath92 which in turn implies the smoothness of the free boundary in section sec analyticity we give more careful estimates on the derivatives of xmath92 which imply that xmath92 and consequently xmath19 is real analytic we start our study by establishing a stronger nondegeneracy property for the tangential derivatives xmath100 than the one given in namely we want to improve the lower bound in to a multiple of xmath101 to achieve this we construct a barrier function by using the xmath46regularity of xmath19 to obtain a result that can be viewed as a version of the boundary hopf lemma for the domains of the type xmath102 to proceed we introduce some notations for xmath103 let xmath104 lem hopf barrier there exists a continuous function xmath105 on xmath106 and a small xmath107 such that xmath108 and xmath109 inspired by the construction in xcite we will show that the following function 
satisfies the conditions of the lemma xmath110 we next verify each of the properties in the lemma first of all is immediate next since xmath111 we have xmath112 therefore on xmath113 one has xmath114 hence there exists xmath107 such that xmath115 on xmath116 this implies further to show notice that xmath117 which yields xmath118 for small xmath119 as claimed next we show it is easy to check that xmath120 and xmath121 satisfy xmath122 xmath123 combining we obtain that for xmath124 xmath125 geqfrac12lefthat falphau2n3frachat falphauurightfrac12lefthat falphax122n3frachat falphax12x12rightendgathered since xmath126 and xmath127 is decreasing on xmath128 then by we have xmath129 in xmath130 this shows finally by we have xmath131 this completes the proof of the lemma using the function constructed in lemma lem hopf barrier as a barrier we have the improvement of the nondegeneracy for nonnegative harmonic functions in xmath130 cor hopf1 let xmath132 be a nonnegative superharmonic function in xmath130 xmath133 and xmath134 on xmath135 moreover suppose that xmath132 satisfies xmath136 then there exists xmath137 and xmath138 such that xmath139 let xmath140 and xmath119 be as in lemma lem hopf barrier then by and the continuity of xmath140 there exists xmath141 such that xmath142 which combined with gives that xmath143 then by the maximum principle xmath144 in particular xmath145 for xmath146 with xmath107 small from lemma cor hopf1 we obtain the following nondegeneracy property for the tangential derivatives of the solution to the elliptic signorini problem prop hopf let xmath13 be a solution to the elliptic signorini problem in xmath70 satisfying the assumptions in section sec assumptions then for each xmath82 and xmath78 there exist xmath147 depending on xmath148 such that xmath149 for each xmath82 we can rotate a coordinate system in xmath41 so that xmath150 then we have xmath151 where xmath152 since xmath13 is harmonic in xmath153 and xmath154 satisfies the nondegeneracy condition then xmath100 satisfies the assumptions of lemma cor hopf1 with a small difference that there is constant xmath155 in the definition of the set xmath156 above however by a simple scaling we can make xmath157 thus there exists xmath158 and xmath159 such that holds for our further study we need to consider the following rescalings xmath160 with xmath37 note that these are different form rescalings in the sense that the xmath161 norm of xmath13 is not preserved under the rescaling but it is better suited for the study of regular points first from the growth estimate see eg xcite xmath162 for xmath82 where xmath163 and xmath164 we know that the family xmath165 is locally uniformly bounded moreover by the interior xmath166 estimate see eg xcite xmath167 we get that xmath165 is uniformly bounded in xmath168 thus there exists xmath169 such that xmath170 in xmath171 for any xmath172 over a certain subsequence xmath173 it is also immediate to see that xmath174 is a global solution of the signorini problem ie a solution of in xmath30 furthermore it is important to note that xmath174 is nonzero because of the nondegeneracy provided by proposition prop hopf sometimes we will refer to the function xmath175 as the xmath39homogeneous blowup to indicate the way it was obtained the following weiss type monotonicity formula whose proof can be found in xcite implies that xmath174 is a homogeneous global solution of the signorini problem of degree xmath39 lem weiss let xmath13 be a nonzero solution of the signorini problem in xmath176 
for any xmath25 and xmath177 define xmath178 then xmath179 is nondecreasing in xmath180 moreover for aexmath181 we have xmath182 furthermore xmath183 for xmath184 if and only if xmath13 is homogeneous of degree xmath39 with respect to xmath31 in xmath185 ie xmath186 rem weiss the weiss type monotonicity formula above is specifically adjusted to work with rescalings namely by a simple change of variables one can show that xmath187 besides by the definition of regular points we also have that xmath188 since xmath189 prop unique let xmath13 be a solution of the signorini problem in xmath70 satisfying the assumptions in section sec assumptions then there exist two positive constants xmath190 and xmath191 depending only on xmath13 such that if xmath192 and that xmath193 over a sequence xmath28 then xmath194 with a constant xmath195 satisfying xmath196 we have already noticed at the beginning of section sec blowups that xmath175 is a nonzero global solution of the signorini problem besides by the weiss type monotonicity formula we will have xmath197 for any xmath198 hence by lemma lem weiss xmath175 is a homogeneous of degree xmath39 in xmath199 then by proposition 99 in xcite we must have the form xmath200 for some xmath201 and a tangential unit vector xmath202 we claim that xmath203 indeed we have xmath204 provided xmath205 so that xmath206 passing to the limit xmath207 we therefore have xmath208 hence for any xmath78 xmath209 this is possible only if xmath203 thus we have the claimed representation xmath210 the estimates on constant xmath195 now follow from the growth estimate and the nondegeneracy proposition prop hopf which are preserved under the xmath39homogeneous blowup rem unique this proposition also holds for the rescaling family with varying centers xmath211 where xmath212 xmath213 and xmath214 indeed by lemma lem weiss we have xmath215 hence applying dini s theorem form the classical analysis to the family of monotone continuous functions xmath216 on the compact set xmath217 we have that the above convergence is uniform on xmath217 hence passing to the limit xmath218 we obtain xmath219 for any xmath198 arguing as in proposition prop unique we conclude that xmath220 in order to get the uniqueness of the blowup limit we need to show that the constant xmath195 in proposition prop unique does not depend on the subsequence xmath221 but only depends on xmath31 this is a consequence of the following monneau type monotonicity formula xcite without apriori knowledge on the free boundary this formula is known to hold only at so called singular points ie xmath37 with xmath222 xmath223 however using the xmath46 regularity of the free boundary we will be able to establish this result also at regular points lem monneau let xmath13 be a solution of the signorini problem in xmath70 satisfying the assumptions in section sec assumptions for any xmath224 xmath164 and a positive constant xmath225 we define xmath226 where xmath227 then there exists a constant xmath228 which depends on the xmath46 norm of xmath229 xmath191 in xmath225 and xmath230 such that xmath231 is monotone nondecreasing for xmath232 for simplicitywe assume xmath233 and write xmath234 xmath235 letting xmath236 and using the scaling properties of xmath237 we have xmath238 next we compute the weiss energy functional xmath239 by remark rem weiss xmath240 hence xmath241 an integration by parts gives xmath242 noticing that xmath243 we have xmath244 similarly integrating by parts and using xmath245 in xmath65 we obtain xmath246 combining 
we have xmath247 since xmath175 is homogeneous of degree xmath39 the second integral above is zero moreover using xmath248 on xmath66 we have the third integral is equal to xmath249 recalling we have xmath250 noticing that xmath251 for xmath164 by lemma lem weiss and xmath252 xmath253 on xmath254 we have xmath255 using the growth estimate for xmath13 and explicit expression for xmath175 we have xmath256 since the free boundary xmath257 is a xmath46 graph and xmath258 we have xmath259 where xmath260 is a constant depending on xmath261 hence xmath262 which implies the claim of the lemma with xmath263 we can now establish the main result of this section prop uni let xmath13 be a solution of the signorini problem in xmath70 satisfying the assumptions in section sec assumptions then for xmath224 there exists a constant xmath264 such that xmath265 where xmath266 moreover the function xmath267 is continuous on xmath268 furthermore for any xmath269 and xmath5 xmath270 we first show xmath271 does not depend on the converging sequences given xmath82 let xmath272 be a converging sequence such that xmath273 in xmath171 for some xmath271 satisfying xmath274 by lemma lem monneau the mapping xmath275 is nonnegative monotone nondecreasing on xmath276 hence xmath277 this implies xmath271 does not depend on the converging sequences xmath272 next we show that xmath271 depends continuously on xmath278 fix xmath82 for any xmath279 let xmath280 be such that xmath281 for xmath282 fixed by the continuity of xmath13 we have xmath283 is continuous on xmath284 moreover from the explicit formulation of xmath285 as well as the xmath286 continuity of xmath287 the function xmath288 is continuous therefore there exists a positive xmath289 small enough such that for all xmath290 with xmath291 xmath292 for xmath293 by lemma lem monneau xmath294 let xmath295 in then xmath296 by the explicit expression of xmath285 there is a constant xmath297 such that xmath298 for any xmath299 this together with gives xmath300 this shows the continuity of xmath267 finally we show in fact xmath301 is continuous on the compact set xmath217 and it monotonically decreases to zero as xmath302 decreases to zero for each xmath31 hence by dini s theorem xmath303 as xmath295 uniformly on xmath217 thus one has xmath304it is not hard to see that xmath305 are subharmonic in xmath106 after the even reflection of xmath13 about xmath20 hence by the xmath306 estimates for the subharmonic functions we have xmath307 by an interpolation of hlder spaces ie for some absolute constant xmath308 xmath309 for xmath310 and xmath311 as well as a rescaling argument we obtain the continuous dependence on xmath31 of xmath271 gives the uniqueness of the blowups with varying centers cor varicenter let xmath13 be a solution of the signorini problem in xmath70 satisfying the assumptions in section sec assumptions let xmath312 such that xmath313 as xmath218 let xmath207 as xmath218 then for any xmath310 and xmath314 defined in theorem prop uni xmath315 this follows from the uniform continuity of xmath316 and theorem prop uni let xmath13 be a solution of the signorini problem in xmath70 satisfying the assumptions in section sec assumptions following the idea in the classical obstacle problem xcite we would like to use the method of partial hodograph legendre transforms to study the higher regularity of the free boundary in the signorini problem since xmath19 has codimension two the most natural hodograph transformation to consider is the one with respect to variable xmath317 
and xmath20 xmath318 the reader can easily check that if we do the partial hodograph transform in xmath317 variable only it will still straighten xmath19 however the image of xmath319 will not have a flat boundary and this will render this transformation rather useless by doing so we hope that there exists a small neighborhood xmath320 of the origin such that xmath89 is one to one on xmath321 however due to the xmath166 regularity of xmath13 the mapping xmath89 is only xmath0 near the origin hence the simple inverse function theorem which typically requires the transformation xmath89 to be from class xmath322 can not be applied here instead we will make use of the blowup profiles which contain enough information to catch the behavior of the solution near the free boundary points before stating the main results we make several observations a by the assumptions in section sec assumptions xmath323 in xmath324 and xmath325 on xmath254 this together with the complementary boundary condition gives us xmath326 300163250 00 shown for even extension of xmath13 in xmath20 variabletitlefig 5590 9085xmath327 1880 shown for even extension of xmath13 in xmath20 variabletitlefig 226120xmath328 24085xmath329 130120xmath330tsubstackyxyn1uxn1yn uxn b if we extend xmath13 across xmath331 to xmath106 by the even symmetry or odd symmetry in xmath20 then the resulting function will satisfy xmath332 hence xmath13 is analytic in xmath333 from xmath89 is also analytic in xmath333 c to better understand the nature of xmath89 consider the solution xmath334 of the signorini problem and find an explicit formula for xmath335 a simple computation shows that using complex notations the mapping xmath335 is given by xmath336 where by the latter we understand the appropriate branch loosely speaking this tells that xmath89 behaves like xmath337 function in the last two variables the observation above also suggests us to compose xmath89 with the mapping xmath338 which can be expressed by using complex notations denote xmath339 by xmath132 as xmath340 more explicitly alongside xmath89 we consider the transformation xmath341 xmath342 now xmath343 maps xmath344 to the upper half space xmath345 and xmath254 to the hyperplane xmath346 moreover xmath343 straightens the free boundary xmath19 as xmath89 and satisfies xmath347d the advantage of xmath343 is that by arguing as in ii above and making odd and even extensions of xmath13 in xmath20 we can extend it to a mapping on xmath106 moreover this extension will be the same in both cases unlike for the mapping xmath89 in particular this makes xmath343 a single valued mapping on xmath106 which is real analytic in xmath348 in what follows we will prove the injectivity of xmath343 in a neighborhood of the origin for that we will need the make the following direct computations 338163250 00titlefig 5590 9085xmath327 1880titlefig 24390 27885xmath349 138120xmath330t1substackwxwn1uxn12uxn2wn2uxn1 uxn prop computeu0 let xmath350 be given by xmath351 for xmath352 xmath353 then we have the following identities xmath354 direct computation from now on we will use a slightly different notation from theorem prop uni to denote the blowup limit ie for xmath355 we let xmath356 the following proposition is a consequence of theorem prop uni and the xmath357 regularity of xmath229 prop uniformity2 for any xmath279 there exists xmath358 depending on xmath359 and xmath13 such that for all xmath360 and xmath361 we have 1 xmath362 2 xmath363 has a harmonic extension at any xmath364 with xmath365 by making 
even or odd reflection about xmath20 hence if we let xmath366 then for any multi index xmath367 with xmath368 we have xmath369 i given any xmath279 by there is a positive constant xmath370 depending on xmath13 such that for any xmath371 xmath372 on the other hand there exists xmath373 depending on xmath359 modulus of continuity of xmath271 and xmath46 norm of xmath229 such that xmath374 taking xmath375 we proved i ii we observe that for small enough xmath302 and xmath376 the rescaled free boundary xmath377 satisfies xmath378 when xmath379 this follows immediately from the assumption xmath380 the rest of ii then follows from the estimates for the higher order derivatives of harmonic functions the next proposition is a consequence of proposition prop computeu0 and proposition prop uniformity2 which is useful to understand the transformation xmath89 since xmath89 fixes the first xmath381 coordinates it is more convenient to work on a tubular neighborhood of xmath19 defined as follows first we consider the projection map xmath382 since xmath383 xmath384 is continuous in xmath106 moreover it is easy to verify that for some constant xmath385 xmath386 next for xmath387 we let xmath388 xmath389 by the continuity of xmath384 and xmath390 is a tubular neighborhood of the part of the free boundary xmath19 lying in xmath391 prop square given any xmath279 let xmath392 be the same constant as in proposition prop uniformity2 then for any xmath393 we have xmath394 i first we observe that latexmathlabeleq ob1 for xmath393 let xmath396 applying proposition prop uniformity2i to xmath397 we have xmath398 xmath399 together with implies xmath400 rescaling back to xmath13 and using we obtain i ii the proof of ii is similar to that of i in xmath401 xmath89 is a smooth mapping and xmath402 given xmath403 consider xmath397 and rescaled point xmath404 as above by proposition prop uniformity2ii xmath405 this together with the explicit expression for xmath406 in proposition prop computeu0 gives xmath407 it is easy to check the following rescaling property xmath408 this combined with gives ii now we are ready to prove the main theorem of the section let xmath409 thm injective there exists a small constant xmath95 such that xmath89 is a homeomorphism from xmath410 to xmath411 which is relatively open in xmath412 moreover if we extend xmath89 to xmath413 via the even odd reflection of xmath13 about xmath20 then it is a diffeomorphism from xmath414 xmath415 onto an open subset in xmath3 by observation iii and iv instead of xmath89 we consider the map xmath416 which is first defined in xmath417 as and then extended to xmath106 via even or odd reflection of xmath13 in xmath20 variable we will first show xmath343 is a homeomorphism from xmath390 to xmath418 for sufficiently small xmath419 this is divided into three steps there exists xmath95 such that for any xmath360 and xmath420 xmath421 is injective in xmath422 herexmath423 in fact by proposition prop uniformity2 and the definition of xmath343 for any xmath279 there exists xmath424 such that xmath425 from proposition prop computeu0 xmath426 is a nondegenerate linear map hence if we take xmath427 sufficiently small xmath428 will be injective on the compact set xmath422 xmath343 is injective in xmath390 for xmath419 chosen as above it is clear that xmath343 is injective on xmath19 since it maps xmath429 to xmath61 thus it will be sufficient if we show the injectivity of xmath343 in xmath430 indeed suppose there exist xmath431 such that xmath432 necessarily xmath433 let 
xmath434 xmath435 xmath436 and wlog assume xmath437 a simple rescaling gives xmath438 on the other hand xmath432 implies xmath439 then recalling by proposition prop squarei for small enough xmath359 by choosing xmath419 small in step 1 one has xmath440 note that xmath441 and xmath442 have first xmath381 coordinates equal to zero thus xmath443 but this contradicts the injectivity of xmath444 on xmath422 obtained in step 1 xmath343 is a homeomorphism from xmath390 to xmath418 now xmath445 is an injective map moreover it is continuous because of the continuity of xmath446 thus by the brouwer invariance of domain theorem xmath418 is open and xmath343 is a homeomorphism between xmath390 and xmath418 now we proceed to show xmath447 is a homeomorphism for xmath419 chosen as above indeed recalling xmath448 we obtain the injectivity of xmath89 from the injectivity of xmath343 next we note that xmath449 is injective in xmath412 which contains xmath450 by observation i thus xmath451 for any subset xmath452 thus xmath89 is an open map from xmath453 to xmath412 since xmath343 is open and xmath449 is continuous combining with the continuity of xmath89 we obtain that xmath89 is an homeomorphism from xmath453 to xmath454 finally we notice that xmath89 is smooth on xmath414 or xmath455 after even or odd extension of xmath13 about xmath20 moreover by proposition prop squareii xmath456 is nonvanishing there for sufficiently small xmath419 hence xmath89 is a diffeomorphism by the implicit function theorem 200200 0134 and extension of xmath89titlefig 134134 and extension of xmath89titlefig 90180xmath457t 90160xmath458t1 25170xmath459 067 and extension of xmath89titlefig 12067 and extension of xmath89titlefig 90110xmath457t 9090xmath458t1 25100xmath460 00 and extension of xmath89titlefig 1200 and extension of xmath89titlefig 9040xmath457t 9020xmath458t1 730xmath461 20030xmath462 next we construct a space xmath461 by gluing two copies of xmath453 xmath463 together properly and extend xmath13 as well as xmath89 to xmath461 the advantage of doing this is the straightened free boundary xmath464 will be contained in xmath465 which is an open neighborhood of the origin in xmath3 this transfers the boundary singularity into the interior singularity which will be easier for us to deal with more precisely we fix the xmath419 chosen in theorem thm injective and consider xmath466 a family of subsets in xmath3 where xmath467 define xmath468 as follows xmath469 where xmath13 is the solution to the signorini problem satisfying the assumptions in the introduction xmath470 is the even reflection of xmath13 about xmath20 xmath471 and xmath472 let xmath473 be the corresponding partial hodograph legendre transform defined in consider the disjoint union of xmath474 xmath475 and denote the elements of it by xmath476 for xmath477 xmath478 define xmath479 now we define an equivalence relation on xmath480 as follows xmath481 it is easy to check from the definition of xmath482 and theorem thm injective that this equivalence relation identifies the points xmath476 and xmath483 if i xmath484 xmath485 or xmath486 ii xmath487 xmath488 or xmath489 in particular for xmath490 xmath476 and xmath483 are identified for all xmath491 let xmath492 denote the quotient space consider on xmath461 xmath493 it is immediate that xmath494 is continuous and injective moreover it is open from xmath461 to xmath3 by theorem thm injective and the special way we glue xmath474 hence we obtain that xmath494 is a homeomorphism from xmath461 to xmath495 in 
particular xmath495 is an open neighborhood of the origin in xmath3 which contains the straightened free boundary xmath464 we still denote the set xmath496 by xmath19 it is not hard to observe that xmath497 is a double cover of xmath498 with the covering map xmath499 hence xmath497 can be given a smooth structure which makes xmath500 into a local diffeomorphism in the local coordinate charts xmath501 we have xmath502 where xmath13 is the extended function via the even or odd reflection about xmath49 or xmath327 hence xmath503 is continuous on xmath461 smooth in xmath497 and xmath504 there similarly one can compute xmath505 which is a diffeomorphism on xmath506 by theorem thm injective apply theorem thm injective for the extended xmath13 this shows that xmath507 is a diffeomorphism from now on with slight abuse of the notation we will still denote xmath503 by xmath13 and xmath494 by xmath89 in the following we will simply write xmath508 xmath509 etc while having in mind that we are taking the derivatives in the local coordinates in this section we study the partial legendre transform of xmath13 and the fully nonlinear pde it satisfies we let xmath510 which is an open neighborhood of the origin and xmath511 be the straightened free boundary for xmath512 we define the partial legendre transform of xmath13 by the identity xmath513 where xmath514 it is immediate to check the following properties of xmath92 a xmath92 is odd about xmath515 and even about xmath516 b xmath92 is continuous in xmath517 smooth in xmath518 and xmath519 on xmath520 c a direct computation shows that in xmath521 xmath522 hence xmath523 can be written as xmath524 the jacobian matrix of xmath92 is then xmath525 where xmath526 since xmath527 and its differential has an continuous extension to xmath19 this together with the continuity of xmath523 and imply that xmath528 d the restriction of xmath523 to xmath529 is given by xmath530 which gives a local parametrization of the free boundary xmath19 thus the regularity of the free boundary is directly related to the regularity of xmath531 restricted to xmath529 a direct computation using and shows that xmath532 xmath533 since xmath534 in xmath535 the legendre function xmath92 satisfies the following fully nonlinear equation in xmath536 xmath537 multiplying both sides of by xmath538 we can write it in the form xmath539 which can be further rewritten as xmath540 where xmath541 xmath542 is the xmath543 matrix xmath544 in order to study the asymptotic of higher derivatives of xmath92 at the straightened free boundary xmath529 we study the blowup of xmath92 at points on xmath520 let xmath545 be the legendre function of xmath314 as in where xmath546 it is not hard to compute that xmath547 where xmath548 is the unit outer normal of xmath49 at xmath31 in particular at the origin xmath549 for xmath550 and xmath62 we consider the non isotropic dilation xmath551 and the rescaling at xmath552 note xmath553 xmath554 from and one can easily check that xmath555 here rescaling family xmath556 and xmath557 are defined on xmath558 where xmath559 are the topological spaces obtained by gluing four rescaled copies xmath560 together as in the construction of xmath461 to study the convergence of xmath561 to xmath545 we first show a lemma which concerns about in the local coordinate charts the uniform convergence of xmath562 to xmath563 the following two facts are easy to verify a xmath557 is bijective and xmath564 where xmath565 is the non isotropic dilation in in particular we have xmath566 b let 
xmath567 and let xmath568 then for any sufficiently small xmath302 and xmath569 we have xmath570 iexmath557 always maps the copy xmath571 into the corresponding quarter domain xmath572 let xmath573 with xmath574 denote the restriction of the covering map to xmath575 lem dilation let xmath576 and xmath577 then xmath578 uniformly in any compact subset xmath579 xmath580 moreover this convergence is also uniform for xmath552 varying in a compact subset of xmath529 given xmath576 and a compact set xmath581 by i and ii above there is a positive xmath582 such that xmath583 for any xmath584 by the rescaled version of proposition prop uniformity2i there exists xmath585 small such that xmath586 which is a bounded subset in xmath3 for any xmath587 we know from the xmath588 convergence of xmath556 to xmath314 that xmath589 moreover the limit xmath590 is continuous on xmath572 by a direct computation therefore xmath591 uniformly in xmath592 by theorem prop uni and the continuous dependence of xmath563 on xmath31 we have the above convergence is uniform for xmath552 in any compact subset of xmath529 next we show the following compactness results prop compact v let xmath592 be a compact subset in xmath593 then for any multi index xmath230 xmath594 uniformly in xmath592 moreover the above convergence is also uniform for xmath552 varying in a compact subset of xmath529 given xmath576 let xmath595 for xmath596 and xmath597 using and we can easily conclude from lemma lem dilation together with the uniform convergence of xmath556 to xmath314 that xmath598 converges to xmath599 uniformly in xmath592 for xmath600 using and one can express xmath598 in terms of xmath601 with xmath602 ie for fixed xmath230 xmath603 where xmath604 is some polynomial for xmath605 compact xmath606 is also compact and xmath607 by the local uniform convergence of xmath562 to xmath563 lemma lem dilation as well as the flatness of the free boundary xmath608 ie the hausdorff distance between xmath609 and xmath608 goes to zero as xmath302 goes to zero there exists xmath610 compact and xmath611 small such that for all xmath612 we have xmath613 and xmath614 note that implies that for any xmath612 and xmath580 xmath615 are harmonic in xmath616 thus for any multi index xmath230 we have xmath617 uniformly in xmath616 this combined with and lemma lem dilation gives the conclusion due to theorem prop uni and lemma lem dilation the above convergence is uniform in xmath552 varying the compact subset of xmath529 from proposition prop compact v one can get continuous extension of higher order properly weighted derivatives of xmath92 at xmath529 cor extension for each xmath576 xmath577 we extend the following functions to xmath552 by setting xmath618 then after such extension the above functions are continuous on xmath517 the proof is based on the following two facts a b and a blowup argument a for xmath619 xmath620 xmath621 b from the xmath46 regularity of xmath19 and the explicit expression of xmath545 we have the map xmath622 is continuous from xmath529 to xmath623 where xmath624 compact this together with proposition prop compact v gives that for any multi index xmath230 xmath625 we proceed to show the extended functions are continuous at xmath576 first they are continuous on xmath529 from the xmath286 dependence of xmath83 on xmath31 next for xmath626 we use xmath627 to denote xmath628 and let xmath629 then xmath630 as xmath631 we have xmath295 and xmath632 thus by b above for a fixed xmath624 which is compact and contains the set xmath633 we 
have xmath634 from the explicit expression of xmath545 see we have xmath635 this together with a and gives iiv in this section we show the fully nonlinear equation xmath636 in has a subelliptic structure in xmath462 let xmath637 be the space of xmath638 symmetric matrices and we may consider xmath639 as a smooth function on xmath637 let xmath640 for xmath641 be the linearization of xmath639 at xmath642 a direct computation shows that for xmath643 and xmath644 the linearization of xmath639 at xmath645 has the form xmath646 observe that one can write xmath647 with xmath648 and xmath649 symmetric matrices of the following forms xmath650 xmath651 where xmath652 with xmath653 for xmath654 and for xmath655 xmath656 note that xmath657 is smooth in xmath658 due to the smoothness of xmath92 there moreover by corollary cor extensioniiiiv and the intermediate value theorem xmath659 thus xmath657 has a continuous extension on xmath520 in particular xmath660 hence xmath657 is positive definite in a small neighborhood of the origin which implies that the linearized operator xmath661 has a subelliptic structure near the origin moreover using we have xmath662 which is the coefficient matrix for the baouendi grushin type operator xmath663 this indicates us to view the linearization of xmath639 in a neighborhood of the origin as a perturbation of baouendi grushin type operator we first recall the classical xmath1 estimate for the baouendi grushin operator for xmath664 xmath665 xmath666 the baouendi grushin operator is xmath667 in order to study the weak solution associated with xmath668 it is natural to consider the following function space associated with the hrmander vector fields xmath669 for xmath670 xmath671 a bounded open subset we define xmath672 by theorem 1 in xcite xmath673 is a separable banach space for xmath674 with the norm xmath675 denote by xmath676 the closure of xmath677 in xmath673 it is not hard to prove by using mollifier that if xmath678 and has compact support in xmath679 then xmath680 xmath674 in this paperonly the spaces with xmath681 are involved we list them below separately xmath682 to simplify the notation we will denote xmath683 we will need the sobolev embedding theorem for xmath684 and the xmath1 estimate for xmath668 similar results for more general subelliptic operators can be found in lots of literature like xcite and xcite since our case is much simpler we provide a relatively shorter and self contained proof in the appendix lem embedding let xmath685 be a bounded domain in xmath686 1 if xmath687 then xmath688 for xmath689 satisfying xmath690 ie there exists xmath691 such that for all xmath692 xmath693 2 if xmath694 then xmath695 ie there exists xmath696 such that for all xmath697 xmath698 see appendix thm grushin let xmath685 be a domain in xmath686 xmath699 and xmath700 is even let xmath701 with xmath702 then there is a positive constant xmath703 which only depends on xmath384 such that xmath704 see appendix next we state the local xmath1 estimates for the perturbed operator xmath705 where xmath706 can be decomposed into the form xmath707 with xmath708 and xmath709 a positive definite matrix with continuous entries and for some small positive xmath710 it satisfies xmath711 where xmath712 if xmath713 and xmath714 if xmath715 from now on we will work on the following scale invariant cylinder wrt xmath668 centered at the origin for xmath62 xmath716 prop perturb grushin let xmath717 satisfy xmath718 where xmath719 is a perturbed baouendi grushin operator given by then if is 
satisfied for sufficiently small xmath720 there exists xmath691 such that for any xmath721 xmath722 we write xmath723 if xmath724 then by xmath725 hence if we choose xmath726 in for any xmath727 we have xmath728 now we remove the compact support condition let xmath721 be fixed and let xmath729 let xmath730 be a smooth cut off function in xmath731 where xmath732 satisfy xmath733 when xmath734 xmath735 when xmath736 xmath737 when xmath738 xmath739 when xmath740 moreover xmath741 xmath742 xmath743 xmath744 let xmath745 then xmath746 by we have xmath747 for $u^p$ on the cylinder $\mathcal{C}_{\sigma r}$ we then compute xmath748 u directly using the estimates for the coefficient matrix xmath749 with xmath710 chosen less than 1 as well as the cut off functions we obtain the following for simplicity we write xmath750 then xmath751 applied to $u^p$ is bounded by $$C\left(\frac{1}{(1-\sigma)r}\bigl|\nabla_x(u^p)\bigr| + \frac{1}{(1-\sigma)r^2}\,|x|^2\,\bigl|\nabla_t(u^p)\bigr|\right) + \frac{C}{(1-\sigma)^2 r^2}\,u^p + \frac{C}{(1-\sigma)r}\,|x|\,\bigl|\nabla_t(u^p)\bigr|$$ where xmath752 is some absolute constant now using the interpolation between the classical sobolev spaces for xmath753 and young s inequality we have for any xmath279 xmath754 hence by rescaling and then taking the supremum in xmath755 we have xmath756 xmath757 xmath758 combining and choosing xmath359 xmath759 small enough depending on xmath384 we obtain the inequality in this section we show that the legendre function xmath92 which satisfies the fully nonlinear pde is smooth in a neighborhood of the origin we will work on the non isotropic cylinder at the origin xmath760 before proving the main theorem we make the following two remarks a by corollary cor extension and the discussion in section sec subelliptic coeff there is xmath761 small enough such that xmath762 for any xmath674 and the linearized operator xmath661 can be viewed as a perturbation of the baouendi grushin type operator in xmath763 b we note the following rescaling property if xmath92 solves in xmath763 then xmath764 with xmath765 and xmath766 the non isotropic dilation in will solve xmath767 hence by multiplying by a nonzero constant we may assume that the coefficient matrix xmath768 is of the form xmath707 in xmath763 with xmath649 continuous and satisfying for sufficiently small xmath710 where xmath710 is chosen such that the xmath1 estimate proposition prop perturb grushin applies the idea for showing smoothness is then to apply iteratively to the first order difference quotient of xmath769 but at each step we need to be careful that the non homogeneous rhs coming from differentiation is bounded in xmath1 for notational simplicity in what follows we will discuss the case when xmath770 then equation is simply xmath771 the arguments for xmath772 are the same thm smoothness v let xmath92 be the legendre function of xmath13 defined in let xmath761 be such that in xmath763 corollary cor extension holds for xmath92 and xmath768 can be written in the form of with xmath749 satisfying for sufficiently small xmath710 then xmath92 is smooth at the origin we let xmath773 denote the first difference quotient of xmath92 in the xmath774 direction step 1 show xmath775 for any xmath702 xmath587 by corollary cor extension it is enough to show it for xmath776 in fact xmath762 implies that xmath777 and xmath778 on xmath779 for xmath780 moreover by taking xmath781 on both sides of we get that xmath782 satisfies xmath783 with xmath784 since a translation in the xmath785 direction does not change the subelliptic structure of the operator by corollary cor extension and the xmath286 dependence of xmath786 on xmath31 ie xmath787 is still a perturbed baouendi grushin
operator in the form of with xmath749 satisfying in xmath788 then by proposition prop perturb grushin there exists xmath691 such that for any xmath789 xmath790 note that the rhs of is uniformly bounded in xmath140 moreover xmath791 and xmath792 here we slightly abuse of the notation to let xmath793 denote the weighted second order derivatives and first order derivatives wrt xmath515 and xmath516 thus xmath794 for any xmath587 with xmath795 depending on xmath796 xmath797 and xmath384 step 2 show xmath798 xmath587 take xmath799 to both sides of from step 1 xmath800 and it satisfies xmath801 applying xmath802 to with xmath803 then xmath804 and it satisfies xmath805 with xmath806 to estimate the xmath807 we first notice that xmath229 up to a translation xmath808 is a summation of the following terms xmath809 next since xmath810 for any xmath811 then by hlder s inequality for xmath812 satisfying xmath813 we have xmath814 apply hlder to estimate xmath815 for some xmath816 satisfying xmath817 we have xmath818 by corollary cor extensioniv the second term on the rhs of is bounded from the boundedness of xmath819 shown in step 1 the third term is uniformly bounded in xmath140 hence combining we have xmath820is uniformly bounded in xmath140 similarly by using corollary cor extension and step 1 we have xmath821 xmath822 are uniformly bounded in xmath140 therefore applying the xmath1 estimate proposition prop perturb grushin to one can find a constant xmath823 independent of xmath140 such that for any xmath721 xmath824 since the rhs is uniformly bounded in xmath140 this implies xmath825 from we know xmath826 xmath827 xmath828 multiplying a cut off function to extend the functions to xmath3 and applying the sobolev embedding lemma lem embeddingi we have for xmath829 with xmath830 the homogeneous dimension associated with xmath661 xmath831 repeat the above arguments starting from with xmath832 replaced by xmath833 note xmath834 if xmath835 after finite steps xmath700 which only depends on the dimension we will get xmath826 xmath836 with xmath816 larger than the homogeneous dimension xmath830 and hence by embedding lemma lem embeddingii are in xmath837 applying proposition prop perturb grushin again we obtain xmath838 for xmath702 noting that xmath839 is chosen arbitrary we complete the proof for step 2 show xmath840 xmath841 with xmath842 first from step 1 xmath843 for xmath844 with xmath845 this together with the boundedness of xmath846 obtained in step 2 gives xmath847 next we estimate xmath848 with xmath849 and xmath828 similar as xmath848 satisfies xmath850 with xmath851 by we immediately have xmath852 is uniformly bounded in xmath140 for any xmath702 applying the xmath1 estimate proposition prop perturb grushin we have xmath853 is uniformly bounded in xmath140 and therefore completes the proof for step 3 show xmath854 for xmath855 with xmath856 depending on xmath857 and dimension this is done in the same way as for xmath841 more precisely we first consider the equation of xmath858 in xmath859 xmath860 with xmath861 bounded uniformly in xmath140 for some xmath862 then we apply the sobolev embedding lemma and the xmath1 estimate iteratively to obtain the boundedness of xmath863 with some xmath864 as well as the boundedness of xmath865 for any xmath702 in particular this combined with the fact that xmath840 for all xmath866 gives that xmath867 is bounded with xmath868 for any xmath702 next we consider the equation for xmath869 xmath870 xmath871 and xmath842 similar as in step 3 one easily get the 
uniform boundedness of xmath872 for any xmath702 due to the boundedness of xmath873 xmath874 applying xmath1 estimate again we obtain the conclusion cor freeboundary let xmath13 be a solution to the signorini problem in xmath70 and let xmath19 be the regular set of the free boundary then xmath19 is smooth take xmath37 a regular free boundary point in a coordinate chart centered at xmath31 by xcite xmath19 can be locally expressed as the graph of a xmath46 function xmath875 with xmath876 consider in a neighborhood of xmath31 the partial hodograph legendre transform and the corresponding legendre function xmath92 defined in section sec legendre transf its by theorem thm smoothness v xmath92 is smooth at the origin since xmath877hence the smoothness of xmath92 at the origin implies the smoothness of xmath229 at xmath878 in this section we show that the legendre transform xmath92 is analytic in a neighborhood of the origin theorem thm fb regul thm main let xmath92 be the legendre transform defined in section sec legendre funct nonl then xmath92 is real analytic in a neighborhood of the origin we first make some more assumptions and observations 1 for simplicitywe work on xmath879 by the scaling invariant property mentioned at the beginning of section sec smoothness free bound we may assume xmath92 is smooth in xmath880 and solves the fully nonlinear equation there denote xmath881 from corollary cor extensioniiiiv and the intermediate value theorem we have xmath882 2 the following multi index notation will be used for a multi index xmath883 xmath884 let xmath885 xmath886 be two multi index in xmath887 we say xmath888 iff xmath889 xmath890 and xmath891 iff xmath892 xmath893 where xmath894 the strategy to prove the analyticity is as follows given xmath895 taking xmath896 on both sides of and using the summation convention on xmath897s we obtain that xmath769 satisfies xmath898 where xmath899 is the set of all permutations of xmath900 we will apply proposition prop perturb grushin for to get a fine estimate of the xmath1 norm of xmath769 in order to do so usually one needs a sequence of domains with properly shrinking radius as well as the corresponding sequence of cut off functions in this paper we use the trick introduced in xcite to avoid this technical trouble in the followingwe take and fix a cut off function xmath901 which satisfies xmath902 we will estimate xmath903 with xmath904 by xmath903 with xmath905 4 from nowon we will simply write xmath906 if the integral domain is xmath907 we will fix a xmath384 larger than the homogeneous dimension xmath830 by a universal constant we mean an absolute constant which only depends on xmath908 xmath237 dimension which is xmath909 in our setting or xmath384 chosen in particular independent of xmath910 the following observation will be useful in the proof xmath911 which implies xmath912 hence for multi index xmath230 with xmath913 xmath914 for some xmath915 xmath916 with xmath917 and xmath918 the following proposition is the main proposition of this section prop induction there exist universal constants xmath919 xmath920 and xmath921 such that for any xmath922 xmath923 theorem thm main will follow from proposition prop induction indeed by proposition prop induction there exists a universal xmath5 such that xmath924 hence by the classical sobolev embedding xmath925 for xmath835 chosen above one has xmath926 hence xmath92 is in gevrey class xmath927 which is the same as the class of real analytic functions before proving proposition prop induction we 
first show a lemma on the xmath928norm of xmath769 which roughly speaking is a consequence of the sobolev embedding lemma lemma lem embeddingii lem embedding2 for xmath929 assume proposition prop induction holds for xmath930 and xmath910 then there is a universal constant xmath931 such that xmath932 i first by lemma lem embedding2ii there is a universal constant xmath752 such that xmath933 applying to the rhs of above inequality we have xmath934 by proposition prop inductioni for xmath930 and xmath910 there exists a universal constant xmath935 which can be chosen larger than xmath237 such that xmath936 the estimate for xmath937 follows from the classical sobolev embedding and the assumption ii similar to i iii use corollary cor extension and ii above this is done by induction assume i and ii hold for xmath938 we want to show they hold for xmath910 let xmath230 be a multi index with xmath904 from xmath939 satisfies the following equation xmath940 where xmath941 by the xmath1 estimate for the perturbed baouendi grushin operator proposition prop perturb grushin there is a universal xmath942 such that xmath943 the estimate of xmath944 is standard in fact xmath945 for some xmath946 with xmath947 by the induction assumption iii for xmath930 the above rhs is bounded by xmath948 similarly there is some xmath949 with xmath950 such that xmath951 hence xmath952 next we estimate xmath953 proof of i let xmath954 xmath955 xmath956 then xmath957 we discuss the following two cases xmath958 if xmath959 then xmath960 which is by lemma lem embedding2iii and the induction assumption i for xmath930 bounded by xmath961 if xmath962 then xmath963 which is by lemma lem embedding2iiii and the induction assumption i for xmath930 bounded by xmath964 hence xmath965 by stirling s formula and the fact that xmath966 there is a universal constant xmath967 such that xmath968 hence xmath969 xmath970 or xmath971 we only discuss when xmath972 similarly we consider xmath973 and xmath974 if xmath973 the estimate is indeed included in case 1 if xmath974 then we simply have xmath975 arguing similarly as in case 1 we have xmath976 combining the above two cases we have xmath977 we combine for xmath978 and and use the induction assumption i for xmath979 to estimate xmath980 then xmath981 where xmath982 is a universal constant by for xmath978 xmath983 thus combining with induction assumption i for xmath930 and xmath984 we have xmath985 where xmath986 choosing xmath987 we proved i proof of ii the key step is to estimate xmath953 which is done similarly as for i more precisely lemma lem embedding2i and with xmath987 imply xmath988 this together with the induction assumption ii up to xmath930 yields for any multi index xmath230 xmath989 xmath990 in the estimate of xmath953 we first consider when xmath991 by lemma lem embedding2ii and xmath992 hence xmath993 since for xmath230 with xmath904 fixed we have the following identity eg proposition 21 in xcite xmath994 then if we let xmath995 xmath996 xmath997 the rhs of is bounded by xmath998 by stirling s formula and if we still use xmath999 to denote the universal constant from it then the above quantity is bounded by xmath1000 the arguments for the case xmath1001 are exactly the same hence xmath1002 the rest of the proof for ii is the same as for i and we do not repeat here in the appendix we give a short proof for lemma lem embedding the sobolev embeddings for xmath1003 see also xcite and theorem thm grushin the xmath1 estimates for baouendi grushin operator both statements are well known 
and available in much greater generality checking that the general results apply however requires familiarity with the theory of subelliptic operators for completeness and the convenience of the reader we provide complete proofs we first prove i suppose that xmath1004 and extend xmath13 to be zero outside xmath679 for every xmath1005 xmath1006 with xmath1007 xmath1008 where xmath1009 is a xmath322 function satisfying xmath1010 xmath1011 let xmath1012 then by a direct computation xmath1013 hence xmath1014 which by a change of variable xmath1015 gives xmath1016 by young s inequality for convolution and fubini for xmath1017 xmath1018 observe that for xmath1019 xmath1020 for any xmath1021 and for xmath1022 xmath1023 hence if we let xmath1024 after integrating over xmath1025 by fubini and a change of variable we have xmath1026 by the hardy littlewood sobolev inequality for the chosen xmath816 satisfying xmath1027 we have xmath1028 next we prove ii since this property is local in nature given xmath1029 we may assume by multiplying a characteristic function that xmath1030 in xmath1031 hence the integration in can be written from xmath878 to xmath155 for some xmath1032 large enough next we give a short proof for the xmath1 estimate for baouendi grushin operator xmath668 the idea is to treat xmath668 as the projected operator of sub laplacian on the heisenberg reiter type group onto certain quotient space the transference method in xcite links the xmath1 estimate on the group to the xmath1 estimate for xmath668 in the quotient space consider xmath1037 equipped with the group law xmath1038 where xmath1039 is the space of xmath1040 real matrices and xmath1041 are understood as the matrix multiplication let xmath1042 xmath1043 is an example of heisenberg reiter group which is by xcite a nilpotent lie group of step xmath45 with dilation xmath1044 and homogeneous dimension xmath1045 it is also immediate that the lebesgue measure xmath1046 is a left and right haar measure on xmath1043 now we recall several facts about the nilpotent lie groups with dilations a direct computation shows that the horizontal vector fields in the lie algebra xmath1047 that agree at the origin with xmath1048 and xmath1049 are xmath1050 consider the sub laplacian in xmath1043 xmath1051 it is easy to check that xmath1052 xmath1053 are hrmander vector fields then xmath1054 is hypoelliptic by theorem 21 in xcite there exists a unique fundamental solution of type xmath45 ie smooth away from xmath878 and homogeneous of degree xmath1055 for xmath1054 which we denote by xmath1056 for xmath1057we have xmath1058 since xmath1059 and xmath1053 are left invariant then xmath1060 where xmath642 is a quadratic polynomial in xmath1059 and xmath1053 by xmath1061 estimates for the singular integral in homogeneous groups see for example xiii 53 in xcite we have xmath1062 and in particular using we have xmath1063 now let xmath1064 one can easily verify that xmath1065 is a closed subgroup of xmath1043 with a bi invariant measure the lebesgue measure xmath1066 let xmath1067 be the quotient space and xmath1068 the natural quotient mapping given xmath1069 define xmath1070 by theorem 1521 in xcite the above correspondence xmath1071 defines a linear mapping of xmath1072 onto xmath1073 we identify xmath1074 with xmath1075 a direct computation gives xmath1076 since the lebesgue measure xmath1077 and xmath1078 are bi invariant on xmath1043 and xmath1065 correspondingly then by theorem 1524 in xcite there exists a unique right invariant measure up to a 
constant on xmath1079 which is necessarily the lebesgue measure xmath1080 consider the following vector fields which are the push down vector fields of xmath1059 xmath1053 on xmath1081 xmath1082 xmath1083 note that the baouendi grushin operator can be written as xmath1084 to see that they are indeed the push down vector fields on xmath1074 we notice that for xmath1057 xmath1085 similarly using one can check that xmath1086 hence xmath1087 let xmath919 be the representation of xmath1043 acting on xmath1088 given by the right translation ie given xmath1089 xmath1090 it is easy to check that xmath1091 are bounded from xmath1088 to xmath1088 by xmath597 given xmath1092 xmath702 we consider the following linear map xmath89 on xmath1088 xmath1093 notice that xmath1043 is locally compact and by the convolution operator xmath1094 is bounded in xmath1095 thus the transference method theorem 24 in xcite applies and we have for xmath1096 xmath1097 in particular if we take xmath1098 then xmath1099 in the end we show that xmath1100 indeed by and xmath1101 by and fubini then using and we have xmath1102 observing that xmath1103 xmath1104 xmath1105 and xmath1106 can be written as xmath1107 for some quadratic polynomial xmath642 we complete the proof of the theorem
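since the concrete expressions in the appendix are hidden behind the xmathN placeholders, it may help to record, purely as a reading aid, the standard model case of the operator and geometry that the construction above (horizontal vector fields, anisotropic dilation, homogeneous dimension, scale invariant cylinders) is usually stated for; the normalization used in this paper may differ:
\[
\mathcal{G}u \;=\; \Delta_x u + |x|^2\,\Delta_t u \;=\; \sum_{i=1}^{n}\partial_{x_i}^2 u \;+\; \sum_{i=1}^{n}\sum_{j=1}^{m}\bigl(x_i\,\partial_{t_j}\bigr)^2 u ,
\qquad (x,t)\in\mathbb{R}^n\times\mathbb{R}^m ,
\]
which satisfies \(\mathcal{G}(u\circ\delta_\lambda)=\lambda^{2}\,(\mathcal{G}u)\circ\delta_\lambda\) under the anisotropic dilation \(\delta_\lambda(x,t)=(\lambda x,\lambda^{2} t)\), has homogeneous dimension \(Q=n+2m\), and whose natural scale invariant cylinders are \(\{|x|<r,\ |t|<r^{2}\}\).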
in this paper we study the higher regularity of the free boundary for the elliptic signorini problem by using a partial hodograph legendre transformation we show that the regular part of the free boundary is real analytic the first complication in the study is the invertibility of the hodograph transform which is only xmath0 this is overcome by studying the precise asymptotic behavior of the solutions near regular free boundary points the second and main complication is that the equation satisfied by the legendre transform is degenerate however it has a subelliptic structure and can be viewed as a perturbation of the baouendi grushin operator by using the xmath1 theory available for that operator we bootstrap the regularity of the legendre transform up to real analyticity which in turn implies the real analyticity of the free boundary
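as a complement to the bootstrap summarized above, the following is the standard criterion (stated here in generic notation, since the corresponding bounds in the text appear only as xmathN placeholders) by which the factorial growth estimates produced in the induction translate into real analyticity:
\[
\text{if}\quad \sup_{B_\rho}\bigl|\partial^\alpha v\bigr| \;\le\; C\,M^{|\alpha|}\,\alpha! \qquad\text{for every multi-index }\alpha,
\]
then the taylor series of \(v\) at any point of \(B_\rho\) converges to \(v\) on a ball of radius comparable to \(1/M\), so \(v\) is real analytic in \(B_\rho\); bounds of this type define gevrey class 1, which coincides with the class of real analytic functions.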
introduction @xmath39-homogeneous blowups of solutions partial hodograph transform partial legendre transform and a nonlinear pde smoothness real analyticity appendix
the pulsar psr b1951 32 located at the center of the morphologically peculiar radio nebula ctb 80 is a 395msec radio pulsar clifton et al 1987 kulkarni et al 1988 with a characteristic age of xmath5 yr and an inferred surface dipole magnetic field of xmath6 g an x ray point source was observed within the x ray nebula related to the radio core of ctb 80 becker helfand szymkowiak 1982 seward 1983 wang seward 1984 search for x ray pulsation from this source with exosat yielded inconclusive evidence for pulsation confidence level of 97 by gelman buccheri 1987 and 93 by angelini et al the pulsed emission was detected by rosat at a 99 confidence level safi harb gelman finley 1995 which shows a single peak roughly consistent in phase with the radio emission the overall spectrum of this point source in 01 24 kev is best fitted by a power law with a photon spectral index xmath7 and an estimated pulsed fraction of 035 the egret instrument on cgro observed and detected gamma ray pulsation from psr b1951 32 above 100 mev ramanamurthy et al 1995 making it a member of egret hard gamma ray pulsar family with a similar age to vela and geminga the gamma ray lightcurve shows two peaks at phase 016 and 060 with phase 00 being the radio peak its spectrum in the egret energy range follows a power law with a photon spectral index of about xmath8 ramanamurthy et al 1995 fierro 1995 over about two decades of photon energy recently pulsed emission is reported from the comptel instrument in the 075 100 mev band kuiper et al the osse and batse instruments on cgro only reported upper limits of pulsed emission in the lower energy band schroeder et al1995 wilson et al 1992 there have been a number of models proposed to explain the gamma ray emission with dramatically different emission sites some at or very near the surface of the neutron star and some very far away recently ho chang 1996 proposed a geometry independent argument to constrain the possible site and mechanism of the gamma ray emission based on the commonality of power law emission in the egret and possibly comptel pulsars in such arguments it is important to know whether and how the gamma ray power law spectra turn over towards low energy see section 4 for more discussions to gain better understanding of the overall spectral behavior especially between kev and mev we conducted an observation of psr b1951 32 using both pca and hexte on board rxte during cycle 1 analysis of the 19k second pca data does not yield conclusive evidence for pulsation from 20 to 130 kev the derived 2xmath1 upper limits provide support for the hard turn over for the high energy gamma ray emission it also indicates that the soft x ray pulsation observed by rosat has a very soft spectrum we described the observation in section 2 the analyses and results for the pca data are discussed in section 3 we discuss the theoretical implications of this observation and future work in section 4 the pca and hexte on board rxte were pointed at psr b1951 32 on march 24 1996 mjd 50166 for about 105 hours including earth occultations the rxte mission spacecraft and instrument capabilities are described in swank et al 1995 giles et al1995 and zhang et al 1993 the pca consists of five essentially identical pcus with a total effective area of 6729 xmath9 with no imaging capability the field of view is one degree after examining the data two exclusions were applied to the data set first data from the pca pulse height channel 36 255 130 900 kev are excluded due to high instrumental noise second we 
observed unexplained anomalous increase during two intervals of our exposure under the advice of rxte guest observerfacility experts data obtained during these two intervals were excluded in the second half of the observation two of the five pcus were turned off the overall usable data used for this analysis contain two segments of xmath10 and xmath11 for a total of xmath12 or equivalently a total integration time of 19232 seconds and an average effective area of 53633 xmath9 around the same epoch of the rxte observation psr b1951 32 was also monitored at jodrell bank radio observatory the radio ephemeris is summarized in table 1 and used as the input for pulsation search the data were reduced to the barycenter and analyzed using the jpl de200 ephemeris the pulsar position listed in table ephemeris and standard rxte reduction package ftools v352 and xanadu xronos v402 lightcurve folding was performed separately for each of four typical energy bands and various combinations using the radio ephemeris in table ephemeris the four typical energy bands are commonly designated as band 1 through 4 with each covering pca channels 0 13 20 48 kev 14 17 48 63 kev 18 23 63 85 kev and 24 35 85 130 kev respectively none of the folded lightcurves showed significant deviation from a model steady distribution under the pearson s xmath13test leahy et al 1983a b specifically the xmath13 values for the folded lightcurves shown in figure lightcurve are for 19 degrees of freedom 274 for band 1 211 for band 2 and 838 for the combined band of 3 and 4 in addition to instrumental and cosmic x ray background the dc component is mostly likely the contribution from the rosat point source and its associated x ray nebula to further ascertain the absence of pulsation we performed the bin independent parameter free xmath0test de jager swanepoel raubenheimer 1989 in this analysis all detected raw photons with the corrected arrival time are used the xmath0test was applied to the data in different energy bands and various combinations the results of the xmath0test all show high probability that the data are consistent with steady source except for band 1 and 2 the xmath0values are 7286 and 9334 for bands 1 and 2 both at xmath141 this yields a 55 and 24 probability of being consistent with a steady source applying the straight xmath15test the rayleigh test which is more appropriate if the underlying pulse profile is sinusoidal the probability of the data being consistent with a steady source is 29 and 09 for bands 1 and 2 based on these analyses we do not consider the null probability although intriguingly small using the xmath0test provides conclusive evidence of pulsation from psr b1951 32 in the xte pca energy band the upper limit of pulsed flux is estimated following the prescription given by ulmer et al 1991 assuming a duty cycle of 05 and combining bands 2 and 3 to yield a comparable total number of counts to those in bands 1 and 4 individually we obtain the following 2xmath1 upper limits which are also shown in figure spectrum xmath2 for 20 48 kev xmath3 for 48 85 kev and xmath16 for 85 130 kev gamma ray pulsar the current xte pca upper limits provide support to the combined cgro observation that the gamma ray pulsed emission from psr b1951 32 follows a power law with a significant break towards low energy egret and comptel detection along with osse and batse upper limits as indicated in figure spectrum such spectral behavior is seen in the vela and geminga pulsars for psr b1951 32 the break energy photon energyat which 
there is a significant break in photon spectral indices to say harder than xmath17 is estimated to be between 70 kev and 3 mev as noted in ho chang 1996 this common trait could play an important role in the theoretical modeling of this family of egret pulsars the power law of these pulsars typically covers two orders of magnitude in the egret band with best fit photon spectral indices in the range of xmath18 to xmath8 these photon spectral indices can not be produced by a mono energetic relativistic electron distribution under currently proposed radiation mechanisms the most likely origin of this power law behavior is the cooling energy loss through the dominant radiation mechanism responsible for the gamma rays for example the most simplistic cooling model for a steady state electron distribution will yield a photon spectral index of xmath19 for synchrotron radiation and xmath20 for curvature radiation the cooling spectrum will continue towards low energy until the electron distrbution in energy space is no longer affected by cooling ie the cooling spectrum turns hard at the break energy which corresponds to the location and electron energy where the radiative cooling time scale is comparable to the dynamical time scale of the relativistic motion of the electrons such an argument allows us to constrain the radiation mechanism and more importantly the emission site which to date remains unsettled with great bifurcation among gamma ray pulsar models following this argument and examining various radiation mechanisms we find that for psr b1951 32 the gamma ray pulsations are most likely generated by synchrotron radiation with a typical pitch angle of 01 to 0001 and emission site is about xmath21 to xmath22 cm from the star ie in the outer magnetosphere x ray pulsar safi harb gelman finley 1995 reported rosat observation of psr b1951 32 with an estimated pulsed fraction of 035 and an overall spectrum following a power law with a photon spectral index of xmath7 to date there is no published spectrum for the pulsed component safi harb et al 1995 estimated the soft x ray pulsation duty cycle to be 01 such a duty cycle will lower the xte upper limit estimated above it is clear from figure spectrum that the xte pca observation necessitates a steep drop off at around 2 kev for the pulsed component this is consistent with safi harb et als report of no pulsation near 2 kev it is almost certain that for psr b1951 32 the rosat observed pulsation below 13 kev is separate from the egret comptel observed gamma ray pulsation more than likely they are from different origin and emitted at different location in summary xte pca observation over 19k seconds shows no definitive evidence of pulsation the upper limits can be used to help constrain theoretical models our analysis does show tantalizing hints in addition to the small null probability from the xmath0test the peaks for bands 1 and 2 in figure lightcurve taken at face value are separated by 045 in phase reminiscent to that of the geminga pulsar in 007 15 kev halpern ruderman 1993 a long exposure on xte eg 100 ksecs will provide better statistics and help advance our understanding of this and other similar pulsars we thank the xte team in xte gof at gsfc especially james lochner for their help in data reduction and analysis we also thank andrew lyne for providing the radio ephemeris and comments on the manuscript many useful discussions with ed fenimore and james theiler are gratefully acknowledged we are appreciative of the referee f seward for his many 
helpful comments in improving this paper this work was performed under the auspices of the us department of energy and was supported in part by the rxte guest observer program and cgro guest investigator program
angelini l white ne parmar an smith a stevens ma 1988 apj 330 l43
becker rh helfand dj szymkowiak ae 1982 apj 255 557
clifton tr et al 1987 iau circ 4422
de jager oc swanepoel jwh raubenheimer bc 1989 a&a 221 180
fierro jm 1995 phd thesis stanford university
giles ab jahoda k swank jh zhang w 1995 publ astron soc australia 12 219
halpern jp ruderman m 1993 apj 415 286
ho c chang h k 1996 apj in preparation
kuiper l et al 1996 a&a submitted
kulkarni sr et al 1988 nature 331 50
leahy da et al 1983a apj 266 160
leahy da elsner rf weisskopf mc 1983b apj 272 256
ögelman h buccheri r 1987 a&a 186 l17
ramanamurthy pv et al 1995 apj 447 l109
safi harb s ögelman h finley jp 1995 apj 439 722
schroeder pc et al 1995 apj 450 784
seward fd 1983 in supernova remnants and their x ray emission ed j danziger p gorenstein proc iau symp 101 dordrecht reidel 405
strickman ms 1996 apj 460 735
swank jh jahoda k zhang w giles ab 1995 in the lives of the neutron stars ed alpar kiziloğlu j van paradijs nato asi series c 450 boston kluwer 525
ulmer mp purcell wr wheaton wa mahoney wa 1991 apj 369 485
wang zr seward fd 1984 apj 285 607
wilson rb et al 1992 proc aip conf 280 291
zhang w et al 1993 proc spie 2006 324
table 1 radio ephemeris of psr b1951+32
validity interval (mjd): 50057 - 50207
epoch xmath23 (mjd): 50132.000000201
xmath24: xmath25
xmath26: xmath27
xmath28 (hz): 25.2963865292632
xmath29 (hz/s): xmath30
xmath31 (hz/s/s): xmath32
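as an illustration of the timing analysis described above - folding the barycentered photon arrival times with the radio ephemeris of table 1, applying the pearson chi-squared test to the folded lightcurve with 20 phase bins (19 degrees of freedom) and applying the rayleigh test - the following is a minimal python sketch; the function names and simulated arrival times are illustrative only, the h test of de jager et al 1989 used in the text builds on the same harmonic sums but is not reproduced here, and this is not the ftools/xronos pipeline actually used in the analysis.

import numpy as np
from scipy import stats

def fold_phases(t, t0, f0, f1=0.0, f2=0.0):
    """Pulse phase of barycentered arrival times t (s) relative to epoch t0,
    using the spin frequency f0 (Hz) and its first/second derivatives."""
    dt = t - t0
    phase = f0 * dt + 0.5 * f1 * dt**2 + f2 * dt**3 / 6.0
    return phase % 1.0

def chi2_test(phases, nbins=20):
    """Pearson chi-squared of the folded lightcurve against a flat profile."""
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.sum() / nbins
    chi2 = ((counts - expected)**2 / expected).sum()
    # null probability for nbins - 1 degrees of freedom
    return chi2, stats.chi2.sf(chi2, nbins - 1)

def rayleigh_test(phases):
    """Bin-independent Rayleigh (Z_1^2) test, sensitive to a sinusoidal profile."""
    n = len(phases)
    z2 = 2.0 / n * (np.cos(2 * np.pi * phases).sum()**2 +
                    np.sin(2 * np.pi * phases).sum()**2)
    # under the null hypothesis Z_1^2 follows chi-squared with 2 dof
    return z2, np.exp(-0.5 * z2)

# usage with simulated (unpulsed) arrival times over the quoted exposure;
# f0 is consistent with the ~39.5 ms period of PSR B1951+32, while a real
# analysis would use the full ephemeris of table 1 including the derivatives
times = np.sort(np.random.uniform(0.0, 19232.0, size=200000))
phases = fold_phases(times, t0=0.0, f0=25.2963865292632)
print(chi2_test(phases), rayleigh_test(phases))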
we report results of rxte observations of psr b1951+32 using the pca instrument for 19k seconds during 1996 march 24 we applied the contemporaneous radio ephemeris and various statistical tests to search for evidence of pulsation these analyses yield intriguing yet inconclusive evidence for the presence of pulsation in the time series the confidence level for the presence of pulsation is 94.5% in the 2.0 - 4.8 kev band and 97.6% in the 4.8 - 6.3 kev band based on the xmath0 test under the premise of non detection of pulsation we derive estimated 2xmath1 upper limits for the pulsed flux to be xmath2 in the 2.0 - 4.8 kev band xmath3 in the 4.8 - 8.5 kev band and xmath4 in the 8.5 - 13.0 kev band these upper limits are consistent with the trend of spectral turn over from the high energy gamma ray emission as suggested by the osse upper limit such a turn over strongly suggests the outer magnetosphere as the emission site for the pulsed gamma rays these rxte upper limits for x ray pulsation are on the other hand not consistent with the extrapolation of the reported power law spectrum of the point source observed by rosat in the 0.1 - 2.4 kev band assuming a constant pulse fraction the pulsed soft x ray emission detected by rosat must follow a much softer spectrum than that of the overall point source
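the 2xmath1 upper limits quoted above were derived, according to the article, with the prescription of ulmer et al 1991 for an assumed duty cycle; that prescription is not reproduced here, but the sketch below shows a simplified poisson counting version of the same idea (the pulsed counts must not exceed a few standard deviations of the fluctuation of the unpulsed level inside an on pulse window of the assumed duty cycle, and the count limit is converted to a rate per unit area using the exposure and effective area); the total count number in the call is a placeholder, while the exposure and the approximate average effective area are the values quoted in the article.

import numpy as np

def pulsed_rate_upper_limit(n_total, exposure_s, area_cm2,
                            duty_cycle=0.5, n_sigma=2.0):
    """Simplified n-sigma upper limit on the pulsed count rate.

    Assumes the source is unpulsed, so counts in an on-pulse phase window
    of width duty_cycle fluctuate around duty_cycle * n_total with roughly
    Poisson variance; any pulsed excess must be smaller than n_sigma times
    that fluctuation.  This is a simplified stand-in for the Ulmer et al.
    (1991) prescription cited in the text, not a reproduction of it.
    """
    background_in_window = duty_cycle * n_total
    counts_limit = n_sigma * np.sqrt(background_in_window)
    rate_limit = counts_limit / exposure_s        # counts / s
    per_area = rate_limit / area_cm2              # counts / s / cm^2
    return counts_limit, rate_limit, per_area

# illustrative call: 19232 s exposure and ~5363 cm^2 average effective area
# as quoted in the article; n_total is a placeholder
print(pulsed_rate_upper_limit(n_total=2.0e6, exposure_s=19232.0,
                              area_cm2=5363.0, duty_cycle=0.5))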
introduction observation analysis and results discussion
it is an interesting fact that the liquid state has proven to be difficult to describe by theory throughout the history of condensed matter research xcite the problem extends beyond condensed matter and exists in other areas where strong interactions are combined with dynamical disorder such as field theory in a weakly interacting system such as a dense gas the potential energy is much smaller than the kinetic energy these systems are amenable to perturbation treatment giving corrections to the non interacting case xcite perturbation approaches have been widely explored to calculate liquid thermodynamic properties but have not been able to agree with experiments for example the analysis of tractable models such as van der waals or hard spheres systems returns the gas like result for the liquid constant volume specific heat xmath0 xcite this is in contrast to experimental results showing that xmath1 of monatomic liquids close to the melting point is nearly identical to the solid like result xmath2 and decreases to about xmath3 at high temperature xcite as expected on general grounds the perturbation approach does not work for strongly interacting systems strong interactions are successfully treated in solids crystals or glasses where the harmonic model is a good starting point and gives the most of the vibrational energy however this approach requires fixed reference points around which the energy expansion can be made with small vibrations aroundmean atomic positions solids meet this requirement but liquids seemingly do not liquid ability to flow implies that the reference lattice is non existent therefore liquids seemingly have no simplifying features such as small interactions of gases or small displacements of solids xcite in other words liquids have no small parameter one might adopt a general approach not relying on approximations and seek to directly calculate the liquid energy for a model system where interactions and structure are known this meets another challenge because the interactions are both strong and system dependent the resulting energy and other thermodynamic functions will also be strongly system dependent precluding their calculation in general form and understanding using basic principles in contrast to solids and gases xcite consistent with this somewhat pessimistic view the discussion of liquid thermodynamic properties has remained scarce indeed physics textbooks have very little if anything to say about liquid specific heat including textbooks dedicated to liquids xcite as recently reviewed xcite emerging evidence advances our understanding of the thermodynamics of the liquid state the start point is the early theoretical idea of j frenkel xcite who proposed that liquids can be considered as solids at times smaller than liquid relaxation time xmath4 the average time between two particle rearrangements at one point in space this implies that phonons in liquids will be similar to those in solids for frequencies above the frenkel frequency xmath5 xmath6 the above argument predicts that liquids are capable of supporting shear modes the property hitherto attributable to solids only but only for frequencies above xmath5 we note that low frequency modes in liquids sound waves are well understood in the hydrodynamic regime xmath7 xcite however eq 1 denotes a distinct solid like elastic regime of wave propagation where xmath8 in essence this suggests the existence of a cutoff frequency xmath5 above which particles in the liquid can be described by the same equations of 
motion as in for example solid glass therefore liquid collective modes include both longitudinal and transverse modes with frequency above xmath5 in the solid like elastic regime and one longitudinal hydrodynamic mode with frequency below xmath5 shear mode is non propagating below frequency xmath5 as discussed below recall the earlier textbook assertion xcite that a general thermodynamic theory of liquids can not be developed because liquids have no small parameter how is this fundamental problem addressed here according to frenkel s idea liquids behave like solids with small oscillating particle displacements serving as a small parameter large amplitude diffusive particle jumps continue to play an important role but do not destroy the existence of the small parameter instead the jumps serve to modify the phonon spectrum their frequency xmath5 sets the minimal frequency above which the small parameter description applies and solid like modes propagate it has taken a long time to verify this picture experimentally the experimental evidence supporting the propagation of high frequency modes in liquids currently includes inelastic x ray neutron and brillouin scattering experiments but most important evidence is recent and follows the deployment of powerful synchrotron sources of x rays xcite early experiments detected the presence of high frequency longitudinal acoustic propagating modes and mapped dispersion curves which were in striking resemblance to those in solids xcite these and similar results were generated at temperature just above the melting the measurements were later extended to high temperatures considerably above the melting point confirming the same result it is now well established that liquids sustain propagating modes with wavelengths extending down towards interatomic separations comparable to the wave vectors of phonons in crystals at the brillouin zone boundaries xcite more recently the same result has been asserted for supercritical fluids xcite importantly the propagating modes in liquids include acoustic transverse modes these were first seen in highly viscous fluids see eg refs xcite but were then studied in low viscosity liquids on the basis of positive dispersion xcite the presence of high frequency transverse modes increases sound velocity from the hydrodynamic to the solid like value these studies included water xcite where it was found that the onset of transverse excitations coincides with the inverse of liquid relaxation time xcite as predicted by frenkel xcite more recently high frequency transverse modes in liquids were directly measured in the form of distinct dispersion branches and verified on the basis of computer modeling xcite and the striking similarity between dispersion curves in liquids and their crystalline poly crystalline counterparts was noted we note that the contribution of high frequency modes is particularly important for liquid thermodynamics because these modes make the largest contribution to the energy due to quadratic density of states the above discussion calls for an important question about liquid thermodynamics in solids collective modes phonons play a central role in the theory including the theory of thermodynamic properties can collective modes in liquids play the same role in view of the earlier frenkel proposal and recent experimental evidence we have started exploring this question xcite just before the high frequency transverse modes were directly measured and subsequently developed it in a number of ways xcite this involves 
calculating the liquid energy as the phonon energy where transverse modes propagate above xmath5 in eq omega the main aim of this paper is to provide direct computational evidence to the phonon theory of liquid thermodynamics and its predictions we achieve this by calculating the liquid energy and xmath5 in extensive molecular dynamics simulations in the next chapter we briefly discuss the main steps involved in calculating the liquid energy we then proceed to calculating the liquid energy and frenkel frequency independently from molecular dynamics simulations using several methods which agree with each other we do this for three systems chosen from different classes of liquids noble metallic and molecular and find good agreement between predicted and calculated results in the wide range of temperature and pressure the range includes both subcritical liquids and supercritical state below the frenkel line where transverse waves propagate we calculate and analyze liquid energy and xmath1 using several different methods finally we discuss how our results offer insights into inter relationships between structure dynamics and thermodynamics in liquids and supercritical fluids we summarize the main result of calculation of the liquid energy on the basis of propagating modes a detailed discussion can be found in a recent review xcite according to the previous discussion the propagating modes in liquids include two transverse modes propagating in the solid like elastic regime with frequency xmath9 the energy of these modes together with the energy of the longitudinal mode gives the liquid vibrational energy in addition to vibrations particles in the liquids undergo diffusive jumps between quasi equilibrium positions as discussed above adding the energy of these jumps to the phonon energy in the debye modelgives the total energy of thermal motion in the liquid xcite xmath10 where xmath11 is the number of particles and xmath12 is transverse debye frequency and the subscript refers to thermal motion here and below xmath13 at low temperature xmath14 where xmath15 is the debye vibration period or xmath16 in this case harmo gives the specific heat xmath17 close to 3 the solid like result at high temperature when xmath18 and xmath19 eq harmo gives xmath1 close to 2 the decrease of xmath1 from 3 to 2 with temperature is consistent with experimental results in monatomic liquids xcite the decrease of xmath1 is also seen in complex liquids xcite harmo attributes the experimental decrease of xmath1 with temperature to the reduction of the number of transverse modes above the frequency xmath20 the comparison of this effect with experiments can be more detailed if xmath1 is compared in the entire temperature range where it decreases from xmath21 to xmath22 this meets the challenge that xmath5 in eq harmo is not directly available in the cases of interest xmath5 xmath4 is measured is dielectric relaxation or nmr experiments in systems responding to electric or magnetic fields only these liquids are often complex and do not include simple model systems that are widely studied theoretically such as liquid ar importantly the range of measured xmath5 does not extend to high frequency comparable to xmath12 and it is in this range where liquid xmath1 undergoes an important change from 3 to 2 as discussed above xmath5 can be calculated from the maxwell relationship xmath23 where xmath24 is the instantaneous shear modulus and xmath25 is viscosity taken from a different experiment xcite more recently it has been 
suggested xcite that taking the shear modulus at a finite high frequency rather than infinite frequency agrees better with the modelling data apart from rare estimations xcite xmath24 is not available in practice the comparison of experimental xmath1 and xmath1 predicted as xmath26 with xmath27 given by eq harmo is done by keeping xmath24 as a free parameter obtaining a good agreement between experimental and predicted xmath1 and observing that xmath24 lies in the range of several gpa typical for liquids xcite in the last few years eq harmo and its extensions to include the phonon anharmonicity and quantum effects of phonon excitations was shown to account for the experimental xmath1 of over 20 different systems including metallic noble molecular and network liquids xcite in view of the persisting problem of liquid thermodynamics it is important to test eq harmo directly by linking the liquid energy xmath1 on one hand and xmath5 on the other and testing the theory in a precise way this together with achieving consistency with other approaches to calculate the liquid energy is one of the objectives of this study importantly this programme includes supercritical fluids as well as subcritical liquids as discussed below if the system is below the critical point see figure 1 the temperature increase eventually results in boiling and the first order transition with xmath1 discontinuously decreasing to about xmath28 in the gas phase the intervening phase transition excludes the state of the liquid where xmath1 can gradually reduce to xmath28 and where interesting physics operates however this becomes possible above the critical point this brings us to the interesting discussion of the supercritical state of matter theoretically little is known about the supercritical state apart from the general assertion that supercritical fluids can be thought of as high density gases or high temperature fluids whose properties change smoothly with temperature or pressure and without qualitative changes of properties this assertion followed from the known absence of a phase transition above the critical point we have recently proposed that this picture should be modified and that a new line the frenkel line fl exists above the critical point and separates two states with distinct properties see figure frenline xcite physically the fl is not related to the critical point and exists in systems where the critical point is absent the main idea of the fl lies in considering how the particle dynamics change in response to pressure and temperature recall that particle dynamics in the liquid can be separated into solid like oscillatory and gas like diffusive components this separation applies equally to supercritical fluids as it does to subcritical liquids indeed increasing temperature reduces xmath4 and each particle spends less time oscillating and more time jumping increasing pressure reverses this and results in the increase of time spent oscillating relative to jumping increasing temperature at constant pressure or density or decreasing pressure at constant temperature eventually results in the disappearance of the solid like oscillatory motion of particles all that remainsis the diffusive gas like motion this disappearance represents the qualitative change in particle dynamics and gives the point on the fl in figure frenline most important system properties qualitatively change either on the line or in its vicinity xcite in a given system the fl exists at arbitrarily high pressure and temperature as does the 
melting line quantitatively the fl can be rigorously defined by pressure and temperature at which the minimum of the velocity autocorrelation function vaf disappears xcite above the line defined in such a way velocities of a large number of particles stop changing their sign and particles lose the oscillatory component of motion above the line vaf is monotonically decaying as in a gas xcite for the purposes of this discussion the significance of the fl is that the phonon approach to liquids and eq harmo apply to supercritical fluids below the fl to the same extent as they apply to subcritical liquids indeed the presence of an oscillatory component of particle motion below the fl implies that xmath4 is a well defined parameter and that transverse modes propagate according to eq omega the ability of the supercritical system to sustain solid like rigidity at frequency above xmath5 suggested the term rigid liquid to differentiate it from the non rigid gas like fluid above the fl xcite therefore the fl separates the supercritical state into two states where transverse modes can and can not propagate this is supported by direct calculation of the current correlation functions xcite showing that propagating and non propagating transverse modes are separated by the frenkel line interestingly eq harmo can serve as a thermodynamic definition of the fl the loss of the oscillatory component of particle motion at the fl approximately corresponds to xmath18 here xmath15 refers to debye period of transverse modes or xmath19 according to eq harmo this gives xmath1 of about 2 using the criterion xmath29 gives the line that is in remarkably good coincidence with the line obtained from the vaf criterion above xcite we have considered liquids from three important system types noble ar molecular coxmath30 and metallic fe we have used the molecular dynamics md simulation package dlpoly xcite and simulated systems with xmath31 particles with periodic boundary conditions the interatomic potential for ar is the pair lennard jones potential xcite known to perform well at elevated pressure and temperature for coxmath30 and fe we have used interatomic potentials optimized tested in the liquid state at high pressure and temperature the potential for coxmath30 is the rigid body nonpolarizable potential based on a quantum chemistry calculation with the partial charges derived using the distributed multipole analysis method xcite fe was simulated using the many body embedded atom potential xcite in the case of coxmath30 the electrostatic interactions were evaluated using the smooth particle mesh ewald method the md systems were first equilibrated in the constant pressure and temperature ensemble at respective state points for 20 ps system properties were subsequently simulated at different temperatures and averaged in the constant energy and volume ensemble for 30 ps we are interested in properties of real dense strongly interacting liquids with potential energy comparable to kinetic energy and hence have chosen fairly high densities xmath32 gxmath33 and xmath34 gxmath33 for ar xmath35 gxmath33 and xmath36 gxmath33 for fe and xmath37 gxmath33 for coxmath30 the lowest temperature in each simulation was the melting temperature at the corresponding density xmath38 the highest temperature significantly exceeded the temperature at the frenkel line at the corresponding density xmath39 taken from the earlier calculation of the frenkel line in ar xcite fe xcite and coxmath30 xcite as discussed above the temperature range between 
xmath38 and xmath39 corresponds to the regime where transverse modes progressively disappear and where eq harmo applies we have simulated xmath40 temperature points at each pressure depending on the system the number of temperature points was chosen to keep the temperature step close to 10 k as discussed above eq harmo applies to subcritical liquids as well as to supercritical fluids below the frenkel line our simulations include the temperature range both below and above the critical temperature this will be discussed in more detail below we have calculated xmath5 in harmo from its definition in omega as xmath20 xmath4 can be calculated in a number of ways most common methods calculate xmath4 as decay time of the self intermediate scattering or other functions by the factor of xmath41 or as the time at which the mean squared displacement crosses over from ballistic to diffusive regime xcite these methods give xmath4 in agreement with a method employing the overlap function depending on the cutoff parameter xmath42 provided xmath43 where xmath44 is the inter molecuar distance xcite we use the latter method and calculate xmath4 at 13 20 temperature points at each density depending on the system at each density we fit xmath4 to the commonly used vogel fulcher tammann dependence and use xmath20 to calculate the liquid energy predicted from the theory the predicted xmath1 is calculated as xmath17 where xmath27 is given by eq harmo xmath45 where xmath11 is the number of atoms for ar and fe and the number of molecules for coxmath30 the first two terms in cv give xmath29 when xmath5 tends to its high temperature limit of xmath5 the last term reduces xmath1 below 2 by a small amount because xmath46 is close to zero at high temperature xcite we now compare the calculated energy and xmath1 with those directly computed in the md simulations we note that the energy in eq harmo is the energy of thermal phonon motion xmath47 which contributes to the total liquid energy as xmath48 where xmath49 is liquid energy at zero temperature and represents temperature independent background contribution due to the interaction energy in comparing the calculated xmath50 in eq harmo with the energy from md simulations we therefore subtract the constant term from the md energy the comparison of xmath17 is performed directly because the constant term does not contribute to xmath1 we have also calculated xmath1 using the fluctuations formula for the kinetic energy xmath51 in the constant energy ensemble xmath52 xcite both methods agree well as follows from figures ara and arb there is only one adjustable parameter in eq harmo xmath12 which is expected to be close to transverse debye frequency xmath5 is independently calculated from the md simulation as discussed above in figures ar and fe we compare the energy and xmath1 calculated on the basis of eqs harmo and cv and compare them with those computed in md simulations blue circle in each figure shows the critical temperature we observe good agreement between predicted and calculated properties in a temperature range including both subcritical and supercritical temperature this involved using xmath53 ps xmath35 gxmath33 and xmath54 ps xmath36 gxmath33 for fe xmath55 ps xmath32 gxmath33 and xmath56 ps xmath34 gxmath33 for ar and xmath57 ps for coxmath30 in reasonable order of magnitude agreement with experimental xmath15 of respective crystalline systems as well as maximal frequencies seen in experimental liquid dispersion curves see eg xcite we note the expected trend of 
xmath15 reducing with density at high temperature where xmath58 eq cv predicts xmath1 close to 2 noting that the last term gives only a small contribution to xmath1 because xmath5 becomes slowly varying at high temperature consistent with this prediction we observe the decrease of xmath1 from 3 to 2 in figures ar and fe the agreement between the predicted and calculated results supports the interpretation of the decrease of xmath1 with temperature discussed in the introduction xmath5 decreases with temperature and this causes the reduction of the number of transverse modes propagating above xmath5 and hence the reduction of xmath1 for coxmath30 the same mechanism operates except we need to account for degrees of freedom in a molecular system we first consider the case of solid coxmath30 the md interatomic potential treats coxmath30 molecules as rigid linear units contributing the kinetic term of 25 to the specific heat per molecule including 1 from the rotational degrees of freedom of the linear molecular and 15 from translations here we have noted that coxmath30 molecules librate and rotate in the solid at low and high temperature respectively xcite noting the potential energy contributes the same term due to equipartition the specific heat becomes 5 per molecule this implies that for molecular coxmath30 eqs harmo modifies as xmath59 where xmath11 is the number of molecules and xmath5 is related to the jump frequency of molecules and which gives xmath60 in the solid state where xmath5 is infinite we use the modified equation to calculate the energy and xmath1 and compare them to those computed from the md simulation in figure co2 consistent with the above discussion we observe that xmath1 for coxmath30 calculated directly from the md simulations is close to 5 at low temperature just above melting at this temperature xmath16 giving the solid like value of xmath1 as in the case of monatomic ar and fe as temperature increases two transverse modes of inter molecular motionprogressively disappear resulting in the decrease of xmath1 to the value of about xmath61 in agreement with xmath1 calculated from the theoretical equation for xmath50 we note that the temperature range in which we compare the predicted and calculated properties is notably large eg xmath62 k for ar and xmath63 k for fe this range is 10 100 times larger than those typically considered earlier xcite the higher temperatures for fe might appear as unusual however we note that liquid iron as well as supercritical iron fluid remains an unmodified system up to very high temperature the first ionization potential of fe is 79 ev or over 90000 k hence the considered temperature range is below the temperature at which the system changes its structure and type of interactions the very wide temperature range reported here is mostly related to the large part of the temperature interval in figures arco2 being above the critical point where no phase transition intervenes and where the liquid phase exists at high temperature in contrast to subcritical liquids where the upper temperature is limited by the boiling line the agreement between predicted and calculated properties in such a wide temperature range adds support to the phonon approach to liquid thermodynamics we propose we make three points regarding the observed agreement between the calculated and predicted results first the collective modes contributing to the thermal energy in harmo are considered to be harmonic the anharmonicity can be accounted for in the grneisen approximation 
however this involves an additional parameter xcite we attempted to avoid introducing additional parameters and sought to test eq harmo which contains only one parameter xmath12 second eq harmo involves the debye model and quadratic density of states dos this approximation is justified since the debye model is particularly relevant for disordered isotropic systems such as glasses xcite which are known to be nearly identical to liquids from the structural point of view furthermore the experimental dispersion curves in liquids are very similar to those in solids such as poly crystals xcite therefore the debye model can be used in liquids to the same extent as in solids one important consequence of this is that the high frequency range of the phonon spectrum makes the largest contribution to the energy as it does in solids including disordered solids we also note that liquid dos can be represented as the sum of solid like and gas like components in the two phase thermodynamic model xcite and the solid like component can be extracted from the liquid dos calculated in md simulations this can provide more information about the dos beyond debye approximation third eq harmo assumes a lower frequency cutoff for transverse waves xmath20 as envisaged by frenkel in omega our recent detailed analysis of the frenkel equations shows that the dispersion relationship for liquid transverse modes is xmath64 where xmath65 is the shear speed of sound and xmath66 is wavenumber xcite here xmath67 gradually crosses over from xmath68 to its solid like branch xmath69 when xmath70 in this sense using a lower frequency cutoff in harmo might be thought of as an approximation however we have recently shown xcite that the square root dependence of xmath67 gives the liquid energy that is identical to harmo the results in the previous sections support the picture in which the decrease of liquid xmath1 from 3 to 2 is related to reduction of the energy of transverse modes propagating above xmath5 as described by eq cv according to eq cv xmath29 corresponds to complete disappearance of transverse modes at the fl when xmath58 the disappearance is supported by the direct calculation of transverse modes on the basis of current correlation functions xcite importantly xmath29 marks the crossover of xmath1 because the evolution of collective modes is qualitatively different below and above the fl xcite below the line transverse modes disappear starting from the lowest frequency xmath5 above the line the remaining longitudinal mode starts disappearing starting from the highest frequency xmath71 where xmath72 is the particle mean free path no oscillations can take place at distance smaller than xmath72 this gives qualitatively different behavior of the energy and xmath1 below and above the fl resulting in their crossover at the fl xcite interestingly the thermodynamic crossover at xmath29 implies a structural crossover indeed the energy per particle in a system with pair wise interactions is xmath73 where xmath74 is number density and xmath75 is radial distribution function according to eq toten the liquid energy is xmath76 where xmath50 is given by eq harmo if the system energy undergoes the crossover at the fl where xmath29 eq ene implies that xmath75 should also undergo a crossover therefore the structural crossover in liquids can be predicted on the basis of the thermodynamic properties we also expect the structural crossover at the fl to be related to the dynamical crossover on general grounds as discussed above belowthe fl 
particles oscillate around quasi equilibrium positions and occasionally jump between them the average time between jumps is given by liquid relaxation time xmath4 figure atoms schematically shows a local jump event from its surrounding cage this means that a static structure exists during xmath4 for a large number of particles below the fl giving rise to the well defined medium range order comparable to that existing in disordered solids xcite on the other hand the particles lose the oscillatory component of motion above the fl and start to move in a purely diffusive manner as in gases this implies that the features of xmath75 are expected to be gas like as a result xmath75 medium range peaks are expected to have different temperature dependence below and above the fl this behavior was observed in ar in md simulations in the short range structure xcite more recently the crossover in supercritical ne in the medium range at the fl was ascertained on the basis of x ray scattering experiments xcite in figure rdfa we plot pair distribution functions pdfs of ar at density xmath34 gxmath33 in a wide temperature range using the fl criterion xmath29 gives the temperature at the fl xmath39 of about 4000 k at that density which we find to be consistent with the criterion of the disappearance of the minimum of the velocity autocorrelation function xcite the pdf was calculated with the distance step of xmath77 giving 600 pdf points we observe pdf peaks in the medium range order up to about 20 at low temperature the peaks reduce and broaden with temperature to study this in more detail we plot the peak heights vs temperature in figure rdfb we observe that the medium range third and fourth peaks persist well above the critical temperature xmath78 k for ar the highest temperature simulated corresponds to xmath79 this interestingly differs from the traditional expectation that the structure of the matter so deep in the supercritical state has gas like features only at temperature above xmath39 the height of the fourth peak becomes comparable to its temperature fluctuations calculated as the standard deviation of the peak height over many structures separated in time by 1 ps at each temperature by order of magnitude the fifth and higher order peaks disappear before the highest temperature in the simulated range is reached we plot the peak heights in figure rdfb in the double logarithmic plot because we expect to see an approximate power law decay of the peak heights at low temperature indeed pdf in solids can be represented as a set of gaussian functions with peaks heights xmath80 depending on temperature as xmath81 where xmath82 is a temperature independent factor xcite this temperature dependence of xmath80 was also quantified in md simulations xcite xmath80 decrease mostly due to the factor xmath83 whereas the effect of the exponential factor on xmath80 is small and serves to reduce the rate at which xmath80 decrease xcite this implies that in solids xmath84 approximately holds in liquids we expect the same relationship to hold below the fl where xmath14 corresponding to a particle oscillating many times before diffusively moving to the next quasi equilibrium position indeed the ratio of the number of diffusing particles xmath85 to the total number of particles xmath11 in the equilibrium state is xmath86 xcite at any given moment of time xmath87 is small when xmath14 below the fl and can be neglected hence xmath84 applies to liquids at any given moment of time below the fl where xmath14 this also applies 
to longer observation times if xmath80 is averaged over xmath4 xcite we note that the above result xmath88 involves the assumption that the energy of particle displacements is harmonic see eg ref anharmonicity becomes appreciable at high temperature however the anharmonic energy terms are generally small compared to the harmonic energy this is witnessed by the closeness of high temperature xmath1 to its harmonic result for both solids and high temperature liquids xcite we therefore expect that xmath89 approximately holds in the low temperature range below the fl as in solids but deviates from the linearity around the crossover at the fl where xmath18 and where the dynamics becomes gas like the calculated pdf in fig rdfa is normalized to 1 where no correlations are present at large distances hence we plot xmath90 in order to compare it with the theoretical result xmath88 which tends to zero when no correlations are present at high temperature we note that the crossover is expected to be broad because xmath14 applies well below the fl only a substantial diffusive motion takes place in the vicinity of the line where xmath87 can not be neglected affecting the linear relationship consistent with the above prediction we observe the linear regime at low temperature in figure rdfb followed by the deviation from the straight lines taking place around 3000 k for the 2nd peak 5000 k for the 3rd peak and 4000 k for the 4th peak respectively the smooth crossover in the 3000 5000 k range is centered around 4000 k consistent with the temperature at the frenkel line discussed above we also note that 4000 k corresponds to the specific heat xmath29 in figure arb in agreement with the earlier discussion as discussed in the introduction liquids have been viewed as inherently complicated systems lacking useful theoretical concepts such as a small parameter xcite together with recent experimental evidence and theory xcite the modelling data presented here and its quantitative agreement with predictions are beginning to change this traditional perspective our extensive molecular dynamics simulations of liquid energy and specific heat provide direct evidence for the link between dynamical and thermodynamic properties of liquids we have found this to be the case for several important types of liquids at both subcritical and supercritical conditions spanning thousands of kelvin this supports an emerging picture that liquid thermodynamics can be understood on the basis of high frequency collective modes a more general implication is that contrary to the prevailing view liquids are emerging as systems amenable to theoretical understanding in a consistent picture as is the case in solid state theory in addition to the link between dynamical and thermodynamic properties we have discussed how these properties are related to liquid structure this research utilised midplus computational facilities supported by qmul research it and funded by the epsrc grant ep k0001281 we acknowledge the support of the royal society rfbr 15 52 10003 and csc
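As a brief numerical illustration of the peak-height analysis described above, the sketch below generates synthetic medium-range peak heights that follow the solid-like ~1/T decay at low temperature and fall below it near an assumed crossover, and then recovers the low-temperature exponent and the deviation point from a log-log fit. All numbers here (temperature grid, amplitude, the 4000 K crossover, noise level, thresholds) are hypothetical stand-ins for values that would come from the MD pair distribution functions.

```python
import numpy as np

# Synthetic "g(r) peak height minus 1" data: solid-like ~1/T decay below an
# assumed crossover temperature, faster fall-off above it.  Illustrative only.
rng = np.random.default_rng(0)
T = np.geomspace(500.0, 20000.0, 25)              # K, hypothetical grid
A, T_cross = 800.0, 4000.0                        # hypothetical parameters
h = A / T * np.where(T < T_cross, 1.0, (T_cross / T) ** 0.5)
h *= rng.normal(1.0, 0.02, T.size)                # mimic peak-height scatter

# Fit log(h) = intercept + slope*log(T) well below the crossover; a solid-like
# system would give a slope close to -1 (peak heights ~ 1/T).
low = T < 2500.0
slope, intercept = np.polyfit(np.log(T[low]), np.log(h[low]), 1)
print(f"low-temperature exponent: {slope:.2f} (solid-like value is -1)")

# Report where the data first deviate from the extrapolated 1/T line by >10%.
pred = np.exp(intercept + slope * np.log(T))
deviation = np.abs(h / pred - 1.0)
above = T[deviation > 0.10]
if above.size:
    print(f"deviation exceeds 10% above ~{above[0]:.0f} K")
```

In the MD data the analogous departure from the 1/T line shows up in the 3000-5000 K range for the different peaks, centred around the Frenkel-line temperature, as described above.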
we develop an approach to liquid thermodynamics based on collective modes we perform extensive molecular dynamics simulations of noble molecular and metallic liquids and provide the direct evidence that liquid energy and specific heat are well described by the temperature dependence of the frenkel hopping frequency the agreement between predicted and calculated thermodynamic properties is seen in the notably wide range of temperature spanning tens of thousands of kelvin the range includes both subcritical liquids and supercritical fluids we discuss the structural crossover and inter relationships between structure dynamics and thermodynamics of liquids and supercritical fluids
introduction phonon approach to liquid thermodynamics simulation details results and discussion summary
several theoretical predictions and scenarios have been proposed for the existence of ultra high energy uhe cosmic rays uhecr and uhe neutrinos xmath5 detection of the uhecr and particularly uhe neutrinos would be of great importance for understanding the energy of powerful agns gamma ray bursts and possible existence of massive particles predicted by the gut theories for detecting uhecr and uhenseveral ambitious terrestrial experiments are being carried out and also planned with very large collecting areas xmath6 1 xmath7 and volumes xmath6 1 xmath8 xcite askaryan noted in 1960s xcite that electromagnetic cascades in dense medium by the uhe particles will develop an excess of negative charge giving rise to incoherent erenkov radiation later dagkesamanski and zheleznykh xcite noted that uhe particles impinging on the lunar regolith at xmath9 10 m20 m deep layers of the moon will give rise to radio pulses of nanosecond ns durations the large surface area of the moon effectively provides a large surface area for detection of the rare uhe particles observations have been made towards the moon at 14 ghz using the parkes 64 m diameter radio telescope xcite and at 22 ghz using the jpl nasa 70 m and 30 m antennas glue experiment xcite and using a single 64 m telescope at kalyazin radio astronomical observatory xcite these have put upper limits on the existence of uhe particles but these are appreciably higher than the predictions by waxman and bahcall xcite askaryan effect has been tested using different media in a series of accelerator experiments one of such experiment is done in silica sand which resembles composition of lunar regolith xcite as shown by alvarez muniz et al xcite the angular distribution of the electric field emitted by 10 tev shower in ice salt and the lunar regolith is much wider at 01 ghz than at 1 ghz scholten et al xcite have calculated differential detection probability for cosmic rays of energy xmath10 ev and neutrinos of energy xmath11 ev hitting the moon as a function of apparent distance from the centre of the moon for different detection frequencies it is shown that the radio emission at higher frequencies arises mostly from uhe particles impinging near the rim of the moon but at lower frequencies from a major part of the moon indicating the advantage of making observations at lower frequencies using already existing or planned radio telescopes of large collecting areas in the frequency range of about 30 to 300 mhz for detecting uhecr and uhe neutrinos observations are currently being carried out by radio astronomers in netherlands using the westerbork radio telescope wsrt xcite at xmath2 140 mhz observations are also planned with the lofar xcite under construction in sectionii we summarize equations giving the expected value of the electric field and flux density for uhe particles as well as 25 times rms detection threshold of a radio telescope of collecting area xmath12 panda et al xcite have recently considered prospects of using the giant metrewave radio telescope gmrt xcite for observing radio pulse emission arising from the uhe particles interacting with the surface of the moon in section iii we describe appropriate parameters of the gmrt for searching the lunar erenkov emission and also summarize expected values of the signal strength as a function of energy of uhe particles and the receiver noise threshold in section iv we propose observations of the erenkov radiation from the lunar regolith using the large ooty radio telescope ort xcite that has an effective 
collecting area xmath13 8000 xmath1 and is operating at 325 mhz at present ortprovides a bandwidth of only 4 mhz but its receiver system has been modified to provide xmath14 mhz xcite and is being extended to 15 mhz in contrast to the gmrt providing dual polarizations at several frequency bands the ort provides only a single polarization but it would be possible to get observing time of xmath15 hours as it is being used mostly for day time interplanetary scintillations as discussed in sections iv and v search for uhe particles will also allow simultaneous observations of lunar occultation of radio sources in the path of the moon and also variation of brightness temperature of the moon with the lunar phase the latter yielding parameters such as dielectric constant and electrical conductivity of the lunar regolith upto depths of 30 m to 100 m in section vi we discuss model independent limits for detection of uhecr and uhe neutrinos for several current and planned experiments including lofar wsrt gmrt and ort discussions and conclusions are given in section vii the electric field of radio waves on earth xmath16 from a erenkov shower in the lunar regolith due to uhe neutrinos with energy xmath17 has been parameterized based on accelerator measurements and monte carlo simulations xcite neglecting angular dependence giving xmath18right labelfield where r is the distance between the emission point on the moon s surface to the telescope xmath19 is the radio frequency of observations and xmath20 ghz for the lunar regolith material the power flux density at earth xmath21 is given by xmath22 where free space impedance xmath23 377 ohms receiver bandwidth xmath24 is in units of 100 mhz and 1 jy xmath25 substituting from eq field we get xmath26right2 delta nu 100 mhz jy panda et al xcite has given the following value of the power flux density xmath27right2 fracdelta nu100mathrmmhz mathrmjy labelf furthermore there is an angular dependence given by xmath28 with xmath29 and xmath30 here we used gaussian approximation for our calculation where the forward suppression factor xmath31 in f is ignored for high frequenciesthis has no effect for low frequencies the differences at small angles only plays a role for showers nearly parallel to the surface normal while the effects of changing the normalization near the erenkov angle is important also for more horizontal showers a measure of the effective angular spread xmath32 of the emission around the erenkov angle xmath33 is given in terms of xmath34 it is seen from above that the value of xmath21 as given by panda et al is about 3 times lower than that given by eq2 we find that the value of f by panda xcite is 092 that given by scholten et al xcite we have used here eqf as per panda et al xcite by equating the power xmath35 received by a radio telescope due to the incident input threshold threshold power flux density xmath36 with the minimum detectable receiver noise power xmath37 we have xmath38 where the factor xmath39 is due to the reception of a single polarization xmath13 the effective area of the telescope xmath40 the bandwidth and the receiver rms noise xmath41 xmath42 being the system temperature xmath43 boltzmann s constant and xmath44 the integration time hence xmath45 is given by xmath46 for detection of a narrow pulse with width xmath44 using an optimum bandwidth xmath24 xmath47 and hence rms noise xmath45 is given by xmath48 in tables table1 table2 table3 we list the system temperatures at the different observation frequencies and the corresponding 
noise levels of two different configurations of gmrt and ort using equation f we can solve for xmath49 at the threshold required for measurement with the radio telescope obtained for xmath50 and xmath51 if we take a required signal to noise ratio xmath52 the threshold shower energies xmath53 which can be measured at the different observation frequencies at the gmrt and the ort are given in tables table1 table2 table3 the gmrt is a synthesis radio telescope consisting of 30 fully steerable parabolic dish antennas each of 45 m diameter fourteen antennas are located in a somewhat random array within an area of about 1 xmath54 and the other sixteen antennas along 3 y shaped arms each of length xmath9 14 km the gmrt is currently operating in 5 frequency bands ranging from about 130 mhz to 1430 mhz the receiver system provides output at two orthogonal polarizations from each of the 30 antennas with a maximum bandwidth of 16 mhz for each polarization being sampled at 32 ns each the xmath13 of each antenna is nearly 950 xmath1 in the frequency range of 130 to 630 mhz and only 600 xmath1 at 1430 mhz panda et al have made estimates of the sensitivity of the gmrt for observations of uhe cr and uhe neutrinos they have considered the xmath13 of the gmrt xmath55 at 150 235 325 and 610 mhz and 18000 xmath1 at 1390 mhz however we may note that the gmrt provides the above area only when the voltage outputs of all the 30 antennas are added in phase resulting in an antenna beam of xmath9 2 arcsec at the highest frequency and xmath9 15 arcsec at 150 mhz and therefore covering only a small part of the moon however the receiver correlator allows incoherent addition of the outputs of the 30 antennas covering the entire front surface of the moon and resulting in xmath56 xmath57 at the lower 4 frequency bands and xmath58 at 1390 mhz instead if we measure coincidences of the power outputs of the 30 antennas the effective area will also be 5203 xmath1 at the lower frequency bands but would have the advantage of discrimination between the lunar cerenkov emission and any terrestrial radio frequency interference rfi as the gmrt antennas are located in an array of xmath9 25 km extent an alternative strategy will be more effective if we use the recently installed software correlator at the gmrt for cross multiplications of the voltage outputs of the 30 gmrt dishes with xmath59 mhz it allows 32 ns sampling of the voltage outputs of each of the 30 antennas by combining these voltage outputs for the central 14 antennas of the gmrt with appropriate phase values it would be possible to form 25 phased beams covering the moon each beam having a resolution of about 6 arcmin at 140 mhz the effective area for each of the 25 beams will be 14250 xmath1 at the lower frequency bands and 9000 xmath1 at xmath9 1 ghz providing a competitive radio telescope for searching for uhe neutrinos the contribution of the moon s temperature to the system temperature of the gmrt receiver is negligible at 140 mhz but is appreciable at higher frequencies using the system parameters of the gmrt as given in tables table1 and table2 we have estimated the sensitivity to uhe cr and uhe neutrino fluxes as given in figs crlimit100 crblimit100 nlimit100 nblimit100 and fluxgmrt gmrt parameters sensitivity and threshold sensitivity at different frequencies for an incoherent array xmath60 is the full width half maximum fwhm beam of the 45 m dishes xmath61 is the temperature of the moon at frequency xmath62 xmath45 is the expected threshold flux density noise
intensity of the gmrt and xmath63 the corresponding electric field the threshold energy xmath53 is given in the last column colsoptionsheader the ort consists of a 530 m long and 30 m wide parabolic cylinder that is placed in the north south direction on a hill with the same slope as the latitude of the station xcite thus it becomes possible to track the moon for 95 hours on a given day by rotating the parabolic cylinder along it s long axis the ort operates only at 325 mhz and has effective collecting area of xmath64 a phased array of 1056 dipoles is placed along the focal line of the parabolic cylinder each dipole is connected to an rf amplifier followed by a 4 bit phase shifter signals received by 48 dipole units are connected to a common amplifier branching network xcite the 22 outputs of the phased array are brought to a central receiver room an analogue system that was originally built for lunar occultation observations xcite provided 12 beams to cover the moon each beam is 6 arcmin in the north side direction and 126 arcmin xmath9 2 deg in the east west direction recently a digital system has been installed by the raman research institute rri bangalore and the radio astronomy centre of ncra tifr at ooty allowing formation of phased array beams with collecting area 8000 xmath1 and a bandwidth of 10 mhz with xmath9 40 ns sampling xcite it is possible to form 6 beams covering the moon and 7th beam far away for discrimination of any terrestrial rfi the proposed upgrade of the 12 beam analogue system will provide a bandwidth of 15 mhz the measured receiver temperature of the ort is 140 k a contribution by moon of xmath2 315126 x 230 k 57 k thus xmath65 of ort for lunar observations at 327 mhz is about 200 k as discussed in the next section observations of the moon for 1000 hrs using the ort at 325 mhz will provide appreciably higher sensitivity than the past searches made by various workers and also compared to a search being made currently in netherlands using the westerbork synthesis radio telescope wsrt at 140 mhz using the ort it may be possible to reach sensitivity to test the predictions of the waxmann bachall model based on theoretical arguments proposed observations particularly with the ort will also provide arcsec resolution for galactic and extragalactic radio sources occulted by the moon and may also search for any transient celestial sources in the antenna beam outside the disc of the moon it would be quite valuable to make passive radio maps of the moon using the gmrt at decimetre and metre wavelengths the suface temperature of the moon is about 130 k in its night time and xmath9 330 k in its day time since moon s surface consists of lossy dielectric material the radio waves emitted by its thermal properties arise from few cm at microwaves to more than 100 m deep at wavelength of several m therefore the observed values of brightness temperature of the moon varies by tens of degrees at microwaves to less than a degree at radio wavelengths the gmrt provides a resolution of about 2 arcsec at xmath9 1420 mhz and xmath9 15 arcsec at 150 mhz polarization observations are also possible with the gmrt therefore maps of radio emission of the moon for its night and day with the gmrt will provide estimates of the dielectric constant and electric conductivity of the lunar regolith the data will be complimentary to the radar measurements xcite in this section we calculate model independent limits for detection of uhe cr and uhe neutrinos for gmrt and ort using the procedure given in 
panda et al xcite scholten et al xcite have considered dividing the wsrt antennas into 3 groups for the proposed search for neutrinos whence a is likely to be xmath66 they give a value of xmath45 600 jy for wsrt and xmath67 jy the system temperature for the ort xmath65 200k at 327 mhz including contribution by the moon and xmath68 hence xmath69 and xmath70 16875 jy which are much lower than for the wsrt the event rate that would be expected at the telescope can be related to an isotropic flux xmath71 of uhe particles on the moon through xmath72 where xmath73 denote the type of primary particle and xmath74 is an aperture function corresponding to the effective detector area the aperture can be further decomposed into an angular aperture xmath75 and a geometric area factor for the moon xmath76 with xmath77 km to evaluate the aperture we use the analytical methods described in xcite for the case of strongly interacting cosmic rays which can mainly interact on the surface of the moon the angular aperture is given by xmath78 times thetacosbeta mathrmdalphamathrmdcosbeta labelangle where xmath79 and xmath80 are the polar and azimuthal coordinates of the ray normal to the moon s surface in a system where the shower direction defines the xmath81 axis the full geometry and the different angles are described in fig geometry when the uhe primary is instead a neutrino it can produce showers deep below the surface of the moon and there will be considerable attenuation of the radio waves which travel distances longer than xmath82 below the surface for the neutrino induced showers the aperture is defined in the same way as for the cr but the angular aperture is now given xcite by xmath83cal ethbigr times explzbetalambdanu times mathrmdalpha mathrmdcosbeta labelomega nu where xmath84 is the distance the neutrino travels inside the material to reach the interaction point at a distance xmath81 below the surface in performing this integrationwe allow xmath81 to go below the known depth of the regolith despite the attenuation the aperture therefore picks up contributions coming from deep showers especially for the lower frequencies numerically we find for the worst case when xmath85 mhz that imposing a sharp cutoff at a depth of xmath86 m would reduce the aperture by nearly an order of magnitude similarly to what was discussed in xcite as for the cosmic rays the total aperture is obtained by substituting omega nu into acr and integrating over the polar angle xmath87 to estimate the sensitivity of gmrt and ort to cosmic ray and neutrino events we have evaluated the angular apertures by employing this technique and performing numerical integrations for the different parameters given in tables table1table2 table3 in the next sectionwe will discuss these results further in the context of prospectiveflux limits if no events are observed at gmrt and ort over a time xmath88 then an upper limit can be derived on uhe cr and neutrino fluxes at the moon the conventional model independent limit xcite is given by xmath89 where still xmath90 xmath91 and xmath92 the poisson factor xmath93 for a limit at xmath94 confidence level in figortcrlimit are shown prospective limits on the flux of the uhe crs for t100 1000 8760 hours one year of the observation time with ort plots for wsrt and lofar for t100 hours of the observation time are also shown in figs crlimit100 and crblimit100 are given model independent limits on uhe cr flux at different frequencies of the gmrt for an incoherent array and 25 beams case respectively for 100 
hours of observations similarly for the uhe neutrinos prospective limits on their flux for t100 1000 and 8760 hours of observation with ort are given in fig ortnulimit figs nlimit100 and nblimit100 give limits on the uhe neutrinos at different frequencies of the gmrt for an incoherent array and 25 beams case respectively for 100 hours of observations for all our calculations we take xmath95 xcite it is clear from the plots that that low frequency observations give more stringent limits on the flux at the expense of a higher threshold this is due to the well known increase in the aperture xcite from radiation spreading at lower frequencies since many radio experiments exist for uhe neutrino detection we have compiled a comparison in figfluxgmrt this figure contains the predicted thresholds of the ort at 325 mhz for 1 year of observation time of the gmrtb 25 beams case at 140 mhz for 100 hrs and 30 days of observation time and the already existing limits from rice xcite glue xcite forte xcite and anita lite xcite also we have indicated the prospective future limits that has been calculated for anita xcite lofar xcite or lord xcite james and protheroe xcite have recently calculated sensitivity of the next generation of lunar cerenkov observations for uhe cr and neutrinos in addition to search for uhe cr and neutrinos simultaneous observations with the full array of the gmrt will provide radio maps of the moon as a function of the lunar phase giving information about the average thermal and electrical conductivity of the moon s regolith up to depth of xmath9 30 m to 100 m therefore for the two experiments to be carried out simultaneously it may be possible to get 2xmath96 50 hours of observations in two gmrt time allocation cycles also observations with the ort at the same time will allow discrimination against man made rfi transients it will be prudent to use both the ort and the gmrt for searching for the uhe neutrinos the new software correlator being installed at the gmrt will allow forming xmath9 25 beams at 140 mhz to cover the moon at xmath9 140 mhz providing 2 bands of 16 mhz and xmath97 14250 xmath3 one may also conveniently use the incoherent mode of the gmrt with xmath97 5203 xmath3 although ort with xmath988000 xmath1 operates only at 325 mhz it is well suited to track the moon for hundreds of hours the rfi is also much lower at ooty than at the gmrt site by using the new digital system installed recently at ooty by prabhu xcite of the raman research institute in conjunction with the 12 beams of the analogue system ort and also it s upgrade it should be possible to reach adequate sensitivity to test the waxman bahcall limit proposed on theoretical arguments on the uhe particle flux proposed observations particularly with the ort will also provide arcsec resolutions for celestial radio sources occulted by the moon and may also detect any transient celestial sources present in the antenna beam outside the disc of the moon search for uhe neutrinos will also allow simultaneous observations for making radio maps of the moon as a function of the lunar phase full moon 5 and 15 days earlier and later providing information about the average thermal and electrical conductivity of the moon s regolith up to a depth of xmath9 30 m the existence of uhe neutrinos of xmath99 ev is implied by the detection of for xmath100 ev the extremely high luminosity of the star burst galaxies agns gamma ray burst are likely to accelerate protons to very high energies that get scattered by the cmbr photons 
producing a flux of uhe neutrinos there are also predictions of their occurrence by more exotic sources in the early universe as may be seen from fig fluxgmrt observations with the ort and gmrt will provide a threshold sensitivity of xmath101 being comparable to the current searches being made by other investigators detection of the uhe cr and neutrinos of xmath99 ev would be of great importance for testing theories of high energy physics and for understanding several phenomena of cosmological and astrophysical importance acknowledgement we thank t prabhu of the raman research institute pk manoharan and aj selvanayagam of the radio astronomy centre ooty and s sirothia of ncra pune for many valuable discussions the work of sp was supported by the ministerio de educacion y ciencia under proyecto nacional fpa2006 01105 and also by the comunidad de madrid under proyecto hephacos ayuda de id s0505esp0346 t h hankins r d ekers and j d osullivan mnras 283 1027 1996 c w james r m crocker r d ekers t h hankins j d osullivan r j protheroe mnras 379 1037 2007astro ph0702619
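As a schematic companion to the sensitivity and flux-limit discussion above, the Python sketch below evaluates the two generic ingredients of such an estimate: a radiometer noise threshold for a short pulse and a model-independent limit of the form n_up/(aperture x time). It is not the paper's exact parameterisation of the lunar cerenkov field (eq field / eq f) or of the aperture integrals; the telescope numbers and the aperture value in the example are placeholders loosely motivated by the text.

```python
import numpy as np

K_B = 1.380649e-23       # J / K
JY = 1e-26               # W m^-2 Hz^-1

def rms_noise_jy(T_sys, A_eff, bandwidth_hz, tau_s=None):
    """Generic radiometer rms flux density, dS = SEFD / sqrt(bw * tau), with
    SEFD = 2 k_B T_sys / A_eff; for a pulse matched to the bandwidth
    (tau ~ 1/bw) this reduces to the SEFD itself.  Single-polarisation and
    signal-to-noise factors used in the paper are deliberately omitted."""
    if tau_s is None:
        tau_s = 1.0 / bandwidth_hz
    sefd = 2.0 * K_B * T_sys / A_eff
    return sefd / np.sqrt(bandwidth_hz * tau_s) / JY

def model_independent_limit(aperture_m2_sr, t_hours, n_up=2.3):
    """Illustrative 90% C.L. isotropic flux limit ~ n_up / (aperture * time),
    in particles per m^2 per sr per s, for zero observed events."""
    return n_up / (aperture_m2_sr * 3600.0 * t_hours)

if __name__ == "__main__":
    # Placeholder numbers: an ORT-like system (200 K, 8000 m^2, 10 MHz) and a
    # GMRT-like incoherent array (assumed 100 K, 5203 m^2, 16 MHz).
    print(f"ORT-like rms  ~ {rms_noise_jy(200.0, 8000.0, 10e6):.0f} Jy")
    print(f"GMRT-like rms ~ {rms_noise_jy(100.0, 5203.0, 16e6):.0f} Jy")
    # Hypothetical aperture of 1e10 m^2 sr observed for 1000 h:
    print(f"flux limit ~ {model_independent_limit(1e10, 1000.0):.1e}"
          " m^-2 sr^-1 s^-1")
```

The thresholds and limits quoted in the tables and figures additionally fold in the signal-to-noise requirement, the single-polarisation factor for the ORT and the energy and frequency dependence of the aperture, none of which are modelled in this sketch.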
searching for the ultra high energy cosmic rays and neutrinos of xmath0 is of great cosmological importance a powerful technique is to search for the erenkov radio emission caused by uhecr or uhe neutrinos impinging on the lunar regolith we examine in this paper feasibility of detecting these events by observing with the giant metrewave radio telescope gmrt which has a large collecting area and operates over a wide frequency range with an orthogonal polarisation capability we discuss here prospects of observations of the erenkov radio emission with the gmrt at 140 mhz with 32 mhz bandwidth using the incoherent array and also forming 25 beams of the central array effective collecting area of 14250 xmath1 to cover the moon we also consider using the ooty radio telescope ort which was specially designed in 1970 for tracking the moon the ort consists of a 530 m long and 30 m wide parabolic cylinder that is placed in the north south direction on a hill with the same slope as the latitude of the station thus it becomes possible to track the moon for 95 hours on a given day by a simple rotation along the long axis of the parabolic cylinder ort operates at 325 mhz and has an effective collecting area of xmath2 8000 xmath3 recently a digital system has been installed by scientists of the raman research institute rri bangalore and the radio astronomy centre rac of ncra tifr at ooty allowing a bandwidth of 10 mhz with xmath2 40 ns sampling it is possible to form 6 beams covering the moon and 7th beam far away for discrimination of any terrestrial rfi increasing the bandwidth of the existing 12 beam analogue system of the ort from 4 mhz to 15 mhz to be sampled digitally is planned it is shown that by observing the moon for xmath4 1000 hrs using the ort it will provide appreciably higher sensitivity than past searches made elsewhere and also compared to the search being made currently in netherlands using the westerbork synthesis radio telescope wsrt at 140 mhz using the gmrt and ort it may be possible to reach sensitivity to test the waxman bachall limit based on theoretical arguments on the uhe particle flux
introduction estimated strength of radio waves from cascades in the lunar regolith prospects for searching using the giant metrewave radio telescope (gmrt) prospects for searching uhe cr and neutrinos using the ooty radio telescope(ort) measurement of dielectric constant and electrical conductivity of the lunar regolith sensitivity calculations for search of uhe neutrinos and discussions discussions and conclusion
the recent discovery of huge quantities of dust xmath5 in very high redshifted galaxies and quasars isaak et al 2002 bertoldi et al 2003 suggests that dust was produced efficiently in the first generation of supernovae sne theoretical studies kozasa et al 1991 todini ferrara 2001 hereafter tf nozawa etal 2003 n03 predicted the formation of a significant quantity of dust xmath6 xmath7 in the ejecta of type ii sne and the predicted dust mass is believed to be sufficient to account for the quantity of dust observed at high redshifts maiolino et al 2006 meikle et al recently a model of dust evolution in high redshift galaxies dwek et al 2007 indicates that at least 1 xmath7 of dust per sn is necessary for reproducing the observed dust mass in one hyperluminous quasar at xmath8 observationally the presence of freshly formed dust has been confirmed in a few core collapsed sne such as sn1987a which clearly have showed several signs of dust formation in the ejecta see mccray 1993 for details the highest dust mass obtained so far for sn 1987a is xmath9 xmath7 xcite spitzer and hst observations sugerman et al 2006 showed that up to 002 xmath7 of dust formed in the ejecta of sn2003gd with the progenitor mass of 612 xmath7 and the authors concluded that sne are major dust factories however from the detailed analysis of the late time mid infrared observations meikle et al 2007 found that the mass of freshly formed dust in the same sn is only xmath10 and failed to confirm the presence 002 xmath7 dust in the ejecta the aforementioned results show that the derived dust mass is model dependent and that the amount of dust that really condenses in the ejecta of core collapsed sne is unknown cassiopeia a cas a is the only galactic supernova remnant snr that exhibits clear evidence of dust formed in ejecta lagage et al 1996 arendt et al 1999 hereafter adm the amount of dust that forms in the ejecta of young snr is still controversial previous observations inferred only xmath11 of dust at temperatures between 90 and 350 k adm douvion et al 2001 hereafter d01 this estimate is 2 to 3 orders of magnitude too little to explain the dust observed in the early universe recent submillimeter observations of casa and kepler with scuba xcite revealed the presence of large amounts of cold dust xmath12 at 1520 k missed by previous iras iso observations on the other hand highly elongated conductive needles with mass of only 10xmath13 to 10xmath14 xmath7could also explain a high sub mm flux of cas a when including grain destruction by sputtering dwek 2004 though the physicality of such needles is doubtful xcite while xcite showed that much of the 160xmath0 m emission observed with multiband imaging photometer for spitzer mips is foreground material suggesting there is no cold dust in cas a xcite used co emission towards the remnant to show that up to about a solar mass of dust could still be associated with the ejecta not with the foreground material these controversial scenarios of dust mass highlight the importance of correctly identifying the features and masses of dust freshly formed in cas a the galactic young snr cas a allows us to study in detail the distribution and the compositions of the dust relative to the ejecta and forward shock with infrared spectrograph onboard the spitzer space telescope cas a is one of the youngest galactic snrs with an age of 335 yr attributed to a sn explosion in ad 1671 the progenitor of cas a is believed to be a wolf rayet star with high nitrogen abundance xcite and to have a mass of 15 
25 mxmath15 xcite or 29 30 mxmath15 xcite the predicted dust mass formed in sne depends on the progenitor mass for a progenitor mass of 15 to 30 mxmath15 the predicted dust mass is from 03 to 11 mxmath15 no3 and from 008 to 10 mxmath15 tf respectively in this paper we present spitzer infrared spectrograph irs mapping observations of cas a and identify three distinct classes of dust associated with the ejecta and discuss dust formation and composition with an estimate of the total mass of freshly formed dust we performed spitzer irs mapping observations covering nearly the entire extent of cas a on 2005 january 13 with a total exposure time of 113 hr the short low sl 5 15 xmath0 m and long low ll 15 40 xmath0 m irs mapping involved xmath1616xmath17 and xmath18 pointings producing spectra every 5xmath1 and 10xmath1 respectively the spectra were processed with the s12 version of the irs pipeline using the cubism package kennicutt et al 2003 smith et al 2007 whereby backgrounds were subtracted and an extended emission correction was applied the spectral resolving power of the irs sl and ll modules ranges from 62 to 124 the irs spectra of cas a show bright ejecta emission lines from ar ne s si o and fe and various continuum shapes as indicated by the representative spectra in figure 1 the most common continuum shape exhibits a large bump peaking at 21 xmath0 m as shown by spectrum a in figure sixspec this 21 xmath0m peak dust is often accompanied by the silicate emission feature at 98 xmath0 m which corresponds to the stretching mode a second class of continuum shapes exhibits a rather sharp rise up to 21 xmath0 m and then stays flat thereafter this weak21 xmath0 m dust is often associated with relatively strong ne lines in comparison with ar lines and is indicated by spectrum b in figure sixspec the third type of dust continuum is characterized by a smooth and featureless gently rising spectrum with strong and emission lines as shown by spectra c and d in figure sixspec the spectrum d shows double line structures that may be due to doppler resolved lines of at 26 xmath0 m and at 35 xmath0 m note that the featureless dust spectrum d in fig sixspec is a class of dust separate from the interstellar circumstellar dust spectrum e in fig sixspec heated by the forward shock the interstellar circumstellar dust spectrum in cas a has no associated gas line emission the broad continuum see figure 7b of ennis et al 2006 is a combination of the spectra c and e the spectrum c has contamination from the shock heated dust in projection and for simplicity it is excluded in estimating the masses of the freshly formed dust see 5 the featureless dust lacks the gentle peak around 26 xmath0 m and also lacks the interstellar silicate emission feature between 9 xmath0 m and 11 xmath0 m observed in the spectra from the forward shock region most importantly the featureless dust accompanies relatively strong si and s ejecta lines and mostly from the interior of the remnant blue region in fig imagesf we generated a map of the 21 xmath0m peak dust by summing the emission over 19 23 xmath0 m after subtracting a baseline between 18 19 xmath0 m and 23 24 xmath0 m the line free dust map fig imagesa resembles the ar ii and o ivfe ii ejecta line maps as shown in figures imagesb and imagesc and we also find that the ne ii map is very similar to the ar ii map the map shows a remarkable similarity to the 21 xmath0m peak dust map fig imagesa and imagesb thereby confirming this dust is freshly formed in the ejecta maps of fig 
imagesd and o ivfe ii fig imagesc shows significant emission at the center revealing ejecta that have not yet been overrun by the reverse shock unshocked ejecta there is also and o ivfe ii emission at the bright ring indicating that some of the si and ofe ejecta have recently encountered the reverse shock while the bright ofe emission outlines the same bright ring structure as the ar ii and 21 xmath0m peak dust maps the bright part of the si shell shows a different morphology from the other ejecta maps we can characterize the spectra of our three dust classes by using the flux ratios between 17 xmath0 m and 21 xmath0 m and between 21 xmath0 m and 24 xmath0 m although the spectra in cas a show continuous changes in continuum shape from strong 21 xmath0 m peak to weak 21 xmath0 m peak and to featureless we can locate regions where each of the three classes dominates figure imagesf shows the spatial distribution of our three dust classes where red green and blue indicate 21 xmath0m peak dust weak21 xmath0 m dust and featureless dust respectively the flux ratios used to identify the three dust classes are as follows where xmath19 is the flux density in the extracted spectrum at wavelength xmath20 xmath0 m 1 21 xmath0m peak dust we use the ratio xmath21 where xmath22 is the dispersion in xmath23 over the remnant which is equivalent to xmath24 the regions with 21 xmath0m peak dust coincide with the brightest ejecta 2 weak21 xmath0 m dust we use the ratio xmath25 which is equivalent to xmath26 the regions showing the weak21 xmath0 m continuum shape mostly coincide with faint ejecta emission but not always 3 featureless dust map we use the ratio xmath27 which is equivalent to xmath28 this ratio also picks out circumstellar dust heated by the forward shock so we used several methods to exclude and mitigate contamination from circumstellar dust emission first using x ray and radio maps we excluded the forward shock regions at the edge of the radio plateau xcite second there are highly structured continuum dominated x ray filaments across the face of the remnant which are similar to the exterior forward shock filaments and may be projected forward shock emission xcite for our analysis we excluded regions where there were infrared counterparts to the projected forward shock filaments third for simplicity we excluded regions with gently rising spectra identified by curve c the spectra which continues to rise to longer wavelengths in figure sixspec this type of spectrum is mainly found on the eastern side of cas a where there is an hxmath29 region the northeast jet and other exterior optical ejecta xcite making it difficult to determine if the continuum emission is due to ejecta dust or circumstellar dust however note that some portion of the continuum in the spectra c is freshly formed dust we finally excluded regions where there was a noticeable correlation to optical quasi stationary flocculi xcite which are dense circumstellar knots from the progenitor wind the featureless dust emission appears primarily across the center of the remnant as shown in figure imagesd blue the featureless dust is accompanied by relatively strong and and lines as shown by the spectrum d of figure sixspec the line map fig imagesc shows significant emission at the center as well as at the bright ring of the reverse shocked material the line map shows different morphology than other line maps and the 21 xmath0m peak dust map depicting center filled emission with a partial shell as shown in figure images this poses the 
following important question why is the si map more center filled than the ar map the answer is unclear because si and ar are both expected at similar depths in the nucleosynthetic layer eg woosley heger weaver 2002 the relatively faint infrared emission of si and s at the reverse shock may imply relatively less si and s in the reverse shock we suspect it is because the si and s have condensed to solid form such as mg protosilicate mgsioxmath3 mgxmath2sioxmath30 and fes in contrast ar always remains in the gas and does not condense to dust so it should be infrared or x ray emitting gas an alternative explanation is that the ionization in the interior is due to photoionization from the x ray shell see hamilton fesen 1988 in this case the lack of interior ar emission relative to si might be due to its much higher ionization potential 16 ev compared to 8 ev theoretical models of nucleosynthesis accounting for heating photoionization and column density of each element would be helpful for understanding the distribution of nucleosynthetic elements the si and s emission detected at the interior is most likely unshocked ejecta where the reverse shock has not yet overtaken the ejecta the radial profile of unshocked ejecta is centrally peaked at the time of explosion as shown by xcite the radial profile of unshocked fe ejecta is also expected to be center filled for the xmath161000 yr old type ia snr sn 1006 xcite the morphology of the featureless dust resembles that of unshocked ejecta supporting the conclusion that the featureless dust is also freshly formed dust the spectrum in figure sixspec curve d shows resolved double lines at 26 xmath0 m and at 35 xmath0 m the two lines at xmath1626 xmath0 m and the two at xmath1635 xmath0 m may be resolved pairs of distinct nearby lines as expected if the unshocked ejecta near the explosion center have a low velocity alternatively they could be highly doppler shifted lines in this case each pair at 26 xmath0 m and at 35 xmath0 m would be a doppler split doublet of a single line the newly revealed unshocked ejecta deserve extensive study preliminary doppler shifted maps were presented in xcite and the detailed analysis of velocities and abundances of unshocked and shocked ejecta will be presented in future papers xcite we performed spectral fitting to the irs continua using our example regions in figure sixspec included in the fitting are mips 24 xmath0 m and 70 xmath0 m fluxes xcite and the contribution of synchrotron emission figs 21umpeakspec and weak21umspec estimated from the radio fluxes xcite and infrared array camera irac 36 xmath0 m fluxes xcite we measured synchrotron radiation components for each position using radio maps and assuming a spectral index xmath29 of 071 xcite where log s xmath31 xmath29 log xmath32 because the full width half maximum at 24 xmath0 m is smaller than the irs extracted region the surface brightnesses for 24 xmath0 m were measured using a 15xmath1 box the same size as the area used for the extracted irs ll spectra we also made color corrections to each mips 24 xmath0 m data point based on each irs spectrum and band filter shape the correction was as high as 25 for some positions while the uncertainty of calibration errors in irac is 3 4 that of mips 24 xmath0 m is better than 10 the mips 70 xmath0 m image xcite shown in figure imagese clearly resolves cas a from background emission unlike the 160 xmath0 m image xcite most of the bright 70 xmath0 m emission appears at the bright ring and corresponds to the 21 xmath0 m dust map and the shocked ejecta particularly
indicating that the 70 xmath0 m emission is primarily from freshly formed dust in the ejecta the 70 xmath0 m emission also appears at the interior as shown in figure imagese we measured the brightness for 70 xmath0 m within a circle of radius 20xmath1 for each position accounting for the point spread function note that when the emission is uniform the aperture size does not affect the surface brightness we estimated the uncertainties of the 70 xmath0 m fluxes to be as large as 30 the largest uncertainty comes from background variation due to cirrus structures based on our selection of two background areas 5xmath33 to the northwest and south of the cas a the dust continuum is fit with the planck function bxmath34 multiplied by the absorption efficiency xmath35 for various dust compositions varying the amplitude and temperature of each component to determine the dust composition we consider not only the grain species predicted by the model of dust formation in sne tf n03 but also mg protosilicates adm and feo xcite as possible contributors to the 21 xmath0 m feature the optical constants of the grain species used in the calculation are the same as those of xcite except for amorphous si xcite amorphous sioxmath2 xcite amorphous alxmath2oxmath3 xcite feo xcite and we apply mie theory xcite to calculate the absorption efficiencies qxmath36 assuming the grains are spheres of radii xmath37 xmath0 m we fit both amorphous and crystalline grains for each composition but it turned out that the fit results in cas a see 3 favor amorphous over crystalline grains thus default grain composition indicates amorphous hereafter for mg protosilicate the absorption coefficients are evaluated from the mass absorption coefficients tabulated in xcite and we assume that the absorption coefficient varies as xmath38 for xmath39 40 xmath0 m typical for silicates we fit the flux density for each spectral type using scale factors xmath40 for each grain type xmath41 such that fxmath42 xmath43 note that the calculated values of qxmath36xmath44 are independent of the grain size as long as 2xmath45 xmath461 where xmath47 is the complex refractive index thus the derived scale factor cxmath48 as well as the estimated dust mass see 4 are independent of the radius of the dust the dust compositions of the best fits are summarized in table 1 the strong 21 xmath0m peak dust is best fit by mg proto silicate amorphous sioxmath2 and feo grains with temperatures of 60 120 k as shown in figure 21umpeakspec these provide a good match to the 21 xmath0 m feature adm suggested that the 21 xmath0 m feature is best fit by mg proto silicate while d01 suggested it is best fit by sioxmath2 instead we found however that sioxmath2 produced a 21 xmath0 m feature that was too sharp we also fit the observations using mgxmath2sioxmath30 which exhibits a feature around 20 xmath0 m and the overall variation of absorption coefficients of mgxmath2sioxmath30 with wavelength might be similar to that of mg protosilicate xcite however with mgxmath2sioxmath30 the fit is not as good as that of mg protosilicate not only at the 21xmath0 m peak but also at shorter 10 20 xmath0 m and longer 70 xmath0 m wavelengths thus we use mg protosilicate and sioxmath2 as silicates to fit the 21 xmath0m peak dust feature the fit with mg protosilicate sioxmath2 and feo is improved by adding aluminum oxide alxmath2oxmath3 83 k and fes 150 k where alxmath2oxmath3 improved the overall continuum shape between 10 70 xmath0 m and fes improved the continuum between 30 40 xmath0 m 
underneath the lines of si s and fe as shown in figure 21umpeakspec the silicate composition is responsible for the 21 xmath0 m peak suggesting that the dust forms around the inner oxygen and s si layers and is consistent with ar being one of the oxygen burning products we also include amorphous mgsioxmath3 480 k and sioxmath2 300 k to account for the emission feature around 98 xmath0 m the composition of the low temperature 40 90 k dust component necessary for reproducing the 70 xmath0 m flux is rather unclear either alxmath2oxmath3 80 k model a in table 1 or fe 100 k model b in table 1 and figure modelbspec can fit equally well as listed in table 1 we could use carbon instead of alxmath2oxmath3 or fe but the line and dust compositions suggest the emission is from the inner o s si layers where carbon dust is not expected there are still residuals in the fit from the feature peaking at 21 xmath0 m 20 23 xmath0 m and an unknown dust feature at 11 125 xmath0 m which is not part of a typical pah feature as shown in figure 21umpeakspec the former may be due to non spherical grains or different sizes of grains the weak 21 xmath0 m continuum is fit by feo and mgxmath2sioxmath30 or mg protosilicate models c and d in table 1 since the curvature of the continuum changes at 20 21 xmath0 m as shown in figure weak21umspec to fit the rest of the spectrum we use glassy carbon dust and alxmath2oxmath3 grains the glassy carbon grains 220 k can account for the smooth curvature in the continuum between 8 14 xmath0 m carbon dust 80 k and alxmath2oxmath3 100 k contribute to the continuum between 15 25 xmath0 m we could use fe dust instead but we suspect carbon dust because of the presence of relatively strong ne line emission with the weak 21 xmath0 m dust class ne mg and al are all carbon burning products we cannot fit the spectrum replacing carbon by alxmath2oxmath3 with a single or two temperatures because xmath49 of alxmath2oxmath3 has a shallow bump around 27 xmath0 m thus the fit requires three temperature components of alxmath2oxmath3 or a combination of two temperature components of alxmath2oxmath3 and a temperature component of carbon the continuum between 33 40 xmath0 m underneath the lines of si s and fe can be optimally fit by fes grains the 70 xmath0 m image in figure imagese shows interior emission similar to the unshocked ejecta but that may also be due to projected circumstellar dust at the forward shock in order to fit the featureless spectrum out to 70 xmath0 m we must first correct for possible projected circumstellar dust emission the exterior forward shock emission is most evident in the northern and northwestern shell taking the typical brightness in the nw shell xmath1620 mjy srxmath50 and assuming the forward shock is a shell with 12 radial thickness the projected brightness is less than 4 10 of the interior emission xmath1640 mjy srxmath50 after background subtraction we assume that the remaining wide spread interior 70 xmath0 m emission is from relatively cold unshocked ejecta using the corrected 70 xmath0 m flux the featureless spectra are equally well reproduced by three models models e f and g in table 1 and figures modelespec and flessspec all fits include mgsioxmath3 feo and si and either aluminum oxide or fe or a combination of the two is required at long wavelengths carbon dust can also produce featureless spectra at low temperature but we exclude this composition because of the lack of ne produced from carbon burning aluminum oxide and fe dust are far more likely to be associated with the
unshocked ejecta because they result from o burning and si burning respectively and the unshocked ejecta exhibit si s and ofe line emission however one of the key challenges in sn ejecta dust is to understand featureless dust such as fe c and aluminum oxide and to link it to the associated nucleosynthetic products we estimated the amount of freshly formed dust in cas a based on our dust model fit to each of the representative 21 xmath0m peak weak21 xmath0 m and featureless spectra fig sixspec the dust mass of xmath41grain type is given by xmath51 where xmath52 is the flux from xmath41grain species xmath53 is the distance xmath54 is the planck function xmath55 is the bulk density and xmath44 is the dust particle size by employing the scale factor xmath40 and the dust temperature xmath56 derived from the spectral fit the total dust mass is given by xmath57 where xmath58 is the solid angle of the source the total mass of the 21 xmath0m peak dust is then determined by summing the flux of all the pixels in the 21 xmath0m peak dust region red region in fig imagesf and assuming each pixel in this region has the same dust composition as the spectrum in fig 21umpeakspec we took the same steps for the weak21 xmath0 m dust and the featureless dust the estimated total masses for each type of dust using a distance of 34 kpc reed et al 1995 are listed in table 1 using the least massive composition in table 1 for each of the three dust classes yields a total mass of mxmath4 the sum of masses from models a d and f using the most massive composition for each of the three dust classes yields a total mass of mxmath4 the sum of masses from models b c and e the primary uncertainty in the total dust mass between and mxmath4 is due to the selection of the dust composition in particular for the featureless dust we also extracted a global spectrum of cas a but excluding most of the exterior forward shock regions the spectrum is well fit with the combination of our three types of dust including all compositions from models a g as shown in figure totalspec we used the dust composition of models a g as a guideline in fitting the global spectrum because the dust features which were noticeable in representative spectra were smeared out our goal in fitting the global spectrum is to confirm consistency between the mass derived from global spectrum and that derived from representative spectra described above the total estimated mass from the global spectrum fit is xmath160028 mxmath4 being consistent to the mass determined from the individual fits to each dust class the respective dust mass for each grain composition is listed in table 3 the masses of mgsioxmath3 sioxmath2 fes and si are more than a factor of ten to hundred smaller than the predictions the predictions n03 and tf also have the dust features at 9 xmath0 m for mgsioxmath3 21 xmath0 m for sioxmath2 and 30 40 xmath0 m for fes stronger than the observed spectra if the dust mass is increased the carbon mass is also a factor of 10 lower than the predictions we were not able to fit the data with as much carbon dust mass as expected even if we use the maximum carbon contribution allowed from the spectral fits we find an estimated total freshly formed dust mass of mxmath4 is required to produce the mid infrared continuum up to 70 xmath0 m the dust mass we derive is orders of magnitude higher than the two previous infrared estimates of 35xmath5910xmath14 mxmath4 and 77xmath59 10xmath60 mxmath4 which are derived by extrapolation from 16xmath5910xmath13 mxmath4 d01 and 
28xmath59 10xmath61 mxmath4 adm for selected knots respectively one of the primary reasons for our higher mass estimate is that we include fluxes up to 70 xmath0 m while the fits in d01 and adm accounted for dust emission only up to 30 and xmath1640 xmath0 m respectively the cold dust 40 150 k has much more mass than the warmer xmath62150 k dust in addition our irs mapping over nearly the entire extent of cas a with higher spatial and spectral resolutions provides more accurate measurements while d01 and adm covered only a portion of the remnant in addition adm use only mg protosilicate dust the absorption coefficient for mg protosilicate is a few times larger than those of other compositions our dust mass estimate is also at least one order of magnitude higher than the estimate of 3xmath59 10xmath14 mxmath4 by xcite they fitted msx and spitzer mips data with mg protosilicate note that they used only one composition they derived a freshly synthesized dust mass of 3xmath5910xmath14 mxmath4 at a temperature of 79 82 k and a smaller dust mass of 5xmath5910xmath61 mxmath4 at a higher temperature of 226 268 k and they explained that the mass estimate depends on the chosen dust temperature as adm mentioned the absorption coefficient for mg protosilicate is a few times larger than those of other compositions therefore even including the long wavelength data the estimated mass was small since only mg protosilicate was modeled with the photometry in xcite one could easily fit the data with only mg protosilicate and would not need additional grain compositions however with the accurate irs data many dust features and the detailed continnum shape could not be fit solely with the mg protosilicate note that the continuum shapes of weak 21 xmath0 m dust and featureless dust are very different from the shape of protosilicate absorption coefficient therefore it was necessary to include many other compositions in order to reproduce the observed irs spectra it should be noted here that in contrast with the previous works we introduced si and fe bearing materials such as si fe fes and feo we explain why we included such dust in our model fitting as follows firstly we included si and fe dust because these elements are significant outputs of nucleosynthesis indeed xcite show that si and fe are primary products in the innermost layers of the ejecta secondly we observed strong si and fe lines in the infrared and x ray spectra strong si lines were detected in the spitzer spectra as shown in figure sixspec also see d01 and the fe line detection at 179 xmath0 m is also shown in figure weak21umspec the fe maps at 179 xmath0 m and at 164 xmath0 m were presented in xcite and xcite respectively si and fe lines from ejecta are also bright in x ray emission xcite thirdly dust such as si fe feo and fes is predicted to form in the ejecta of population iii supernovae n03 tf and n03 predict fexmath3oxmath30 instead of feo in the uniformly mixed ejecta where the elemental composition is oxygen rich but the kind of iron bearing grains in oxygen rich layers of the ejecta is still uncertain partly because the surface energy of iron is very sensitive to the concentration of impurities such as o and s as was discussed by xcite and partly because the chemical reactions at the condensation of fe bearing dust is not well understood depending on the elemental composition and the physical conditions in the ejecta it is possible that fe feo andor fes form in the oxygen rich layers of galactic sne the observations of cas a favor feo dust 
over fexmath3oxmath30 in order to match the spectral shape of the 21 xmath0m peak dust and the weak21 xmath0 m dust this aspect should be explored theoretically in comparison with the observations in the future our total mass estimate is also about one order of magnitude higher than the estimate of 69xmath5910xmath14 mxmath4 by xcite who used iras fluxes possibly confused by background cirrus and assumed a silicate type dust as stellar or supernova condensates being present in supernova cavity and heated up by the reverse shock our estimated mass is much less than 1 mxmath4 which xcite suggested may still be associated with the ejecta after accounting for results of high resolution co observations our estimated mass of to mxmath15 is only derived for wavelengths up to 70 xmath0 m so it is still possible that the total freshly formed dust mass in cas a is higher than our estimate because there may be colder dust present future longer wavelength observations with herschel scuba2 and alma are required to determine if this is the case also note that we did not include any mass from fast moving knots projected into the same positions as the forward shock such as in the northeast and southwest jets and the eastern portions of the snr outside the 21 xmath0m peak dust region see fig imagese because such dust could not be cleanly separated from the interstellar circumstellar dust we can use our dust mass estimate in conjunction with the models of n03 and tf to understand the dust observed in the early universe if the progenitor of cas a was 15 mxmath4 our estimated dust mass mxmath15 is 718 of the 03 mxmath15 predicted by the models if the progenitor mass was 30 mxmath4 then the dust mass is 25 of the 11 mxmath15 predicted by the models one reason our dust mass is lower than predicted by the models is that we can not evaluate the mass of very cold dust residing in the remnant from the observered spectra up to 70 xmath0 m as described above unless the predicted mass is overestimated another reason is that when and how much dust in the remnant is swept up by the reverse shock is highly dependent on the thickness of the hydrogen envelope at the time of explosion and that the evolution and destruction of dust grains formed in sne strongly depend not only on their initial sizes but also the density of ambient interstellar medium nozawa et al dust formation occurs within a few hundred days after the sn explosion kozasa et al 1989 tf n03 without a thick hydrogen envelope given an age for cas a of xmath16300 years a significant component of dustmay have already been destroyed if dust grains formed in the ejecta were populated by very small sized grains otherwise it is possible that some grain types may be larger which would increase the inferred mass we observed most of the dust compositions predicted by sn type ii models and the global ejecta composition is consistent with the unmixed case n03 model than mixed case model however note that different morphologies of ar and si maps imply that some degree of mixing has occurred our estimated dust mass with spitzer data is one order of magnitude smaller than the predicted models of dust formation in sne ejecta by n03 and tf but one to two orders of magnitude higher than the previous estimations we now compare the dust mass in high redshift galaxies with the observed dust mass of cas a based on the chemical evolution model of morgan edmunds 2001 by a redshift of 4 sne have been injecting dust in galaxies for over 2 billion years and there is enough dust from 
sne to explain the lower limit on the dust masses xmath167xmath5910xmath63 mxmath4 inferred in submm galaxies and distant quasars xcite it should be noted with the dust mass per sn implied by our results for cas a alone the interpretation of dust injection from sne is limited because the amount of dust built up over time is strongly dependent on the initial mass function stellar evolution models and star formation rates xcite and destruction rates in supernova are believed to be important at timescales greater than a few billion years additional infrared submm observations of other young supernova remnants and supernovae are crucial to measure physical processes of dust formation in sne including the dust size distribution composition and dependence on nucleosynthetic products and environment and to understand the dust in the early universe in terms of dust injection from sne we presented spitzer irs mapping covering nearly the entire extent of cas a and examined if sne are primary dust formation sites that can be used to explain the high quantity of dust observed in the early universe the irs spectra of cas a show a few dust features such as an unique 21 xmath0 m peak in the continuum from mg protosilicate sioxmath2 and feo we observed most of the dust compositions predicted by sn type ii dust models however the dust features in cas a favour mg protosilicate rather than mgxmath2sioxmath30 and feo rather than fexmath3oxmath30 the composition infers that the ejecta are unmixed our total estimated dust mass with spitzer observations ranging from 55 70 xmath0 m is mxmath15 one order of magnitude smaller than the predicted models of dust formation in sne ejecta by n03 and tf but one or more orders of magnitude higher than the previous estimations the freshly formed dust mass derived from cas a is sufficient from sne to explain the lower limit on the dust masses in high redshift galaxies j rho thanks u hwang for helpful discussion of x ray emission of cas a this work is based on observations made with the spitzer space telescope which is operated by the jet propulsion laboratory california institute of technology under nasa contract 1407 partial support for this work was provided by nasa through an go award issued by jpl caltech arendt r g dwek e moseley s h 1999 521 234 adm begemann b et al 1997 apj 476 1991 bertoldi f carilli c l cox p fan x strauss m a beelen a omont a zylka r 2003 406 55 bohren c f huffman d r 1983 absorption and scattering of light by small particles new york chevalier r soker1989 apj 341 867 chini r kruegel e 1994 aa 288 l33 clayton dd deneault e a n meyer bs 2001 apj 562 480 delaney t 2004 phd thesis u of minnesota delaney t smith j rudnick l ennis j rho j reach w kozasa t gomez h 2006 baas 208 5903 delaney t smith j rudnick l ennis j rho j reach w kozasa t gomez h 2007 in preparation dorschner j friedmann c gtler j duley w w 1980 apss 68 159 douvion t lagage p o pantin e 2001 369 589 d01 dunne l eales s ivison r morgan h edmunds m 2003 424 285 dwek e hauser m g dinerstein h l gillett f c rice w 1987 315 571 dwek e 2004 607 848 dwek e galliano f jones a p 2007 apj 662 927 ennis j et al 2006 apj 652 376 ercolano b barlow m j sugerman b e k 2007 mnras 375 753 fesen r a 2001 133 161 gotthelf e v et al 2001 apjl 552 39 gomez h dunne l eales s gomez e edmunds m 2005 mnras 361 1012 gao y carilli cl solomon pm vanden bout pa 2007 apj 660 l93 hamilton a j fesen r 1988 apj 327 178 henning th begemann b mutschke h dorschner j 1995 aa suppl 112 143 isaak k g priddey r s mcmahon r g 
omont a et al 2002 329 149 jger c dorschner j mutschke h posch th henning th 2003 aa 408 193 kennicutt rc et al 2003 pasp 115 928 philipp h 1985 handbook of optical constamts of solids ed e d palik academic press san diego749 piller h 1985 handbook of optical constamts of solids ed e d palik academic press 571 reed je hester jj fabian ac winkler pf 1995 apj 440 706 rho j reynolds sp reach wt jarrett th allen ge wilson jc 2003 apj 592 299 smail i ivison rj blain aw 1997 apj 490 l5 smith j d t armus l dale da roussel h sheth k buckalew ba jarrett t h helou g kennicutt r c 2007 pasj submitted sugerman ben e k et al 2006 science 313 196 todini p ferrara a 2001 mnras 325 726 tf woosley s e a heger weaver t a 2002 reviews of modern physics 74 1015 young p a et al 2006 apj 640 891 whittet dcb 2003 dust in the galactic environment second edition iop cambridge university press uk llccccccccccccclll catalog 21xmath0m peak a a mg protosilicate mgsioxmath3 sioxmath2 feo fes si alxmath2oxmath3 ar inner o s si 00030 21xmath0m peak b mg protosilicate mgsioxmath3 feo sioxmath2 feo fes si fe ar inner o s si 00120 weak21xmath0 m b c c glass feo alxmath2oxmath3 si mgxmath2sioxmath30 ne si ar s ofe c burning 00180 weak21xmath0 m d c glass feo alxmath2oxmath3 si fes mg protosilicate ne si ar s ofe c burning 00157 featureless d e mgsioxmath3 si fes fe mgxmath2sioxmath30 si s ofe o al burning fe si s 00245 featureless f mgsioxmath3 si fes fe alxmath2oxmath3 si s ofe o al burning fe si s 00171 featureless g mgsioxmath3 si fes alxmath2oxmath3 mgxmath2sioxmath30 si s ofe o al burning fe si s 00009 lllllllllllllll dustmasstab alxmath2oxmath3 666e05 083 000e00 000 513e05 105 103e04 100 000e00 000 813e04 050 650e04 060 c glass 000e00 000 000e00 000 208e03 80180 107e03 80220 000e00 000 000e00 000 000e00 000 mgsioxmath3 119e08 480 119e08 480 000e00 000 000e00 000 255e05 110 319e05 110 255e05 110 mgxmath2sioxmath30 000e00 000 000e00 000 789e05 120 000e00 000 172e06 130 000e00 000 300e06 130 mg protosilicate 500e05 120 467e05 120 000e00 000 377e05 120 000e00 000 000e00 000 000e00 000 sioxmath2 223e03 060300 140e03 065300 000e00 000 000e00 000 000e00 000 000e00 000 000e00 000 si 434e04 096 434e04 100 163e03 090 817e03 080 932e04 090 124e04 120 621e05 120 fe 000e00 075 982e03 110 000e00 000 000e00 000 216e02 95135 136e02 100150 000e00 000 feo 113e04 105 211e04 095 139e02 060 597e03 065 000e00 000 000e00 000 000e00 000 fes 120e04 150 211e04 150 000e00 000 340e04 120 194e03 055 259e03 055 129e04 100 llllllllll dustmasstab alxmath2oxmath3 240e04 xmath16 900e03 820e04 51300e05 122e04 105 carbon 700e02 xmath16 300e01 107e03 20767e03 204e03 070265 mgsioxmath3 200e03 xmath16 700e3 255e05 25500e05 165e04 110 mgxmath2sioxmath30 370e02 xmath16 440e1 300e06 80620e05 321e05 120 mg protosilicate none 877e05 46710e05 670e05 110 sioxmath2 250e02 xmath16 1400e01 223e03 13964e03 135e03 065 si 700e02 xmath16 300e01 866e03 29989e03 442e03 080 fe 200e02 xmath16 400e02 000e00 31459e02 103e02 090 feo none 608e03 14136e02 623e03 070 fes 400e02 xmath16 110e01 590e04 21501e03 290e03 090
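as an aside to the per composition masses tabulated above, the sketch below evaluates one common form of the optically thin dust mass relation, m_dust = 4 a rho f_nu d^2 / (3 q_nu b_nu(t_dust)), built from the same ingredients quoted earlier in the text, namely the flux, the distance, the planck function, the bulk density and the grain size; the 3.4 kpc distance is the value adopted in the text, while the flux, temperature, q value, grain size and bulk density below are hypothetical placeholders rather than fitted quantities.

```python
# hedged sketch of an optically thin dust-mass estimate,
# M_d = 4 a rho F_nu d^2 / (3 Q_nu B_nu(T_d)); all inputs except the
# 3.4 kpc distance are illustrative placeholders.
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16      # cgs constants
M_SUN, KPC = 1.989e33, 3.086e21               # g, cm

def planck_nu(nu, T):
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def dust_mass(f_nu_jy, wav_um, T_d, Q, a_cm, rho, d_kpc=3.4):
    nu = c / (wav_um * 1e-4)
    f_nu = f_nu_jy * 1e-23                    # Jy -> erg s^-1 cm^-2 Hz^-1
    d = d_kpc * KPC
    return 4 * a_cm * rho * f_nu * d**2 / (3 * Q * planck_nu(nu, T_d))

# hypothetical 70 um flux and grain properties, for illustration only
m = dust_mass(f_nu_jy=10.0, wav_um=70.0, T_d=80.0, Q=1e-3, a_cm=1e-5, rho=3.0)
print(f"dust mass ~ {m / M_SUN:.2e} Msun")
```

summing such per composition masses over all pixels of each dust class is what produces the totals quoted in the text.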
we performed spitzer infrared spectrograph mapping observations covering nearly the entire extent of the cassiopeia a supernova remnant snr producing mid infrared 55 35 xmath0 m spectra every 5xmath1 10xmath1 gas lines of ar ne o si s and fe and dust continua were strong for most positions we identify three distinct ejecta dust populations based on their continuum shapes the dominant dust continuum shape exhibits a strong peak at 21 xmath0 m a line free map of 21 xmath0m peak dust made from the 19 23 xmath0 m range closely resembles the ar ii o iv and ne ii ejecta line maps implying that dust is freshly formed in the ejecta spectral fitting implies the presence of sioxmath2 mg protosilicates and feo grains in these regions the second dust type exhibits a rising continuum up to 21 xmath0 m and then flattens thereafter this weak 21 xmath0 m dust is likely composed of alxmath2oxmath3 and c grains the third dust continuum shape is featureless with a gently rising spectrum and is likely composed of mgsioxmath3 and either alxmath2oxmath3 or fe grains using the least massive composition for each of the three dust classes yields a total mass of mxmath4 using the most massive composition yields a total mass of mxmath4 the primary uncertainty in the total dust mass stems from the selection of the dust composition necessary for fitting the featureless dust as well as 70 xmath0 m flux the freshly formed dust mass derived from cas a is sufficient from sne to explain the lower limit on the dust masses in high redshift galaxies
introduction the irs spectra and dust maps spectral fitting and dust composition dust mass discussion conclusion
x ray reflection off the surface of cold disks in active galactic nuclei agn and galactic black holes gbhs has been an active field of research since the work of xcite in early studies the illuminated material was assumed to be cold and non ionized xcite it was soon realized however that photoionization of the disk can have a great impact on both the reflected continuum and the iron fluorescence lines detailed calculations were then carried out by xcite and xcite however in all of these papers the density of the illuminated material was assumed to be constant along the vertical direction this assumption applies only to the simplest version of radiation dominated shakura sunyaev disks xcite and only for the portion where viscous dissipation is the dominating heating process for the surface layers however photoionization and compton scattering are the major heating sources therefore the approximation of constant density is not appropriate moreover thermal instability allows the coexistence of gas at different phases these different phases have very different temperatures and hence different densities to keep the gas in pressure balance recently xcite relaxed the simplifying assumption of constant gas density they determined the gas density from hydrostatic balance solved simultaneously with ionization balance and radiative transfer they made an important observation that the thomson depth of the hot coronal layer can have great influence on the x ray reprocessing produced by the deeper and much cooler disk in order to simplify the calculation of the vertical structure though they ignored thermal conduction and the effects of transition layers between the different stable phases a discontinuous change in temperature was allowed whenever an unstable phase was encountered they argued that such transition layers are of little importance because their thomson depths are negligibly small however without taking into account the role of thermal conduction their method of connecting two different stable layers is rather ad hoc moreover even though the thomson depths of these transition layers are small it does not guarantee that the x ray emission and reflection from such layers are negligible because the temperature regime where the transition layers exist is not encountered in the stable phases some of the most important lines can have appreciable emissivity only in these layers also since resonance line scattering has much larger cross section than thomson scattering the optical depths in resonance lines can be significant including thermal conduction in the self consistent solution of the vertical structure presents a serious numerical challenge the difficulties are due to the coupling between hydrostatic balance radiative transfer and heat conduction xcite first studied the phase equilibrium of a gas heated by cosmic rays and cooled by radiation they found that taking into account heat conduction in the boundary layer allows one to obtain a unique solution of the stable equilibrium xcite calculated the full temperature profile for a compton heated corona and xcite calculated the static conditions of the plasma for different forms of heating and cooling but they did not include much discussion of the spectroscopic signatures resulting from the derived vertical structure in this paper we first calculate the temperature structure in the layers above the accretion disk then calculate the emission lines via radiative recombination rr and reflection due to resonance line scattering from the derived 
layers certain illuminating continua spectra allow more than two stable phases to coexist with two transition layers connected by an intermediate stable layer for the transition layer since the thomson depth is small the ionizing continuum can be treated as constant and since its geometric thickness is smaller than the pressure scale height the pressure can be treated as constant as well we can thus obtain semi analytic solution of the temperature profile by taking into account thermal conduction for the intermediate stable layer its thickness is determined by the condition of hydrostatic equilibrium in our model the normally incident continuum has a power law spectrum with an energy index of xmath0 we also assume a plane parallel geometry and that the resonance line scattering is isotropic the structure of this paper is as follows in secstructure we discuss the existence of the thermal instability and compute the thermal ionization structure of the transition layers in secspectrum we calculate the recombination emission lines and the reflection due to resonance line scattering in secsummary we summarize the important points of the calculations the validity of various approximations made in the calculations and the detectability of the recombination emission and reflection lines the vertical structure of an x ray illuminated disk at rest is governed by the equations of hydrostatic equilibrium and of energy conservation xmath1 in the first equation xmath2 is the force density due to gravity and radiation pressure the dependence of the force on the plasma density is included explicitly through the hydrogen density xmath3 in the second equation a time independent state is assumed xmath4 is the thermal conductivity and xmath5 is the net heating rate depending on the gas state and the incident flux xmath6 differential in energy we neglect the effects of magnetic field and adopt the spitzer conductivity appropriate for a fully ionized plasma xmath7 erg xmath8 sxmath9 kxmath9 xcite we have used the classical heat flux xmath10 in equation eqtransition because the electron mean free path is short compared to the temperature height scale since the continuum flux may change along the vertical height in principle the above two equations must be supplemented by an equation for radiative transfer a self consistent solution of such equations is difficult to obtain in the following we invoke a few physically motivated approximations which make the problem tractable first in thermally stable regions the gas temperature is slowly varying the heat conduction term in the energy balance equation can be neglected therefore the temperature can be determined locally with the condition xmath11 it is well known xcite that the dependence of xmath12 on the gas pressure xmath13 and the illuminating continuum xmath6 can be expressed in the form of xmath14 where xmath15 is the electron density xmath16 is the net cooling rate per unit volume and xmath17 is an ionization parameter defined by xmath18 where xmath19 is the total flux of the continuum and xmath20 is the speed of light in figure figscurve we show the local energy equilibrium curve xmath21 versus xmath17 at xmath22 calculated with the photoionization code xstar xcite these curves are commonly referred to as s curves due to their appearance the illuminating continuum is assumed to be a power law with energy index xmath0 the solid line labeled with s curve 1 corresponds to a low energy cutoff at 1 ev and a high energy cutoff at 150 kev while the dashed line s 
curve 2 corresponds to a high energy cutoff at 200 kev the choice of such incident spectra is based on their common appearance in many agns and gbhcs the region is thermally unstable where the s curve has a negative slope and stable where the slope is positive as indicated in figure figscurve in the thermally unstable regions we have xmath23 where the derivative is taken while the energy balance is satisfied ie xmath24 this condition was shown xcite to be equivalent to the instability condition discovered by xcite xmath25 the thermal instability allows the gas to coexist at different phases the gas temperature may change by orders of magnitude over a geometric thin region whenever an unstable phase separates two stable ones this results in enormous temperature gradients and heat conduction therefore the heat conduction in the energy balance equation should be included in such transition layers between stable phases on the other hand the thicknesses of these transition layers are usually smaller than the pressure scale height so one can safely treat the gas pressure as constant in these regions moreover the continuum radiative transfer can be neglected because the compton optical depth is found to be small the vertical structure of the transition regions is then solely determined by the energy balance equation with heat conduction the thomson optical depth xmath26 of such regions is readily estimated by xmath27 where xmath28 is the thomson scattering cross section the transition layer solution is not arbitrary under the steady state conditions ie where there is no mass exchange between the two stable phases which the transition layer connects a similar problem in the context of interstellar gas heated by cosmic rays was well studied by xcite we follow their procedure here and define xmath29 in order to rewrite equation eqtransition in the form xmath30 a steady state requires vanishing heat flux at both boundaries of the transition layer or xmath31 where xmath32 and xmath33 are the temperatures of the two stable phases which are connected by a transition layer this condition determines a unique ionization parameter xmath17 for the transition layer in question and the integration of equation eqdy2 along the vertical height gives the detailed temperature profile as a function of optical depth if the disk does not realize the steady state solution there are additional enthalpy terms in equation eqtransition which require that there be mass flow through the transition region ie the cool material in the disk evaporates or the hot material in the corona condenses xcite physically this corresponds to a movement of the transition layer up or down through the vertical disk structure however since the density increases monotonically toward the center of the disk this motion stops where the transition layer reaches the steady state value of xmath17 thus in the absence of disk winds or continuous condensation from a disk corona the steady state solution should generally be obtained there is a complication for the s curves shown in figure figscurve because in each curve there exist two unstable regions and therefore there should be two transition layers for s curve 1 condition eqy2 can be met for both transition layers with resulting ionization parameters xmath34 where xmath35 and xmath36 are associated with the transition layer which connects to the lowest temperature phase and highest temperature phase respectively for s curve 2 the resulting ionization parameters for two transition layers however 
satisfy xmath37 such a situation is unphysical where the ionization parameter of the upper transition layer is smaller than that of the lower one because in the context of accretion disks the upper layer receives more ionizing flux and has lower pressure so in practice for s curve 2 the intermediate stable region is skipped and a transition layer connects the lowest temperature phase to the highest temperature phase directly the ionization parameter of this transition layer is determined the same way by applying equation eqy2 there is then an intermediate stable layer of nearly uniform temperature in between these two transition layers for s curve 1 as indicated with bc in figure figscurve the thickness of the intermediate layer should in principle be obtained by solving the coupled equations of hydrostatic equilibrium energy balance and radiative transfer however unlike the stable phase at the disk base or that of the corona which may be compton thick this intermediate stable layer is generally optically thin because its optical depth is restricted by the difference between xmath35 and xmath36 furthermore the temperature in this layer is slowly varying and therefore heat conduction can be neglected we shall make another approximation that the variation of the force density xmath2 in the hydrostatic equation can also be neglected this may not be a good approximation however since our main purpose is to investigate qualitatively the effects of an intermediate stable layer such a simplifying procedure does capture the proper scaling relations of the problem and has the advantage of less specific model dependence writing equation eqhydro in a dimensionless form and parameterizing the force factor by a dimensionless parameter xmath38 we obtain xmath39 the parameter a defined here is identical to the gravity parameter in xcite in the absence of radiation pressure the integration of this equation from xmath35 to xmath36 gives the thomson depth of the intermediate stable layer assuming radiation pressure can be neglected the force factor at a given radius xmath40 of the disk can be estimated as xmath41 where xmath42 is the luminosity of the continuum source in units of the eddington limit xmath43 is the half thickness of the disk at radius xmath40 xmath44 is the mass of the central source xmath45 is the proton mass and xmath46 is the eddington luminosity in this estimate we have assumed that xmath47 this gives xmath48 the value xmath49 for a typical thin disk as those present in agn and black hole binaries one expects xmath50 if the luminosity is sub eddington xmath51 as in most agns xmath48 is of order unity however since the disk surface may not be normal to the continuum radiation xmath19 may be only a fraction of the value assumed above which increases xmath48 by one or two orders of magnitude on the other hand as the source approaches the eddington limit xmath48 may become smaller than 1 therefore we expect xmath48 to be in the range of 01 10 the exceptional cases of much smaller and larger xmath48 are discussed in secsummary the temperature profiles and optical depths of the transition layers and possible intermediate stable layer are shown in figures figtransition three labeled curves in solid lines correspond to s curve 1 in figure figscurve with different values of the force factors xmath48 10 1 and 01 respectively the dashed curve corresponds to s curve 2 where there is no intermediate stable layer for each of the solid curves three layers are clearly seen with two transition layers 
being connected by an intermediate stable layer as illustrated for the case of xmath52 the smaller xmath48 produces a more extended intermediate stable layer as expected because the stable phase with the lowest temperature is almost neutral and the stable phase at the highest temperature is almost fully ionized they are not efficient in generating x ray line emission except for iron fluorescence lines from the neutral material only the transition layers and the intermediate stable layer are expected to emit discrete lines in the soft x ray band in a photoionized plasma the temperature is too low for collisional excitation to be an important line formation process instead radiative recombination rr followed by cascades dominates the line emission the flux of a particular line can be written as xmath53 where xmath54 is the density of the ion before recombination xmath55 is the line energy and xmath56 is the line emissivity defined as in xcite in ionization equilibrium where the ionization rate is equal to the recombination rate we have xmath57 where xmath58 is the recombination coefficient of ion xmath59 xmath60 is the number density of ion xmath61 xmath6 is again the monochromatic incident flux differential in energy and xmath62 is the photoionization cross section of ion xmath61 defining the branching ratio xmath63 equation fluxemission can be rewritten as xmath64 where xmath65 is the fractional abundance of ion xmath61 with respect to the electron density as indicated xmath66 and xmath67 depend only on temperature xmath21 and ionization parameter xmath17 which are both functions of optical depth xmath68 for convenience we further define the emission equivalent width xmath69 then if xmath70 which depends only on the shape of the incident continuum xmath71 can be written as xmath72 therefore the emission equivalent width xmath71 is independent of the density of the medium and the incident flux it is a unique function of the structure deduced in section secstructure all other variables depend only on xmath21 and xmath17 the numerical values of xmath73 were provided by d liedahl private communication and were calculated using the models described in xcite the values of xmath74 were computed using xstar xcite in figure figemission we plot the spectra of the recombination emission within the 05 15 kev band with a spectral resolution xmath75 ev which is close to the spectral resolution of the grating spectrometers on chandra and xmm newton the top panel corresponds to the case without an intermediate stable layer and the bottom panels corresponds to the case with an intermediate layer for xmath76 and 01 respectively for clarity from top to bottom the flux in each panel is multiplied by a factor indicated in each panel it appears that the existence of an intermediate stable layer enhances the emission in this energy band this is not surprising since the ions that are responsible for these lines have peak abundances at temperatures close to that of the intermediate stable layer in all cases the equivalent widths ews of the emission lines are less than 10 ev with respect to the ionizing continuum the strongest lines are the hydrogen like and helium like lines of oxygen with ews approaching several tenths of an ev hydrogen like and helium like lines from iron outside the plotted energy band are somewhat stronger with ews reaching a few ev we note that our low equivalent width values conflicts with those derived by xcite who found some lines with equivalent widths as high as 30 ev however since they 
did not consider the appropriate locations for the transition regions in xmath17space their intermediate stable layer subtended a much larger optical depth than we find here naturally with a thicker layer they found larger equivalent widths emission from rr is not the only line formation process in the transition layers and the intermediate stable layer due to very large cross sections in resonance line scattering the reflected flux in these linesmay be significant with the computed thermal and ionization structure the column density in each ion and the optical depth in all resonance lines can be calculated straightforwardly the cross sections for resonance line scattering depend on the line broadening we assume thermal doppler effects as the only mechanism although the gas temperature is a function of depth we calculate the line width for a temperature where the abundance of each ion peaks as an average and assume that the resonance scattering cross sections are uniform along the vertical direction in terms of absorption oscillator strength xmath77 this cross section of the resonance line scattering xmath78 can be written as xmath79 where xmath80 is the electron mass xmath81 is the electron charge xmath82 is the wavelength of the line and xmath83 is the average line width in wavelength under the assumption of thermal doppler broadening xmath84 where xmath85 is the temperature at which the ion abundance peaks and xmath86 is the ion mass the resonance scattering optical depth xmath87 for a line from ion xmath59 can be estimated as xmath88 where xmath89 is the column density of the ion the radiative transfer in the line is a complicated issue xcite a full treatment is beyond the scope of this work however since we are only interested in a reasonable estimate of the reflected line flux a simple approach may be adopted we assume the resonance line scattering is isotropic and conservative and neglect the polarization dependence under such conditions the reflection and transmission contributions by a plane parallel slab of finite optical depth xmath90 have been solved by xcite for normal incident flux xmath19 the angle dependent reflectivity is xmath91 where xmath92 xmath93 is the reflectivity at xmath94 xmath95 is the reflected intensity xmath19 is the incident flux and xmath96 is the scattering function defined as xmath97 where xmath98 and xmath99 are two functions that satisfy the following integral equations xmath100dmuprime nonumber ymu etau mufracmu2int01frac1mumuprimeymuxmuprimexmuymuprimedmuprimeendaligned the solutions of these equations may be obtained by an iterative method with the starting point xmath101 and xmath102 the angle integrated reflectivity xmath103 can be calculated as xmath104 and is shown in figure figtotref as a function of the resonance scattering optical depth xmath87 the reflected flux in a line can be written as xmath105 where xmath106 is the line width in energy similarly to the emission equivalent width xmath71 we define a reflection equivalent width xmath107 which results in xmath108 this reflection equivalent width from our numerical results is a few tenths of an ev for strong resonance lines similar to that of the recombination emission lines in figure figreflection we plot the spectra of the resonantly scattered lines in the energy band 05 15 kev with a spectral resolution xmath75 ev the top panel corresponds to the case without an intermediate stable layer and the bottom panels corresponds to the case with an intermediate layer for xmath76 and 01 
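to give a feeling for the size of the resonance scattering optical depths invoked above, the sketch below evaluates the standard thermal doppler line centre cross section, sigma_0 = sqrt(pi) e^2 f lambda^2 / (m_e c^2 delta_lambda_d) with delta_lambda_d = (lambda/c) sqrt(2 k t / m_ion), and the corresponding tau = n_ion sigma_0; the o viii ly alpha wavelength and hydrogenic oscillator strength are standard values, while the temperature and ion column density are hypothetical inputs, not numbers taken from the layer structure computed in this paper.

```python
# hedged sketch: thermal-doppler line-centre cross section for resonance
# scattering and the resulting optical depth tau = N_ion * sigma_0.
# temperature and column density are hypothetical inputs.
import numpy as np

c = 2.998e10            # cm s^-1
k_b = 1.381e-16         # erg K^-1
m_p = 1.673e-24         # g
r_e = 2.818e-13         # classical electron radius e^2/(m_e c^2), cm

def resonance_tau(wav_ang, f_osc, a_ion, T, n_col):
    lam = wav_ang * 1e-8                         # cm
    v_th = np.sqrt(2 * k_b * T / (a_ion * m_p))  # thermal speed of the ion
    dlam = lam * v_th / c                        # doppler width in wavelength
    sigma0 = np.sqrt(np.pi) * r_e * f_osc * lam**2 / dlam
    return n_col * sigma0

# O VIII Ly-alpha (18.97 A, hydrogenic f ~ 0.42); T and N_ion are assumptions
tau = resonance_tau(wav_ang=18.97, f_osc=0.42, a_ion=16.0, T=1e6, n_col=1e16)
print(f"line-centre optical depth ~ {tau:.1f}")
```

even a modest ion column gives tau well above unity, which is why the reflection in the strongest lines saturates, as noted below.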
for clarity, from top to bottom the flux in each panel is multiplied by a factor of 100. we see that the equivalent widths of the reflected lines are notably enhanced when there is an intermediate stable layer, but not as significantly as for the recombination emission lines; this is because the optical depths of many strong lines become much larger than unity and the reflection is saturated. if there are broadening mechanisms other than thermal doppler effects, such as a turbulent velocity, the reflected intensity can be further enhanced. in order to gain a crude idea of the relative importance of recombination emission and reflection from the transition layers and the intermediate stable layer, we compare them to the hump produced by compton scattering off a cold surface xcite. we use the greens function obtained by xcite to calculate the compton reflection; this method was verified to be accurate with a monte carlo procedure by xcite. in figure figspectrum we show the combined spectra, including recombination emission, reflection lines from resonance line scattering and the compton reflection hump. we now summarize the most important conclusions that can be drawn from the calculations presented in this paper, and we also discuss the detectability of the predicted line features. 1 the unique ionization parameters that characterize the steady state solutions of the transition layers depend on the shape of the s curve. we have shown that two power law illuminating spectra with different high energy cutoffs produce very different temperature profiles: the harder spectrum only allows one transition layer, even though there are two unstable branches in the s curve, while the softer one allows two separate transition layers connected by an intermediate stable layer. this is due to the fact that the ionization parameter of the upper transition layer must be larger than that of the lower one if they are to exist separately in a disk environment. the harder spectrum produces a turnover point of the upper branch of the s curve at smaller xmath17, so the transition layer due to the upper unstable region joins the lower one smoothly without allowing the intermediate stable region to form. the turnover of the upper s curve represents the point where compton heating starts to overwhelm bremsstrahlung; the ionization parameter at which this point occurs is related to the compton temperature of the continuum xmath109 xcite. a harder spectrum has larger xmath110, therefore the intermediate stable layer tends to disappear for hard incident spectra. 2 although the thomson depths of the transition layers and of a possible intermediate stable layer are generally negligible, the x ray emission lines from them may comprise the main observable line features, because the temperatures of these layers are inaccessible to the stable phases and thus some of the important lines can have appreciable emissivity only in these layers. due to the much larger cross sections for resonance line scattering, reflection in resonance lines off such transition layers is also important; the strengths of the reflected lines are at least comparable with those of the recombination emission lines when there is no intermediate stable layer. because the appearance of the reflected line spectrum is different from that of the recombination emission spectrum, high resolution spectroscopic observations should be able to distinguish these mechanisms. 3 the justification of the assumption that the ionizing continuum does not scatter in the intermediate layer depends on
the magnitude of the parameter xmath48 the thomson depth of this layer xmath68 is given by xmath111 for the power law continuum with high energy cutoff at 150 kev s curve 1 xmath112 and xmath113 from our numerical results therefore xmath114 xmath68 is much less than unity as long as xmath48 is greater than 01 for smaller xmath48 however another effect comes into play xcite showed that the thomson depth of the coronal layer the stable phase with highest temperature exceeds unity when the gravity parameter identical to xmath48 defined here when the radiation pressure is neglected is xmath115 001 therefore not much ionizing flux can penetrate this layer and the reprocessing in the deeper and cooler layers can be neglected completely as xmath48 becomes much larger than 10 the thickness of the intermediate stable layer is negligible compared to the transition layers therefore its presence may be ignored since the recombination rate must equal the photoionization rate in the irradiated gas recombination radiation is also a form of reflection ie the line equivalent widths are independent of the incident flux they depend only on the structure xmath116 deduced from the hydrostatic and energy balance equations the detectability of these recombination emission and reflection lines depends on whether the primary continuum is viewed directly when the ionizing continuum is in direct view our results show that the ews of the strongest lines in the 05 15 kev band are at most a few tenths of an ev slightly larger when there is an intermediate stable layer the signal to noise ratio snr in such a line can be written as xmath117 where xmath118 is the integration time of the observation xmath55 is the energy of the line xmath119 is the photon flux in the continuum xmath120 and xmath121 are the effective area and resolving power of the instrument respectively for a line at xmath115 1 kev with xmath122 and with hetgs on board chandra we have xmath123 a typical seyfert 1 galaxy has a flux of xmath124 erg xmath125 sxmath9 in the energy band of 210 kev assuming a power law with energy index of 1 the photon flux at 1 kev would be xmath126 xmath125 sxmath9 kevxmath9 for a reasonable integration time of 10 ks we have snr xmath127 when the primary continuum is obscured as in seyfert 2 galaxies the ews of the emission and reflection lines can be orders of magnitude larger because the continuum at this energy region is absorbed severely and the snr can be greatly enhanced making these lines observable acknowledgment smk acknowledges several grants from nasa which partially supported this work mfg acknowledges the support of a chandra fellowship at mit we wish to thank m sako d savin and e behar for several useful discussions the s curves produced by two different incident ionizing spectra the vertical line indicates the unique solution of xmath17 which satisfies condition eqy2 two solutions for s curve 1 xmath35 282 and xmath36 314 and only one for s curve 2 xmath17 222 the temperature profiles of the transition layers and the intermediate stable layer versus thomson optical depth xmath68 the solid curves correspond to s curve 1 with different force factors xmath48 the dashed line corresponds to the s curve 2 the spectra of the emission lines via rr in the transition layers and the intermediate stable layer the spectral resolution is xmath128 ev the top panel corresponds to s curve 2 while bottom panels correspond to s curve 1 with different parameter a the spectra of the reflection lines due to resonance scattering in 
the transition layers and the intermediate stable layer the spectral resolution is xmath75 ev the top panel corresponds to s curve 2 while bottom panels correspond to s curve 1 with different parameter a the combined spectrum emission lines via recombination red reflection lines due to resonance line scattering and black the compton reflection hump the spectral resolution is xmath129 ev the top panel corresponds to s curve 2 while bottom panels correspond to s curve 1 with different parameter a
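as a numerical aside to the reflection calculation above, and not part of the original paper, the following sketch implements the iterative solution of the x and y integral equations for a finite slab with conservative isotropic scattering, starting from x = 1 and y = e^(-tau_0/mu) as stated in the text; the quadrature order, slab depth and stopping rule are choices of this sketch, and the convergence and uniqueness subtleties of the strictly conservative case are glossed over.

```python
# hedged sketch: Picard iteration for the Chandrasekhar X and Y functions of a
# finite slab with conservative isotropic scattering (characteristic function 1/2),
# starting from X = 1 and Y = exp(-tau0/mu).  quadrature order, slab depth and
# stopping rule are choices of this sketch, not values from the paper.
import numpy as np

def xy_functions(tau0=0.5, n=48, n_iter=500, tol=1e-10):
    x, w = np.polynomial.legendre.leggauss(n)        # Gauss-Legendre on (-1, 1)
    mu, w = 0.5 * (x + 1.0), 0.5 * w                 # mapped to (0, 1)
    X, Y = np.ones(n), np.exp(-tau0 / mu)
    for _ in range(n_iter):
        Xn, Yn = np.empty(n), np.empty(n)
        for i in range(n):
            plus, minus = mu[i] + mu, mu[i] - mu
            # X(mu) = 1 + (mu/2) int [X(mu)X(m') - Y(mu)Y(m')] / (mu + m') dm'
            Xn[i] = 1.0 + 0.5 * mu[i] * np.sum(w * (X[i] * X - Y[i] * Y) / plus)
            # Y(mu) = exp(-tau0/mu) + (mu/2) int [Y(mu)X(m') - X(mu)Y(m')] / (mu - m') dm';
            # the numerator vanishes as m' -> mu, so the coincident node is dropped
            num, ok = Y[i] * X - X[i] * Y, np.abs(minus) > 1e-12
            Yn[i] = np.exp(-tau0 / mu[i]) + 0.5 * mu[i] * np.sum(w[ok] * num[ok] / minus[ok])
        err = max(np.max(np.abs(Xn - X)), np.max(np.abs(Yn - Y)))
        X, Y = Xn, Yn
        if err < tol:
            break
    return mu, X, Y

mu, X, Y = xy_functions()
print("mu, X, Y at the most normal node:", mu[-1], X[-1], Y[-1])
```

the slab scattering function, and hence the angle dependent and angle integrated reflectivities used above, are then assembled from x and y.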
we derive a semi analytic solution for the structure of conduction mediated transition layers above an x ray illuminated accretion disk and calculate explicitly the x ray line radiation resulting from both resonance line scattering and radiative recombination in these layers the vertical thermal structure of the illuminated disk is found to depend on the illuminating continuum for a hard power law continuum there are two stable phases connected by a single transition layer while for a softer continuum there may exist three stable phases connected by two separate transition layers with an intermediate stable layer in between we show that the structure can be written as a function of the electron scattering optical depth through these layers which leads to unique predictions of the equivalent width of the resulting line radiation from both recombination cascades and resonance line scattering we find that resonance line scattering plays an important role especially for the case where there is no intermediate stable layer
introduction thermal instability and transition layers x-ray emission and resonance line scattering summary
in reference xcite barrett and crane have introduced a model for quantum general relativity gr. the model is based on the topological quantum xmath1 bf theory and is obtained by adding a quantum implementation of the constraint that reduces classical bf theory to euclidean gr xcite. to make use of the barrett crane construction in quantum gravity, two issues need to be addressed. first, in order to control the divergences in the sum defining the model, the barrett crane model is defined in terms of the xmath2deformation of xmath1; in a realistic model for quantum euclidean gr one would like the limit xmath3 to be well defined. second, the barrett crane model is defined over a fixed triangulation. this is appropriate for a topological field theory such as bf theory, which does not have local excitations, but the implementation of the bf to gr constraint breaks topological invariance and frees local degrees of freedom. the restriction of the model to a fixed discretization of the manifold can therefore be seen only as an approximation. in order to capture all the degrees of freedom of classical gr and restore general covariance, an appropriate notion of sum over triangulations should be implemented see for instance xcite. a novel proposal to tackle this problem is provided by the field theory formulation of spin foam models xcite. in this formulation a full sum over arbitrary spin foams, and thus in particular over arbitrary triangulations, is naturally generated as the feynman diagram expansion of a scalar field theory over a group. the sum over spinfoams admits a compelling interpretation as a sum over 4geometries. the approach also represents a powerful tool for formal manipulations and for model building; examples of this are ooguri s proof of topological invariance of the amplitudes of quantum bf theory in xcite and the definition of a spinfoam model for lorentzian gr in xcite. using such a framework of field theories over a group, a spinfoam model for euclidean quantum gr was defined in xcite. this model modifies the barrett crane model in two respects: first, it is not restricted to a fixed triangulation but naturally includes the full sum over arbitrary spinfoams; second, the natural implementation of the bf to gr constraint in the field theory context fixes the prescription for assigning amplitudes to lower dimensional simplices, an issue not completely addressed in the original barrett crane model. this same prescription for lower dimensional simplex amplitudes, but in the context of a fixed triangulation, was later re derived by oriti and williams in xcite without using the field theory. the model introduced in xcite appeared to be naturally regulated by those lower dimensional amplitudes; in particular, certain potentially divergent amplitudes were shown to be bounded in xcite. these results motivated the conjecture that the model could be finite, that is, that all feynman diagrams might converge. in this letter we prove this conjecture. this paper is not self contained; familiarity with the formalism defined in xcite is assumed. the definition of the model is summarized in section ii; for a detailed description of the model we refer to xcite. in section iii a series of auxiliary results is derived, and the proof of finiteness is given in section iv. consider the fundamental representation of xmath1 defined on xmath4 and pick a fixed direction xmath5 in xmath4. let xmath6 be the xmath7 subgroup of xmath1 that leaves xmath5 invariant. the model is defined in terms of a field xmath8 over xmath9 invariant under arbitrary
permutations of its arguments we define the projectors xmath10 and xmath11 as xmath12 where xmath13 and xmath14 the model introduced in xciteis defined by the action xmath15int dgi left pg phigi right2 frac 15 int dgi left pgphphigi right5 where xmath16 and the fifth power in the interaction term is notation for xmath17 5phig1g2g3g4 phig4g5g6g7 phig7g3g8g9 phig9g6g2g10 phig10g8g5g1 xmath11 projects the field into the space of gauge invariant fields namely those such that xmath18 for all xmath14 the projector xmath10 projects the field over the linear subspace of the fields that are constants on the orbits of xmath6 in xmath1 when expanding the field in modes that is on the irreducible representations of xmath1 this projection is equivalent to restricting the expansion to the representations in which there is a vector invariant under the subgroup xmath6 because the projection projects on such invariant vectors the representations in which such invariant vectors exist are the simple or balanced representations namely the ones in which the spin of the self dual sector is equal to the spin of the antiselfdual sector can be labeled by two integers xmath19 in terms of those integers the dimension of the representation is given by xmath20 simple representations are those for which xmath21 in turn the simple representations are the ones whose generators have equal selfdual and antiself dual components and this equality under identification of the xmath1 generator with the xmath22 field of xmath23 theory is precisely the constraint that reduces xmath23 theory to gr alternatively this constraint allows one to identify the generators as bivectors defining elementary surfaces in 4d and thus to interpret the coloring of a two simplex as the choice of a discretized 4d geometry xcite using the peter weyl theorem one can write the partition function of the theory xmath24 as a perturbative sum over the amplitudes xmath25 of feynman diagrams xmath26 this computation is performed in great detail in xcite yielding xmath27 the first summation is over pentavalent 2complexes xmath26 defined combinatorially as a set of faces xmath28 edges xmath29 and vertices xmath30 and their boundary relations the second sum is over simple xmath1 representations xmath31 coloring the faces of xmath26 xmath32 is the dimension of the simple representation xmath31 the amplitude xmath33 is a function of the four colors that label the corresponding faces bounded by the edge it is explicitly given by xmath34 where xmath35 is the dimension of the space of the intertwiners between the four representations xmath36xcite the vertex amplitude xmath37 is the barrett crane vertex amplitude which is a function of the ten colors of the faces adjacent to the 5valent vertex of xmath26 the barrett crane vertex amplitude can be written as a combination of xmath38 symbols however as it was shown by barrett in xcite it can also be express as an integral over five copies of the 3sphere a representation with a nice geometrical interpretation this representation is better suited for our purposes so we give it here explicitly xmath39 where xmath40 denotes the invariant normalized measure on the sphere if we represent the points in the 3sphere as unit norm vectors xmath41 in xmath4 and we define the angle xmath42 by xmath43 then the kernel xmath44 is given by xmath45 this is a smooth bounded functions on xmath46 with maximum value xmath47 our task is to prove convergence of the feynman integrals of the theory in the mode expansion potential divergences 
appear in the sum over representations xmath31 therefore we need to analyze the behavior of vertex and edge amplitudes for large values of xmath31 an arbitrary point xmath48 can be written in spherical coordinates as xmath49 where xmath50 xmath51 and xmath52 the invariant normalized measure in this coordinates is xmath53 using the gauge invariance of the vertex the invariance of the three sphere under the action of xmath1 and the normalization of the invariant measure we can compute vv by dropping one of the integrals and fixing one point on xmath54 say xmath55 thus equation vv becomes xmath56 where xmath57 is the normalized measure on the 2sphere xmath58constant now we bound the barrett crane amplitude using that xmath59 namely xmath60 the argument is obviously independent of the choice of the six colors in depe weaker versions of the bound in which more colors are included also hold in particular if we directly bound the xmath61 s in vv we obtain that the absolute value of the amplitude is bounded by the square root of the product of the ten dimensions more in general let xmath62 with xmath63 taking the values xmath64 be an arbitrary subset of xmath65 with xmath66 elements then the following bound holds for any xmath62 xmath67 for xmath68 we recover depe in xcite barrett and williams have analyzed the asymptotic behavior of the oscillatory part of the amplitude in connection to the classical limit of the theory we add here some information on the asymptotic behavior of the magnitude of vv the results of this paragraph will not be used in the rest of the paper we present them since they follow naturally from our previous considerations equation sisi can be rewritten as xmath69 where xmath70 is a smooth bounded function in xmath71in re since xmath72 is given by an integral of smooth and bounded functions on a compact space therefore the following limit holds xmath73 the same argument can be used to prove the following stronger result xmath74 where in the limit the four colors are taken simultaneously to infinity clearly these equations are valid for any choice of the xmath75 s finally it is easy to verify the following inequalities for the dimension of the space of intertwiners between four representations converging at an edge see xcite xmath76 which holds for any xmath77 using this expression we can construct weaker bounds in the spirit of cota we define the set xmath78 xmath79 as an arbitrary subset of xmath80 with xmath81 elements then the following inequalities hold for any xmath82 xmath83we have now all the tools for proving the finiteness of the model we will construct a finite bound for the amplitudes xmath25 defined in z of arbitrary feynman diagrams of the model inserting the inequalities dd and cota both with xmath84 in the definition of xmath25 we obtain a bound for the amplitude of an arbitrary pentavalent 2complex xmath26 namely xmath85 where xmath86 denotes the number of edges of the face xmath28 the term xmath87 in lala comes from various contributions first we have xmath32 from the face amplitude second we have xmath88 and xmath89 from the denominators of the xmath86 edge amplitudes xii and the bounds for the corresponding numerators dd respectively finally we have xmath89 from the bounds for the xmath86 vertex amplitudes cota if the 2complex contains only faces with more than one edge then the previous bound for the amplitude is finite more precisely if all the xmath86 s are such that xmath90 then xmath91 and using the fact that xmath92 the sum on the rhs of the last 
equation converges on the other hand if some of the xmath86 are equal to 1 then the right hand side of lala diverges and therefore for this case we need a stricter bound involving stronger inequalities this can be done as follows notice that every time a 2complex contains a face whose boundary is given by a single edge the same edge must be part of the boundary of another face bounded by more than one edge in fig 1 an elementary vertex of a 2complex containing such a face is shown the thick lines represent edges converging at the vertex each of them is part of the boundary of four faces to visualize those faces we have drawn in thin lines the intersection of the vertex diagram with a 3sphere there is a face bounded by a single edge its intersection with the sphere is denoted by xmath93 notice that the surface intersecting the sphere in xmath94 will have a boundary defined by at least two edges notice also that a single vertex can have a maximum of two such peculiar faces therefore using dd for xmath95 we can choose to bound the numerator in xii with the colors corresponding to the three adjacent faces like xmath94 in fig 1 if xmath96 denotes the color of the face xmath93 then we construct the bound for the amplitude using xmath97 one of the square roots would be sufficient to bound the edge amplitude but the symmetry in the previous expression simplifies the construction of the bound for the amplitude of an arbitrary 2complex then use cota for xmath95 or xmath98 to bound the vertex amplitude corresponding to a vertex containing one respectively two faces whose boundary is given by only one edge in this way we can exclude the color corresponding to these singular faces from the bounds corresponding to the vertex and the denominator of the edge amplitude thus these faces contribute to the bound as xmath32 face amplitude times xmath99 from the denominator of the single edge amplitude ie as xmath100 we keep using dd and cota for xmath84 for faces with xmath101 and vertices containing no faces with xmath102 if we denote by xmath103 and xmath104 the set of faces with more than one edge and one edge respectively the general bound is finally given by xmath105 where xmath106 denotes the number of faces in the 2complex xmath26 and xmath107 denotes the riemann zeta function this concludes the proof of the finiteness of the amplitude for any 2complex xmath26 xmath108equation oui proves that there are no divergent amplitudes in the field theory defined by tope this field theory was defined in xcite as a model for euclidean quantum gravity based on the implementations of the constraints that reduce xmath1 bf theory to euclidean gr the corresponding bf topological theory is divergent after quantization regularization is done ad hoc by introducing a cut off in the colors or more elegantly by means of the quantum deformation of the group xmath109 remarkably the implementation of the constrains that give the theory the status of a quantum gravity model automatically regularizes the amplitudes another encouraging result comes from the fact that according to equation oui contributions of feynman diagrams decay exponentially with the number of faces this might be useful for studying the convergence of the full sum over 2complexes to carlo rovelli l crane topological field theory as the key to quantum gravity proceedings of the conference on knot theory and quantum gravity riverside j baez ed 1992 j barret quantum gravity as topological quantum field theory j math phys 36 1995 6161 6179 l smolin linking 
topological quantum field theory and nonperturbative quantum gravity j math phys 36 1995 6417 j baez spin foam models class quant grav 15 1998 1827 1858 gr qc9709052 an introduction to spin foam models of quantum gravity and bf theory in geometry and quantum physics eds helmut gausterer and harald grosse lecture notes in physics springer verlag berlin 2000 gr qc9905087
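to make the bound at the heart of the finiteness argument concrete, the following short python sketch checks numerically that the kernel appearing in the integral representation over five copies of the 3-sphere stays below the dimension of the representation, which is the inequality used to bound the barrett crane vertex. it assumes the kernel takes the standard su(2) character form K_n(theta) = sin((n+1) theta)/sin(theta) with dimension Delta_n = n+1; the function names, sample sizes and the crude monte carlo estimate of a vertex with all ten colors equal are our own illustrative choices, not part of the model definition.

import numpy as np

def kernel(n, theta):
    # assumed character form of the barrett crane kernel for the n-th
    # simple representation, evaluated at the angle between two points on S^3
    return np.sin((n + 1) * theta) / np.sin(theta)

# bound check: |K_n(theta)| <= Delta_n = n + 1 on a dense grid of angles
thetas = np.linspace(1e-4, np.pi - 1e-4, 20001)
for n in (1, 5, 20, 100):
    assert np.all(np.abs(kernel(n, thetas)) <= n + 1 + 1e-9)

# crude monte carlo estimate of a vertex with all ten colors equal to n,
# integrating the product of the ten kernels over five copies of the
# 3-sphere with the normalized measure; purely illustrative of finiteness
rng = np.random.default_rng(0)

def vertex_mc(n, samples=20000):
    y = rng.normal(size=(samples, 5, 4))
    y /= np.linalg.norm(y, axis=-1, keepdims=True)   # uniform points on S^3
    prod = np.ones(samples)
    for i in range(5):
        for j in range(i + 1, 5):
            c = np.clip(np.sum(y[:, i] * y[:, j], axis=-1), -1.0, 1.0)
            prod *= kernel(n, np.arccos(c))
    return prod.mean()

for n in (1, 2, 5):
    print(n, vertex_mc(n))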
we prove that a certain spinfoam model for euclidean quantum general relativity recently defined is finite all its feynman diagrams converge the model is a variant of the barrett crane model and is defined in terms of a field theory over xmath0
introduction the model bounds proof of finiteness discussion acknowledgments
lipid bilayer membranes constitute one of the most fundamental components of all living cells apart from their obvious structural role in organizing distinct biochemical compartments their contributions to essential functions such as protein organization sorting or signalling are now well documented xcite in fact their tasks significantly exceed mere passive separation or solubilization of proteins since often mechanical membrane properties are intricately linked to these biological functions most visibly in all cases which go along with distinct membrane deformations such as exo and endocytosis xcite vesiculation xcite viral budding xcite cytoskeleton interaction xcite and cytokinesis xcite consequently a quantitative knowledge of the material parameters which characterize a membrane s elastic response most notably the bending modulus xmath0 is also biologically desirable several methods for the experimental determination of xmath0 have been proposed such as monitoring the spectrum of thermal undulations via light microscopy xcite analyzing the relative area change of vesicles under micropipette aspiration xcite or measuring the force required to pull thin membrane tethers xcite with the possible exception of the tether experiments these techniques are global in nature ixmath1e they supply information averaged over millions of lipids if not over entire vesicles or cells yet in a biological context this may be insufficient xcite for instance membrane properties such as their lipid composition or bilayer phase and thus mechanical rigidity have been proposed to vary on submicroscopic length scales xcite despite being biologically enticing this suggestion known as the raft hypothesis has repeatedly come under critical scrutiny xcite precisely because the existence of such small domains is extremely hard to prove an obvious tool to obtain mechanical information for small samples is the atomic force microscope afm xcite and it has indeed been used to probe cell elastic properties such as for instance their young modulus xcite yet obtaining truly local information still poses a formidable challenge apart from several complications associated with the inhomogeneous cell surface and intra cellular structures beneath the lipid bilayer one particularly notable difficulty is that the basically unknown boundary conditions of the cell membrane away from the spot where the afm tip indents it preclude a quantitative interpretation of the measured force ixmath1e a clean way to translate this force into local material properties to overcome this problem steltenkamp etxmath1al have recently suggested to spread the cell membrane over an adhesive substrate which features circular pores of well defined radius xcite poking the resulting nanodrums would then constitute an elasto mechanical experiment with precisely defined geometry using simple model membranes the authors could in factshow that a quantitative description of such measurements is possible using the standard continuum curvature elastic membrane model due to canham xcite and helfrich xcite spreading a cellular membrane without erasing interesting local lipid structuresobviously poses an experimental challenge but the setup also faces another problem which has its origin in an elastic curiosity even significant indentations which require the full nonlinear version of the helfrich shape equations for their correct description end up displaying force distance curves which are more or less lineara finding in accord with the initial regime of membrane 
tether pulling xcite yet this simple functional form makes a unique extraction of the two main mechanical properties tension and bending modulus difficult is the nanodrum setup thus futile in the present work we develop the theoretical basis for a slight extension of the nanodrum experiment that will help to overcome this impasse we will show that an additional adhesion between the afm tip and the pore spanning membrane will change the situation very significantly quantitatively and qualitatively force distance curves cease to be linear hysteresis nonzero detachment forces and membrane overhangs can show up and various new stable and unstable equilibrium branches emerge the magnitude and characteristics of all these new effects can be quantitatively predicted using well established techniques which have previously been used successfully to study vesicle shapes xcite vesicle adhesion xcite colloidal wrapping xcite or tether pulling xcite indents a pore spanning membrane with a force xmath2 to a certain depth xmath3 the radius of the pore is xmath4 the membrane detaches from the tip at a radial distance xmath5 the two possible parametrizations xmath6 and xmath7 are explained in the beginning of chapter sec shapeeqn the key ingredient underlying most of the new physics is the fact that the membrane can choose its point of detachment from the afm tip unlike in the existing point force descriptions xcite in which a certain pushing or pulling force is applied at one point of the membrane our description accounts for the fact that the generally nonvanishing interaction energy per unit area between tip and membrane co determines their contact area over which they transmit forces and thereby influence the entire force distance curve what may at first seem like a minor modification of boundary conditions quickly proves to open a very rich and partly also complicated scenario whose various facets may subsequently be used to extract information about the membrane in fact smith etxmath1alxcite have demonstrated in a related situation that the competition between adhesion and tether pulling for substrate bound vesicles gives rise to various first and second order transitions details of which depend in a predictable way on the experimental setup in our casewe will for instance find snap on and snap off events between tip and membrane which rest on the fact that binding is not pre determined and whose correct description is very important for reliably interpreting any afm force experiments moreover we will also see that the very occurrence of tethers is a much more subtle phenomenon since an adhering membrane pulled upwards may in fact prefer to detach rather than being pulled into a tether a question treated previously and on the linear level by boulbitch xcite our paper is organized as follows in chapter sec themodel we introduce the model of our system and discuss the relevant energies in chapter sec shapeeqn we present the equations that have to be solved in order to find membrane profiles force indentation curves and detachment forces this will also include a treatment of the nonlinear case which was only mentioned very briefly in ref xcite in chapter sec results the results of our calculations are summarized and compared to existing xcite measurements we end in chapter sec discussion with a discussion how the predictions for indentation and adhesion characteristics can be used to extract material properties in future experiments we consider a flat solid substrate with a circular pore of radius 
xmath4 a lipid bilayer membrane is adsorbed onto the substrate and spans the pore in the situationwe want to analyze an afm tip is used to probe the properties of the free pore spanning membrane we assume that the tip has a parabolic shape with curvature radius xmath8 at its apex furthermore we restrict ourselves to the static axisymmetric situation in which the tip pokes the free standing membrane exactly in the middle of the pore see fig fig poregeometry for a certain downward force xmath9 the membrane is indented to a corresponding depth xmath10 which is measured from the plane of the substrate to the depth of the apex of the tip note that it is also possible to pull the membrane up with a force xmath11 in the opposite direction if attractive interactions attach the membrane to the tip in the following we will model the bilayer as a two dimensional surface this is a valid approach provided the thickness of the membrane approx 5 nm is much smaller than i the membrane s lateral extension as well as ii length scales of interest such as local radii of curvature with this geometric setup in mind let us now consider the different energy contributions we want to include in our model the total energy of the system pore tip comprises different contributions the membrane is under a lateral tension xmath12 to pull excess area into the pore work has to be done against the adhesion between membrane and flat substrate xcite it is given by xmath12 times the excess area xcite additionally a curvature energy is associated with the membrane according to canham xcite and helfrich xcitethe hamiltonian for an up down symmetric membrane is then xmath13 where xmath14 denotes the surface of the membrane part which spans the pore the proportionality constants xmath0 and xmath15 are called bending rigidity and saddle splay modulus respectively the gaussian curvature xmath16 is the product of the two principal curvatures whereas xmath17 is their sum xcite note that the last term of energy eq helfrich yields zero in our specific problem xcite with the help of the two material constants xmath12 and xmath0 one can define a characteristic lengthscale xmath18 which does not depend on geometric boundary conditions such as the radius of the tip or the pore but only on properties of the membrane on scales larger than xmath19 tensionis the more important energy contributions on smaller scales bending dominates apart from tension and bending an adhesion between tip and membrane may contribute to the total energy we assume that it is proportional to the contact area xmath20 between tip and membrane with a proportionality constant xmath21 the adhesion energy per area if the indentation xmath3 is given and one wants to determine the force xmath2 the total energy can thus be written as xmath22 under certain circumstances however it is more convenient to consider the problem for a given force xmath2 both ensembles constant indentation vs constant force are connected via a legendre transformation xcite xmath23 while the ground states one obtains for the two ensembles will be the same questions of stability depend on the ensemble a profile found to be stable under constant height conditions is not necessarily stable under constant force conditions the route we want to follow here in order to find force indentation curves is to determine the equilibrium shapes of the non bound section of the membrane via a functional minimization the energy contributions caused by the bounded section of the membrane enter via the appropriate 
boundary conditions see chapter sec shapeeqn and appendix app boundaryconditions these imply that the contact point xmath24 is not known a priori but has to be determined as well moving boundary problem in the next sectionwe will show how one can set up the appropriate mathematical formulation of the problem to get membrane profiles and force indentation curves to describe the shape of the membrane we use two different kinds of parametrization see fig fig poregeometry for the linear approximation it is sufficient to use monge gauge where the position of the membrane is given by a height xmath6 above or below the underlying reference plane the disadvantage of this parametrization is that it does not allow for overhangs since these may be present in the full nonlinear problem we will use the angle arclength parametrization in the exact calculations the angle xmath7 with respect to the horizontal substrate as a function of arclength xmath25 fully describes the shape to get the profile of the free membrane one has to solve the appropriate euler langrange shape equation this equation is typically a fourth order nonlinear partial differential equation and thus in most cases impossible to solve analytically one may however consider cases where the membrane is indented only a little and gradients are small in that caseone may linearize the energy functional in the constant indentation ensemble onegets for the free part xmath26 labeleq energyfunctionalconstantheight where xmath27 is the area element on the flat reference plane and xmath28 is the projected surface of the free pore spanning membrane the symbol xmath29 denotes the two dimensional nabla operator in the reference plane the appropriate shape equation can be derived by setting the first variation of energy eq energyfunctionalconstantheight to zero yielding xmath30 the solution to this equation is a linear combination of the eigenfunctions of the laplacian corresponding to the eigenvalues 0 and xmath31 for axial symmetryit is given by xmath32 where xmath33 and xmath34 are the modified bessel functions of the first and the second kind respectively xcite the constants xmath35 are determined from the appropriate boundary conditions see appendix app boundaryconditions eq boundaryequationslinear xmath36 where a dash denotes a derivative with respect to xmath37 even though the differential equation is of fourth order five conditions are required due to its moving boundary nature ixmath1e xmath24 is to be determined from an adhesion balance which is in fact the origin of the fifth condition eq boundeqlin3 see appendix app boundaryconditions the solution of the boundary value problem eq shapeequationlineareq boundaryequationslinear can be used in two ways to calculate the force for a prescribed indentation first one can insert the profile back into the functional eq energyfunctionalconstantheight to obtain the energy of the equilibrium solution which will then parametrically depend on the indentation xmath3 its derivative with respect to xmath3 yields the force xmath2 second one can also consider stresses in analogy to elasticity theory xcite xmath2 is given by the integral of the flux of surface stress xcite through a closed contour around the tip the second approach is used in the present work it has the advantage that the final expression for the force can be written in a closed form xcite see also appendix app diffgeostress xmath38 this equation is exact inserting the solution xmath6 of the boundary value problem eq shapeequationlineareq 
boundaryequationslinear into eq forcelinear yields the value of the force in the linear regime a little warning might be due here expression eq forcelinear is evaluated at the rim of the pore where the profile is flat even for high indentations one might thus wonder whether inserting the small gradient solution would actually lead to an exact result this is however not the case because the membrane shape at the rim predicted by the linear calculation is not identical with the prediction from the full nonlinear theory except for its flatness which is enforced by the boundary conditions there is no magical way to avoid solving the nonlinear shape equation if one wants the exact answer let us now shift to the angle arclength parametrization and consider the full nonlinear problem in principle the constant height ensemble could be used here as well it is however technically much easier to fix xmath2 instead in order to reduce the number of boundary conditions one has to fulfill at the rim of the pore see below and appendix app numericalmethod in this paragraph all variables with a tilde are scaled with xmath39 ixmath1e xmath40 xmath41 etc the energy functional of the free membrane can then be written as xcite xmath42 where xmath43 is the arclength at the contact point xmath24 and xmath44 the arclength at xmath4 the dot denotes the derivative with respect to xmath25 the langrange multiplier functions xmath45 and xmath46 ensure that the geometric conditions xmath47 and xmath48 are fulfilled everywhere in order to make the numerical integration easierlet us rewrite the problem in a hamiltonian formulation xcite the conjugate momenta are xmath49 xmath50 and xmath51 the scaled hamiltonian is then given by xmath52 note that xmath53 is not explicitly dependent on xmath25 and is thus a conserved quantity instead of one fourth order one then has six first order ordinary differential equations the hamilton equations eq shapeequationsnonlinear xmath54 cospsi prho sinpsi labeleq hamiltonequation3 dotprho fracpartial tildehpartial rho fracppsirho bigfracppsi4 rho fracsinpsirho big frac2lambda2 dotpz fracpartial tildehpartial z 0 labeleq hamiltonequation4a endaligned according to the last equation xmath55 has to be constant along the profile its value can be found by considering the integral over the flux of surface stress which has to equal the applied force this implies that xmath55 vanishes everywhere see appendix app diffgeostress equations eq shapeequationsnonlinear can be solved numerically subject to the boundary conditions see also appendices app boundaryconditions and app numericalmethod eq boundaryequationsnonlinear xmath56 where contact radius xmath24 and contact angle xmath57 are connected via xmath58 the solution to eq shapeequationsnonlinear eq boundaryequationsnonlinear gives the indentation xmath3 for some prescribed force xmath59 this chapter will summarize the characteristic features of the solution to the boundary value problems eq shapeequationlinear eq boundaryequationslinear and eq shapeequationsnonlinear eq boundaryequationsnonlinear in addition the theory will be shown to be in accord with available experimental results we will introduce some additional variable rescaling in order to make generalizations of the results easier lengths will be scaled with xmath8 we also define xmath60 in a typical experiment the curvature of the tip is of the order of ten nanometer 540 nm and pore radii may lie between 30 and 200 nm xcite the bending rigidity of a fluid membrane may vary between one 
and a hundred xmath61 xcite one expects a maximum surface tension of the order of a few mn m which is approximately the rupture tension for a fluid phospholipid bilayer xcite a maximum value of the adhesion can be found by assuming that a few xmath61 per lipid is stored if membrane and tip are in contact one arrives at xmath62 for the continuum theory to be valid eqns eq boundeqlin3 eq boundeqnonlin2 imply that xmath63 where xmath64 is the bilayer thickness this estimate yields approx the same maximum value for xmath65 as before since xmath0 is at most 100 xmath61 thus xmath66 and xmath67 can in principal vary between 0 and xmath68 realistically if we set xmath69 and consider a typical fluid phospholipid bilayer with xmath70 xmath66 and xmath67 are of the order of 1 furthermore we will focus on a pore radius of xmath71 in the following in order to understand how adhesion energy modifies the force distance behavior let us first briefly revisit the case where there is no adhesion between tip and membrane xmath72 in fig fig profilesh0 the shapes of the membrane for different values of indentation are presented in scaled units the linear calculations are dotted whereas the exact result is plotted with solid lines for small indentationsthe two solutions overlap for increasing xmath73 however the deviations become larger just as one expects for a small gradient approximation see also ref xcite for another example while the differences are noticeable they appear fairly benign such that one would maybe not expect big changes in the force distance behavior we will soon find out that these hopes will not be fulfilled all for xmath74 solid lines nonlinear calulations dashed lines linear approximation grey shades afm tips the corresponding forces xmath75 for the three different indentations are nonlinear calculations xmath76 xmath77 xmath78 a deeper indentation also means that the tip has to exert a higher force in figs fig1forcedistancew0 and fig2forcedistancew0 log log plots of force distance curves for different values of xmath66 are shown the dashed line marks the maximum indentation xmath79 which is allowed by the geometry of tip and pore in the limit of high forcesall curves converge and approach xmath80 for small forces the curves are linear in xmath81 let us quantify the indentation response by defining the scaled apparent spring constant xmath82 of the nanodrum afm system via xmath83 a linear force distance curve has a constant xmath82 and thus follows an apparent hookean behavior xmath84 in unscaled units the spring constant is given by xmath85 for typical values xmath86 and xmath87 this implies xmath88 and xmath89 and xmath90 xmath12 increasing from left to right the curve for xmath91 is dashed dotted the inset shows the corresponding scaled apparent spring constant xmath82 see eqn eq calk in the small force limit illustrating its two different regimes of small and large tension with a crossover around xmath92 and xmath89 and xmath90 xmath12 increasing from right to left the solution for xmath93 in the linear regime is dashed dotted nonlinear results are plotted with solid lines the linear approximation is dotted the smaller xmath66 the less force has to be applied to reach the same indentation see fig fig1forcedistancew0 for decreasing xmath66the force distance curves converge to the limiting curve of the pure bending case for which xmath91 this is plotted dashed dotted in fig fig1forcedistancew0 in the opposite pure tension limit xmath94 or xmath93 the curves become essentially linear in 
xmath66 as can be seen clearly after scaling out the tension see fig fig2forcedistancew0 it is possible to calculate this second limiting curve in the linear regime the linearized euler lagrange equation reduces to the laplace equation xmath95 which is solved by xmath96 in the present axial symmetry the constants xmath97 and xmath98 can be determined with the help of the two boundary conditions xmath99 and xmath100 the contact point xmath24 is then determined by a straightforward energy minimization the final result for the indentation depth is xmath101 which is plotted dashed dotted in fig fig2forcedistancew0 at any given penetrationthe force is now strictly proportional to the tension notice also the remarkably weak logarithmic dependency of penetration on pore size xmath102 xmath103 xmath104 all force distance curves presented in this section exhibit a linear behavior for small forces in this limitthe scaled spring constant for the systems just discussed is well described by the empirical relation xmath105 see inset in fig fig1forcedistancew0 combining this with our observation that for typical system parametersxmath106 we see that a nanodrum s stiffness can be very well matched by available soft afm cantilevers showing that the suggested experiments are indeed feasible in fact fig fig experiment shows the results of such an indentation experiment solid grey line here a fluid dotap 12dioleoyl3trimethylammonium propane chloride membrane was suspended over a pore of radius xmath107 and subsequently probed with a tip of radius xmath102 xcite the apparent spring constant is found to be xmath108 to fit the data we optimized the material parameters xmath12 and xmath0 the linear approximation asymptotically matches the curve down to an indentation depth of about 40 nm as one can see in fig fig experiment dashed line for larger indentationsthe small gradient assumption breaks down the nonlinear calculation solid black line describes the data correctly down to a much deeper penetration depth of 150 nm but diverges for larger values this deviation is most likely not a failure of the elastic model but a consequence of our simplified assumptions for the tip geometry as shown in fig 1b of the supplementary information to ref xcite the tip is parabolic at its apex but further up it narrows quicker and assumes a more cylindrical shape it therefore can penetrate the pore much deeper than one would expect if the parabolic shape were correct for the entire tip apart from this difficulty theory and experiment are in good agreement there is however a catch since we can not trust the force distance behavior close to the depth saturation due to its displeasingly strong dependence on the actual tip shape the remaining interpretable part of the force distance curve is linear and its slope is the only parameter that can be extracted from the data xcite for the theoretical calculation one needs two parameters xmath12 and xmath0 fitting both to a line is not possible in ref xcite this obstacle was overcome by estimating xmath0 from other measurements to be about xmath109 the surface tension xmath12 could then be adjusted to xmath110 to match the data which reassuringly is a very meaningful value alternatively one may proceed in a different manner in the experimenta small snap off peak could be observed upon retraction of the afm tip which was due to the attraction between tip and membrane although this could be neglected in the interpretation of the measurements of ref xcite one may think of deliberately 
increasing the adhesion between membrane and tip in a follow up experiment by chemically functionalizing the tip with this additional tuning parameterone may get further information on the values of the material parameters in question in the following we will also allow for adhesion between tip and membrane ixmath1e xmath67 is not necessarily equal to zero this will change the qualitative behavior of the force distance curves dramatically for fixed xmath66 and xmath67 different solution branches can be found a hysteresis may occur as well as we will see in the next section additionally stable membrane profiles exist even if the tip is pulled upwards it is therefore possible to calculate the maximum pulling force that can be applied before the tip detaches from the membrane and relate it to the value of the adhesion between tip and membrane in this section we will first investigate the case of weak adhesion xmath111 the scaled surface tension xmath66 will be fixed to 1 it turns out that once the tip is adhesive overhang profiles may occur ixmath1e shapes where at some point xmath112 we will first ignore these solution branches and come back to them later and xmath113 from right to left the region of hysteresis in the curve for xmath114 is magnified in the inset in this casethe energy barrier at xmath115 is approx overhang branches are omitted fig forcedistancew1 illustrates force distance curves for xmath117 compared to the nonadhesive case for which an essentially linear behavior levels off towards maximum penetration adhesive tips behave quite differently already for xmath118 an initial hookean response at small forces is soon followed by a regime in which the system displays a much greater sensitivity towards an externally applied stress ixmath1e where the scaled spring constant xmath82 drops at intermediate penetrations physically this of course originates from the fact that adhesion helps to achieve higher penetrations because the tip is pulled towards the membrane but notice that this does not lead to a uniform reduction of xmath82 softening only sets in beyond a certain indentation shortly beyond xmath118a point is reached where the force distance curve displays a vertical slope at which the apparent spring constant xmath82 vanishes for even larger values of adhesiona hysteresis loop opens featuring a locally unstable region with xmath119 this is the case for xmath114 and the region around the instability is magnified in the inset of fig fig forcedistancew1 notice that the dotted branch corresponding to xmath119 still belongs to solutions for which the functional eq etotal is stationary yet the energy plotted against penetration xmath120 or alternatively contact angle xmath57 has a local maximum confirming that these solutions are unstable against contact point variations the two dashed branches in the inset of fig fig forcedistancew1 have a positive xmath82 and correspond to local minima in the energy however they are globally unstable against the alternative minimum of larger or smaller xmath120 the true global minimum is indicated by the bold solid curve which exhibits a discontinuity at xmath115 depending on the current scanning direction this hysteretic force distance curve manifests itself in a snap on or snap off event such a behavior is reminiscent of a buckling transition such as for instance euler buckling of a rod under compression xcitewith two caveats first notice that the membrane does not stay flat up to a critical buckling force at which it suddenly yields rather the 
system starts off with a linear stress strain relation and only later undergoes an adhesion driven discontinuity appreciating this point is quite important for the interpretation of measured force distance curves upon approach of tip and membrane the snap on will occur neither at zero force nor at zero penetration second one should not forget that hysteresis is ultimately a consequence of the energy barrier which goes along with such discontinuities for macroscopic systemsthis barrier is typically so big that the transition actually happens at either of the two end points of the s shaped hysteresis curve where the barrier vanishes the spinodal points however for nano systems barriers are much smaller comparable to thermal energy xmath61 such that thermal fluctuations can assist the barrier crossing event in the present casethe barrier at the equilibrium transition point is about xmath116 ixmath1e about xmath121 for typical bilayers however already at xmath122 its magnitude has decreased by about 20 this shows that we have to expect a narrowing down of the hysteresis amplitude compared to an athermal buckling scenario upon increasing the adhesion xmath67 even further one will reach a critical value xmath123 at which the back bending branch of the force distance curve touches the vertical line xmath124 at this pointthe tip is being pulled into the pore even if there is no force at all conversely neglecting barrier complications this also implies that at the critical adhesion energy xmath125 an infinitesimal pulling force will suffice to unbind tip and membrane even though the adhesion between tip and membrane is greater than zero it is very important to keep this fact in mind if one wants to use afm measurements for the determination of adhesion energies for xmath126one obtains stable solutions even when pulling the tip upwards where xmath127 xcite the maximum possible force before detachment xmath128 again corresponds to the leftmost point of the back bend and it increases with increasing xmath67 we will come back to this later notice that detachment always happens for values of xmath120 which are positive ixmath1e when the afm tip is still below the substrate level contrary to what one might have expected pulling will in this case not draw the membrane upwards into a tubular lipid bilayer structure a tether which at some specific elongation will fall off from the tip and snap back rather the strong adhesion pulls the tip far into the pore and while pulling on it indeed lifts it up unbinding still happens below pore rim level at even larger adhesion energyentirely new stationary solution branches emerge as fig fig forcedistancew2 illustrates for xmath129 and xmath74 we first recognize the well known hysteretic branch already seen in fig fig forcedistancew1 which for increasing xmath67 extends to much larger negative forces even though the snap off height xmath120 does only change marginally the shapes of two typical profiles are illustrated in the insets a and b notice that this branch is always connected to the origin but for larger values of xmath67 it starts off into the third quadrant negative values for xmath81 and xmath120 at first sightit seems that we finally get solutions which correspond to a pulled up membrane however this region close to the origin corresponds to a maximum and is thus unstable overhang branches overhang branches contrary to the hysteretic branches the new branches depicted in fig fig forcedistancew2 do not connect to the origin this classifies them as a 
genuinely nonlinear phenomenon since they can not be obtained as a small perturbation around the state xmath130 in the first quadrant xmath131 they all correspond to profiles which show overhangs see inset xmath24 and xmath132 these branches had been omitted in fig fig forcedistancew1 since for weak adhesion they always correspond to maxima and are thus irrelevant this changes for stronger adhesion though where they become stable in certain regions for instance inset xmath24 is locally stable the details by which this happens are complicated and will be discussed in more detail below following the new branches to negative forces we see that the one for xmath133 loses its overhang around xmath134 that this can happen continuously is not surprising since within angle arc length parametrization there is nothing special about the point where xmath135 only the shooting method might use occurrences of xmath136 as a potential termination criterion for integration branch splitting branch splitting we also see that for sufficiently large xmath67 there is a point where the hysteresis branch intersects the new nonlinear branch therethe values of xmath81 and xmath120 coincide for both branches but the detachment angle xmath57 and the total energy of the profile are generally different however the difference in energy at the intersection decreases with increasing xmath67 and around xmath137 it finally vanishes at this degeneratepoint a branch splitting occurs where the connectivity of the two branches re bridges as illustrated in the lower left inset in fig fig forcedistancew2 rather than connecting to the origin the wide loop of the original hysteresis branch now joins into the overhang branch of the first quadrant while its bit that was connected to the origin now joins into the overhang branch in the third quadrant cusps cusps figure fig forcedistancew3 shows the force distance curve branches for the even larger adhesion energy xmath138 this depicts a situation well after the branch splitting so we recognize the old hysteretic branch connecting with overhangs and the branch connecting to the origin extending exclusively in the third quadrant in contrast to fig fig forcedistancew2 the line styles in fig fig forcedistancew3 are chosen to illustrate local minima solid or maxima dotted what immediately strikes one as surprising is that the profiles at xmath139 belonging to the insets xmath140 and xmath141 both correspond to maxima even though they sit on both sides of a back bending branch close to its end compare this to the usual scenario at xmath142 xmath143 moreover the solution belonging to inset xmath140 turns into a local minimum for slightly more negative forces without any noticeable features of the branch how can this happen the explanation is illustrated in the lower left inset in fig fig forcedistancew3 which shows the total energy as a function of detachment angle xmath57 recall that extrema in this plot correspond to stationary solutions as can be seen the energy is multivalued meaning that there exists more than one solution at a given detachment angle these would then also differ in their value of their penetration xmath120 but more excitingly this graph exhibits a boundary extremum at a lowest possible nonzero value of xmath57 in the form of a cusp this is how one can have two successive maxima on a curve without an intervening minimum the minimum is simply not differentiable hence there is a third solution branch corresponding to the cusp at which the contact curvature condition from 
eqn eq boundeqnonlin2 is not satisfied because this condition is blind to the possibility of having non differentiable extrema plotting this cusp branch also into fig fig forcedistancew3 we finally understand how the switching of a maximum into a minimum happens it occurs at the point of intersection with the cusp branch as the lower left inset in fig fig forcedistancew3 illustrates the maximum belonging to the solution xmath140 joins the cusp minimum belonging to solution xmath144 in a boundary flat point roughly at force xmath145 for more negative forcesthis flat point turns up leaving a boundary cusp maximum and a new differentiable minimum notice that a similar exchange happens once more at xmath146 xmath147 incidentally since at the cusp the contact curvature condition is not satisfied and since this is the only point where the adhesion energy xmath67 enters the location and form of the cusp branch is independent of the value of xmath67 the existence of the cusp branch poses the question whether the solutions corresponding to it are physically relevant at least the ones which are minima it is not so much the lack of differentiability at a cusp minimum which causes concern but rather the fact that it is located at a boundary take for instance the xmath148 curve in the lower left inset of fig fig forcedistancew3 corresponding to xmath139 now consider a nonequilibrium solution which sits on the upper branch somewhere between the solutions xmath144 and xmath141 to lower the energy this solution will reduce the detachment angle xmath57 thereby approaching the minimum at xmath144 but once xmath144 has been reached no further reduction in xmath57 seems possible since for smaller values no equilibrium solution exists the crucial point is that our present theory is insufficient to answer what else would be going on for smaller xmath57 it could for instance be that there are indeed solutions but they are not time independent this might be analogous to the well known situation of a soap film spanned in the form of a catenoidal minimal surface between two coaxial circular rings of equal radius xmath149 it is easy to show that for a ring separation exceeding xmath150 no more stationary solution exists even though the limiting profile is in no way singular xcite however when slowly pulling the two rings beyond this critical separation the soap film does not suddenly rupture rather it becomes dynamically unstable and begins to collapse in the case we are studying here the system drives itself to the singular boundary point and without a truly dynamical treatment it is not possible to conclude whether it would remain there or start to dynamically approach a different solution for this reason we do not want to overrate the significance of the cusp branch yet its existence is still important in order to explain the behavior of the other regular branches for instance their metamorphosis from maximum branches into minimum branches or vice versa detachment forces detachment forces a measurable quantity in the experiment is the detachment force between tip and membrane which is the maximum applicable pulling force xmath151 before the tip detaches from the membrane in fig fig detachmentforce this force is plotted as a function of adhesion energy xmath67 for different values of the scaled tension xmath66 starting from a certain threshold adhesion xmath152 below which no hysteresis occurs xmath151 decreases with increasing xmath67 and exhibits a linear behavior for higher adhesions increasing xmath66 also 
increases the threshold adhesion exmath1g xmath153 compared to xmath154 in the largexmath67limit xmath155finally approaches a limit which is independent of xmath0 and xmath12 and only depends on the geometry the elasticity of the membrane no longer influences the measurement of the adhesion energy not because the membrane is not deformed but rather because its deformation energy is subdominant to adhesion but for more realistic smaller values of xmath67 this decoupling does not happen and adhesion energies can only be inferred from the detachment force when a full profile calculation is performed at higher values of xmath66 also other qualitative features such as additional instabilities occur however these ramifications will not be discussed in the present work as a function of scaled adhesion energy xmath67 for four values of the scaled tension xmath156 xmath157 xmath158 and xmath159 xmath66 increasing from left to right tethers tethers characteristically the detachment happens at deep indentations xmath73 close to the maximum indentation possible long pulled out membrane tubes tethers as they have been studied in the literature xcite are not observed even though in our calculations we find profiles with xmath160 these solutions either correspond to energetic maxima or they are only local minima with the global minimum at xmath161 corresponding to a significantly lower energy this is a consequence of the adhesion balance present in our situation upon pulling upwards it is more favorable for the tip either to be sucked in completely or to detach from the membrane rather than forming a long tether as fig fig forcedistancew3 shows there is a very small window of opportunity at xmath162 where locally stable solutions pulled above the surface exist yet their profiles look essentially like the ones of inset xmath140 or xmath144 and show no resemblance to real long tethers upon increasing the force they become unstable such that the tip either falls of the membrane or is drawn below the membrane plane notice that there exist two minima at xmath81 slightly smaller than xmath163 but both at positive indentation this analysis shows that it appears impossible to pull tethers using a probe with a certain binding energy despite existing experiments in which tethers of micrometer size were generated xcite consequently the assumption of an adhesion balance does not seem to be correct in these cases indeed in these studies the experimental setup was different membrane covered micron sized beads xcite and afm tips covered by lipid multilayers xcite in the present situation tethersare also observed xcite but these events are not very reproducible and based on the above calculations we would tentatively attribute them to a pinning of the membrane at some irregularity of the tip in the previous sections we have discussed the indentation of a pore spanning bilayer by an afm tip we have seen that the force distance curves show a linear behavior for small forces in a broad parameter range if the adhesion between tip and membrane vanishes even though this is in agreement with recent experiments xcite see also fig fig experiment such a linear behavior is unfortunately too featureless to reveal the values of both elastic material constants xmath12 and xmath0 one way out of this apparent cul de sac would be to repeat the experiment for different pore radii xmath4 while keeping all other parameters fixed since xmath12 and xmath0 are the same for all pore sizes in that case it should be possible to extract their 
value from the measured force distance curves note that one does not have to fit both parameters simultaneously if one at first considers a pore where the radius is much larger than the characteristic lengthscale xmath19 see eqn eq characteristiclengthscale the corresponding system is in the high tension regime and the force distance relation only depends on the surface tension see section subsec noad after determining xmath12 from the resulting curve one may subsequently extract the value of xmath0 from a measurement of a system with smaller pore size the elastic constants can also be obtained by considering systems where the adhesion xmath21 between tip and membrane has been increased experimentally as we have seen the curves change their behavior dramatically for xmath164 it should thus be possible to fit two parameters to the resulting curves which would yield a local xmath0 and xmath12 in one fell swoop whereas xmath21 can simultaneously be determined from the snap on of the tip upon approach to the bilayer the experimentalist however has to make sure in that case that the line of contact between tip and membrane is really due to a force balance as described in this paper and not due to other effects such as pinning of the membrane to single spots on the tip in practice this is rather difficult and will be a challenge for future experiments one also has to keep in mind that the assumption of a perfect parabolic tip is quite simplistic compared to the experimental situation it is probably valid in the vicinity of the apex but generally fails further up since the force distance behavior close to the depth saturation depends strongly on the actual tip shape one can only use that part of the force distance curve for data interpretation where the indentation is still small to predict the whole behavior the exact indenter shape has to be known as long as the situation stays axisymmetric one may in principle redo the calcutions of this publication with the new shape this is however rather tedious and therefore inexpedient in practice our theorical approach does not account for hydrodynamic effects although the whole setup is in water and the afm tip is moved with a certain velocity first measurements have shown however that it is possible to increase the velocity of the tip up to 60xmath165xmath166 without altering the force distance curves dramatically xcite one can understand this result with the help of the following simple estimate assume that the tip is a sphere of radius xmath8 moving with the velocity xmath167 in water when indenting the membrane to a distance xmath132 it will also have to overcome a dissipative hydrodynamic force xmath168 in addition to the elastic resistance of the membrane the energy dissipated in this process xmath169 is of the order of the thermal energy if typical values are inserted xmath170 xmath86 xmath171 60xmath165xmath166 xmath172 this is substantially smaller than the corresponding elastic energy xmath173 complications arising from a correct hydrodynamical treatment were thus omitted here including adhesion the velocity of the measurement should nevertheless be as slow as possible to ensure that the line of contact equilibrates due to the force balance if this is guaranteed one can also check whether the predicted linear behavior between detachment force and adhesion is actually valid we thank siegfried steltenkamp and andreas janshoff for providing the experimental data see fig fig experiment we have greatly benefitted from discussions with them and with 
jemal guven md acknowledges financial support by the german science foundation through grant de7751 3 in this appendix we will explain the origin of the boundary conditions eq boundaryequationslinear and eq boundaryequationsnonlinear eqns eq boundeqlin1 follow simply from the requirement of continuity at the pore rim and the point where the membrane leaves the tip asking for a membrane that has no kinks andthus no diverging bending energy gives eqns eq boundeqlin2 and eq boundeqnonlin1 if the membrane is free to choose its point of detachment as it is assumed here an adhesion balance at the tip yields another boundary condition for the contact curvatures eq boundeqlin3eq boundeqnonlin2 see 12 problem 6 xcite and xcite in ref xcite a quick derivation can be found for the axisymmetric case in the constant height ensemble varying the point of contact changes the energy of the free profile but also the energy due to the part at the tip by setting the total variation to zeroone obtains the well known contact curvature condition eqn eq boundeqlin3 in monge gauge observe that this assumes differentiability of the energy as a function of contact point position in the force ensemblean extra term xmath174 has to be added to the variation of the bound membrane a term that is equal and opposite however enters the variation of the free membrane via the hamiltonian eq hamiltonian in total both terms cancel and one again obtains the same condition eqn eq boundeqnonlin2 in angle arclength parametrization the remaining condition eq boundeqnonlin3 stems from the fact that the total arclength is not a conserved quantity which it would be if we used a fixed interval of integration relaxing this unphysical constraint requires the hamiltonian to vanish xcite if the shape of the free membrane is known the stress tensor xmath175 xmath176 can be evaluated at every point of the surface xmath28 the integral of its flux through an arbitrary contour xmath177 which encloses the tip gives the force xcite xmath178 boldsymboll kappa nablaperpk boldsymbolnbig labeleq forceviastresstensor the normal vectors xmath179 and xmath180 are perpendicular to xmath177 and to each other in every point of the curve in addition xmath179 is tangential to the surface whereas xmath180 is normal to it xmath181 and xmath182 are the curvatures perpendicular in direction of xmath179 and tangential to the curve the symbol xmath183 denotes the derivative along xmath179 in angle arclength parametrization the curvatures are given by xmath184 xmath185 and xmath186 eq forceviastresstensor can then be written as xmath187 sinpsi nonumber qquadqquadquad frac1rho big dotppsi fracppsirhodotrho big cospsi big endaligned the integrand can be evaluated further by inserting the hamilton equations eq shapeequationsnonlinear and making use of the fact that the hamiltonian eq hamiltonian is zero one obtains xmath188 if we now exploit axial symmetry by integrating around a circle of radius xmath189 we finally get xmath190 the momentum xmath55 conjugate to xmath191 has to vanish identically which implies that the lagrange multiplier function xmath46 is equal to xmath59 this at first maybe surprising result is no coincidence at all in fact in ref xcite it was shown that the lagrange multiplier functions which fix the geometrical constraints are closely related to the external forces via the conservation of stresses expression eq forceviastresstensor can also be translated into monge gauge if we again exploit axial symmetry and integrate around a circle of radius 
xmath189 it reads xmath192 frachrhosqrtg nonumber qquad qquad kappa big frachrhosqrtg3 frachrhorho sqrtg big frac1g biggbiggrhortextint labeleq forceviastresstensormongeendaligned where xmath193 note that the dash denotes derivatives with respect to xmath37 if in particular we choose to evaluate the force at xmath194 the expression eq forceviastresstensormonge simplifies considerably to eqn eq forcelinear the hamilton equations eq shapeequationsnonlinear were solved by using a shooting method xcite for a trial contact point xmath24 eqns eq shapeequationsnonlinear were integrated with a fourth order runge kutta method the value of xmath24 determined the contact angle xmath57 and with it xmath195 xmath37 xmath196 and xmath197 at xmath198 via the boundary conditions eq boundaryequationsnonlinear the integration was stopped as soon as xmath37 was equal or greater than xmath4 to reach xmath4 exactly one extra integration with the correct stepsize backwardswas performed finally the values of xmath24 for which xmath199 at xmath4 were identified for fixed parameters xmath2 xmath12 xmath21 etc if the calculation had been done in the constant height ensemble one would additionally have to check whether the correct indentation xmath3 was reached at xmath200 after shooting in the constant force ensemblethis complication of meeting a second condition is avoided which is why we chose to use it for the nonlinear calculations the gauss bonnet theorem states that xmath201 for a simply connected surface xcite in our casethe boundary xmath202 of the surface xmath14 is a circle of radius xmath4 its geodesic curvature xmath203 is equal to xmath204 such that the second integral yields xmath205 thus the integral over the gaussian curvature xmath16 is zero as long as no topological changes occur
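to make the linear small-gradient calculation concrete, here is a minimal python sketch that assembles the axisymmetric monge-gauge solution h(rho) = c1 + c2 ln(rho) + c3 I0(rho/lambda) + c4 K0(rho/lambda), with lambda = sqrt(kappa/sigma), and fixes the four constants from the continuity and no-kink conditions at the pore rim and at the contact line on a parabolic tip. the adhesion balance that actually selects the contact radius is bypassed here by prescribing r_c, and all numerical values are illustrative rather than taken from the experiment.

import numpy as np
from scipy.special import i0, i1, k0, k1

kappa = 1e-19        # bending rigidity [J], illustrative
sigma = 1e-3         # lateral tension [N/m], illustrative
R_pore = 100e-9      # pore radius [m]
R_tip = 20e-9        # tip curvature radius at the apex [m]
h0 = 30e-9           # indentation depth of the tip apex [m]
r_c = 15e-9          # assumed contact (detachment) radius [m], prescribed
lam = np.sqrt(kappa / sigma)

def basis(r):
    # values and radial derivatives of the four basis functions
    vals = np.array([1.0, np.log(r), i0(r / lam), k0(r / lam)])
    ders = np.array([0.0, 1.0 / r, i1(r / lam) / lam, -k1(r / lam) / lam])
    return vals, ders

# parabolic tip profile and slope, apex a depth h0 below the rim plane
tip = lambda r: -h0 + r**2 / (2.0 * R_tip)
tip_slope = lambda r: r / R_tip

vals_rim, ders_rim = basis(R_pore)
vals_c, ders_c = basis(r_c)
A = np.array([vals_rim, ders_rim, vals_c, ders_c])
b = np.array([0.0, 0.0, tip(r_c), tip_slope(r_c)])   # h = h' = 0 at rim, match tip at r_c
c = np.linalg.solve(A, b)

r = np.linspace(r_c, R_pore, 200)
h = c[0] + c[1] * np.log(r) + c[2] * i0(r / lam) + c[3] * k0(r / lam)
print("coefficients c1..c4:", c)
print("membrane height at the contact radius:", h[0])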
measurements with an atomic force microscope afm offer a direct way to probe elastic properties of lipid bilayer membranes locally provided the underlying stress strain relation is known material parameters such as surface tension or bending rigidity may be deduced in a recent experiment a pore spanning membrane was poked with an afm tip yielding a linear behavior of the force indentation curves a theoretical model for this case is presented here which describes these curves in the framework of helfrich theory the linear behavior of the measurements is reproduced if one neglects the influence of adhesion between tip and membrane including it via an adhesion balance changes the situation significantly force distance curves cease to be linear hysteresis and nonzero detachment forces can show up the characteristics of this rich scenario are discussed in detail in this article
introduction[sec:introduction] the model[sec:themodel] shape equation and appropriate boundary conditions[sec:shapeeqn] results[sec:results] discussion[sec:discussion] boundary conditions calculation of the force via the stress tensor numerical calculations
the anomalous x ray pulsars axps are a group of x ray pulsars whose spin periods fall in a narrow range xmath0 s whose x ray spectra are very soft and which show no evidence that they accrete from a binary companion see mereghetti 1999 for a recent review these objects may be isolated neutron stars with extremely strong xmath1 g surface magnetic fields or they may be accreting from a fallback accretion disk optical measurements could potentially help discriminate between these models an optical counterpart to one axp 4u 0142 61 has recently been identified and shown to have peculiar optical colors hulleman et al the radio quiet neutron stars rqnss are a group of compact x ray sources found near the center of young supernova remnants their x ray spectra are roughly consistent with young cooling neutron stars but they show no evidence for the non thermal emission associated with classical young pulsars like the crab see brazier johnston 1999 for a review the x ray spectral properties of the rqnss and the axps are similar see eg chakrabarty et al below in table 1 the general properties of the three rqnss as our targets in the southern sky are listed clcccc xmath2 age xmath3 source snr kpc xmath4 yr kev refs 1e 08204247 pup a 20 37 028 1 3 1e 16145055 rcw 103 33 1 3 056 4 6 1e 12075209 pks 120952 15 7 025 7 9 our observations were made using the magellan instant camera magic on the magellan1walter baade 65meter telescope at las campanas observatory chile magic is a ccd filter photometer built by mit and cfa for the xmath5 focus of the baade telescope the current detector is a 2048xmath62048 site ccd with a 69 mas pixel scale and a 142xmath6142 arcsec field of view we used the sloan filter set which have the following central wavelengths fukugita et al 1996 xmath73540 xmath84770 xmath96230 xmath107620 and xmath119130 brazier kts johnston s 1999 mnras 303 l1 bignami gf caraveo ga mereghetti s 1992 apj 389 l67 chakrabarty d et al 2001 apj 548 800 fukugita m et al 1996 aj 111 1748 garmire gp pavlov gg garmire ab 2000 iauc 7350 2 gotthelf ev petre r hwang u 1997 487 l175 helfand dj becker rh 1984 nature 307 215 hulleman f kerkwijk mh kulkarni sr 2000 nature 408 689 mereghetti s 1999 in the neutron star black hole connection ed c kouveliotou et al dordrecht kluwer mereghetti s caraveo p bignami gf 1992 a a 263 172 mereghetti s bignami gf caraveo pa 1996 apj 464 842 pavlov g g zavlin ve trmper j 1999 apj 511 l45 petre r becker cm winkler pf 1996 apj 465 l43 petre et al 1982 apj 258 22 seward fd 1990 apjss 73 781 tuohy i garmire g 1980 apj 239 107
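as a quick arithmetic cross check of the camera description above (nothing beyond the quoted numbers is assumed): 2048 pixels at the stated 69 mas per pixel give 2048 x 0.069 arcsec, i.e. about 141 arcsec on a side, consistent with the quoted 142 x 142 arcsec field of view to within rounding of the pixel scale.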
we report on our search for the optical counterparts of the southern hemisphere anomalous x ray pulsar 1e10481 5937 and the radio quiet neutron stars in supernova remnants puppis a rcw 103 and pks 1209 52 the observations were carried out with the new mit cfa magic camera on the magellan i 65 m telescope in chile we present deep multiband optical images of the x ray error circles for each of these targets and discuss the resulting candidates and limits
background observations
the formation of jets is the most prominent feature of perturbative qcd in xmath0 annihilation into hadrons jets can be visualized as large portions of hadronic energy or equivalently as a set of hadrons confined to an angular region in the detector in the past this qualitative definition was replaced by quantitatively precise schemes to define and measure jets such as the cone algorithms of the weinberg sterman xcite type or clustering algorithms eg the jade xcite or the durham scheme xmath1 scheme xcite a refinement of the latter one is provided by the cambridge algorithm xcite equipped with a precise jet definition the determination of jet production cross sections and their intrinsic properties is one of the traditional tools to investigate the structure of the strong interaction and to deduce its fundamental parameters in the past decade precision measurements especially in xmath0 annihilation have established both the gauge group structure underlying qcd and the running of its coupling constant xmath2 over a wide range of scales in a similar way also the quark masses should vary with the scale a typical strategy to determine the mass of say the bottom quark at the centre of mass cm energy of the collider is to compare the ratio of three jet production cross sections for heavy and light quarks xcite at jet resolutionscales below the mass of the quark ie for gluons emitted by the quark with a relative transverse momentum xmath1 smaller than the mass the collinear divergences are regularized by the quark mass in this regionmass effects are enhanced by large logarithms xmath3 increasing the significance of the measurement indeed this leads to a multiscale problem since in this kinematical region also large logarithms xmath4 appear such that both logarithms need to be resummed simultaneously a solution to a somewhat similar two scale problem namely for the average sub jet multiplicities in two and three jet events in xmath0 annihilation was given in xcite we report here on the resummation of such logarithms in the xmath1like jet algorithms xcite and provide some predictions for heavy quark production a preliminary comparison with next to leading order calculations of the three jet rate xcite is presented a clustering according to the relative transverse momenta has a number of properties that minimize the effect of hadronization corrections and allow an exponentiation of leading ll and next to leading logarithms nll xcite stemming from soft and collinear emission of secondary partons jet rates in xmath1 algorithms can be expressed up to nll accuracy via integrated splitting functions and sudakov form factors xcite for a better description of the jet properties however the matching with fixed order calculations is mandatory such a matching procedure was first defined for event shapes in xcite later applications include the matching of fixed order and resummed expressions for the four jet rate in xmath0 annihilation into massless quarks xcite a similar scheme for the matching of tree level matrix elements with resummed expressions in the framework of monte carlo event generators for xmath0 processes was suggested in xcite and extended to general collision types in xcite we shall recall here the results obtained in xcite for heavy quark production in xmath0 annihilation in the quasi collinear limitxcite the squared amplitude at tree level fulfils a factorization formula where the splitting functions xmath5 for the branching processes xmath6 with at least one of the partons being a heavy 
quark are given by xmath7 nnb pgqz q tr left 1 2z1z frac2z1zm2q2m2 right labeleq pqqendaligned where xmath8 is the usual energy fraction of the branching and xmath9 is the space like transverse momentum as expected these splitting functions match the massless splitting functions in the limit xmath10 for xmath9 fixed the splitting function xmath11endaligned obviously does not get mass corrections at the lowest order branching probabilities are defined through xcite full qq q m q q1q q dz pqqz q qq q m0 cf fq q m q q1q q dz pgqz q tr gq q q q1q q dz pggz 2ca with gamdef qq q m0 2cf34 and the sudakov form factors which yield the probability for a parton experiencing no emission of a secondary parton between transverse momentum scales xmath12 down to xmath13 read suddef qq q0 gq q0 fq q0 2gq q0 where xmath14 accounts for the number xmath15 of active light or heavy quarks jet rates in the xmath1 schemes can be expressed by the former branching probabilities and sudakov form factors for the two three and four jet rates jetrates 2 2 3 2 2 q0q qq qgq q0 4 2 2 2 q0q q0q where xmath12 is the cm energy of the colliding xmath0 and xmath16 plays the role of the jet resolution scale single flavour jet rates in eq jetrates are defined from the flavour of the primary vertex ie events with gluon splitting into heavy quarks where the gluon has been emitted off primary light quarks are not included in the heavy jet rates but would be considered in the jet rates for light quarks in order to catch which kind of logarithmic corrections are resummed with these expressions it is illustrative to study the above formulae in the kinematical regime such that xmath17 expanding in powers of xmath2 jet rates can formally be expressed as n n2 k n2 k l0 2k cnkl where the coefficients xmath18 are polynomials of order xmath19 in xmath20 and xmath21 the coefficients for the first order in xmath2 are given by expand1 c212 c312 12 cf ly2lm2 c211 c311 32 cf ly 12 cf lm coef1 for second order xmath2 with xmath22 active flavours at the high scale the ll and nll coefficients read expand2 c224 18 cf2 ly2 lm22 c324 14 cf2 ly2 lm22 cfcaly4lm4 c424 18 cf2 ly2 lm22 cfcaly4lm4 c223 cf2 ly2lm2 34 ly 14 lm 13n cf ly3 32 lylm2 12lm3 13 nn1 cf lm3 c323 12 cf2 ly2lm2 3 ly lm 12n cf ly ly2 lm2 cf ca3 ly3 lm3 16 nn1 cf lm ly2 ly lm 2 lm2 c423 cf2 ly2lm2 34 ly 14 lm 16n cf ly3lm3 18 cf caly3 13 lm3 16 nn1 cf lylm ly lm coef2 terms xmath23 in the nll coefficients where the xmath24function xmath25 for xmath22 active quarks is given by n are due to the combined effect of the gluon splitting into massive quarks and of the running of xmath2 below the threshold of the heavy quarks with a corresponding change in the number of active flavours with our definition of jet rates with primary quarksthe jet rates add up to one at nll accuracy this statement is obviously realized in the result above order by order in xmath2 the corresponding massless result xcite is obtained from eqs coef1 and coef2 by setting xmath26 notice that eqs coef1 and coef2 are valid only for xmath27 and therefore xmath10 does not reproduce the correct limit which has to be smooth as given by eqjetrates let us also mention that for xmath28 there is a strong cancellation of leading logarithms and therefore subleading effects become more pronounced an approximate way of including mass effects in massless calculations that is sometimes used is the dead cone xcite approximation the dead cone relies on the observation that at leading logarithmic order there is no radiation of soft and 
collinear gluons off heavy quarks this effect can be easily understood from the splitting function xmath29 in eq eq pqq for xmath30this splitting function is not any more enhanced at xmath31 this can be expressed via the modified integrated splitting function deadcone qdcq q m qq q m0 2 cf m q to obtain this resultthe massless splitting function has been used which is integrated with the additional constraint xmath32 we also compare our results with this approximation the impact of mass effects can be highlighted by two examples namely by the effect of the bottom quark mass in xmath0 annihilation at the xmath33pole and by the effect of the top quark mass at a potential linear collider operating in the tev region with xmath34 gev xmath35 gev and xmath36 the effect of the xmath37mass at the xmath33pole on the two and three jet ratesis depicted in fig bb91 left the result obtained in the dead cone approximation is shown in fig bb91 right clearly by using the full massive splitting function the onset of mass effects in the jet rates is not abrupt as in the dead cone case and becomes visible much earlier already at the rather modest value of the jet resolution parameters of xmath38 the two jet rate including mass effects is enhanced by roughly xmath39 with respect to the massless case whereas the three jet rate is decreased by roughly xmath40 for even smaller jet resolution parameters the two jet rate experiences an increasing enhancement whereas the massive three jet rate starts being larger than the massless one at values of the jet resolution parameters of the order of xmath41 the curves have been obtained by numerical integration of eq jetrates furthermore in order to obtain physical result the branching probabilities have been set to one whenever they exceed one or to zero whenever they become negative while in the case of bottom quarks at lep1 energies the overall effect of the quark mass is at the few per cent level this effect becomes tremendous for top quarks at the linear collider fig t1 in fig b91 leading order lo and next to leading order nlo predictions for three jet rates are compared with the nll result showed in the previous plots fixed order predictions for xmath37quark production clearly fail at very low values of xmath42 by giving unphysical values for the jet rate while the nll predictions keep physical and reveal the correct shape the latter is an indication of the necessity for performing such kind of resummations fixed order predictions work well for top production at the linear collider a consequence of the strong cancellation of leading logarithmic corrections and are fully compatible with our nll result sudakov form factors involving heavy quarks have been employed to estimate the size of mass effects in jet rates in xmath0 annihilation into hadrons these effects are sizeable and therefore observable in the experimentally relevant region a preliminary comparison with fixed order results have been presented and showed good agreement matching between fixed order calculations and resummed results is in progress xcite it is a pleasure to thank the organizers of this meeting for the stimulating atmosphere created during the workshop and m mangano for very useful comments gr acknowledges partial support from generalitat valenciana under grant ctidib 200224 and mcyt under grants fpa2001 3031 and bfm 2002 00568 y l dokshitzer g d leder s moretti and b r webber jhep 9708 1997 001 hep ph9707323 g rodrigo a santamaria and m s bilenky phys 79 1997 193 hep ph9703358 p abreu et al 
delphi collaboration phys b 418 1998 430 f krauss and g rodrigo hep ph0303038 hep ph0309325 g rodrigo nucl phys proc suppl 54a 1997 60 hep ph9609213 m s bilenky s cabrera j fuster s marti g rodrigo and a santamaria phys d 60 1999 114006 hep ph9807489 g rodrigo m s bilenky and a santamaria nucl phys b 554 1999 257 hep ph9905276 nucl suppl 64 1998 380 hep ph9709313 s catani l trentadue g turnock and b r webber nucl b 407 1993 3 l j dixon and a signer phys d 56 1997 4031 hep ph9706285 z nagy and z trocsanyi phys d 59 1999 014020 erratum ibid d 62 2000 099902 hep ph9806317 s catani f krauss r kuhn and b r webber jhep 0111 2001 063 hep ph0109231 f krauss jhep 0208 2002 015 hep ph0205283 s catani s dittmaier and z trocsanyi phys b 500 2001 149 hep ph0011222 s catani s dittmaier m h seymour and z trocsanyi nucl b 627 2002 189 hep ph0201036 y l dokshitzer v a khoze and s i troian j phys g 17 1991 1602
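the nll jet rate construction quoted earlier in this article (the two jet rate given by the squared quark sudakov form factor, the three jet rate built from one emission probability convoluted with a gluon sudakov factor) is straightforward to evaluate numerically. the sketch below is a massless, fixed coupling skeleton for orientation only: it uses the standard durham type branching probabilities and deliberately omits the quark mass modifications of the splitting functions that are the subject of the text; the coupling, the centre of mass energy and the resolution parameter are illustrative values, and, as mentioned in the text, branching probabilities are clipped at zero where they would otherwise become negative.

```python
# illustrative numerical evaluation of Durham-type two- and three-jet rates from
# Sudakov form factors, following the standard NLL structure:
#   R2 = Delta_q^2,  R3 = 2 Delta_q^2 * Integral[ Gamma_q * Delta_g ].
# massless, fixed-coupling skeleton; the paper's mass-dependent splitting
# functions are NOT included, and all numbers below are illustrative.
import numpy as np
from scipy.integrate import quad

CF, CA, NF = 4.0 / 3.0, 3.0, 5
ALPHAS = 0.118                      # fixed coupling for simplicity

def gamma_q(Q, q):
    # emission probability off a quark line, clipped at zero as in the text
    return max(2.0 * CF / np.pi * ALPHAS / q * (np.log(Q / q) - 0.75), 0.0)

def gamma_g(Q, q):
    return max(2.0 * CA / np.pi * ALPHAS / q * (np.log(Q / q) - 11.0 / 12.0), 0.0)

def gamma_f(q):
    return NF / (3.0 * np.pi) * ALPHAS / q

def delta_q(Q, Q0):
    return np.exp(-quad(lambda q: gamma_q(Q, q), Q0, Q)[0])

def delta_g(Q, Q0):
    return np.exp(-quad(lambda q: gamma_g(Q, q) + gamma_f(q), Q0, Q)[0])

def jet_rates(Q, ycut):
    Q0 = np.sqrt(ycut) * Q          # jet resolution scale
    r2 = delta_q(Q, Q0) ** 2
    r3 = 2.0 * r2 * quad(lambda q: gamma_q(Q, q) * delta_g(q, Q0), Q0, Q)[0]
    return r2, r3

r2, r3 = jet_rates(Q=91.2, ycut=0.01)
print(f"R2 ~ {r2:.3f}, R3 ~ {r3:.3f}")
```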
expressions for sudakov form factors for heavy quarks are presented they are used to construct resummed jet rates in xmath0 annihilation predictions are given for production of bottom quarks at lep and top quarks at the linear collider
introduction jet rates for heavy quarks numerical results and comparison with fixed order calculations conclusions acknowledgements
structure growth via mergers is one of the main predictions of cdm type cosmologies however what is predicted is the merger rates of dark matter halos which are not directly observable using dark matter halo merger rates to predict galaxy merger rates requires a theory of galaxy formation or at least a model of how galaxies populate dark matter halos in a similar way what can actually be observed are close galaxy pairs disturbed galaxies or morphological differences between galaxies all of which can only be indirectly tied to galaxy mergers using theoretical models thus connecting theory to observationsposes a number of difficulties which are often not given enough attention in particular the halo merger rate is often used as an indicator of galaxy merger rates if galaxy mass scaled linearly with dark matter halo mass then this could possibly be true but differences in the shapes of the galaxy stellar mass and halo mass functions imply that galaxy formation is much less efficient in low and high mass halos thus we should expect that galaxy merger statistics should differ from halo merging statistics the majority of theoretical studies of merger rates analyze mergers of dark matter halos in n body simulations and references therein while there has been no study comparing the results of different analysis differing treatments at least show qualitative agreement a summary of the results from these studies for halos associated with galaxy are 1 halos rarely have major greater than 13 mergers minor mergers of order 110 are very common the merger rate shows weak dependance on halo mass these results are displayed in the left panel of figure fig time taken from xcite which shows the fraction of halos that have accreted an object of a given mass as a function of lookback time only about a third of halos have had a major merger event involving a sizable amount of the halos final mass however xmath0 of halos have had a merger with an object with a mass one tenth of the halo s final mass creating this plot for different final halo masses results in almost no change aside from a very slight increase in the likelihood of a merger for all merger ratios to go from dark matter halo merger rates to galaxy merger rates requires a theory of galaxy formation unfortunately at this time we have no theory that matches all the observed properties of galaxies so the best that can be done is to explore the predictions of a given model of galaxy formation one possibility is to study the merger rates of galaxies in hydrodynamical simulations xcite however one must keep in mind that hydrodynamical simulations at this time do not produce the observed galaxy stellar mass function mergers in a hydrodynamical simulation are in most ways similar to the results of dark matter halos major mergers are rare however the merger rate does seem to show a much stronger dependance on galaxy mass then it does on halo mass see figure 9 there is some small difference in the kinematics of galaxies compared to dark matter halos most notably in their dynamical friction time scales but this is unlikely to be the primary source of this mass dependance a much more important effect is that stellar mass does not scale linearly with halo mass this means that the mass ratio of a galaxy merger may vary greatly from the mass ratio of the halos in which the galaxies reside this understanding can explain why hubble type is such a strong function of galaxy mass a 13 merger in halo mass could result in a 110 or a 11 merger in galaxy mass depending on 
how galaxies inhabit dark matter halos we do nt know exactly how to assign galaxies to halos but we know that galaxy formation must be very inefficient for high and low mass galaxies this can be seen in the right panel of figure fig time which shows the fraction of halo mass in the central galaxy using equation 7 of xcite which is obtained from a sdss galaxy group catalogue while one can argue about the details of this result the generic shape of the function in the plotis well established just from the shape of this functionwe can understand why hubble type is a strong function of galaxy or halo mass for low mass halos the efficiency of galaxy formation increases with halo mass so if two low mass halos merge the ratio of the stellar masses will be less than that of the halos but for high mass halos the efficiency of galaxy formation decreases with increasing mass which leads to more nearly equal mass galaxy mergers this is illustrated in figure fig comp which shows the mean number of objects accreted above a certain mass for different mass halos the left panel shows the dark matter case and simply plots equation 3 from xcite for four different halo masses in comparisonthe right panel shows the same results for galaxy mass where the function from xcite has been used to convert halo mass to central galaxy mass the point of this figure is just to show the striking difference in the two cases while there is almost no mass dependence in the dark matter halo case for galaxies the expected number of events can differ by almost two orders of magnitude thus we would expect galaxy morphology to show dependance on mass in conclusion mergers of dark matter halosare largely independent of halo mass but galaxy mergers are most likely very dependent on mass measurements of galaxy merger statistics can be used as direct tests of galaxy formation models while major mergers between halos are rather rare they can be relatively common between galaxies of certain masses depending on how galaxies inhabit dark halos
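the argument above, that the shape of the stellar mass versus halo mass relation turns a fixed halo merger ratio into very different galaxy merger ratios at different masses, can be made concrete in a few lines. equation 7 of the group catalogue paper used in the text is not reproduced here; as a stand in, the sketch uses a generic double power law relation of the abundance matching type with illustrative parameter values, which is enough to recover the qualitative statement that a 1:3 halo merger maps to roughly 1:10 in stellar mass for low mass halos and to a nearly equal mass galaxy merger for group scale halos.

```python
# illustrative mapping of a dark-matter-halo merger ratio into a galaxy (stellar)
# merger ratio. the relation below is a generic double power law of the
# abundance-matching type with round, illustrative parameters, standing in for
# the group-catalogue relation used in the text (which is not reproduced here).
import numpy as np

M1, N0, BETA, GAMMA = 10**11.9, 0.028, 1.06, 0.56   # illustrative parameters

def stellar_mass(m_halo):
    """Central galaxy stellar mass for a halo of mass m_halo (solar masses)."""
    x = m_halo / M1
    return m_halo * 2.0 * N0 / (x**(-BETA) + x**GAMMA)

for m_big in (1e11, 1e13):
    m_small = m_big / 3.0                            # a 1:3 merger in halo mass
    ratio = stellar_mass(m_small) / stellar_mass(m_big)
    print(f"halo 1:3 at {m_big:.0e} Msun -> galaxy ~1:{1.0/ratio:.1f}")
```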
in the cdm cosmological framework structures grow from merging with smaller structures merging should have observable effects on galaxies including destroying disks and creating spheroids this proceeding aims to give a brief overview of how mergers occur in cosmological simulations in this regard it is important to understand that dark matter halo mergers are not galaxy mergers a theory of galaxy formation is necessary to connect the two mergers of galaxies in hydrodynamical simulations show a stronger dependence on mass than halo mergers in n body simulations if one knows how to connect galaxies to dark matter halos then the halo merger rate can be converted into a galaxy merger rate when this is done it becomes clear that major mergers are many times more common in more massive galaxies offering a possible explanation of why hubble type depends on galaxy mass
introduction dark halo mergers galaxy mergers hubble type
it is thought that the vast majority of stars are formed in star clusters lada lada 2003 during the collapse and fragmentation of a giant molecular cloud into a star cluster only a modest percentage xmath2 of the gas is turned into stars eg lada lada 2003 thus during the initial phases of its lifetime a star cluster will be made up of a combination of gas and stars however at the onset of stellar winds and after the first supernovae explosions enough energy is injected into the gas within the embedded cluster to remove the gas on timescales shorter than a crossing time eg hills 1980 lada et al 1984 goodwin 1997a the resulting cluster now devoid of gas is far out of equilibrium due to the rapid change in gravitational potential energy caused by the loss of a significant fraction of its mass while this process is fairly well understood theoretically eg hills 1980 mathieu 1983 goodwin 1997a b boily kroupa 2003a b its effects have received little consideration in observational studies of young massive star clusters in particular many studies have recently attempted to constrain the initial stellar mass function imf in clusters by studying the internal dynamics of young clusters by measuring the velocity dispersion and half mass radius of a cluster and assuming that the cluster is in virial equilibrium an estimate of the dynamical mass can be made by then comparing the ratio of dynamical mass to observed light of a cluster to simple stellar population models which require an input imf one can constrain the slope or lower upper mass cuts of the imf required to reproduce the observations studieswhich have done such analyses have found discrepant results with some reporting non standard imfs eg smith gallagher 2001 mengel et al 2002 and others reporting standard kroupa 2002 or salpeter 1955 type imfs eg maraston et al 2004 larsen richtler 2004 however bastian et al 2006 noted an age dependence in how well clusters fit standard imfs in the sense that all clusters xmath1100 myr were well fit by kroupa or salpeter imfs while the youngest clusters showed a significant scatter they suggest that this is due to the youngest tens of myr clusters being out of equilibrium hence undercutting the underlying assumption of virial equilibrium needed for such studies in order to test this scenario in the present work we shall look at the detailed luminosity profiles of three young massive clusters namely m82f ngc 1569a ngc 1705 1 all of which reside in nearby starburst galaxies m82f and ngc 1705 1have been reported to have non standard stellar imfs smith gallagher 2001 mccrady et al 2005 sternberg 1998 here we provide evidence that they are likely not in dynamical equilibrium due to rapid gas loss thus calling into question claims of a varying stellar imf ngc 1569a appears to have a standard imf smith gallagher 2001 based on dynamical measurements however we show that this cluster is likely also out of equilibrium throughout this workwe adopt ages of m82f ngc 1569a and ngc 1705 to be xmath3 myr gallagher smith 1999 xmath4 myr anders et al 2004 and 1020 myr heckman leitherer 1997 respectively studies of star clusters in the galaxy eg lada lada 2003 as well as extragalactic clusters bastian et al 2005a fall et al 2005 have shown the existence of a large population of young xmath5 10 20 myr short lived clusters the relative numbers of young and old clusters can only be reconciled if many young clusters are destroyed in what has been dubbed infant mortality it has been suggested that rapid gas expulsion from young 
cluster which leaves the cluster severely out of equilibrium would cause such an effect bastian et al we provide additional evidence for this hypothesis in the present work the paper is structured in the following way in data and models we present the observations ie luminosity profiles and models of early cluster evolution respectively in disc we compare the observed profiles with our xmath0body simulations and in conclusions we discuss the implications with respect to the dynamical state and the longevity of young clusters for the present work we concentrate on f555w v band observations of m82f ngc 1569a and ngc 1705 1 taken with the high resolution channel hrc of the advanced camera for surveys acs on board the hubble space telescope hst the acs hrc has a plate scale of 0027 arcseconds per pixel all observations were taken from the hst archive fully reduced by the standard automatic pipeline bias correction flat field and dark subtracted and drizzled using the multidrizzle package koekemoer et al 2002 to correct for geometric distortions remove cosmic rays and mask bad pixels the observations of m82f are presented in more detail in mccrady et al total exposures were 400s 130s and 140s for m82f ngc 1569a and ngc 1705 1 respectively due to the high signal to noise of the data we were able to produce surface brightness profiles for each of the three clusters on a per pixel basis the flux per pixel was background subtracted and transformed to surface brightness the inherent benefit of using this technique rather than circular apertures is that it does not assume that the cluster is circularly symmetric this is particularly important for m82f which is highly elliptical eg mccrady et al 2005 for m82f we took a cut through the major axis of the cluster the results are shown in the top panel of fig fig obs we note that a cut along the minor axis of this cluster as well as using different filters u b and i also from hst acs hrc imaging would not change the conclusions presented in disc conclusions for ngc 1569a and ngc 1705 1we were able to assume circular symmetry after checking the validity of this assumption and hence we binned the data as a function of radius from the centre the results for these clusters are shown in the centre and bottom panels of fig fig obs where the circular data points represent mean binning in flux and the triangles represent median binning the standard deviation of the binned mean data points is shown we also note that our conclusions would remain unchanged disc conclusions if we used the f814w i hst acs hrc observations we did not correct the surface brightness profiles for the psf as the effects that we are interested in happen far from the centre of the clusters and therefore should not be influenced by the psf in all panels of fig fig obs we show the psf as a solid green line taken from an acs hrc observation of a star in a non crowded region the background of the area surrounding each cluster is shown by a horizontal dashed line in order to quantify our results we fit two analytical profiles to the observed lps the first is a king 1962 function which fits well the galactic globular clusters and is characterised by centrally concentrated profiles with distinct tidal cut offs in their outer regions the second analytical profile used is an elson fall freeman eff 1987 profile which is also centrally concentrated with a non truncated power law envelope the eff profile has been shown to fit young clusters in the lmc better eff as well as young massive clusters in 
galaxies outside the local group eg larsen 2004 schweizer 2004 the best fitting king and eff profiles are shown as blue dashed and red solid lines respectively the fits were carried out on all points within 05 of the centre of the clusters ie the point at which from visual inspection the profile deviates from a smoothly decreasing function as is evident in fig fig obs all cluster profiles are well fit by both king and eff profiles in their inner regions however none of the clusters appear tidally truncated in fact all three clusters display an excess of light at large radii with respect to the best fitting power law profile the points of deviation from the best fitting eff profiles are marked with arrows this result will be further discussed in disc due to the rather large distance of the galaxies as well as the non uniform background around the clusters presented here background subtraction is non trivial however we have checked the effect of selecting different regions surrounding the clusters and note that our conclusions remain unchanged we also note that in the lmc where the background can be much more reliably determined many clusters show excess light at large radii eg eff elson 1991 mackey gilmore 2003 we model star clusters using xmath0body simulations star clusters are constructed as plummer 1911 spheres using the prescription of aarseth et al 1974 which require the plummer radius xmath6 and total mass xmath7 to be specified clusters initially contain 30000 equal mass stars or the gravitational softening this is because the dynamics we model are those of violent relaxation to a new potential and so 2body encounters are unimportant simulations were conducted on a grape5a special purpose computer at the university of cardiff using a basic xmath0body integrator the speed of the grape hardware means that sophisticated codes are not required for a simple problem such as this the expulsion of residual gas from star clusters has been modelled by several authors see in particular lada et al 1984 goodwin 1997a b geyer burkert 2001 kroupa et al 2001 boily kroupa 2003a b the typical method is to represent the gas as an external potential which is removed on a certain timescale gas removal is expected to be effectively instantaneous ie to occur in less than a crossing time eg goodwin 1997a melioli de gouveia dal pino 2006 as such we require no gas potential and can model the cluster as a system that is initially out of virial equilibrium equivalent to starting the simulations at the end of the gas expulsion the subsequent evolution is the violent relaxation lynden bell 1967 of the cluster as it attempts to return to virial equilibrium we define an effective star formation efficiency xmath8 which parameterises how far out of virial equilibrium the cluster is after gas expulsion a cluster which initially contains xmath9 stars and xmath9 gas ie a xmath9 star formation efficiency which is initially in virial equilibrium will have a stellar velocity dispersion that is a factor of xmath10 more generally xmath11 too large to be virialised after the gas is instantaneously lost we define the efficiency as effective as it assumes that the gas and stars are initially in virial equilibrium which may not be true we choose as initial conditions xmath12 pc corresponding to a half mass radius of xmath13 pc and xmath14 or xmath15 ie the total initial stellar plus gas mass was xmath16 or xmath15 as representative of young massive star clusters in order to compare the simulations with our observationswe place 
the simulations at our assumed distance of m82 namely 36 mpc assuming that it is at the same distance as m81 freedman et al previous simulations have shown that for xmath17 clusters are totally destroyed by gas expulsion but for higher xmath8 significant stellar mass loss occurs but a bound core remains goodwin 1997a b boily kroupa 2003a b for xmath18 and xmath19respectively xmath20 and xmath21 of the initial stellar mass is lost within xmath22 myr we confirm those results the escaping stars are not lost instantaneously however stars escape with a velocity of order of the initial velocity dispersion of the cluster typically a few km sxmath23 therefore escaping stars will still be physically associated with the cluster for xmath24 xmath25 myr after gas expulsion these stars produce a tail in the surface brightness profile and produce the observed excess light at large radii we assume a constant mass to light ratio for the simulation and convert the projected mass density into a luminosity and hence surface brightness profile the normalisation of the surface brightness is arbitrary and scaled so that the central surface brightness is similar to that of the observed clusters two of the simulations are shown in fig fig model the filled circles are the surface brightness of the simulated cluster with the specific parameters total initial mass xmath8 and time since gas expulsion of the simulations shown we follow the same fitting technique as with the observations namely fitting king and eff profiles dashed blue and solid red lines respectively to the profile as was seen in the observations the simulations display excess light at large radii the detailed correspondence between the observations and simulations presented here lead us to conclude that m82f ngc 1569a and ngc 1705 1 display the signature of rapid gas removal and hence are not in dynamical equilibrium in future works we will provide a large sample of luminosity profiles of young massive extragalactic star clusters as well as a detailed set of models which can be used to constrain the star formation efficiency of the clusters herewe simply note that models with a sfe between 40 50 best reproduce the observations similar surface brightness profiles with an excess of light at large radii are seen in young lmc clusters see eff and elson 1991 in which many clusters clearly show these unusual profiles and also mackey gilmore 2003 in particular for r136 these profiles are also well matched by our simulations mclaughlin van der marel 2005 have compiled a data base of structural parameters for young lmc smc clusters and compare the m l ratio from dynamical estimates to that predicted by simple stellar population models ie to check the dynamical state of the young clusters however the study was limited as the young clusters tend to be of relatively low mass making it difficult to measure accurate velocity dispersions herewe simply note that the five clusters in their sample younger than 100 myr all show significant deviations in the m l ratio but also note that this may simply be due to stochastic measurement errors it appears likely that the excess light at large radii seen in many massive young star clusters is a signature of violent relaxation after gas expulsion this suggests that these clusters have effective star formation efficiencies of around xmath25 xmath9 such that they show a significant effect but do not destroy themselves rapidly it should also be noted that the escaping stars are not just physically associated with a cluster in 
the surface brightness profiles measurements of the velocity dispersion of the cluster will also include the escaping stars this will result in an artificially high velocity dispersion that reflects the initial total stellar and gaseous mass thus mass estimates based on the assumption of stellar virial equilibrium may be wrong by a factor of up to three for 1020 myr after gas expulsion as is shown in fig fig virial for xmath26 and xmath27 clusters ie at the ages of ngc 1569a and ngc 1705 1 clusters with xmath28 xmath27 rapidly readjust to their new potential and the virial mass estimates become fairly accurate xmath24 xmath21 myr after gas expulsion ie for a cluster age of xmath21 xmath29 myr however for xmath30 the virial mass is significantly greater than the actual mass for xmath31 myr and clusters do not settle into virial equilibrium for xmath32 myr indeed between xmath33 and xmath25 myr after gas expulsion the virial mass estimate underestimates the total mass by up to xmath33 as the cluster has over expanded a few recent studies have reported non kroupa 2002 or non salpeter 1955 type initial stellar mass functions imf in young star clusters eg smith gallagher 2001 mengel et al these results were based on comparing dynamical mass estimates found by measuring the velocity dispersion and half mass radius of a cluster and assuming virial equilibrium and the light observed from the cluster with simple stellar population models which assume an input stellar imf other studies based on the same technique eg larsen ritchler 2004 maraston et al 2004 have reported standard kroupa or salpeter type imfs recently bastian et al 2006 noted a strong age dependence on how well young clusters fit ssp models with standard imfs with all clusters older than xmath34 myr being will fit by a kroupa imf based on this age dependence they suggested that the youngest star clusters xmath35 myr may not be in virial equilibrium the observations presented here strongly support this interpretation as m82f and ngc 1705 1 both seem to have been strongly affected by rapid gas loss while ngc 1569a has been reported to have a salpeter type imf smith gallagher 2001 the excess light at large radii suggests that this cluster has also undergone a period of violent relaxation and stars lost during this are still associated with the cluster even though its velocity dispersion correctly measures its mass it should be noted that the obvious signature of violent relaxation in the profile of m82f suggests that it is at the lower end of its age estimate of xmath36 myr gallagher smith 1999 as by xmath25 xmath9 myr the tail of stars becomes disassociated from the cluster another possibility is that m82f has been tidally shocked and has had a significant amount of energy input into the cluster thus mimicking the effects of gas expulsion whichever is the case the tail of stars from m82f whatever its age is a signature of violent relaxation and strongly suggests that it is out of virial equilibrium if a young star cluster has a low enough effective star formation efficiency xmath37 it can become completely unbound and dissolve over the course of a few tens of myr this mechanism has been invoked to explain the expanding ob associations in the galaxy hills 1980 recent studies of large extragalactic cluster populations in m 51 bastian et al 2005a and ngc 403839 fall et al 2005 have shown a large excess of young xmath510 myr clusters relative to what would be expected for a continuous cluster formation history both of these studies suggest 
that the excess of extremely young clusters is due to a population of short lived unbound clusters the rapid dissolution of these clusters has been dubbed infant mortality the observations and simulations presented here support such a scenario if the star formation efficiency is less than 30 no matter what the mass the rapid removal of gas completely disrupts a cluster although see fellhauer kroupa 2005 for a mechanism which can produce a bound cluster with xmath38 even if xmath8 is large enough to leave a bound cluster the cluster may be out of equilibrium enough for external effects to completely dissolve it such as the passage of giant molecular clouds gieles et al 2006 or in the case of large cluster complexes other young star clusters interestingly gas expulsion often significantly lowers the stellar mass of the cluster even if a bound core remains see models thus relating the observed mass function of clusters to the birth mass function needs to account not only for infant mortality but also for infant weight loss in which a cluster could lose xmath39 of its initial stellar mass in xmath40 myr the current simulations do not include either a stellar imf nor the evolution of stars the inclusion of these effects do not significantly effect the results as the mass loss due to stellar evolution is low compared to that due to gas expulsion see goodwin 1997a b in particular we do not expect the preferential loss of low mass stars as these clusters are too young for equipartition to have occured thus stars of all masses are expected to have similar velocities one caveat to this is the effect of primordial mass hence velocity segregation which may mean that the most massive stars are very unlikely to be lost as they have the lowest velocity dispersion we will consider such points in more detail in a future paper observations of the surface brightness profiles of the massive young clusters m82f ngc 1569a and ngc 1705 1 show a significant excess of light at large radii compared to king or eff profiles simulations of the effects of gas expulsion on massive young clusters produce exactly the same excess due to stars escaping during a period of violent relaxation gas expulsion can also cause virial mass estimates to be significantly wrong for several 10s of myr these signatures are also seen in many other young star clusters eg elson 1991 mackey gilmore 2003 and suggest that gas expulsion is an important phase in the evolution of young clusters that can not be ignored in particular this shows that claims of unusual imfs for young star clusters are probably in error as these clusters are not in virial equilibrium as is assumed in future work we will further explore the dynamical state of young clusters in order to constrain the star formation efficiency within the clusters we would like to thank mark gieles and francois schweizer for interesting and useful discussions as well as markus kissler patig and linda smith for critical readings of earlier drafts of the manuscript the anonymous referee is thanked for useful suggestions and comments this paper is based on observations with the nasa esa hubble space telescope which is operated by the association of universities for research in astronomy inc under nasa contract nas5 26555 spg is supported by a uk astrophysical fluids facility ukaff fellowship the grape5a used for the simulations was purchased on pparc grant ppa g s199800642 99 aarseth sj hnon m wielen r 1974 aa 37 183 anders p de grijs r fritze v alvensleben u bissantz n 2004 mnras 347 17 
bastian n gieles m lamers hjglm scheepmaker r a de grijs r 2005a aa 431 905 bastian n saglia rp goudfrooij p kissler patig m maraston c schweizer f zoccali m 2006 aa 448 881 boily cm kroupa p 2003a mnras 338 665 boily cm kroupa p 2003b mnras 338 673 elson raw 1991 apjs 76 185 elson raw fall ms freeman kc 1987 apj 323 54 eff fall sm chandar r whitmore bc 2005 apj 631 133 fellhauer m kroupa p 2005 mnras 359 223 freedman w hughes sm madore bf 1994 apj 427 628 gallagher js iii smith lj 1999 mnras 304 540 geyer mp burkert a 2001 mnras 323 988 gieles m portegies zwart s f sipior m baumgardt h lamers hjglm leenaarts j 2006 mnras in prep goodwin sp 1997a mnras 284 785 goodwin sp 1997b mnras 286 669 heckman tm leitherer c 1997 aj 114 69 hills jg 1980 apj 235 986 king i 1962 aj 67 471 koekemoer a m fruchter a s hook r n hack w 2002 in the 2002 hst calibration workshop ed s arribas a koekemoer b whitmore baltimore stsci 339 2001 mnras 321 699 kroupa p 2002 science 295 82 lada cj margulis m dearborn d 1984 apj 285 141 lada cj lada ea 2003 araa 41 57 larsen ss 2004 aa 416 537 larsen ss richtler t 2004 aa 427 495 lynden bell d 1967 mnras 136 101 mackey ad gilmore gf 2003 mnras 338 85 maraston c bastian n saglia r p kissler patig m schweizer f goudfrooij p 2004 aa 416 467 mathieu rd 1983 apj 267 l97 mccrady n graham jr vacca wd 2005 apj 621 278 mclaughlin de van der marel rp 2005 apjs 161 304 melioli c de gouveia dal pino e m 2006 aa 445 l23 mengel s lehnert md thatte n genzel r 2002 aa 383 137 meylan g 1993 in asp conf 48 the globular cluste galaxy connection smith jp brodie astron san fransisco p 588plummer hc 1911 mnras 71 460 salpeter ee 1955 apj 121 161 schweizer f 2004 in asp conf 322 the formation and evolution of massive star clusters eds lamers lj smith a nota p 111smith lj gallagher js 2001 mnras 326 1027 sternberg a 1998 apj 506 721
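the initial conditions used in the simulations described earlier in this article are simple to reproduce schematically. the sketch below is not the grape 5a code of the paper: it samples an equal mass plummer sphere with the standard aarseth, henon and wielen style prescription and then multiplies all stellar velocities by 1/sqrt(esfe) to represent the state immediately after instantaneous gas expulsion, when the stars still carry the velocity dispersion set by the full star plus gas potential. particle number and the effective star formation efficiency are illustrative, and the direct sum potential is only used to print the resulting virial ratio.

```python
# minimal sketch (not the paper's GRAPE-5A code): build a Plummer-sphere cluster
# in virial equilibrium, then rescale the velocities by 1/sqrt(eSFE) to mimic the
# state just after instantaneous gas expulsion. units: G = M_stars = a_Plummer = 1;
# all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def random_directions(n):
    u = rng.uniform(-1.0, 1.0, n)                # cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - u**2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), u])

def plummer_sphere(n):
    """Positions and equilibrium velocities of an equal-mass Plummer model
    (standard Aarseth, Henon & Wielen style sampling)."""
    # radii from the cumulative mass profile m(<r) = r^3 / (1 + r^2)^(3/2)
    x = rng.uniform(1e-10, 1.0, n)
    r = (x**(-2.0 / 3.0) - 1.0)**(-0.5)
    pos = r[:, None] * random_directions(n)
    # speeds: q = v/v_esc drawn from g(q) ~ q^2 (1 - q^2)^(7/2) by rejection
    q = np.empty(n)
    filled = 0
    while filled < n:
        qt = rng.uniform(0.0, 1.0, n)
        yt = rng.uniform(0.0, 0.1, n)
        ok = yt < qt**2 * (1.0 - qt**2)**3.5
        take = min(ok.sum(), n - filled)
        q[filled:filled + take] = qt[ok][:take]
        filled += take
    v_esc = np.sqrt(2.0) * (1.0 + r**2)**(-0.25)
    vel = (q * v_esc)[:, None] * random_directions(n)
    return pos, vel

def kinetic_and_potential(pos, vel, m):
    ke = 0.5 * m * np.sum(vel**2)
    pe = 0.0
    for i in range(len(pos) - 1):                # direct-sum potential, O(N^2)
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        pe -= np.sum(m * m / d)
    return ke, pe

n, esfe = 4000, 0.4                              # illustrative N and effective SFE
pos, vel = plummer_sphere(n)
vel /= np.sqrt(esfe)                             # velocities set by the pre-expulsion mass
ke, pe = kinetic_and_potential(pos, vel, 1.0 / n)
print(f"virial ratio -T/W = {ke / abs(pe):.2f} (0.5 means equilibrium)")
print(f"naive virial mass overestimate ~ 1/eSFE = {1.0 / esfe:.1f}")
```

lowering esfe toward 1/3 pushes the printed virial mass overestimate toward the factor of about three quoted in the text for clusters observed shortly after gas expulsion.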
we present detailed luminosity profiles of the young massive clusters m82f ngc 1569a and ngc 1705 1 which show significant departures from equilibrium king and eff profiles we compare these profiles with those from xmath0body simulations of clusters which have undergone the rapid removal of a significant fraction of their mass due to gas expulsion we show that the observations and simulations agree very well with each other suggesting that these young clusters are undergoing violent relaxation and are also losing a significant fraction of their stellar mass that these clusters are not in equilibrium can explain the discrepant mass to light ratios observed in many young clusters with respect to simple stellar population models without resorting to non standard initial stellar mass functions as claimed for m82f and ngc 1705 1 we also discuss the effect of rapid gas removal on the complete disruption of a large fraction of young massive clusters infant mortality finally we note that even bound clusters may lose xmath1 50 of their initial stellar mass due to rapid gas loss infant weight loss galaxies star clusters stellar dynamics methods xmath0body simulations
introduction observations simulations a comparison of simulations and observations implications and conclusions summary acknowledgments
cassiopeia a is the brightest shell type galactic supernova remnant snr in x rays and radio and the youngest snr observed in our galaxy the radius of the approximately spherical shell is xmath15 which corresponds to xmath16 pc for the distance xmath17 kpc reed et al the supernova which gave rise to cas a was probably first observed in 1680 ashworth 1980 it is thought to be a type ii supernova caused by explosion of a very massive wolf rayet star fesen becker blair 1987 optical observations of cas a show numerous oxygen rich fast moving knots fmk with velocities of about 5000 km sxmath4 and slow moving quasi stationary flocculi with typical velocities of about 200 km sxmath4 which emit hxmath18 and strong lines of nitrogen x ray observations of cas a show numerous clumps of hot matter emitting strong si s fe ar ne mg and ca lines holt et al 1994 and references therein because this snr lies at the far side of the perseus arm with its patchy distribution of the interstellar gas the interstellar absorption varies considerably across the cas a image eg keohane rudnick anderson 1996 numerous radio optical and x ray measurements of the hydrogen column density eg schwarz goss kalberla 1997 hufford fesen 1996 jansen et al 1988 favata et al 1997 show a strong scatter within a range xmath19 where xmath20 based on recent results we consider xmath21 as plausible values for the central region of the cas a image in spite of considerable efforts to detecta compact remnant of the supernova explosion only upper limits on its flux had been established at different wavelengths until a pointlike x ray source was discovered close to the cas a center tananbaum et al 1999 in the first light observation with the x ray observatory see weisskopf et al 1996 for a description after this discovery the same source was found in the hri image of 199596 aschenbach 1999 and hri images of 1979 and 1981 pavlov zavlin 1999 in this letter we present the first analysis on the central source spectrum observed with 2 together with the analysis of the and asca observations 3 various interpretations of these observations are discussed in 4 the snr cas a was observed several times during the orbital activation and calibration phase for our analysis we chose four observations of 1999 august 2023 with the s array of the advanced ccd imaging spectrometer acis garmire 1997 in these observationscas a was imaged on the backside illuminated chip s3 the spectral response of this chipis presently known better that those of the frontside illuminated chips used in a few other acis observations of cas a we used the processed data products available from the public data archive the observations were performed in the timed exposure mode with a frame integration time of 324 s the durations of the observations were 503 204 176 and 177 ks because of telemetry saturation the effective exposures were 281 122 106 and 105 ks respectively since the available acis response matrices were generated for the set of grades g02346 we selected events with these grades events with pulse height amplitudes exceeding 4095 adu xmath22 of the total number were discarded as generated by cosmic rays the images of the pointlike source look slightly elongated but this elongation is likely caused by errors in the aspect solution and the overall shapes of the images is consistent with the assumption that this is a point source its positions in the four observations are consistent with that reported by tananbaum et al 1999 xmath23 xmath24 for each of the images we extracted 
the sourcebackground counts from a xmath25 radius circle around the point source center and the background from an elliptical region around the circle with an area of about 10 times that of the circle after subtracting the background we obtained the source countrates xmath26 xmath27 xmath28 and xmath29 ksxmath4 counts per kilosecond the countrate values and the light curves are consistent with the assumption that the source flux remained constant during the 4 days with the countrate of xmath30 ksxmath4 for the analysis of the point source spectrum we chose the longest of the acis s3 observations we grouped the pulse height spectrum for 306 source counts into 14 bins in the 0850 kev range fig 1 each bin has more than 20 counts except for the highest energy bin with 8 counts the spectral fits were performed with the xspec package if the source is an active pulsar we can expect that its x ray radiation is emitted by relativistic particles and has a power law spectrum the power law fit upper panel of fig 2 yields a photon index xmath31 all uncertainties are given at a xmath32xmath33 confidence level that is considerably larger than xmath3421 observed for x ray radiation from youngest pulsars becker trmper 1997 the hydrogen column density xmath35 inferred from the power law fit somewhat exceeds estimates obtained from independent measurements see 1 the unabsorbed x ray luminosity in the 0150 kev range xmath36 erg sxmath4 where xmath37 is lower than those observed from very young pulsars eg xmath38 and xmath39 erg sxmath4 for the crab pulsar and psr b054069 in the same energy range if the source is a neutron star ns but not an active pulsar thermal radiation from the ns surface can be observed the blackbody fit middle panel of fig 2 yields a temperature xmath40 mk and a sphere radius xmath41 km which correspond to a bolometric luminosity xmath42 erg sxmath4 we use the superscript xmath43 to denote the observed quantities distinguishing them from those at the ns surface xmath44 xmath45 xmath46 where xmath4712 1 041 m14r6112 is the gravitational redshift factor xmath48 and xmath49 cm are the ns mass and radius the temperature is too high and the radius is too small to interpret the detected x rays as emitted from the whole surface of a cooling ns with a uniform temperature distribution the inferred hydrogen column density xmath50 is on a lower side of the plausible xmath51 range since fitting observed x ray spectra with light element ns atmosphere models yields lower effective temperatures and larger emitting areas e g zavlin pavlov trmper 1998 we fit the spectrum with a number of hydrogen and helium ns atmosphere models pavlov et al 1995 zavlin pavlov shibanov 1996 for several values of ns magnetic field these fits show that the assumption that the observed radiation is emitted from the whole surface of a 10km radius ns with a uniform temperature still leads to unrealistically large distances xmath5250 kpc thus both the blackbody fit and h he atmosphere fits hint that if the object is a ns the observed radiation emerges from hot spots on its surface see 4 an example of such a fit for polar caps covered with a hydrogen atmosphere with xmath53 g is shown in the bottom panel of figure 2 the model spectra used in this fit were obtained assuming the ns to be an orthogonal rotator the angles xmath18 between the magnetic and rotation axes and xmath54 between the rotation axis and line of sight equal xmath55 the inferred effective temperature of the caps is xmath56 mk which corresponds to xmath57 mk the 
polar cap radius xmath58 km and xmath59 the bolometric luminosity of two polar caps is xmath60 erg sxmath4 the temperature xmath61 can be lowered and the polar cap radius increased if we see the spot face on during the most part of the period extreme values xmath62 mk xmath63 mk and xmath64 km at xmath65 correspond to xmath66 the fits with the one component thermal models implicitly assume that the temperature of the rest of the ns surface is so low that its radiation is not seen by acis on the other hand according to the ns cooling models eg tsuruta 1998 one should expect that at the age of 320 yr the redshifted surface temperature can be as high as 2 mk for the so called standard cooling and much lower down to 03 mk for accelerated cooling to constrain the temperature outside the polar caps we repeated the polar cap fits with the second thermal component added at a fixed ns radius and different fixed values of surface temperature xmath67 with this approach we estimated upper limits on the lower temperature xmath6823 mk at a 99 confidence level depending on the low temperature model chosen these fits show that the model parameters are strongly correlated the increase of xmath67 shifts the best fit xmath61 downward and xmath69 upward for example using an iron atmosphere model for the low temperature component and assuming a hydrogen polar cap we obtain an acceptable fit see fig 1 for xmath70 mk xmath71 km xmath72 mk xmath73 km xmath74 note that this xmath67 is consistent with the predictions of the standard cooling models and xmath51 is close to most plausible values adopted for the central region of the snr we reanalyzed the archival data on cas a obtained during a long hri observation between 1995 december 23 and 1996 february 1 dead time corrected exposure 1756 ks the image shows a pointlike central source at the position xmath75 xmath76 coordinates of the center of the brightest xmath77 pixel consistent with that reported by aschenbach 1999 its separation from the point source position xmath78 is smaller than the absolute pointing uncertainty about xmath79 briel et al measuring the source countrate is complicated by the spatially nonuniform background another complication is that the 40day long exposure actually consists of many single exposures of very different durations because of the absolute pointing errors combining many single images in one leads to additional broadening of the point source function psf to account for these complications we used several apertures with radii from xmath25 to xmath80 for sourcebackground extraction measured background in several regions with visually the same intensity as around the source discarded short single exposures and used various combinations of long single exposures for countrate calculations this analysis yields a source countrate of xmath81 ksxmath4 corrected for the finite apertures we also re investigated the archival data on cas a obtained with the hri in observations of 1979 february 8 425 ks exposure and 1981 january 2223 256 ks exposure in each of the data sets there is a pointlike source at the positions xmath82 xmath83 and xmath84 xmath85 respectively consistent with those reported by pavlov zavlin 1999 the separations from the position xmath86 and xmath87 and from the position xmath88 and xmath89 are smaller than the nominal absolute position uncertainty xmath90 for since the observations were short estimating the source countrates is less complicated than for the hri observation the sourcebackground counts were selected from 
xmath91radius circles and the background was measured from annuli of xmath92 outer radii surrounding the circles the source countrates calculated with account for the hri psf harris et al 1984 are xmath93 feb 1979 xmath94 ksxmath4 jan 1981 and xmath95 ksxmath4 for the combined data the countrate is consistent with the upper limit of xmath96 ksxmath4 derived by murray et al 1979 from the longer of the two observations to check whether the source radiation varied during the two decades we plotted the lines of constant and hri countrates in figure 2 for all the three one component models the domains of model parameters corresponding to the hri countrates within a xmath97 range are broader than the 99 confidence domains obtained from the spectra the 1xmath33 domains corresponding to the hri countrate overlap with the 1xmath33 confidence regions obtained from the spectral data thus the source countrates detected with the three instruments do not show statistically significant variability of the source we also examined numerous archival asca observations of cas a 19931999 and failed to detect the central point source on the high background produced by bright snr structures smeared by poor angular resolution of the asca telescopes in the longest of the asca sis observations 1994 july 29 151 ks exposure the point source would be detected at a 3xmath33 level if its flux were a factor of 8 higher than that observed with and the asca observations show that there were no strong outbursts of the central source the observed x ray energy flux xmath98 of the compact central object cco is 36 65 and xmath99 erg xmath100 sxmath4 in 0324 0340 and 0360 kev ranges respectively upper limits on its optical ir fluxes xmath101 erg xmath100 sxmath4 and xmath102 erg xmath100 sxmath4 can be estimated from the magnitude limits xmath103 and xmath104 found by van den bergh pritchet 1986 this gives eg xmath105 for the energy range and xmath106 for the and ranges the flux ratios are high enough to exclude coronal emission from a noncompact star as the source of the observed x ray radiation a hypothesis that cco is a background agn or a cataclismic variablecan not be completely rejected but its probability looks extremely low given the high x ray to optical flux ratio the softness of the spectrum and the lack of indications on variability the strong argument for cco to be a compact remnant of the casa explosion is its proximity to the cas a center in particular this source lies xmath80xmath107 south of the snr geometrical center determined from the radio image of cas a see reed et al 1995 and references therein the source separation xmath108xmath91 from the snr expansion center found by van den bergh kamper 1983 from the analysis of proper motions of fmks corresponds to a transverse velocity of 50250 km sxmath4 for xmath5 kpc xmath109 yr much higher transverse velocities 8001000 km sxmath4 correspond to the separation xmath110xmath111 from the position of the apparent center of expanding snr shell derived by reed et al 1995 from the radial velocities of fmks thus if cco is the compact remnant of the sn explosion it is moving south or sse from the cas a center with a transverse velocity of a few hundred km sxmath4 common for radio pulsars if cco is an isolated nonaccreting object it might be an active pulsar with an unfavorable orientation of the radio beam a limit on the pulsed flux of 80 mjy at 408 mhz was reported by woan duffett smith 1993 however a lack of a plerion or a resolved synchrotron nebula together with the 
steep x ray spectrum and low luminosity see 2 do not support this hypothesis the lack of the pulsar activity has been found in several x ray sources associated with young compact remnants of sn explosions eg gotthelf vasisht dotani 1999 it may be tentatively explained by superstrong xmath112 g magnetic fields which may suppress the one photon pair creation in the pulsar s acceleration gaps baring harding 1998 if cco is an isolated ns without pulsar activity one may assume that the observed x rays are emitted from the ns surface in this case we also have to assume an intrinsically nonuniform surface temperature distribution to explain the small size and high temperature of the emission region slight nonuniformity of the surface temperature can be caused by anisotropy of heat conduction in the strongly magnetized ns crust greenstein hartke 1983 however this nonuniformity is not strong enough to explain the small apparent areas of the emitting regions some nonuniformity might be expected in magnetars if they are indeed powered by decay of their superstrong magnetic fields thompson duncan 1996 heyl kulkarni 1998 and a substantial fraction of the thermal energy is produced in the outer ns crust in this case the hotter regions of the ns surface would be those with stronger magnetic fields if additional investigations will demonstrate quantitatively that the observed luminosity of xmath113 erg sxmath4 can be emitted from a small fraction xmath114 of the magnetar s surface we should expect that the radiation is pulsed with a probable period of a few seconds typical for magnetars we can also speculate that cco is a predecessor of a soft gamma repeator such a hypothesis has been proposed by gotthelf et al 1999 for the central source of the kes 73 which shows a spectrum similar to cco albeit emitted from a larger area higher temperatures of polar caps can be explained by different chemical compositions of the caps and the rest of the ns surface light element polar caps could form just after the sn explosion via fallback of a fraction of the ejected matter onto the magnetic poles due to fast stratification in the strong gravitational field the upper layers of the polar caps will be comprised of the lightest element present the thermal conductivity in the liquid portion of thin degenerate ns envelopes which is responsible for the temperature drop from the nearly isothermal interior to the surface is proportional to xmath115 where xmath116 is the ion charge yakovlev urpin 1980 this means that lowxmath116 envelopes are more efficient heat conductors than highxmath116 ones so that a light element h he surface has a higher effective temperature for a given temperature xmath117 at the outer boundary of the internal isothermal region approximately the effective surface temperature is proportional to xmath118 if the chemical composition of the envelope does not vary with depth so that the surface of a hydrogen envelope can be xmath119 times hotter than that of an iron envelope numerical calculations of chabrier potekhin yakovlev 1997 give a smaller factor 1617 for temperatures of interest with account for burning of light elements into heavier ones in the hot bottom layers of the envelope but neglecting the effects of strong magnetic fields which can somewhat increase this factor heyl hernquist 1997 hence the light element cap should be hotter than the rest of the ns surface for instance for xmath120 mk the effective temperatures of the h cap and fe surface are xmath121 mk and xmath122 mk for xmath123 xmath71 
km as we have shown in 2 a two component model spectrum with such temperatures is consistent with the observed cco spectrum for xmath124 km the thickness of the hydrogen cap xmath125 g xmath100 needed to provide such a temperature difference corresponds to the total cap mass xmath126 for lower xmath127 the temperature difference will be smaller but still appreciable for xmath128 such an explanation of the cco radiation is compatible only with the standard cooling scenario the difference of chemical compositions could not account for a large ratio xmath129 of the cap and surface temperatures required by the accelerated cooling let us consider the hypothesis that the observed x ray radiation is due to accretion onto a ns or a black hole bh to provide a luminosity xmath130 erg sxmath4 the accretion rate should be xmath131 g sxmath4 where xmath132 is the accretion efficiency xmath133 for accretion onto the surface of a ns although the luminosity and the accretion rate are very small compared to typical values observed in accreting binaries they are too high to be explained by accretion from circumstellar matter csm very high csm densities andor low object velocities relative to the accreting medium are required for instance the bondi formula xmath134 gives the following relation between the csm baryon density xmath135 and velocity xmath136 km sxmath4 xmath137 xmath138 even at xmath139 which is lower than a typical pulsar velocity the required density exceeds that expected in the cas a interiors by about 34 orders of magnitude unless the ns or bh moves within a much denser and sufficiently cold csm concentration this estimate for xmath135 can be considered as a lower limit because accretion onto a bh or onto a ns in the propeller regime is much less efficient we can not however exclude that cco is accreting from a secondary component in a close binary or from a fossil disk which remained after the sn explosion we can rule out a massive secondary component from the above mentioned xmath140 and xmath141 limits we estimate xmath142 xmath143 we can also exclude a persistent low mass x ray binary lmxb or a transient lmxb in outburst the object would have a much higher x ray luminosity than observed and the accretion disk would be much brighter in the ir optical range van paradijs mcclintock 1995 however cco might be a compact object with a fossil disk or an lmxb with a dwarf secondary component in a long lasting quiescent state eg an m5 dwarf with xmath144 would have xmath145 for the adopted distance and extinction an indirect indication that cco could be a compact accreting object is that its luminosity and spectrum resemble those of lmxbs in quiescence although we have not seen variability inherent to such objects if the accreting object were a young ns it would be hard to explain how the matter accretes onto the ns surface a very low magnetic field andor long rotation period xmath146 s would be required for the accreting matter to penetrate the centrifugal barrier the criterion suggested by rutledge et al 1999 to distinguish between the ns and bh lmxbs in quiescence based on fitting the quiescent spectra with the light element ns atmosphere models favors the bh interpretation although the applicability of this criterion to a system much younger than classical lmxbs may be questioned on the other hand in at least some of bh binaries optical radiation emitted by the accretion flowwas detected in quiescence eg narayan barret mcclintock 1997 at a level exceeding the upper limit on the cco optical 
flux finally one could speculate that the cco progenitor was a binary with an old ns and this old ns has sufficiently slow rotation and low magnetic field to permit accretion onto the ns surface from a disk of matter captured in the aftermath of the sn explosion in this case cco could have properties of an accreting x ray pulsar with a low accretion rate a similar model was proposed by popov 1998 for the central source of rcw 103 although he assumed accretion from the ism to conclude we can not firmly establish the nature of cco based on the data available it can be either an isolated ns with hot spots or a compact object more likely a bh accreting from a fossil disk or from a dwarf binary companion although the cco spectrum and luminosity strongly resemble those of other radio quiet compact sources in snrs these sources may not necessarily represent a homogeneous group eg the central source of kes 73 shows 117 s pulsations and remarkable stability and was proposed to be a magnetar gotthelf et al 1999 whereas the central source of rcw 103 shows long term variability and no pulsations gotthelf petre vasisht 1999 we favor the isolated ns interpretation of cco because it has not displayed any variability critical observations to elucidate its nature include searching for periodic and aperiodic variability deep ir imaging and longer acis observations which would provide more source quanta for the spectral analysis we are grateful to norbert schulz for providing the acis response matrices to gordon garmire leisa townsley and george chartas for their advice on the acis data reduction and to niel brandt sergei popov and jeremy heyl for useful discussions the and data were obtained through the high energy astrophysics science archive research center online service provided by the nasa s goddard space flight center the work was partially supported through nasa grants nag5 6907 and nag5 7017
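as a cross check of the order of magnitude arguments used in the discussion above, the sketch below collects the three simple estimates in one place: the transverse velocity implied by an angular offset from the expansion centre, the bolometric luminosity of two blackbody polar caps, and the ambient density required for bondi accretion to power a given luminosity. all numerical inputs in the example (separation, distance, age, temperature, cap radius, luminosity, velocity, radiative efficiency 0.1) are illustrative assumptions, not the fitted values of the article, which appear only as xmath placeholders.

```python
import math

# physical constants (cgs)
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
C_LIGHT = 2.998e10    # speed of light, cm s^-1
SIGMA_SB = 5.670e-5   # stefan-boltzmann constant, erg cm^-2 s^-1 K^-4
M_P = 1.673e-24       # proton mass, g
M_SUN = 1.989e33      # solar mass, g
KPC = 3.086e21        # kpc in cm
YEAR = 3.156e7        # year in s

def transverse_velocity(sep_arcsec, d_kpc, age_yr):
    """transverse velocity (km/s) implied by an angular offset from the
    expansion centre, for a source at distance d_kpc moving since the explosion."""
    sep_rad = sep_arcsec * math.pi / (180.0 * 3600.0)
    return sep_rad * d_kpc * KPC / (age_yr * YEAR) / 1.0e5

def polar_cap_luminosity(t_eff, r_cap_km):
    """bolometric luminosity (erg/s) of two blackbody polar caps of radius
    r_cap_km (km) and effective temperature t_eff (K); redshift factors ignored."""
    area = 2.0 * math.pi * (r_cap_km * 1.0e5) ** 2
    return area * SIGMA_SB * t_eff ** 4

def bondi_density(l_acc, v_kms, m_ns_msun=1.4, efficiency=0.1):
    """ambient baryon number density (cm^-3) required for bondi accretion,
    mdot = 4 pi (G M)^2 rho / v^3 (valid for v much larger than the sound speed),
    to power a luminosity l_acc (erg/s) with l_acc = efficiency * mdot * c^2."""
    mdot = l_acc / (efficiency * C_LIGHT ** 2)
    rho = mdot * (v_kms * 1.0e5) ** 3 / (4.0 * math.pi * (G * m_ns_msun * M_SUN) ** 2)
    return rho / M_P

if __name__ == "__main__":
    # illustrative inputs only, not the values quoted in the article
    print(transverse_velocity(7.0, 3.4, 320.0))    # roughly 3e2 km/s
    print(polar_cap_luminosity(4.0e6, 0.5))        # roughly 2e32 erg/s
    print(bondi_density(1.0e33, 100.0))            # roughly 1e4 cm^-3
```

the last estimate makes the point of the discussion explicit: for velocities typical of pulsars, the density needed to sustain the observed luminosity by accretion from circumstellar matter comes out several orders of magnitude above what is expected inside the remnant.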
the central pointlike x ray source of the cas a supernova remnant was discovered in the first light observation and found later in the archival and images the analysis of these data does not show statistically significant variability of the source because of the small number of photons detected different spectral models can fit the observed spectrum the power law fit yields the photon index xmath041 and luminosity xmath1xmath2xmath3 erg sxmath4 for xmath5 kpc the power law index is higher and the luminosity lower than those observed from very young pulsars one can fit the spectrum equally well with a blackbody model with xmath68 mk xmath7xmath8 km xmath9xmath10 erg sxmath4 the inferred radii are too small and the temperatures too high for the radiation to be interpreted as emitted from the whole surface of a uniformly heated neutron star fits with the neutron star atmosphere models increase the radius and reduce the temperature but these parameters are still substantially different from those expected for a young neutron star one can not exclude however that the observed emission originates from hot spots on a cooler neutron star surface because of strong interstellar absorption the possible low temperature component gives a small contribution to the observed spectrum an upper limit on the gravitationally redshifted surface temperature is xmath1123 mk depending on the chemical composition of the surface and the star s radius amongst several possible interpretations we favor a model of a strongly magnetized neutron star with magnetically confined hydrogen or helium polar caps xmath12 mk xmath13 km on a cooler iron surface xmath14 mk such temperatures are consistent with the standard models of neutron star cooling alternatively the observed radiation may be interpreted as emitted by a compact object more likely a black hole accreting from a fossil disk or from a late type dwarf in a close binary submitted to the astrophysical journal
introduction acis observation and the point source spectrum analysis of the , and _asca_ images discussion
organic molecular crystals namely crystals composed of organic molecules held together by weak van der waals forces are emerging as excellent candidates for fabricating nanoscale devices these have potential application in electronics and optoelectronics in particular in areas such as solar energy harvesting surface photochemistry organic electronics and spintronics xcite a feature common to such class of devicesis that they are composed from both an organic and inorganic component where the first forms the active part of the device and the second provides the necessary electrical contact to the external circuitry clearly the electronic structure of the interface between these two parts plays a crucial role in determining the final device performance and needs to be understood carefully in particular it is important to determine how charge transfers between the organic and the inorganic component and the energies at which the transfer takes place this is a challenging task especially in the single molecule limit upon adsorption on a substrate the electron addition and removal energies of a molecule change value from that of their gas phase counterparts this is expected since when the molecule is physisorbed on a polarisable substrate the removal addition of an electron from to the molecule gives rise to a polarisation of the substrate the image charge accumulated on the substrate in the vicinity of the molecule alters the addition or removal energy of charge carriers from the molecule a common way to calculate the addition and removal energiesis to use a quasiparticle qp description within the qp picture one ignores the effects of relaxation of molecular orbitals due to addition or removal of electrons and consequently takes the relative alignment of the metal fermi level xmath1 with either the lowest unoccupied molecular orbital lumo and highest occupied molecular orbital homo of a molecule as removal energy this effectively corresponds to associate the electron affinity and the ionization potential respectively to the lumo and homo of the molecule the adequacy of the qp description then depends on the level of theory used to calculate the energy levels of the homo and lumo if the theory of choice is density functional theory dft xcite then a number of observations should be made firstly it is important to note that except for the energy of the homo which can be rigorously interpreted as the negative of the ionization potential xcite in general the kohn sham orbitals can not be associated to qp energies this is however commonly done in practice and often the kohn sham qp levels provide a good approximation to the true removal energies in particular in the case of metals for moleculesunfortunately the situation is less encouraging with the local and semi local approximations of the exchange and correlation functional namely the local density approximation lda and the generalized gradient approximation gga performing rather poorly even for the homo level such situation is partially corrected by hybrid functionals xcite or by functional explicitly including self interaction corrections xcite and extremely encouraging results have been recently demonstrated for range separated functionals xcite the calculation of the energy levels alignment of a molecule in the proximity of a metal however presents additional problems in fact the formation of the image charge although it is essentially a classical electrostatic phenomenon has a completely non local nature this means that unless a given 
functional is explicitly non local it will in general fail in capturing such effect the most evident feature of such failure is that the position of the homo and lumo changes very little when a molecule approaches a metallic surface xcite such failure is typical of the lda and gga and both hybrid and self interaction corrected functionals do not improve much the situation a possible solution to the problem is that of using an explicit many body approach to calculate the qp spectrum this is for instance the case of the gw approximation xcite which indeed is capable of capturing the energy levels renormalization due to the image charge effect xcite the gw scheme however is highly computationally demanding and can be applied only to rather small systems this is not the case for molecules on surfaces where the typical simulation cells have to include several atomic layers of the metal and they should be laterally large enough to contain the image charge in full this in addition to the gw necessity to compute a significant fraction of the empty states manifold make the calculations demanding and it is often not simple even to establish whether convergence has been achieved in this paper we approach the problem of evaluating the charge transfer energies of an organic molecule physisorbed on an inorganic substrate with the help of a much more resource efficient alternative namely constrained density functional theory cdft xcite in cdft one transfers one electron from the molecule to the substrate and vice versa and calculates the difference in energy with respect to the locally charge neutral configuration no excess of charge either on the molecule or the substrate xcite as such cdft avoids the calculation of a qp spectrum which is instead replaced by a series of total energy calculations for different charge distributionsthis approach is free of any interpretative issues and benefits from the fact that even at the lda level the total energy is usually an accurate quantity finally it is important to remark that for any given functional cdft is computationally no more demanding than a standard dft calculation so that both the lda and the gga allow one to treat large systems and to monitor systematically the approach to convergence herewe use the cdft approach to study the adsorption of molecules on a 2dimensional 2d metal in various configurations it must be noted that in contrast to a regular 3d metal in a 2d one the image charge induced on the substrate is constrained within a one atom thick sheet this means that electron screening is expected to be less efficient than in a standard 3d metal and the features of the image charge formation in general more complex in particular we consider here the case of graphene whose technological relevance is largely established xcite most importantly for our work recently graphene has been used as template layer for the growth of organic crystals xcite it is then quite important to understand how such template layer affects the level alignment of the molecules with the metal as a model systemwe consider a simple benzene molecule adsorbed on a sheet of graphene this has been studied in the past xcite so that a good description of the equilibrium distance and the corresponding binding energy of the molecule in various configurations with respect to the graphene sheet are available furthermore a xmath2 study for some configurations exists xcite so that our calculated qp gap can be benchmarked our calculations show that the addition and removal energies decrease 
in absolute value as the molecule is brought closer to the graphene sheet such decrease can be described with a classical electrostatic model taking into account the true graphene dielectric constant as will be discussed a careful choice of the substrate unit cell is necessary to ensure the inclusion of the image charge whose extension strongly depends on the molecule substrate distance we also reveal that the presence of defects in the graphene sheet such as a stone wales one does not significantly alter the charge transfer energies in realistic situations eg at the interface between a molecular crystal and an electrode a molecule is surrounded by many others which might alter the level alignment we thus show calculations where neighboring molecules are included above below and in the same plane of the one under investigation interestingly our results suggest that the charge transfer states are weakly affected by the presence of other molecules in order to find the ground state energy of a system kohn sham dft minimises a universal energy functional xmath3 = \sum_{\sigma=\alpha,\beta}\sum_i^{N_\sigma}\langle\phi_{i\sigma}|-\tfrac{1}{2}\nabla^2|\phi_{i\sigma}\rangle + \int d\mathbf{r}\,v_n(\mathbf{r})\rho(\mathbf{r}) + J[\rho] + E_{\mathrm{xc}}[\rho_\alpha,\rho_\beta] where xmath4 xmath5 and xmath6 denote respectively the hartree exchange correlation xc and external potential energies the kohn sham orbitals xmath7 for an electron with spin xmath8 define the non interacting kinetic energy xmath9 while xmath10 is the total number of electrons with spin xmath8 the electron density is then given by xmath11 in contrast to regular dft in cdft one wants to find the ground state energy of the system subject to an additional constraint of the form xmath12 where xmath13 is a weighting function that describes the spatial extension of the constraining region and xmath14 is the number of electrons that one wants to confine in that region in our case xmath15 is set to 1 inside a specified region and zero elsewhere in order to minimise xmath16 subject to the constraint we introduce a lagrange multiplier xmath17 and define the constrained functional xcite xmath18 = E[\rho] + V_{\mathrm{c}}\left(\sum_\sigma\int w_{\mathrm{c}}^{\sigma}(\mathbf{r})\rho_\sigma(\mathbf{r})\,d\mathbf{r} - N_{\mathrm{c}}\right) now the task is that of finding the stationary point of xmath19 under the normalization condition for the kohn sham orbitals this leads to a new set of kohn sham equations xmath20\,\phi_{i\sigma} = \epsilon_{i\sigma}\phi_{i\sigma} where xmath21 is the exchange and correlation potential equation equ4 does not compute xmath17 which remains a parameter however for each value of xmath17 it produces a unique set of orbitals corresponding to the minimum energy density in this sense we can treat xmath19 as a functional of xmath17 only it can be proved that xmath19 has only one stationary point with respect to xmath17 where it is maximized xcite most importantly the stationary point satisfies the constraint one can then design the following procedure to find the stationary point of xmath19 i start with an initial guess for xmath22 and xmath17 and solve eq equ4 ii update xmath17 until the constraint eq equ2 is satisfied iii start over with the new xmath17 and a new set of xmath23s here we use cdft to calculate the charge transfer energy between a benzene molecule and a graphene sheet for any given molecule to substrate distance xmath24 we need to perform three different calculations 1 a regular dft calculation in order to determine the ground state total energy xmath25 and the amount of charge on each subsystem ie on the molecule and on the graphene
sheet 2 a cdft calculation with the constraint that the graphene sheet contains one extra electron andthe molecule contains one hole this gives the energy xmath26 3 a cdft calculation with the constraint that the graphene sheet contains one extra hole and the molecule one extra electron this gives the energy xmath27 the charge transfer energy for removing an electron from the molecule and placing it on the graphene sheet is then xmath28 similarly that for the transfer of an electron from the graphene sheet to the molecule is xmath29 since in each run the cell remains charge neutral there is no need here to apply any additional corrections however we have to keep in mind that this method is best used when the two subsystems are well separated so that the amount of charge localized on each subsystem is a well defined quantity in our calculationswe use the cdft implementation xcite for the popular dft package siestaxcite which adopts a basis set formed by a linear combination of atomic orbitals lcao the constrain is introduced in the form of a projection over a specified set of basis orbitals and in particular use the lowdin projection scheme throughout this workwe adopt double zeta polarized basis set with an energy cutoff of 002 ry the calculations are done with norm conserving pseudopotential and the lda is the exchange correlation functional of choice a mesh cutoff of 300 ry has been used for the real space grid we impose periodic boundary conditions with different cell sizes and the xmath30space grid is varied in accordance with the size of the unit cell for instance an in plane 5xmath315 xmath30grid has been used for a 13xmath3113 graphene supercell we begin this section with a discussion on the equilibrium distance for a benzene molecule adsorbed on graphene this is obtained by simply minimizing the total energy difference xmath32 where xmath33 is the total energy for the cell containing benzene on graphene while xmath34 xmath35 is the total energy of the same cell when only the benzene graphene is present this minimization is performed for two different orientations of the benzene molecule with respect to the graphene sheet the hollow h configuration in which all the carbon atoms of the benzene ring are placed exactly above the carbon atoms of graphene and the stack s configuration in which alternate carbon atoms of the benzene molecule are placed directly above carbon atoms of the graphene sheet see fig fig figure1a b for the h configurationwe find an equilibrium distance of 34 while for the s one this becomes 325 these results are in fair agreement with another lda theoretical study xcite predicting 34 and 317 respectively for for the h and s orientations note that a more precise evaluation of such distances requires the use of van der waals corrected functionals this exercise however is outside the scope of our work and here we just wish to establish that the equilibrium distance is large enough for our constrain to remain well defined it can also be noted that the equilibrium distance of 36 obtained with a vdw df study xcite is not very different from our lda result for different unit cell sizes of graphene sheet the results are presented for two different molecule to graphene distances 34 and 68 scaledwidth450 we then study the dependence of the charge transfer energies on the size of the graphene unit cell used this is achieved by looking at the charge transfer gap xmath36 as a function of the unit cell size at various molecule to graphene distances see fig fig figure2 when the 
molecule is very close to the graphene sheet after transferring an electron the image charge is strongly attracted by the oppositely charged molecule and thereby remains highly localized however as the molecule moves away from the substrate the attraction reduces since the coulomb potential decays with distance resulting in a delocalization of the image charge this will eventually spread uniformly all over the graphene sheet in the limit of an infinite distance if the unit cell is too small the image charge will be artificially over confined resulting in an overestimation of xmath37 and xmath38 and as a consequence of the charge transfer energies this effect can be clearly seen in fig fig figure2 where we display the variation of the charge transfer energies as a function of the cell size clearly for the shorter distance 34 corresponding to the average equilibrium distance the energy gap converges for supercells of about 10xmath3110 10xmath3110 graphene primitive cells at the larger distance of 68 the same convergence is achieved for a 13xmath3113 supercell next we compute the charge transfer energies as a function of the distance between the sheet and the molecule in order to compare our results with the gap expected in the limit of an infinite distance we need to evaluate first the ionization potential xmath39 and the electron affinity xmath40 of the isolated benzene molecule this is also obtained in terms of total energy differences between the neutral and the positively and negatively charged molecule namely with the xmath41scf method this returns a quasiparticle energy gap xmath42 of 1102 ev in good agreement within 45 with the experimental value xcite likewise we also determine the fermi level xmath43 of graphene which is found to be 445 ev in fig fig figure3a we show the change in the charge transfer energy gap with the distance of the benzene from the graphene sheet for the h configuration as expected when the molecule is close to the surface there is a considerably large attraction between the image charge and the opposite charge excess on the molecule resulting in an additional stabilization of the system and a reduction in magnitude of xmath44 and xmath37 hence in such case the charge transfer energies have a reduced magnitude and the charge transfer gap is smaller than that in the gas phase then as the molecule moves away from the graphene sheet the charge transfer energies increase and so does the charge transfer energy gap until it eventually reaches the value corresponding to the homo lumo gap of the isolated molecule in the limit of an infinite distance in figs fig figure3b c d and e we show the excess charge density xmath45 in different parts of the system after transferring one electron for two different molecule to graphene distances the excess charge density xmath45 is defined as xmath46 where xmath47 and xmath48 are respectively the charge densities of the system before and after the charge transfer thus the portion of xmath45 localized on the graphene sheet effectively corresponds to the image charge profile clearly due to the stronger coulomb attraction the image charge is more localized for xmath49 than for xmath50 at equilibrium for the s configuration xmath51 the charge transfer energy gap is calculated to be 891 ev which is in good agreement within 4 with the gap obtained by xmath52 xcite in table table configuration for the purpose of comparison we have listed the charge transfer energies and charge transfer gaps for two different heights 34 and 68 and in 
different configurations the most notable feature is that for the case of a pristine graphene substrate the specific absorption site plays little role in determining the charge transfer levels alignment in general actual graphene samples always display lattice imperfections xcite in order to determine the effect of such structural defects on the ct energies we consider a reference system where a stone wales sw defect in which a single c c bond is rotated by 90xmath53 is present in the graphene sheet we have then calculated xmath54 for two different positions of the molecule with respect to the defect on the sheet namely the xmath55 position in which the molecule is placed right above the defect and the xmath56 position in which it is placed above the sheet far from the defect see fig fig figure1 our findings are listed in tab table configuration where we report the charge transfer energies for both the configurations assuming the molecule is kept at the same distance from the graphene sheet from the tableit is evident that the structural change in graphene due to presence of such defect does not alter the charge transfer energies of the molecule this is because the image charge distribution on graphene is little affected by presence of the sw defect in addition the density of states dos of graphene remains almost completely unchanged near its fermi energy after introducing such defect as can be seen in fig figure4 which shows that the partial density of states pdos of the atoms forming the sw defect has no significant presence near the fermi level thus after the charge transfer the electron added to or removed from the graphene sheet has the same energy that it would have in the absence of the defect ie it is subtracted added from a region of the dos where there is no contribution from the sw defect in this context it is noteworthy that a xmath52 study xcite has concluded that altering the structure of pristine graphene by introducing dopant which raises the fermi level of graphene by 1 ev also has minor effect on the qp gap of benzene reducing it by less than 3 xmath57 xmath58 and xmath59 for various configurations of a benzene molecule on pristine and defective graphene h and s denote adsorption of benzene on graphene inthe hollow and stack configuration respectively xmath55 and xmath56 correspond to adsorption on graphene with sw defect with the former corresponding to adsorption exactly on top of the defect and the latter corresponding to adsorption away from the site of the defect the configurations xmath60 and xmath61 both correspond to adsorption of two benzene molecules in hollow configuration one at height 34 and another at a height 68 while in xmath60 the ct is calculated for the lower molecule in xmath61 the ct is calculated for the upper one finally xmath62 represents the case in which we have a layer of non overlapping benzenes adsorbed on graphene and one is interested in calculating the ct energy for one of them which is placed in the hollow configuration colsoptionsheader in real interfaces between organic molecules and a substrate molecules usually are not found isolated but in proximity to others it is then interesting to investigate the effects that the presence of other benzene molecules produce of the charge transfer energies of a given one to this endwe select three representative configurations in the first one xmath60 the graphene sheet is decorated with two benzene molecules one at 34 while the other is placed above the first at 68 from the graphene plane we then 
calculate the charge transfer energies of the middle benzene the one at 34 from the sheet the excess charge on different parts of the system image charge after transferring one electron to the sheet is displayed in fig fig hea and fig fig heb the second configuration xmath61 is identical to the first one but now we calculate the charge transfer energies of the molecule which is farther away from the graphene sheet namely at a distance of 68 for this configuration the excess charge after a similar charge transferis shown in fig fig hec and fig fig hed in the third configuration xmath62 we arrange multiple benzene molecules in the same plane the molecules are in close proximity with each other although their atomic orbitals do not overlap charge transfer energies are then calculated with respect to one benzene molecule keeping the others neutral and an isovalue plot for similar charge transfer is shown in fig fig hee and fig fig hef the charge transfer energies calculated for these three configurations are shown in tab table configuration if one compares configurations where the molecule is kept at the same distance from the graphene plane such as the case of hxmath49 xmath60 and xmath62 or of hxmath50 and xmath61 it appears clear that the presence of other molecules has some effect on the charge transfer energies in particular we observe than when other molecules are present both xmath57 and xmath58 get more shallow ie their absolute values is reduced interestingly the relative reduction of xmath57 and xmath58 depends on the details of the positions of the other molecules eg it is different for xmath60 and xmath62 but the resulting renormalization of the homo lumo gap is essentially identical about 25 mev when going from hxmath49 to either xmath60 or xmath62 this behaviour can be explained in terms of a simple classical effect consider the case of xmath60 for example when one transfers an electron from the middle benzene to the graphene sheet the second benzene molecule placed above the first remains neutral but develops an induced charge dipole the moment of such dipole points away from the charged benzene and lowers the associated electrostatic potential importantly also the potential of graphene will be lowered however since the potential generated by an electrical dipole is inversely proportional to the square of distance the effect remains more pronounced at the site of the middle benzene than at that of the graphene sheet a similar effect can be observed for an electron transfer from the graphene sheet to the middle benzene and for the xmath62 configuration in the case of xmath61 the system comprising the topmost benzene from which we transfer charge and the graphene plane can be thought of as a parallel plate capacitor the work xmath64 done to transfer a charge xmath65 from one plate to the other is xmath66 where xmath67 is the capacitance which in turn is proportional to the dielectric constant of the medium enclosed between the plates hence at variance with the case of hd68 the space in between the molecule and the graphene sheet is occupied by a molecule with finite dielectric constant and not by vacuum this results in a reduction of xmath64 so that the charge transfer energies for xmath61 are smaller than those for hd68 finally we show that our calculated energy levels alignment can be obtained from a classical electrostatic model if one approximates the transferred electron as a point charge and the substrate where the image charge forms as an infinite sheet of relative 
permittivity xmath68 then for a completely planar distribution of the bound surface charge the work done by the induced charge to take an electron from the position of the molecule at a distance xmath24 to infinity is xmath69 hence this electrostatic approximation predicts that the presence of the substrate lowers the xmath70 of the molecule by xmath71 with respect to the corresponding gas phase value however the actual image charge is not strictly confined to a 2d plane but instead spills out over the graphene surface we can account for such non planar image charge distribution by introducing a small modification to the above expression xcite and write the lumo at a height xmath24 as xmath72 where xmath73 is the distance between the centre of mass of the image charge and the substrate plane and xmath74 is the gas phase lumo the electron affinity a similar argument for the homo level shows an elevation of same magnitude due to the presence of the substrate in fig fig classicalplot we plot the charge transfer energies and show that they compare quite well with the curves predicted by the classical model by using an effective dielectric constant of 24 for graphene xcite when drawing the classical curves we have used an approximate value xmath75 which provides an excellent estimate for smaller distances xmath24 it is worth noting that for larger distances though the actual value of xmath73 should be much less the overall effect of xmath73 is very small and almost negligible in the same graph we have also plotted the classical curves corresponding to benzene on a perfectly metallic xmath76 surface this shows that the level renormalization of benzene for physisorption on graphene is significantly different from that on a perfect metal owing to the different screening properties of graphene circles and xmath77 squares calculated for different molecule to substrate distances the cdft results are seen to agree well with the classically calculated curve given in red the horizontal lines mark the same quantities for isolated an molecule gas phase quantities the continuous black line shows the position of the classically calculated level curve for adsorption on a perfect metal xmath76scaledwidth440 we have used cdft as implemented in the siesta code to calculate the energy levels alignment of a benzene molecule adsorbed on a graphene sheet in generalthe charge transfer energies depend on the distance between the molecule and the graphene sheet and this is a consequence of the image charge formation such an effect can not be described by standard kohn sham dft but it is well captured by cdft which translates a quasi particle problem into an energy differences one with cdftwe have simulated the energy level renormalization as a function of the molecule to graphene distance these agree well with experimental data for an infinite separation where the charge transfer energies coincide with the ionization potential and the electron affinity furthermore an excellent agreement is also obtained with xmath0 calculations at typical bonding distances since cdft is computationally inexpensive we have been able to study the effects arising from bonding the molecule to a graphene structural defect and from the presence of other benzene molecules we have found that a stone wales defect does not affect the energy level alignment since its electronic density of state has little amplitude at the graphene fermi level in contrast the charge transfer energies change when more then a molecule is present all our results can 
be easily rationalized by a simple classical electrostatic model describing the interaction of a point like charge and a uniform planar charge distribution this at variance to the case of a perfect metal takes into account the finite dielectric constant of graphene this work is supported by the european research council quest project computational resources have been provided by the supercomputer facilities at the trinity center for high performance computing tchpc and at the irish center for high end computing ichec additionally the authors would like to thank dr ivan rungger and dr a m souza for helpful discussions
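as a concrete illustration of the classical image charge model invoked in the conclusion above, the sketch below assumes the textbook expression for a point charge at height d above a dielectric half space, W(d) = (e^2/16 pi eps0) (eps-1)/(eps+1)/(d-d0); the article uses a slightly modified form to account for the non planar image charge distribution, so the image plane offset d0 and the gas phase levels used as defaults below are placeholder assumptions, while the effective dielectric constant 2.4 is the value quoted for graphene.

```python
import numpy as np

E2_4PIEPS0 = 14.3996  # e^2 / (4 pi eps0) in eV * angstrom

def image_shift(d, eps=2.4, d0=0.5):
    """classical stabilisation energy W(d), in eV, for a point charge at height
    d (angstrom) above a dielectric sheet of effective permittivity eps;
    d0 shifts the effective image plane slightly above the atomic plane (assumed)."""
    return 0.25 * E2_4PIEPS0 * (eps - 1.0) / (eps + 1.0) / (d - d0)

def renormalised_levels(d, homo_gas=-9.2, lumo_gas=1.1, eps=2.4, d0=0.5):
    """the homo is raised and the lumo lowered by the same amount W(d), so the
    molecular gap shrinks by 2*W(d) as the molecule approaches the sheet;
    the gas phase levels given as defaults are placeholders, not fitted values."""
    w = image_shift(d, eps, d0)
    return homo_gas + w, lumo_gas - w

# example: level renormalisation between 3.4 and 10 angstrom
for d in np.linspace(3.4, 10.0, 4):
    homo, lumo = renormalised_levels(d)
    print(f"d = {d:5.2f} A  homo = {homo:6.2f} eV  lumo = {lumo:6.2f} eV")
```

for a perfect metal the same expression applies with the (eps-1)/(eps+1) factor set to one, which reproduces the qualitatively stronger renormalisation discussed above for adsorption on a metallic surface.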
constrained density functional theory cdft is used to evaluate the energy level alignment of a benzene molecule as it approaches a graphene sheet within cdft the problem is conveniently mapped onto evaluating total energy differences between different charge separated states rather than onto determining a quasi particle spectrum we demonstrate that the simple local density approximation provides a good description of the level alignment along the entire binding curve with excellent agreement with experiment at infinite separation and with xmath0 calculations close to the bonding distance the method also allows us to explore the effects due to the presence of graphene structural defects and of multiple molecules in general all our results can be reproduced by a classical image charge model taking into account the finite dielectric constant of graphene
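a schematic sketch of the outer self consistency loop on which the cdft scheme summarized above relies: the lagrange multiplier multiplying the constraining potential is adjusted until the charge enclosed in the constraining region equals the target value. this is not the siesta implementation used in the article; the scf routine below is a toy stand in that any constrained scf driver could replace, and the bracket and tolerance are arbitrary assumptions.

```python
from scipy.optimize import brentq

def solve_cdft(scf_at_vc, n_target, vc_bracket=(-5.0, 5.0), tol=1e-4):
    """outer cdft loop: find the lagrange multiplier vc such that the number of
    electrons enclosed by the constraining region equals n_target.
    scf_at_vc(vc) must run a constrained scf cycle with the potential vc*w_c
    added to the kohn-sham potential and return (total_energy, n_enclosed)."""
    def constraint_error(vc):
        _, n_enclosed = scf_at_vc(vc)
        return n_enclosed - n_target
    vc_star = brentq(constraint_error, *vc_bracket, xtol=tol)
    energy, _ = scf_at_vc(vc_star)
    return vc_star, energy

def toy_scf(vc, n0=30.0, response=0.8):
    """toy stand-in for a real constrained scf cycle: the enclosed charge
    responds linearly to the constraining potential (a positive vc pushes
    electrons out of the region); the energy expression is schematic only."""
    n_enclosed = n0 - response * vc
    total_energy = -100.0 + 0.5 * response * vc ** 2
    return total_energy, n_enclosed

if __name__ == "__main__":
    vc, e = solve_cdft(toy_scf, n_target=31.0)   # confine one extra electron
    print(vc, e)
```

with the constrained total energies in hand, the charge transfer energies of the article are simple differences, for instance the cost of moving one electron from the molecule to the sheet is the energy of that charge separated state minus the energy of the locally neutral one, and the sum of the two charge transfer energies gives the charge transfer gap discussed in the text.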
introduction method results and discussion conclusion acknowledgment
there are many problems concerning electronic structure where attention is focussed on a small region of a larger system at surfaces or defects in crystals being perhaps the most common let us call this region i figure fig1 and the rest of the system region ii although not of primary interest region ii can not be ignored since in general the electron wave functions in i will be sensitive to the contents of region ii some time ago inglesfield xcite derived an embedding scheme which enables the single particle schrdinger equation to be solved explicitly only in region i the influence of region ii is taken into account exactly by adding an energy dependent non local potential to the hamiltonian for region i which constrains the solutions in i to match onto solutions in ii this embedding method has been developed into a powerful tool most notably for surface electronic structure problems xcite where it has found widespread application especially to situations where an accurate description of the spectrum of electron states is necessary examples include studies of image states xcite surface states at metals surfaces xcite static and dynamic screening xcite atomic adsorption and scattering at surfaces xcite studies of surface optical response xcite and field emission xcite recent applications to transport problems have also been described xcite for a review of the embedding methodsee inglesfield xcite 80 mm in the case of materials containing heavier elements relativistic effects can be significant xcite and lead to important deviations from the electronic structure as predicted by the schrdinger equation shifts in inner core levels of 5d elements are typically several 100 or 1000 ev valence bands shifts are on the ev scale and spin orbit splitting is often measured in tenths of ev even ignoring the concomitant changes in electron wave functions these shifts can reorder levels and so affect calculated densities fundamental to the determination of ground state properties within the density functional framework xcite for this reason most of the conventional electronic structure techniques developed for accurately solving the single particle schrdinger equation in solids have subsequently been modified to deal with the dirac equation including the relativistic augmented plane wave method xcite relativistic linear muffin tin orbital method xcite relativistic augmented spherical wave method xcite and the relativistic multiple scattering method xcite and each has subsequently been used in studying a diverse range of problems the last method alone has formed the basis of calculations of photoemission xcite magnetocrystalline anisotropy xcite hyperfine interactions xcite and magnetotransport xcite amongst other topics inglesfield s embedding method has particular advantages that encourage its extension to the relativistic case it permits the inclusion of extended substrates for surface and interface calculations enables the study of isolated point defects in solids and being a basis set technique is highly flexible and permits full potential studies with relative ease at surfaces extended substrates as against the use of the supercell or thin film approximation in which the crystal is approximated by a small number of layers typically 5 7 enable the proper distinction between surface states resonances and the continuum of bulk states xcite the behaviour of the w110 surface xcite where the addition of half a monolayer of li is observed to increase the spin orbit splitting of a surface state by xmath2 ev 
resulting in fermi surface crossings separated by xmath3 of the brillouin zone dimension typifies a type of problem a relativistic embedding scheme could address indeed each of the topics mentioned at the end of the previous paragraph are relevant at surfaces andor interfaces and could be usefully investigated within a relativistic embedding framework in this paperwe develop an embedding scheme for the dirac equation that parallels inglesfield s scheme for the schrdinger equation inglesfield s starting point is the expectation value of the hamiltonian using a trial wave function which is continuous in amplitude but discontinuous in derivative across the surface xmath1 separating i and ii the first order nature of the dirac equation precludes the use of a similar trial function instead in the following section we use a trial function in which the large component is continuous and the small component discontinuous across xmath1 continuity in the small component is restored when the resulting equations are solved exactly using the green function for regionii we are able to derive an expression for the expectation value purely in terms of the trial function in i in section section app the application of the method is illustrated by calculating the eigenstates of a hydrogen atom within a cavity and in section section green we determine the green function for the embedded region section section monolayer briefly illustrates the method applied to a sandwich structure where relativistic effects are marked we conclude with a brief summary and discussion in this section we consider region i joined onto region ii figure fig1 and derive a variational principle for a trial wave function xmath4 defined explicitly only within region i we are primarily interested in the positive energy solutions of the dirac equation xcite and so we refer to the upper and lower spinors of the dirac bi spinor solutions as the large and small components of the wave function respectively we notionally extend xmath4 into ii as xmath5 an exact solution of the dirac equation at some energy xmath6 with the large components of xmath4 and xmath5 xmath7 and xmath8 matching on the surface xmath1 separating i and ii but with no constraint upon the small components xmath9 and xmath10 figure fig2 the expectation value for the energy xmath11 is then xmath12 langlevarphivarphiranglemathrmi langlechichiranglemathrmii labeleqn exp where xmath13 for clarity we omit the interaction xmath14 which appears in the relativistic density functional theory xcite neglecting orbital and displacement currents where xmath15 is a spin only effective magnetic field containing an external and exchange correlation contribution its inclusion has no consequences for the derivation the first two terms in the numerator are the expectation value of the hamiltonian through regions i and ii and the third the contribution due to the discontinuity in the small component of the wave function on xmath1 in this and the following surface normals are directed from i to ii 60 mm we eliminate reference to xmath5 by introducing two relations firstly for xmath16 xmath5 satisfies the dirac equation at energy xmath6 xmath17chi0 labeleqn dirii and differentiating with respect to xmath6 the energy derivative of xmath5 xmath18 satisfies xmath19dotchichiqquad birinmathrmii labeleqn eder multiplying the hermitian conjugate of the first equation by xmath20 from the right multiplying the second from the left by xmath21 subtracting and integrating over region ii gives a relation 
between the normalisation of xmath5 in ii and the amplitude on xmath1 xmath22 we have assumed that xmath5 vanishes sufficiently strongly at infinity for the second relationwe introduce the green function resolvant xmath23 corresponding to equation eqn dirii xmath24gdeltabirbir qquad birbirin mathrmii labeleqn gf multiplying the hermitian conjugate of this equation by xmath5 from the right and subtracting xmath25 times equation eqn dirii integrating over region ii and then using the reciprocity of the green function gives xmath26 we see that the green function relates the amplitude of the wave function on xmath1 to the amplitude at any point within ii in particular we can obtain a relation between the large and small components of xmath5 on s writing the xmath27 green function as xmath28 where each entry is a xmath29 matrix substituting into equation eqn chis and rearranging the two equations coupling the small and large components of xmath5 gives xmath30 where xmath31 it follows from eqn chis that the green functions in eqn gam are the limiting forms of xmath32 as xmath33 from within ii equations eqn norm and eqn cgc are the desired results that enable us to express the expectation value xmath11 in eqn exp in terms of xmath4 alone after substitution and use of the continuity of the large components xmath34 on xmath1 we obtain xmath35 langlevarphivarphiranglemathrmi c2hbar2 ints rmdbirscdot varphirm ldag bsigma ints rmdbirs cdot dotgamma bsigma varphirm l labeleqn exp2 this is an expression for the expectation value of the energy xmath11 given purely in terms of the trial function xmath4 in region i and on the surface xmath1 with all details of region ii entering via xmath36 and its energy derivative following the convention in the non relativistic embedding scheme we shall refer to xmath36 as the embedding potential to see what this variational principle means in practice we consider variations in xmath37 whereby xmath38 langlevarphivarphiranglemathrmi c2hbar2ints rmdbirscdot varphirm ldag bsigma ints rmdbirs cdot dotgamma bsigma varphirm l nonumber labeleqn de1endaligned so that solutions xmath4 stationary with respect to arbitrary variations xmath39 satisfy xmath40 the first expression indicates xmath4 is a solution of the dirac equation at energy xmath11 in region i comparing the second with eqn cgc shows that xmath4 also possesses the correct relationship between large and small components on xmath1 the surface separating i and ii to match onto solutions in ii the term xmath41 provides a first order correction to xmath42 so that the boundary condition is appropriate for energy xmath11 in practice expression eqn exp2 may be used to obtain solutions of the dirac hamiltonian by inserting a suitably parameterised trial function and varying the parameters to obtain a stationary solution this is conveniently achieved by expanding the trial solution in a finite basis of separate large and small component spinors xmath43 sumn1nrm s arm sn leftbeginarrayc0 psirm snbirendarrayright leftbeginarraycc bpsirm lbir 0 0 bpsirm sbir endarrayright left beginarrayc biarm l biarm s endarray right the matrix in the final expression is xmath44 by xmath45 and the column vector contains the xmath45 coefficients substituting into eqn exp2 we find states xmath46 that are stationary with respect to variations in the expansion coefficients xmath47 are then given by the eigenstates of a generalised eigenvalue problem of the form xmath48 left beginarrayc biarm l biarm s endarray right w left beginarraycc orm ll 0 0 
orm ss endarray right left beginarrayc biarm l biarm s endarray right labeleqn a1 where xmath49nn inti psirm lndagbir leftvbirmc2 right psirm lnbir rmdbir nonumber fl phantomlefthrm llrightnnc2hbar2ints rmdbirscdot psirm lndagbirsbsigma ints rmdbirs cdot left gammabirsbirsw w dotgammabirsbirswright bsigmapsirm lnbirs labeleqn h2 fl lefthrm lsrightnn inti psirm lndagbir cbsigmacdot widehatbip psirm snbir rmdbirrmi chbarints rmdbirscdot psirm lndagbirsbsigmapsirm snbirslabeleqn h3 fl lefthrm slrightnn inti psirm sndagbir cbsigmacdotwidehatbip psirm lnbir rmdbir labeleqn h4 fl lefthrm ssrightnn inti psirm sndagbir leftvbirmc2 right psirm snbir rmdbir labeleqn h5 fl leftorm llrightnn inti psirm lndagbir psirm snbir rmdbirnonumber fl phantomleftorm llrightnn c2hbar2ints rmdbirscdot psirm lndagbirsbsigma ints rmdbirs cdot dotgammabirsbirsw bsigmapsirm lnbirs labeleqn h6 fl leftorm ssrightnn inti psirm sndagbir psirm snbir rmdbirlabeleqn h7endaligned of course the spectrum of the dirac hamiltonian is unbounded below and care must be taken to prevent solutions collapsing to negative energies this can be avoided through the use of a kinetically balanced basis xcite in which there is a one to one relationship between large and small component spinors xmath50 and where the small component spinors are given by xmath51 the upper half of the spectrum of the xmath52 eigenstates of eqn a1 then provide approximations to the spectrum of electronic states 80 mm to illustrate the application of the relativistic embedding scheme we consider a model problem of a hydrogen atom within a spherical cavity finding bound states of the dirac equation corresponding to the potential illustrated in figure fig3 xmath53 where xmath54 and xmath55 we choose this model as the bound states may also be found straightforwardly by alternative methods region i the region to be treated explicitly is the sphere of radius xmath56 centered on xmath57 the external region ii where xmath58 is replaced by an embedding potential acting on the surface of the sphere the value of the embedding potential is most readily evaluated from equation eqn cgc a general solution to the dirac equation at some energy xmath6 in regionii and satisfying the appropriate boundary conditions is xcite xmath59 where xmath60 xmath61 xmath62 a spin angular function xmath63 a modified spherical bessel function of the third kind xcite xmath64 xmath65 and xmath66 the spherical symmetry of region ii means the the embedding potential xmath36 may be expanded on xmath1 as xmath67 and substituting eqn a2b and eqn a2 into eqn cgc leads to xmath68 using eqn gam with the green function for constant potential xmath69 xmath70 gives the same result but after rather more involved manipulations xmath71 is a modified spherical bessel function of the first kind because of the spherical symmetry we can determine separately states with a given angular character xmath72 using as a basis set for the large component spinors xmath73 so that the small component spinors ensuring kinetic balance are xmath74 er labeleqn bas2 the matrix elements become xmath75nn int0r gnrleftfraclambdarmc2right gnr rmd rnonumber hbar2c2r2gnrgnr leftgammakappawwdotgammakappawright fl lefthlambdarm ssrightnn int0rfnkapparleftfraclambdarmc2rightfnkappar rmd r fl lefthlambdarm lsrightnnhbar cint0r gnrleftfracrmd fnkapparrmd r frackapparfnkapparright rmd r hbar c gnrfnkappar fl lefthlambdarm slrightnnhbar cint0r fnkapparleftfracrmd gnrrmd r frackappargnrright rmd r fl leftolambdarm llrightnn int0r gnrgnr rmd r 
hbar2c2r2gnrgnrdotgammakappaw fl leftolambdarm ssrightnn int0r fnkapparfnkappar rmd rendaligned the eigenvalues only depend upon the quantum number xmath76 in table table1 the lowest two eigenvalues of xmath77 symmetry corresponding to the xmath78 and xmath79 of free hydrogen are shown as a function of basis set size and for different values of the energy xmath6 at which the embedding potential is evaluated for the case xmath80 xmath81 for comparison also given are the values found by matching the external solution eqn a2 to the regular internal solution which can be expressed in terms of confluent hypergeometric functionsxcite fora given fixed xmath6 the eigenvalues converge from above to values that are equal or above the exact values the further xmath6 lies from the eigenvalue the larger the difference between the limiting value for large basis sets and the correct value however the influence of the xmath82 terms in eqn exp2 means the error is relatively small when xmath83 the lowest eigenvalue found with xmath84 is xmath85 ha and in error by only 00000044 ha a factor xmath86 smaller than the error in xmath6 cccc xmath87 xmath88 xmath83 xmath89 2 04111620 16980995 04111527 16949300 04111624 16896482 4 04451482 09129418 04451439 09126817 04396204 09714775 6 04455519 08914789 04455477 08912219 04455520 08910268 8 04455532 08912708 04455488 08910141 04455532 08908194 exact 04455532 08908194 04455532 08908194 04455532 08908194 differentiating eqn exp2 with respect to the trial energy xmath6 shows the expectation value is stationary at xmath90 in this case xmath11 is given by the solutions of xmath91 langlevarphivarphiranglemathrmi labeleqn exp3 eigenfunctions xmath4 solving this equation satisfy the dirac equation within i and the relationship between small and large components on xmath1 eqn de2 is exact the final column in table table1 shows the lowest two positive energy eigenvalues of eqn exp3 again as a function of basis set size the eigenvalues again converge from above and by xmath84 reproduce the exact values by at least 7 significant figures it is worth noting that with this particular basis set increasing xmath87 much further leads to some numerical difficulties due to overcompleteness for more accuratework a more suitable basis set should be used it should also be noted that conventional finite basis set calculations using a basis satisfying kinetic balance can given eigenvalues that lie below exact limiting values by an amount of order xmath92 xcite and similar behaviour is expected in this embedding scheme most practical applications of the schrdinger embedding scheme have actually used the green function of the embedded system this is a more convenient quantity when dealing with systems where the spectrum is continuous such as at surfaces or defects in solids we therefore consider the green function for the embedded dirac system differentiating eqn exp2 with respect to xmath6 shows xmath11 is stationary when xmath90 as would be expected in this casestationary solutions satisfy the embedded dirac equation xmath93 where introducing xmath94 the component of xmath95 in the direction normal to the surface xmath1 from i to ii at xmath96 the additional term xmath97 enforcing the embedding is xmath98 labeleqn gf2endaligned the corresponding green function satisfies xmath99 for xmath100 a similar line of argument to that given by inglesfield xcite for the embedded schrdinger equation shows that this green function is identical for xmath100 to the green functions xmath101 for the entire 
system ixmath102ii for simplicity assuming ixmath102ii constitute a finite system so that the spectrum is discrete the green function xmath101 is given by xmath103 where xmath104 is the eigenvalue corresponding to eigenstate xmath105 of the entire system normalised to unity over ixmath102ii for a given xmath11 the green function solving eqn gf3 can be expanded in terms of the eigenstates xmath106 of the corresponding homogeneous equation xmath107 normalised to unity over i as xmath108 clearly xmath109 has poles at xmath110 at these energies eqn gf5 becomes the exact embedded dirac equation eqn gf1 so as we have seen the poles will occur at eigenstates of the entire system and the spectrum of xmath109 and xmath101 coincide it remains to show the poles of xmath109 have the appropriate weight the residue of xmath109 at xmath111 is xmath112 the second term in the denominator is precisely the additional factor necessary to correctly normalise the states see eqn norm eqn cgc so that xmath113 the residues of the green function of the embedded system and those of the entire system are identical hence the two green functions are identical for xmath114 i for practical calculations the green function can be expanded using a double basis of separate large and small component spinors xmath115gw leftbeginarraycc bpsirm lbir 0 0 bpsirm sbir endarrayrightdag the matrix elements of the matrix of coefficients xmath116 may be found by substituting into eqn gf3 multiplying from the right by the vector of basis functions multiplying from the left by the hermitian transpose of the vector of basis functions and integrating over region xmath117 this leads to xmath1181 where the overlap and hamiltonian matrices have their previous definitions eqn h2eqn h7 with xmath119 as an illustrationwe calculate the local density of states for the confined hydrogen model at energies above xmath120 where the spectrum is continuous integrating over the embedded region thisis given by xmath121 figure fig4 shows the xmath122 wave local density of states for xmath80 xmath123 calculated with varying number of basis functions the basis functions eqn bas1 eqn bas2 are not particularly appropriate for representing the continuum wave solutions and so convergence is only achieved using a relatively large set however the results serve to illustrate the systematic improvement that accompanies an increasing number of basis functions the local density of states shows two resonances the precursors of bound states that exist when any of xmath56 xmath124 or xmath120 are increased sufficiently as a further example one that provides a test of the relativistic embedding scheme when applied to a more challenging problem we use it to calculate the local density of states on a silver monolayer in a au001ag au001 sandwich structure using the embedding scheme only the region occupied by the ag monolayeris explicitly treated this is region i with the two au halfspaces to either side entering the calculation via embedding potentials expanded on planar surfaces then using bloch s theorem the calculation is performed within a unit cell containing one atom the full technical details will be described elsewhere but briefly the green function at two dimensional wave vector xmath125 is expanded in a set of linearised augmented relativistic plane waves we use large component basis functions xmath126omegalambdawidehatbir bir in mboxmuffin tin endarray right where xmath127 is a pauli spinor xmath128 with xmath129 a two dimensional reciprocal lattice vector and 
xmath130 xmath131 and where xmath132 exceeds the width of the embedded region ensuring variational freedom in the basis the function xmath133 is the large component of the wavefunction that satisfies the radial dirac equation for the spherically symmetric component of the potential at some pivot energy xmath134 is the energy derivative of xmath135 the matching coefficients xmath136 xmath137 ensure continuity of the basis function in amplitude and derivative at the muffin tin radius the small component basis functions are chosen to satisfy kinetic balance overlap and hamiltonian matrix elements follow directly from these basis functions the embedding potential is obtained from eqn cgc using the general expression for a wavefunction outside a surface at wave vector xmath125 this gives for the embedding potential describing the left au half space xmath138bigsigmabigsigmanonumber times exprmibikbigcdot birs exprmibikbigcdot birs varphisigmaotimes varphisigma labeleqn embpotendaligned with xmath139 the reflection matrix xmath140 is found using standard layer scattering methods xcite a similar approach may be used to obtain an embedding potential for the right half space which unlike the non relativistic case differs from that for the left half space figure fig5 compares the local density of states calculated using the relativistic embedding technique for an embedded ag monolayer using embedding potentials corresponding to au001 with that found for an au001ag au001 sandwich geometry using relativistic scattering theory xcite the same au and ag potentials has been used in each case and the local density of states found within the same muffin tin volume therefore the results obtained with the two methods should be comparable and we find that they are indistinguishable this confirms that the embedding potential eqn embpot imposes the correct variational constraint upon wave functions for the embedded ag monolayer so that they replicate the behaviour of an extended au001ag au001 sandwich structure the inset in figure fig5 shows the local density of states in the non relativistic limit xmath141 indicating the significant relativistic effects on the electronic structure which are correctly reproduced with this dirac embedding scheme we have outline above an embedding scheme for the dirac equation it enables the dirac equation to be solved within a limited region i when this region forms part of a larger system ixmath102ii region ii is replaced by an additional term added to the hamiltonian for region i and which acts on the surface xmath1 separating i and ii the embedding scheme is derived using a trial function in which continuity in the small component across xmath1 is imposed variationally expanding the wave function in a basis set of separate large and small component spinors the problem of variational collapse is avoided by using a basis satisfying kinetic balance calculating the spectrum of a confined hydrogen atom the method is shown to be stable and converge to the exact eigenvalues we have also derived the green function for the embedded hamiltonian and illustrated its use in the continuum regime of the same confined hydrogen system and an au ag au sandwich structure these are demonstration calculations future applications are likely to be to defects and surfaces of materials containing heavier typically 5xmath142 elements within the framework of density functional theory it is worthwhile to discuss further the use of a trial function that is discontinuous in the small component since such a 
wave function gives rise to a discontinuous probability density and so would normally be dismissed in quantum theory in non relativistic quantum mechanicsdiscontinuous trial functions are not permitted since they possess infinite energy however the dirac equation is first order in xmath143 and as we have seen a perfectly regular expectation value of xmath0 results exploiting this freedom the embedding scheme outlined above leads to solutions that are continuous in both large and small component only when the embedding potential xmath144 is evaluated at the same energy xmath6 as the energy xmath11 that appears in the dirac equation itself for then the relationship between small and large components on xmath1 inside equation eqn de2 and outside equation eqn cgc coincide the large components matching by construction this may be achieved for example via the iterative scheme used in connection with equation eqn exp3 and the final column of table table1 or explicitly when determining the green function as in section section green these are the methods in which the non relativistic embedding scheme has been most widely used when xmath6 and xmath11 do not coincide the solutions obtained via this embedding scheme will retain small components that are discontinuous across xmath1 this may be unacceptable for certain applications but the solutions continue to be valid approximations at least in as much as they provide estimates of the energies of the solutions of the dirac equation and so could suffice eg for interpreting spectroscopic measurements this embedding scheme places no greater emphasis on a discontinuity in the small amplitude at xmath1 than on an incorrect but continuous amplitude elsewhere within the embedded region it aims merely to optimise the energy of the state and will retain a discontinuity in the small component if in doing so it can better in terms of energy approximate the solution inside the embedded region in the non relativistic embedding schemethe discontinuous derivative of the trial function implies a probability current and electric current that is discontinuous across the embedding surface this is similarly unphysical yet numerous applications such as those cited above have demonstrated the utility and accuracy of the method indeed there have been many applications in which this scheme has been used to determine currents and or transport properties such as in relation to surface optical response xcite and electron transport in electron waveguides or through domain walls xcite the reason for the success of these calculations is that they employed schemes in which the embedding potential was evaluated at the correct energy ensuring that the derivative of the wavefunction was continuous across xmath1 in practisethere have been few calculations using the non relativistic embedding scheme in which the energies did not coincide there are a number of aspects of the method which are worthy of further consideration we started with a trial function in which by construction the large component was continuous and the small component discontinuous across the surface xmath1 dividing i and ii we could have reversed these conditions leading to a similar embedded dirac equation but with a modified embedding term the particular choice was motivated by the wish to have a theory which behaves reasonably in the limit xmath141 when the small component becomes negligible a discontinuous amplitude is not permissible in trial solutions to the schrdinger equation however the behaviour of the 
alternative formulation should be investigated perhaps in connection with this there is the question of the spectrum of negative energy solutions to which we have paid scant attention exploring the xmath141 limit it might be possible to identify how to embed a relativistic region i within a region ii treated non relativistically a 5xmath142 overlayer on a simple metal substrate might be a physical system where such a treatment is appropriate there could be benefits in terms of computational resources expended if the embedding potential could be determined within the framework of a non relativistic calculation and there might also be useful insights in terms of simple models finally in terms of implementation for realistic systems some of the novel schemes for deriving embedding potentials xcite could certainly be adapted to the relativistic case it would also be worthwhile to consider whether it is possible to use a restricted electron like basis in which the large and small component spinors are combined this is common practice in most relativistic electronic structure calculations for solids when using basis set techniques eg xcite and would result in significant computational efficiencies
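As a complement to the discussion above, the following minimal sketch (in Python, with invented matrices) illustrates the two numerical procedures the text refers to: the iterative evaluation of the embedding term at the eigenvalue energy, as used with eqn exp3, and a local density of states built from the Green function of the embedded system. None of the matrices correspond to the actual Dirac problem; the rank-one term Sigma(E), the basis dimension and all numbers are hypothetical stand-ins, and only the algorithmic structure follows the text.

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-ins for the basis-set matrices of an embedded problem: S is an
# overlap matrix, H0 the energy-independent part of the Hamiltonian, and
# Sigma(E) a schematic rank-one "embedding" term acting on a single boundary
# component of the basis.  All dimensions and numbers are hypothetical; only
# the algorithmic structure mirrors the text.
rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)                  # symmetric, positive definite overlap
B = rng.standard_normal((n, n))
H0 = 0.5 * (B + B.T)
v = np.zeros(n)
v[-1] = 1.0                                  # "surface" component of the basis

def H(E):
    """Hamiltonian including a schematic energy-dependent embedding term."""
    return H0 - (0.3 + 0.2 * E) * np.outer(v, v)

# (1) Self-consistency loop of the type used with eqn exp3: iterate the trial
# energy until it coincides with the lowest generalized eigenvalue, so that
# the embedding term is evaluated at the eigenvalue energy.
E = 0.0
for _ in range(200):
    eps = eigh(H(E), S, eigvals_only=True)[0]
    if abs(eps - E) < 1e-12:
        break
    E = eps
print("self-consistent lowest eigenvalue:", eps)

# (2) Local density of states from the Green function of the embedded system,
#     G(E) = [(E + i*eta) S - H(E)]^(-1),   n(E) = -(1/pi) Im Tr[G(E) S].
eta = 0.05
def dos(E):
    G = np.linalg.inv((E + 1j * eta) * S - H(E))
    return -np.trace(G @ S).imag / np.pi

energies = np.linspace(-3.0, 3.0, 300)
ldos = np.array([dos(w) for w in energies])
# Crude check: the integrated DOS is of the order of the number of basis
# functions (broadening and the energy dependence of Sigma make it inexact).
print("integrated DOS:", ldos.sum() * (energies[1] - energies[0]))
```

In a real calculation the matrices would of course be the large/small-component Hamiltonian and overlap matrices described in the text; the point of the sketch is only the self-consistency step and the trace construction for the density of states.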
an embedding scheme is developed for the dirac hamiltonian xmath0 dividing space into regions i and ii separated by surface xmath1 an expression is derived for the expectation value of xmath0 which makes explicit reference to a trial function defined in i alone with all details of region ii replaced by an effective potential acting on xmath1 and which is related to the green function of region ii stationary solutions provide approximations to the eigenstates of xmath0 within i the green function for the embedded hamiltonian is equal to the green function for the entire system in region i application of the method is illustrated for the problem of a hydrogen atom in a spherical cavity and an au001ag au001 sandwich structure using basis sets that satisfy kinetic balance
introduction, embedding scheme, model application, green function, application to an embedded monolayer, summary and discussion
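Before moving on, a familiar non-relativistic analogue may help fix ideas about the effective surface potential that replaces region II in the scheme above: the self-energy of a semi-infinite tight-binding lead plays exactly this role for a one-dimensional chain, with everything outside the treated region entering through a single complex, energy-dependent boundary term. The closed form used below is the standard textbook result for that toy model, not the relativistic reflection-matrix construction of the paper; the hopping t, site energy eps0 and broadening eta are hypothetical numbers.

```python
import numpy as np

t = 1.0            # lead hopping amplitude (hypothetical)
eta = 1e-6         # small imaginary part selecting the retarded branch

def lead_self_energy(E):
    """Sigma(E) = t^2 * g_surf(E), with g_surf the retarded surface Green
    function of a semi-infinite chain with zero site energy and hopping t:
    g_surf(z) = (z - i*sqrt(4 t^2 - z^2)) / (2 t^2),  z = E + i*eta."""
    z = E + 1j * eta
    g_surf = (z - 1j * np.sqrt(4.0 * t**2 - z**2)) / (2.0 * t**2)
    return t**2 * g_surf

# "Embedded region" = a single extra site with on-site energy eps0 coupled to
# the lead; its Green function needs only Sigma(E), never the infinite chain.
eps0 = 0.5
def embedded_site_dos(E):
    g = 1.0 / (E + 1j * eta - eps0 - lead_self_energy(E))
    return -g.imag / np.pi

for E in (-1.5, 0.0, 0.5, 1.5, 3.0):
    s = lead_self_energy(E)
    print(f"E = {E:+.1f}:  Sigma = {s.real:+.3f}{s.imag:+.3f}i"
          f"   site DOS = {embedded_site_dos(E):.3f}")
```

Inside the lead band the self-energy acquires a negative imaginary part (escape into the lead), while outside the band it is purely real; broadly analogous energy dependence is what a surface embedding potential encodes for a substrate half-space.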
among the large number of theoretical models proposed to either solve the hierarchy problem andor explain dark matter with a new stable particle the minimal supersymmetric model mssm remains one of the favourite supersymmetry not only provides a solution to both these problems but also predicts new physics around the tev scale the main drawback of the mssm apart from the lack of evidence for supersymmetric particles is the large number of unknown parameters most of which describe the symmetry breaking sector with the improved sensitivities of dark matter searches in astroparticle experiments xcite the precise determination of the dm relic density from cosmology xcite the latest results from the tevatron xcite and the precision measurements large regions of the parameter space of the supersymmetric models are being probed this will continue in the near future with a number of direct and indirect detection experiments improving their sensitivities xcite and most importantly with the lhc starting to take data the lhc running at the full design energy of 14tev offers good prospects for producing coloured supersymmetric particles lighter than 2 3 tev for discovering one or more higgs scalars xcite and for measuring the rare processes in the flavour sector in particular in b physics xcite furthermore some properties of the sparticles in particular mass differences can be measured precisely in some scenarios xcite the first studies that extracted constraints on supersymmetric models worked in general within the context of the mssm embedded in a gut scale model such as the cmssm xcite after specifying the fundamental model parameters at the high scale the renormalisation group equations are used to obtain the weak scale particle spectrum this approach provides a convenient framework for phenomenological analyses as the number of free parameters is reduced drastically compared to the general mssm from o100 to xmath3 and xmath4 parameters in the case of the cmssm the drawback is that one is often confined to very specific scenarios for example in the cmssm the lsp is dominantly bino over most of the parameter space this has important consequences for the dark matter relic abundance furthermore it was customary to choose some specific values for some of the mssm or even the sm parameters for a convenient representation of the parameter space in two dimensions while the link between specific observables and allowed region of parameter space is easier to grasp in this framework the allowed parameter space appeared much more restrictive than if all free parameters were allowed to vary in the last few yearsefficient methods for exploring multi dimensional parameter space have been used in particle physics and more specifically for determining the allowed parameter space of the cmssm this approach showed that the often narrow strips in parameter space obtained when varying only two parameters at a time fattened to large areas xcite after letting all parameters of the cmssm and the sm vary in the full range with this efficient parameter space sampling methodit becomes possible to relax some theoretical assumptions and consider the full parameter space of the mssm because the number of experimental constraints on tev scale physics is still rather limited it seems a bit premature to go to the full fledge xmath5 parameters of the mssm or even to the 19 parameters that characterize the model when assuming no flavour structure and equality of all soft parameters for the first and second generations of sfermions 
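The efficient multi-dimensional sampling referred to here is typically a Markov chain Monte Carlo scan. The sketch below shows the bare Metropolis-Hastings machinery for such a weak-scale scan; the parameter names and ranges are loosely modelled on the text, but the "observable" function is an invented toy stand-in (a real scan would call a spectrum calculator and a relic-density code such as micrOMEGAs at every point), and the target values and uncertainties are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical weak-scale parameters and flat-prior ranges (GeV, except tan_beta).
names = ["mu", "M2", "tan_beta", "MA", "m_squark", "m_slepton"]
lo = np.array([ 100.0,  100.0,  2.0,  100.0,  300.0,  100.0])
hi = np.array([2000.0, 2000.0, 60.0, 2000.0, 3000.0, 2000.0])

def mock_observables(theta):
    """Toy stand-in returning (relic density, a_mu x 1e10); a real analysis
    would compute these from the full spectrum at each point."""
    mu, M2, tb, MA, msq, msl = theta
    omega = 0.02 * (min(mu, 0.5 * M2) / 300.0) ** 2     # grows with the LSP mass scale
    a_mu = 30.0 * tb * (300.0 / msl) ** 2                # decouples with heavy sleptons
    return omega, a_mu

def log_like(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                                    # flat priors inside the box
    omega, a_mu = mock_observables(theta)
    chi2 = ((omega - 0.11) / 0.01) ** 2 + ((a_mu - 29.0) / 8.0) ** 2
    return -0.5 * chi2

theta = 0.5 * (lo + hi)
logp = log_like(theta)
step = 0.02 * (hi - lo)
chain, accepted = [], 0
for _ in range(20000):
    prop = theta + step * rng.standard_normal(len(theta))
    logp_prop = log_like(prop)
    if np.log(rng.random()) < logp_prop - logp:           # Metropolis acceptance
        theta, logp = prop, logp_prop
        accepted += 1
    chain.append(theta.copy())
chain = np.array(chain)
print("acceptance rate:", accepted / len(chain))
print("posterior means:", dict(zip(names, np.round(chain[5000:].mean(axis=0), 1))))
```

The chain, after discarding a burn-in, is what one histograms to obtain one- and two-dimensional posterior distributions of the parameters and of derived quantities such as the direct-detection cross section.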
for an approach along these lines see xcite furthermore many parameters for example those of the first and second generations of squarks once chosen to be equal to avoid strong flavour changing neutral current constraints do not play an important role in the observables selected to fit the model here we consider a model where input parameters of the mssm are defined at the weak scale and we add some simplifying assumptions common slepton masses xmath6 and common squark masses xmath7 at the weak scale for all three generations and universality of gaugino parameters at the gut scale this implies the following relation between the gaugino masses at the weak scale xmath8 we furthermore assume that xmath9 is the only non zero trilinear coupling while as we just argued the first assumption should not impact much our analysis the second should certainly be considered as a theoretical bias this assumption is however well motivated in the context of models defined at the gut scale most importantly in our approach we keep the higgsino parameter xmath10 and the gaugino mass xmath11 as completely independent parameter the relation between the gaugino and higgsino parameters is what determines the nature of the lsp and plays an important role in determining the lsp lsp annihilation in the early universe in that sense our modelhas many similarities with the non universal higgs model which also has xmath10 and xmath11 as independent parameters xcite the observables selected to constrain the model include the relic density of dark matter xmath12 direct searches for higgs and new particles at colliders searches for rare processes such as the muon anomalous magnetic moment as well as various b physics observables note that the dark matter relic abundance is computed within the standard cosmological scenario the direct detection of dark matter while providing stringent constraint on the model introduces additional unknown parameters both from astrophysics and from strong interactions we therefore prefer to consider the direct detection rate as an observable to be predicted rather than as a constraint keeping in mind that folding in the astrophysical and hadronic uncertainty could however easily introduce an order of magnitude uncertainty in that prediction xcite we find that each individual parameter of the mssm model is only weakly constrained in particular the parameters of the sfermion sector the very large allowed parameter space only reflects the still poor sampling of the total parameter space by experiments the neutralino sector is better constrained with a preferred value for the lsp of a few hundred gev s and a small likelihood for masses above 900gev similarly charginos above 12tev are disfavoured we also find a lower limit on the pseudo scalar mass as well as on xmath13 furthermore some correlations between parameters of the model are observed most notably the one between xmath10 and the gaugino mass this is because those two parameters determine the higgsino content of the lsp after having determined the allowed parameter space we examined the predictions for direct detection as well as for lhc searches both in the higgs and susy sector as well as for b physics observables although each type of search can only probe a fraction of the total parameter space we find a good complementarity between the different searches with less than 10 of scenarios leading to no signal for example large signals for direct detection are expected in the mixed bino higgsino lsp scenario that are hard to probe at the 
lhc the lhc searches in the susy and higgs sector are also complementary and b observables are specially useful in scenarios with large xmath13 and a pseudoscalar that is not too heavy the predictions for susy searches can be different from that expected in the constrained cmssm with in particular a large fraction of models that only have a gluino accessible at lhc the squarks being too heavy to ascertain how experiments that will take place in the near futurecould further constrain the parameter space of the model we consider specific case studies for example we consider the impact of a signal in xmath14 at tevatron or of the observation of a signal in direct detection experiments finally we examine in more details the susy signals at the lhc analysing the preferred decay chains for models that have either a gluino or a squark within the reach of the lhc in this analysiswe did not include the constraints from indirect detection experiments because the rates predicted feature a strong dependence on additional quantities such as the dark matter profile or the boost factor the predictions for the rates for xmath15 will be presented in a separate publication xcite the paper is organised as follows the model and the impact of various constraints are described in section 2 the method used for the fit is described in section 3 the results of the global fits are presented in section 4 together with the impact of a selected number of future measurements the susy signatures are detailed in section 5 the conclusion contains a summary of our results we consider the mssm with input parameters defined at the weak scale we assume minimal flavour violation equality of the soft masses between sfermion generations and unification of the gaugino mass at the gut scale the latter leads to xmath8 at the weak scale relaxing this assumption is kept for a further study we allow for only one non zero trilinear coupling xmath9 forthe b squark the mixing which is xmath16 is driven in general by xmath17 rather than by the trilinear coupling this approximation is however not very good in the small sample of models with xmath18 gev note also that the higgs mass at high xmath13 can show some dependence on the sbottom mixing for first and second generations of squarksthe mixing which depends on fermions masses is negligible except for the neutralino nucleon cross section since the dominant contributions to the scalar cross section are also dependent on fermion masses however since the squark exchange diagram is usually subdominant as compared to higgs exchange the neglected contribution of the trilinear coupling falls within the theoretical uncertainties introduced by the hadronic matrix elements xcite similarly neglecting the the muon trilinear mixing xmath19 could affect the prediction for xmath20 but this effect is not large compared with the uncertainties on the value extracted from measurements the top quark mass xmath21 is also used as an input although it has a much weaker influence on the results than in the case of gut scale models for the latterthe top quark mass enters the renormalization group evolution and can have a large impact on the supersymmetric spectrum in some regions of the parameter space in the general mssmthe top quark mass mainly influences the light higgs mass we fix xmath22 and xmath23 gev the free parameters of our mssm model with unified gaugino masses mssm ug are xmath24 the range examined for each of these parameters is listed in table tab param mssm ug has a far more restricted set of 
paramters than the general mssm still this model will show how the possibilities for susy scenarios open up the observables that will be used in the fit are listed in table tab constraints we first review the expectations for the role of each observable in constraining the mssm parameter space tab param range of the free mssm ug parameters colsoptionsheader tab squark the left handed quark which couples strongly to the wino andor higgsino component has a wide variety of decay modes the frequency of each dominant decay chain are displayed in table tab squark for each lsp configuration for the bino lsp the dominant mode is usually xmath25 with typical branching fractions around 60 the chargino will decay either into xmath26 or xmath27 when light sleptons are present the subdominant mode in those scenario is xmath28 with xmath29 the decay chains are similar to those of the cmssm in some casesthe second chargino a mixed higgsino wino is kinematically accessible and the dominant mode will be xmath30 with subdominant decays into xmath31 and xmath32 xmath33 will decay preferentially into xmath34 or in other neutralinos as well as into xmath35 the higgs can be produced in either xmath36 or further in xmath37 a fraction of models 77 feature the dominant decay into the lsp xmath38 because the squark xmath39 has a suppressed rate to the bino this channel is dominant only when other two body channels are kinematically forbidden for a mixed lsp xmath40 the relative importance of the various decay channels shifts the decay xmath38 is dominant in less than 3 of the cases although because of the higgsino component of the lsp this can occur even when heavier neutralinos are kinematically accesssible by farthe most frequent dominant decay is xmath30 with significant branching fractions in xmath41 or xmath42 the heavier chargino always has two body decay modes xmath43 preferably xmath34 or xmath36 the xmath44 and xmath45 in turn feature mostly 3body decays note that decay modes into higgs bosons xmath36 can involve even the heavy higgs bosons as usualwhen light sleptons are present the decay xmath46 can be dominant for the higgsino lsp xmath47 the dominant mode is either xmath48 or xmath49 with some contributions from xmath49 and xmath31 the xmath33 channel has similar decay chains as the mixed lsp except that the dominant mode is usually xmath50 rather than channels involving higgses the xmath51 can in a few cases decay via two body xmath52 or xmath35 but in most cases it decays via three body dominantly into xmath53 these decays mainly give signatures into jets and missing energy the xmath45 produced in squark or neutralino decays will also decay via three body final states in summaryxmath39 decays dominantly into heavy charginos with further decay chains involving other chargino neutralino states decay chains involving slepton production dominater only in 25 of scenarios finally recall that the elastic scattering cross section also differs significantly depending on the nature of the lsp giving an opportunity to correlate susy signals at lhc with those of direct detection for the bino lsp xmath54pb while xmath55 pb for the mixed higgsino lsp we do not discuss in detail the case where both squarks and gluinos are below 2tev the decay chains can be rather complicated with the possibility of producing the gluino in squark decay and vice versa the case where the gluino is heavier than the squarks features the same decay chains for the squarks as the case just discussed increasing the number of free parameters 
as compared to the cmssm model has opened up the possibilities for supersymmetric scenarios that are compatible with all experimental constraints and this even maintaining the universality of gaugino mass although the parameter space of the model is still not very well constrained we found that the most favoured models have a lsp of a few hundred gev with a significant higgsino fraction xmath56 contrary to the cmssm case the higgsino lsp is not fully correlated with a very heavy squark sector although all our scenarios favour squarks above the tev scale a very light pseudoscalar is also disfavoured with xmath57 gev this means that large deviations from the sm in b physics observables are expected only in a small fraction of allowed scenarios our favoured scenarios predict few signals at the tevatron the pseudoscalar higgs as well as the coloured sector are too heavy to be accessed by direct searches only very few scenarios have a potentially large enough rate for trilepton searches at the tevatron the complementarity between future experiments to probe this class of models was emphasized even though susy or heavy higgs signals are not guaranteed at lhc the majority of allowed models predict at least one signal either at the lhc including the flavour sector or in future direct detection experiment furthermore the light higgs is expected to be around 120gev with sm like couplings we have also explored the various dominant decay chains for gluinos and squarks that could be produced at lhc in the mssm ug as well as for the heavy neutralinos appearing in the decays of these coloured sparticles we found that for models with gluinos accessible at lhc a significant fraction of the heavy neutralinos produced decayed dominantly into a gauge or higgs boson furthermore states which decayed into sleptons are rarely dominant we also showed how the preferred squarks decay channels are determined to a large extent by the neutralino composition whether one can exploit these decay chains to determine some properties of the sparticles remains to be seen in our analysis the relic density measurement plays the dominant role in constraining the model since the relic density computation implicitly assumes a standard cosmological scenario relaxing this requirement affects significantly the allowed parameter space of the model finally we comment on the difference between our results and other recent analyses done within the framework of the mssm with 24 parameters either using a mcmc likelihood approach or applying xmath58 constraints on each of the observables xcite first these studies were done in a more general model than the one we have considered with in particular no universality condition on the gaugino masses this means that the lsp can have a significant wino component and therefore is more likely to be at the tev scale as was found in xcite using linear priors recall that a tev scale wino annihilates efficiently into gauge bosons pairs the analysis of xcite also emphasizes the prior dependence with a generally much lighter spectrum using log priors this is due mostly to the poorly constrained parameter space xcite as in our analysis squarks and sleptons ran over the full range allowed in the scan and the pseudoscalar mass can be very heavy the analysis of xcite used a different statistical treatment but most importantly did not require that the neutalino explained all the dm in the universe only an upper bound on xmath12 was imposed this means that a large number of models with small mass splitting 
between the lsp and the nlsp appeared in the scan calling for a careful study of collider limits in our approachsuch models are ruled out since they have xmath59 this analysis further emphasized the light susy spectrum in their scans so naturally found preferred lsp mass below the tev scale we thank j hamann for many useful discussions on the mcmc method we acknowledge support from the indo french center fro promotion of advanced scientific research under project number 30004 2 this work was also supported in part by the gdri acpp of cnrs and by the french anr project toolsdmcoll blan07 2 194882 the work of ap was supported by the russian foundation for basic research grant rfbr08 02 00856a and rfbr08 02 92499a collaboration j angle et al first results from the xenon10 dark matter experiment at the gran sasso national laboratory httpdxdoiorg101103physrevlett100021303phys rev lett 100 2008 021303 httparxivorgabs07060039arxiv07060039 astro ph collaboration z ahmed et al a search for wimps with the first five tower data from cdms httparxivorgabs08023530arxiv08023530 astro ph collaboration o adriani et al an anomalous positron abundance in cosmic rays with energies 15100 gev httpdxdoiorg101038nature07942nature 458 2009 607609 httparxivorgabs08104995arxiv08104995 astro ph collaboration a a abdo et al measurement of the cosmic ray e plus e spectrum from 20 gev to 1 tev with the fermi large area telescope httparxivorgabs09050025arxiv09050025 astrophhe collaboration f aharonian et al the energy spectrum of cosmic ray electrons at tev energies httpdxdoiorg101103physrevlett101261104phys 101 2008 261104 httparxivorgabs08113894arxiv08113894 astro ph o adriani et al a new measurement of the antiproton to proton flux ratio up to 100 gev in the cosmic radiation httpdxdoiorg101103physrevlett102051101phys rev 102 2009 051101 httparxivorgabs08104994arxiv08104994 astro ph collaboration j dunkley et al five year wilkinson microwave anisotropy probe wmap observations likelihoods and parameters from the wmap data httpdxdoiorg101088006700491802306astrophys j suppl 180 2009 306329 httparxivorgabs08030586arxiv08030586 astro ph collaboration d n spergel et al wilkinson microwave anisotropy probe wmap three year results implications for cosmology httpdxdoiorg101086513700astrophys j suppl 170 2007 377 httparxivorgabsastroph0603449arxivastroph0603449 collaboration m tegmark et al cosmological constraints from the sdss luminous red galaxies httpdxdoiorg101103physrevd74123507phys rev d74 2006 123507 httparxivorgabsastroph0608632arxivastroph0608632 collaboration t aaltonen et al search for new physics in the xmath60 channel with a lowxmath61 lepton threshold at the collider detector at fermilab httpdxdoiorg101103physrevd79052004phys rev d79 2009 052004 httparxivorgabs08103522arxiv08103522 hep ex collaboration v m abazov et al search for associated production of charginos and neutralinos in the trilepton final state using 23 fb1 of data httparxivorgabs09010646arxiv09010646 hep ex e aprile l baudis and f t x collaboration status and sensitivity projections for the xenon100 dark matter experiment httparxivorgabs09024253arxiv09024253 astrophim collaboration t bruch status and future of the cdms experiment cdms ii to supercdms httpdxdoiorg10106312823758aip conf 957 2007 193196 collaboration a a moiseev gamma ray large area space telescope mission overview httpdxdoiorg101016jnima200801005nucl a588 2008 4147 e mocchiutti et al the pamela space experiment httparxivorgabs09052551arxiv09052551 astrophhe atlas detector and physics 
performance technical design report vol 2 cern lhcc99 15 collaboration g l bayatian et al cms technical design report volume ii physics performance httpdxdoiorg10108809543899346s01j g34 2007 9951579 m artuso et al xmath62 xmath63 and xmath64 decays httpdxdoiorg101140epjcs1005200807161eur phys j c57 2008 309492 httparxivorgabs08011833arxiv08011833 hep ph j r ellis s heinemeyer k a olive a m weber and g weiglein the supersymmetric parameter space in light of xmath65 physics observables and electroweak precision data httpdxdoiorg10108811266708200708083jhep 08 2007 083 httparxivorgabs07060652arxiv07060652 hep ph h baer and c balazs chi2 analysis of the minimal supergravity model including wmap gmu2 and b s gamma constraints httpdxdoiorg10108814757516200305006jcap 0305 2003 006 httparxivorgabshepph0303114arxivhepph0303114 g belanger f boudjema a cottrant a pukhov and a semenov wmap constraints on sugra models with non universal gaugino masses and prospects for direct detection httpdxdoiorg101016jnuclphysb200411036nucl phys b706 2005 411454 httparxivorgabshepph0407218arxivhepph0407218 e a baltz and p gondolo markov chain monte carlo exploration of minimal supergravity with implications for dark matter httpdxdoiorg10108811266708200410052jhep 10 2004 052 httparxivorgabshepph0407039arxivhepph0407039 b c allanach and c g lester multi dimensional msugra likelihood maps httpdxdoiorg101103physrevd73015013phys rev d73 2006 015013 httparxivorgabshepph0507283arxivhepph0507283 b c allanach c g lester and a m weber the dark side of msugra jhep 12 2006 065 httparxivorgabshepph0609295arxivhepph0609295 r r de austri r trotta and l roszkowski a markov chain monte carlo analysis of the cmssm jhep 05 2006 002 httparxivorgabshepph0602028arxivhepph0602028 c f berger j s gainer j l hewett and t g rizzo supersymmetry without prejudice httpdxdoiorg10108811266708200902023jhep 02 2009 023 httparxivorgabs08120980arxiv08120980 hep ph s s abdussalam b c allanach f quevedo f feroz and m hobson fitting the phenomenological mssm httparxivorgabs09042548arxiv09042548 hep ph h baer a mustafayev s profumo a belyaev and x tata direct indirect and collider detection of neutralino dark matter in susy models with non universal higgs masses jhep 07 2005 065 httparxivorgabshepph0504001arxivhepph0504001 a bottino f donato n fornengo and s scopel size of the neutralino nucleon cross section in the light of a new determination of the pion nucleon sigma term httpdxdoiorg101016s092765050200107x astropart 18 2002 205211 httparxivorgabshepph0111229arxivhepph0111229 g belanger f boudjema a pukhov and a semenov dark matter direct detection rate in a generic model with micromegas22 httpdxdoiorg101016jcpc200811019comput 180 2009 747767 httparxivorgabs08032360arxiv08032360 hep ph g blanger in preparation b c allanach softsusy a c program for calculating supersymmetric spectra httpdxdoiorg101016s001046550100460x comput 143 2002 305331 httparxivorgabshepph0104145arxivhepph0104145 g belanger f boudjema a pukhov and a semenov micromegas version 13 httpdxdoiorg101016jcpc200512005comput 174 2006 577604 httparxivorgabshepph0405253arxivhepph0405253 g belanger f boudjema a pukhov and a semenov micromegas20 a program to calculate the relic density of dark matter in a generic model httpdxdoiorg101016jcpc200611008comput 176 2007 367382 httparxivorgabshepph0607059arxivhepph0607059 m misiak et al the first estimate of banti b x s gamma at oalphas2 httpdxdoiorg101103physrevlett98022002phys 98 2007 022002 httparxivorgabshepph0609232arxivhepph0609232 collaboration 
v m abazov et al search for squarks and gluinos in events with jets and missing transverse energy using 21 xmath66 of xmath67 collision data at xmath68 196 tev httpdxdoiorg101016jphysletb200801042phys b660 2008 449457 httparxivorgabs07123805arxiv07123805 hep ex m davier g2 httparxivorgabstalk presented at tau08 novosibirsk russia talk presented at tau08 novosibirsk russia a dedes h k dreiner and u nierste correlation of b s mu mu and g2mu in minimal supergravity httpdxdoiorg101103physrevlett87251804phys 87 2001 251804 httparxivorgabshepph0108037arxivhepph0108037 a dedes h k dreiner u nierste and p richardson trilepton events and xmath69 no lose for msugra at the tevatron httparxivorgabshepph0207026arxivhepph0207026 g mercadante j k mizukoshi and x tata using b tagging to enhance the susy reach of the cern large hadron collider httpdxdoiorg101103physrevd72035009phys rev d72 2005 035009 httparxivorgabshepph0506142arxivhepph0506142 r h k kadala p g mercadante j k mizukoshi and x tata heavy flavour tagging and the supersymmetry reach of the cern large hadron collider httpdxdoiorg101140epjcs1005200806729 eur j c56 2008 511528 httparxivorgabs08030001arxiv08030001 hep ph u de sanctis t lari s montesano and c troncon perspectives for the detection and measurement of supersymmetry in the focus point region of msugra models with the atlas detector at lhc httpdxdoiorg101140epjcs1005200704153eur phys j c52 2007 743758 httparxivorgabs07042515arxiv07042515 hep ex s profumo and c e yaguna a statistical analysis of supersymmetric dark matter in the mssm after wmap httpdxdoiorg101103physrevd70095004phys rev d70 2004 095004 httparxivorgabshepph0407036arxivhepph0407036 b c allanach k cranmer c g lester and a m weber natural priors cmssm fits and lhc weather forecasts httpdxdoiorg10108811266708200708023jhep 08 2007 023 httparxivorgabs07050487arxiv07050487 hep ph r trotta f feroz m p hobson l roszkowski and r ruiz de austri the impact of priors and observables on parameter inferences in the constrained mssm httpdxdoiorg10108811266708200812024jhep 12 2008 024 httparxivorgabs08093792arxiv08093792 hep ph
using a markov chain monte carlo approach we find the allowed parameter space of a mssm model with seven free parameters in this model universality conditions at the gut scale are imposed on the gaugino sector we require in particular that the relic density of dark matter saturates the value extracted from cosmological measurements assuming a standard cosmological scenario we characterize the parameter space of the model that satisfies experimental constraints and illustrate the complementarity of the lhc searches b physics observables and direct dark matter searches for further probing the parameter space of the model we also explore the different decay chains expected for the coloured particles that would be produced at lhc date constraining the mssm with universal gaugino masses and implication for searches at the lhc g blangerxmath0 f boudjemaxmath0 a pukhovxmath1 r k singhxmath2 1 lapth univ de savoie cnrs bp110 f74941 annecy le vieux france 2 skobeltsyn inst of nuclear physics moscow state univ moscow 119992 russia 3 institut fr theoretische physik und astrophysik universitt wrzburg d97074 wrzburg germany
introduction, model and constraints, conclusion, acknowledgements
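The pattern of squark branching fractions described in the article above (a left-handed squark decaying mostly to the wino-like chargino, a subdominant wino-like neutralino mode, and only a small direct rate to a bino-like LSP) already follows from the standard two-body width for a scalar decaying to a massless quark and a massive gaugino, Gamma = g_eff^2/(16 pi) m_squark (1 - m_chi^2/m_squark^2)^2, once the effective couplings reflect the chargino/neutralino composition. The masses and couplings in the sketch below are invented for illustration; a real analysis would take them from the diagonalised chargino and neutralino sectors at each scan point.

```python
import numpy as np

# Hypothetical squark mass and decay channels: (gaugino mass [GeV], effective coupling).
m_squark = 1500.0
channels = {
    "q + chargino_1 (wino-like)":   (350.0, 0.65),
    "q + neutralino_2 (wino-like)": (355.0, 0.46),
    "q + neutralino_1 (bino-like)": (180.0, 0.12),
}

def width(m_chi, g_eff):
    """Two-body width for squark -> massless quark + gaugino of mass m_chi."""
    if m_chi >= m_squark:
        return 0.0
    return g_eff**2 / (16.0 * np.pi) * m_squark * (1.0 - (m_chi / m_squark) ** 2) ** 2

widths = {name: width(*pars) for name, pars in channels.items()}
total = sum(widths.values())
for name, g in widths.items():
    print(f"{name:30s}  BR = {g / total:5.1%}")
```

With these illustrative inputs the wino-like chargino channel takes roughly two thirds of the width, in line with the typical hierarchy quoted in the text for a bino-like LSP.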
quantum mechanical fluctuations during an early epoch of inflation provide a plausible mechanism to generate the energy density perturbations responsible for observed cosmological structure while it has been known for quite some time that inflation is consistent with open spatial hypersurfaces gott 1982 guth weinberg 1983 attention was initially focussed on models in which there are a very large number of xmath17foldings during inflation resulting in almost exactly flat spatial hypersurfaces for the observable part of the present universe guth 1981 also see kazanas 1980 sato 1981a b this was perhaps inevitable because of strong theoretical prejudice towards flat spatial hypersurfaces and their resulting simplicity however to get a very large number of xmath17foldings during inflation it seems necessary that the inflation model have a small dimensionless parameter j r gott private communication 1994 banks et al 1995 which would require an explanation attempts to reconcile these favoured flat spatial hypersurfaces with observational measures of a low value for the clustered mass density parameter xmath1 have concentrated on models in which one postulates the presence of a cosmological constant xmath18 peebles 1984 in the simplest flatxmath18 model one assumes a scale invariant harrison 1970 peebles yu 1970 zeldovich 1972 primordial power spectrum for gaussian adiabatic energy density perturbations such a spectrum is generated by quantum mechanical fluctuations during an early epoch of inflation in a spatially flat model provided that the inflaton potential is reasonably flat fischler ratra susskind 1985 and references therein it has been demonstrated that these models are indeed consistent with current observational constraints eg stompor grski banday 1995 ostriker steinhardt 1995 ratra sugiyama 1995 liddle et al 1996b ganga ratra sugiyama 1996b hereafter grs an alternative more popular of late is to accept that the spatial hypersurfaces are not flat in this case the radius of curvature for the open spatial sections introduces a new length scale in addition to the hubble length which requires a generalization of the usual flat space scale invariant spectrum ratra peebles 1994 hereafter rp94 such a spectrum is generated by quantum mechanical fluctuations during an epoch of inflation in an open bubble model rp94 ratra peebles 1995 hereafter rp95 bucher et al 1995 hereafter bgt lyth woszczyna 1995 yamamoto et al 1995 hereafter yst provided that the inflaton potential inside the bubble is reasonably flat such gaussian adiabatic open bubble inflation models have also been shown to be consistent with current observational constraints rp94 kamionkowski et al 1994 grski et al 1995 hereafter grsb liddle et al 1996a hereafter llrv ratra et al 1995 grs inflation theory by itself is unable to predict the normalization amplitude for the energy density perturbations currently the least controversial and most robust method for the normalization of a cosmological model is to fix the amplitude of the model predicted large scale cmb spatial anisotropy by comparing it to the observed cmb anisotropy discovered by the xmath0dmr experiment smoot et al 1992 previously specific open cold dark matter cdm models have been examined in light of the xmath0dmr two year results bennett et al grsb investigated the cmb anisotropy angular spectra predicted by the open bubble inflation model rp94 and compared large scale structure predictions of this dmr normalized model to observational data cayn et al 1996 performed a 
related analysis for the open model with a flat space scale invariant spectrum wilson 1983 hereafter w83 and yamamoto bunn 1996 hereafter yb examined the effect of additional sources of quantum fluctuations bgt yst in the open bubble inflation model in this paper we study the observational predictions for a number of open cdm models in particular we employ the power spectrum estimation technique devised by grski 1994 for incomplete sky coverage to normalize the open models using the xmath0dmr four year data bennett 1996 in xmath19 we provide an overview of open bubble inflation cosmogonies in xmath20 we detail the various dmr data sets used in the analyses here discuss the various open models we consider and present the dmr estimate of the cmb rms quadrupole anisotropy amplitude xmath21 as a function of xmath1 for these open models in xmath22we detail the computation of several cosmographic and large scale structure statistics for the dmr normalized open models these statistics are confronted by various current observational constraints in xmath23 our results are summarized in xmath24 the simplest open inflation model is that in which a single open inflation bubble nucleates in a possibly spatially flat inflating spacetime gott 1982 guth weinberg 1983 in this model the first epoch of inflation smooths away any preexisting spatial inhomogeneities while simultaneously generating quantum mechanical zero point fluctuations then in a tunnelling event an open inflation bubble nucleates and for a small enough nucleation probability the observable universe lies inside a single open inflation bubble fluctuations of relevance to the late time universe can be generated via three different quantum mechanical mechanisms 1 they can be generated in the first epoch of inflation 2 they can be generated during the tunnelling event thus resulting in a slightly inhomogeneous initial hypersurface inside the bubble or a slightly non spherical bubble and 3 they can be generated inside the bubble the tunneling amplitude is largest for the most symmetrical solution and deviations from symmetry lead to an exponential suppression so it has usually been assumed that the nucleation process mechanism 2 does not lead to the generation of significant inhomogeneities quantum mechanical fluctuations generated during evolution inside the bubble rp95 are significant assuming that the energy density difference between the two epochs of inflation is negligible and so the bubble wall is not significant one may estimate the contribution to the perturbation spectrum after bubble nucleation from quantum mechanical fluctuations during the first epoch of inflation bgt yst as discussed by bucher turok 1995 hereafter bt also see yst yb the observable predictions of these simple open bubble inflation models are almost completely insensitive to the details of the first epoch of inflation for the observationally viable range of xmath1 this is because the fluctuations generated during this epoch affect only the smallest wavenumber part of the energy density perturbation power spectrum which can not contribute significantly to observable quantities because of the spatial curvature length cutoff in an open universe eg w83 kamionkowski spergel 1994 rp95 inclusion of such fluctuations in the calculations alter the predictions for the present value of the rms linear mass fluctuations averaged over an xmath25 mpc sphere xmath26 by xmath27 which is comparable to our computational accuracy besides the open bubble inflation model spectra a variety 
of alternatives have also been considered predictions for the usual flat space scale invariant spectrum in an open model have been examined w83 abbott schaefer 1986 gouda sugiyama sasaki 1991 sugiyama gouda 1992 kamionkowski spergel 1994 sugiyama silk 1994 cayn et al the possibility that the standard formulation of quantum mechanics is incorrect in an open universe and that allowance must be made for non square integrable basis functions has been investigated lyth woszczyna 1995 and other spectra have also been considered eg w83 abbott schaefer 1986 kamionkowski spergel 1994 these spectra being inconsistent with either standard quantum mechanics or the length scale set by spatial curvature are of historical interest more recently the open bubble inflation scenario has been further elaborated on yst have considered a very specific model for the nucleation of the open bubble in a spatially flat de sitter spacetime and demonstrated a possible additional contribution from a non square integrable basis function which depends on the form of the potential and on the assumed form of the quantum state prior to bubble nucleation however since the non square integrable basis function contributes only on the very largest scales the spatial curvature cutoff in an open universe makes almost all of the model predictions insensitive to this basis function for the observationally viable range of xmath1 yst yb for example at xmath28 its effect is to change xmath26 by xmath29 an additional possible effect determined for the specific model of an open inflation bubble nucleating in a spatially flat de sitterspacetime is that fluctuations of the bubble wall behave like a non square integrable basis function hamazaki et al 1996 garriga 1996 garca bellido 1996 yamamoto sasaki tanaka 1996 while there are models in which these bubble wall fluctuations are completely insignificant garriga 1996 yamamoto et al 1996 there is as yet no computation that accounts for both the bubble wall fluctuations as well as those generated during the evolution inside the bubble which are always present so it is not yet known if bubble wall fluctuations can give rise to an observationally significant effect finally again in this very specific model the effects of a finite bubble size at nucleation seem to alter the zero bubble size predictions only by a very small amount yamamoto et al 1996 cohn 1996 while there is no guarantee that there is a spatially flat de sitter spacetime prior to bubble nucleation these computations do illustrate the important point that the spatial curvature length cutoff in an open universe eg rp95 does seem to ensure that what happens prior to bubble nucleation does not significantly affect the observable predictions for observationally viable single field open bubble inflation models it is indeed reassuring that accounting only for the quantum mechanical fluctuations generated during the evolution inside the bubble rp94 seems to be essentially all that is required to make observational predictions for the single field open bubble inflation models that is the observational predictions of the open bubble inflation scenario seem to be as robust as those for the spatially flat inflation scenario in this paper we utilize the dmr four year 53 and 90 ghz sky maps in both galactic and ecliptic coordinates we thus quantify explicitly the expected small shifts in the inferred normalization amplitudes due to the small differences between the galactic and ecliptic coordinate maps the maps are coadded using inverse noise 
variance weights derived in each coordinate system the least sensitive 31 ghz maps have been omitted from the analysis since their contribution is minimal under such a weighting scheme the dominant source of emission in the dmr maps is due to the galactic plane we are unable to model this contribution to the sky temperature to sufficient accuracy to enable its subtraction thus we excise all pixels where the galactic plane signal dominates the cmb the geometry of the cut has been determined by using the dirbe 140 xmath30 m map as a tracer of the strongest emission as described completely in banday 1996a all pixels with galactic latitude xmath31 20xmath32xmath33 are removed together with regions towards scorpius ophiucus and taurus orion there are 3881 surviving pixels in galactic coordinates and 3890 in ecliptic this extended four year data galactic plane cut has provided the biggest impact on the analysis of the dmr data see grski et al 1996 hereafter g96 the extent to which residual high latitude galactic emission can modify our results has been quantified in two ways since the spatial morphology of galactic synchrotron free free and dust emission seems to be well described by a steeply falling power spectrum xmath34 kogut 1996a g96 the cosmological signal is predominantly compromised on the largest angular scales as a simple test of galactic contamination we perform all computations both including and excluding the observed sky quadrupole a more detailed approach g96 notes that a large fraction of the galactic signal can be accounted for by using the dirbe 140 xmath30 m sky map reach 1995 as a template for free free and dust emission and the 408 mhz all sky radio survey haslam 1981 to describe synchrotron emission a correlation analysis yields coupling coefficients for the two templates at each of the dmr frequencies we have repeated our model analysis after correcting the coadded sky maps by the galactic templates scaled by the coefficients derived in g96 in particular we adopt those values derived under the assumption that the cmb anisotropy is well described by an xmath35 1 power law model with normalization amplitude xmath21 xmath36 18 xmath30k and coupling coefficient amplitudes in fact we have investigated this for a sub sample of the models considered here in which we varied xmath1 but fixed xmath2 and xmath10 no statistically significant changes were found in the derived values of either xmath21 or the coupling coefficients one might make criticisms of either technique excluding information from an analysis in this case the quadrupole components can obviously weaken any conclusions simply because statistical uncertainties will grow at the same time it is not clear whether the galactic corrections applied are completely adequate we believe that given these uncertainties our analysis is the most complete and conservative one that is possible the power spectrum analysis technique developed by grski 1994 is implemented orthogonal basis functions for the fourier decomposition of the sky maps are constructed which specifically include both pixelization effects and the galactic cut these are linear combinations of the usual spherical harmonics with multipole xmath37 the functions are coordinate system dependent a likelihood analysis is then performed as described in grski 1994 we consider four open model energy density perturbation power spectra 1 the open bubble inflation model spectrum accounting only for fluctuations that are generated during the evolution inside the bubble rp94 2 
the open bubble inflation model spectrum now also accounting for the fluctuations generated in the first epoch of inflation bgt yst 3 the open bubble inflation model spectrum now also accounting for both the usual fluctuations generated in the first epoch of inflation and a contribution from a non square integrable basis function yst and 4 an open model with a flat space scale invariant spectrum w83 in all caseswe have ignored the possibility of tilt or primordial gravity waves since it is unlikely that they can have a significant effect in viable open models with the eigenvalue of the spatial scalar laplacian being xmath38 where xmath39 is the radial coordinate spatial wavenumber the gauge invariant fractional energy density perturbation power spectrum of type 1 above is xmath40 where xmath41 is the transfer function and xmath42 is the normalization amplitude generalize the primordial part of the spectrum of eq 1 by multiplying it with xmath43 as yet only the specific xmath44 generalized spectrum ie eq 1 is known to be a prediction of an open bubble inflation model and therefore consistent with the presence of spatial curvature it is premature to draw conclusions about open cosmogony on the basis of the xmath45 version of the spectrum considered by bw in the simplest example perturbations generated in the first epoch of inflation introduce an additional multiplicative factor xmath46 on the right hand side of eq 1 for a discussion of the effects of the non square integrable basis function see yst and yb the energy density power spectrum of type 4 above is xmath47 and in this case one can also consider eg xmath48 w83 but because of the spatial curvature cutoff in an open model the predictions are essentially indistinguishable atsmall xmath49 the asymptotic expressions are xmath50 type 1 xmath51 type 2 and xmath52 type 4 conventionally the cmb fractional temperature perturbation xmath53 is expressed as a function of angular position xmath54 on the sky via the spherical harmonic decomposition xmath55 the cmb spatial anisotropy in a gaussian model can then be characterized by the angular perturbation spectrum xmath56 defined in terms of the ensemble average xmath57 the xmath56 s used here were computed using two independent boltzmann transfer codes developed by ns eg sugiyama 1995 and rs eg stompor 1994 some illustrative comparisons are shown in fig we emphasize that the excellent agreement between the xmath56 s computed using the two codes is mostly a reflection of the currently achievable numerical accuracy currently the major likely additional unaccounted for source of uncertainty is that due to the uncertainty in the modelling of various physical effects the computations here assume a standard recombination thermal history and ignore the possibility of early reionization the simplest open models with the least possible number of free parameters have yet to be ruled out by observational data grsb ratra et al 1995 grs this paper so there is insufficient motivation to expand the model parameter space by including the effect of early reionization tilt or gravity waves values determined from the dmr data here assuming no early reionization are unlikely to be very significantly affected by early reionization however since structure forms earlier in an open model other effects of early reionization might be more significant in an open model while it is possible to heuristically account for such effects an accurate quantitative estimate must await a better understanding of structure formation for 
the xmath58 of types 1 2 and 4 above we have evaluated the cmb anisotropy angular spectra for a range of xmath1 spanning the interval between 0.1 and 1.0 for a variety of values of xmath2 the hubble parameter xmath59 and the baryonic mass density parameter xmath10 the values of xmath2 were selected to cover the lower part of the range of ages consistent with current requirements xmath60 of 10.5 gyr 12 gyr or 13.5 gyr with xmath2 as a function of xmath1 computed accordingly see for example jimenez et al 1996 chaboyer et al (an illustrative sketch of this step is given below) the values of xmath10 were chosen to be consistent with current standard nucleosynthesis requirements xmath61 of 0.0055 0.0125 or 0.0205 eg copi schramm turner 1995 sarkar 1996 to render the problem tractable xmath56 s were determined for the central values of xmath62 and xmath63 and for the two combinations of these parameters which most perturb the xmath56 s from those computed at the central values ie for the smallest xmath62 we used the smallest xmath63 and for the largest xmath62 we used the largest xmath63 specific parameter values are given in columns 1 and 2 of tables 1-6 and representative anisotropy spectra can be seen in figs 2 and 3 we therefore improve on our earlier analysis of the dmr two year data grsb by considering a suitably broader range in the xmath10 xmath2 parameter space the cmb anisotropy spectra for xmath58 of type 3 above were computed for a range of xmath1 spanning the interval between 0.1 and 0.9 for xmath64 and xmath65 specific parameter values are given in columns 1 and 2 of table 7 and these spectra are shown in fig 4 in fig 5 we compare the various spectra considered here the differences in the low xmath66 shapes of the xmath56 s in the various models figs 2-5 are a consequence of three effects 1 the shape of the energy density perturbation power spectrum at low wavenumber 2 the exponential suppression at the spatial curvature scale in an open model and 3 the interplay between the usual fiducial cdm sachs wolfe term and the integrated sachs wolfe hereafter sw term in the expression for the cmb spatial anisotropy the relative importance of these effects is determined by the value of xmath1 and leads to the non monotonic behaviour of the large scale xmath56 s as a function of xmath1 seen in figs more precisely the contributions to the cmb anisotropy angular spectrum from the usual and integrated sw terms have a different xmath66 dependence as well as a relative amplitude that is both xmath1 and xmath58 dependent on very large angular scales small xmath66 s the dominant contribution to the usual sw term comes from a higher redshift when the length scales are smaller than does the dominant contribution to the integrated sw term hu sugiyama 1994 1995 as a result in an open model on very large angular scales the usual sw term is cut off more sharply by the spatial curvature length scale than is the integrated sw term hu sugiyama 1994 ie on very large angular scales in an open model the usual sw term has a larger positive effective index xmath35 than the integrated sw term on slightly smaller angular scales the integrated sw term is damped ie it has a negative effective index xmath35 while the usual sw term plateaus hu sugiyama 1994 as a consequence going from the largest to slightly smaller angular scales the usual term rises steeply and then flattens while the integrated term rises less steeply and then drops ie it has a peak the change in shape as a function of xmath66 of these two terms is both xmath1 and xmath58 dependent these are the two dominant effects at
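The step "with xmath2 as a function of xmath1 computed accordingly" can be made concrete with the standard age relation for an open, matter-dominated model, H0 t0 = 1/(1 - Omega0) - Omega0 arccosh(2/Omega0 - 1) / (2 (1 - Omega0)^{3/2}). Whether the authors used exactly this closed form or a numerical integration is not stated, so the Python sketch below is only a plausible reconstruction of that step.

import numpy as np

HUBBLE_TIME_GYR = 9.778          # 1/H0 in Gyr for h = 1

def h0_t0(omega0):
    """dimensionless product H0*t0 for an open, matter-dominated universe (0 < omega0 < 1)."""
    s = np.sqrt(1.0 - omega0)
    return 1.0 / (1.0 - omega0) - omega0 / (2.0 * s**3) * np.arccosh(2.0 / omega0 - 1.0)

def hubble_h(omega0, t0_gyr):
    """h corresponding to a chosen matter density and age."""
    return h0_t0(omega0) * HUBBLE_TIME_GYR / t0_gyr

for t0 in (10.5, 12.0, 13.5):                  # ages used in the text
    row = [f"{hubble_h(om, t0):.2f}" for om in (0.1, 0.3, 0.5, 0.9)]
    print(f"t0 = {t0:4.1f} gyr   h(Omega0 = 0.1, 0.3, 0.5, 0.9) = " + "  ".join(row))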
xmath67 at higher xmath66 other effects come into play more specifically for xmath68 the curvature length scale cutoff and the precise large scale form of the xmath58 considered here are relatively unimportant the cmb anisotropy angular spectrum is quite similar to that for xmath69 and the dominant contribution is the usual sw term for a xmath58 that does not diverge at low wavenumber as with the flat space scale invariant spectrum in an open model for xmath70 the exponential cutoff at the spatial curvature length dominates and the lowestxmath66 xmath56 s are suppressed figs 3 and 5 for this xmath58 as xmath1 is reduced the usual term continues to be important on the largest angular scales down to xmath28 as xmath1 is reduced below xmath71 the integrated term starts to dominate on the largest angular scales and as xmath1 is further reduced the integrated term also starts to dominate on smaller angular scales from fig 3a one will notice that the integrated sw term peak first makes an appearance at xmath72 the central line in the plot at xmath73 and that as xmath1 is further reduced in descending order along the curves shown the integrated term peak moves to smaller angular scales the xmath74 case is where the integrated term peaks at xmath75 and the damping of this term on smaller angular scales xmath76 is compensated for by the steep rise of the usual sw term the two terms are of roughly equal magnitude at xmath77 and these effects result in the almost exactly scale invariant spectrum at xmath9 this case is more scale invariant than fiducial cdm a discussion of some of these features of the cmb anisotropy angular spectrum in the flat space scale invariant spectrum open model is given in cayn et al 1996 open bubble inflation models have a xmath58 that diverges at low wavenumber rp95 note that no physical quantity diverges and this increases the lowxmath66 xmath56 s figs 2 and 5 relative to those of the flat space scale invariant spectrum open model figs 3 and 5 the xmath56 s for low xmath1 models increase more than the higher xmath1 ones since for a fixed wavenumber dependence of xmath58 the divergence is more prominent at lower xmath1 rp94 the non square integrable basis function yst contributes even more power on large angular scales and so at lowxmath66 the xmath56 s of fig 4 are slightly larger than those of fig 2 also see fig 5 again spectra at lower values of xmath1 are more significantly influenced as is clear from figs 2 and 5 in an open bubble inflation model quantum mechanical zero point fluctuations generated in the first epoch of inflation scarcely affect the xmath56 s although at the very lowest values of xmath1 the very lowest order xmath56 coefficients are slightly modified the effect is concentrated in this region of the parameter space since the fluctuations in the first inflation epoch only contribute to and increase the lowest wavenumber part of xmath58 in simple open bubble inflation models the precise value of this small effect is dependent on the model assumed for the first epoch of inflation bt since the dmr data is most sensitive to multipole moments with xmath78 810 one expects the effect at xmath78 23 to be almost completely negligible bt also see yst yb figs 35 show that both the flat space scale invariant spectrum open model and the contribution from the non square integrable mode do lead to significantly different xmath56 s compared to those of fig the results of the dmr likelihood analyses are summarized in figs 621 and tables 17 and 13 two representative sets 
of likelihood functions xmath79 are shown in figs 6 and 7 figure 6 shows those derived from the ecliptic frame sky maps ignoring the correction for faint high latitude foreground galactic emission and excluding the quadrupole moment from the analysis figure 7 shows the likelihood functions derived from the galactic frame sky maps accounting for the faint high latitude foreground galactic emission correction and including the quadrupole moment in the analysis together these two data sets span the maximum range of normalizations inferred from our analysis the former providing the highest and the latter the lowest xmath21 tables 17 give the xmath21 central values and 1xmath80 and 2xmath80 ranges for spectra of type 1 3 and 4 above computed from the appropriate posterior probability density distribution function assuming a uniform prior each line in tables 17 lists these values at a given xmath1 for the 8 possible combinations of 1 galactic or ecliptic coordinate map 2 faint high latitude galactic foreground emission correction accounted for or ignored and 3 quadrupole included xmath81 or excluded xmath82 value of varying cosmological parameters like xmath10 since they do not quote derived xmath21 values for this model we are not able to compare to their results the corresponding ridge lines of maximum likelihood xmath21 value as a function of xmath1 are shown in figs 810 for some of the cosmological parameter values considered here although we have computed these values for spectra of type 2 above ie those accounting for perturbations generated in the first epoch of inflation we record only a subset of them in column 4 of table 13 these should be compared to columns 2 and 6 of table 13 which show the maximal 2xmath80 xmath21 range for spectra of types 1 and 3 while the differences in xmath21 between spectra 1 and 2 cols 2 and 4 of table 13 are not totally insignificant more importantly the differences between the xmath26 values for the three spectra cols 3 5 and 7 of table 13 are observationally insignificant the entries in tables 16 illustrate the shift in the inferred normalization amplitudes due to changes in xmath2 and xmath10 these shifts are larger for models with a larger xmath1 since these models have cmb anisotropy spectra that rise somewhat more rapidly towards large xmath66 so in these cases the dmr data is sensitive to somewhat smaller angular scales where the effects of varying xmath2 and xmath63 are more prominent figure 11 shows the effects that varying xmath62 and xmath63 have on some of the ridge lines of maximum likelihood xmath21 as a function of xmath1 and fig 13 illustrates the effects on some of the conditional fixed xmath1 slice likelihood densities for xmath83 on the whole for the cmb anisotropy spectra considered here shifts in xmath2 and xmath84 have only a small effect on the inferred normalization amplitude the normalization amplitude is somewhat more sensitive to the differences between the galactic and ecliptic coordinate sky maps to the foreground high latitude galactic emission treatment and to the inclusion or exclusion of the xmath85 moment for the purpose of normalizing models we choose for our 2xmath80 cl bounds values from the likelihood fits that span the maximal range in the xmath21 normalizations specifically for the lower 2xmath80 bound we adopt the value determined from the analysis of the galactic coordinate maps accounting for the high latitude galactic emission correction and including the xmath85 moment in the analysis and for the upper 2xmath80 
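The central values and 1 and 2 xmath80 ranges quoted in tables 1-7 come from the posterior density for the normalization under a uniform prior. A minimal sketch of that reduction is given below; the bumpy Gaussian used as the likelihood is a stand-in for the actual DMR likelihood, and the equal-tail definition of the ranges is one common convention, not necessarily the exact prescription used in the paper.

import numpy as np

# stand-in likelihood for the normalization amplitude Q (arbitrary units);
# in the real analysis this comes from the DMR map likelihood at fixed Omega0
q = np.linspace(5.0, 35.0, 2001)
like = np.exp(-0.5 * ((q - 18.0) / 3.0) ** 2) * (1.0 + 0.02 * np.sin(q))

# posterior with a uniform prior is just the normalized likelihood
post = like / np.trapz(like, q)
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (post[1:] + post[:-1]) * np.diff(q))))
cdf /= cdf[-1]

def quantile(p):
    return np.interp(p, cdf, q)

central = quantile(0.5)
lo1, hi1 = quantile(0.1587), quantile(0.8413)   # 68.3% (1 sigma) equal-tail range
lo2, hi2 = quantile(0.0228), quantile(0.9772)   # 95.4% (2 sigma) equal-tail range
print(f"central Q = {central:.2f}, 1 sigma: [{lo1:.2f}, {hi1:.2f}], 2 sigma: [{lo2:.2f}, {hi2:.2f}]")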
value that determined from the analysis of the ecliptic coordinate maps ignoring the galactic emission correction and excluding the xmath85 moment from the analysis these values are recorded in columns 5 and 8 of tables 9-12 and in columns 2 4 and 6 of table 13 because slightly different inputs were used in the likelihood analyses of the various model spectra and different interpolation methods were used in the determination of the xmath21 values there are small but insignificant differences in the quoted xmath21 values for some identical models in these tables figure 12 compares the ridge lines of maximum likelihood xmath21 value as a function of xmath1 for the four different cmb anisotropy angular spectra considered here and fig 14 compares some of the conditional fixed xmath1 slice likelihood densities for xmath21 for these four cmb anisotropy angular spectra approximate fitting formulae may be derived to describe the above two extreme 2xmath80 limits for the open bubble inflation model rp94 bgt yst not including a contribution from a non square integrable basis function we have xmath86 (eq 5) which is good to better than xmath87 for all values of xmath1 and to better than xmath88 over the observationally viable range of xmath89 for those models including a contribution from the non square integrable basis function yst we have xmath90 (eq 6) mostly good to better than xmath88 the flat space scale invariant spectrum open model fitting formula is xmath91 (eq 7) generally good to better than xmath92 except near xmath93 and xmath94 where the deviations are larger further details about these fitting formulae may be found in stompor 1996 the approximate fitting formulae of eqs 5-7 provide a convenient portable normalization of the open models it is important however to note that they have been derived using the xmath21 values determined for a given xmath2 and xmath10 and hence do not account for the additional uncertainty which could be as large as xmath88 due to allowed variations in these parameters we emphasize that in our analysis here we make use of the actual xmath21 values derived from the likelihood analyses not these fitting formulae figures 15 and 16 show projected likelihood densities for xmath1 for some of the models and dmr data sets considered here (a sketch contrasting projected and marginal densities is given below) note that the general features of the projected likelihood densities for the open bubble inflation model only accounting for the fluctuations generated during the evolution inside the bubble spectrum 1 above are consistent with those derived from the dmr two year data grsb fig 3 however since we only compute down to xmath95 here only the rise to the prominent peak at very low xmath1 grsb is seen bw show in the middle left hand panel of their fig 11 presumably the projected likelihood density for xmath1 for the same open bubble inflation model the general features of which are consistent with those derived here figures 17-21 show marginal likelihood densities for xmath1 for some of the models and dmr data sets considered here for the open bubble inflation model accounting only for the fluctuations generated during the evolution inside the bubble rp94 the dmr two year data galactic frame quadrupole moment excluded and included marginal likelihoods are shown in fig 3 of grsb and are in general concord with those shown in fig 17 here although again only the rise to the prominent low xmath1 peak is seen here note that now especially for the quadrupole excluded case the peaks and troughs are more prominent although still not greatly statistically significant furthermore comparing the solid
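Figures 15-21 use two different reductions of the two dimensional likelihood over (xmath1, xmath21): projecting (maximizing over the normalization at each xmath1) and marginalizing (integrating over it, again with a uniform prior). The toy likelihood below is invented, but it shows how the two reductions are formed and why they can peak at different xmath1.

import numpy as np

omega = np.linspace(0.1, 1.0, 91)
qgrid = np.linspace(5.0, 35.0, 301)
OM, Q = np.meshgrid(omega, qgrid, indexing="ij")

# invented 2-d likelihood: peak height falls with Omega0 while the width in Q grows
q_best = 25.0 - 10.0 * OM
width = 2.0 + 3.0 * OM
like2d = (1.4 - OM) * np.exp(-0.5 * ((Q - q_best) / width) ** 2)

projected = like2d.max(axis=1)                  # ridge-line / projected density
marginal = np.trapz(like2d, qgrid, axis=1)      # marginal density, uniform prior in Q

print("Omega0 at projected-density peak:", omega[np.argmax(projected)])
print("Omega0 at marginal-density peak :", omega[np.argmax(marginal)])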
line of fig 17b here to the heavy dotted line of fig 3 of grsb one notices that the intermediate xmath1 peak is now at xmath96 instead of at xmath97 for the dmr two year data since bw chose not to compute for the case when the quadrupole moment is excluded from the analysis they presumably did not notice the peak at xmath98 in the marginalized likelihood density for the open bubble inflation model see fig 17 for the open bubble inflation model now also accounting for both the fluctuations generated in the first spatially flat epoch of inflation bgt yst and those from the non square integrable basis function yst the dmr two year data ecliptic frame quadrupole included marginal likelihood shown as the solid line in fig 3 of yb is in general agreement with the dot dashed line of fig however yb did not compute for the case where the quadrupole moment was excluded from the analysis and so did not find the peak at xmath99 in fig 19 given the shapes of the marginal likelihoods in figs 1721 it is not at all clear if it is meaningful to derive limits on xmath1 without making use of other prior information as an example it is not at all clear what to use for the integration range in xmath1 focussing on fig 21a which is similar to the other quadrupole excluded cases the only conclusion seems to be that xmath9 is the value most consistent with the dmr data at least amongst those models with xmath14 some of the models have another peak at xmath100 grsb however when the quadrupole moment is included in the analysis as in fig 21b the open bubble inflation model peaks are at xmath12 at least in the range xmath14 grsb while the flat space scale invariant spectrum open model peak is at xmath11 at the 95 cl no value of xmath1 over the range considered 011 is excluded the yb and bw claims of a lower limit on xmath1 from the dmr data alone are at the very least premature the xmath58 eg eqs 1 and 2 were determined from a numerical integration of the linear perturbation theory equations of motion as before the computations were performed with two independent numerical codes for some of the model parameter values considered here the results of the two computations were compared and found to be in excellent agreement illustrative examples of the comparisons are shown in fig again we emphasize that the excellent agreement is mostly a reflection of the currently available numerical accuracy and the most likely additional unaccounted for source of uncertainty is that due to the uncertainty in the modelling of various physical effects table 8 list the xmath58 normalization amplitudes xmath42 eg eqs 1 and 2 when xmath101k examples of the power spectra normalized to xmath21 derived from the mean of the dmr four year data analysis extreme upper and lower 2xmath80 limits discussed above are shown in figs one will notice from fig 23e the good agreement between the open bubble inflation spectra when normalized to the two extreme 2xmath80 xmath21 limits eg cols 5 and 8 of table 10 the xmath58 normalization factor eq 1 and table 8 for the open bubble inflation model rp94 bgt yst may be summarized by for the lower 2xmath80 limit xmath102 eqno8 and for the upper 2xmath80 limit xmath103 eqno9 these fits are good to xmath104 for xmath14 note however that they are derived using the xmath21 values determined for given xmath62 and xmath63 and hence do not account for the additional uncertainty introduced by allowed variations in these parameters which could affect the power spectrum normalization amplitude by as much as xmath105 
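Equations 8 and 9 (whose explicit forms sit behind the placeholders above) condense the tabulated, numerically determined normalization amplitudes into smooth functions of xmath1. The snippet below illustrates only that generic step, fitting a low order polynomial in xmath1 to invented amplitude values and reporting the worst case fractional error, as is done when quoting how good such fitting formulae are.

import numpy as np

# made-up (Omega0, amplitude) pairs standing in for the numerically determined
# normalizations; the real values and units are in the paper's tables
omega = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
amp = np.array([8.1, 5.9, 4.8, 4.2, 3.9, 3.8, 3.9, 4.1, 4.5, 5.0])

# fit log(amplitude) with a cubic in Omega0 and report the worst-case error
coeffs = np.polyfit(omega, np.log(amp), deg=3)
fit = np.exp(np.polyval(coeffs, omega))
max_frac_err = np.max(np.abs(fit / amp - 1.0))
print("cubic-in-Omega0 fit coefficients (for log amplitude):", coeffs)
print(f"maximum fractional error of the fit: {100 * max_frac_err:.2f}%")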
from fig 23e and given the uncertainties we see that the fitting formulae of eqs 8 and 9 provide an adequate summary for all the open bubble inflation model spectra the extreme xmath106 xmath80 xmath58 normalization factor eq 2 and table 8 for the flat space scale invariant spectrum open model w83 may be summarized by for the lower 2xmath80 limit xmath107 (eq 10) and for the upper 2xmath80 limit xmath108 (eq 11) these fits are good to better than xmath88 for xmath109 again they are derived from xmath21 values determined at given xmath62 and xmath63 given the uncertainties involved in the normalization procedure born of both statistical and other arguments it is not yet possible to quote a unique dmr normalization amplitude g96 as a central value for the xmath58 normalization factor we currently advocate the mean of eqs 8 and 9 or eqs 10 and 11 as required we emphasize however that it is incorrect to draw conclusions about model viability based solely on this central value in conjunction with numerically determined transfer functions the fits of eqs 8-11 allow for a determination of xmath26 accurate to a few percent here the mean square linear mass fluctuation averaged over a sphere of coordinate radius xmath110 is $$\left\langle \left( \frac{\delta M}{M} \right)^{2} \right\rangle = \frac{2}{\pi^{2} \left( \sinh\bar\chi \, \cosh\bar\chi - \bar\chi \right)^{2}} \int_{0}^{\infty} \frac{dk}{(1+k^{2})^{2}} \left( \cosh\bar\chi \, \sin k\bar\chi - k \, \sinh\bar\chi \, \cos k\bar\chi \right)^{2} P(k)$$ which on small scales reduces to the usual flat space expression $$\left\langle \left( \frac{\delta M}{M} \right)^{2} \right\rangle = \frac{9}{2\pi^{2}} \int_{0}^{\infty} dk \, k^{2} P(k) \, \frac{\left( \sin k\bar\chi - k\bar\chi \, \cos k\bar\chi \right)^{2}}{(k\bar\chi)^{6}}$$ if instead use is made of the bardeen et al 1986 hereafter bbks analytic fit to the transfer function using the parameterization of eq 13 below sugiyama 1995 and numerically determined values for xmath42 the resultant xmath113 values are accurate to better than xmath87 except for large baryon fraction xmath114 models where the error could be as large as xmath115 use of the analytic fits of eqs 8-11 for xmath42 instead of the numerically determined values slightly increases the error while use of the bbks transfer function fit parameterized by an earlier version of eq 13 below xmath116 results in xmath26 values that could be off by as much as xmath117 nevertheless as has been demonstrated by llrv the approximate analytic fit to the transfer function greatly simplifies the computation and allows for rapid demarcation of the favoured part of cosmological parameter space (an illustrative sketch of this approximate route to xmath26 is given below) numerical values for some cosmographic and large scale structure statistics for the models considered here are recorded in tables 9-15 we emphasize that when comparing to observational data we make use of numerically determined large scale structure predictions not those derived using an approximate analytic fitting formula tables 9-12 give the predictions for the open bubble inflation model accounting only for the perturbations generated during the evolution inside the bubble rp94 and for the flat space scale invariant spectrum open model w83 each of these tables corresponds to a different pair of xmath118 values the first two columns in these tables record xmath1 and xmath2 and the third column is the cosmological baryonic matter fraction xmath119 the fourth column gives the value of the matter power spectrum scaling parameter sugiyama 1995 xmath120 which is used to parameterize approximate analytic fits to the power spectra derived from numerical integration of the perturbation equations the quantities listed in columns 1-4 of these tables are sensitive only to the global
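To make the approximate route to xmath26 described above concrete, the sketch below evaluates the small scale (flat space) top hat mass fluctuation at 8 h^-1 mpc using the BBKS transfer function and a shape parameter of the Sugiyama (1995) form, Gamma = Omega0 h exp(-Omega_B - sqrt(2h) Omega_B / Omega0), which is the parameterization we take eq 13 to be. The assumed scale invariant primordial shape and the overall amplitude are placeholders; the paper instead uses the numerically integrated spectra, the open model form of the integral, and the DMR determined normalizations, so only the structure of the calculation should be read off this sketch.

import numpy as np

def bbks_transfer(k, gamma):
    """bardeen et al (1986) cdm transfer function; k in h/Mpc, gamma the shape parameter."""
    q = k / gamma
    poly = 1.0 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4
    return np.log(1.0 + 2.34 * q) / (2.34 * q) * poly ** -0.25

def shape_parameter(omega0, omega_b, h):
    """sugiyama (1995) scaling parameter, assumed here to be the 'eq 13' parameterization."""
    return omega0 * h * np.exp(-omega_b - np.sqrt(2.0 * h) * omega_b / omega0)

def sigma_tophat(r_hmpc, gamma, ns=1.0, amplitude=1.0):
    """rms mass fluctuation from the flat-space small-scale integral with a top-hat window."""
    k = np.logspace(-4.0, 1.7, 6000)                          # wavenumber grid in h/Mpc
    x = k * r_hmpc
    window = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    pk = amplitude * k**ns * bbks_transfer(k, gamma) ** 2     # assumed primordial shape
    return np.sqrt(np.trapz(k**2 * pk * window**2, k) / (2.0 * np.pi**2))

omega0, omega_b, h = 0.35, 0.035, 0.65                        # illustrative values only
gamma = shape_parameter(omega0, omega_b, h)
print(f"Gamma = {gamma:.3f}")
print(f"sigma_8 (arbitrary amplitude) = {sigma_tophat(8.0, gamma):.3e}")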
parameters of the cosmological model columns 5 and 8 of tables 912 give the dmr data 2xmath80 range of xmath21 that is used to normalize the perturbations in the models considered here the numerical values in table 12 are for xmath121 gyr xmath122 we did not analyze the dmr data using xmath56 s for these models and in this case the perturbations are normalized to the xmath21 values from the xmath123 gyr xmath124 analyses as discussed above shifts in xmath2 and xmath63 do not greatly alter the inferred normalization amplitude columns 6 and 9 of tables 912 give the 2xmath80 range of xmath125 these were determined using the xmath58 derived from numerical integration of the perturbation equations for about two dozen cases these rms mass fluctuations determined using the two independent numerical integration codes were compared and found to be in excellent agreement at fixed xmath21 they differ by xmath126 depending on model parameter values with the typical difference being xmath127 we again emphasize that this is mostly a reflection of currently achievable numerical accuracy to usually better than xmath128 accuracy for xmath129 the 2xmath80 xmath130 entries of columns 6 and 9 of tables 912 may be summarized by the fitting formulae listed in table 14 these fitting formulae are more accurate than expressions for xmath26 derived at the same cosmological parameter values using an analytic approximation to the transfer function and the normalization of eqs 811 for open models as discussed below it proves most convenient to characterize the peculiar velocity perturbation by the parameter xmath131 where xmath132 is the linear bias factor for xmath133 galaxies eg peacock dodds 1994 the 2xmath80 range of xmath134 are listed in columns 7 and 10 of tables 912 table 13 compares the xmath113 values for spectra of types 13 above clearly there is no significant observational difference between the predictions for the different spectra in what follows for the open bubble inflation model we concentrate on the type 1 spectrum above again the ranges in tables 914 are those determined from the maximal 2xmath80 xmath21 range table 15 lists central dmr normalized values for xmath130 defined as the mean of the maximal xmath1352xmath80 entries of tables 912 the mean of the xmath1352xmath80 fitting formulae of table 14 may be used to interpolate between the entries of table 15 we again emphasize that it is incorrect to draw conclusions about model viability based solely on these central values for the purpose of constraining model parameter values by eg comparing numerical simulation results to observational data one must make use of computations at a few different values of the normalization selected to span the xmath1352xmath80 ranges of tables 912 the dmr likelihoods do not meaningfully exclude any part of the xmath1 xmath2 xmath63 parameter space for the models considered here in this sectionwe combine current observational constraints on global cosmological parameters with the dmr normalized model predictions to place constraints on the range of allowed model parameter values it is important to bear in mind that some measures of observational cosmology remain uncertain thus our analysis here must be viewed as tentative and subject to revision as the observational situation approaches equilibrium to constrain our model parameter valueswe have employed the most robust of the current observational constraints tables 912 list some observational predictions for the models considered here and the boldface entries are 
those that are inconsistent with current observational data at the 2xmath80 significance level for each cosmographic or large scale parameter we have generally chosen to use constraints from a single set of observations or from a single analysis we generally use the most recent analyses since we assume that they incorporate a better understanding of the uncertainties especially those due to systematics the specific constraints we use are summarized below where we compare them to those derived from other analyses the model predictions depend on the age of the universe xmath62 to reconcile the models with the high measured values of the hubble parameter xmath2 we have chosen to focus on xmath60 of 10.5 12 and 13.5 gyr which are near the lower end of the ages now under discussion for instance jimenez et al 1996 find that the oldest globular clusters have ages xmath136 gyr also see salaris degl'innocenti weiss 1996 renzini et al 1996 and that it is very unlikely that the oldest clusters are younger than 9.7 gyr the value of xmath1 is another input parameter for our computations as summarized by peebles 1993 xmath137 on scales xmath138 mpc a variety of different observational measurements indicate that xmath1 is low for instance virial analyses of x ray cluster data indicate xmath139 with a 2xmath80 range xmath140 carlberg et al 1996 we have added their 1xmath80 statistical and systematic uncertainties in quadrature and doubled to get the 2xmath80 uncertainty (this quadrature and doubling convention is illustrated in the sketch below) in a cdm model in which structure forms at a relatively high redshift as is observed these local estimates of xmath1 do constrain the global value of xmath1 since in this case it is inconceivable that the pressureless cdm is much more homogeneously distributed than is the observed baryonic mass we hence adopt a 2xmath80 upper limit of xmath141 to constrain the cdm models we consider here this large upper limit allows for the possibility that the models might be moderately biased the boldface entries in column 1 of tables 9-12 indicate those xmath1 values inconsistent with this constraint column 2 of tables 9-12 gives the value of the hubble parameter xmath2 that corresponds to the chosen values of xmath1 and xmath62 current observational data favours a larger xmath2 eg kennicutt freedman mould 1995 baum et al 1995 van den bergh 1995 sandage et al 1996 ruiz lapuente 1996 riess press kirshner 1996 but also see schaefer 1996 branch et al for the purpose of our analysis here we adopt the xmath142 value xmath143 1xmath80 uncertainty tanvir et al 1995 doubling the uncertainty the 2xmath80 range is xmath144 the boldface entries in column 2 of tables 9-12 indicate those model parameter values which predict an xmath2 inconsistent with this range comparison of the standard nucleosynthesis theoretical predictions for the primordial light element abundances to what is determined by extrapolation of the observed abundances to primordial values leads to constraints on xmath63 it has usually been argued that xmath145he and xmath146li allow for the most straightforward extrapolation from the locally observed abundances to the primordial values eg dar 1995 fields olive 1996 fields et al 1996 hereafter fkot the observed xmath145he and xmath146li abundances then suggest xmath147 and a conservative assessment of the uncertainties indicates a 2xmath80 range xmath148 fkot also see copi et al 1995 sarkar 1996 observational constraints on the primordial deuterium d abundance should in principle allow for a tightening of the allowed xmath63 range there are now a number
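Several of the 2 xmath80 ranges adopted in this section are constructed the same way: add the quoted 1 xmath80 statistical and systematic errors in quadrature and double the result. The helper below just makes that convention explicit; the numbers in the example calls are generic placeholders, not the values adopted in the text (those sit behind the placeholders above).

import math

def two_sigma_range(central, stat_1sigma, sys_1sigma=0.0):
    """combine 1-sigma statistical and systematic errors in quadrature, then double."""
    half_width = 2.0 * math.hypot(stat_1sigma, sys_1sigma)
    return central - half_width, central + half_width

# generic placeholder numbers, purely to show the convention
print(two_sigma_range(0.3, 0.05, 0.07))    # e.g. a density-parameter style estimate
print(two_sigma_range(0.7, 0.08))          # e.g. a hubble-parameter style estimate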
of different estimates of the primordial d abundance and since the field is still in its infancy it is perhaps not surprising that the different estimates are somewhat discrepant songaila et al 1994 carswell et al 1994 and rugers hogan 1996a b use observations of three high redshift absorption clouds to argue for a high primordial d abundance and so a low xmath63 tytler fan burles 1996 and burles tytler 1996 study two absorption clouds and argue for a low primordial d abundance and so a high xmath63 carswell et al 1996 and wampler et al 1996 examine other absorption clouds but are not able to strongly constrain xmath63 while the error bars on xmath63determined from these d abundance observations are somewhat asymmetric to use these results to qualitatively pick the xmath63 values we wish to examine we assume that the errors are gaussian and where needed add all uncertainties in quadrature to get the 2xmath80 uncertainties the large d abundance observations suggest xmath149 with a 2xmath80 range xmath150 rugers hogan 1996a when these large d abundances are combined with the observed xmath145he and xmath146li abundances they indicate xmath151 with a 2xmath80 range xmath152 fkot the large d abundances are consistent with the standard interpretation of the xmath145he and xmath146li abundances and with the standard model of particle physics with three massless neutrino species they do however seem to require a modification in galactic chemical evolution models to be consistent with local determinations of the d and xmath153he abundances eg fkot cardall fuller 1996 the low d abundance observations favour xmath154 with a 2xmath80 range xmath155 burles tytler 1996 the low d abundance observations seem to be more easily accommodated in modifications of the standard model of particle physics ie they are difficult to reconcile with exactly three massless neutrino species alternatively they might indicate a gross as yet unaccounted for uncertainty in the observed xmath145he abundance burles tytler 1996 cardall fuller 1996 the low d abundance is approximately consistent with locally observed d abundances but probably requires some modification in the usual galactic chemical evolution model for xmath146li burles tytler 1996 cardall fuller 1996 to accommodate the range of xmath63 now under discussion we compute model predictions for xmath124 table 9 0007 table 12 00125 table 10 and 00205 table 11 we shall find that this uncertainty in xmath63 precludes determination of robust constraints on model parameter values fortunately recent improvements in observational capabilities should eventually lead to a tightening of the constraints on xmath63 and so allow for tighter constraints on the other cosmological parameters column 3 of tables 912 give the cosmological baryonic mass fraction for the models we consider here the cluster baryonic mass fraction is the sum of the cluster galactic mass and gas mass fractions assuming that the white et al 1993 1xmath80 uncertainties on the cluster total galactic and gas masses are gaussian and adding them in quadrature we find for the 2xmath80 range of the cluster baryonic mass fraction xmath156 elbaz arnaud bhringer 1995 white fabian 1995 david jones forman 1995 markevitch et al 1996 and buote canizares 1996 find similar or larger gas mass fractions note that elbaz et al 1995 and white fabian 1995 find that the gas mass error bars are somewhat asymmetric this non gaussianity is ignored here assuming that the cluster baryonic mass fraction is an unbiased estimate of the 
cosmological baryonic mass fraction we may use eq 15 to constrain the cosmological parameters the boldface entries in column 3 of tables 9 12 indicates those model parameter values which predict a cosmological baryonic mass fraction inconsistent with the range of eq 15 viana liddle 1996 hereafter vl have reanalyzed the combined galaxy xmath58 data of peacock dodds 1994 ignoring some of the smaller scale data where nonlinear effects might be somewhat larger than previously suspected using an analytic approximation to the xmath58 they estimate that the scaling parameter eq 13 in the exponent of eq 13 so the numerical values of their constraint on xmath157 should be reduced slightly we ignore this small effect here xmath158 with a 2xmath80 range xmath159 this estimate is consistent with earlier ones than eq 16 this is one reason why llrv favour a higher xmath1 for the open bubble inflation model than do grsb it might be of interest to determine whether the wiggles in xmath58 due to the pressure in the photon baryon fluid see figs 23 can significantly affect the determination of xmath157 especially in large xmath119 models these wiggles are not well described by the analytic approximation to xmath58 the boldface entries in column 4 of tables 912 indicates those model parameter values which predict a scaling parameter value inconsistent with the range of eq 16 to determine the value of the linear bias parameter xmath160 xmath161 where xmath162 is the rms fractional perturbation in galaxy number we adopt the apm value maddox efstathiou sutherland 1996 of xmath163 096 with 2xmath80 range xmath164 where we have added the uncertainty due to the assumed cosmological model and due to the assumed evolution in quadrature with the statistical 1xmath80 uncertainty maddox et al 1996 eq 43 and doubled to get the 2xmath80 uncertainty the range of eq 18 is consistent with that determined from eqs 733 and 773 of peebles 1993 the local abundance of rich clusters as a function of their x ray temperature provides a tight constraint on xmath113 eke cole frenk 1996 hereafter ecf and s cole private communication 1996 find for the open model at 2xmath80 xmath165 where we have assumed that the ecf uncertainties are gaussian and that in general it depends weakly on the value of xmath157 and so on the value of xmath2 and xmath10 see fig 13 of ecf in our preliminary analysis herewe ignore this mild dependence on xmath2 and xmath10 also note that the constraint of eq 19 is approximately that required for consistency with the observed cluster correlation function the constraints of eq 19 are consistent with but more restrictive than those derived by vl 060 for fiducial cdm which is at the xmath1662xmath80 limit of eq as discussed in ecf this is because vl normalize to the cluster temperature function at 7 kev where there is a rise in the temperature function this is one reason why llrv favour a higher value of xmath1 for the open bubble inflation model than did grsb this is because ecf use observational data over a larger range in x ray temperature to constrain xmath167 and also use n body computations at xmath168 03 and 1 to calibrate the press schechter model which is used in their determination of the constraints furthermore ecf also make use of hydrodynamical simulations of a handful of individual clusters in the fiducial cdm model xmath69 to calibrate the relation between the gas temperature and the cluster mass and then use this calibrated relation for the computations at all values of xmath1 the initial conditions 
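The use of eq 15 as a constraint (the boldface test on column 3 of tables 9-12) can be written out explicitly: each model predicts a cosmological baryon fraction Omega_B/Omega0 = (Omega_B h^2)/(Omega0 h^2), and the entry is flagged if that prediction falls outside the observed 2 xmath80 cluster range of eq 15. The observed interval sits behind a placeholder in the text, so the range used below is an invented stand-in, as are the model rows.

def predicted_baryon_fraction(omega_b_h2, omega0, h):
    """cosmological baryon fraction Omega_B / Omega0 for a given model."""
    return omega_b_h2 / (omega0 * h**2)

# invented stand-in for the observed 2-sigma cluster baryon fraction range of eq 15
obs_lo, obs_hi = 0.04, 0.20

for omega0, h in [(0.1, 0.85), (0.3, 0.65), (0.5, 0.60), (1.0, 0.50)]:
    fb = predicted_baryon_fraction(omega_b_h2=0.0055, omega0=omega0, h=h)
    flag = "" if obs_lo <= fb <= obs_hi else "   <-- would be boldfaced"
    print(f"Omega0 = {omega0:4.2f}  h = {h:4.2f}  Omega_B/Omega0 = {fb:6.3f}{flag}")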
for all the simulations were set using the analytical approximation to xmath58 so again it might be of interest to see whether the wiggles in the numerically integrated xmath58 could significantly affect the determination of the constraints of eq kitayama suto 1996 use x ray cluster data and a method that allows for the fact that clusters need not have formed at the redshift at which they are observed to directly constrain the value of xmath1 for cdm cosmogonies normalized by the dmr two year data their conclusions are in resonable accord with what would be found by using eq 19 derived assuming that observed clusters are at their redshifts of formation however kitayama suto 1996 note that evolution from the redshift of formation to the redshift of observation can affect the conclusions so a more careful comparison of these two results is warranted the boldface entries in columns 6 and 9 of tables 912 indicate those model parameter values whose predictions are inconsistent with the constraints of eq 19 1xmath80 uncertainty of eq 19 approximate analyses based on using the analytic bbks approximation to the transfer function should make use of the more accurate parameterization of eq 13 rather than that with xmath169 in the exponent as this gives xmath26 to better than xmath87 in the observationally viable part of parameter space provided use is made of the numerically determined values of xmath42 from large scale peculiar velocity observational data zaroubi et al 1996 estimate xmath26 085 pm 02omega006 2xmath80 it might be significant that the large scale peculiar velocity observational data constraint is somewhat discordant with higher than the cluster temperature function constraint since xmath170 is less sensitive to smaller length scales compared to xmath26 observational constraints on xmath170 are more reliably contrasted with the linear theory predictions however since xmath170 is sensitive to larger length scales the observational constraints on xmath170 are significantly less restrictive than the xmath171 1xmath80 constraints of eq 19 and so we do not record the predicted values of xmath170 here observational constraints on the mass power spectrum determined from large scale peculiar velocity observations provide another constraint on the mass fluctuations kolatt dekel 1995 find at the 1xmath80 level xmath172 where the 1xmath80 uncertainty also accounts for sample variance t kolatt private communication 1996 since the uncertainties associated with the constraint of eq 19 are more restrictive than those associated with the constraint of eq 20 we do not tabulate predictions for this quantity here however comparison may be made to the predicted linear theory mass power spectra of figs 23 bearing in mind the xmath173 2xmath80 uncertainty of eq 20 the uncertainty is approximately gaussian t kolatt private communication 1996xmath80 significance level eq 20 provides a strong upper limit on xmath174 especially at larger xmath1 because of the xmath1 dependence and the uncertainty in the dmr normalization not shown in figs 23 columns 7 and 10 of tables 912 give the dmr normalized model predictions for xmath134 eq 14 cole fisher weinberg 1995 measure the anisotropy of the redshift space power spectrum of the xmath133 12 jy survey and conclude xmath175 with a 2xmath80 cl range xmath176 where we have doubled the error bars of eq 51 of cole et al 1995 to get the 2xmath80 range cole et al 1995 table 1 compare the estimate of eq 21 to other estimates of xmath134 and at 2xmath80 all estimates of 
xmath134 are consistent it should be noted that the model predictions of xmath134 eq 14 in tables 912 assume that for xmath133 galaxies xmath163 113 holds exactly ie they ignore the uncertainty in the rms fractional perturbation in xmath133 galaxy number which is presumably of the order of that in eq 18 as the constraints from the deduced xmath134 values eq 21 are not yet as restrictive as those from other large scale structure measures we do not pursue this issue in our analysis here the boldface entries in columns 7 and 10 of tables 912 indicate those model parameter values whose predictions are inconsistent with the constraints of eq 21 the boldface entries in tables912 summarize the current constraints imposed by the observational data discussed in the previous section on the model parameter values for the open bubble inflation model spectra of type 1 above and for the flat space scale invariant spectrum open model type 4 above the current observational constraints on the models are not dissimilar but this is mostly a reflection of the uncertainty on the constraints themselves since the model predictions are fairly different in the following discussion of the preferred part of model parameter spacewe focus on the open bubble inflation model rp94 note from table 13 that the large scale structure predictions of the open bubble inflation model do not depend on perturbations generated in the first epoch of inflation bgt yst and also do not depend significantly on the contribution from the non square integrable basis function yst table 9 corresponds to the part of parameter space with maximized small scale power in matter fluctuations this is accomplished by picking a low xmath123 gyr and so large xmath2 and by picking a low xmath124 this is the lower 2xmath80 limit from standard nucleosynthesis and the observed xmath145he xmath146li and high d abundances fkot the tightest constraints on the model parameter values come from the matter power spectrum observational data constraints on the shape parameter xmath157 table 9 col 4 and from the cluster x ray temperature function observational data constraints on xmath26 col note that for xmath177 the predicted upper 2xmath80 value of xmath113 069 while ecf conclude that at 2xmath80 the observational data requires that this be at least 074 so an xmath177 case fails this test the constraints on xmath134 col 7 are not as restrictive as those on xmath113 for these values of xmath62 and xmath178 the cosmological baryonic mass fraction at xmath177is predicted to be 0033 col 3 while at 2xmath80 white et al 1993 require that this be at least 0039 at xmath179 so again this xmath177 model just fails this test given the observational uncertainties it might be possible to make minor adjustments to model parameter values so that an xmath180 model with xmath181 gyr and xmath182 is just consistent with the observational data however it is clear that current observational data do not favour an open model with xmath183 the observed cluster xmath184 favours a larger xmath1 while the observed cluster baryonic mass fraction favours a smaller xmath1 and so are in conflict table 10 gives the predictions for the xmath121 gyr xmath185 models this value of xmath63 is consistent with the 2xmath80 range determined from standard nucleosynthesis and the observed xmath145he and xmath146li abundances xmath148 fkot also see copi et al 1995 sarkar 1996 it is however somewhat difficult to reconcile xmath8 with the 2xmath80 range derived from the observed xmath145he xmath146li and 
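The boldface bookkeeping used throughout tables 9-12 amounts to checking each model's predictions against a set of 2 xmath80 intervals, with the redshift-space distortion parameter of eq 14 built from the matter density and the bias. A schematic version is given below; both the model rows and the observational intervals are invented placeholders, since the real numbers live in the tables and in the equations referenced above.

# schematic version of the "boldface" consistency test of tables 9-12;
# every number below is an invented placeholder
def beta(omega0, bias):
    """redshift-space distortion parameter, Omega0**0.6 / b (eq 14)."""
    return omega0**0.6 / bias

observed_2sigma = {
    "h": (0.55, 0.85),
    "baryon_fraction": (0.04, 0.20),
    "gamma": (0.15, 0.35),
    "sigma8": (0.5, 1.1),
    "beta": (0.3, 0.8),
}

models = [
    {"omega0": 0.2, "h": 0.75, "baryon_fraction": 0.08, "gamma": 0.13, "sigma8": 0.65, "bias": 1.1},
    {"omega0": 0.6, "h": 0.60, "baryon_fraction": 0.05, "gamma": 0.33, "sigma8": 1.05, "bias": 1.0},
]

for model in models:
    model["beta"] = beta(model["omega0"], model["bias"])
    failed = [name for name, (lo, hi) in observed_2sigma.items()
              if not lo <= model[name] <= hi]
    verdict = "consistent" if not failed else "would be boldfaced in: " + ", ".join(failed)
    print(f"Omega0 = {model['omega0']:.1f}: {verdict}")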
current high d abundances xmath152 fkot or with that from the current observed low d abundances xmath186 burles tytler 1996 in any case the observed d abundances are still under discussion and must be viewed as preliminary in this case open bubble inflation models with xmath187 are consistent with the observational constraints the current central observational data values for xmath157 and xmath134 favour xmath74 while that for the cluster baryonic mass fraction prefers xmath188 and that for xmath130 favours xmath189 so in this case the agreement between predictions and observational data is fairly impressive although the tanvir et al 1995 central xmath2 value favours xmath190 note that in this case models with xmath191 are quite inconsistent with the data table 11 gives the predictions for xmath192 gyr xmath193 models this baryonic mass density value is consistent with that determined from the current observed low d abundances but is difficult to reconcile with the current standard nucleosynthesis interpretation of the observed xmath145he and xmath146li abundances cardall fuller 1996 the larger value of xmath63 and smaller value of xmath2 has now lowered small scale power in mass fluctuations somewhat significantly opening up the allowed xmath1 range to larger values models with xmath194 are consistent with the observational data although the higher xmath1 part of the range is starting to conflict with what is determined from the small scale dynamical estimates and the models do require a somewhat low xmath2 but not yet inconsistently so at the 2xmath80 significance level while the tanvir et al 1995 central xmath2 value requires xmath100 at 2xmath80 the xmath2 constraint only requires xmath195 the central observational values for xmath157 the cluster baryonic mass fraction xmath26 and xmath134 favour xmath97 so the agreement with observational data is fairly impressive and could even be improved by reducing xmath62 a little to raise xmath2 table 12 gives the predictions for another part of model parameter space here we show xmath122 models at xmath121 gyr consistent with the central value of xmath63 determined from standard nucleosynthesis using the observed xmath145he xmath146li and high d abundances fkot the larger value of xmath63 compared to table 9 eases the cluster baryonic mass fraction constraint which now requires only xmath196 the increase in xmath63 also decreases the mass fluctuation amplitude making it more difficult to argue for xmath177 however models with xmath197 seem to be consistent with the observational constraints when xmath4 and xmath198 gyr it is interesting that in this case the central observational data values we consider for xmath157 for xmath26 and for xmath134 prefer xmath9 however that for the cluster baryonic mass fraction as well as that for xmath2 favours xmath190 although at 2xmath80 the cluster baryonic mass fraction constraint only requires xmath196 hence while xmath199 open bubble inflation models with xmath200 and xmath198 gyr are quite consistent with the observational constraints in this case the agreement between predictions and observations is not spectacular note that in this case models with xmath201 are quite inconsistent with the observational data in summary open bubble inflation models based on the cdm picture rp94 bgt yst are reasonably consistent with current observational data provided xmath202 the flat space scale invariant spectrum open model w83 is also reasonably compatible with current observational constraints for a similar range of 
xmath1 the uncertainty in current estimates of xmath63 is one of the major reasons why such a large range in xmath1 is consistent with current observational constraints our previous analysis of the dmr two year data led us to conclude that only those open bubble inflation models near the lower end of the above range xmath203 were consistent with the majority of observations grsb the increase in the allowed range to higher xmath1 values xmath204 can be ascribed to a number of small effects specifically these are 1 the slight downward shift in the central value of the dmr four year normalization relative to the two year one g96 2 use of the full 2xmath80 range of normalizations allowed by the dmr data analysis instead of the 1xmath80 range allowed by the galactic frame quadrupole excluded dmr two year data set used previously 3 use of the 2xmath80 range of the small scale dynamical estimates of xmath1 instead of the 1xmath80 range used in our earlier analysis 4 we consider a range of xmath63 values here in grsb we focussed on xmath8 and 5 we consider a range of xmath62 values here in grsb we concentrated on xmath121 gyr we emphasize however that the part of parameter space with xmath205 is only favoured if xmath63 is large xmath206 xmath2 is low xmath207 and the small scale dynamical estimates of xmath1 turn out to be biased somewhat low the observational results we have used to constrain model parameter values in the previous sections are the most robust currently available in addition there are several other observational results which we do not consider to be as robust and any conclusions drawn from these should be treated with due caution in this sectionwe summarize several of the more tentative constraints from more recent observations in our analysis of the dmr two year data normalized models we compared model predictions for the rms value of the smoothed peculiar velocity field to results from the analysis of observational data bertschinger et al we do not do so again here since given the uncertainties the conclusions drawn in grsb are not significantly modified in particular comparison of the appropriate quantities implies that we can treat the old 1xmath80 upper limits essentially as 2xmath80 upper limits for the four year analysis in grsbwe used xmath134 determined by nusser davis 1994 xmath208 2xmath80 to constrain the allowed range of models to xmath209 herewe use the cole et al 1995 estimate xmath210 2xmath80 which for the models of table 10 requires xmath211 this value is just slightly below the lower limit xmath212 derived from the bertschinger et al 1990 results in grsb we hence conclude that the large scale flow results of bertschinger et al 1990 indicates a lower 2xmath80 limit on xmath1 that is about xmath213 higher than that suggested by the redshift space distortion analysis of cole et al 1995 we however strongly emphasize that the central value of the large scale flow results of bertschinger et al 1990 does favour a significantly larger value of xmath1 than the rest of the data we have considered here furthermore as discussed in detail in grsb there is some uncertainty in how to properly interpret large scale velocity data in the open models particularly given the large sample variance associated with the measurement of a single bulk velocity bond 1996 also see llrv a more careful analysis as well as more observational data is undoubtedly needed before it will be possible to robustly conclude that the large scale velocity data does indeed force one to consider 
significantly larger values of xmath1 than is favoured by the rest of the observational constraints and hence rules out the models considered here it might be significant that on comparing the mass power spectrum deduced from a refined set of peculiar velocity observations to the galaxy power spectrum determined from the apm survey kolatt dekel 1995 estimate that for the optically selected apm galaxies xmath214 with a 2xmath80 range xmath215 note that it has been argued that systematic uncertainties preclude a believable determination of xmath134 from a comparison of the observed large scale peculiar velocity field to the xmath133 12 jy galaxy distribution davis nusser willick 1996 this range is consistent with other estimates now under discussion the stromlo apm comparison of loveday et al 1996 indicates xmath216 with a 2xmath80 upper limit of 075 while baugh 1996 concludes that xmath217 2xmath80 and ratcliffe et al 1996 argue for xmath218 using the apm range for xmath163 18 the kolatt dekel 1995 estimate of xmath219 eq 22 may be converted to an estimate of xmath167 and at 2xmath80 xmath220 it is interesting that at xmath69 the lower part of this range is consistent with that determined from the cluster x ray temperature function data eq 19 although at lower xmath1 eq 23 indicates a larger value then does eq 19 because of the steeper rise to low xmath1 zaroubi et al 1996 have constrained model parameter values by comparing large scale flow observations to that predicted in the dmr two year data normalized open bubble inflation model they conclude that the open bubble inflation model provides a good description of the large scale flow observations if at 2xmath80 xmath221 from table 12 we see that an open bubble inflation model with xmath222 and xmath223 provides a good fit to all the observational data considered in xmath224 for xmath223 zaroubi et al 1996 conclude that at 2xmath80 xmath225 eq 24 just above our value of xmath222 since the zaroubi et al 1996 analysis does not account for the uncertainty in the dmr normalization t kolatt private communication 1996 it is still unclear if the constraints from the large scale flow observations are in conflict with those determined from the other data considered here and so rule out the open bubble inflation model it might also be significant that on somewhat smaller length scales there is support for a smaller value of xmath1 from large scale velocity field data shaya peebles tully 1995 the cluster peculiar velocity function provides an alternate mechanism for probing the peculiar velocity field eg croft efstathiou 1994 moscardini et al 1995 bahcall oh 1996 bahcall oh 1996 conclude that current observational data is well described by an xmath177 flatxmath18 model with xmath226 and xmath227 067 this normalization is somewhat smaller than that indicated by the dmr data eg ratra sugiyama 1995 while bahcall oh 1996 did not compare the cluster peculiar velocity function data to the predictions of the open bubble inflation model approximate estimates indicate that this data is consistent with the open bubble inflation model predictions for the range of xmath1 favoured by the other data we consider in xmath228 see the xmath26 values for the allowed models in tables 912 bahcall oh 1996 also note that it is difficult if not impossible to reconcile the cluster peculiar velocity observations with what is predicted in high density models like fiducial cdm and mdm at fixed xmath113 low density cosmogonies form structure earlier than high density ones thus 
observations of structure at high redshift may be used to constrain the matter density as benchmarks we note that scaling from the results of the numerical simulations of cen ostriker 1993 in a open model with xmath229 08 galaxy formation peaks at a redshift xmath230 when xmath231 and at xmath232 when xmath72 thus the open bubble inflation model is not in conflict with observational indications that the giant elliptical luminosity function at xmath233 is similar to that at the present eg lilly et al 1995 glazebrook et al 1995 i m et al 1996 nor is it in conflict with observational evidence for massive galactic disks at xmath233 vogt et al these models can also accommodate observational evidence of massive star forming galaxies at xmath234 cowie hu songaila 1995 as well as the significant peak at xmath235 in the number of galaxies as a function of photometric redshift found in the hubble deep field gwyn hartwick 1996 and it is not inconceivable that objects like the xmath236 protogalaxy candidate yee et al 1996 ellingson et al 1996 can be produced in these models it is however at present unclear whether the open bubble inflation model can accommodate a substantial population of massive star forming galaxies at xmath237 steidel et al 1996 giavalisco steidel macchetto 1996 and if there are many more examples of massive damped lymanxmath238 systems like the one at xmath239 eg lu et al 1996 wampler et al 1996 fontana et al 1996 then depending on the masses these might be a serious problem for the open bubble inflation model on the other hand the recent discovery of galaxy groups at xmath240 eg francis et al 1996 pascarelle et al 1996 probably do not pose a serious threat for the open bubble inflation model while massive clusters at xmath241 eg luppino gioia 1995 pell et al 1996 can easily be accommodated in the model it should be noted that in adiabatic xmath69 models normalized to fit the present small scale observations eg fiducial cdm with a normalization inconsistent with that from the dmr or mdm or tilted cdm without a cosmological constant it is quite difficult if not impossible to accommodate the above observational indications of early structure formation eg ma bertschinger 1994 ostriker cen 1996 with the recent improvements in observational capabilities neoclassical cosmological tests hold great promise for constraining the world model it might be significant that current constraints from these tests are consistent with that region of the open bubble inflation model parameter space that is favoured by the large scale structure constraints these tests include the xmath142 elliptical galaxy number counts test driver et al 1996 an early application of the apparent magnitude redshift test using type ia supernovae perlmutter et al 1996 as well as analyses of the rate of gravitational lensing of quasars by foreground galaxies eg torres waga 1996 kochanek 1996 it should be noted that these tests are also consistent with xmath69 models and plausibly with a time variable cosmological constant dominated spatially flat model eg ratra quillen 1992 torres waga 1996 but they do put pressure on the flatxmath18 cdm model smaller scale cmb spatial anisotropy measurements will eventually significantly constrain the allowed range of model parameter values fig 24 compares the 1xmath80 range of cmb spatial anisotropy predictions for a few representative open bubble inflation as well as flat space scale invariant spectrum open models to available cmb spatial anisotropy observational data from a preliminary 
comparison of the predictions of dmr two year datanormalized open bubble inflation models to available cmb anisotropy observational data ratra et al 1995 concluded that the range of parameter space for the open bubble inflation model that was favoured by the other observational data was also consistent with the small scale cmb anisotropy data this result was quantified by grs who also considered open bubble inflation models normalized to the xmath1351xmath80 values of the dmr two year data and hence considered open bubble inflation models normalized at close to the dmr four year data value see figs 5 and 6 of grs grs discovered that given the uncertainties associated with the smaller scale measurements the 1xmath80 uncertainty in the value of the dmr normalization precludes determination of robust constraints on model parameter values although the range of model parameter space for the open bubble inflation model favoured by the analysis here was found to be consistent with the smaller scale cmb anisotropy observations and xmath93 open bubble inflation models were not favoured by the smaller scale cmb anisotropy observational data grs figs 5 and 6 is favoured but even at 1xmath80 xmath242 is allowed this broad range is consistent with the conclusion of grs that it is not yet possible to meaningfully constrain cosmological parameter values from the cmb anisotropy data alone note also that hancock et al 1996b do not consider the effects of the systematic shifts between the various dmr data sets and also exclude a number of data points eg the four msam points and the max3 mup point which is consistent with the recent max5 mup result lim et al 1996 which do not disfavour a lower value of xmath1 for the open bubble inflation model ratra et al 1995 grs a detailed analysis of the ucsb south pole 1994 cmb anisotropy data gundersen et al 1995 by ganga et al 1996a reaches a similar conclusion at 1xmath80 assuming a gaussian marginal probability distribution the data favours open bubble inflation models with xmath243 while at 2xmath80 the ucsb south pole 1994 data is consistent with the predictions of the open bubble flatxmath18 and fiducial cdm inflation models we have compared the dmr 53 and 90 ghz sky maps to a variety of open model cmb anisotropy angular spectra in order to infer the normalization of these open cosmogonical models our analysis explicitly quantifies the small shifts in the inferred normalization amplitudes due to 1 the small differences between the galactic and ecliptic coordinate sky maps 2 the inclusion or exclusion of the xmath85 moment in the analysis and 3 the faint high latitude galactic emission treatment we have defined a maximal 2xmath80 uncertainty range based on the extremal solutions of the normalization fits and a maximal 1xmath80 uncertainty range may be defined in a similar manner for thismaximal 1xmath80 xmath21 range the fractional 1xmath80 uncertainty at fixed xmath10 and xmath2 but depending on the assumed cmb anisotropy angular spectrum and model parameter values ranges between xmath244 and xmath245 statistical and systematic uncertainty of bw footnote 4 also see bunn liddle white 1996 xmath246 is smaller than the dmr four year data 1xmath80 uncertainty estimated in eg g96 wright et al 1996 and here this is because we explicitly estimate the effect of all known systematic uncertainties for each assumed cmb anisotropy angular spectrum and account for them in the most conservative manner possible as small shifts in particular we do not just account for the small 
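The "maximal 2 xmath80 range" described in this section is simply the envelope of the normalization fits obtained from the different data set treatments (galactic versus ecliptic frame, faint high latitude galactic emission correction applied or not, quadrupole moment included or excluded). The sketch below shows that bookkeeping for a set of invented fits; the eight combinations and their actual values are in tables 1-7.

# invented (lower, upper) 2-sigma normalization intervals from the eight
# data-set treatments; the real values are in tables 1-7
fits_2sigma = [
    (16.2, 21.0), (16.8, 21.8), (15.9, 20.6), (16.5, 21.3),
    (17.1, 22.0), (17.6, 22.7), (16.9, 21.9), (17.4, 22.4),
]

maximal_lo = min(lo for lo, _ in fits_2sigma)
maximal_hi = max(hi for _, hi in fits_2sigma)
central = 0.5 * (maximal_lo + maximal_hi)
frac_uncertainty = (maximal_hi - maximal_lo) / (2.0 * central)

print(f"maximal 2-sigma range: [{maximal_lo}, {maximal_hi}]")
print(f"fractional half-width: {100 * frac_uncertainty:.1f}%")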
systematic difference between the galactic and ecliptic frame maps we do not assume that any of the small systematic differences lead to model independent systematic shifts in the inferred xmath21 values and we do not add the systematic shifts in quadrature with the statistical uncertainty since our accounting of the uncertainties is the most conservative possible our conclusions about model viability are the most robust possible compare this to the xmath247 1xmath80 uncertainty of eq 19 since part of this uncertainty is due to the small systematic shifts the maximal 2xmath80 fractional uncertainty is smaller than twice the maximal 1xmath80 fractional uncertainty for the largest possible 2xmath80 xmath21 range defined above the fractional uncertainty varies between xmath248 and xmath249 note that this accounts for intrinsic noise cosmic variance and effects 13 above other systematic effects eg the calibration uncertainty kogut et al 1996b or the beamwidth uncertainty wright et al 1994 are much smaller than the effects we have accounted for here it has also been shown that there is negligible non cmb contribution to the dmr data sets from known extragalactic astrophysical foregrounds banday et al 1996b by analyzing the dmr maps using cmb anisotropy spectra at fixed xmath1 but different xmath2 and xmath10 we have also explicitly quantified the small shifts in the inferred normalization amplitude due to shifts in xmath2 and xmath10 although these shifts do depend on the value of xmath1 and the assumed model power spectrum given the other uncertainties it is reasonable to ignore these small shifts when normalizing the models considered in this work we have analyzed the open bubble inflation model accounting only for the fluctuations generated during the evolution inside the bubble rp94 including the effects of the fluctuations generated in the first epoch of spatially flat inflation bgt yst and finally accounting for the contribution from a non square integrable basis function yst for observationally viable open bubble models the observable predictions do not depend significantly on the latter two sources of anisotropy the observable predictions of the open bubble inflation scenario seem to be robust it seems that only those fluctuations generated during the evolution inside the bubble need to be accounted for as discussed in the introduction a variety of more specific realizations of the open bubble inflation scenario have recently come under scrutiny these are based on specific assumptions about the vacuum state prior to open bubble nucleation in these specific realizations of the open bubble inflation scenariothere are a number of additional mechanisms for stress energy perturbation generation in addition to those in the models considered here including those that come from fluctuations in the bubble wall as well as effects associated with the nucleation of a nonzero size bubble while current analyses suggest that such effects also do not add a significant amount to the fluctuations generated during the evolution inside the bubble it is important to continue to pursue such investigations both to more carefully examine the robustness of the open bubble inflation scenario predictions as well as to try to find a reasonable particle physics based realization of the open bubble inflation scenario as has been previously noted for other cmb anisotropy angular spectra g96 the various different dmr data sets lead to slightly different xmath21 normalization amplitudes but well within the statistical 
uncertainty this total range is slightly reduced if one considers results from analyses either ignoring or including the quadrupole moment the dmr data alone can not be used to constrain xmath1 over range xmath14 in a statistically meaningful fashion for the open models considered here it is however reasonable to conclude that when the quadrupole moment is excluded from the analysis the xmath9 model cmb anisotropy spectral shape is most consistent with the dmr data while the quadrupole included analysis favours xmath12 for the open bubble inflation model in the range xmath250 current cosmographic observations in conjunction with current large scale structure observations compared to the predictions of the dmr normalized open bubble inflation model derived here favour xmath202 the large allowed range is partially a consequence of the current uncertainty in xmath10 this range is consistent with the value weakly favoured xmath9 by a quadrupole excluded analysis of the dmr data alone it might also be significant that mild bias is indicated both by the need to reconcile these larger values of xmath1 with what is determined from small scale dynamical estimates as well as to reconcile the smaller dmr normalized xmath251 values for this favoured range of xmath1 with the larger observed galaxy number fluctuations eg eq 18 in common with the low density flatxmath18 cdm model we have established that in the low density open bubble cdm model one may adjust the value of xmath1 to accommodate a large fraction of present observational constraints for a broad class of these models with adiabatic gaussian initial energy density perturbations this focuses attention on values of xmath1 that are larger than the range of values for xmath10 inferred from the observed light element abundances in conjunction with standard nucleosynthesis theory whether this additional cdm is nonbaryonic or is simply baryonic material that does not take part in standard nucleosynthesis remains a major outstanding puzzle for these models we acknowledge the efforts of those contributing to the xmath0dmr xmath0 is supported by the office of space sciences of nasa headquarters we also acknowledge the advice and assistance of c baugh s cole j garriga t kolatt c park l piccirillo g rocha g tucker d weinberg and k yamamoto rs is supported in part by a pparc grant and kbn grant 2p30401607 1fractional differences xmath253 between the cmb spatial anisotropy multipole coefficients xmath56 computed using the two boltzmann transfer codes and normalized to agree at xmath254 heavy type is for the open bubble inflation model spectrum accounting only for perturbations that are generated during the evolution inside the bubble type 1 spectra above and light type is for the open bubble inflation model spectrum now also accounting for perturbations generated in the first epoch of inflation type 2 spectra solid lines are for xmath255 and dashed lines are for xmath256 these are for xmath64 and xmath65 note that xmath257 2a cmb anisotropy multipole coefficients for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble rp94 solid lines and also accounting for fluctuations generated in the first epoch of inflation bgt yst dotted lines these overlap the solid lines except at the lowest xmath1 and smallest xmath66 for xmath258 01 02 025 03 035 04 045 05 06 08 and 10 in ascending order these are for xmath121 gyr and xmath8 the coefficients are normalized relative to the xmath259 amplitude and 
different values of xmath1 are offset from each other to aid visualization in b are the set of cmb anisotropy spectra for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble rp94 with xmath255 and xmath256 for the three different pairs of values xmath62 xmath63 xmath260 gyr xmath261 xmath262 gyr xmath263 and xmath264 gyr xmath265 spectra in the two sets are normalized to have the same xmath259 and xmath63 increases in ascending order on the right axis 3cmb spatial anisotropymultipole coefficients for the flat space scale invariant spectrum open model w83 conventions and parameter values are as in the caption of fig 2 although only one set of spectra are shown in fig 3a fig 4cmb spatial anisotropy multipole coefficients for the open bubble inflation spectrum also accounting for both fluctuations generated in the first epoch of inflation and that corresponding to a non square integrable basis function yst solid lines and ignoring both these fluctuations rp94 dotted lines they are in ascending order for xmath258 01 to 09 in steps of 01 with xmath64 and xmath65 normalized relative to the xmath259 amplitude and different values of xmath1 are offset from each other to aid visualization 5cmb spatial anisotropy multipole coefficients as a function of xmath66 for the various spectra considered in this paper at xmath255 and xmath266 vertically offset light solid and heavy solid lines show the open bubble inflation cases accounting for type 2 spectra above and ignoring type 1 spectra at xmath256 these completely overlap the type 2 spectra fluctuations generated in the first epoch of inflation dashed lines show the open bubble inflation models now also accounting for the contribution from the non square integrable basis function type 3 spectra dotted lines show the flat space scale invariant spectrum open model spectra type 4 spectra all spectra are for xmath64 and xmath65 6likelihood functions xmath79 arbitrarily normalized to unity at the highest peak at xmath74 derived from a simultaneous analysis of the dmr 53 and 90 ghz ecliptic frame data ignoring the correction for faint high latitude foreground galactic emission and excluding the quadrupole moment from the analysis these are for the xmath64 xmath65 models panel a is for the flat space scale invariant spectrum open model w83 b is for the open bubble inflation model accounting only for perturbations generated during the evolution inside the bubble rp94 and c is for the open bubble inflation model now also accounting for both the fluctuations generated in the first epoch of inflation and those corresponding to a non square integrable basis function yst 7likelihood functions xmath79 arbitrarily normalized to unity at the highest peak near either xmath93 or xmath267 derived from a simultaneous analysis of the dmr 53 and 90 ghz galactic frame data accounting for the faint high latitude foreground galactic emission correction and including the quadrupole moment in the analysis conventions and parameter values are as for fig 6 fig 8ridge lines of the maximum likelihood xmath21 value as a function of xmath1 for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble type 1 spectra for the eight different dmr data sets considered here and for xmath121 gyr xmath8 heavy lines correspond to the case when the quadrupole moment is excluded from the analysis while light lines account for the quadrupole moment these are for the ecliptic frame 
sky maps accounting for dashed lines and ignoring solid lines the faint high latitude foreground galactic emission correction and for the galactic frame maps accounting for dot dashed lines and ignoring dotted lines this galactic emission correction the general features of this figure are consistent with that derived from the dmr two year data grsb fig 2 fig 9ridge lines of the maximum likelihood xmath21 value as a function of xmath1 for the flat space scale invariant spectrum open model type 4 spectra for the eight different dmr data sets and for xmath121 gyr xmath8 heavy lines correspond to the ecliptic frame analyses while light lines are from the galactic frame analyses these are for the cases ignoring the faint high latitude foreground galactic emission correction and either including dotted lines or excluding solid lines the quadrupole moment and accounting for this galactic emission correction and either including dot dashed lines or excluding dashed lines the quadrupole moment the general features of this figure are roughly consistent with that derived from the dmr two year data cayn et al 1996 fig 3 fig 10ridge lines of the maximum likelihood xmath21 value as a function of xmath1 for the open bubble inflation model now also accounting for both the fluctuations generated in the first epoch of inflation bgt yst and those from a non square integrable basis function yst for the eight different dmr data sets considered here and for xmath64 xmath65 heavy lines correspond to the cases where the faint high latitude foreground galactic emission correction is ignored while light lines account for this galactic emission correction these are from the ecliptic frame analyses accounting for dotted lines or ignoring solid lines the quadrupole moment and from the galactic frame analyses accounting for dot dashed lines or ignoring dashed lines the quadrupole moment the general features of this figure are consistent with that derived from the dmr two year data yb fig 2 fig 11ridge lines of the maximum likelihood xmath21 value as a function of xmath1 for the two extreme dmr data sets and two different cmb anisotropy angular spectra showing the effects of varying xmath62 and xmath63 heavy lines are for xmath192 gyr and xmath268 while light lines are for xmath123 gyr and xmath124 two of the four pairs of lines are for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble type 1 spectra either from the ecliptic frame analysis without the faint high latitude foreground galactic emission correction and ignoring the quadrupole moment in the analysis solid lines or from the galactic frame analysis accounting for this galactic emission correction and including the quadrupole moment in the analysis dotted lines the other two of the four pairs of lines are for the flat space scale invariant spectrum open model type 4 spectra either from the ecliptic frame analysis without the faint high latitude foreground galactic emission correction and ignoring the quadrupole moment in the analysis dashed lines or from the galactic frame analysis accounting for this galactic emission correction and including the quadrupole moment in the analysis dot dashed lines given the other uncertainties the effects of varying xmath62 and xmath63 are fairly negligible fig 12ridge lines of the maximum likelihood xmath21 value as a function of xmath1 for the two extreme dmr data sets for the four cmb anisotropy angular spectra models considered here and for xmath64 xmath65 heavy 
lines are from the ecliptic frame sky maps ignoring the faint high latitude foreground galactic emission correction and excluding the quadrupole moment from the analysis while light lines are from the galactic frame sky maps accounting for this galactic emission correction and including the quadrupole moment in the analysis solid dotted and dashed lines show the open bubble inflation cases accounting only for the fluctuations generated during the evolution inside the bubble type 1 spectra solid lines also accounting for the fluctuations generated in the first epoch of inflation type 2 spectra dotted lines these overlap the solid lines except for xmath269 and xmath12 and finally also accounting for the fluctuations corresponding to the non square integrable basis function type 3 spectra dashed lines dot dashed lines correspond to the flat space scale invariant spectrum open model type 4 spectra 13conditional likelihood densities for xmath21 derived from xmath79 which are normalized to be unity at the peak for each dmr data set cmb anisotropy angular spectrum and set of model parameter values panel a is for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble type 1 spectra while panel b is for the flat space scale invariant spectrum open model type 4 spectra the heavy lines are for xmath255 while the light lines are for xmath256 two of the four pairs of lines in each panel correspond to the results from the analysis of the galactic frame maps accounting for the faint high latitude foreground galactic emission correction and with the quadrupole moment included in the analysis either for xmath123 gyr and xmath124 dot dashed lines or for xmath192 gyr and xmath268 dashed lines the other two pairs of lines in each panel correspond to the results from the analysis of the ecliptic frame maps ignoring this galactic emission correction and with the quadrupole moment excluded from the analysis either for xmath123 gyr and xmath124 dotted lines or for xmath192 gyr and xmath268 solid lines given the other uncertainties the effects of varying xmath62 and xmath63 are fairly negligible 14conditional likelihood densities for xmath21 normalized as in the caption for fig 13 panel a is from the analysis of the ecliptic frame maps ignoring the faint high latitude foreground galactic emission correction and excluding the quadrupole moment from the analysis while panel b is from the analysis of the galactic frame maps accounting for this galactic emission correction and including the quadrupole moment in the analysis these are for xmath64 and xmath65 the heavy lines are for xmath255 and the light lines are for xmath256 there are eight lines four pairs in each panel although in each panel two pairs almost identically overlap solid dotted and dashed lines show the open bubble inflation cases accounting only for the fluctuations generated during the evolution inside the bubble type 1 spectra solid lines also accounting for the fluctuations generated in the first epoch of inflation type 2 spectra dotted lines these almost identically overlap the solid lines and finally also accounting for the fluctuations corresponding to the non square integrable basis function type 3 spectra dashed lines dot dashed lines correspond to the flat space scale invariant spectrum open model type 4 spectra 15projected likelihood densities for xmath1 derived from xmath79 normalized as in the caption of fig panel a is for the open bubble inflation model accounting only for the 
fluctuations generated during the evolution inside the bubble type 1 spectra and panel b is for the flat space scale invariant spectrum open model type 4 spectra two of the curves in each panel correspond to the results from the analysis of the galactic frame maps accounting for the faint high latitude foreground galactic emission correction and with the quadrupole moment included in the analysis for xmath123 gyr and xmath124 dot dashed lines and for xmath192 gyr and xmath268 dashed lines the other two curves in each panel are from the analysis of the ecliptic frame maps ignoring the galactic emission correction and excluding the quadrupole moment from the analysis for xmath123 gyr and xmath124 dotted lines and for xmath192 gyr and xmath268 solid lines 16projected likelihood densities for xmath1 derived from xmath79 normalized as in the caption of fig panel a is from the analysis of the ecliptic frame sky maps ignoring the faint high latitude foreground galactic emission correction and excluding the quadrupole moment from the analysis panel b is from the analysis of the galactic frame sky maps accounting for this galactic emission correction and including the quadrupole moment in the analysis there are four curves in each panel although in each panel two of them almost overlap solid dotted and dashed lines show the open bubble inflation cases accounting only for the fluctuations generated during the evolution inside the bubble type 1 spectra solid lines also accounting for the fluctuations generated in the first epoch of spatially flat inflation type 2 spectra dotted lines these almost exactly overlap the solid lines and finally also accounting for the fluctuations corresponding to the non square integrable basis function type 3 spectra dashed lines dot dashed lines correspond to the flat space scale invariant spectrum open model type 4 spectra these are for xmath64 and xmath270 17marginal likelihood densities xmath271 for xmath1 normalized to unity at the peak for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble rp94 for the eight different dmr data sets and for xmath121 gyr xmath8 panel a is from the ecliptic frame analyses and panel b is from the galactic frame analyses two of the four lines in each panel are from the analysis without the faint high latitude foreground galactic emission correction either accounting for dot dashed lines or ignoring solid lines the quadrupole moment the other two lines in each panel are from the analysis with this galactic emission correction either accounting for dotted lines or ignoring dashed lines the quadrupole moment 19marginal likelihood densities for xmath1 for the open bubble inflation model now also accounting for both the fluctuations generated in the first spatially flat epoch of inflation and those that correspond to the non square integrable basis function yst computed for xmath64 and xmath65 conventions are as in the caption of fig 20marginal likelihood densities for xmath1 normalized as in the caption of fig panel a is for the open bubble inflation model accounting only for the fluctuations generated during the evolution inside the bubble rp94 while panel b is for the flat space scale invariant spectrum open model w83 two of the lines in each panel are the results from the analysis of the galactic frame data sets accounting for the faint high latitude foreground galactic emission correction and with the quadrupole moment included in the analysis for xmath123 gyr and xmath124 
dot dashed lines and for xmath192 gyr and xmath268 dashed lines the other two lines in each panel are the results from the analysis of the ecliptic frame data sets ignoring this galactic emission correction and with the quadrupole moment excluded from the analysis for xmath272 gyr and xmath124 dotted lines and for xmath273 gyr and xmath268 solid lines 21marginal likelihood densities for xmath1 normalized as in the caption of fig 17 computed for xmath64 and xmath65 panel a is from the analysis of the ecliptic frame sky maps ignoring the faint high latitude foreground galactic emission correction and excluding the quadrupole moment from the analysis panel b is from the analysis of the galactic frame sky maps accounting for this galactic emission correction and including the quadrupole moment in the analysis there are four lines in each panel although in each panel two of the lines almost overlap solid dotted and dashed curves are the open bubble inflation cases accounting only for the fluctuations generated during the evolution inside the bubble rp94 solid lines also accounting for the fluctuations generated in the first epoch of spatially flat inflation bgt yst dotted lines these almost identically overlap the solid lines and finally also accounting for the fluctuations corresponding to the non square integrable basis function yst dashed lines dot dashed curves correspond to the flat space scale invariant spectrum open model w83 22fractional differences xmath274 as a function of wavenumber xmath49 between the energy density perturbation power spectra xmath58 computed using the two independent numerical integration codes and normalized to give the same xmath21 the heavy curves are for the open bubble inflation model spectrum accounting only for fluctuations that are generated during the evolution inside the bubble type 1 spectra and the light curves are for the open bubble inflation model spectrum now also accounting for fluctuations generated in the first epoch of inflation type 2 spectra these are for xmath255 solid lines and xmath256 dashed lines with xmath64 and xmath65 23fractional energy density perturbation power spectra xmath58 as a function of wavenumber xmath49 these are normalized to the mean of the extreme upper and lower 2xmath80 xmath21 values as discussed in 33 panels ad correspond to the four different sets of xmath62 xmath84 of tables 912 and each panel shows power spectra for three different models at six values of xmath1 solid lines show the open bubble inflation model xmath58 accounting only for fluctuations generated during the evolution inside the bubble rp95 dotted lines are for the open bubble inflation model now also accounting for fluctuations generated in the first epoch of inflation bgt yst and dashed lines are for the flat space scale invariant spectrum open model w83 starting near the center of the lower horizontal axis andmoving counterclockwise the spectra shown correspond to xmath258 01 02 03 045 06 and 1 note that at xmath69 all three model spectra are identical and so overlap also note that at a given xmath1 the open bubble inflation model xmath58 accounting for the fluctuations generated in the first epoch of inflation bgt yst dotted lines essentially overlap those where this source of fluctuations is ignored rp95 solid lines panel a corresponds to xmath123 gyr and xmath124 b to xmath121 gyr and xmath275 c to xmath192 gyr and xmath276 and d to xmath121 gyr and xmath122 normalized using the results of the dmr analysis of the xmath123 gyr xmath277 models 
panel e shows the three xmath64 xmath65 open bubble inflation spectra of table 13 at five different values of xmath1 the spectra are for the open bubble inflation model accounting only for fluctuations generated during the evolution inside the bubble rp95 solid lines also accounting for fluctuations generated in the first epoch of inflation bgt yst dotted lines and also accounting for the contribution from the non square integrable basis function yst dashed lines starting near the center of the lower horizontal axis and moving counterclockwise the models correspond to xmath258 01 02 03 05 and 09 note that at a given xmath1 the three spectra essentially overlap especially for observationally viable values of xmath212 the solid triangles represent the redshift space da costa et al 1994 ssrs2 cfa2 xmath278 mpc depth optical galaxies data and were very kindly provided to us by c park the solid squares represent the xmath279 weighting redshift space results of the tadros efstathiou 1995 analysis of the xmath133 qdot and 12 jy infrared galaxy data the hollow pentagons represent the real space results of the baugh efstathiou 1993 analysis of the apm optical galaxy data and were very kindly provided to us by c baugh it should be noted that the plotted model mass not galaxy power spectra do not account for any bias of galaxies with respect to mass they also do not account for nonlinear or redshift space distortion when relevant corrections nor for the survey window functions it should also be noted that the observational data error bars are determined under the assumption of a specific cosmological model and a specific evolution scenario ie they do not necessarily account for these additional sources of uncertainty eg gaztaaga 1995 we emphasize that because of the different assumptions the different observed galaxy power spectra shown on the plots are defined somewhat differently and so can not be directly quantitatively compared to each other 24cmb anisotropy bandtemperature predictions and observational results as a function of multipole xmath66 to xmath280 the four pairs of wavy curves in different linestyles demarcating the boundaries of the four partially overlapping wavy hatched regions hatched with straight lines in different linestyles in panel a are dmr normalized open bubble inflation model rp94 predictions for what would be seen by a series of ideal kronecker delta window function experiments see ratra et al 1995 for details panel b shows dmr normalized cmb anisotropy spectra with the same cosmological parameters for the flat space scale invariant spectrum open model w83 the model parameter values are xmath177 xmath281 xmath282 xmath283 gyr dot dashed lines xmath72 xmath284 xmath8 xmath285 gyr solid lines xmath256 xmath286 xmath287 xmath288 gyr dashed lines and xmath69 xmath289 xmath8 xmath290 gyr dotted lines for more details on these models see ratra et al 1995 for each pair of model prediction demarcation curves the lower one is normalized to the lower 1xmath80 xmath21 value determined from the analysis of the galactic coordinate maps accounting for the high latitude galactic emission correction and including the xmath291 moment in the analysis and the upper one is normalized to the upper 1xmath80 xmath21 value determined from the analysis of the ecliptic coordinate maps ignoring the galactic emission correction and excluding the xmath85 moment from the analysis amongst the open bubble inflation models of panel a the xmath72 model is close to what is favoured by the analysis of table 
10 and the xmath256 model is close to that preferred from the analysis of table 11 the xmath177 model is on the edge of the allowed region from the analysis of table 12 and the xmath69 fiducial cdm model is incompatible with cosmographic and large scale structure observations a large fraction of the smaller scale observational data in these plots are tabulated in ratra et al 1995 and ratra sugiyama 1995 note that as discussed in these papers some of the data points are from reanalyses of the observational data there are 69 detections and 22 2xmath80 upper limits shown since most of the smaller scale data points are derived assuming a flat bandpower cmb anisotropy angular spectrum which is more accurate for narrower in xmath66 window functions we have shown the observational results from the narrowest windows available the data shown are from the dmr galactic frame maps ignoring the galactic emission correction grski 1996 open octagons with xmath292 from firs ganga et al 1994 as analyzed by bond 1995 solid pentagon tenerife hancock et al 1996a open five point star bartol piccirillo et al 1996 solid diamond note that atmospheric contamination may be an issue sk93 individual chop sk94 ka and q and individual chop sk95 cap and ring netterfield et al 1996 open squares sp94 ka and q gundersen et al 1995 the points plotted here are from the flat bandpower analysis of ganga et al 1996a solid circles bam 2beam tucker et al 1996 at xmath293 with xmath294 spanning 16 to 92 and accounting for the xmath295 calibration uncertainty open circle python g l and s eg platt et al 1996 open six point stars argo eg masi et al 1996 both the hercules and ariestaurus scans are shown note that the ariestaurus scan has a larger calibration uncertainty of xmath296 solid squares max3 individual channel max4 and max5 eg tanaka et al 1996 including the max5 mup 2xmath80 upper limit xmath297k at xmath298 lim et al 1996 open hexagons msam92 and msam94 eg inman et al 1996 open diamonds wdh13 and wdi ii eg griffin et al 1996 open pentagons and cat scott et al 1996 cat1 at xmath299 with xmath294 spanning 351 to 471 and cat2 at xmath300 with xmath294 spanning 565 to 710 both accounting for calibration uncertainty of xmath301 solid hexagons detections have vertical 1xmath80 error bars solid inverted triangles inserted inside the appropriate symbols correspond to nondetections and are placed at the upper 2xmath80 limits vertical error bars are not shown for non detections as discussed in ratra et al 1995 all xmath302 vertical error bars also account for the calibration uncertainty but in an approximate manner except for the sp94 ka and q results from ganga et al 1996a see ganga et al 1996a for a discussion of this issue the observational data points are placed at the xmath66value at which the corresponding window function is most sensitive this ignores the fact that the sensitivity of the experiment is also dependent on the assumed form of the sky anisotropy signal and so gives a somewhat misleading impression of the multipoles to which the experiment is sensitive see ganga et al 1996a for a discussion of this issue excluding the dmr points at xmath303 the horizontal lines on the observational data points represent the xmath66space width of the corresponding window function again ignoring the form of the sky anisotropy signal note that from an analysis of a large fraction of the data corresponding to detections of cmb anisotropy shown in these figures grs figs 5 and 6 conclude that all the models shown in panel a including the 
fiducial cdm one are consistent with the cmb anisotropy data
cut sky orthogonal mode analyses of the xmath0dmr 53 and 90 ghz sky maps are used to determine the normalization of a variety of open cosmogonical models based on the cold dark matter scenario to constrain the allowed cosmological parameter range for these open cosmogonies the predictions of the dmr normalized models are compared to various observational measures of cosmography and large scale structure viz the age of the universe small scale dynamical estimates of the clustered mass density parameter xmath1 constraints on the hubble parameter xmath2 the x ray cluster baryonic mass fraction xmath3 and the matter power spectrum shape parameter estimates of the mass perturbation amplitude and constraints on the large scale peculiar velocity field the open bubble inflation model ratra peebles 1994 bucher goldhaber turok 1995 yamamoto sasaki tanaka 1995 is consistent with current determinations of the 95 confidence level cl range of these observational constraints more specifically for a range of xmath2 the model is reasonably consistent with recent high redshift estimates of the deuterium abundance which suggest xmath4 provided xmath5 recent high redshift estimates of the deuterium abundance which suggest xmath6 favour xmath7 while the old nucleosynthesis value xmath8 requires xmath9 small shifts in the inferred xmath0dmr normalization amplitudes due to 1 the small differences between the galactic and ecliptic coordinate sky maps 2 the inclusion or exclusion of the quadrupole moment in the analysis 3 the faint high latitude galactic emission treatment and 4 the dependence of the theoretical cosmic microwave background anisotropy angular spectral shape on the value of xmath2 and xmath10 are explicitly quantified the dmr data alone do not possess sufficient discriminative power to prefer any values for xmath1 xmath2 or xmath10 at the 95 cl for the models considered at a lower cl and when the quadrupole moment is included in the analysis the dmr data are most consistent with either xmath11 or xmath12 depending on the model considered however when the quadrupole moment is excluded from the analysis the dmr data are most consistent with xmath13 in all open models considered with xmath14 including the open bubble inflation model earlier claims yamamoto bunn 1996 bunn white 1996 that the dmr data require a 95 cl lower bound on xmath1 xmath15 are not supported by our complete analysis of the four year data the dmr data alone can not be used to meaningfully constrain xmath1
introduction open-bubble inflation models cmb anisotropy normalization procedure computation of large-scale structure statistics current observational constraints on dmr-normalized models discussion and conclusion
the enrichment of the intergalactic medium igm with heavy elements has over the past decade become a key tool in understanding star and galaxy formation by providing a fossil record of metal formation and galactic feedback absorption line spectroscopy has revealed among other findings that the low density xmath10 intergalactic medium igm as probed by the lyxmath11 forest and through and other transitions is at least partly enriched at all redshifts and densities probed in particular recent studies indicate that when smoothed over large xmath12 kpc scales the abundance of carbon decreases as gas overdensity xmath13 does and has a scatter of xmath14dex at fixed density there is carbon in at least some gas at all densities down to at least the mean cosmic density with the median carbon metallicity obeying c h xmath15 at xmath16 schaye et al 2003 hereafter on smaller xmath17 kpc scales the distribution of metals is less well known but observations suggest that the metals may be concentrated in small high metallicity patches xcite there is no evidence for metallicity evolution from redshift xmath18 to xmath19 xcite and metals exist at some level at xmath20 xcite in connection with thisobserved widespread distribution of metals a general picture has emerged that galactic winds driven largely from young andor starburst galaxies have enriched the igm the same feedback may account for the dearth of low luminosity galaxies relative to the halo mass function eg and also for the mass metallicity relationship of galaxies eg however a detailed understanding of the various feedback processes is lacking and there are still open questions and controversies concerning the time and relative importance of the various enrichment processes and concerning the implications for galaxy formation both theoretical modeling and observations of intergalactic ig enrichment are now advancing to the point where comparison between the two can provide crucial insight into these issues but this comparison is not without problems two key difficulties concern the ionization correction required to convert observed ionic abundances into elemental abundances first while the oft studied ions and are observationally convenient they are poor probes of hot xmath21k gas because the ion factions c and si both fall dramatically with temperature thus the hot remnants of fast outflows might be largely invisible in these ions second the dominant uncertainty in both the absolute and relative abundance inferences stems from uncertainty in the spectral shape of the ultraviolet ionizing background radiation uvb analysis of oxygen as probed by has the potential to shed light on both problems this ionization state becomes prevalent in some of the very phases in which and become rare and its abundance depends on the uvb shape differently than those of other ions helping break the degeneracy between abundances and uvb shape the challenge posed by is that at xmath22 it is strongly contaminated by both lyxmath11 and lyxmath23 lines making its identification and quantification difficult previous studies of highxmath24 oxygen enrichment using line fitting xcite or pixel statistics xcite have reliably detected oxygen in the igm and quantified its abundance in relatively dense gas but have not assessed the oxygen abundance with a very large data sample at very low densities or in a unified treatment with other available ions here we extend to our application of the pixel optical depth technique eg aguirre schaye theuns hereafter to a large set of high 
quality vlt uves and keck hires spectra the results when combined with previous studies of and xcite and of and aguirre et al 2004 hereafter give a comprehensive observational assessment of ig enrichment by carbon silicon and oxygen with significantly reduced uncertainties due to the uvb shape as well as new data on the importance of hot collisionally ionized gas we have organized this paper as follows in sec data and sec overview we briefly describe our sample of qso spectra the analysis method is described briefly in sec overview and then in greater depth in the remainder of sec meth with heavy reference to papers i ii and iii the basic results are given in sec resrel and discussed in sec discuss finally we conclude in sec conc all abundances are given by number relative to hydrogen and solar abundances are taken to be xmath25 xmath26 and xmath27 xcite we analyze 17 of the 19 high quality xmath28 velocity resolution s n xmath29 absorption spectra of quasars used in papers ii and iii the two highest redshift spectra used in those previous studies were excluded here because the severe contamination of the region by lines makes detection of nearly impossible and also introduces very large continuum fitting errors in the region fourteen spectra were taken with the uv visual echelle spectrograph uves on the very large telescope and three were taken with the high resolution echelle spectrograph hires on the keck telescope for convenience the observed qsos are listed in table tbl sample
q1101 264    2145  1878  2103  305000  uves   1   16
q0122 380    2190  1920  2147  306200  uves   2   06
j2233 606    2238  1963  2195  305500  uves   3   11
he1122 1648  2400  2112  2355  305500  uves   1   14
q0109 3518   2406  2117  2361  305000  uves   2   15
he2217 2818  2406  2117  2361  305000  uves   3   16
q0329 385    2423  2133  2377  306200  uves   2   12
he1347 2457  2534  2234  2487  305000  uves   12  25
pks0329 255  2685  2373  2636  315000  uves   2   15
q0002 422    276   2441  2710  305500  uves   2   16
he2347 4342  290   2569  2848  342800  uves   2   15
q1107 485    300   2661  2947  364436  hires  4   23
q0420 388    3123  2774  3068  376000  uves   2   18
q1425 604    320   2844  3144  373620  hires  4   21
q2126 158    3268  2906  3211  340000  uves   2   20
q1422 230    362   3225  3552  364524  hires  4   31
q0055 269    3655  3257  3586  342300  uves   1   40
regions within xmath30 from the quasars where xmath31 is the hubble parameter at redshift xmath24 extrapolated from its present value xmath32 assuming xmath33 were excluded to avoid proximity effects regions thought to be contaminated by absorption features that are not present in our simulated spectra eg damped lyxmath11 systems were also excluded from the analysis lyman continuum contamination increases significantly towards lower wavelengths whereas as described below our correction for this contamination assumes that it is non evolving to mitigate this effect only the red portion xmath34 of the qso spectra used in papers ii and iii is analyzed in this paper as in xcite this was found to result in smaller errors than using the full region further details concerning the sample and data reduction are given in paper ii 2 the pixel optical depth method we use for measuring is similar to that described in papers i ii and iii section sec overview contains a brief outline of the method sec confit and sec recovery describe continuum fitting and contamination corrections which have been changed slightly from the methods described in papers ii and iii sec ovitest describes tests of the recovery and ioncorr discusses the ionization balance of the relevant species and describes how ionization corrections are performed the basic method for analysis of each qso spectrum is as follows see the sketch after this list for an illustration of the binning step
1 optical depths due to lyxmath11 xmath35 absorption are recovered for all pixels in the lyxmath11 forest region using higher order lyman lines to estimate optical depths for saturated pixels
2 the pixel optical depth at the corresponding wavelengths of the metal lines xmath36 xmath37 and xmath38 are recovered making several corrections to reduce contamination and noise
3 the recovered optical depth in one transition is compared with that of another by binning the pixels in terms of the optical depth of or and plotting the median or some other percentile of optical depth of a correlation then indicates a detection of absorption an example is shown in fig fig oviscat
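The binning step just outlined can be illustrated with a short sketch. This is not the authors' pipeline; it is a minimal illustration assuming numpy and hypothetical input arrays `tau_hi` and `tau_metal` of recovered pixel optical depths on a common redshift grid.

```python
import numpy as np

def binned_percentiles(tau_hi, tau_metal, bin_edges, percentiles=(31, 50, 69)):
    """Bin metal-line pixel optical depths in bins of the HI Ly-alpha optical
    depth and return the requested percentiles (e.g. the median) in each bin."""
    log_tau_hi = np.log10(np.clip(tau_hi, 1e-6, None))
    idx = np.digitize(log_tau_hi, bin_edges)
    result = {}
    for i in range(1, len(bin_edges)):
        in_bin = tau_metal[idx == i]
        if in_bin.size == 0:
            continue
        centre = 0.5 * (bin_edges[i - 1] + bin_edges[i])
        result[centre] = {p: np.percentile(in_bin, p) for p in percentiles}
    return result

# a rise of, e.g., the median tau_OVI with tau_HI across these bins is what
# signals a detection of OVI absorption in the pixel optical depth method
```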
as was done in papers ii and iii an identical analysis is applied to synthetic spectra generated using a cosmological hydrodynamical simulation kindly provided by tom theuns for each observed quasar we generate 50 corresponding simulated spectra with the same noise properties wavelength coverage instrumental broadening and pixel size as the observed spectra for each uvb model of which several are used see below the carbon distribution as measured in paper ii and the value of si c from paper iii are imposed on the fiducial spectra an oxygen abundance is assigned by assuming a constant uniform value of o c ionization balances are calculated using cloudy version 94 see ferland et al 1998 and ferland 2000 for details a direct comparison of the results from these simulated and observed spectra allows for inferences about the distribution of oxygen carbon and silicon the same simulation was used in papers i 3 ii and iii to which the reader is referred for details this study employs the identical uvb models used in papers ii 42 and iii excluding model qgs32 all models are from haardt madau 2001 hereafter hm01 these have been renormalized by a redshift dependent factor such that the simulated spectra match the observed evolution of the mean lyxmath11 absorption paper ii the fiducial model qg includes contributions from both galaxies with a 10 escape fraction for ionizing photons and quasars q includes only quasars qgs is an artificially softened version of qg its flux has been reduced by a factor of ten above 4 ryd the uvb used in the simulation only affects the igm temperature and was chosen to match the measurements by xcite a major source of error in optical depths is continuum fitting in the absorption region where contamination by lyxmath11 and lyxmath23 lines is heavy to make this fitting as accurate as possible and to furnish an estimate of the continuum fitting error we have applied the following procedure to the region analyzed for absorption in the case of observed spectra this was done after the spectra had been continuum fitted by eye as described in paper ii 2
1 the spectral region is divided into 20 rest frame segments
2 in segments with large unabsorbed regions an automatic continuum fitting algorithm is applied see 51 of paper ii in which pixels xmath39 below the continuum are iteratively removed
3 in segments without large unabsorbed regions we identify small unabsorbed regions or regions absorbed only in lyxmath23 the latter are identified by superimposing the region of the spectrum corresponding to lyxmath11 absorption the continuum level of the segment is fit by minimizing the deviation of identified unabsorbed regions from unit flux and deviation of the lyxmath23 regions from the corresponding scaled lyxmath11 features
a spline is interpolated
between the fits to all segments and the spectrum is rescaled by this spline this procedure was applied to all observed spectra as well as to to one simulated spectrum per observed spectrum where a 10 20 error in the continuum was introduced on scales of 1 4 and 16 segments the median absolute errors remaining after blindly fitting the continua of the simulated spectra are given in table tbl sample as an estimate of continuum fitting errors in the corresponding observed spectra because the procedure is not fully automatic we were unable to apply it to all of the simulated spectra the region redwards of lyxmath11 was fit in both simulated and observed spectra using the procedure described in 51 step i of paper ii continuum fitting errors are much smaller for this region xmath40001xmath41 after continuum fitting the spectra lyxmath11 optical depths xmath9 are derived for each pixel between the quasar s lyxmath11 and lyxmath23 emission wavelengths save for regions close to the quasar to avoid proximity effects see 2 if lyxmath11 is saturated ie xmath42 where xmath43 and xmath44 are the flux and noise arrays see paper i 41 paper ii 51 step 2 higher order lyman lines are used to estimate xmath9 corresponding and optical depths xmath45 xmath46 xmath47 are subsequently derived for each pixel we exclude regions of the quasar spectrum that are contaminated by absorption features that are not included in our simulated spectra such as lyxmath11 lines with damping wings for xmath46 and xmath47 corrections are made for self contamination and contamination by other metal lines as described in paper i 42 as shown in fig fig oviscat when plotting each percentile in xmath45 absorption against absorption in some other ion the correlation disappears below some optical depth xmath48 corresponding to a value xmath49 in the other ion that is determined by noise continuum fitting errors and contamination by other lines these effects may then be corrected for by subtracting xmath48 from the binned optical depths thus converting most points below xmath49 into upper limits for each realization and for each percentile we compute xmath48 as the given percentile of optical depth for the set of pixels with optical depth xmath50 we use values xmath51 when binning in or and of xmath52 when binning in as we never see a correlation extending below these values for ovi the correlations are generally less strong than for civ and we fix xmath49 by hand the error on xmath48 for an individual realization is computed by dividing the spectrum into 5 segments then bootstrap resampling the spectrum by choosing these chunks with replacement and finally computing the variance of xmath48 as computed from 100 such resampled spectra when the realizations are combined xmath53 is instead computed as the median among the realizations and the error on this value is computed by bootstrap resampling the realizations for further detailssee paper ii 51 step 4 and paper iii 34 as noted above lyxmath11 and higher lymantransitions heavily contaminate the absorption regime this can add substantial error in the recovered xmath45 two corrections are made to minimize this contamination first after recovering xmath9 and xmath45 an initial correction is made for contamination by higher order lines by subtracting xmath54 where f is the transition oscillator strength xmath55 is the redshifted lyman xmath11 wavelength corresponding to lyxmath56 absorption observed at wavelength xmath57 and i corresponds to first five higher order lyman lines ie lyxmath23 
xmath58 through lyxmath59 xmath60 a second correction is made by taking the minimum of the doublet xmath61 where a and b denote the stronger and weaker doublet components respectively these corrections are described in detail in paper i 42 another potential contamination issue is that due to strong lyxmath11 or lyxmath23 absorption some higher percentiles in absorption can become dominated by saturated pixels so that the particular value of the percentile is determined by the contaminating lines rather than by the distribution to remove these unreliable percentiles from consideration the average noise xmath62 is calculated for saturated pixels and is converted into a maximum optical depth xmath63 after optical depths are binned those percentile bins with xmath64 are excluded from the analysis because there is substantial processing of the recovered optical depths it is important to test how efficiently the true optical depths are recovered by our procedures to do so simulations were produced just as described in sec overview but with effectively perfect resolution no noise and only xmath65 absorption and hence no contamination or self contamination in fig fig truerec the recovered pixel optical depths are plotted against these true optical depths for a set of 60 simulated spectra for two representative qsos see paper i for more such tests of particular note is the efficacy of subtracting the flat level xmath48 as described in the preceding section ideally xmath48 would be determined using pixels with negligible absorption this is possible in the present case as the true optical depths are known and corresponding results are shown in the left panels in a realistic case a proxy for must be used in the right panels of fig fig truerec is employed and xmath48 is computed using all pixels with xmath66 in both cases the subtracted xmath48 values are shown as horizontal dashes on the right axes of fig fig truerec overall we find that using xmath9 to calculate xmath67 is effective at recovering xmath68 for the 31st and median percentiles for higher percentiles the recovery is accurate only at higher xmath45 but the large scatter indicates that this is random rather than systematic error in papers i and iiit was shown from simulations that there exists a tight correlation between xmath9 and the absorbing gas density and temperature which could be used to predict an ionization correction ie the ratios of o and h as a function of density for details see 6 in paper i and 51 in paper ii as noted in papers ii and iii this works well for and less well for due to their mild and strong ionization correction dependence on xmath9 respectively in the upper panel of figure fig ionpred we provide a contour plot of the logarithm of the predicted fraction of ions versus temperature and density the middle and lower panels show xmath69 for o c0 and xmath70 for o h0 respectively for photoionized gas xmath71 the fraction is highest for xmath72 and only weakly dependent on the temperature however for higher densities and xmath73k the fraction falls quickly resulting in a very large ionization correction for saturated pixels at the same time the fraction at high density increases with xmath74 for xmath75k so that collisionally ionized gas might be detected relatively easily therefore at high xmath9 collisionally ionized gas can easily swamp photoionized this can be seen most clearly in the bottom panel which shows that for fixed o h and density xmath8 increases quickly at xmath75k particularly if the density is high 
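As a brief aside before continuing with the ionization correction, the optical depth corrections described earlier in this section (the higher-order Lyman contamination estimate, the doublet minimum, and the flat-level tau_c subtraction with bootstrap errors) can be summarized in a schematic sketch. The atomic data, the `tau_lya_of` callable, the cut values, and all function names are assumptions introduced here for illustration only; this is not the pipeline used in the paper, and the contamination subtraction shown is just one simple variant of the correction described in the text.

```python
import numpy as np

# approximate rest wavelengths [angstrom] and oscillator strengths
# (assumed literature values, quoted here only for illustration)
LYA = (1215.67, 0.4164)
LYMAN_HIGHER = [(1025.72, 0.0791), (972.54, 0.0290), (949.74, 0.0139),
                (937.80, 0.0079), (930.75, 0.0048)]  # first five higher-order Lyman lines
OVI_A = (1031.93, 0.1329)   # stronger doublet component
OVI_B = (1037.62, 0.0661)   # weaker doublet component

def lyman_contamination(obs_wave, tau_lya_of):
    """Estimate the optical depth that higher-order Lyman lines contribute at
    each observed wavelength, from the recovered Ly-alpha optical depth of the
    higher-redshift gas responsible (tau_lya_of is a hypothetical callable
    returning tau_Lya at a given Ly-alpha wavelength)."""
    contam = np.zeros_like(obs_wave, dtype=float)
    for lam_n, f_n in LYMAN_HIGHER:
        # Ly-alpha wavelength of the absorber whose Ly-n falls at obs_wave
        lam_alpha = obs_wave * LYA[0] / lam_n
        contam += tau_lya_of(lam_alpha) * (f_n * lam_n) / (LYA[1] * LYA[0])
    return contam

def ovi_doublet_min(tau_strong, tau_weak):
    """Doublet correction: take, pixel by pixel, the minimum of the stronger
    component and the weaker component rescaled by the f*lambda ratio."""
    scale = (OVI_A[1] * OVI_A[0]) / (OVI_B[1] * OVI_B[0])
    return np.minimum(tau_strong, scale * tau_weak)

def flat_level(tau_hi, tau_ovi, percentile=50, tau_hi_cut=0.1,
               n_chunks=5, n_boot=100, seed=0):
    """Flat level tau_c (a given percentile of tau_OVI over pixels with weak
    HI absorption, tau_HI below an assumed cut) and a bootstrap error obtained
    by resampling the spectrum in n_chunks pieces with replacement."""
    rng = np.random.default_rng(seed)
    tau_c = np.percentile(tau_ovi[tau_hi < tau_hi_cut], percentile)
    chunks = np.array_split(np.arange(tau_hi.size), n_chunks)
    boot = []
    for _ in range(n_boot):
        pick = np.concatenate([chunks[i] for i in rng.integers(0, n_chunks, n_chunks)])
        boot.append(np.percentile(tau_ovi[pick][tau_hi[pick] < tau_hi_cut], percentile))
    return tau_c, np.std(boot)
```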
because our xmath76 relation is dominated by photoionized gas with xmath77 the effects of collisionally ionized gas are potentially important and are discussed at length in sec collisional below the strategy employed here is to employ our fiducial ionization corrections as in papers ii and iii but to recognize that at high density the results may be significantly affected by collisionally ionized gas it is important to note that the importance of collisionally ionized gas may be underestimated by our simulation because we did not include a mechanism for generating galactic winds which could shock heat the gas surrounding galaxies our simulation does however include heating by gravitational accretion shocks once the ionization correction has been determined and corrected optical depths and recovered optical depths xmath9 have been obtained the oxygen abundance can be calculated
{\rm [O/H]} = \log\left( \frac{\tau_{\rm OVI}}{\tau_{\rm HI}} \, \frac{(f\lambda)_{\rm HI}}{(f\lambda)_{\rm OVI}} \, \frac{n_{\rm O}}{n_{\rm OVI}} \, \frac{n_{\rm HI}}{n_{\rm H}} \right) - \log({\rm O/H})_\odot \qquad \label{eq:metallicity}
where xmath79 and xmath80 are the oscillator strength and rest wavelength of transition xmath56 respectively xmath81 xmath82 xmath83 xmath84 and we use the solar abundance xmath25 number density relative to hydrogen anders grevesse 1989 an example of the results from this analysis applied to the observed spectrum of q1422 230 is shown in figure fig1422ioncorr in figure fig trueinvsummo6 we show a test in which we have generated simulated spectra using the qg ionizing background recovered optical depths and applied the just described ionization correction to recover the oxygen abundance the true metallicity is given by the carbon distribution of paper ii for the qg background with a fixed o cxmath85 ie o h315 065xmath13 and is shown on the plot as a dashed line for xmath86 the abundance recovery is promising it overestimates by less than 03 dex and the dependence of xmath13 is reproduced however the overestimation appears to increase for xmath87 reaching approximately 1 dex for the highest overdensity bin the difference in the high versus low xmath13 gas can probably be attributed to the collisionally ionized gas residing in and around dense regions due to gravitational accretion shocks we should thus keep in mind that we expect to overestimate the oxygen abundance associated with strong absorbers
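A minimal sketch of how eq. (metallicity) above might be evaluated pixel by pixel. The solar value and atomic data are approximate assumed numbers, and `ovi_frac` (n_OVI/n_O) and `hi_frac` (n_HI/n_H) are hypothetical inputs standing in for the CLOUDY-based, tau_HI-dependent ionization correction described above.

```python
import numpy as np

F_LAM_HI = 0.4164 * 1215.67    # f * lambda for HI Ly-alpha (assumed values)
F_LAM_OVI = 0.1329 * 1031.93   # f * lambda for OVI 1032 (assumed values)
LOG_OH_SUN = -3.07             # roughly the Anders & Grevesse (1989) solar value, assumed here

def oxygen_abundance(tau_ovi, tau_hi, ovi_frac, hi_frac):
    """Pixel-by-pixel [O/H] following eq. (metallicity): the optical depth
    ratio times the atomic-data factor and the ionization corrections
    n_O/n_OVI and n_HI/n_H, minus the solar value."""
    return (np.log10(tau_ovi / tau_hi)
            + np.log10(F_LAM_HI / F_LAM_OVI)
            + np.log10(hi_frac / ovi_frac)
            - LOG_OH_SUN)
```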
our basic result will be an estimate of o c for the low density igm as a whole computed using four different but consistent methods which we describe in turn to extract as much information as possible from our data we have as in papers ii and iii combined the data points obtained from our entire sample figure fig ocfwdzbin shows xmath88 versus xmath89 in bins of xmath24 to generate these points we begin with xmath45 values binned in xmath47 for each qso as in fig fig oviscat for q1422 230 we then subtract from each the flat level xmath48 for that qso to adjust for noise contamination etc see sec overview then divide by the central value of the xmath47 bin these points gathered from all qsos are rebinned by determining for each xmath47 bin in fig fig ocfwdzbin the best constant level xmath90 fit to all of the points in the specified redshift bin the errors represent 1 and 2xmath44 confidence intervals xmath91 and xmath92 on this fit the plotted lines indicate corresponding optical depths from synthetic spectra drawn from the simulation using several uvb models the corresponding c h distributions as determined in paper ii and a constant o c value determined as follows for each background we generate simulated xmath93 points in the same way as we did for the observations but averaging over 50 simulated realizations as described in sec overview we then calculate a xmath90 between all valid observed original not rebinned points and the corresponding simulated points because we use 50 simulated realizations the simulation errors are almost always negligible compared to the observed errors but they are still taken into account by calculating the total xmath90 using the formula xmath94 eq chi2 where xmath95 and xmath44 is the error in this quantity we then add a constant offset to the simulated points which corresponds to scaling o c such that xmath90 is minimized in each panel the lines connect the scaled rebinned simulation points the first evident result is that xmath96 and appears to be at most weakly dependent on xmath97 from xmath98 to 0 this is unlike xmath46 which increases by xmath992 dex in this xmath47 regime paper ii at the lowest densities the data exhibits a decline comparing these panels suggests there is little dependence on redshift in this interval this can be seen more clearly in figure fig ocfwddbin which shows xmath100 versus xmath24 in bins of xmath47 there is no evidence in either the simulated or observed points for evolution in xmath93 for 15xmath10135 the observed trends in xmath102 are reproduced well by the simulations therefore because xmath93 scales with o c the offset in xmath93 obtained by minimizing the xmath90 eq eq chi2 against the observations can be used to reliably compute the best fit o c as an example for our fiducial uvb model qg the simulated spectra were generated with o c065 and best fit by an offset of xmath103 dex implying a best fit o c066 with xmath104 42782 as we found in paper ii and for q1422 230 above the reduced xmath90 is somewhat low this is due to a slight overestimate of the errors at low xmath47 paper ii and to the fact that the data points are not completely independent because single absorbers contribute to multiple data points the fitted o c values and corresponding xmath105 are listed in table tbl allfits with errors computed by bootstrap resampling the quasars used in the xmath90 minimization for our fiducial model qg the best fit o cxmath106 the quasar only background q which is probably too hard see paper ii gives a much lower value of o cxmath107 the softer qgs background gives implausibly high values of o cxmath108 coupling this with results from paper ii suggesting that the qgs background is unrealistically soft strongly disfavors this uvb model
qg   xmath109  42782  xmath110  xmath111  657115  xmath112  347xmath113  008xmath114  065xmath115  1141184
q    xmath116  59982  xmath117  xmath118  656115  xmath119  291xmath120  006xmath121  017xmath122  1138184
qgs  xmath123  26782  xmath124  xmath125  738115  xmath126  414xmath127  054xmath128  131xmath120  1142184
we may also subdivide our sample by redshift to test the dependence of o c on redshift first computing o c using only spectra that have a median absorption redshift xmath129 see table tbl sample yields o cxmath130 versus o cxmath131 using the spectra with xmath132 these are consistent to about 1xmath44 using the q and qgs uvbs the o c values inferred from the redshift subsamples are marginally consistent with each other and with the full sample while the xmath93 ratios give the most direct constraints on o c it is also useful to examine xmath8 since comparing the simulated to the observed xmath8 ratios gives an additional but related estimate of o c recall that our simulation
recall that our simulation reproduces the observed xmath133 figure ohfwdzbin shows xmath70 versus xmath9 for our combined sample in bins of redshift lines again connect the simulation points with an overall scaling to best match the observations which reproduce the observed trends in xmath24 and xmath9 the scalings correspond to best fit o c of 069 xmath4 006 019 xmath4 006 and 225 xmath4 006 for qg q and qgs respectively for qg and qgs the inferred o c are consistent with the results found using xmath93 for our hardest uvb model q the o c values are more discrepant but still within 15xmath44 however while the simulations reproduce the overall trends present in the data there are some possible discrepancies although the xmath134 points are upper limits the simulations also appear to fall marginally above the data in this regime at xmath135 on the other hand the simulations slightly but significantly underpredict xmath45 this can be seen more clearly in figure ohfwddbin where xmath70 versus xmath24 in bins of xmath9 is shown exhibiting a clear discrepancy for points at highxmath24 and high xmath9 indeed if we consider subsamples above and below xmath136 we find that for xmath1371 we obtain o c values consistent with those obtained from xmath93 062 xmath4 007 013 xmath4 007 and 218 xmath4 007 for qg q and qgs respectively but for xmath138 we obtain o c of xmath139 xmath140 and xmath141 for qg q and qgs respectively this discrepancy in o c between low and highxmath9 subsamples is significant at xmath142 for the qg and qgs models and at xmath143 for the q model given the tight but xmath24dependent relation between xmath9 and gas density see paper ii fig 2 and the upper axis in the top two panels of fig ohfwdzbin it is useful also to divide our sample into high and low density subsamples we have done this by recomputing o c from xmath8 using only bins with xmath9 corresponding to xmath144 or xmath145 for xmath14610 we obtain o c of 056 xmath4 008 006 xmath4 008 and 211 xmath4 009 all consistent at 1xmath44 with the full sample whereas for xmath147 we obtain o c of 102 xmath4 009 051 xmath4 009 and 260 xmath4 009 for qg q and qgs respectively all discrepant at xmath148 this highxmath13 and less significantly highxmath9 difference might have several causes first it might correspond to a genuine change in o c with gas density in this case however such an effect would also be expected in the trends of xmath93 vs xmath47 and no such effect is evident thus we consider it more likely that there exists a significant portion of hot gas not present in the simulations that contains but lacks and as suggested in ioncorr and fig fig ionpred and further discussed below in sec collisional this might indicate the presence of a significant amount of collisionally ionized xmath149k gas in the igm a third check on our measured o c is provided by the xmath150 ratio figure osfwdzbin shows xmath150 versus xmath46 for our full sample for two cuts in xmath24 for each uvb the constant si c ratio derived in paper iii and the c distribution measured in paper ii are imposed on the simulations and the o c ratio is varied to minimize the xmath90 difference between the observed and simulated data points though the detection of xmath150 is weak the simulations appear to adequately represent the observations in the redshift range where xmath46 is best detected xmath151 from these fits we infer for this redshift interval o c of xmath152 xmath153 and xmath154 for uvb models qg q and qgs respectively all are consistent with the results obtained from xmath93
though all are somewhat higher because probes higher density gas than see paper iii this again suggests that the simulations underpredict the amount of in and near dense gas as a final method we can apply the inversion method developed in paper ii to convert xmath8 vs xmath9 into o h vs xmath13 by applying a density dependent ionization correction then using the measured distribution of carbon from paper ii an independent measurement of o c can be obtained in figure ohinvzbin we show the derived o h versus xmath13 for our preferred uvb model qg with data from all xmath24 combined the data points from the individual quasar spectra an example is shown in fig fig1422ioncorr have been binned in density bins of 025 dex the solid line shows the least squares fit to the data points with xmath144 and the dotted curves indicate the 1 xmath44 confidence limits with the resulting fit given in the upper left corner the errors on the fits were determined by bootstrap resampling the qsos the dashed line is the value of o h given by the derived o cxmath155 result using xmath93 and assuming a c h distribution from paper ii for xmath156 the o h derived from the ionization correction agrees very well with that determined from both xmath93 and xmath8 this strengthens the result from sec resrel oc and sec ovihi that o c is indeed constant for 05xmath157 1 however for xmath158 the ionization correction results in substantially more o h than that predicted using the previously determined c h distribution and a fixed o c ratio this is similar to the breakdown between the simulations and observed xmath8 seen in fig ohfwdzbin and the erroneously high o h recovered for highxmath13 see fig fig trueinvsummo6 when applying the ionization correction to the simulated spectra once again this suggests the presence of collisionally ionized the primary source of systematic uncertainty in eg o c or o h is the complex modeling that must be performed to extract these values from the pixel correlations the greatest combination of importance and uncertainty is clearly the uncertainty in the shape of the uvb but this is discussed at length below in sec sec nuc so we here focus on other aspects of the modeling the good agreement between the four methods we have employed indicates that the method is sound but it is also clear that there are real differences between the universe and our simulation in particular it is clear that the real universe has an extra component of at high density almost certainly due to collisionally ionized gas that is not captured by those simulations nonetheless if we exclude those highxmath9 regions the small discrepancies between o c as measured using the different methods indicate that such effects probably do not contribute more than xmath159 dex uncertainty to our basic results another source of error that may be inaccurately assessed by our bootstrap resampling technique is that from continuum fitting as shown in table tbl sample our estimated rms error in the absorption region is xmath160 to test the effect of this error on our results we have imposed an additional error on each observed spectrum on scales of 20 80 and 320 for a total added rms error of 2 then recomputed our results we find that our best fit o c from xmath45 versus xmath47 and from xmath45 versus xmath9 are both within xmath161 dex further both the individual binned points and linear fits of the o h values computed from the full sample fig ohinvzbin are all affected by this continuum error to a lesser degree than the quoted random 
errors thus we conclude that the continuum fitting error is not a significant systematic affecting our results a final possible source of systematic error is that the and recombination rates used in the version of cloudy we have employed are too high by xmath162 compared with recent experimental values for the temperatures relevant to low density photoionized gas savin private communication for the range of densities xmath163we cover this would imply a density dependent correction of xmath164dex this might change our overall fits by an amount comparable to the statistical errors but is still much smaller than uncertainties stemming from the uvb shape and can not account for the 05dex of excess in xmath45 at high xmath165 shown in fig ohinvzbin combining these possible sources we estimate probable systematic errors of xmath166dex in our basic o c and o h values this uncertainty may be somewhat greater in subsets of the data particularly at high density it is important to emphasize that each quoted result is sensitive to and applies to only a certain range of gas densities at the upper end our results nominally concern gas of up to xmath167 though as noted at length above the highxmath9 range of our data is likely to be affected by collisional ionization that range is however not dominant we have checked that if all pixels with xmath168 are excluded from the analysis the results given for o c in columns 2 and 4 of table tbl allfits change only within the quoted errors the lower end of the gas density range probed is most straightforward in results from xmath8 figs ohfwdzbin and ohinvzbin which formally show ovi detections at 1xmath44 using all qsos at zxmath169 for xmath170 or xmath171 and confident detections at xmath172 in paper iii the si abundance results were sensitive to xmath173 xmath174 and in paper ii c abundances were measured in much lower density gas thus our results provide indirect constraints on o c and o si down to xmath175 however there are important caveats first the quoted results pertain to the the full density range probed and thus are not necessarily very sensitive to the lowest densities second because the pixel method only works if the element on the x axis is more easily detectable than the element on the y axis direct measurements of o c and o si from xmath93 and xmath150 are dominated by much higher density that gives xmath176 thus while these results are consistent with our indirect constraints they do not address the somewhat implausible possibility that at low densities o si and c come from completely different gas phases on the other hand at high densities it is quite possible that ovi and siiv emission are dominated by different phases so the indirectly inferred si o values are probably both reliable and well measured only in the moderate density range xmath177 although mass or volume filling factors corresponding to these results are not well constrained see eg schaye aguirre 2005 the forewarned reader can convert the density range xmath5 correspond to into a volume using paper ii the best fitting metallicities and corresponding xmath90dof from papers ii iii and this work are shown in table tbl allfits for each uvb model two interesting results stand out first for all uvb models carbon is underabundant relative to both silicon and oxygen being only marginally consistent with solar for uvb model q second all abundance ratios are sensitive to the uvb shape a harder uvb results in a lower inferred o c but a higher inferred si c making the si o ratio particularly 
sensitive to the spectral hardness of the uvb the extreme sensitivity of the inferred si o ratio to the spectral shape of the uv background makes it possible to constrain feasible uvb models by making only weak assumptions about the si o ratio since si and o are both xmath11 elements they are expected to trace each other relatively well for example using the nucleosynthetic yields of and and a xcite initial mass function from xmath178 the si o ratio of the ejecta of a population of age xmath179 yr is predicted to be about 012 and 003 for stars of solar and 1 percent solar metallicity respectively this agrees well with observations of metal poor stars which find si oxmath3 xcite tallying the results of this work with those of papers ii and iii yields si oxmath180 xmath181 and xmath182 for uvb models qg q and qgs respectively thus our preferred model qg is nicely consistent with the expectations but models q and qg lead to inferred si o ratios that are highly inconsistent with both nucleosynthetic yields and observations of metal poor stars assuming si c xmath183 and si o xmath184 in the q background for example raises the xmath90 of the fits in figures fig ocfwdzbin and ohfwdzbin by 65 and 92 respectively requiring this for qgs likewise raises the xmath90 by 138 and 81 we conclude that the uvb has a spectral shape similar to that of model qg while our result using the qg uvb are broadly consistent with the abundance ratios in metal poor stars and in yield calculations the o c and si c may be somewhat high by xmath185dex this is comparable to our systematic errors but nevertheless interesting if taken seriously for example the models of xcite that include the contributions of hypernovae defined as supernovae with kinetic energy xmath186 that of normal core collapse sne produce o c xmath99 06 and si c xmath99 065 in agreement with our results in paper ii we combined the median c hxmath187 with the width xmath188delta z of the lognormal probability distribution of c h for xmath189 to determine the mean c abundance versus xmath13 this was then integrated over the mass weighted probability distribution xmath13 obtained from our hydrodynamical simulation to compute the contribution by gas in this density range to the overall mean cosmic c h assuming that o c is constant over this density range over which we have reliably measured oxygen abundances we obtain for our fiducial uvb model qg o hxmath190 corresponding to xmath191 21left omegab over 0045right extrapolating our c h and o c results even further to the full density range of the simulation would yield values xmath192dex higher but with more uncertainty as we have argued that our results are unreliable at the highest densities note that these results are relatively insensitive to the uvb unlike those for xmath193 in paper iii because for a harder uvb the inferred c h increases while o c decreases for our quasar only model q these effects almost entirely cancel yielding o hxmath194 and an xmath195 value 30xmath41 lower than for model qg note also that these estimates include the oxygen that resides in gas that is observable in and but they do not include oxygen in intergalactic gas that is very hot xmath196 k or very cold xmath197 k and shielded from ionizing radiation if following xcite we take xmath198 we then infer an intergalactic metal reservoir of xmath199 this can be compared to their estimate of the total xmath200 metal budget of xmath201 indicating that xmath202 of metals produced prior to xmath203 reside in the component of the igm 
that is studied here previous studies have explored oxygen abundances in the igm using both line fitting eg and pixel optical depths previous pixel studies did not attempt to convert their detections into oxygen abundances but we can compare to their recovered optical depths both xcite and claim detection of down to xmath204 using our combined data set binned in density see fig ohinvzbin we obtain 1xmath44 detections down to about the mean density xmath205 at xmath206 while is in principle an excellent tracer of metal in very low density gas in practice we find that for the higher redshifts where low densities are more easily probed contamination is severe thus we are not in practice able to constrain metals in underdense gas as claimed in previous studies in spite of a large sample and improved techniques of removing contaminants on the highxmath9 side both and xcite exclude pixels saturated in so can not probe xmath207 this accounts for example for our detection of in q1422 230 while xcite had no detection in the same qso the study of xcite did probe high xmath9 where their results are broadly consistent with ours the studies of xcite xcite xcite and xcite did perform ionization corrections and we can compare our abundance determinations relatively directly to theirs xcite assumed to abundance of oxygen relative to carbon to be solar and inferred metallicities for various uvb models and a number of absorbers in xmath208 systems at xmath19 they found that for relatively hard uv backgrounds comparable to our model q the ionization models yielded densities in agreement with theoretical predictions for self gravitating clouds with the observed column densities xmath209 xcite and metallicities of xmath210 solar in excellent agreement with our measurement of o cxmath211c hxmath212 for model q at xmath213 and xmath214 we note that if xcite would have allowed oxygen to be overabundant relative to carbon they would have found that softer uvb models are required to obtain density estimates that agree with theoretical expectations for gravitationally confined clouds xcite employed the faint object spectrograph on hst to search for in qso spectra from redshift 16xmath215 29 the ratio found by the survey favored a uvb background similar to our q which they use to derive a metallicity of xmath216o h xmath217 while somewhat higher than our value much of the difference may be attributable to their use of the 78th percentile in the ratio assuming that o h has a scatter at fixed density similar to that in c h see xcite this would correspond to a median of xmath218dex less or xmath219o h xmath220 in fairly good agreement with our numbers xcite divide their sample into metal poor absorbers with xmath221xmath222xmath223 which they take to be predominantly photoionized and metal rich absorbers with xmath221xmath222xmath224 for which they assume a hotter phase for the metal poor systems they use a hard uvb and assume o cxmath225 to derive a range of xmath226 o h xmath227 and for the metal rich phase they infer a median o h xmath228 to xmath229 depending upon the assumptions regarding the ionization balance combining their samples they estimate cosmic density xmath230 corresponding to xmath231 o h xmath232 if divided by the cosmic gas density xmath233 while precise comparison is difficult these numbers are consistent with our corresponding estimates of o hxmath194 or xmath234 using the q uvb xcite assume o c05 and 00 for uvb backgrounds comparable to our qg and q for xmath235 so as for the above studies 
comparing derived o c values is less useful than several other points of comparison first in both backgrounds their dependence of o h upon xmath13 is similar to that found for c h vs xmath13 in paper ii consistent with a constant o c value second xcite find that there is a clear jump in in the median o h at xmath23610 while a corresponding jump is not seen in c h similar to our results in sec ioncorr they also interpret this as possibly indicating that stronger absorbers are physically more complex or multiphased third xcite compute an overall contribution xmath237 using their hard background this would correspond to a cosmic average contribution of o h xmath238 these numbers are xmath239dex lower than our values but this should be regarded as good agreement given the number of assumptions made in each computation finally the studies of xcitexcite are similar to ours in employing simulated spectra to attempt to match observed absorption and thus constrain o c and o h xcite generated simulated spectra from a constant metallicity simulation and compared ionic ratios to extant data inferring c h xmath240 and evidence for overabundance of si and o relative to carbon xcite used q1422 230 and a quasar only haardt madau uvb model like q the found that their data is consistent with o c xmath3 and c hxmath241 these are quite consistent with our results using the q uvb as an alternative interpretation they note that a softer uvb would give high o c more characteristic of type ii supernova yields but also lower c h again consistent with our findings however they interpret this softness as due to patchy reionization whereas we favor its explanation by the contribution of galaxies to the uvb as discussed above the difference between the inferred o h for xmath242 and xmath243 is probably due to collisionally ionized gas for reasons sketched in ioncorr fig ionpred shows that for xmath244k collisional ionization dominates and the optical depth ratios become independent of the density for xmath245k on the other hand the and the ratios both drop rapidly with increasing density consequently these ratios can be many orders of magnitude higher in hot dense gas than in warm dense gas the lower the density the smaller the differences become in fact at high densities the fraction in warm gas is too small for to be observable therefore any at the redshift of very strong absorption is likely to arise in a different phase than the associated and possibly even the associated the phase must either have a much lower density or a much higher temperature because the fraction of hot gas in our simulation is small at all densities we effectively assume the gas to be photo ionized when we compare with synthetic spectra and when we correct for ionization as in fig ohinvzbin in the latter case we also implicitly assume that and absorption arise in the same gas phase hence our results from sec ioncorr suggest the existence of a detectable amount of enriched hot xmath246k gas associated with strong absorption a possible physical explanation is that such systems coincide with outer regions of high xmath24 galactic halos where the effects of galactic winds may dominate the heating process if the temperature exceeds 10xmath247k in these regions then this gas would contain significant while lacking and the latter two would then arise in a cooler gas phase which has to be fairly dense in order to account for the strong absorption the high density of the cooler phase implies that it would not produce significant photo ionized these 
results and inferences are consistent with other observational studies of xcite and xcite bothfind that the majority of the detected absorption systems had temperatures determined from the line widths too low for collisional ionization but can not rule out higher temperatures for some absorbers indeed the study of xcite find their detected high column density lines associated with strong absorbers are broad enough to be consistent with collisional ionization as noted above xcite find a jump in o h at xmath23610 interpretable as a transition to a regime in which collisionally ionized gas affects the abundance inferences while collisionally ionized gas complicates oxygen abundance inferences the flip side is that then provides an important probe of hot enriched ig gas that is difficult to detect in eg paper ii indeed hydrodynamical simulations by xcite and xcite predict that a significant portion of is collisionally ionized hence it would thus be very interesting to compare such simulations employing as a metallicity tracer with the observations analyzed here we have studied the relative abundance of oxygen in the igm by analyzing and pixel optical depths derived from a set of high quality vlt and keck spectra of 17 qsos at xmath248 and we have compared them to realistic synthetic spectra drawn from a hydrodynamical simulation to which metals have been added our fiducial model employs the ionizing background model qg taken from haardt madau 2001 for quasars and galaxies rescaled to reproduce the observed mean lyxmath11 absorption the simulation assumes a silicon abundance as calculated in paper iii si cxmath249 and a carbon abundance as derived in paper ii at a given overdensity xmath13 and redshift xmath24 c h has a lognormal probability distribution centered on xmath250 and of width 070 dex the main conclusions from this analysis are as follows for 19xmath21536 xmath251 and xmath25210 when smoothed on the scale of the absorption xmath253 kpc the fiducial simulation utilized in papers i ii and iii consistently agrees with the observed xmath1 xmath6 and to a lesser degree xmath7 fitting xmath1 yields a constant o c 066 xmath4 006 with estimated systematic errors within xmath254 dex converting the observed xmath6 into o cxmath255 using the ionization correction method of paper ii further supports these results the relative abundances o c and especially o si are sensitive to the uvb shape we find that our fiducial haardt madau 2001 quasars and galaxies spectrum gives reasonable results for both but that significantly softer or harder uvbs such as the haardt madau 2001 quasar only uvb give results that are highly inconsistent with both theoretical yields and observed abundance ratios in other low metallicity environments and should not be considered tenable our results both from applying the ionization correction and from comparing the simulations to the observations suggest no evolution in o h over the redshift range 19xmath25636 but a strong dependence on xmath13 both results are consistent with those found in paper ii for c h for xmath257 and xmath13 xmath258 10 the value of o c derived by comparison to the simulations is inconsistent with that found at lower densities and xmath218dex higher than that predicted using the carbon distribution of paper ii and a density independent o c value this might in principle suggest a density dependent o c ratio but we favor the interpretation that a fraction of the highxmath13 absorbing gas is collisionally ionized and that this leads to an erroneously 
large ionization correction in this regime this interpretation is supported by our simulated spectra as well as by the observation that lines associated with strong absorbers tend to be broader than those associated with weak systems xcite we are grateful to wallace sargent michael rauch and tae sun kim for providing the keck hires and vlt uves data used here and in papers i iii we are also extremely grateful to daniel savin for his assistance in understanding and assessing the systematic uncertainties in recombination rates thanks also to rob wiersma for computing the expected si o ratio from nucleosynthetic yields taken from the literature we thank the anonymous referee for providing comprehensive and helpful feedback that improved the manuscript aa and cdh gratefully acknowledge support from nsf grant ast0507117 and js from marie curie excellence grant mext ct2004 014112
we have studied the abundance of oxygen in the igm by analyzing and pixel optical depths derived from a set of high quality vlt and keck spectra of 17 qsos at xmath0 comparing ratios xmath1 to those in realistic synthetic spectra drawn from a hydrodynamical simulation and comparing to existing constraints on si c places strong constraints on the ultraviolet background uvb model using weak priors on allowed values of si o for example a quasar only background yields si o xmath2 highly inconsistent with the si o xmath3 expected from nucleosynthetic yields and with observations of metal poor stars assuming a fiducial quasargalaxy uvb consistent with these constraints yields a primary result that o c 066 xmath4 006 xmath4 02 this result is sensitive to gas with overdensity xmath5 consistent results are obtained by similarly comparing xmath6 and xmath7 to simulation values and also by directly ionization correcting xmath8 as function of xmath9 into o h as a function of density subdividing the sample reveals no evidence for evolution but low and highxmath9 samples are inconsistent suggesting either density dependence of o c or more likely prevalence of collisionally ionized gas at high density
introduction observations method results analysis and discussion of results conclusions
the experimental data used in this paper were collected by the forward looking radar of the us army research laboratory xcite that radar was built for detection and possible identification of shallow explosive like targets since targets are three dimensional objects one needs to measure a three dimensional information about each target however the radar measures only one time dependent curve for each target see figure 5 therefore one can hope to reconstruct only a very limited information about each target so we reconstruct only an estimate of the dielectric constant of each target for each target our estimate likely provides a sort of an average of values of its spatially distributed dielectric constant but even this information can be potentially very useful for engineers indeed currently the radar community is relying only on the energy information of radar images see eg xcite estimates of dielectric constants of targets if taken alone can not improve the current false alarm rate however these estimates can be potentially used as an additional piece of information being combined with the currently used energy information this piece of the information might result in the future in new classification algorithms which might improve the current false alarm rate an inverse medium scattering problem imsp is often also called a coefficient inverse problem cip imsps cips are both ill posed and highly nonlinear therefore an important question to address in a numerical treatment of such a problem is how to reach a sufficiently small neighborhood of the exact coefficient without any advanced knowledge of this neighborhood the size of this neighborhood should depend only on the level of noise in the data and on approximation errors we call a numerical method which has a rigorous guarantee of achieving this goal globally convergent method gcm in this paperwe develop analytically a new globally convergent method for a 1d inverse medium scattering problem imsp with the data generated by multiple frequencies in addition to the analytical study we test this method numerically using both computationally simulated and the above mentioned experimental data first we derive a nonlinear integro differential equation in which the unknown coefficient is not present element of this paper is the method of the solution of this equation this method is based on the construction of a weighted least squares cost functional the key point of this functional is the presence of the carleman weight function cwf in it this is the function which is involved in the carleman estimate for the underlying differential operator we prove that given a closed ball of an arbitrary radius xmath1 with the center at xmath2 in an appropriate hilbert space one can choose the parameter xmath3 of the cwf in such a way that this functional becomes strictly convex on that ball the existence of the unique minimizer on that closed ball as well as convergence of minimizers to the exact solution when the level of noise in the data tends to zero are proven in addition it is proven that the gradient projection method reaches a sufficiently small neighborhood of the exact coefficient if its starting point is an arbitrary point of that ball the size of that neighborhood is proportional to the level of noise in the data therefore since restrictions on xmath4 are not imposed in our method then this is a globally convergent numerical method we note that in the conventional case of a non convex cost functional a gradient like method converges to the exact 
solution only if its starting point is located in a sufficiently small neighborhood of this solution this is due to the phenomenon of multiple local minima and ravines of such functionals unlike previously developed globally convergent numerical methods of the first type for cips see this section below the convergence analysis for the technique of the current paper does not impose a smallness condition on the interval xmath5 of the variations of the wave numbers xmath6 the majority of currently known numerical methods of solutions of nonlinear ill posed problems use the nonlinear optimization in other words a least squares cost functional is minimized in each problem see eg chavent engl gonch1gonch2 however the major problem with these functionals is that they are usually non convex figure 1 of the paper scales presents a numerical example of multiple local minima and ravines of non convex least squares cost functionals for some cips hence convergence of the optimization process of such a functional to the exact solution can be guaranteed only if a good approximation for that solution is known in advance however such an approximation is rarely available in applications this prompts the development of globally convergent numerical methods for cips see eg xcite the first author with coauthors has proposed two types of gcm for cips with single measurement data the gcm of the first type is reasonable to call the tail functions method this development has started from the work xcite and has been continued since then see eg xcite and references cited therein in this case on each step of an iterative process one solves the dirichlet boundary value problem for a certain linear elliptic pde which depends on that iterative step the solution of this pde allows one to update the unknown coefficient first and then to update a certain function which is called the tail function the convergence theorems for this method impose a smallness condition on the interval of the variation of either the parameter xmath7 of the laplace transform of the solution of a hyperbolic equation or of the wave number xmath8 in the helmholtz equation recall that the method of this paper does not impose the latter assumption in this paper we present a new version of the gcm of the second type in any version of the gcm of the second typea weighted cost functional with a cwf in it is constructed the same properties of the global strict convexity and the global convergence of the gradient projection method hold as the ones indicated above the gcm of the second type was initiated in klib95klib97kt with a recently renewed interest in xcite the idea of any version of the gcm of the second type has direct roots in the method of xcite which is based on carleman estimates and which was originally designed in xcite only for proofs of uniqueness theorems for cips also see the recent survey in xcite another version of the gcm with a cwf in it was recently developed in bau1 for a cip for the hyperbolic equation xmath9 where xmath10 is the unknown coefficient this gcm was tested numerically in xcite in bau1bau2 non vanishing conditionsare imposed it is assumed that either xmath11 or xmath12 or xmath13 in the entire domain of interest similar assumptions are imposed in xcite for the gcm of the second type on the other hand we consider in the current paper so as in xcite the fundamental solution of the corresponding pde the differences between the fundamental solutions of those pdes and solutions satisfying non vanishing conditions cause quite 
significant differences between klib95klib97kt ktsiap and xcite of corresponding versions of the gcm of the second type recently the idea of the gcm of the second type was extended to the case of ill posed cauchy problems for quasilinear pdes see the theory in klquasi and some extensions and numerical examples in bakklkosh klkosh cips of wave propagation are a part of a bigger subfield inverse scattering problems isps isps attract a significant attention of the scientific community in thisregard we refer to some direct methods which successfully reconstruct positions sizes and shapes of scatterers without iterations xcite we also refer to xcite for some other isps in the frequency domain in addition we cite some other numerical methods for isps considered in xcite as to the cips with multiple measurement ie the dirichlet to neumann map data we mention recent works xcite and references cited therein where reconstruction procedures are developed which do not require a priori knowledge of a small neighborhood of the exact coefficient in section 2 we state our inverse problem in section 3 we construct that weighted cost functional in section 4we prove the main property of this functional its global strict convexity in section 5we prove the global convergence of the gradient projection method of the minimization of this functional although this paper is mostly an analytical one sections 3 5 we complement the theory with computations in section 6we test our method on computationally simulated data in section 7we test it on experimental data concluding remarks are in section 8 let the function xmath14 be the spatially distributed dielectric constant of the medium we assume thatxmath15xmath16fix the source position xmath17 for brevity we do not indicate below dependence of our functions on xmath18 consider the 1d helmholtz equation for the function xmath19xmath20xmath21let xmath22 be the solution of the problem 24 26 for the case xmath23 thenxmath24our interest is in the following inverse problem inverse medium scattering problem imsp let xmath25subset left 0infty right be an interval of wavenumbers xmath26 reconstruct the function xmath27 assuming that the following function xmath28 is known xmath29 label28 denotexmath30it follows from 28 2100 and xcite that xmath31 label2101xmath32 label2160 in this subsection we briefly outline some results of xcite which we use below in this paper existence and uniqueness of the solution xmath33 for each xmath8 was established in xcite also it was proven in xcite that xmath34 forall k0 label29in particular xmath35 in addition uniqueness of our imsp was proven in klibloc also the following asymptotic behavior of the function xmath36 takes place xmath37 left 1 widehatuleft x kright right krightarrow infty forall xin left 01right label210xmath38 given 29 and 210 we now can uniquely define the function xmath39 as in xcite the difficulty here is in defining xmath40 since this number is usually defined up to the addition of xmath41 where xmath42 is an integer for sufficiently large values of xmath26we define the function xmath39 using 260 2100 210 and 21000 as xmath43where xmath44hence for sufficiently large xmath26 xmath45which eliminates the above mentioned ambiguity suppose that the number xmath46 is so large that 212 is true for xmath47 then xmath48 is defined as in 211 as to not large values of xmath26 we define the function 211xmath49 as xmath50by 29 xmath51 forall xi 0 differentiating both sides of 213 with respect to xmath26 we obtain xmath52multiplying both 
sides of 214 by xmath53 we obtain xmath54 hence there exists a function xmath55 independent on xmath26 such that xmath56setting in 215 xmath57 and using the fact that by 213 xmath58 we obtain xmath59 label2150hence 213 and 215 imply that xmath60 is defined as xmath61in this section we construct the above mentioned weighted cost functional with the cwf in it lemma 31 carleman estimate for any complex valued function xmath62 with xmath63 and for any parameter xmath64 the following carleman estimate holds xmath65 label300where the constant xmath66 is independent of xmath67and xmath68 proof in the case when the integral with xmath69 is absent in the right hand side of 300 this lemma was proved in klibloc to incorporate this integral we note that xmath70 label302let xmath71 then 302 implies 300 where xmath72 is replaced with xmath73 xmath74 for xmath75kin lbrack underlinekoverlinek consider the function xmath76 and its xmath77derivative xmath78 where xmath79 hencexmath80consider the function xmath81 which we call the tail function and this function is unknownxmath82 let xmath83 note that since xmath84 for xmath85 then equation 24 and the first condition 26 imply that xmath86 for xmath87 hence 260 and 2100 imply that xmath88 for xmath87 it follows from 24 260 21002160 215 and 2150 that xmath89xmath90using 215 2150 30 and 33 we obtain xmath91differentiate 35 with respect to xmath26 and use 3034 we obtain xmath92xmath93xmath94wherexmath95 kin left underlinekoverlinekright label370 we have obtained an integro differential equation 36 for the function xmath96 with the overdetermined boundary conditions 37 the tail function xmath97 is also unknown first we will approximate the tail function xmath98 next we will solve the problem 36 37 for the function xmath96 to solve this problem we will construct the above mentioned weighted cost functional with the cwf xmath99 in it see 300 this construction combined with corresponding analytical results is the central part of our paper thus even though the problem 36370 is the same as the problem 65 66 in xcite the numerical method of the solution of the problem 36370 is radically different from the one in xcite now suppose that we have obtained approximations for both functions xmath100 and xmath78 then we obtain the unknown coefficient xmath101 via backwards calculations first we calculate the approximation for the function xmath102 via 31 and 32 next we calculate the function xmath103 via 35 we have learned from our numerical experience that the best value of xmath26 to use in 35 for the latter calculation is xmath104 the approximation for the tail function is done here the same way as the approximation for the so called first tail function in section 42 of xcite however while tail functions are updated in xcite we are not doing such updates here it follows from 21002110 and 3032 that there exists a function xmath105 such that xmath106hence assuming that the number xmath107 is sufficiently large we drop terms xmath108 and xmath109 in 38 next we setxmath110set xmath111 in 36 and 37 next substitute 39 in 36 and 37 at xmath57 we obtain xmath112 recall that functions xmath113 and xmath114 are linked via 2160 thus xmath115where functions xmath113 and xmath116 are defined in 2101 and 2160 respectively it seems to be at the first glance that one can find the function xmath98 as for example cauchy problem for ode 310 with data xmath117 and xmath118 however it was noticed in remark 51 of xcite that this approach being applied to a similar problem does not lead to good 
results we have the same observation in our numerical studies this is likely to the approximate nature of 39 thus just like in xcite we solve the problem 310 311 by the quasi reversibility method qrm the boundary condition xmath119 provides a better stability property so we minimize the following functional xmath120 on the set xmath121 where xmath122xmath123where xmath124 is the regularization parameter the existence and uniqueness of the solution of this minimization problem as well as convergence of minimizers xmath125 in the xmath126norm to the exact solution xmath127 of the problem 311 312 with the exact data xmath128 as xmath129 were proved in xcite we note that in the regularization theory one always assumes existence of an ideal exact solution with noiseless data xcite recall that by the embedding theorem xmath130 and xmath131 leq cleftvert frightvert h2left 01right forall fin h2left 01right label3130 where xmath66 is a generic constantxmath132 theorem 31 is a reformulation of theorem 42 of xcite theorem 31 let the function xmath133 satisfying conditions 2122 be the exact solution of our imsp with the noiseless data where xmath135 and xmath136 is the solution of the forward problem 24 26 let the exact tail function xmath137 and the function xmath138have the form 39 with xmath139 assume that for xmath141where xmath142 is a sufficiently small number which characterizes the level of the error in the boundary data let in 312 xmath143 let the function xmath144 be the minimizer of the functional 312 on the set of functions xmath121 defined in 313 then there exists a constant xmath145depending only on xmath107 and xmath146 such that xmath147 leq cleftvert valpha left delta right left xright vast left x overlinekright rightvert h2left 01right leq c1delta label315 remark 31 we have also tried to consider two terms in the asymptotic expansion for xmath98 in 38 the second one with xmath148 this resulted in a nonlinear system of two equations we have solved it by via minimizing an analog of the functional of section 33 however the quality of resulting images deteriorated as compared with the above function xmath149 in addition we have tried to iterate with respect to the tail function xmath98 however the quality of resulting images has also deteriorated consider the function xmath78 satisfying 36370 in sections 52 and 53 we use lemma 21 and theorem 21 of bakklkosh to apply theorems we need to have zero boundary conditions at xmath150 hence we introduce the function xmath151xmath152denote xmath153also replace in 36 xmath98 with xmath154 then 36 37 and 316 and 3170 imply thatxmath155xmath156xmath157 introduce the hilbert space xmath158 of pairs of real valued functions xmath159 xmath160 asxmath161 12infty endarray right label319here and below xmath162 based on 317 and 318 we define our weighted cost functional asxmath163let xmath1 be an arbitrary number let xmath164 be the closure in the norm of the space xmath158 of the open set xmath165 of functions xmath166 defined as xmath167 minimization problem minimize the functional xmath168 on the set xmath169 remark 31 the analytical part of this paper below is dedicated to this minimization problem since we deal with complex valued functions we consider below xmath170 as the functional with respect to the 2d vector of real valued functions xmath171 thus even though we the consider complex conjugations below this is done only for the convenience of writing below xmath172 is the scalar product in xmath158 even though we use in 316 and 317 the functions 
xmath173 xmath174 it is always clear from the context below what do we actually mean in each particular case the first component of xmath175 of the vector function xmath166 or the above functions xmath176theorem 41 is the main analytical result of this paper theorem 41 assume that conditions of theorem 31 are satisfied then the functional xmath170 has the frecht derivative xmath178for all xmath179 also there exists a sufficiently large number xmath180 rright 1 depending only on listed parameters and a generic constant xmath66 such that for all xmath181the functional xmath170 is strictly convex on xmath182 ie for all xmath183 xmath184 proof everywhere below in this paper xmath185 rright 0 denotes different constants depending only on listed parameters since conditions of theorem 31 are satisfied then by 315xmath186 leq leftvert vast rightvert c1left 01right c1delta leq c2 label3220let xmath187 where xmath188 then 3130 319 and 321 imply that xmath189 2dkleq c2 label323using 323 we obtainxmath190xmath191 2dkleq c2 we use the formula xmath192where xmath193 is the complex conjugate of xmath194 denote xmath195consider functions xmath196 defined asxmath197first using 317 and 325 we single out in xmath198 the part which is linear with respect to the vector function xmath199 thenxmath200xmath201xmath202by 325 xmath203 hprime right overlineaxmath204 intlimitskoverlinekhprime left xtau right dtau cdot overlinea label3261xmath205 overlineahencexmath206 overlinelleft pright hprime xmath207 overlinelleft pright intlimitskoverlinekhprime left xtau right dtau label327xmath208where xmath209 depends nonlinearly on the vector function xmath210 also by 3220324 and the cauchy schwarz inequalityxmath211to explain the presence of the multiplier 12 at xmath212 in 328 we note that it follows from 3260 that the term xmath213 in 3261 contains the term xmath214 which is included in 327 already as well as termsxmath215we now show how do we estimate the third term in 3280 since estimates of two other terms are simpler we use the so called cauchy schwarz inequality with xmath216xmath217where xmath218 is the scalar product in xmath219 hencexmath220thus choosing appropriate numbers xmath221 we obtain the term xmath222 in 328 the second term in the right hand side of 328 is obtained similarly analogously using 3250325 we obtainxmath223 hprime right cdot lleft prightxmath224 intlimitskoverlinekhprime left xtau right dtau cdot lleft pright label329xmath225where xmath226 depends nonlinearly on the vector function xmath227 and similarly with 328xmath228 it is clear from 325 327330 that the linear with respect to the vector function xmath227 part of xmath229 consists of the sum of the first two lines of 327 with the first two lines of 329 we denote this linear part as xmath230 then xmath231thus using 320 and 325 we obtainxmath232xmath233consider the expression xmath234xmath235it follows from 317 3220 327 and 329 that xmath236 is a bounded linear functional hence by riesz theorem there exists unique element xmath237 such that xmath238 forall hin h label333it follows from 328 and 330333 thatxmath239 oleft leftvert hrightvert h2right thus the frecht derivative xmath240 of the functional xmath170 at the point xmath241 exists and xmath242 note that xmath243 label335hence using 328 330334 and lemma 31 we obtainxmath244xmath245xmath246xmath247xmath248choose the number xmath249 rright 1 so large that xmath250 then using 335 and 336 we obtain with a new generic constant xmath66 for all xmath181xmath251using theorem 41 we establish in 
this section the global convergence of the gradient projection method of the minimization of the functional xmath252 as to some other versions of the gradient method they will be discussed in follow up publications first we need to prove the lipschitz continuity of the functional xmath254 with respect to xmath241 theorem 51 let conditions of theorem 31 hold then the functional xmath178 is lipschitz continuous on the closed ball xmath169 in other wordsxmath255 proof consider for example the first line of 327 for xmath256 and denote it xmath257 we define xmath258 similarly both these expressions are linear with respect to xmath259 denote xmath260 we havexmath261xmath262 label52xmath263 hprime it is clear from 317 that xmath264 hence using 335 52 and cauchy schwarz inequality we obtainxmath265xmath266the rest of the proof of 51 is similar xmath74 theorem 52 claims the existence and uniqueness of the minimizer of the functional xmath170 on the set xmath268 theorem 52 let conditions of theorem 41 hold then for every there exists unique minimizer xmath269 of the functional xmath170 on the set xmath169 furthermorexmath270 geq 0forall yin overlinebleft rright label53 proof this theorem follows immediately from the above theorem 41 and lemma 21 of xcite xmath74 let xmath271 be the operator of the projection of the space xmath158 on the closed ball xmath272 let xmath273 and let xmath274 be an arbitrary point of xmath164 consider the sequence of the gradient projection methodxmath275 theorem 53 let conditions of theorem 41 hold then for every xmath181 there exists a sufficiently small number xmath276 leftvert p1rightvert cleft underlinekoverlinekright rlambda right in left 01right and a number xmath277 such that for every xmath278 the sequence 54 converges to the unique minimizer xmath279 of the functional xmath280 on the set xmath281 and xmath282 proof this theorem follows immediately from the above theorem 41 and theorem 21 of xcite xmath74 as it was pointed out in section 32 following one of the main concepts of the regularization theory xcite we assume the existence of the exact solution xmath133 of our imsp with the exact ie noiseless data xmath283 in 28 below the superscript xmath284 denotes quantities generated by xmath285 the level of the error xmath142 was introduced in our data in 314 in particular it follows from 37 370 and 314 thatxmath286 leftvert p1p1ast rightvert cleft underlinek overlinekright leq c3delta label56where the number xmath287 depends only on listed parameters thus in this section we show that the gradient projection method delivers points in a small neighborhood of the function xmath288 and therefore of the function xmath289 the size of this neighborhood is proportional to xmath290 it is convenient to indicate in this section dependencies of the functional xmath291 from xmath292 and xmath293 hence we write in this section xmath294 theorem 54 assume that conditions of theorem 41 hold also let the exact function xmath295 then the following accuracy estimates hold for each xmath181xmath296xmath297where xmath279 is the minimizer of the functional xmath298 which is guaranteed by theorem 52 and xmath299is the corresponding reconstructed coefficient section 31 in addition let be the sequence 54 of the gradient projection method where xmath301 is an arbitrary point of xmath302 and numbers xmath303 xmath304 and xmath305 are the same as in theorem 53 be the corresponding sequence of reconstructed coefficients section 31 then the following estimates holdxmath307xmath308 proof 
obviously xmath309 using 315 3170 317 56 and 511 we obtain xmath310 xmath311 jlambda left past p0ast p1ast vast right label512 xmath312 2leftvert p1p1ast rightvert c left underlinekoverlinekright 2leftvert valpha left delta right vast rightvert c1left 01right 2right leq c2delta 2 by theorems 41 and 52 xmath313 label513 xmath314 by 53 and 512 xmath315 leq 0 jlambda left past p0p1valpha left delta right right leq c2delta 2 hence 513 implies 57 since the function xmath316 is obtained from the functions xmath279 and xmath317 as described in the end of section 31 then 57 implies 58 next 59 follows from 55 and 57 finally 510 follows from the procedure of section 31 and 58 xmath74 remark 51 therefore theorem 54 ensures the global convergence property of our method see the definition in the introduction since the theory of sections 3 5 is the main focus of this paper we omit some details of the numerical implementation both in this and the next sections we now briefly describe our numerical steps for both computationally simulated and experimental data to minimize the functional xmath318 we have written the derivatives of the operator xmath319 via finite differences with the step size xmath320 also we have written integrals with respect to xmath26 in discrete form using the trapezoidal rule with the step size xmath321 the differentiation of the data xmath28 with respect to xmath26 which we need in our method see 370 was performed using finite differences with the step size xmath321 we have not observed any instabilities after the differentiation probably because the number xmath322 is not too small similar conclusions were drawn in works xcite where similar differentiations were performed including cases with experimental data next we have minimized the corresponding discrete version of xmath168 with respect to the values of the function xmath323 at those grid points initially we have used the gradient projection method however we have observed in our computations that the regular and simpler gradient method provides practically the same results hence all computational results below are obtained via the gradient method the starting point of this method was xmath324 and a specific ball xmath325 was not used the latter means that the computational results are less pessimistic than our theory the step size of the gradient method xmath326 was used we have observed that this step size is the optimal one for our computations the computations were stopped after 5000 iterations based on our above theory we have developed the following algorithm
1 find the tail function xmath327 via minimizing the functional 312
2 minimize the functional 320 let xmath328 be its minimizer
3 calculate the function xmath329 see 316 and 3170
4 compute xmath330
5 compute the function xmath331 see 21 and 35 xmath332
in this algorithm unlike the previous globally convergent algorithms xcite we do not need to update the tail function xmath327 first we reconstruct the spatially distributed dielectric constant from computationally simulated data which is generated by solving the problem 24 26 via the 1d analog of the lippmann schwinger equation xcite xmath333 here and thereafter we use xmath334 in all our computations keeping in mind our desired application to imaging of flash explosive like targets we have chosen in our numerical experiments the true test coefficient xmath335 as xmath336 where xmath337 is the location of the center of our target of interest and xmath338 is its width hence the inclusion background contrast in 60 is 7
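since the precise form of the 1d lippmann schwinger equation and the source normalization used here sit behind the equation numbers above, the following python sketch shows only a generic nystrom (trapezoidal rule) discretization of an equation of that type, with a hypothetical grid size, wavenumber, source position and step shaped test coefficient, to illustrate how such simulated data can be generated

```python
import numpy as np

def forward_1d_ls(c, k, x0=-1.0, n=201):
    """Nystrom (trapezoidal-rule) solution of a schematic 1D
    Lippmann-Schwinger equation
        u(x) = u0(x) + (i*k/2) * int_0^1 exp(i*k*|x-y|) * (c(y)-1) * u(y) dy,
    with the free-space field u0(x) = (i/(2*k)) * exp(i*k*|x-x0|).
    Collocation on a uniform grid followed by a dense linear solve."""
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5                                   # trapezoidal weights
    u0 = 1j / (2.0 * k) * np.exp(1j * k * np.abs(x - x0))
    kernel = 0.5j * k * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))
    A = kernel * ((c(x) - 1.0) * w)[None, :]       # discretized integral operator
    u = np.linalg.solve(np.eye(n) - A, u0)         # (I - A) u = u0
    return x, u

# hypothetical inclusion of contrast 7 centered at 0.5 with half-width 0.05
c = lambda x: np.where(np.abs(x - 0.5) < 0.05, 7.0, 1.0)
x, u = forward_1d_ls(c, k=6.0)
```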
for our numerical experiments we have chosen in 60 xmath339 figure fig u0abs displays a typical behavior of the modulus of the simulated data xmath340 at the measurement point xmath341 one can observe that xmath342 next xmath340 changes too rapidly for xmath343 hence the interval xmath344 seems to be the optimal one and we indeed observed this in our computations hence we choose for our study xmath345 and xmath346 we note that even though the above theory of the choice of the tail function xmath327 works only for sufficiently large values of xmath347 the notion sufficiently large is relative see eg 620 besides it is clear from section 7 that we actually work in the gigahertz range of frequencies and this can be considered as the range of large frequencies in physics next having the values of xmath348 we calculate the function xmath349 in 28 and introduce the random noise in this function xmath350 where xmath351 and xmath352 are random numbers uniformly distributed on xmath353 the next important question is about the choice of an optimal parameter xmath354 indeed even though theorem 41 says that the functional xmath170 is strictly convex on the closed ball xmath164 for all xmath355 in fact the larger xmath356 is the smaller is the influence on xmath168 of those points xmath357 which are relatively far from the point xmath358 where the data are given hence we need to choose a value of xmath359 which provides satisfactory images of inclusions whose centers xmath337 are as in 62 xmath360 let xmath361 be the discrete xmath362 norm of the gradient of the above described discrete version of the functional xmath363 figure fig gnorm displays the dependencies of this norm on the iteration number of the gradient method for different values of xmath356 we have observed in our computations that these dependencies are very similar for targets satisfying 60 62 with different values of target background contrasts one can see that the process diverges at xmath364 which is to be expected since convexity of xmath365 is not guaranteed also we observe that the larger xmath366 is the faster the process converges we have found that the optimal value of xmath356 for targets satisfying 62 is xmath367 we also apply a post processing procedure after step 5 of the above algorithm more precisely we smooth out the function xmath368 c using a simple averaging procedure over two neighboring grid points next the resulting function xmath369 is truncated as xmath370 the function xmath371 in 61 is considered as our reconstructed coefficient xmath372 the computational results xmath375 for different values of xmath337 are shown in figure fig results one can see that the proposed algorithm accurately reconstructs both locations and values of the coefficient xmath335 similar accuracy was obtained for other target background contrasts in 60 varying from 2 to 10 we use here the same experimental data as those used in klibloc kuzh ieee where these data were treated by the tail functions method thus it is worth testing the new method of this paper on the same data set in xcite the wave propagation process was modeled by a 1d hyperbolic equation the laplace transform with respect to time was applied to the solution of this equation and then the tail functions method was applied to the corresponding imsp in xcite the process was modeled by imsp 28 and the tail functions method was applied to this imsp the data in xcite and in xcite were
obtained after applying the laplace and fourier transforms respectively to the original time dependent data. we observed a substantial mismatch of amplitudes between computationally simulated and experimental data, hence we calibrated the experimental data by multiplying them by the calibration factor xmath376, just as in xcite. our experimental data were collected in the field by the forward looking radar of the us army research laboratory xcite; the schematic diagram of data collection is presented in figure fig setup. the device has two sources placed on the top of a car; the sources emit pulses. the device also has 16 detectors, which measure the backscattering time resolved signal, which is actually the voltage. pulses of only one component of the electric field are emitted, and the same component is measured on those detectors. the time step size of the measurements is 0.133 nanoseconds, and the maximal amplitudes of the measured signal are seen at about 2 nanoseconds, see figure 5 (in figure 5 the horizontal axis is time in nanoseconds). since 1 nanosecond corresponds to the frequency of 1 gigahertz xcite, the corresponding frequency range is in gigahertz, which is considered the range of high frequencies in physics. the car moves, and the time dependent backscattering signal is measured at distances from 20 to 8 meters from the target of interest; the collected signals are averaged. users know the horizontal coordinates of each target with very good precision; to do this the global positioning system is used. two kinds of targets were tested: ones located in air and ones buried at a depth of a few centimeters in the ground. while it is assumed both in 21 and 61 that xmath377, we had one target buried in the ground in which xmath378; this target was a plastic cylinder. it was shown on page 2944 of xcite that, using the original time dependent data, one can figure out that inside the target xmath379. hence in this case we replace c and 61 with xmath380 xmath381. suppose that a target occupies a subinterval xmath382. in fact we estimate here the ratio of dielectric constants of targets and backgrounds for xmath383. thus our computed function xmath384 in 61 and 72 is an estimate of the function xmath385 xmath386, where xmath387 is the spatially distributed dielectric constant of that target. using 61, 720, 72 and 73 we define the computed target background contrast in the dielectric constant as xmath388, where the minimum of ccomp(x) is taken if ccomp(x) leq 1 for all x in (0,1) (label 74). finally we introduce the number xmath389, which is our estimate of the dielectric constant of a target, xmath390. we have chosen the interval xmath391 as xmath392 [underline k, overline k] (label 71); the considerations for the choice 71 were similar to those for the case of simulated data in section 62. we had experimental data for a total of five targets. the background was air, with xmath393, in the case of targets placed in air, and it was sand, with xmath394 xcite, in the case of buried targets. two targets, bush and wood stake, were placed in air, and three targets, metal box, metal cylinder and plastic cylinder, were buried
in sand figures fig expres display some samples of calculated images of targets dielectric constants of targets were not measured in experiments so the maximum what we can do at this point is to compare our computed values of xmath395 with published ones this is done in table tab1 in which xmath396 is a published value as to the metallic targets it was established numerically in xcite that they can be approximated as dielectric targets with large values of the dielectric constantxmath397 published values of dielectric constants of sand wood and plastic can be found in xcite as to the casewhen the target was a bush we took the interval of published values from xcite bush was the most challenging target to image this is because bush is obviously a significantly heterogeneous target summary of estimated dielectric constants xmath398 colsoptionsheader for the engineering part of this team of coauthors ln and as the depth of burial of a target is not of an interest here since all depths are a few centimeters it is also clear that it is impossible to figure out the shape of the target given so limited information content on the other hand the most valuable piece of the information for ln and as is in estimates of the dielectric constants of targets therefore table tab1 is the most interesting piece of the information from the engineering standpoint indeed one can see in this table that values of estimated dielectric constants xmath398 are always within limits of xmath399 as it was pointed out in section 1 these estimates even if not perfectly accurate can be potentially very useful for the quite important goal of reducing the false alarm rate this indicates that the technique of the current paper might potentially be quite valuable for the goal of an improvement of the false alarm rate the above results inspire ln and as to measure dielectric constants of targets in the future experiments our team plans to treat those future experimental data by the numerical method of this publication we have developed a new globally convergent numerical method for the 1d inverse medium scattering problem 28 unlike the tail function method the one of this paper does not impose the smallness condition on the size of the interval xmath391 of wave numbers the method is based on the construction of a weighted cost functional with the carleman weight function in it the main new theoretical result of this paper is theorem 41 which claims the strict convexity of this functional on any closed ball xmath400 for any radius xmath1 as long as the parameter xmath401 of this functional is chosen appropriately global convergence of the gradient method of the minimization of this functional to the exact solution is proved numerical testing of this method on both computationally simulated and experimental data shows good results h ammari y t chow and j zou phased and phaseless domain reconstructions in the inverse scattering problem via scattering coefficients siam journal on applied mathematics 76 2016 pp 10001030 a b bakushinskii m v klibanov and n a koshev carleman weight functions for a globally convergent numerical method for ill posed cauchy problems for some quasilinear pdes nonlinear analysis real world applications 34 2017 pp 201224 m v klibanov n a koshev j li and a g yagola numerical solution of an ill posed cauchy problem for a quasilinear parabolic equation using a carleman weight function journal of inverse and ill posed problems 24 2016 pp 761776 m v klibanov d nguyen l h nguyen and h liu a globally convergent 
numerical method for a 3d coefficient inverse problem with a single measurement of multi frequency data 2016 httpsarxivorgabs161204014 m v klibanov l h nguyen a sullivan and l nguyen a globally convergent numerical method for a 1d inverse medium problem with experimental data inverse problems and imaging 10 2016 pp 10571085 a v kuzhuget l beilina m v klibanov a sullivan l nguyen and m a fiddy blind backscattering experimental data collected in the field and an approximately globally convergent inverse algorithm inverse problems 28 2012 p 095007 a v kuzhuget l beilina m v klibanov a sullivan l nguyen and m a fiddy quantitative image recovery from measured blind backscattered data using a globally convergent inverse method ieee transactions on geoscience and remote sensing 51 2013 pp 29372948 nguyen m v klibanov l h nguyen a e kolesov m a fiddy and h liu numerical solution of a coefficient inverse problem with multi frequency experimental raw data by a globally convergent algorithm 2016 httpsarxivorgabs160903102 l nguyen d wong m ressler f koenig b stanton g smith j sichina and k kappra obstacle avoidance and concealed target detection using the army research lab ultra wideband synchronous impulse reconstruction uwb sire forward imaging radar 2007 p 65530h m sini and n t thnh regularized recursive newton type methods for inverse scattering problems using multifrequency measurements esaim mathematical modelling and numerical analysis 49 2015 pp 459480 n t thnh l beilina m v klibanov and m a fiddy imaging of buried objects from experimental backscattering time dependent measurements using a globally convergent inverse algorithm siam journal on imaging sciences 8 2015 pp 757786 dielectric constant table httpswwwhoneywellprocesscomlibrarymarketingtechspecsdielectric constant tablepdfhttpswwwhoneywellprocesscomlibrarymarketingtechspecsdielectric constant tablepdf
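to make the numerical experiments section above more concrete, the following is a minimal python sketch of the simulated data pipeline described there: multiplicative uniform noise added to the complex data, a plain gradient descent whose gradient norm is monitored as in figure fig gnorm, and the post processing step of averaging over two neighbouring grid points followed by truncation. the three pieces are shown side by side rather than wired into the full inversion; the quadratic stand in for the carleman weighted functional, the 5 percent noise level, the assumed interval [-1,1] for the random perturbations, the step size and names such as add_noise, grad_j and truncate are illustrative assumptions, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

# noise model: multiplicative uniform noise on complex simulated data
def add_noise(g, sigma=0.05):
    # g(k) -> g(k) * (1 + sigma*(w1 + i*w2)), w1, w2 uniform on [-1, 1] (assumed interval)
    w1 = rng.uniform(-1.0, 1.0, size=g.shape)
    w2 = rng.uniform(-1.0, 1.0, size=g.shape)
    return g * (1.0 + sigma * (w1 + 1j * w2))

# placeholder strictly convex functional and its gradient (hypothetical stand-in
# for the carleman weighted functional minimized in the paper)
A = np.diag(np.linspace(1.0, 3.0, 50))
b = rng.standard_normal(50)
def grad_j(q):
    return A @ q - b

def gradient_descent(q0, step=0.1, n_iter=200):
    q, grad_norms = q0.copy(), []
    for _ in range(n_iter):
        g = grad_j(q)
        grad_norms.append(np.linalg.norm(g))   # discrete l2 norm of the gradient, as monitored per iteration
        q -= step * g
    return q, grad_norms

# post processing: averaging over two neighbouring grid points, then truncation
def smooth_two_point(q):
    q_s = q.copy()
    q_s[1:] = 0.5 * (q[1:] + q[:-1])
    return q_s

def truncate(q, floor=1.0):
    # schematic version of the truncation step: values below the background level are reset
    return np.where(q >= floor, q, floor)

q_rec, norms = gradient_descent(np.zeros(50))
q_final = truncate(smooth_two_point(q_rec))

in the paper the corresponding functional is minimized on a closed ball with the carleman parameter chosen large enough for strict convexity; the sketch only illustrates the bookkeeping around that minimization.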
a new numerical method is proposed for a 1d inverse medium scattering problem with multi frequency data. the method is based on the construction of a weighted cost functional whose weight is a carleman weight function (cwf), i.e. the function which appears in the carleman estimate for the underlying differential operator. the presence of the cwf makes this functional strictly convex on any a priori chosen ball with center at xmath0 in an appropriate hilbert space. convergence of the gradient minimization method to the exact solution, starting from any point of that ball, is proven. computational results for both computationally simulated and experimental data show a good accuracy of this method. key words: global convergence, coefficient inverse problem, multi frequency data, carleman weight function. 2010 mathematics subject classification: 35r30
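as a companion to the treatment of experimental data above, here is a short sketch of how the computed target background contrast (label 74) and the resulting estimate of the target dielectric constant can be evaluated from the reconstructed coefficient. the reading of (74) as a max/min over the interval, the interpretation of the final estimate as contrast times the background dielectric constant, and all numerical values (including the sand permittivity) are assumptions made for illustration only.

import numpy as np

def target_background_contrast(c_comp):
    # in the spirit of label 74: take the maximum of c_comp where it exceeds 1,
    # otherwise (c_comp <= 1 everywhere) take its minimum
    c = np.asarray(c_comp)
    return c.max() if (c > 1.0).any() else c.min()

def estimated_dielectric_constant(c_comp, eps_background):
    # assuming c_comp estimates the ratio eps_target / eps_background, the target
    # dielectric constant is estimated as contrast * eps_background
    return target_background_contrast(c_comp) * eps_background

# usage with made-up numbers: a hypothetical target reconstructed against sand (eps ~ 4 assumed)
x = np.linspace(0.0, 1.0, 200)
c_comp = 1.0 + 1.5 * np.exp(-((x - 0.4) / 0.05) ** 2)
print(estimated_dielectric_constant(c_comp, eps_background=4.0))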
introduction problem statement the weighted cost functional the global strict convexity of xmath177 global convergence of the gradient projection method numerical studies numerical results for experimental data concluding remarks
quantum information processing qip often requires pure state as the initial state xcite shor s prime factorizing algorithm xcite grover search algorithm xcite are few examples creation of pure state in nmr is not easy due to small gaps between nuclear magnetic energy levels and demands unrealistic experimental conditions like near absolute zero temperature or extremely high magnetic field this problem has been circumvented by creating a pseudo pure state pps while in a pure stateall energy levels except one have zero populations in a pps all levels except one have equal populations since the uniform background populations do not contribute to the nmr signal such a state then mimics a pure state several methods of creating pps have been developed like spatial averaging xcite logical labeling xcite temporal averaging xcite spatially averaged logical labeling technique sallt xcite however pseudo pure state as well as pure states are not stationary and are destroyed with time as the spin system relaxes toward equilibrium in qipthere are also cases where one or more qubits are initialized to a suitable state at the beginning of the computation and are used as storage or memory qubits at the end of the computation performed on some other qubitsxcite in these cases it is important for memory qubits to be in the initialized state till the time they are in use since deviation from the initial state adds error to the output result since it is not possible to stop decay of a state which is away from equilibrium alternate strategies like quantum error correction xcite noiseless subspace xcite are being tried recently sarthour et alxcite has reported a detailed study of relaxation of pseudo pure states and few other states in a quadrupolar system herewe experimentally examine the lifetime of various pseudo pure states in a weakly j coupled two qubit system we find that cross terms known as cross correlation between different pathways of relaxation of a spin can retard the relaxation of certain pps and accelerate that of others in 1946 bloch formulated the behavior of populations or longitudinal magnetizations when they are perturbed from the equilibrium xcite the recovery toward equilibrium is exponential for a two level system and for a complex system the recovery involves several time constants xcite for complex systems the von neumann liouville equation xcite describes mathematically the time evolution of the density matrix in the magnetic resonance phenomena for systemhaving more than one spin the relaxation is described by a matrix called the relaxation matrix whose elements are linear combinations of spectral densities which in turn are fourier transforms of time correlation function xcite of the fluctuations of the various interactions responsible for relaxation thereexist several different mechanisms for relaxation such as time dependent dipole dipoledd interaction chemical shift anisotropycsa quadrupolar interaction and spin rotation interaction xcite the correlation function gives the time correlations between different values of the interactions the final correlation function has two major parts namely the auto correlation part which gives at two different times the correlation between the same relaxation interaction and the cross correlation part which gives the time correlation between two different relaxation interactions the mathematics of cross correlation can be found in detail in works of schneider xcite blicharski xcite and hubbard xcite recently a few models have been suggested to 
study the decoherence of the quantum coherence the off diagonal elements in density matrix xcite it can be shown that in absence of rf pulses and under secular approximation the relaxation of the diagonal and the off diagonal elements of the density matrix are independent xcite here we study the longitudinal relaxation that is the relaxation of the diagonal elements of the density matrix and the role of cross correlations in it in terms of magnetization modes the equilibrium density matrix of a two spin system is given by xcitefigeqlev xmath0 where xmath1 and xmath2 are gyro magnetic ratios of the two spins xmath3 and xmath4 respectively the density matrix of a general state can be written as xmath5 labelgeneralendaligned which for the condition xmath6xmath7xmath8k corresponds to the density matrix of a pps given by xcite xmath9 labelppsendaligned where k is a constant the value of which depends on the method of creation of pps the first two terms in the right hand side in eqgeneral and eqpps are the single spin order modes for the first and second spin respectively while the last term is the two spin order mode of the two spins xcite choosing properly the signs of the modes the various pps of a two qubit system are xmath10nonumber chipps01 k i1z i2z 2i1zi2z nonumber chipps10 k i1z i2z 2i1zi2znonumber chipps11 k i1z i2z 2i1zi2zendaligned the relative populations of the states for different pps are shown in fig ppslev as seen in eq2 in pps the coefficients of the all three modes are equal on the other hand equilibrium density matrixdoes not contain any two spin order mode to reach eqpps starting from eqeqd the two spin order mode has to be created and at the same time the coefficients of all the modes have to be made equal the equation of motion of modes m is given by xcite xmath11 labelmagmodeendaligned where xmath12 is the relaxation matrix and xmath13 is the equilibrium values of a mode for a weakly coupled two spin system relaxing via mutual dipolar interaction and the csa relaxation the two dominant mechanism of relaxation of spin half nuclei in liquid state the above equation takes the form xmath14 left beginarrayccc rho1 sigma12 delta112 sigma12 rho2 delta212 delta112 delta212 rho12 endarray right cdot left beginarrayc i1z0i1zinfty i2z0i2zinfty 2i1zi2z endarray right labelrelaxendaligned where xmath15 is the self relaxation rate of the single spin order mode of spin xmath16 xmath17 is the self relaxation rate of the two spin order mode of spin xmath16 and xmath18 xmath19 is the cross relaxation nuclear overhouser effect noe rate between spins xmath16 and xmath18 and xmath20 is the cross correlation term between csa relaxation of spin xmath16 and the dipolar relaxation between the spins xmath16 and xmath18 xmath21 and xmath22 involve only the auto correlation terms and xmath23 involves only the cross correlation termsxcite magnetization modes of one order relaxes to other orders through cross correlation and in absence of it the relaxation matrix becomes block diagonal within each order the relaxation of modes are in general dominated by their self relaxation xmath21 but in case of samples having long xmath24 the cross correlation terms become comparable with self relaxation and play an important role in relaxation of the spins the formal solution of eq magmode is given by xmath25 exphatgammatendaligned as time evolution of various modes are coupled a general solution of the above equation requires diagonalization of the relaxation matrix however in the initial rate approximation eq7 
can be written for small values of txmath26 as xmath271hatgammatau cr vecm0 hatgammatauvecm0vecminfty labelinitialappendaligned this equation asserts that in the initial rate approximation for low xmath26 the decay or growth of a mode is linear with time and the initial slope is proportional to the corresponding relaxation matrix element if the modes are allowed to relax for a longer time their decay or growth deviates from the linear nature and adopts a multi exponential behavior to finally reach the equilibriumxcite let a two qubit system be in xmath28 pps at t0 xmath29 endaligned after time t it will relax to xmath30 where xmath31xmath32 and xmath33 are the time dependent deviations of respective modes from their initial values the deviation of the two spin order can be measured from spectrum of either spin eqe10 can also be written as xmath34 delta1t delta12ti1z delta2t delta12ti2z labelchi001 labelchi00endaligned the first term is the pseudo pure state with the coefficient decreasing in time while the other two terms are the excesses of the single spin order modes with coefficients increasing in time for other pseudo pure states eqchi001 becomes xmath35 delta1t delta12ti1z delta2t delta12ti2z chi10t k delta12ti1z i2z 2i1zi2z delta1t delta12ti1z delta2t delta12ti2z chi11t k delta12ti1z i2z 2i1zi2z delta1t delta12ti1z delta2t delta12ti2z labelchi11endaligned in the initial rate approximation using eqinitialapp we obtain for the xmath28 pps xmath36 labeld1 delta2tau tausigma12gamma1 k rho1gamma2 k k delta212 delta12tau taudelta112gamma1 k delta212gamma2 k k rho12 labeld2endaligned let the coefficients of the pps term and the two single spin order modes xmath37 and xmath38 in eqchi00 be called as xmath39xmath40 and xmath41 respectively figppsmode schematically shows the time evolution of the coefficients xmath39xmath40 and xmath41 for xmath28 pps any coefficient for any pps at any instant is simply the initial value plus the total deviation due to the auto and the cross correlations for example xmath42 for xmath28 pps at time xmath26 is xmath43 where k is the initial value and xmath44 and xmath45 are the deviations at xmath26 due to auto correlation and cross correlation parts respectively putting the values of the deviations of different modes obtained from eqd1d2 in eqchi001chi11 we obtain the contribution only of auto correlation terms to the deviation from initial value of the coefficients xmath46xmath40 and xmath41 under initial rate approximation at txmath26 as xmath47 xmath48tau mathcalbauto01tau rho1gamma1 k sigma12 gamma2 k k rho12tau nonumber mathcalbauto10tau rho1gamma1 k sigma12 gamma2 k k rho12tau mathcalbauto11tau rho1gamma1 k sigma12 gamma2 k k rho12tau nonumber mathcalcauto00tau sigma12gamma1 k rho2 gamma2 k k rho12tau mathcalcauto01tau sigma12gamma1 k rho2 gamma2 k k rho12tau nonumber mathcalcauto10tau sigma12gamma1 k rho2 gamma2 k k rho12tau mathcalcauto11tau sigma12gamma1 k rho2 gamma2 k k rho12tauendaligned it is evident that in absence of cross correlations the xmath28 and xmath49 pps relax at the same initial rate since xmath50 xmath51 and xmath52 however the same is not true for xmath53 and xmath54 pps the contribution by the cross correlation terms is given by xmath55tau nonumber mathcala01cctau delta112 gamma1 k delta212gamma2 ktau nonumber mathcala10cctau delta112 gamma1 k delta212gamma2 ktau nonumber mathcala11cctau delta112 gamma1 k delta212gamma2 ktau nonumber endaligned xmath56tau mathcalb01cctau delta112 gamma1 delta212gamma2 ktau nonumber mathcalb10cctau 
delta112 gamma1 delta212gamma2 ktau mathcalb11cctau delta112 gamma1 delta212gamma2 ktaunonumber mathcalc00cctau delta112 gamma1 k delta212 gamma2 tau mathcalc01cctau delta112 gamma1 k delta212 gamma2 taunonumber mathcalc10cctau delta112 gamma1 k delta212 gamma2 tau mathcalc11cctau delta112 gamma1 k delta212 gamma2 tau labelabc cross endaligned the important thing is that the presence of cross correlation can lead to differential relaxation of all pps positive cross correlation rates xmath57 and xmath58 slow down the relaxation of all the three coefficients for xmath28 pps since xmath59 while make the relaxation of all three coefficients faster for xmath49 pps since xmath60 for xmath53 and xmath54pps cross correlations give a mixed effect since xmath61 and xmath62 as the contributions of the auto correlation part for xmath28 and xmath49 pps are equal we have monitored the relaxation behavior only of xmath28 and xmath49 pps to study the effect of cross correlations for samples having long xmath24 where the cross correlations becomes comparable with auto correlation rates the four pps relax with four different rates and the difference increases with the increased value of the cross correlation terms the three coefficients xmath39xmath40 and xmath41 normalized to the equilibrium line intensities in terms of proton and fluorine line intensities for xmath28 pps are xmath63 xmath64 and for xmath49 pps are xmath65 xmath66 where xmath67 and xmath68 are intensities of the two proton transitions when the fluorine spin is respectively in state xmath69 and xmath70 similarly xmath71 and xmath72 are intensities of two fluorine transitions corresponding to the proton spin being respectively in the state xmath69 and xmath70 as shown in figeqlev and figallspec xmath73 and xmath74 give the xmath67 line intensity respectively at time t and at equilibrium thus by monitoring the intensities of the two proton and two fluorine transitions as a function of time one can calculate the coefficient xmath75 which is a measure of decay of pps relaxation of the coefficients xmath39xmath40 and xmath41 have been simulated using matlab for a weakly coupled xmath76xmath77 system the relaxation matrix used for the simulation is xmath78endaligned figasimu shows the decay of coefficient xmath39 with time xmath79 and xmath80 show no difference in decay rate in absence of cross correlation rates as xmath57 and xmath58are increased more and more difference in decay rate is observed figbcsimu shows growth of coefficients xmath40 and xmath41 as xmath58 is taken smaller than xmath57 difference in decay rate between xmath81 and xmath82 is found to be less than between xmath83 and xmath84 all the relaxation measurement were performed on a two qubit sample formed by one fluorine and one proton of 5fluro 13dimethyl uracil yielding an ax spin system with a j coupling of 58 hz longitudinal relaxation time constants for xmath76 and xmath85 are 6 and 72 sec respectively at room temperature 300k all the experiments were performed in a bruker drx 500 mhz spectrometer where the resonance frequencies for xmath76 and xmath85 are 47059 mhz and 50013 mhz respectively the pseudo pure state was prepared by spatial averaging method using j evolution xcite relaxation of all the three coefficients for xmath28 and xmath49 pps has been calculated since auto correlations contribute equally to the relaxation of these two pps any difference in relaxation rate can be attributed to cross correlation rates sample temperature was varied to change the correlation 
time and hence the cross correlation rate xmath23 four different sample temperatures 300k 283k 263k and 253k were used figallspec shows the proton and fluorine spectra obtained using recovery measurement at four different temperatures the spectra correspond to the initial pps state and that after an interval of 25 sec figfht1 shows the longitudinal relaxation times xmath24 of fluorine and proton as function of temperature obtained from initial part of inversion recovery experiment a steady decrease in xmath24 with decreasing temperatureindicates that the dynamics of the sample molecule is in the short correlation time limit xcite in this limit auto as well ascross correlations increase linearly with decreasing temperature all the spectra were fitted to bi lorentzian lines in matlab and various parameters were extracted using the origin software figaplot shows the decay of the coefficient xmath39 calculated independently from proton and fluorine spectra at 300k xmath79 and xmath80 showed almost same rate of decay as the temperature was gradually lowered a steady increase in difference in decay rate was observed this is due to the steady increase in cross correlation rates with decreasing temperature which is expected in the short correlation time limit in figbcplot the growths of the coefficients xmath40 and xmath41 are shown similar to the coefficient xmath39 coefficients xmath40 and xmath41 also show differences in decay rate between xmath28 and xmath49 pps at lower temperatures the difference between xmath83 and xmath84 at any temperature was found to be larger compared to between xmath81 and xmath82 this is expected since according to eqabc cross the dominant cross correlation factor in xmath83 and xmath84 is xmath57 which is the cross correlation between csa of fluorine with fluorine proton dipolar interaction whereas in xmath81 and xmath82 the dominant factor is xmath58 which is cross correlation between csa of proton which is much less than fluorine with fluorine proton dipolar interaction thus it is found that at lower temperatures the xmath28 pps decays slower than the xmath49 pps the dominant difference in the decay rates arises from the cross correlations between the csa of the fluorine and the dipolar interaction between the fluorine and the proton spin to the best of our knowledgethis is the first study of its kind where the differential decay of the pps has been attributed to cross correlations we have demonstrated here that in samples having long xmath24 cross correlations plays an important role in determining the rate of relaxation of pseudo pure state in qipsometimes one or more qubits having comparatively longer longitudinal relaxation are used as storage or memory qubits recentlylevitt et al have demonstrated a long living antisymmetric state arrived by shifting the sample from high to very low magnetic field suggesting that this long living state could be used as memory qubit xcite in such cases fidelity of computationdepends on how much the memory qubits have been deviated from the initialized state at the beginning of the computation till the time they are actually used theoretically it is shown here that in presence of cross correlations all the four pps relax with different initial rates for positive cross correlations the xmath28 pps relaxes significantly slower than xmath49 pps it is therefore important to choose a proper initial pseudo pure state according to the sample we gratefully acknowledge prof k v ramanathan for discussions and mr rangeet bhattacharyya for 
his help in data processing the use of drx500 high resolution liquid state spectrometer of the sophisticated instrument facility indian institute of science bangalore funded by department of science and technology dst new delhi is gratefully acknowledged ak acknowledges dae brns for senior scientist scheme and dst for a research grant a chemical structure of 5fluro 13dimethyl uracil the fluorine and the proton spins shown by circles are used as the two qubits xmath3 and xmath4 respectively b the energy level diagram of a two qubit system identifying the four states 000110 and 11 under high temperature and high field approximation xcite the relative equilibrium deviation populations are indicated in the bracket for each level assuming this to be a weakly coupled two spin system the deviation populations become proportional to the gyromagnetic ratios xmath1 and xmath2 xmath86 refers to the transition of the xmath87 spin when the other spin is in state xmath88 thus xmath67 means the proton transition when the fluorine is in state xmath69 figure 2 population distribution of different energy levels of a two spin system in different pseudo pure states k is a constant whose value depends on the protocol used for the preparation of pps abc and d show respectively the xmath28xmath53xmath54 and xmath49 pps figure 3 schematic representation of decay of the coefficient xmath39 and growth of the coefficients xmath40 and xmath41 the magnetization modes are normalized to their respective equilibrium values in each sub figure the three bars correspond to the modes xmath37xmath38 and 2xmath89 from left to right the amount of any mode present at any time is directly proportional to the height of the corresponding bar the numbers provided in the rightmost column represent typical values of the modes a thermal equilibrium at thermal equilibriumonly xmath37 and xmath38 exist b xmath28 pseudo pure state just after creation where all the three modes are equal in magnitude for xmath28 pps all modes are of same sign but this is not the case for other pps eq3 coefficient xmath39 is the common equal amount of all the modes and it is maximum at t0 c the amount of magnetization modes schematic at time xmath26 after preparation of the pps at t0 the two single spin order modes increase and the two spin order mode decreases from their initial values d the state of various modes at time xmath26 same as figc redrawn with filled bar to indicate the residual value of xmath39 all the three coefficients xmath39xmath40 and xmath41 are shown xmath39 shown by the filled bar which is the measure of the pps has come down by the same amount as the two spin order xmath40 shown by the empty bar and xmath41 shown by the striped bar are the residual part of the single spin order modes xmath37 and xmath38 respectivelye the values of various modes and coefficients after a delay xmath90 simulation of decay of coefficient xmath39 the boxes xmath91 and circles xmath92 correspond to the xmath49 and xmath28 pps respectively in each plot deviation from initial value xmath93has been plotted figure 5 simulation of growth of coefficient xmath40 and xmath41 the boxes xmath91 and circles xmath92 correspond to the xmath49 and xmath28 pps respectively figure 6 relaxation of pseudo pure state as monitored on a fluorine spin and b proton spin of the 5fluro 13dimethyl uracil at four different temperatures the top row in a and b show the equilibrium spectrum at each temperature with decrease in temperatures the lines broaden due to decreased xmath94 the 
second row in a and b show the spectra corresponding to the xmath28 pps prepared by spatial averaging method using j evolution the state of pps was measured by xmath95 pulse at each spin the third row in a and b show the spectra after an interval of 25 seconds after creation of the xmath28 pps the fourth row shows the spectra immediately after creation of xmath49 pps and the fifth row the spectra after 25 seconds figure 7 longitudinal relaxation time xmath24 of fluorine a and proton b as function of temperature measured from the initial part of inversion recovery experiment for each spin figure 8 the deviation from initial value at t0 of the coefficient xmath39 of the pps term calculated from proton left column and fluorine right column at four different sample temperature the empty xmath92 and filled xmath96 circles correspond to the xmath28 and xmath49 pps respectively figure 9 the growth of the coefficients xmath40 and xmath41 at different sample temperatures xmath40 was calculated from fluorine spectrum while xmath41 was calculated from the proton spectrum the empty xmath92 and filled xmath96 circles correspond to the xmath28 and xmath49 pps respectively 99 j preskill lecture notes for physics 229 quantum information and computation httptheorycaltechedu people preskill ma nielsen and il chuang quantum computation and quantum information cambridge university press 2000 pwshor polynomial time algorithms for prime factorization and discrete algorithms on quantum computer siam rev 41 1999 303 332 grover quantum mechanics helps in searching for a needle in a haystack phys 79 1997 325 cory af fahmy and tf havel ensemble quantum computing by nmr spectroscopy procnatlacadsci usa 94 1997 1634 cory m d price and tf havel nuclear magnetic resonance spectroscopy an experimentally accessible paradigm for quantum computing physica d 120 1998 82 n gershenfeld and il chuang bulk spin resonance quantum computation science 275 1997 350 chuang n gershenfeld mg kubines and dw leung bulk quantum computation with nuclear magnetic resonance procroysoclond a 454 1998 447 467 kavita dorai arvind and anil kumar implementing quantum logic operations pseudopure states and the deutsch jozsa algorithm using noncommuting selective pulses in nmr phys a 61 2000 042306 kavita dorai tsmahesh arvind and anil kumar quantum computations by nmr current science 79 2000 1447 1458 e knill i l chuang and r laflamme effective pure states for bulk quantum computation phys a 57 2000 3348 mahesh and anil kumar ensemble quantum information processing by nmr spatially averaged logical labeling technique for creating pseudopure states phys a 64 2001 012307 d gottesman and ilchuang demonstrating the viability of universal quantum computation using teleportation and single qubit operations nature london 402 1999 390 eknill and r laflamme theory of quantum error correcting codes phys a 55 1997 900 911 eknill r laflamme and l viola theory of quantum error correction for general noise phys 84 2000 2525 l viola e m fortunato m a pravia eknill r laflamme and d g cory experimental realization of noiseless subsystems for quantum information processing science 293 2001 2059 2063 r s sarthour e r deazevedo f a bonk e l g vidoto t j bonagamba a p guimarxmath97es j c c freitas and i s oliveira relaxation of coherent states in a two qubit nmr quadrupolar system phys a 68 2003 022311 f bloch nuclear induction phys 70 1946 460 a g redfield the theory of relaxation processes adv res 1 1966 1 j von neumann measurement and reversibility and the 
measuring process chapter v and vi in mathematische grund lagen der quantenmechanik springer berlin 1932 english translation by r t beyer mathematical foundations of quantum mechanics princeton unv press princeton a abragam principles of nuclear magnetic resonance claredon press oxford1961 anil kumar r c r grace p k madhu cross correlation in nmr prog in nucl res spec 37 2000 191 319 h schneider kernmagnetische relaxation von drei spin molekxmath98len i m flxmath98ssign oder adsorbierten zustandi ann 1964 313 h schneider kernmagnetische relaxation von drei spin molekxmath98len i m flxmath98ssign oder adsorbierten zustandii ann 1965 135 j s blicharski interference effect in nuclear magnetic relaxation phys 1967 608 p s hubbard some properties of correlation functions of irreducible tensor operators phys 180 1969 319 w h zurek environment induced superselection rules phys d 26 1982 1862 g teklemariam e m fortunato c c lopez j emerson j p paz t f havel and d g cory a method for modeling decoherence on a quantum information processor phys a 67 2003 062316 ersnt g bodenhausen and a wokaun principles of nuclear magnetic resonance in one and two dimensions clarendon press oxford1987 j a jones r h hansen and m mosca quantum logic gates and nuclear magnetic resonance pulse sequences jl of mag res 135 1998 353 m carravetta and m h levitt long lived nuclear spin states in high field solution nmr j am 2004 6228 m carravetta o g johannessen and malcolm h levitt beyond the xmath24 limit singlet nuclear spin states in low magnetic fields phys 92 2004 153003
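the pseudo pure states of the two qubit system discussed above are combinations of the single spin order modes i1z, i2z and the two spin order mode 2 i1z i2z (eqs 2 and 3). since the signs in the extracted equation (3) are not legible, the sketch below uses the conventional sign pattern and simply verifies that each choice singles out one level with deviation population 3k/2 while the other three levels sit at -k/2, i.e. the population pattern of figure 2; the assignment of qubit 1 to fluorine and qubit 2 to the proton follows the caption of figure 1. the code is illustrative only.

import numpy as np

# single-spin z operators embedded in the two-spin (4x4) product space
sz = 0.5 * np.diag([1.0, -1.0])
I2 = np.eye(2)
I1z = np.kron(sz, I2)          # first qubit (fluorine)
I2z = np.kron(I2, sz)          # second qubit (proton)
two_spin = 2.0 * I1z @ I2z     # two-spin order mode 2*I1z*I2z

k = 1.0   # overall constant, depends on the preparation protocol

# deviation density matrices of the four pseudo-pure states; the sign pattern is
# one common convention, chosen so that exactly one level is singled out
pps = {
    "00": k * ( I1z + I2z + two_spin),
    "01": k * ( I1z - I2z - two_spin),
    "10": k * (-I1z + I2z - two_spin),
    "11": k * (-I1z - I2z + two_spin),
}

for name, rho in pps.items():
    print(name, np.diag(rho))
# each state shows one population deviating by +3k/2 and the other three equal to -k/2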
in quantum information processing by nmr, one of the major challenges is relaxation or decoherence. the equilibrium mixed state of a spin system is generally not suitable as an initial state for computation, and a definite initial state has to be prepared prior to the computation. as these preferred initial states are non equilibrium states, they are not stationary and are destroyed with time as the spin system relaxes toward its equilibrium, introducing errors in the computation. since it is not possible to switch off the relaxation processes completely, alternate strategies such as quantum error correction codes or noiseless subsystems are being developed. here we study the relaxation behavior of various pseudo pure states and analyze the role of cross terms between different relaxation processes, known as cross correlations. it is found that while cross correlations accelerate the relaxation of certain pseudo pure states, they retard that of others
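the relaxation analysis above rests on the coupled evolution of the three magnetization modes, eqs (5)-(8). the following sketch builds the symmetric relaxation matrix with auto relaxation, noe cross relaxation and csa-dipole cross correlation entries, propagates a pseudo pure initial condition with the formal solution m(t) = m_eq + exp(-gamma t)(m(0) - m_eq), and compares it with the initial rate (linearized) approximation. all rate values, the equilibrium vector and the time points are made up for illustration and are not the measured rates of the 5-fluoro-1,3-dimethyluracil sample.

import numpy as np
from scipy.linalg import expm

def relaxation_matrix(rho1, rho2, rho12, sigma12, delta1, delta2):
    # symmetric coupling of the modes (I1z, I2z, 2 I1z I2z): auto-relaxation on the
    # diagonal, NOE cross-relaxation sigma12 between the single-spin orders, and
    # cross-correlation rates delta1, delta2 linking them to the two-spin order
    return np.array([[rho1,    sigma12, delta1],
                     [sigma12, rho2,    delta2],
                     [delta1,  delta2,  rho12]])

def evolve(m0, m_eq, gamma, t):
    # formal solution of dM/dt = -Gamma (M - M_eq):  M(t) = M_eq + exp(-Gamma t)(M0 - M_eq)
    return m_eq + expm(-gamma * t) @ (m0 - m_eq)

def initial_rate(m0, m_eq, gamma, t):
    # linearized (initial-rate) approximation, valid for small t:
    # M(t) ~ M0 - t * Gamma (M0 - M_eq)
    return m0 - t * gamma @ (m0 - m_eq)

# illustrative numbers only (rates in 1/s, time in s)
gamma = relaxation_matrix(rho1=0.15, rho2=0.10, rho12=0.20,
                          sigma12=0.02, delta1=0.05, delta2=0.01)
m_eq = np.array([1.0, 0.25, 0.0])   # equilibrium carries no two-spin order
k = 1.0
m00 = np.array([k, k, k])           # |00> pseudo-pure state: all three modes equal

for t in (0.5, 2.0, 8.0):
    exact = evolve(m00, m_eq, gamma, t)
    linear = initial_rate(m00, m_eq, gamma, t)
    print(f"t = {t:4.1f}  exact {np.round(exact, 3)}  initial-rate {np.round(linear, 3)}")
# the linear approximation tracks the exact solution at short times and departs
# from it as the multi-exponential behaviour sets in

scipy.linalg.expm does the matrix exponential, so the multi exponential behaviour beyond the initial rate regime comes out of the same few lines.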
i. introduction ii. theory iii. simulation iv. experimental v. conclusion acknowledgments
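the spectra in the nmr relaxation study above were fitted to bi lorentzian lines to extract the doublet intensities; the snippet below is a minimal, self contained version of such a fit on a synthetic doublet whose splitting is roughly the 5.8 hz j coupling quoted in the text. the line shapes, noise level, initial guess and the use of scipy curve_fit are illustrative assumptions, not the actual matlab/origin processing used in that work.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amp, f0, width):
    return amp * (width / 2) ** 2 / ((f - f0) ** 2 + (width / 2) ** 2)

def bi_lorentzian(f, a1, f1, w1, a2, f2, w2, base):
    # two lorentzian lines (the j-split doublet of one spin) on a flat baseline
    return lorentzian(f, a1, f1, w1) + lorentzian(f, a2, f2, w2) + base

# synthetic doublet with ~5.8 hz splitting plus noise (all values made up)
rng = np.random.default_rng(1)
f = np.linspace(-15.0, 15.0, 600)                 # offset frequency in hz
y = bi_lorentzian(f, 1.0, -2.9, 1.2, 0.4, 2.9, 1.2, 0.02)
y += 0.01 * rng.standard_normal(f.size)

p0 = (0.8, -3.0, 1.0, 0.5, 3.0, 1.0, 0.0)          # rough initial guess
popt, _ = curve_fit(bi_lorentzian, f, y, p0=p0)
a1, _, _, a2, _, _, _ = popt
print("fitted line intensities (proportional to amp*width):", a1 * popt[2], a2 * popt[5])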
deuterium is understood to be only produced in significant amount during primordial big bang nucleosynthesis bbn and thoroughly destroyed in stellar interiors deuterium is thus a key element in cosmology and in galactic chemical evolution see eg audouze tinsley 1976 indeed its primordial abundance is the best tracer of the baryonic density parameter of the universe xmath7 and the decrease of its abundance during the galactic evolution should trace the amount of star formation among other astrophysical interests in the galactic ism d h measurements made toward hot stars have suggested variations imaps observations toward xmath8 ori led to a low value jenkins et al 1999 confirming the previous analysis by laurent et al 1979 from copernicus observations while toward xmath9 vel they led to a high value sonneborn et al 2000this seems to indicate that in the ism within few hundred parsecs d h may vary by more than a factor xmath10 in the nearby ism the case of g191b2b was studied in detail see the most recent analysis by lemoine et al 2002 and the evaluation toward capella linsky et al 1995 taken as a reference their comparison provided for a while a possible case for d h variations within the local ism concerning g191b2b lemoine et al 2002 have shown that the total xmath11hxmath0i column density evaluation was greatly perturbed by the possible addition of two broad and weak hxmath0i components such components able to mimic the shape of the lyman xmath12 damping wings can induce an important decrease of the evaluated xmath11hxmath0i to illustrate this point the error bar estimation on xmath11hxmath0i from all previously published studies considered as the extremes of a 2xmath4 limit was of the order of dex 007 while including the lemoine et al 2002 analysis enlarged the error bar to about dex 037 this huge change has of course a considerable impact on any d h evaluation this raises two crucial questions first is that situation typical of g191b2b alone and possibly due to an unexpected shape of the core of the stellar lyman xmath12 profile improperly described by the theoretical models second if weak hxmath0i features are present in the ism to what extent are evaluations toward other targets affected from the combination of stis echelle observations spectrograph on board the hubble space telescope hst and fuse ones the far ultraviolet spectroscopic explorer moos et al 2000 lemoine et al 2002 have found through iterative fitting process with the owensf fitting program developed by martin lemoine and the french fuse team that three interstellar absorption components are present along the line of sight and that two additional broad and weak hxmath0i components could be added detected only over the lyman xmath12 line negligible over the lyman xmath13 line but important enough to strongly perturb the total hxmath0i column density evaluation within the local ism it has been shown that such additional hi absorptions are often present they have been interpreted either as cloud interfaces with the hot gas within the local ism bertin et al 1995 or as hydrogen walls signature of the shock interaction between the solar wind or stellar wind and the surrounding ism linsky 1998 this latter heliospheric absorption has been modeled by wood et al 2000 and a prediction derived in the direction of g191b2b see figure 9 of lemoine et al 2002 most of the predicted absorption is expected in the saturated core of the observed interstellar line but some weak absorption xmath14 of the continuum might extend over several 
tenths of angstroms on the red side of the line due to the neutral hydrogen atoms seen behind the shock in the downwind direction where g191b2b is located it was found that the combination of two broad and weak hi components can easily reproduce the model prediction if real besides the three interstellar absorptions a fourth component representing the bulk of the predicted absorption and a fifth one for the broad and shallow extended red wing are needed this is exactly what lemoine et al 2002 have found in the course of determining the minimum number of components each defined by its hi column density xmath1 its velocity xmath15 its temperature xmath16 and turbulence broadening xmath17 needed to fit the data lemoine et al 2002 completed the xmath18test which uses the fisher snedecor law describing the probability distribution of xmath3 ratio what is tested is the probability that the decrease of the xmath3 with additional components is not simply due to the increase of free parameters the result gives a probability xmath19 and xmath20 that a fourth and a fifth hi component are respectively not required by the data these low probabilities of non occurence strongly suggest that lemoine 2002 have indeed detected the heliospheric absorption downwind in the direction of g191b2b note however that this heliospheric complex absorption profile is simulated by two components whose physical meaning in terms of hydrogen content andor temperature is not clear furthermore the photospheric lyman xmath12 stellar core is difficult to evaluate see discussion in eg lemoine et al 2002 and is slightly red shifted relative to the ism absorptions this result may very well be simply related to the use of a white dwarf as background target star the detailed analysis of the capella line of sight could directly test the heliospheric hypothesis if the two additional components present along the g191b2b line of sight are as a matter of fact due to an heliospheric phenomenon it is an extremely local signature within few hundreds of astronomical units to be compared to the few tens of parsecs lines of sight lengths which should be also present along the capella sight line both stars being separated by only xmath21 on the sky and similar in shape to the structure predicted and observed in the direction of g191b2b if that description is correct we are expecting an extra absorption reasonably represented by two additional components a main one mostly lost within the ism absorption core and a weak one extending over several tenths of angstroms on the red side of the line again due to the neutral hydrogen atoms seen behind the shock in the downwind direction where both g191b2b and capella are located recently young et al 2002 analysed new obervations obtained at lyman xmath13 lyman xmath22 and the whole lyman series with fuse the precise lyman xmath12 stellar profile compatible with all lyman lines and with the data sets obtained at different phases of the capella binary system see also linsky et al 1995 was reevaluated wood 2001 and is used here as a reference profile sxmath23 we thus revisited the fits completed over the lyman xmath12 line as observed toward capella with the best available data set ie the one obtained with the ghrs the goddard high resolution spectrograph on board hst the study by linsky 1995 essentially confirmed by vidal 1998 shows that only one interstellar component the local interstellar cloud lic also seen toward g191b2b is needed on that line of sight this very simple structure strengthens the 
capella case as the simplest one where d h can be very well evaluated however vidal madjar et al 1998 have already noted that an additional weak and broad hi component was required to better reproduce the profile this was a first indication of the presence of an heliospheric absorption toward capella in fact we were able to show that as in the case of g191b2b the addition of one or two weak and broad hi components together with the very weak geocoronal component present at a known velocity but not shown on figure cap contvar for clarity improves the xmath3 more precisely we fitted the ghrs data assuming that the stellar continuum was sxmath23 adding successively to the fit one then two free hi components the added hi components have only three free parameters velocity xmath15 column density xmath1 and width t since the thermal xmath16 or turbulent broadening xmath17 act in an undifferentiated manner when only one species is observed we obtained the following xmath3degree of freedomdof values for only the lic component and the geocorona 84489716 for one additional component 83117713 for two additional components 82268710 the xmath18test probabilities that these two additional components are not required by the data are respectively xmath24 and xmath25 the first one is clearly needed here its correlated parameter ranges according to different possible solutions similar in terms of xmath3 are xmath26 xmath15 km sxmath27 xmath28 xmath29 xmath1 xmath30 xmath31 xmath32 t k xmath33 but unlike in the case of g191b2b the second one corresponding to the weaker and broader one parameter ranges are xmath34 xmath15 km sxmath27 xmath35 xmath36 xmath1 xmath30 xmath37 xmath38 t k xmath39 is less strongly needed these ranges are certainly compatible with the corresponding estimated values in the direction of g191b2b see figure 10 of lemoine et al 2002 to search for the possible impact of the choice of the continuum on the evaluation of xmath1hi we fixed this value and looked for the best fitted solutions while the stellar continuum we used sxmath23 was allowed for some variations by multiplying it by a low order polynomial 8xmath40 order which coefficients were free to vary along with all components parameters results are shown in figure cap contvar slight changes of the continuum shape by no more than xmath41 lead to nearly identical xmath3 values with xmath1hi varying from xmath42 to xmath43 which corresponds to a change in d h from xmath44 to xmath45 this is clearly a larger range xmath46 than the one previously claimed xmath47 linsky et al the situation could be even worse since we do not know how far the capella continuum could be away from sxmath23 the question is thus to evaluate if the capella lyman xmath12 stellar continuum shape is estimated to better than xmath41 or not it is true that having a binary system can help constraining the continuum shape as linsky 1995 did but their whole approach requires that the lyman xmath12 stellar profiles of both g1 iii and g8 iii stars are invariant with phase and time in fact from the study of 120 iue echelle spectra ayres 1993 have shown on one hand that the line fluxes were surprisingly stable but on the other hand that whichever way they process the data obvious variations were seen these seem to be related to variations of the blue peak of the g1 iii dominant stellar lyman xmath12 line they found that in the 19811986 interval the line shape at phase 025 of the system was quite stable and similar to the one recorded with the ghrs in 1991 at a xmath41 
level earlier spectra taken in 1980 or later ones observed after 1986 look quite different this very careful study shows that with the iue sensitivity level of xmath48 variations are clearly detected since linsky 1995 used ghrs observations at two different phases of the system 026 and 080 taken respectively in april 1991 and in september 1993 ie two and a half year apart it is difficult to ascertain that the lyman xmath12 profile evaluated for each stellar component is well controlled because of the very careful analysis made by linsky 1995 it may be possible that the stellar lyman xmath12 profiles are relatively well evaluated but certainly not at a level better than xmath41 as previously mentioned thus an heliospheric absorption is also detected on the capella sight line furthermore even in such a simple ism configuration a unique component it appears impossible to tightly constrain the total hi column density in that direction we have shown that for two lines of sight xmath1hi can not be evaluated with a high accuracy column densities on both sight lines are very similar of the order of few times xmath49 for lower column densities the situation should be worse since then the possible absorption signature of the weak components is becoming relatively more and more important and the lyman xmath12 line is getting closer to the flat part of the curve of growth where column densities are indeed difficult to evaluate note however that the hst euve comparison completed by linsky et al 2000 shows that often xmath1hi values derived from the ghrs and euve data not sensitive to weak hi are in good agreement implying that heliospheric absorption or other hot components do nt necessarily ruin lyman alpha analyses in a dramatic way but clearly counter examples leave that question open due to possible systematics related to the evaluation of total hi below the lyman limit in the euve domain on the contrary one could guess that for larger column densities the situation should improve since the lyman xmath12 damping wings are becoming broader and the signature of the weak features may disapear in the line core just above xmath5 the reliability of the d h values is greatly enhanced if the studied gas is demonstrably warm 6000 k for a thorough discussion see york 2001 as one goes above xmath6 credibility increases unless either cold gas components are hidden in the warm di but still affect the hi damping wings or weak hi features at high velocity are present the unknown referee further stressed this point through an impressive report he mentionned that at xmath1hixmath50 the half intensity point of pure damping lyman xmath12 and lyman xmath13 profiles located respectively at velocity shifts of 274 km sxmath27 and 55 km sxmath27 should be the place where a putative high velocity feature could have a strong perturbing influence on the damping profile however only very few high velocity ism components were detected above 120 km sxmath27 this could lead to the inverse impression that for larger column densities the estimation through the lyman xmath13 line should be more questionable than the one made at lyman xmath12 as a matter of fact in both the xmath22 cas and xmath51 pup lines of sight the hi column density was discrepant when derived through the lyman xmath12 or the lyman xmath13 line see below however in both cases the lyman xmath13 estimations of xmath1hi are smaller in contradiction with the formerly suggested cause since the most perturbed evaluation by additional absorptions should lead instead 
to larger column densities high velocity ism components essentially observed below 120 km sxmath27were only searched for through other lines and species than hi at lyman xmath12 the strongest transition of the most abundant element for instance cowie et al 1979 reported lyman xmath8 and lyman xmath52 hiism absorptions up to about 105 km s for xmath53 ori from their study the referee evaluated that a shock at 274 km sxmath27 should produce either a very broad xmath54 km sxmath27 and undetectable maximum depth of xmath55 post shock absorption signature or should originate in a region far downstream from the front where the gas has cooled and compressed enough to allow recombination of the h atoms ie a shock from a supernova explosion entering the radiative phase in this second case however he estimated from cowie and york 1978 and spitzer 1978 that such signatures should occur only very near a known supernova event ie within about 30 to 60 pc for standard ism and sn values thus that looks unlikely too on the other hand high velocity gas could be generated by the target stars while gry lamers and vidal madjar 1984 seem to detect most of the activity at velocities below 100 km s they nevertheless identified a transient component at 150 km s toward xmath9 vel through lyman xmath8 and another one at 220 km s toward xmath53 ori through lyman xmath52 note also that this survey was completed over a limited spectral domain scanned with the copernicus instrument and not at lyman xmath12 ie with a relatively limited sensitivity one might argue that there are some risks that stellar ejecta could influence the lyman xmath12 measurements but the observers have a good defense multiple observations at very different epochs this strategy was invoked by jenkins 1999 and sonneborn et al 2000 in their studies of d h toward xmath8 ori xmath51 pup and xmath9 vel their findings are thus pretty convincing in this regard all the above stated arguments should mitigate our concern that small amounts of hi at high velocities are a likely source of confusion for the flanks of the damping profiles for xmath1hi of the order of or greater than xmath56 xmath30 one however should recall the two lines of sight for which the hi column density was discrepant when derived through the lyman xmath12 or the lyman xmath13 line xmath22 cas in bohlin savage and drake 1978 lyman xmath12 only copernicus xmath1hi xmath57 in ferlet et al 1980 core of lyman xmath13 only copernicus xmath1hi xmath58 in diplas and savage 1994 from lyman xmath12 only iue xmath1hi xmath59 xmath51 pup in bohlin 1975 from lyman xmath12 only copernicus xmath1hi xmath60 in vidal madjar et al 1977 from lyman xmath13 only copernicus xmath1hi xmath61 in diplas and savage 1994 from lyman xmath12 only iue xmath1hi xmath62 in sonneborn et al 2000 from lyman xmath12 only iue xmath1hi xmath63 since lyman xmath13 is less sensitive to weak hi features it is interesting to note that in both cases the lyman xmath13 evaluation is slightly below the lyman xmath12 one a possible indication of a similar effect for column densities of the order of xmath6 note however that these two cases are marginally convincing since the different evaluations are still compatible within the error bars therefore if this effect is indeed real for relatively large column densities it means that some of the d h values may be underestimated and thus the higher d h ratios may be favoured this further shows the importance of the xmath64 vel estimation sonneborn et al according to our conclusion that 
there is lower precision for xmath1hi measurements within the local ism ie at low column densities which induces large error bars on the corresponding d h estimations the xmath1hi evaluation in the direction of capella is relatively less accurate than previously claimed and is of the order of log xmath1hi xmath65 leading to d h xmath66 an average d h ratio may exist in the local ism but should be larger than previously evaluated since locally xmath1hi could be overestimated by as much as about 20 this does affect arguments about local variability d o should be a better tracer of d variations as originally suggested by timmes 1997 and directly verified in the lism moos et al 2002 hbrard et al 2002a 2002b and further confirmed by the stability of o h over longer path length meyer et al 1998 andr et al 2002 provided all sources of errors related to the oxygen measurements are well understood finally we note also that a similar systematic effect has been pointed out by pettini and bowen 2001 for the evaluation of d h in quasars absorption line systems again as in our study the systems presenting the highest values of xmath1hi are derived from the damping wings of the lyman xmath12 line which also includes all hi in close proximity of the hydrogen at the line center of deuterium burles 2001 while those for which xmath1hi is evaluated from the discontinuity at the lyman limit present smaller column densities and larger d h evaluations we would like to thank brian wood who kindly provided to us the capella lyman xmath12 stellar profile he has evaluated for his own study of that line of sight as well as jeff linsky and jeff kruk for constructive comments we are also pleased to warmly thank don york for about twenty seven years of exciting collaboration and martin lemoine for the last decade of common enlightening work we deeply thank our unknown referee whose report was nearly as long as this letter and briefly summarized here
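both for g191b2b and for capella the need for extra broad hxmath0i components was judged with the fisher snedecor (f) test on nested fits. the sketch below shows one standard way to compute such a probability from the xmath3 and degrees of freedom of two nested fits; this particular convention (an f ratio of the per parameter xmath3 improvement to the reduced xmath3 of the richer fit) is a common choice rather than necessarily the exact recipe used in the papers. the numerical values read the run together figures quoted above for capella as xmath3/dof = 8448.9/8716, 8311.7/8713 and 8226.8/8710, which is an interpretation of the garbled extraction.

import numpy as np
from scipy.stats import f as f_dist

def ftest_extra_component(chi2_simple, dof_simple, chi2_complex, dof_complex):
    # probability that the chi^2 improvement obtained by adding free parameters
    # (here: one extra broad HI component) is just what extra parameters give by
    # chance; a small value means the extra component is genuinely required
    d_dof = dof_simple - dof_complex
    f_stat = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    return f_dist.sf(f_stat, d_dof, dof_complex)

# nested fits with 0, 1 and 2 extra HI components; each extra component has
# three free parameters (velocity, column density, width), hence dof drops by 3
fits = [(8448.9, 8716), (8311.7, 8713), (8226.8, 8710)]
for (c_a, d_a), (c_b, d_b) in zip(fits[:-1], fits[1:]):
    print(ftest_extra_component(c_a, d_a, c_b, d_b))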
the deuterium abundance evaluation in the direction of capella has for a long time been used as a reference for the local interstellar medium ism within our galaxy we show here that broad and weak hxmath0i components could be present on the capella line of sight leading to a large new additional systematic uncertainty on the xmath1hxmath0i evaluation the d h ratio toward capella is found to be equal to xmath2 with almost identical xmath3 for all the fits this range includes only the systematic error the 2 xmath4 statistical one is almost negligible in comparison it is concluded that d h evaluations over hxmath0i column densities below xmath5 even perhaps below xmath6 if demonstrated by additional observations may present larger uncertainties than previously anticipated it is mentioned that the d o ratio might be a better tracer for dxmath0i variations in the ism as recently measured by the far ultraviolet spectroscopic explorer fuse
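A minimal numerical note on how the systematic uncertainty on N(HI) translates into D/H, using hypothetical column densities (the actual Capella values are elided above as xmath placeholders): since D/H is simply N(DI)/N(HI), an overestimate of N(HI) by about 20 percent, as discussed in the body of the letter, raises the inferred D/H by the same factor.

```python
# Minimal illustration (hypothetical numbers, not the paper's values): how D/H
# and its range respond to a systematic error on log N(HI).
log_n_d  = 13.30    # illustrative log N(DI) [cm^-2]
log_n_hi = 18.20    # illustrative log N(HI) [cm^-2]
sys_hi   = 0.10     # illustrative systematic error on log N(HI), in dex

d_h = 10**(log_n_d - log_n_hi)
d_h_lo = 10**(log_n_d - (log_n_hi + sys_hi))
d_h_hi = 10**(log_n_d - (log_n_hi - sys_hi))
print(f"D/H = {d_h:.2e}, systematic range {d_h_lo:.2e} .. {d_h_hi:.2e}")

# removing a 20% overestimate of N(HI) raises the inferred D/H by the same factor
print(f"after correcting a 20% N(HI) overestimate: {d_h * 1.2:.2e}")
```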
introduction summary of the g191b2b case the case of capella discussion conclusion
optical properties of low dimensional semiconductor nanostructures originate from excitons coulomb bound electron hole pairs and exciton complexes such as biexcitons coupled states of two excitons and trions charged excitons these have pronounced binding energies in nanostructures due to the quantum confinement effectxcite the advantage of optoelectronic device applications with low dimensional semiconductor nanostructures lies in the ability to tune their properties in a controllable way optical properties of semiconducting carbon nanotubes cns in particular are largely determined by excitonsxcite and can be tuned by electrostatic dopingxcite or by means of the quantum confined stark effectxcite carbon nanotubes are graphene sheets rolled up into cylinders of one to a few nanometers in diameter and up to hundreds of microns in length which can be both metals and semiconductors depending on their diameters and chiralityxcite over the past decade optical nanomaterials research has uncovered intriguing optical attributes of their physical properties lending themselves to a variety of new optoelectronic device applicationsxcite formation of biexcitons and trions though not detectable in bulk materials at room temperature play a significant role in quantum confined systems of reduced dimensionality such as quantum wellsxcite nanowiresxcite nanotubesxcite and quantum dotsxcite biexciton and trion excitations open up routes for controllable nonlinear optics and spinoptronics applications respectively the trion in particular has both net charge and spin and therefore can be controlled by electrical gates while being used for optical spin manipulation or to investigate correlated carrier dynamics in low dimensional materials for conventional semiconductor quantum wells wires and dots the binding energies of negatively or positively charged trions are known to be typically lower than those of biexcitons in the same nanostructure although the specific trion to biexciton binding energy ratios are strongly sample fabrication dependentxcite first experimental evidence for the trion formation in carbon nanotubes was reported by matsunaga et alxcite and by santos et alxcite on xmath0doped 75 and undoped 65 cns respectively theoretically rnnow et alxcite have predicted that the lowest energy trion states in all semiconducting cns with diameters of the order of or less than 1 nm should be stable at room temperature they have later developed the fractional dimension approach to simulate binding energies of trions and biexcitons in quasi1d2d semiconductors including nanotubes as a particular casexcite binding energies of xmath1 mev and xmath2 mev are reported for the lowest energy trionsxcite and biexcitonsxcite respectively in the 75 nanotube however the recent nonlinear optics experiments were able to resolve both trions and biexcitons in the same cn samplexcite to report on the opposite tendency where the binding energy of the trion exceeds that of the biexciton rather significantly in small diameter xmath3 nm cns figure fig0 shows typical experimental data for conventional low dimension semiconductors left panel and small diameter semicondicting cns right panel in the left panel the biexciton resonance is seen to appear at lower photon energy than the trion one in contrast with the right panel where the biexciton resonance manifests itself at greater photon energy than the trion resonance does this clearly indicates greater trion binding energies than those of biexcitons in small diameter semiconducting 
cns as opposed to conventional low dimension semiconductors 175 cm more specifically colombier et alxcite reported on the observation of the binding energies xmath4 mev and xmath5 mev for the trion and biexciton respectively in the 97 cn yuma et alxcite reported even greater binding energies of xmath6 mev for the trion versus xmath7 mev for the biexciton in the smaller diameter 65 cn their spectra are reproduced in fig fig0 right panel in both cases the trion to biexciton binding energy ratio is greater than unity decreasing as the cn diameter increases 146 for the 075 nm diameter 65 cn versus 142 for the 109 nm diameter 97 cn trion binding energies greater than those of biexcitons are theoretically reported by watanabe and asanoxcite due to the energy band nonparabolicity and the coulomb screening effect that reduces the biexciton binding energy more than that of the trion watanabe and asano have extended the first order xmath8perturbation series expansion model originally developed by ando for excitons see refxcite for review to the case of electron hole complexes such as trions and biexcitons figure fig00 compares the differences between the trion and biexciton binding energies delivered by phenomenological and unscreened models termed as such to refer to the cases where the energy band nonparabolicity electron hole complex form factors self energies and the screening effect are all neglected and where all of them but screening are taken into account respectively with the difference given by the screened model the latter is the watanabe asano model which includes all of the factors mentioned within the first order xmath8perturbation theory one can see that the screened model does predict greater trion binding energies than those of biexcitons as opposed to the phenomenological and unscreened models however the most the trion binding energy can exceed that of the biexciton within this model is xmath9 equal to xmath10 and xmath11 mev for the 65 and 97 cns respectively which is obviously not enough to explain the experimental observations 100 cm this article reviews the capabilities of the configuration space landau herring method for the binding energy calculations of the lowest energy exciton complexes in quasi1d2d semiconductors the approach was originally pioneered by landauxcite gorkov and pitaevskixcite holstein and herringxcite in the studies of molecular binding and magnetism the method was recently shown to be especially advantageous in the case of quasi1d semiconductorsxcite allowing for easily tractable complete analytical solutions to reveal universal asymptotic relations between the binding energy of the exciton complex of interest and the binding energy of the exciton in the same nanostructure the landau herring method of the complex bound state binding energy calculation is different from commonly used quantum mechanical approaches reviewed above these either use advanced simulation techniques to solve the coordinate space schrdinger equation numericallyxcite or convert it into the reciprocal momentum space to follow up with the xmath8perturbation series expansion calculationsxcite obviously this latter one in particular requires for perturbations to be small if they are not then the method brings up an underestimated binding energy value especially for molecular complexes such as biexciton and trion where the kinematics of complex formation depends largely on the asymptotic behavior of the wave functions of the constituents this is likely the cause for watanabe asano theory 
of excitonic complexesxcite to significantly underestimate the measurements by colombier et alxcite and yuma et alxcite on semiconducting cns 115 cm the landau herring configuration space approach does not have this shortcoming it works in the configuration space of the two relative electron hole motion coordinates of the two non interacting quasi1d excitons that are modeled by the effective one dimensional cusp type coulomb potential as proposed by ogawa and takagahara for 1d semiconductorsxcite since the configuration space is different from the ordinary coordinate or its reciprocal momentum space the approach does not belong to any of the models summarized in fig fig00 in this approach the biexciton or trion bound state forms due to the exchange under barrier tunneling between the equivalent configurations of the electron hole system in the configuration space the strength of the binding is controlled by the exchange tunneling rate the corresponding binding energy is given by the tunnel exchange integral determined through an appropriate variational procedure as any variational approach the method gives an upper bound for the ground state binding energy of the exciton complex of interest as an example fig fig000 compares the biexciton binding energies calculated within several different models including those coordinate space formulated that are referred to as phenomenological in fig fig00 as well as the configuration space model it is quite remarkable that with obvious overall correspondence to the other methods as seen in fig fig000 the landau herring configuration space approach is the only to have been able consistently explain the experimental observations discussed above and shown in fig fig0 both for conventional low dimension semiconductors and for semiconducting cns whether the trion or biexciton is more stable has greater binding energy in a particular quasi1d system turns out to depend on the reduced electron hole mass and on the characteristic transverse size of the systemxcite trions are generally more stable than biexcitons in strongly confined quasi1d structures with small reduced electron hole masses while biexcitons are more stable than trions in less confined quasi1d structures with large reduced electron hole masses as such a crossover behavior is predictedxcite whereby trions get less stable than biexcitons as the transverse size of the quasi1d nanostructure increases quite a general effect which could likely be observed through comparative measurements on semiconducting cns of increasing diameter the method captures the essential kinematics of exciton complex formation thus helping understand in simple terms the general physical principles that underlie experimental observations on biexcitons and trions in a variety of quasi1d semiconductor nanostructures for semiconducting cns with diameters xmath12 nm the model predicts the trion binding energy greater than that of the biexciton by a factor xmath13 that decreases with the cn diameter increase in reasonable agreement with the measurements by colombier et alxcite and yuma et alxcite the article is structured as follows section 2 formulates the general hamiltonian for the biexciton complex of two electrons and two holes in quasi1d semiconductor carbon nanotubes of varying diameter are used as a model example for definiteness the theory and conclusionsare valid for any quasi1d semiconductor system in general the exchange integral and the binding energy of the biexciton complex are derived and analyzed section 3 
further develops the theory to include the trion case in section 4 the trion binding energy derived is compared to the biexciton binding energy for semiconducting quasi1d nanostructures of varying transverse size and reduced exciton effective mass section 5 generalizes the method to include trion and biexciton complexes formed by indirect excitons in layered quasi2d semiconductor structures such as coupled quantum wells cqws and bilayer self assembled transition metal dichalchogenide heterostructures section 6 summarizes and concludes the article the problem is initially formulated for two interacting ground state 1d excitons in a semiconducting cn the cn is taken as a model for definiteness the theory and conclusions are valid for any quasi1d semiconductor system in general the excitons are modeled by effective one dimensional cusp type coulomb potentials shown in fig fig1 a as proposed by ogawa and takagahara for 1d semiconductorsxcite the intra exciton motion can be legitimately treated as being much faster than the inter exciton center of mass relative motion since the exciton itself is normally more stable than any of its compound complexes therefore the adiabatic approximation can be employed to simplify the formulation of the problem with this in mind using the cylindrical coordinate system zaxis along the cn as in fig fig1 a and separating out circumferential and longitudinal degrees of freedom for each of the excitons by transforming their longitudinal motion into their respective center of mass coordinatesxcite one arrives at the hamiltonian of the formxcite xmath14 here xmath15 are the relative electron hole motion coordinates of the two 1d excitons separated by the center of mass to center of mass distance xmath16 xmath17 is the cut off parameter of the effective cusp type longitudinal electron hole coulomb potential xmath18 xmath19 with xmath20 xmath21 representing the electron hole effective mass the atomic unitsare usedxcite whereby distance and energy are measured in units of the exciton bohr radius xmath22 and the rydberg energy xmath23 respectively xmath24 is the exciton reduced mass in units of the free electron mass xmath25 and xmath26 is the static dielectric constant of the electron hole coulomb potential 110 cm the first two lines in eq biexcham represent two non interacting 1d excitons their individual potentials are symmetrized to account for the presence of the neighbor a distance xmath27 away as seen from the xmath28 and xmath29coordinate systems treated independently fig fig1 a the last two lines are the inter exciton exchange coulomb interactions electron hole line next to last and hole hole electron electron last line respectively the binding energy xmath30 of the biexciton is given by the difference xmath31 where xmath32 is the lowest eigenvalue of the hamiltonian biexcham and xmath33 is the single exciton binding energy with xmath34 being the lowest bound state quantum number of the 1d excitonxcite negativexmath30 indicates that the biexciton is stable with respect to the dissociation into two isolated excitons the strong transverse confinement in reduced dimensionality semiconductors is known to result in the mass reversal effectxcite whereby the bulk heavy hole state that forming the lowest excitation energy exciton acquires a longitudinal mass comparable to the bulk light hole mass xmath35 therefore xmath36 in our case of interest here which is also true for graphitic systems such as cns in particularxcite and so xmath37 is assumed in eq biexcham in what 
follows with no substantial loss of generality the hamiltonian biexcham is effectively two dimensional in the configuration space of the two independent relative motion coordinates xmath28 and xmath29 figure fig1 b bottom shows schematically the potential energy surface of the two closely spaced non interacting 1d excitons second line of eq biexcham in the xmath38 space the surface has four symmetrical minima representing isolated two exciton states shown in fig fig1 b top these minima are separated by the potential barriers responsible for the tunnel exchange coupling between the two exciton states in the configuration space the coordinate transformation xmath39 places the origin of the new coordinate system into the intersection of the two tunnel channels between the respective potential minima fig fig1 b whereby the exchange splitting formula of refsxcite takes the form xmath40 here xmath41 are the ground state and excited state energies eigenvalues of the hamiltonian biexcham of the two coupled excitons as functions of their center of mass to center of mass separation and xmath42 is the tunnel exchange coupling integral responsible for the bound state formation of two excitons for biexciton this takes the form xmath43 where xmath44 is the solution to the schrdinger equation with the hamiltonian biexcham transformed to the xmath45 coordinates the factor xmath46 comes from the fact that there are two equivalent tunnel channels in the biexciton problem mixing three equivalent indistinguishable two exciton states in the configuration space one state is given by the two minima on the xmath47axis and two more are represented by each of the minima on the xmath48axis cf fig1 a and fig fig1 b the function xmath44 in eq jxx is sought in the form xmath49 labelpsixxxy where xmath50 labelpsi0xy is the product of two single exciton wave functions ground state representing the isolated two exciton state centered at the minimum xmath51 or xmath52 xmath53 of the configuration space potential fig fig1 b this is the approximate solution to the shrdinger equation with the hamiltonin given by the first two lines in eq biexcham where the cut off parameter xmath17 is neglectedxcite this approximation greatly simplifies problem solving while still remaining adequate as only the long distance tail of xmath54 is important for the tunnel exchange coupling the function xmath55 on the other hand is a slowly varying function to account for the major deviation of xmath56 from xmath54 in its tail area due to the tunnel exchange coupling to another equivalent isolated two exciton state centered at xmath57 xmath58 or xmath59 xmath53 substituting eq psixxxy into the schrdinger equation with the hamiltonian biexcham pre transformed to the xmath45 coordinates one obtains in the region of interest xmath17 dropped for the reason above xmath60 up to negligible terms of the order of the inter exciton van der waals energy and up to the second order derivatives of xmath61 this equation is to be solved with the boundary condition xmath62 originating from the natural requirement xmath63 to result in xmath64 after plugging this into eq psixxxy one can calculate the tunnel exchange coupling integral jxx retaining only the leading term of the integral series expansion in powers of xmath34 subject to xmath65 one obtains xmath66 the ground state energy xmath67 of the two coupled 1d excitons in eq egu is now seen to go through the negative minimum biexcitonic state as xmath27 increases the minimum occurs at xmath68 whereby the 
biexciton binding energy takes the form xmath69 in atomic units expressing xmath34 in terms of xmath70 one obtains in absolute units the equation as follows xmath71 110 cmthe trion binding energy can be found in the same way using a modification of the hamiltonian biexcham in which two same sign particles share the third particle of an opposite sign to form the two equivalent 1d excitons as fig fig2 shows for the negative trion complex consisting of the hole shared by the two electrons the hamiltonian modified to reflect this fact has the first two lines exactly the same as in eq biexcham no line next to last and one of the two terms in the last line either the first or the second one for the positive with xmath72 and negative with xmath73 trion respectively obviously due to the additional mass factor xmath74 typically less than one for bulk semiconductors in the hole hole interaction term in the last line the positive trion might be expected to have a greater binding energy in this model in agreement with the results reported earlierxcite however as was already mentioned in sec sec2 the mass reversal effect in strongly confined reduced dimensionality semiconductors is to result in xmath37 in the trion hamiltonian the positive negative trion binding energy difference disappears then the negative trion case illustrated in fig fig2 is addressed below just like in the case of the biexciton the treatment of the trion problem starts with the coordinate transformation transformation to bring the trion hamiltonian from the original configuration space coordinate system xmath38 into the new coordinate system xmath45 with the origin positioned as shown in fig fig1 b the tunnel exchange splitting integral in eq egu now takes the form xmath75 where xmath76 is the ground state wave function of the schrdinger equation with the hamiltonian biexcham modified to the negative trion case as discussed above and then transformed to the xmath45 coordinates the tunnel exchange current integral xmath77 is due to the electron position exchange relative to the hole see fig fig2 this corresponds to the tunneling of the entire three particle system between the two equivalent indistinguishable configurations of the two excitons sharing the same hole in the configuration space xmath38 given by the pair of minima at xmath51 and xmath78 in fig fig1 b such a tunnel exchange interaction is responsible for the coupling of the three particle system to form a stable trion state like in the case of the biexciton one seeks the function xmath79 in the form xmath80 labelpsixy with xmath81 given by eq psi0xy where xmath82 is assumed to be a slowly varying function to take into account the deviation of xmath83 from xmath54 in the tail area of xmath54 due to the tunnel exchange coupling to another equivalent isolated two exciton state centered at xmath57 xmath58 or xmath59 xmath53 substituting eq psixy into the schrdinger equation with the negative trion hamiltonian pre transformed to the xmath45 coordinates one obtains in the region of interest xmath84 xmath85 cut off xmath17 dropped up to terms of the order of the second derivatives of xmath86 this is to be solved with the boundary condition xmath87 coming from the requirement xmath88 to result in xmath89 after plugging eqs sxy and psixy into eq jxast and retaining only the leading term of the integral series expansion in powers of xmath34 subject to xmath65 one obtains xmath90 inserting this into the right hand side of eq egu one sees that the ground state energy xmath32 of the 
three particle system goes through the negative minimum the trion state as xmath27 increases the minimum occurs at xmath68 whereby the trion binding energy in atomic units takes the form xmath91 in absolute units expressing xmath34 in terms of xmath70 one obtains xmath92 120 cmfrom eqs exx and exstar one has the trion to biexciton binding energy ratio as follows xmath94 if one now assumes xmath95 xmath96 is the dimensionless cn radius or transverse confinement size for quasi1d nanostructure in general as was demonstrated earlier by variational calculationsxcite to be consistent with many quasi1d modelsxcite then one obtains the xmath96dependences of xmath97 xmath98 and xmath99 shown in fig fig3 the trion and biexciton binding energies both decrease with increasing xmath96 in such a way that their ratio remains greater than unity for small enough xmath96 in full agreement with the experiments by colombier et alxcite and yuma et alxcite however since the factor xmath100 in eq exstarexx is less than one the ratio can also be less than unity for xmath96 large enough but not too large so that our configuration space method still works as xmath96 goes down on the other hand the biexciton to exciton binding energy ratio xmath101 in eq exx slowly grows approaching the pure 1d limit xmath102 similar tendency can also be traced in the monte carlo simulation data of refxcite the equilibrium inter exciton center of mass distance in the biexciton complex goes down with decreasing xmath96 as well xmath103 atomic units this supports experimental evidence for enhanced exciton exciton annihilation in small diameter cnsxcite the trion to exciton binding energy ratio xmath104 of eq exstar increases with decreasing xmath96 faster than xmath101 fig fig3 to yield xmath105 as the pure 1d limit for the trion to biexciton binding energy ratio 115 cm when xmath99 is known one can use eq exstarexx to estimate the effective bohr radii xmath106 for the excitons in the cns of known radii for example substituting xmath107 for the 075 nm diameter 65 cn and xmath108 for the 109 nm diameter 97 cn as reported by yuma et alxcite and colombier et alxcite respectively into the left hand side of the transcendental equation exstarexx and solving it for xmath106 one obtains the effective exciton bohr radius xmath109 nm and xmath110 nm for the 65 cn and 97 cn respectively this agrees reasonably with previous estimatesxcite in general the binding energies in eqs exstar and exx are functions of the cn radius transverse confinement size for a quasi1d semiconductor nanowire xmath111 and xmath26 figures fig4 a and fig4 b show their 3d plots at fixed xmath112 and at fixed xmath113 respectively as functions of two remaining variables the reduced effective mass xmath111 chosen is typical of large radius excitons in small diameter cnsxcite the unit dielectric constant xmath26 assumes the cn placed in air and the fact that there is no screening in quasi1d semiconductor systems both at short and at large electron hole separationsxcite this latter assumption of the unit background dielectric constant remains legitimate for small diameter xmath3 nm semiconducting cns in dielectric screening environment too for the lowest excitation energy exciton in its ground state of interest here not for its excited states though in which case the environment screening effect is shown by ando to be negligiblexcite diminishing quickly with the increase of the effective distance between the cn and dielectric medium relative to the cn diameter figure fig4 a 
can be used to evaluate the relative stability of the trion and biexciton complexes in quasi1d semiconductors one sees that whether the trion or the biexciton is more stable has the greater binding energy in a particular quasi1d system depends on xmath111 and on the characteristic transverse size of the nanostructure in strongly confined quasi1d systems with relatively small xmath111 such as small diameter cns the trion is generally more stable than the biexciton in less confined quasi1d structures with greater xmath111 typical of semiconductorsxcitethe biexciton is more stable than the trion this is a generic peculiarity in the sense that it comes from the tunnel exchange in the quasi1d electron hole system in the configuration space greater xmath111 while not affecting significantly the single charge tunnel exchange in the trion complex makes the neutral biexciton complex generally more compact facilitating the mixed charge tunnel exchange in it and thus increasing the stability of the complex from fig fig4 b one sees that this generic feature is not affected by the variation of xmath26 although the increase of xmath26 decreases the binding energies of both excitonic complexes in agreement both with theoretical studiesxcite and with experimental observations of lower binding energies compared to those in cns of these complexes in conventional semiconductor nanowiresxcite the latter are self assembled nanostructures of one transversely confined semiconductor embedded in another bulk semiconductor with the characteristic transverse confinement size typically greater than that of small diameter cns and so both inside and outside material dielectric properties matter there 120 cm figure fig5 shows the cross section of fig fig4 a taken at xmath114 to present the relative behavior of xmath97 and xmath98 in semiconducting cns of increasing radius both xmath97 and xmath98 decrease andso does their ratio as the cn radius increases from the graph xmath115 and xmath116 mev xmath117 and xmath118 mev for the 65 and 97 cns respectively this is to be compared with xmath6 and xmath7 mev for the 65 cn versus xmath4 and xmath5 mev for the 97 cn reported experimentallyxcite one sees that as opposed to perturbative theoriesxcite the present configuration space theory underestimates experimental data just slightly most likely due to the standard variational treatment limitations it does explain well the trends observed and so the graph in fig fig5 can be used as a guide for trion and biexciton binding energy estimates in small diameter xmath3 nm nanotubes recently there has been a considerable interest in studies of optical properties of coupled quantum wells cqwsxcite the cqw semiconductor nanostructure fig fig6 consists of two identical semiconductor quantum wells separated by a thin barrier layer of another semiconductor the tunneling of carriers through the barrier makes two wells electronically coupled to each other as a result an electron a hole can either reside in one of the wells or its wave function can be distributed between both wells a coulomb bound electron hole pair residing in the same well forms a direct exciton fig fig6 a if the electron and hole of a pair are located in different wells then an indirect exciton is formed fig fig6 b 120cm05 cm physical properties of cqws can be controlled by using external electro and magnetostatic fields see eg refsxcite and refs therein for example applying the electrostatic field perpendicular to the layers increases the exciton radiative lifetime due to 
a reduction in the spatial overlap contact density between the electron and hole wave functions as this takes place the exciton binding energy reduces due to an increased electron hole separation to make the exciton less stable against ionization in contrast with the exciton magnetostatic stabilization effect under the same geometryxcite the tunneling effect is also enhanced as the electric field allows the carriers to leak out of the system resulting in a considerable shortening of the photoluminescence decay time cqws embedded into bragg mirror microcavities show a special type of voltage tuned exciton polaritons which can be used for low threshold power polariton lasingxcite new non linear phenomena are also reported for these cqw systems both theoretically and experimentally such as bose condensationxcite and parametric oscillationsxcite of exciton polaritons for laterally confined cqw structures experimental evidence for controllable formation of multiexciton wigner like molecular complexes of indirect excitons single exciton biexciton triexciton etc was reported recentlyxcite trion complexes formed both by direct and by indirect excitons as sketched in figs fig6 a and b were observed in cqws as wellxcite all these findings make cqws a much richer system capable of new developments in fundamental quantum physics and nanotechnology as compared to single quantum wellsxcite they open up new routes for non linear coherent optical control and spinoptronics applications with quasi2d semiconductor cqw nanostructures very recently the problem of the trion complex formation in cqws was studied theoretically in great detail for trions composed of a direct exciton and an electron or a hole located in the neighboring quantum wellxcite as sketched in fig fig6 a significant binding energies are predicted on the order of xmath119 mev at interwell separations xmath120 nm for the lowest energy positive and negative trion states to allow one suggest a possibility for trion wigner crystallization figure fig6 b shows another possible trion complex that can also be realized in cqws here the trion is composed of an indirect exciton and an electron or a hole in such a manner as to keep two same sign particles in the same quantum well with the opposite sign particle being located in the neighboring well this can be viewed as the two equivalent configurations of the three particle system in the configuration space xmath121 xmath122 of the two independent in plane projections of the relative electron hole distances xmath123 and xmath124 in the two equivalent indirect excitons sharing the same electron or the same hole as shown in fig fig6 b such a three particle system in the quasi2d semiconductor cqw nanostructure is quite analogous to the quasi1d trion presented in sec 3 above cf fig6 b and fig fig2 therefore the landau herring configuration space approach can be used here as well to evaluate the binding energy for this special case of the quasi2d trion state following is a brief outline of how one could proceed with the configuration space method to obtain the ground state binding energy for the quasi2d trion complex sketched in fig fig6 b a complete analysis of the problem will be presented elsewhere the method requires knowledge of the ground state characteristics of the indirect exciton abbreviated as xmath125 in what follows specifically one needs to know the quasi2d ground state energy xmath126 and corresponding in plane relative electron hole motion wave function xmath127 for the indirect exciton in 
the cqw system with the interwell distance xmath128 these can be found by solving the radial scrdinger equation that is obtained by decoupling radial relative electron hole motion in the cylindrical coordinate system with the xmath129axis being perpendicular to the qw layers see fig fig6 b such an equation was derived and analyzed previously by leavitt and littlexcite the energy and the wave function of interest are as follows xmath130 labelindirect02 cm psiixrho dnexplambdasqrtrho2d2dhskip05cmnonumberendaligned where xmath131 is the exponential integral xmath132 the normalization constant xmath133 is determined from the condition xmath134 and all quantities are measured in atomic units as defined in sec 2 120 cm with eq indirect in place one can work out strategies for tunneling current calculations in the configuration space xmath121 xmath122 of the two independent in plane projections of the relative electron hole distances xmath123 and xmath124 in the two equivalent indirect excitons as shown in fig fig6 b both tunneling current responsible for the trion complex formation and that responsible for the biexciton complex formation can be obtained in full analogy with how it was done above for the respective quasi1d complexes the only formal difference now is the change in the phase integration volume from xmath135 to xmath136 minimizing the tunneling current with respect to the center of mass to center of mass distance of the two equivalent indirect excitons results in the binding energy of a few particle complex of interest note that the method applies to the complexes formed by excitons only as they allow equivalent configurations for a few particle system to tunnel throughout in the configuration space xmath121 xmath122 thereby forming a respective tunnel coupled few particle complex binding energy calculations for indirect exciton complexes in semiconductor cqw nanostructures are important to understand the principles of the more complicated electron hole structure formation such as that shown in fig this is a coupled charge neutral spin aligned wigner like structure formed by two trions one positively charged and another one negatively charged the entire structure is electrically neutral and it has an interesting electron hole spin alignment pattern this structure can also be viewed as a triexciton a coupled state of three indirect singlet excitons one could also imagine a wigner like crystal structure formed by unequal number of electrons and holes as opposed to that in fig fig7 whereby the entire coupled structure could possess net charge and spin at the same time to allow precise electro and magnetostatic control and manipulation by its optical and spin properties such wigner like electron hole crystal structures in cqws might be of great interest for spinoptronics applications all in all indirect excitons biexcitons and trions formed by indirect excitons are those building blocks that control the formation of more complicated wigner like electron hole crystal structures in cqws the configuration space method presented here allows one to study the binding energies for these building blocks as functions of cqw system parameters and thus to understand how stable electron hole wigner crystallization could possibly be in these quasi2d nanostructures the method should also work well for biexciton and trion complexes in quasi2d self assembled transition metal dichalcogenide heterostructures where electrons and holes accumulated in the opposite neighboring monolayers are recently 
reported to form indirect excitons with new exciting properties such as increased recombination timexcite and vanishing high temperature viscosityxcitepresented herein is a universal configuration space method for binding energy calculations of the lowest energy neutral biexciton and charged trion exciton complexes in spatially confined quasi1d semiconductor nanostructures the method works in the effective two dimensional configuration space of the two relative electron hole motion coordinates of the two non interacting quasi1d excitons the biexciton or trion bound state forms due to under barrier tunneling between equivalent configurations of the electron hole system in the configuration space tunneling rate controls the binding strength and can be turned into the binding energy by means of an appropriate variational procedure quite generally trions are shown to be more stable have greater binding energy than biexcitons in strongly confined quasi1d structures with small reduced electron hole masses biexcitons are more stable than trions in less confined structures with large reduced electron hole masses a universal crossover behavior is predicted whereby trions become less stable than biexcitons as the transverse size of the quasi1d nanostructure increases an outline is given of how the method can be used for electron hole complexes of indirect excitons in quasi2d semiconductor systems such as coupled quantum wells and van der waals bound transition metal dichalcogenide heterostructures here indirect excitons biexcitons and trions formed by indirect excitons control the formation of more complicated wigner like electron hole crystal structures the configuration space method can help develop understanding of how stable wigner crystallization could be in these quasi2d nanostructures wigner like electron hole crystal structures are of great interest for future spinoptronics applications this work is supported by the us department of energy de sc0007117 discussions with david tomanek michigan state u roman kezerashvili nyxmath137citytech and masha vladimirova u montpellier france are acknowledged thanks tony heinz stanford u for pointing out refxcite of relevance to this work 99 hhaug and swkoch quantum theory of the optical and electronic properties of semiconductors 5th edn world scientific london 2005 pyyu and mcardona fundamentals of semiconductors 4th edn springer verlag berlin 2010 egbarbagiovanni djlockwood pjsimpson and lvgoncharova appl 1 2014 011302 msdresselhaus gdresselhaus rsaito and ajorio annu 58 2007 719 jdeslippe mdipoppa dprendergast mvomoutinho rbcapaz and sglouie nanolett 9 2009 1330 msteiner mfreitag vperebeinos anaumov jp small aabol and phavouris nanolett 9 2009 3477 cdspataru and flonard phys lett 104 2010 177402 tmueller mkinoshita msteiner vperebeinos aabol dbfarmer and phavouris nature nanotech 5 2010 27 myoshida ykumamoto aishii ayokoyama and ykkato appl 105 2014 161104 ivbondarev lmwoods and ktatur phys b80 2009 085407 opt 282 2009 661 ivbondarev phys b85 2012 035448 ivbondarev and avmeliksetyan phys b89 2014 045414 mltrolle and tgpedersen phys b92 2015 085431 rsaito gdresselhaus and msdresselhaus science of fullerenes and carbon nanotubes imperial college press london 1998 mdresselhaus gdresselhaus and phavouris eds carbon nanotubes synthesis structure properties and applications springer berlin 2001 djstyersbarnett spellison bpmehl bcwestlake rlhouse cpark kewise and jmpapanikolas j phys chem c112 2008 4507 ahgele chgalland mwinger and aimamolu phys 100 2008 
217401 phavouris mfreitag and vperebeinos nature photonics 2 2008 341 nmgabor zhzhong kbosnick jpark and plmceuen science 325 2009 1367 thertel nature photonics 4 2010 77 ivbondarev j comp nanosci 7 2010 1673 xdang hyi m hham jqi dsyun rladewski msstrano pthammond and ambelcher nature nanotech 6 2011 377 ivbondarev lmwoods and apopescu plasmons theory and applications nova science new york 2011 chap 16 p 381 snanot ehharoz j hkim rhhauge and jkono adv 24 2012 4977 volder shtawfick rhbaughman and ajhart science 339 2013 535 thertel and ivbondarev eds photophysics of carbon nanotubes and nanotube composites special issue chem phys 413 1 2013 ivbondarev optics express 23 2015 3971 dbirkedal jsingh vglyssenko jerland and jmhvam phys 76 1996 672 jsingh dbirkedal vglyssenko and jmhvam phys rev b53 1996 15909 athilagam phys rev b56 4665 1997 avfilinov criva fmpeeters yuelozovik and mbonitz phys b70 2004 035323 asbrackerxmath138eastinaffxmath138dgammonxmath138meware jgtischler dpark dgershoni avfilinov mbonitz fm peeters and criva phys b72 2005 035332 tbaars wbraun mbayer and aforchel phys rev b58 1998 r1750 acrottini jlstaehli bdeveaud xlwang mogura solid state commun 121 2002 401 ysidor bpartoens and fmpeeters phys b77 2008 205413 baln dfuster g muoz matutano jmartnezpastor ygonzlez jcanetferrer and lgonzlez phys lett 101 2008 067405 mjaschuetz mgmoore and cpiermarocchi nature physics 6 2010 919 tgpedersen phys rev b67 2003 073401 tgpedersen kpedersen hdcornean and pduclos nanolett 5 2005 291 dkammerlander dprezzi ggoldoni emolinari and uhohenester phys 99 2007 126806 physica e 40 2008 1997 tfrnnow tgpedersen and hdcornean phys rev b81 2010 205446 tfrnnow tgpedersen bpartoens and kkberthelsen phys rev b84 2011 035316 rmatsunaga kmatsuda and ykanemitsu phys 106 2011 037404 smsantos byuma sberciaud jshaver mgallart pgilliot lcognet and blounis phys 107 2011 187401 ivbondarev phys b83 2011 153409 ivbondarev phys b90 2014 245430 kwatanabe and kasano phys b85 2012 035416 ibid 83 2011 115406 tfrnnow tgpedersen and bpartoens phys b85 2012 045412 lcolombier jselles erousseau jslauret fvialla cvoisin and gcassabois phys 109 2012 197402 byuma sberciaud jbesbas jshaver ssantos sghosh rbweisman lcognet mgallart mziegler bhnerlage blounis and pgilliot phys b87 2013 205412 bpatton wlangbein and uwoggon phys b68 2003 125316 mkaniber mfhuck kmller ecclark ftroiani mbichler hjkrenner and jjfinley nanotechnology 22 2011 325202 vjovanov skapfinger mbichler gabstreiter and jjfinley phys b84 2011 235321 tando j phys 74 2005 777 ldlandau and emlifshitz quantum mechanics non relativistic theory pergamon oxford 1991 lpgorkov and lppitaevski dokl nauk sssr 151 1963 822 english transl soviet physdokl 8 1964 788 cherring rev 34 1962 631 cherring and mflicker phys 134 1964 a362 togawa and ttakagahara phys b44 1991 8138 ajorio cfantini mapimenta rbcapaz gegsamsonidze gdresselhaus msdresselhaus jjiang nkobayashi agrneis and rsaito phys b71 2005 075401 rloudon am 27 1959 649 fwang gdukovic eknoesel lebrus and tfheinz phys rev b70 2004 241403r dabramavicius y zma mwgraham lvalkunas and grfleming phys b79 2009 195445 asrivastava and jkono phys b79 2009 205407 tando j phys 79 2010 024706 tbyrnes gvkolmakov ryakezerashvili and yyamamoto phys b90 2014 125314 olberman ryakezerashvili and shmtsiklauri int j mod b28 2014 1450064 gjschinner jrepp eschubert akrai dreuter adwieck aogovorov awholleitner and jpkotthaus phys rev 110 2013 127403 ksivalertporn lmouchliadis alivanov rphilp and eamuljarov phys b85 2012 045207 gchristmann 
aaskitopoulos gdeligeorgis zhatzopoulos sitsintzos pgsavvidis and jjbaumberg appl phys 98 081111 2011 agwinbow jrleonard mremeika yykuznetsova aahigh athammack lvbutov jwilkes aaguenther alivanov mhanson and acgossard phys rev lett 106 196806 2011 gjschinner eschubert mp stallhofer jpkotthaus dschuh akrai dreuter adwieck and aogovorov phys rev b83 2011 165308 ggrosso jgraves athammack aahigh lvbutov mhanson and acgossard nature photonics 3 2009 577 athammack lvbutov jwilkes lmouchliadis eamuljarov alivanov and acgossard phys b80 2009 155331 zvrs dwsnoke lpfeiffer and kwest phys lett 103 2009 016403 cdiederichs jtignon gdasbach cciuti alematre jbloch phroussignol and cdelalande nature 440 2006 904 zvrs dwsnoke lpfeiffer and kwest phys 97 2006 016803 ajshields jlosborne dmwhittaker mysimmons mpepper and daritchie phys b55 1997 1318 rpleavitt and jwlittle phys b42 1990 11774 fceballos mzbellus h ychiu and hzhao acs nano 8 2014 12717 mmfogler lvbutov and ksnovoselov nature commun 5 2014 4555
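To make the starting point of the configuration-space construction concrete, the sketch below computes the ground-state binding energy of a single 1D exciton in the cusp-type Coulomb potential of Ogawa and Takagahara referred to in the article, which in exciton Bohr-radius/Rydberg units reads V(z) = -2/(|z| + z0). The finite-difference diagonalization, the grid parameters and the sample cut-off values z0 are choices made here for illustration only and are not the authors' procedure; the biexciton and trion binding energies of the configuration-space method would then follow from the tunnel-exchange integral described above, which is not implemented in this sketch.

```python
# Sketch (not the authors' code): ground-state binding energy E_X of a 1D
# exciton in the cusp-type potential V(z) = -2/(|z| + z0), written in exciton
# Bohr-radius / Rydberg units as in the article text.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def exciton_binding_energy(z0, half_width=100.0, n=4001):
    """Lowest eigenvalue of -d^2/dz^2 - 2/(|z| + z0) by central finite
    differences; returns the binding energy (positive, exciton Rydbergs)."""
    z = np.linspace(-half_width, half_width, n)
    h = z[1] - z[0]
    diag = 2.0 / h**2 - 2.0 / (np.abs(z) + z0)   # kinetic diagonal + potential
    off  = np.full(n - 1, -1.0 / h**2)           # kinetic off-diagonal
    e0 = eigh_tridiagonal(diag, off, eigvals_only=True,
                          select="i", select_range=(0, 0))[0]
    return -e0

# z0 is an illustrative parameter here; in the article it is tied to the
# nanotube radius / transverse confinement size of the quasi-1D structure.
for z0 in (0.1, 0.2, 0.3, 0.5):
    print(f"z0 = {z0:4.2f} a_B*  ->  E_X ~ {exciton_binding_energy(z0):5.2f} Ry*")
```

A simple tridiagonal diagonalization is used because the effective relative-motion problem is strictly one dimensional once the circumferential degrees of freedom have been separated out, as described in the text; the binding energy grows as z0 decreases, reflecting the strengthening of the 1D Coulomb singularity under stronger transverse confinement.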
a configuration space method is developed for binding energy calculations of the lowest energy exciton complexes trion biexciton in spatially confined quasi1d semiconductor nanostructures such as nanowires and nanotubes quite generally trions are shown to have greater binding energy in strongly confined structures with small reduced electron hole masses biexcitons have greater binding energy in less confined structures with large reduced electron hole masses this results in a universal crossover behavior whereby trions become less stable than biexcitons as the transverse size of the quasi1d nanostructure increases the method is also capable of evaluating binding energies for electron hole complexes in quasi2d semiconductors such as coupled quantum wells and bilayer van der waals bound heterostructures with advanced optoelectronic properties
introduction biexciton in quasi-1d trion in quasi-1d comparative analysis of @xmath30 and @xmath93 in quasi-1d configuration space method as applied to quasi-2d systems conclusion acknowledgments
with significant research efforts being directed to the development of neurocomputers based on the functionalities of the brain a seismic shift is expected in the domain of computing based on the traditional von neumann model the xmath0 xcite xmath1 xcite and the ibm xmath2 xcite are instances of recent flagship neuromorphic projects that aim to develop brain inspired computing platforms suitable for recognition image video speech classification and mining problems while boolean computation is based on the sequential fetch decode and execute cycles such neuromorphic computing architectures are massively parallel and event driven and are potentially appealing for pattern recognition tasks and cortical brain simulationsto that end researchers have proposed various nanoelectronic devices where the underlying device physics offer a mapping to the neuronal and synaptic operations performed in the brain the main motivation behind the usage of such non von neumann post cmos technologies as neural and synaptic devices stems from the fact that the significant mismatch between the cmos transistors and the underlying neuroscience mechanisms result in significant area and energy overhead for a corresponding hardware implementation a very popular instance is the simulation of a cat s brain on ibm s blue gene supercomputer where the power consumption was reported to be of the order of a few xmath3 xcite while the power required to simulate the human brain will rise significantly as we proceed along the hierarchy in the animal kingdom actual power consumption in the mammalian brain is just a few tens of watts in a neuromorphic computing platform synapses form the pathways between neurons and their strength modulate the magnitude of the signal transmitted between the neurons the exact mechanisms that underlie the learning or plasticity of such synaptic connections are still under debate meanwhile researchers have attempted to mimic several plasticity measurements observed in biological synapses in nanoelectronic devices like phase change memories xcite xmath4 memristors xcite and spintronic devices xcite etc however majority of the research have focused on non volatile plasticity changes of the synapse in response to the spiking patterns of the neurons it connects corresponding to long term plasticity xcite and the volatility of human memory has been largely ignored as a matter of fact neuroscience studies performed in xcite have demonstrated that synapses exhibit an inherent learning ability where they undergo volatile plasticity changes and ultimately undergo long term plasticity conditionally based on the frequency of the incoming action potentials such volatile or meta stable synaptic plasticity mechanisms can lead to neuromorphic architectures where the synaptic memory can adapt itself to a changing environment since sections of the memory that have been not receiving frequent stimulus can be now erased and utilized to memorize more frequent information hence it is necessary to include such volatile memory transition functionalities in a neuromorphic chip in order to leverage from the computational power that such meta stable synaptic plasticity mechanisms has to offer drawing1 a demonstrates the biological process involved in such volatile synaptic plasticity changes during the transmission of each action potential from the pre neuron to the post neuron through the synapse an influx of ionic species likexmath5 and xmath6 causes the release of neurotransmitters from the pre to the post neuron this 
results in temporary strengthening of the synaptic strength however in absence of the action potential the ionic species concentration settles down to its equilibrium value and the synapse strength diminishes this phenomenon is termed as short term plasticity stp xcite however if the action potentials occur frequently the concentration of the ions do not get enough time to settle down to the equilibrium concentration and this buildup of concentration eventually results in long term strengthening of the synaptic junction this phenomenon is termed as long term potentiation ltp while stp is a meta stable state and lasts for a very small time duration ltp is a stable synaptic state which can last for hours days or even years xcite a similar discussion is valid for the case where there is a long term reduction in synaptic strength with frequent stimulus and then the phenomenon is referred to as long term depression ltd such stp and ltp mechanisms have been often correlated to the short term memory stm and long term memory ltm models proposed by atkinson and shiffrin xcite fig drawing1b this psychological model partitions the human memory into an stm and an ltm on the arrival of an input stimulus information is first stored in the stm however upon frequent rehearsal information gets transferred to the ltm while the forgetting phenomena occurs at a fast rate in the stm information can be stored for a much longer duration in the ltm in order to mimic such volatile synaptic plasticity mechanisms a nanoelectronic device is required that is able to undergo meta stable resistance transitions depending on the frequency of the input and also transition to a long term stable resistance state on frequent stimulations hence a competition between synaptic memory reinforcement or strengthening and memory loss is a crucial requirement for such nanoelectronic synapses in the next section we will describe the mapping of the magnetization dynamics of a nanomagnet to such volatile synaptic plasticity mechanisms observed in the brain let us first describe the device structure and principle of operation of an mtj xcite as shown in fig drawing2a the device consists of two ferromagnetic layers separated by a tunneling oxide barrier tb the magnetization of one of the layers is magnetically pinned and hence it is termed as the pinned layer pl the magnetization of the other layer denoted as the free layer fl can be manipulated by an incoming spin current xmath7 the mtj structure exhibits two extreme stable conductive states the low conductive anti parallel orientation ap where pl and fl magnetizations are oppositely directed and the high conductive parallel orientation p where the magnetization of the two layers are in the same direction let us consider that the initial state of the mtj synapse is in the low conductive ap state considering the input stimulus current to flow from terminal t2 to terminal t1 electrons will flow from terminal t1 to t2 and get spin polarized by the pl of the mtj subsequently these spin polarized electrons will try to orient the fl of the mtj parallel to the pl it is worth noting here that the spin polarization of incoming electrons in the mtj is analogous to the release of neurotransmitters in a biological synapse the stp and ltp mechanisms exhibited in the mtj due to the spin polarization of the incoming electrons can be explained by the energy profile of the fl of the mtj let the angle between the fl magnetization xmath8 and the pl magnetization xmath9 be denoted by xmath10 the fl energy 
as a function of xmath10 has been shown in fig drawing2a where the two energy minima points xmath11 and xmath12 are separated by the energy barrier xmath13 during the transition from the ap state to the p state the fl has to transition from xmath12 to xmath11 upon the receipt of an input stimulus the fl magnetization proceeds uphill along the energy profile from initial point 1 to point 2 in fig drawing2a however since point 2 is a meta stable state it starts going downhill to point 1 once the stimulus is removed if the input stimulus is not frequent enough the fl will try to stabilize back to the ap state after each stimulus however if the stimulus is frequent the fl will not get sufficient time to reach point 1 and ultimately will be able to overcome the energy barrier point 3 in fig drawing2a it is worth noting here that on crossing the energy barrier at xmath14 it becomes progressively difficult for the mtj to exhibit stp and switch back to the initial ap state this is in agreement with the psychological model of human memory where it becomes progressively difficult for the memory to forget information during transition from stm to ltm hence once it has crossed the energy barrier it starts transitioning from the stp to the ltp state point 4 in fig drawing2a the stability of the mtj in the ltp state is dictated by the magnitude of the energy barrier the lifetime of the ltp state is exponentially related to the energy barrier xcite for instance for an energy barrier of xmath15 used in this work the ltp lifetime is xmath16 hours while the lifetime can be extended to around xmath17 years by engineering a barrier height of xmath18 the lifetime can be varied by varying the energy barrier or equivalently volume of the mtj the stp ltp behavior of the mtj can be also explained from the magnetization dynamics of the fl described by landau lifshitz gilbert llg equation with additional term to account for the spin momentum torque according to slonczewski xcite xmath19 where xmath20 is the unit vector of fl magnetization xmath21 is the gyromagnetic ratio for electron xmath22 is gilberts damping ratio xmath23 is the effective magnetic field including the shape anisotropy field for elliptic disks calculated using xcite xmath24 is the number of spins in free layer of volume xmath25 xmath26 is saturation magnetization and xmath27 is bohr magneton and xmath28 is the spin current generated by the input stimulus xmath29 xmath30 is the spin polarization efficiency of the pl thermal noise is included by an additional thermal field xcite xmath31 where xmath32 is a gaussian distribution with zero mean and unit standard deviation xmath33 is boltzmann constant xmath34 is the temperature and xmath35 is the simulation time step equation llg can be reformulated by simple algebraic manipulations as xmath36 hence in the presence of an input stimulus the magnetization of the fl starts changing due to integration of the input however in the absence of the input it starts leaking back due to the first two terms in the rhs of the above equation it is worth noting here that like traditional semiconductor memories magnitude and duration of the input stimulus will definitely have an impact on the stp ltp transition of the synapse however frequency of the input is a critical factor in this scenario even though the total flux through the device is same the synapse will conditionally change its state if the frequency of the input is high we verified that this functionality is exhibited in mtjs by performing llg simulations 
including thermal noise the conductance of the mtj as a function of xmath10 can be described by xmath37 where xmath38 xmath39 is the mtj conductance in the p ap orientation respectively as shown in fig drawing2b the mtj conductance undergoes meta stable transitions stp and is not able to undergo ltp when the time interval of the input pulses is large xmath40 however on frequent stimulations with time interval as xmath41 the device undergoes ltp transition incrementally drawing2b and c illustrates the competition between memory reinforcement and memory decay in an mtj structure that is crucial to implement stp and ltp in the synapse we demonstrate simulation results to verify the stp and ltp mechanisms in an mtj synapse depending on the time interval between stimulations the device simulation parameters were obtained from experimental measurements xcite and have been shown in table i table table i device simulation parameters cols the mtj was subjected to 10 stimulations each stimulation being a current pulse of magnitude xmath42 and xmath43 in duration as shown in fig drawing3 the probability of ltp transition and average device conductance at the end of each stimulation increases with decrease in the time interval between the stimulations the dependence on stimulation time interval can be further characterized by measurements corresponding to paired pulse facilitation ppf synaptic plasticity increase when a second stimulus follows a previous similar stimulus and post tetanic potentiation ptp progressive synaptic plasticity increment when a large number of such stimuli are received successively xcite drawing4 depicts such ppf after 2nd stimulus and ptp after 10th stimulus measurements for the mtj synapse with variation in the stimulation interval the measurements closely resemble measurements performed in frog neuromuscular junctions xcite where ppf measurements revealed that there was a small synaptic conductivity increase when the stimulation rate was frequent enough while ptp measurements indicated ltp transition on frequent stimulations with a fast decay in synaptic conductivity on decrement in the stimulation rate hence stimulation rate indeed plays a critical role in the mtj synapse to determine the probability of ltp transition the psychological model of stm and ltm utilizing such mtj synapses was further explored in a xmath44 memory array the array was stimulated by a binary image of the purdue university logo where a set of 5 pulses each of magnitude xmath45 and xmath46 in duration was applied for each on pixel the snapshots of the conductance values of the memory array after each stimulus have been shown for two different stimulation intervals of xmath47 and xmath48 respectively while the memory array attempts to remember the displayed image right after stimulation it fails to transition to ltm for the case xmath49 and the information is eventually lost xmath50 after stimulation however information gets transferred to ltm progressively for xmath51 it is worth noting here that the same amount of flux is transmitted through the mtj in both cases the simulation not only provides a visual depiction of the temporal evolution of a large array of mtj conductances as a function of stimulus but also provides inspiration for the realization of adaptive neuromorphic systems exploiting the concepts of stm and ltm readers interested in the practical implementation of such arrays of spintronic devices are referred to ref the contributions of this work over state of the art approaches may be 
summarized as follows this is the first theoretical demonstration of stp and ltp mechanisms in an mtj synapse we demonstrated the mapping of neurotransmitter release in a biological synapse to the spin polarization of electrons in an mtj and performed extensive simulations to illustrate the impact of stimulus frequency on the ltp probability in such an mtj structure there have been recent proposals of other emerging devices that can exhibit such stp ltp mechanisms like xmath52 synapses xcite and xmath53 memristors xcite however it is worth noting here that input stimulus magnitudes are usually in the range of volts 13v in xcite and 80mv in xcite and stimulus durations are of the order of a few msecs 1ms in xcite and 05s in xcite in contrast similar mechanisms can be exhibited in mtj synapses at much lower energy consumption by stimulus magnitudes of a few hundred xmath54 and duration of a few xmath55 we believe that this work will stimulate proof of concept experiments to realize such mtj synapses that can potentially pave the way for future ultra low power intelligent neuromorphic systems capable of adaptive learning the work was supported in part by center for spintronic materials interfaces and novel architectures c spin a marco and darpa sponsored starnet center by the semiconductor research corporation the national science foundation intel corporation and by the national security science and engineering faculty fellowship j schemmel j fieres and k meier in neural networks 2008 ijcnn 2008ieee world congress on computational intelligence ieee international joint conference on1em plus 05em minus 04emieee 2008 pp 431438 b l jackson b rajendran g s corrado m breitwisch g w burr r cheek k gopalakrishnan s raoux c t rettner a padilla et al nanoscale electronic synapses using phase change devices acm journal on emerging technologies in computing systems jetc vol 9 no 2 p 12 2013 m n baibich j m broto a fert f n van dau f petroff p etienne g creuzet a friederich and j chazelas giant magnetoresistance of 001 fe001 cr magnetic superlattices physical review letters 61 no 21 p 2472 1988 g binasch p grnberg f saurenbach and w zinn enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange physical review b vol 39 no 7 p 4828 1989 w scholz t schrefl and j fidler micromagnetic simulation of thermally activated switching in fine particles journal of magnetism and magnetic materials vol 233 no 3 pp 296304 2001 pai l liu y li h tseng d ralph and r buhrman spin transfer torque devices utilizing the giant spin hall effect of tungsten applied physics letters vol 101 no 12 p 122404 2012 h noguchi k ikegami k kushida k abe s itai s takaya n shimomura j ito a kawasumi h hara et al in solid state circuits conferenceisscc 2015 ieee international1em plus 05em minus 04emieee 2015 pp t ohno t hasegawa t tsuruoka k terabe j k gimzewski and m aono short term plasticity and long term potentiation mimicked in single inorganic synapses nature materials vol 10 no 8 pp 591595 2011 r yang k terabe y yao t tsuruoka t hasegawa j k gimzewski and m aono synaptic plasticity and memory functions achieved in a wo3 x based nanoionics device by using the principle of atomic switch operation nanotechnology vol 24 no 38 p 384003
synaptic memory is considered to be the main element responsible for learning and cognition in humans although traditionally non volatile long term plasticity changes have been implemented in nanoelectronic synapses for neuromorphic applications recent studies in neuroscience have revealed that biological synapses undergo meta stable volatile strengthening followed by a long term strengthening provided that the frequency of the input stimulus is sufficiently high such memory strengthening and memory decay functionalities can potentially lead to adaptive neuromorphic architectures in this paper we demonstrate the close resemblance of the magnetization dynamics of a magnetic tunnel junction mtj to short term plasticity and long term potentiation observed in biological synapses we illustrate that in addition to the magnitude and duration of the input stimulus frequency of the stimulus plays a critical role in determining long term potentiation of the mtj such mtj synaptic memory arrays can be utilized to create compact ultra fast and low power intelligent neural systems
introduction, formalism, results and discussions, conclusions, acknowledgements
in this paper all graphs considered are finite simple and undirected we use xmath5 xmath6 xmath7 and xmath2 to denote the vertex set the edge set the minimum degree and the maximum degree of a graph xmath1 respectively denote xmath8 and xmath9 let xmath10 or xmath11 for simple denote the degree of vertex xmath12 a xmath13 xmath14 and xmath15xmath16 is a vertex of degree xmath13 at least xmath13 and at most xmath13 respectively any undefined notation follows that of bondy and murty xcite a graph xmath1 is xmath0immersed into a surface if it can be drawn on the surface so that each edge is crossed by at most one other edge in particular a graph is xmath0planar if it is xmath0immersed into the plane ie has a plane xmath0immersion the notion of xmath0planar graph was introduced by ringel xcite in the connection with problem of the simultaneous coloring of adjacent incidence of vertices and faces of plane graphs ringel conjectured that each xmath0planar graph is xmath17vertex colorable which was confirmed by borodin xcite recently albertson and mohar xcite investigated the list vertex coloring of graphs which can be xmath0immersed into a surface with positive genus borodin et al xcite considered the acyclic vertex coloring of xmath0planar graphs and proved that each xmath0planar graph is acyclically xmath18vertex colorable the structure of xmath0planar graphs was studied in xcite by fabrici and madaras they showed that the number of edges in a xmath0planar graph xmath1 is bounded by xmath19 this implies every xmath0planar graph contains a vertex of degree at most xmath20 furthermore the bound xmath20 is the best possible because of the existence of a xmath20regular xmath0planar graph see fig1 in xcite in the same paper they also derived the analogy of kotzig theorem on light edges it was proved that each xmath21connected xmath0planar graph xmath1 contains an edge such that its endvertices are of degree at most xmath18 in xmath1 the bound xmath18 is the best possible the aim of this paper is to exhibit a detailed structure of xmath0planar graphs which generalizes the result that every xmath0planar graph contains a vertex of degree at most xmath20 in section 2 by using this structure we answer two questions on light graphs posed by fabrici and madaras xcite in section 3 and give a linear upper bound of acyclic edge chromatic number of xmath0planar graphs in section 4 to begin with we introduce some basic definitions let xmath1 be a xmath0planar graph in the following we always assume xmath1 has been drawn on a plane so that every edge is crossed by at most one another edge and the number of crossings is as small as possible such a dawning is called to be xmath22 so for each pair of edges xmath23 that cross each other at a crossing point xmath24 their end vertices are pairwise distinct let xmath25 be the set of all crossing points and let xmath26 be the non crossed edges in xmath1 then the xmath27 xmath28 xmath29 xmath30 of xmath1 is the plane graph such that xmath31 and xmath32 thus the crossing points in xmath1 become the real vertices in xmath30 all having degree four for convenience we still call the new vertices in xmath30 crossing vertices and use the notion xmath33 to denote the set of crossing vertices in xmath30 a simple graph xmath1 is xmath34 if every cycle of length xmath35 has an edge joining two nonadjacent vertices of the cycle we say xmath36 is a xmath37 xmath38 of a xmath0planar graph xmath1 if xmath36 is obtained from xmath1 by the following operations step 1 for each pair of 
edgesxmath39 that cross each other at a point xmath40 add edges xmath41 and xmath42 close to xmath40 ie so that they form triangles xmath43 and xmath44 with empty interiors step 2 delete all multiple edges step 3 if there are two edges that cross each other then delete one of them step 4 triangulate the planar graph obtained after the operation in step 3 in any way step 5 add back the edges deleted in step 3 note that the associated planar graph xmath45 of xmath36 is a special triangulation of xmath30 such that each crossing vertex remains to be of degree four also each vertex xmath46 in xmath45 is incident with just xmath47 xmath21faces denote xmath48 to be the neighbors of xmath46 in xmath45 in a cyclic order and use the notations xmath49 xmath50 where xmath51 and xmath52 is taken modulo xmath53 in the following we use xmath54 to denote the number of crossing vertices which are adjacent to xmath46 in xmath45 then we have the following observations since their proofs of them are trivial we omit them here in particular the second observation uses the facts that xmath36 admits no multiple edge and the drawing of xmath36 minimizes the number of crossing obs for a canonical triangulation xmath36 of a xmath0planar simple graph xmath1 we have 1 any two crossing vertices are not adjacent in xmath45 2 if xmath55 then xmath56 3 if xmath57 then xmath58 4 if xmath59 then xmath60 let xmath61 and xmath62 be a crossing vertex in xmath45 such that xmath63 then by the definitions of xmath64 and xmath65 we have xmath66 furthermore the path xmath67 in xmath45 corresponds to the original edge xmath68 with a crossing point xmath62 in xmath36 let xmath69 be the neighbor of xmath46 in xmath36 so that xmath70 crosses xmath68 at xmath62 in xmath36 by the definition of xmath45 we have xmath71 we call xmath69 the xmath72xmath73 of xmath46 in xmath36 and xmath74 the xmath75xmath76 of xmath46 in xmath36 other neighbors of xmath46 in xmath36are called xmath77xmath76 sometimes when we say mirror vertex image vertex and normal vertex we refer to mirror neighbor image neighbor and normal neighbor of xmath46 the triangle xmath78 in xmath36 is called the xmath72xmath79 incident with xmath46 since the neighbors of xmath46 in xmath45 can be listed in a cyclic order via replacing the crossing vertex by the mirror vertex incident with it the neighbors of xmath46 in xmath36can be also listed in a cyclic order note that different crossing vertices are adjacent to different mirror vertices since multiple edges are forbidden in xmath1 let xmath80 be the neighbors of xmath46 in xmath36 in a cyclic order we define xmath81 xmath82 and xmath83 where xmath52 is taken modulo xmath84 note that xmath36 is a canonical triangulation of xmath1 then xmath85 is a cycle which is called the xmath27 xmath86 of xmath46 denoted by xmath87 we call the path xmath88 a xmath89 of xmath87 if a the elements of xmath90 are image neighbors of xmath46 b the elements of xmath91 are mirror neighbors of xmath46 c the triangles in xmath36 of the form xmath92 where xmath93 are mirror triangles incident with xmath46 and d xmath94 and xmath95 are not mirror neighbors of xmath46 then xmath96 of a segment xmath97 is defined to be the number of mirror triangles incident with xmath46 using vertices in xmath98 denoted by xmath99 then we easily have xmath100 of a 1planar graph xmath1titlefigwidth604height207 now we show the main result in this section structure let xmath1 be a xmath0planar simple graph then there exists a vertex xmath46 in xmath1 with exactly xmath13 
neighbors xmath101 satisfying xmath102 such that one of the following is true xmath1031 xmath104 xmath1032 xmath105 with xmath106 xmath1033 xmath107 with xmath108 and xmath109 xmath1034 xmath110 with xmath111 xmath112 and xmath113 xmath1035 xmath114 with xmath115 xmath116 xmath117 and xmath118 xmath1036 xmath119 with xmath120 xmath121 xmath122 xmath123 and xmath124 the theorem is proved by contradiction let xmath1 be a simple xmath0planar graph with a fixed embedding in the plane and suppose xmath1 is a counterexample to the theorem note that if we add a new edge xmath125 between two nonadjacent vertices in xmath1 so that xmath126 is still 1planar graph then xmath126 shall also be a counterexample to the theorem hence in the following without loss of generality we always assume xmath1 is 2connected and xmath127 where xmath36 is a canonical triangulation of xmath1 that has been draw on a plane properly in other words xmath1 is just a canonical triangulation of itself so in the next there is no necessity to distinguish the two notions xmath1 and xmath36 and when we say xmath30 we also refer to xmath45 on the other hand by the definition of associated plane graph one can observe that xmath128 when xmath46 is not a crossing vertex so in the detail proof below we either need not to distinguish xmath129 and xmath10 when xmath46 is a vertex of xmath1 in which case we only use the notation xmath11 to represent both xmath129 and xmath10 for a fixed vertex xmath46 of xmath1 we define xmath130 xmath131 in short to be the number of neighbors of xmath46 in xmath1 which are of degree xmath52 for a vertex set xmath132 let xmath133 denote xmath134 and xmath135 for a subgraph xmath136 of xmath1 xmath137 represents the number of xmath52vertices contained in xmath136 let xmath87 be the associated cycle of xmath46 suppose xmath87 has xmath138 segments denoted by xmath139 in this cyclic order denote xmath140 let xmath141 xmath142 we call the vertex set xmath143 the xmath144 of the associated cycle xmath87 for each segment xmath97 we define a graph xmath145 so that xmath146 and xmath147 let xmath148 by the definition of xmath97 we have xmath149 see fig cal a triangle xmath150 in xmath1 is xmath151 if xmath152 otherwise we say that xmath150 is xmath153 note that for the vertex xmath46 described above there are xmath154 mirror triangles incident with it recall the definition of the parameter xmath154 now suppose xmath155 of them are heavy and xmath156 of them are light we divide all the light mirror triangles incident with xmath46 into three classes class slowromancap1 triangles in the form xmath150 such that xmath157 is mirror vertex and xmath158 are image vertices with xmath159 and xmath160 class slowromancap2 triangles in the form xmath150 such that xmath157 is mirror vertex and xmath158 are image vertices with xmath161 class slowromancap3 triangles in the form xmath150 such that xmath162 denote the number of triangles belonging to class slowromancap1 slowromancap2 and slowromancap3 by xmath163 and xmath164 where xmath165 and xmath166 claim 1 xmath167 since each heavy mirror triangle incident with xmath46 covers at least one xmath168vertex there are at least xmath169 xmath168vertices contained in heavy mirror triangles andthis lower bound is reachable only if each heavy mirror triangle covers exactly one xmath168vertex such that each pair of incident mirror triangles share one common xmath168vertex for each class slowromancap2 light mirror triangle xmath150 such that xmath157 is mirror vertex and xmath158 are 
image vertices with xmath170 since xmath1031xmath1034 are forbidden in xmath1 we have xmath171 and another three neighbors of xmath172 are all xmath173vertices let xmath87 be the associated cycle of xmath46 then xmath174 if xmath175 let xmath176 then xmath177 is a normal vertex so xmath177 could be incident with at most two image vertices of degree no more than five if xmath178 let xmath179 then xmath180 is a heavy mirror triangle with two xmath168vertices in either case we would account at least xmath181 new xmath168vertices which are not counted in the above step hence we have xmath182 claim 2 there is an integer xmath183 such that xmath184 since each class slowromancap1 light mirror triangle contains two vertices either of degree xmath17 or of degree xmath20 and each class slowromancap3 light mirror triangle contains three vertices either of degree xmath17 or of degree xmath20 we deduce that xmath185 claim 3 xmath186 since xmath1031 is forbidden we have xmath187 and xmath188 note that xmath189 we deduce from claim 1 and claim 2 that xmath186 claim 4 xmath190 recall the definitions of xmath191 and xmath192 where xmath142 each xmath145 contains xmath193 mirror triangles incident with xmath46 suppose xmath194 of them are light and xmath195 of them are heavy then xmath196 since xmath1033 is forbidden no light mirror triangle contains xmath197vertex so the neighbors of xmath46 in xmath1 with degree xmath197 are all contained either in the heavy mirror triangles or in the interval of xmath87 note that all the image vertices contained in xmath198 are of degree at least five see figure cal so if there is a 4vertex in xmath198 then it must be a mirror vertex in view of this one can easily claim that xmath198 contains at most xmath195 xmath197vertices furthermore if xmath199 xmath200 and xmath201 then xmath202 are all xmath173vertices since xmath1033 is forbidden in xmath1 so the triangles xmath203 and xmath204 are both heavy then one can similarly claim that xmath205 contains at most xmath206 xmath197vertices where xmath207 by 2 of observation obs and the definition of xmath145 if there are xmath21vertices on xmath87 they must be on the interval suppose there are xmath208 xmath21vertices in xmath192 where xmath209 if xmath210 then xmath192 contains at least xmath211 nonxmath197vertices since xmath1032 and xmath1033 are forbidden here note that neither xmath212 nor xmath213 can be xmath21vertex by 2 of observation obs since each image vertex is adjacent to a crossing vertex in xmath30 so xmath214 if xmath210 if xmath199 then xmath215 so the above inequation on xmath216 holds unless xmath200 and xmath201 in this special case this inequation becomes xmath217 indeed but on the other hand we have xmath218 and xmath219 by the former arguments note that xmath220 and xmath221 so we can deduce that xmath222 hence we have xmath190 now we apply the discharging method to the associated planar graph xmath30 of xmath1 since xmath30 is a planar graph and xmath223 by euler s formula we have xmath224 now we define xmath225 to be the initial charge of xmath226 let xmath227 for each vertex xmath12 and let xmath228 and for each face xmath229 it follows that xmath230 we now redistribute the initial charge xmath225 and form a new charge xmath231 for each xmath226 by discharging method since our rules only move charge around and do not affect the sum we have xmath232 a xmath21face in xmath30 is xmath233 if it is incident with one crossing vertex our discharging rules are defined as follows xmath2341 each non special 
xmath21face in xmath30 receives xmath235 from each vertex incident with it xmath2342 each special xmath21face in xmath30 receive xmath236 from each non crossing vertex incident with it xmath2343 each vertex xmath46 in xmath1 with xmath237 sends xmath238 to each adjacent xmath20vertex in xmath1 xmath2344 each vertex xmath46 in xmath1 with xmath239 sends xmath240 to each adjacent xmath20vertex and xmath241 to each adjacent xmath17vertex in xmath1 xmath2345 each vertex xmath46 in xmath1 with xmath242 sends xmath243 to each adjacent xmath20vertex xmath244 to each adjacent xmath17vertex and xmath245 to each adjacent xmath246vertex in xmath1 xmath2346 each vertex xmath46 in xmath1 with xmath247 sends xmath248 to each adjacent xmath20vertex xmath249 to each adjacent xmath17vertex xmath235 to each adjacent xmath246vertex and xmath250 to each adjacent xmath197vertex in xmath1 xmath2347 each vertex xmath46 in xmath1 with xmath251 sends xmath252 to each adjacent xmath20vertex xmath235 to each adjacent xmath17vertex xmath253 to each adjacent xmath246vertex xmath254 to each adjacent xmath197vertex and xmath255 to each adjacent xmath21vertex in xmath1 let xmath256 be a face of xmath30 then xmath257 if xmath256 is non special then by xmath2341 xmath258 if xmath256 is special then by observation obs xmath256 is incident with two non crossing vertices by xmath2342 we have xmath259 let xmath46 be a vertex of xmath1 since xmath1031 is forbidden we have xmath260 suppose xmath46 is a xmath53vertex and has xmath53 neighbors xmath261 in xmath1 where xmath262 in the following we show xmath263 for each such a vertex suppose xmath264 since xmath1032 is forbidden xmath46 is adjacent three xmath173vertices note that xmath46 is not incident with any special xmath21face by 2 of observation obs by xmath2341 and xmath2347 we have xmath265 suppose xmath266 since xmath58 by 3 of observation obs xmath46 is incident with at most two special xmath21faces if xmath267 then by xmath2341 xmath2342 and xmath2346 we have xmath268 if xmath108 then xmath269 since xmath1033 is forbiddenso by xmath2347 we have xmath270 suppose xmath271 if xmath272 then by xmath2341 xmath2342 xmath2345 and 4 of observation obs we have xmath273 so we may assume xmath111 if xmath274 then by xmath2341 xmath2342 and xmath2346 we have xmath275 so we may assume xmath112 then xmath276 for otherwise xmath1034 occurs in this case by xmath2341 xmath2342 and xmath2347 we also have xmath277 suppose xmath278 if xmath279 then by xmath2341 xmath2342 and xmath2344 we have xmath280 so we may assume xmath115 if xmath281 then by xmath2341 xmath2342 and xmath2345 we have xmath282 so we may assume xmath116 if xmath283 then by xmath2341 xmath2342 and xmath2346 we have xmath284 so we may assume xmath285 then xmath286 for otherwise xmath1035 occurs in this case by xmath2341 xmath2342 and xmath2347 we also have xmath287 suppose xmath288 if xmath289 then by xmath2341 xmath2342 xmath2343 and 4 of observation obs we have xmath290 so we may assume xmath120 if xmath291 then by xmath2341 xmath2342 and xmath2344 we have xmath292 so we may assume xmath121 if xmath293 then by xmath2341 xmath2342 and xmath2345 we have xmath294 so we may assume xmath295 if xmath296 then by xmath2341 xmath2342 and xmath2346 we have xmath297 so we may assume xmath123 then xmath298 for otherwise xmath1036 occurs in this case by xmath2341 xmath2342 and xmath2347 we also have xmath299 suppose xmath300 then by xmath2341xmath2348 we have xmath301 suppose xmath302 recall that xmath154 is the number of mirror 
triangles incident with xmath46 so xmath303 note that xmath304 and xmath305 by xmath2341 xmath2342 and xmath2343 we have xmath306 suppose xmath307 note that xmath304 and xmath308 by xmath2341xmath2342 and xmath2344 we have xmath309 suppose xmath310 note that xmath304 by xmath2341xmath2342 xmath2345 and claims 2 3 we have xmath311 suppose xmath312 note that xmath304 by xmath2341xmath2342 xmath2346 and claims 2 3 4 we have xmath313 suppose xmath314 note that xmath304 by xmath2341xmath2342 xmath2347 and claims 2 3 4 we have xmath315 by the above arguments we have xmath316 a contradiction hence we have proved the theorem let xmath317 be a family of graphs and let xmath136 be a connected graph let xmath318 be the smallest integer with the property that each graph xmath319 contains a subgraph xmath320 such that xmath321 if such an integer does not exist we write xmath322 we say that the graph xmath136 is xmath151 in the family xmath317 if xmath323 by xmath324 we denote the set of light graphs in the family xmath317 in the next xmath325 denotes a path with xmath13 vertices and xmath326 denotes a star with maximum degree xmath13 we use the notation xmath327 for the family of all xmath0planar graphs of minimum degree at least xmath328 in xcite fabrici and madaras showed that xmath329 and xmath330 and posed a few of open problems two of them are stated as follows is xmath331 true is xmath332 true in this section we partially answer these two questions by applying the results in section 2 let xmath1 be a simple xmath0planar graph with minimum degree xmath333 then xmath1 contains a xmath21path with all vertices of degree at most xmath334 by theorem structure xmath1 contains one of the configuration in xmath1033xmath1034xmath1035xmath1036 described in section xmath335 in each case we will find a path xmath336 in xmath1 such that xmath337 similarly we can prove an analogous theorem let xmath1 be a simple xmath0planar graph with minimum degree xmath338 then xmath1 contains a xmath21star with all vertices of degree at most xmath334 hence we have the following many corollaries xmath339 xmath340 a mapping xmath341 from xmath6 to the sets of colors xmath342 is called a xmath22 xmath13xmath343 xmath344 of xmath1 provided any two adjacent edges receive different colors the xmath343 xmath345 xmath346 xmath347 is the minimum number of colors needed to color the edges of g properly a xmath22 xmath13xmath343 xmath344 xmath341 of xmath1 is called an xmath348 xmath13xmath343xmath344 of xmath1 if there are no bichromatic cycles in xmath1 under the coloring xmath341 the smallest number of colors such that xmath1 has an acyclic edge coloring is called the xmath348 xmath345 xmath346 of xmath1 denoted by xmath349 acyclic edge coloring was introduced by alon et al xcite and they presented a linear upper bound on xmath349 it was proved that xmath350 holds for every graph which was later improved to xmath351 by molloy and reed xcite for planar graph xmath1 a fiedorowicz et al xcite proved that xmath352 recently hou et al xcite gave a better upon bound they showed that xmath353 holds for each planar graph let xmath354 be an edge coloring of xmath1 for any vertex xmath12 we define xmath355 in this section we consider the acyclic edge coloring of xmath0planar graphs let xmath360 and xmath361 suppose xmath362 by the minimality of xmath1 the graph xmath363 has an acyclic xmath3edge coloring xmath354 with color set xmath87 let xmath364 and xmath365 for the edge xmath366 we remain xmath367 note that xmath368 then xmath369 is an 
acyclic xmath3edge coloring of xmath1 a contradiction so xmath370 let xmath371 then xmath372 has an acyclic xmath3edge coloring xmath354 with color set xmath87 now we let xmath373 since xmath374 and xmath375 xmath376 now we color xmath377 by xmath378 it is easy to see that xmath379 for the edge xmath380 we also remain xmath367 then xmath369 is again an acyclic xmath3edge coloring of xmath1 a contradiction in this case xmath1 has one of the five configurations xmath1032xmath1033xmath1034xmath1035xmath1036 which are described in theorem structure let xmath381 xmath382 xmath383 xmath384 and xmath385 suppose xmath1 contains the xmath386th configuration where xmath387 if xmath388 let xmath389 otherwise let xmath371 then xmath372 has an acyclic xmath3edge coloring xmath354 with color set xmath87 if xmath388 let xmath390otherwise let xmath391 now we color xmath392 by a color xmath393 note that xmath374 and xmath394 we have xmath395 let xmath396 and xmath397 where xmath398 then we color xmath399 in turn as follows let xmath400 if xmath401 let xmath402 otherwise we let xmath403 atlast for each xmath404 we let xmath405 for the edge xmath380 we still remain xmath367 note that xmath406 xmath407 and xmath408 so this coloring xmath369 does exist it is easy to check that xmath369 is proper and acyclic so we have constructed a new coloring xmath369 which is an acyclic xmath3edge coloring of xmath1 a contradiction this completes the proof of theorem acyclic
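to make the coloring notions used above concrete, a proper edge coloring assigns distinct colors to edges sharing a vertex, and an acyclic edge coloring additionally forbids bichromatic cycles, equivalently every union of two color classes must induce a forest. the following small python sketch checks these two conditions for a given coloring; it is an illustrative aid only, not part of the proof, and all function and variable names are ours.

from itertools import combinations
from collections import defaultdict

def is_acyclic_edge_coloring(edges, coloring):
    """edges: list of (u, v) pairs; coloring: dict mapping each edge to a color.
    returns True iff the coloring is proper and has no bichromatic cycle."""
    # proper: edges sharing an endpoint must receive different colors
    at_vertex = defaultdict(list)
    for e in edges:
        u, v = e
        at_vertex[u].append(e)
        at_vertex[v].append(e)
    for incident in at_vertex.values():
        cols = [coloring[e] for e in incident]
        if len(cols) != len(set(cols)):
            return False
    # acyclic: for every pair of colors, the edges of those two color classes
    # must form a forest; checked with a simple union-find structure
    colors = set(coloring.values())
    for c1, c2 in combinations(colors, 2):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (u, v) in edges:
            if coloring[(u, v)] in (c1, c2):
                ru, rv = find(u), find(v)
                if ru == rv:          # this edge would close a bichromatic cycle
                    return False
                parent[ru] = rv
    return True

# usage: a 4-cycle colored with only two colors is proper but not acyclic
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_acyclic_edge_coloring(c4, {(0, 1): 'a', (1, 2): 'b', (2, 3): 'a', (3, 0): 'b'}))  # False
print(is_acyclic_edge_coloring(c4, {(0, 1): 'a', (1, 2): 'b', (2, 3): 'c', (3, 0): 'b'}))  # True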
a graph is called xmath0planar if it can be drawn on the plane so that each edge is crossed by at most one other edge in this paper we establish a local property of xmath0planar graphs which describes the structure in the neighborhood of small vertices ie vertices of degree no more than seven meanwhile some new classes of light graphs in xmath0planar graphs with the bounded degree are found therefore two open problems presented by fabrici and madaras the structure of 1planar graphs discrete mathematics 307 2007 854865 are solved furthermore we prove that each xmath0planar graph xmath1 with maximum degree xmath2 is acyclically edge xmath3choosable where xmath4 keywords xmath0planar graph light graph acyclic edge coloring please cite this published article as x zhang g liu j l wu structural properties of 1planar graphs and an application to acyclic edge coloring scientia sinica mathematica 2010 40 10251032
introduction, local structure of @xmath0-planar graphs, light graphs in @xmath0-planar graphs of the bounded degree, acyclic edge coloring of @xmath0-planar graphs
photoemission spectroscopy measures the energy distribution of photo emitted electrons when materials are irradiated with light xcitefig it is widely used in solid state physics and chemistry for investigating the electronic structure of surface interface and bulk materialsxcite recently it has become a prime choice of technique in studying strongly correlated electron systemsxcite such as high temperature superconductorsxcite the availability of synchrotron light sources and lasers combined with the latest advancement of electron energy analyzer has made a dramatic improvement on the energy resolution of photoemission technique in the last decade an energy resolution of xmath0 5mev or better can now be routinely obtained these achievements have made it possible to probe intrinsic properties of materials and many body effectsxcite for example measurements of the superconducting gap on the order of 1 mev as in conventional superconductorsxcite and in some high temperature superconductorsxcite have been demonstrated on the other hand the utilization of pulsed light sources such as synchrotron light or pulsed lasers has also brought about concerns of the space charge effectxcite when a large number of electrons are generated from a short pulsed source andleave the sample surface the electrons will first experience a rapid spatial distribution depending on their kinetic energy then because of the coulomb interaction the fast electrons tend to be pushed by the electrons behind them while the slow electrons tend to be retarded by those fast electrons this energy redistribution will distort the intrinsic information contained in the initial photoelectrons by giving rise to two kinds of effects one is a general broadening of the energy distribution due to both acceleration and retardation of electrons in their encounters the other is a systematic shift in the energy the space charge broadening of the energy distribution has been known for a long time as a limiting factor in electron monochromators and other electron beam devicesxcite but it has not been considered in photoemission until very recentlyxcite the main concern there was whether such an effect will set an ultimate limit on further improving the energy resolution of the photoemission techniquexcite here we report the first experimental observation of the space charge effect in photoemission in addition by combining experimental measurement with numerical simulations we show that the mirror charges also known as image charges in the literature in the sample also play an important role in the energy shift and broadening the combined effect of these coulomb interactions gives an energy shift and broadening on the order of 10 mev for a typical third generation synchrotron light source which is already comparable or larger than the energy resolution set by the light source and the electron analyzer the value is also comparable to the many body effect actively pursued by modern photoemission spectroscopy these effects therefore should be taken seriously in interpreting experimental data and in designing next generation experiments the experiment was carried out on beamline 1001 at the advanced light source this is a third generation synchrotron source which generates pulsed light with a frequency of 500 mhz and a duration of xmath060 ps the beamline can generate linearly polarized bright ultraviolet light with a photon flux on the order of 10xmath1 photons second with a resolving power exmath2e of 10000 e is the photon energy and xmath2e the 
beamline energy resolution the endstation is equipped with a high resolution scienta 2002 analyzer the analyzer together with the chamber is rotatable with respect to the beam while the sample position is fixed the measurement geometry is illustrated in the upright inset of fig 1 there are two angles to define the direction of electrons entering the analyzer with respect to the sample normal tilt angle xmath3 and analyzer rotation angle xmath4 we measured the sample current to quantitatively measure the number of electrons escaping from the sample which is proportional to the photon flux with the pulse frequency of 500mhz at the als 1 na of the sample current corresponds to 125 electrons per pulse 2a shows a typical photoemission spectrum of polycrystalline gold taken with a photon energy of 35 ev it consists of a fermi edge drop exmath5 near xmath030 ev valence band between 20xmath030ev and a secondary electron tail extending to lower kinetic energy arising from the inelastic scattering we chose to measure on gold because the sharp fermi edge at low temperature xmath020 k for all the measurements in the paper gives a good measure of both the energy position and width fig the fermi edge is fitted by the fermi dirac function fe1xmath6 at zero temperature convoluted with a gaussian with a full width at half maximum fwhm xmath7 this width xmath7 includes all the contributions from thermal broadening analyzer resolution beamline resolution and others in photoemission experiments it is a routine procedure to use fermi level of a metal such as gold as the energy referencing point for the sample under study because the fermi levels are expected to line up with each other when the metal and the sample are in good electrical contact the fermi level of the metal is also expected to be dependent only on the photon energy and not on other experimental conditions such as sample temperature photon flux etc it was therefore quite surprising when we first found out that the gold fermi edge shifts position with incident photon intensity fig a systematic measurement reveals that under some measurement geometries the fermi level varies linearly with the sample current and the shift can be as high as xmath020mev within the photon flux range measured fig note that the fermi level energy gets higher with increasing photon flux this rules out the possibility of sample charging that usually occurs due to poor electrical grounding of the sample in that case the fermi level energy would be pushed downward with increasing photon flux we can also rule out the possibility of the local sample heating due to high photon flux because temperature only affects the fermi edge broadening but will not change the fermi level position as we estimated for a photon flux of xmath010xmath8 photons second at a photon energy of 35 ev the corresponding power is xmath00056 mw the temperature increase with such a small power spread over an area of 1 mmxmath9 is negligible so it also has little effect on the thermal broadening of the fermi edge the first thing to check is whether this fermi level shift with photon flux is due to instrumental problems which can be from either the beamline or the electron analyzer regarding the beamline the photon flux is usually varied by adjusting the size of the beamline slits this will change the beamline energy resolution correspondingly but may potentially also cause energy position change to check whether this is the case we put a photon blocker in the beamline fig 1 so that it can attenuate the 
photon flux while keeping the photon energy and resolution intact using the photon blocker we observed a similar variation of the fermi level with photon flux fig 3a thus ruling out the possibility of beamline problems we also put an electron blocker fig 1 to vary the number of electrons collected by the analyzer when the photon flux on the sample is fixed the fermi edge shows little change with the number of electrons entering the analyzer fig this indicates that the energy shift we have observed is not due to problems of the electron analyzer either therefore the observed energy shift must be associated with the photoemission process itself in addition to the energy position shift there is also an energy broadening associated with increasing photon flux to observe such an effect we have to compromise the beamline energy resolution in the way that it has a relatively high photon flux to induce an obvious broadening effect and a relatively high energy resolution xmath010mev in order to resolve the additional broadening from all other contributions the measurement is made possible by taking the advantage of the photon blocker to fix the contribution from the beamline the total width increases with increasing photon flux inset of fig 4 taking the width at the lowest photon flux as arising from all the other contributions including the beamline the analyzer and sample temperature broadening the photon induced energy broadening can be extracted after deconvolution as seen in fig 4 it varies with the photon flux with a magnitude comparable to but slightly larger than the energy shift we have found that the fermi edge shift and broadening are sensitive to the spot size of the beam on the sample fig the spot size is changed by varying the vertical focus of the beamline the horizontal beamsize is fixed it is measured using the transmission mode of the analyzer calibrated by using samples with known size as seen from fig 5a as the spot size increases the energy shift gets less sensitive to the change of photon flux as also seen from the slope change as a function of the spot size fig 6 for comparison 6 also includes the simulated data over a large range of spot sizes although the data of energy broadening fig 5b is scattered as a result of deconvolution from a relatively large background value the trend is clear that the broadening gets smaller with increasing spot size again for a given beam size the magnitude of the energy broadening is comparable to but slightly larger than the corresponding energy shift the fermi edge shift and broadening are also sensitive to the electron emission angle we set the gold sample at different tilt angles and measured the fermi level position and width as a function of the analyzer angle under various photon flux as seen in fig 7 the fermi level position exhibits a strong variation with the analyzer angle particularly at high photon flux the fermi level is higher near smaller analyzer angle and decreases with increasing analyzer angle when the analyzer angle is close to 90 degrees all the curves with different sample tilt angle and with different sample current tend to approach to a similar position within the experimental error the overall measured fermi level width basically follows the trend of the energy shift it becomes smaller with increasing analyzer angle we also notice that the curves are not symmetrical with respect to the zero analyzer angle since the surface of the polycrystalline gold we used is not perfectly flat one possible reason is that the exact 
angle may be slightly off from the nominal value another possibility is the presence of a small systematic error as indicated from fig 7 when the sample current is small 23 na one can still observe fermi level shift with the analyzer angle which may be due to a systematic error associated with the experimental setup to gain more insight on the angle dependence we also measured the energy shift and broadening as a function of the sample current at different analyzer angles figs 8a and b it is interesting to note that while for small analyzer angles the energy shift is proportional to the sample current as we have seen before it deviates significantly from the straight line for large angles in this case the energy shift exhibits linear relation only at high sample current when the sample current gets smaller it goes through a minimum and then gets larger again even with further decreasing of the sample current one may expect that at zero sample current the energy shift approach zero so that all curves should converge at the zero sample current as indeed shown by the data in fig the small fermi level scattering at zero sample current may be due to the systematic error as discussed before this implies that for large analyzer angles the energy shift can be even negative at some sample current fitting the high sample current part of the curves in figs 8a and b with a straight line we extracted their slopes and plotted them in fig 8c for two sample tilt angles the shape of the curves is similar to that in fig the high sample current part overlaps with each other when extrapolated to 90 degreesthe fermi level shift is approaching zero which is also consistent with the converging of the fermi level at high analyzer angle as seen in fig 7 to further investigate the origin of the angle dependent energy shift and broadening we measured the gold valence band at different analyzer angles fig the intensity of these spectra are normalized to the photon flux so they are comparable with each other the shape of the valence band shows no obvious change with the analyzer angle but their relative intensity changes dramatically for a quantitative comparison we integrated the spectral weight over a large energy range 5xmath035 ev and the result is shown in the inset of fig 9 integration over a smaller energy window such as 25xmath035 gives essentially the same shape we have found that the angular variation of the relative valence band intensity and the fermi level shift is identical inset of fig this indicates that the angle dependence of the fermi level is directly related to the angle dependence of the number of photo emitted electrons it is expected that the space charge effect depends on a number of parametersxcite 1 the number of electrons per pulse 2 the pulse length 3 the size and shape of the excitation area and 4 the energy distribution of the electrons we have performed numerical simulations using the monte carlo based technique developed earlierxcite in order to quantitatively examine our results this serves first to check whether the observed energy shift and broadening can be entirely attributed to the space charge effect it then helps to understand the microscopic processes associated with it such as the time scale of the process moreover it can be extended to investigate situations that are difficult or not accessible for the experiments such as the effect of the electron energy distribution the effect of the pulse length and the case of a continuous source as we will discuss below in the 
simulation a specified number of electrons 1 100000 denoted as interaction electrons hereafter are started at random positions within the specified source area at random times during the pulse and with random energies with some specified distribution because the acceptance angle of the electron energy analyzer is small the electrons for which the energy spread and broadening are to be calculated denoted as test electrons hereafter are started in the forward direction with a specified initial energy but with a random distribution in start position and time this condition corresponds to the measurement geometry of the analyzer angle xmath40 and the sample tilt angle xmath30 each test electron is assumed to feel the coulomb force from all interaction electrons within some cut off distance the interaction electrons are assumed to move in straight lines defined by their initial conditions ie all mutual interactions between them are neglected this is legitimate because their position changes are extremely small and random the energy evolution of a single test electron is followed until all interaction electrons have vanished outside the cut off distance then the process is repeated with a new set of interaction electrons and one new test electron this procedure is repeated a few thousand times after which the energy distribution of the test electrons is calculated for the accuracy of the integration to be of the same order of magnitude as the statistical uncertainty the cut off distance has to be at least 1 mm and for most calculations it was chosen to be 2 mm the energy distribution can usually be well fitted by a gaussian although the number of electrons which experience very large shifts is significantly larger than for the gaussian distribution such extreme outliers are neglected when calculating the width of the distribution the electrons in the pulse will experience coulomb interaction from all the other electrons at different energies including the large number of low energy secondary electrons fig to evaluate the effect of the electron energy distribution on the electrons at the fermi level we divided the energy range below exmath5 into a number of regions and calculated the contribution from each individual region the simulated energy shift and broadening from the direct space charge effectare plotted in figs 10a and 10b respectively the energy shift displays a strictly linear relation with the number of electrons in a pulse and the slope as a function of test electron kinetic energy is plotted in fig 11 on the other hand the energy broadening exhibits a nearly linear relation only at large number of electrons at small number of electrons it shows a bend clearly all electrons contribute to the fermi level energy shift and broadening but they contribute differently the high energy electrons contribute more than the low energy ones fig in fact an electron at a distance z in front of a conducting metal surface will also experience an attractive force fzexmath9xmath102zxmath9 identical to that produced by a positive mirror image charge at a distance z inside the metalxcite the basic assumption behind the mirror charge concept is that the charges on the sample surface redistribute themselves in such a way that the surface is always an equipotential surface whether this assumption is correct on the time scale considered heremay be dependent eg on the conductivity of the sample in this case each interaction electron is accompanied by a mirror charge in the sample inset of fig 12a which also 
interacts with the test electron the interaction of the test electron with its own mirror charge is not included here because it is always present in the earlier simulationxcite the mirror charges could be neglected when only considering the broadening caused by interaction electrons with energies close to that of the test electron for the casewhen the test electron has higher energy than all interaction electrons this is no longer true in particular when the energy shifts are also considered fig 12a and12b show simulated energy shift and broadening for different energy ranges by incorporating both the space and mirror charge effects the energy shift retains a linear variation with the number of test electrons per pulse and the slope is plotted in fig the contribution from the mirror charge alone can be easily extracted apparently the mirror charge gives rise to a negative energy shift with increasing number of electrons per pulse this helps in compensating the positive energy shift from the space charge effect the combined effect on the energy broadening is more complicated for the highest energy range of the interaction electrons 25 30 ev the combined broadening fig 12b is larger than that from the space charge effect alone fig but for the lower energy range of the interaction electrons it is smaller than that from the space charge effect we have found that the energy shift and broadening occur at very different time scales as seen from fig 13 the energy shift evolves gradually within the first nanosecond the energy broadening on the other hand has already reached its equilibrium value at 100 ps followed by random fluctuations this is because the energy shift takes place only after the electrons have spatially sorted themselves according to their energy after that the forces are all acting in the same direction we also note that initially each interaction electron and its mirror charge form a very short dipole from which the field decreases rapidly with distance the broadening on the other hand is much more of a nearest neighbor effect which is strongest when the pulse is dense detailed study of the energy evolution for individual electrons shows that the random part of the energy change is often dominated by one single event ie a close encounter with another electron since the energy shift continues to grow over a time that is comparable to the interval between pulses we have also checked whether it can be affected by remaining slow electrons from the previous pulse we have found that this contribution is completely negligible since a time continuous light source such as discharge lamps is widely used as a lab source for photoemission it is important to check whether similar effects still exist in that case for a continuous light source because there will be no spatial redistribution of the electrons according to their energy one might expect the contribution to the energy shift from the space charge to be close to zero while the mirror charge will give a negative shift the broadening can be expected to be of the same order of magnitude as that from a pulsed source with the same number of electrons per unit time to simulate a continuous source we first start with a pulsed source varying the pulse length while keeping the number of electrons per unit time constant and try to extrapolate to infinite length to approximate a continuous source we have considered a typical case of helium i radiation photon energy 2112ev on polycrystalline gold and varied the sample current during the pulse 
from 015 to 50 electrons ps 14 shows the energy shift and broadening for different sample currents as a function of the pulse length when scaled by the sample current all energy shift curves overlap with each other because the shift is proportional to the current for all pulse lengths fig the energy shift shows non monotonic dependence with the pulse length owing to the competition between the direct space charges and mirror charges when the pulse length is short the space charge dominates which gives positive energy shift when the pulse length is long enough the effect from mirror charges dominates which leads to a negative energy shift eventually it asymptotically reaches a value that can be taken for a continuous source the shift is 07mev for 15xmath11xmath1 electrons second and can get significant when the photon flux is larger the energy broadening fig 14b on the other hand does not scale with the sample current particularly at longer pulse length it also exhibits a non monotonic variation with the pulse length reaching a maximum around 10xmath12 ps and then decreases with further increasing of the pulse length if we assume an asymptotic behavior following the drop the broadening for the continuous source is close to 1 mev for a sample current of 15xmath1310xmath1 electrons second as we have seen from both the experimental measurements and the simulation the energy shift and broadening depend on many parameters such as the number of electrons per pulse the pulse length the spot size on the sample the emission angle of electrons and the photon energy used moreover it is material specific this is first because it depends on the shape of the valence band ie the energy distribution of photoelectrons seocnd for metals and insulators the effect of mirror charge may vary significantly with so many factors coming into play simultaneously it is hard to exhaust all the possibilities and a proper approach to take is to measure or simulate on an individual basis as shown in figs 4 and 5 the measured energy shift is proportional to the sample current andthe broadening is nearly linear at high sample current and shows a bend at lower sample current qualitatively speaking both observations are consistent with the simulated results from either the space charge effect fig 10 or combined space and mirror charge effects fig 12 after obtaining the contribution for each individual energy range from the space charge effect fig 10 we calculated the overall energy shift from the measured valence band fig 2a as a weighted sum of the contributions from the different energy ranges we also used a model where the energy distribution is approximated by a rectangular shape corresponding to the valence band and a triangular distribution of the secondary electrons the obtained results are similar it was found that the value for the energy shift obtained from the space charge effect alone is much higher than that measured from experiment for example for the spot size of 043mmxmath13042 mm the calculated energy shift is 0175 mev na much higher than the measured 0055 mev na the large discrepancy indicates that the space charge effect alone can not account for the observed energy shift this prompted us in identifying the mirror charge effect that should be present for metals such as gold after considering both effectsfig 12 the calculated energy shift becomes quantitatively consistent with the experiment as seen in fig 6 even for different spot sizes considering that there are no adjustable parameters in the simulation 
this level of agreement is striking this indicates that we have captured the main contributors to the energy shift effect the quantitative comparison between the measurement and the simulation has made us able to identify the mirror charge effect that was not included beforexcite for the area dependence fig 6 we note that the size of the spot on the sample relative to the distance an electron travels during the pulse is important depending on the relative ratio the space charge effect may exhibit different dependence on the spot size if the light spot is much larger than the electron travelling distance for 30 ev electron the travelling distance is xmath002 mm within 60 ps the shape of electron spatial distribution is basically flat the space charge effect is expected to be proportional to the number of electrons area when the spot size gets smaller one will get increasingly important edge effects because electrons that move outside the spot will not be compensated by electrons coming from the outside in the limit where the spot is very small the spatial distribution of electrons is a half sphere the average distance between electrons will be defined by their time interval rather than by the distance between the points where they started so in that case the effect may become independent of the spot size on the other hand there are cases where the simulation deviates from measurements we found that the measured broadening is larger than the values calculated from the simulation as shown in fig 6 from the simulation the broadening is smaller than the shift whereas from the measurement fig 4 and 5 the broadening is comparable or slightly larger than the shift the reason for this discrepancy is not clear yet and probably more sophisticated simulations are needed to address the discrepancy we note the broadening can be larger than the shift when the energy of the interaction electrons is close to that of the test electron energy range 25xmath030 ev in fig 12b which is probably due to the longer average interaction times however in the case of gold because the fraction of electrons in the range close to the fermi edge is very small fig 2a this contribution is small to the overall broadening the angular dependence of the energy shift fig 7 can be well attributed to angle dependent number of electrons at different emission angles fig 9 which is probably associated with the linear polarization of the synchrotron light however to understand the negative energy shift for high analyzer angles at lower sample current more simulation is also needed the observation of space and mirror charge effects has important implications in photoemission experiments as well as the future development of the technique these findings first ask for particular caution in interpreting photoemission data one immediate issue is the electron energy referencing in photoemission spectroscopy in photoemission communityit is a routine procedure to use the fermi level of a metal as a reference this is usually realized by measuring the fermi level from a metal such as gold which is electrically connected to the sample under measurement it is true that the intrinsic fermi level of the sample is lined up with that of the metal but the measured fermi level has an offset from the space and mirror charge effects this offset can be different between the sample and the metal because it is not only material specific but also depends on many other factors when the effect on the energy shift is strong using the fermi level from a metal as a 
reference becomes unreliable another related issue is the fermi level instability during measurement because the photon flux usually changes with time for many synchrotron light sources due to the finite life time of electrons in the storage ring the fermi level is always changing with time during measurements as we have shown before this can give rise to an fermi level uncertainty on the order of 10 mev for a typical experimental setting using a third generation synchrotron light source this is comparable or larger than many energy scales which are actively pursued in many body problems in the condensed matter physicsxcite measurement with an energy precision of 1 mev is necessary for example when the superconducting gap in some conventional metals as well as in some high temperature superconductors is on the order of 1 mevxcite in this case an uncertainty or shift on the order of 10 mev definitely poses a big problem to resolve the fermi level referencing problem one can always minimize the space charge effect by reducing the photon flux or increasing the spot size apparently this is not desirable particularly when a high photon flux is necessary to take data with a good statistics and a high efficiency given that the fermi level referencing to a metal is no longer reliable one may use an internal reference from the sample under measurement this internal reference can be obtained from priori knowledge or measurements with negligible space charge effect for example in high temperature cuprate materials the 00 to xmath14xmath14 nodal direction can be used as an internal reference to locate the fermi level because it has been shown that the superconducting gap and pseudogap approaches zero along this direction except for slightly doped samplesxcite as for the fermi level instability with time since the energy shift exhibits a linear relation with the photon flux it is possible to make corrections by recording the sample current or photon flux ideally this problem can be minimized if the synchrotron light source is operated at a constant or quasi constant photon flux top off mode in addition to the fermi level uncertainty the energy broadening is another serious issue facing the photoemission technique since most physical properties of materials are dictated by electronic excitations within an energy range of xmath0kxmath15 t near the fermi level kxmath15 is the boltzman constant and t a temperature to probe the intrinsic electronic properties the energy resolution has to be comparable or better than kxmath15 t which is 08 mev for 10 k therefore there is a strong scientific impetus to improve the photoemission technique to even higher energy resolution sub mev accompanied by high photon flux and small beam size the space and mirror charge effects should be taken into account seriously in the future development of new light sources and electron energy analyzers the high photon flux and small spot size will enhance the space and mirror charge effects the resultant energy broadening can be well beyond the resolution from the electron analyzer and the light source with the increasing demand of high energy resolution it is important to investigate how to alleviate or remove the space charge effect for example it is interesting to study whether applying a bias voltage between the sample and the electron detector will affect the space charge effect on the other hand in addition to putting more effort on improving the performance of the light sources it is very important to put emphasis on enhancing 
the capabilities of the electron energy analyzer one aspect is to further increase the sensitivity of electron detection by using new electron detection schemes the other aspect is to keep improving the analyzer throughput note that even for the state of the art display electron analyzer used in angle resolved mode less than 1xmath16 of the electrons are collected during measurements while all the rest of the electrons emitted over the 2xmath14 solid angle from the sample surface are wasted a new scheme needs to be explored to record a large solid angle while maintaining high energy resolution it is apparent that much work needs to be done and we hope our identification of the coulomb effects can stimulate more work along this direction

s huefner photoemission spectroscopy principles and applications springer verlag berlin 1995
angle resolved photoemission theory and current applications edited by s d kevan elsevier the netherlands 1992
special issue of science 288 no 5465 2000
special issue of j electron spectroscopy and related phenomena 117 118 2001
a damascelli z hussain and z x shen rev mod phys 75 473 2003
a chainani et al phys rev lett 85 1966 2001
n p armitage et al phys rev lett 86 1126 2001
t sato et al science 291 1517 2001
b wannberg p baltzer and s shin preprint 2000
h boersch z physik 139 115 1954
u hofer et al science 277 1480 1997
p m echenique and j b pendry prog surf sci 32 111 1989

we thank a fujimori j bozek and s sodergren for stimulating discussions the experiment was performed at the als of lbnl which is operated by the doe s office of bes division of material science with contract de fg03 01er45929a001 the division also provided support for the work at ssrl with contract de fg03 01er45929a001 the work at stanford was supported by nsf grant dmr0304981 and onr grant n00014 98 1 0195p0007 and the work at colorado was supported by nsf grant dmr 0402814 and doe grant de fg02 03er46066

figure captions
experimental geometry the photon beam lies in the xy horizontal plane parallel to the y axis the sample normal is in the xz plane at an angle xmath3 to the z axis and the rotatable analyzer lens axis is in the yz plane at an angle xmath4 to the z axis
fermi edge measurements with xmath3 at 45 degrees and the analyzer angle xmath4 at 0 the beam spot size is xmath0043xmath13030 mmxmath9 and the photon flux corresponding to 150 na sample current is xmath05xmath1310xmath8 photons per second the inset shows the measured overall fermi edge width as a function of the sample current including the beamline the analyzer and the temperature broadening with the net broadening from the pulsed photons obtained by deconvolving the measured data against the width at low photon flux taken as the sum of all other contributions
effect of beam size on the energy broadening with xmath3 at 45 degrees and xmath4 at 0 the broadening at high sample current can be approximated by a straight line and the solid lines also act as a guide to the eye
energy shifts for different sample tilt angles a xmath3 22 b xmath3 37 and c xmath3 52 degrees the curves in each panel represent different sample currents sc under a given beamline resolution de and for any given curve the sample current is nearly constant
angular dependence at a sample tilt angle of xmath3 37 degrees the inset shows the integrated spectral weight over the entire 5xmath035 ev energy range as a function of the analyzer angle xmath4 black solid squares together with the fermi level versus analyzer angle measured under similar conditions blue circles
pulse length dependence at 042 mm the energy shift curves all overlap with each other indicating that the energy shift is proportional to the electron current while the energy broadening curves do not strictly overlap particularly at longer pulse length
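the flux referencing correction and the broadening deconvolution described above can be illustrated with a minimal numerical sketch this is not the analysis code used for the measurements it assumes a strictly linear dependence of the apparent fermi level on the recorded sample current and gaussian like broadening contributions that add in quadrature and the function names are ours

    import numpy as np

    def fermi_level_at_zero_flux(sample_current_na, fermi_level_mev):
        # assumption: the space charge induced shift is linear in the photocurrent,
        # so a straight line fit extrapolated to zero current recovers the
        # unperturbed fermi level; the slope is the shift per unit current
        slope, intercept = np.polyfit(sample_current_na, fermi_level_mev, 1)
        return intercept, slope

    def net_pulse_broadening(measured_width_mev, low_flux_width_mev):
        # assumption: the beamline, analyzer and thermal widths measured at
        # negligible flux add in quadrature with the space charge broadening,
        # so the latter follows from a quadrature subtraction
        diff = measured_width_mev ** 2 - low_flux_width_mev ** 2
        return np.sqrt(max(diff, 0.0))

with the fermi edge position and width recorded as a function of sample current these two routines reproduce the kind of zero flux extrapolation and deconvolution referred to in the text and in the figure captions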
we report the observation and systematic investigation of the space charge effect and mirror charge effect in photoemission spectroscopy when pulsed light is incident on a sample the photoemitted electrons experience energy redistribution after escaping from the surface because of the coulomb interaction between them (the space charge effect) and between the photoemitted electrons and the distribution of mirror charges in the sample (the mirror charge effect) these combined coulomb interaction effects give rise to an energy shift and a broadening which can be on the order of 10 mev for a typical third generation synchrotron light source this value is comparable to many fundamental physical parameters actively studied by photoemission spectroscopy and should be taken seriously in interpreting photoemission data and in designing next generation experiments key words space charge mirror charge photoemission fermi level shift fermi level broadening
introduction, experiment, numerical simulation of space charge effect and mirror charge effect, comparison between the experiment and the numerical simulation and discussions, implications of space charge effect
the interplay between superconducting and magnetic order in ruthenocuprates raised considerable interest for these materials they exist in three modifications rusrxmath0recuxmath0oxmath1 ru1212 rusrxmath0rexmath2cexmath3cuxmath0oxmath4 ru1222 and recently synthesized xcite rusrxmath0recexmath0cuxmath0oxmath5 ru1232 re gd eu for ru1212 and ru1222 re y dy for ru1232 neutron scattering measurements on ru1212 xcite have revealed the existence of the antiferromagnetic order in ru sublattice below 130 k although ferromagnetic like features have been observed in magnetization measurements xcite measurements of microwave absorption xcite ac susceptibility xcite nmr xcite have shown an evidence of the spontaneous vortex phase as an explanation for the coexistence of the superconductivity and magnetism below sc transition around 30 k on the other hand xue et al xcite have suggested the nanoscale separation between ferromagnetic and antiferromagnetic islands as for the ru1222 composition the magnetic structure is still unknown esr measurements xcite have displayed a ferromagnetic resonance below txmath6 180 k with an enhancement in magnetism around txmath7 100 k xue et al xcite have found evidence of the clusters above the main peak at txmath7 around 100 k depends on the eu ce ratio detailed magnetization and mssbauer study xcite have shown two component magnetism which has been also supported with the muon spin rotation results xcite spin glass like behavior as suggested by cardoso et al xcite occurs at txmath7 various dynamical features have been reported by us xcite including pronounced time relaxation of ac susceptibility and the inverted butterfly hysteresis loops in ac susceptibility in this paperwe present further investigation of the unusual behavior of the ru1222 material we have concentrated on the peculiarities of the butterfly hysteresis and have found supporting evidence in favor of the above mentioned two component magnetism picture samples used in this study are the same as the ones used in our previous article xcite ac susceptibility measurements were taken by the use of commercial cryobind system with the frequency of the alternating field equal to 431 hz and the field amplitude of 01 oe the stabilization of the temperature for the hysteresis measurement was better than 10 mk when a standard ferromagnet characterized with the domain structure is swept in a dc field up and down its magnetization m shows a well known m h hysteresis the characteristic elements of the m h hysteresis loop are the coercive field and the remanent magnetization another type of hysteretic behavior of a ferromagnet can be studied in the mutual inductance arrangement of ac susceptibility technique imposing an ac field superimposed over the sweeping and cycling dc field in the latter case onemeasures a butterfly hysteresis xcite in increasing dc fieldit is characterized by a monotonously decreasing virgin curve followed by the repetitive branches of further reduced susceptibility values the repetitive branches are characterized by the characteristic maxima which are related to the coercive field the values of the coercive field as obtained from m h and butterfly hysteresis are not necessarily the same due to the inherent difference between dc and ac techniques in fig srruo3hys butterfly hysteresis is shown for srruoxmath8 an itinerant ferromagnet xmath9 virgin branch xmath10 descending field branch xmath11 ascending field branch at variance with a standard butterfly hysteresis the one discovered to characterize 
ru1222 material xcite shows very different behavior when subjected to the sweeping dc field in fig ru1222hysa we present the butterfly hysteresis for the ru1222euce x 10 material the virgin branch has a maximum denoted with hxmath12 followed by the descending field branch which has a systematically higher susceptibility than the virgin branch instead of one there are two types of maxima at hxmath13 before and hxmath14 after hxmath15 0 in the ascending field branch the hxmath13 maximum appears again followed by the hxmath14 maximum for hxmath16 such an unusual magnetism has not been reported so far and it reveals that the ru1222 material is a rather unique magnetic system very different from an ordinary ferromagnet fig ru1222hysb displays the m h curve measured with a vibrating sample magnetometer see also xcite apart from a somewhat unusual virgin curve overlapping with the right hand side hysteresis branch the m h hysteresis appears quite regular and standard implying that the unusual butterfly hysteresis features rely on the magneto dynamics of the ru1222 compound (caption of fig ru1222hys a butterfly hysteresis virgin branch xmath10 descending field branch xmath11 ascending field branch b normal hysteresis obtained with a vibrating sample magnetometer taken at the same temperature and the same maximum dc field as in a) the temperature dependence of the ac susceptibility of the ru1222 materials is shown in fig tempdep for x 10 the main peak txmath7 is at 120 k while for x 07 it is at 90 k the x 07 sample is superconducting below 30 k the anomaly at 130 k can also be seen especially for the x 10 sample it has been shown by felner et al xcite that this anomaly has a different magnetic origin than the main peak at txmath7 also xmath17sr experiments xcite revealed no bulk character of the latter anomaly for the x 10 composition the main peak at txmath7 and the anomaly at 130 k are very close and overlapping so we will use the x 07 material to exclude the possibility that the butterfly hysteresis is connected to this anomaly a qualitative change in the behavior of the butterfly hysteresis can be seen in fig hystemp three characteristic temperatures have been chosen in order to illustrate the range of peculiar behaviors of the butterfly hysteresis covered by varying the temperature all the curves have been scaled by the value xmath18hxmath19 0 oe the most important feature to notice is the disappearance of the inverted part characterized by the maximum at hxmath13 in fig ru1222hys for temperatures above 30 k the butterfly hysteresis is inverted but below it starts to accommodate the single peak shape associated with ferromagnets fig srruo3hys except for the virgin branch the hxmath12 maximum in the virgin curve has been linked with a possible spin flop mechanism xcite where the initial afm state changes into a fm state under the influence of the external dc field the exact temperature where the hxmath12 peak and the butterfly hysteresis set in is indicated with the arrows in fig tempdep for the x 10 composition there is just a small difference in temperature between the main peak at txmath7 and the anomaly at 130 k and one can not be sure with which peak to associate this event for that reason we have measured the x 07 material for which the main peak is at 90 k and it clearly indicates that the observed dynamics is not connected with the small anomaly around 130 k but rather with the main peak at txmath7 which in turn depends on the eu ce ratio (caption of fig hystemp butterfly hysteresis for ru1222 cexmath20 x 07 at 95 k just above the temperature at which the inverted peak emerges the featureless curve resembles the shape of the butterfly hysteresis characterizing the ru1212 material below its magnetic ordering transition xcite at 78 k bottom panel the inverted peak is well defined and dominates the hysteretic behavior xmath9 virgin branch xmath10 descending field branch xmath11 ascending field branch) in addition to the temperature dependence of the butterfly hysteresis for the ru1222 material we have investigated the influence of the maximum dc field one reaches in cycling the dc field fig maxdc shows how the inverted peak hxmath13 emerges for small dc fields above 25 oe and eventually completely overwhelms the underlying ferromagnetic behavior for larger dc fields where it tends to saturate here larger denotes a few hundred oe after careful analysis in which the ferromagnetic background has been subtracted one gets the dependence of the inverted peak hxmath13 on the maximum dc field imposed on the system hxmath21 shown in the inset of fig maxdc the line represents a fit to the formula xmath22 with the parameters hxmath23 421 oe and hxmath24 724 oe correlation coefficient r 0987 an illustrative numerical version of such a saturation fit is sketched after the reference list below at the same time there is no observable shift in the hxmath14 peak as is expected for a coercive maximum one should note that there is no sign of the hxmath13 peak in the descending field branch squares for hxmath25 0 nor in the ascending field branch triangles for hxmath26 0 also it seems that the hxmath12 peak is linked only to the virgin curve resembling the s shaped vsm virgin curve see fig srruo3hysb and has no influence on the behavior of the inverted part for hxmath21 350 oe (caption of fig maxdc inset the maximum of the inverted peak hxmath27 vs the maximum dc field hxmath21 the line is the fit to the exponential saturation see text xmath9 virgin branch xmath10 descending field branch xmath11 ascending field branch) as has been recently proposed xcite ru1222 materials consist of two phases the minority one that orders around 180 k and the majority one that orders around 100 k depending on the eu ce ratio the exact nature of both orderings is still unclear although ferromagnetic like features have been observed detailed investigation of the butterfly hysteresis in ru1222 reveals a more complex magnetism characterizing this material the temperature dependence of the inverted part fig hystemp shows that it emerges at the main peak txmath7 where the majority phase orders progressively freezing out as the temperature is lowered we have investigated samples with x from 05 to 10 and they all show the same behavior suggesting that a similar magnetic ordering takes place in all compositions the hxmath14 peak which marks the coercive field gradually increases as the temperature is decreased and reaches a value of 100 oe at 42 k it can be compared with the values of 250 oe obtained in xcite for much larger maximum fields 50 koe from figs hystemp and maxdc it is obvious that hxmath14 is not affected by the presence of the inverted peak suggesting two separate contributions to the ac susceptibility presently we are unable to provide a full interpretation of the exponential dependence shown in the inset of fig maxdc we note however that it indicates the presence of some fundamental interaction which is susceptible to small magnetic fields also a remarkable fact is that no feature can be seen in the magnetization measurements either vsm or squid that would correspond to the inverted part of the butterfly hysteresis we suggest that the inverted behavior might reflect the interaction of the two magnetic phases namely the ferromagnetic clusters and the
background matrix ordering at txmath6 and at txmath7 respectively as suggested by cardoso et al xcite the phenomenology of spin glasses could be relevant for ru1222 as well indeed we have observed a frequency dependence of the main peak at txmath7 (not shown) the magnitude of the shift follows the spin glass like behavior xcite but the magnitude of the ac susceptibility signal is orders of magnitude larger than for the usual spin glass material we speculate that the ferromagnetic clusters randomly distributed and oriented in the matrix could impose a frustration on the surrounding matrix giving rise to the spin glass behavior and the observed frequency dependence we have measured the butterfly hysteresis for the ru1222 material and have found that it consists of two components the first one comes from the already observed ferromagnetic like behavior of the compound and is represented by the coercive field peak the second component is responsible for the onset of the inverted peak in the butterfly hysteresis formed after the dc field reaches its maximum value and starts reducing the inverted part disappears as the temperature is lowered while the coercive maximum just shifts to larger values an exponential dependence of the inverted peak on the maximum dc field has been observed but its microscopic origin is not yet well understood an interaction between the ferromagnetic clusters and the ordered matrix embedding the clusters has been proposed to account for the observed behavior we thank prof i felner for providing us with samples and giving valuable comments

awana et al j appl phys 2005 10b111
j w lynn et al phys rev b 61 2000 r14964
g v m williams and s krämer phys rev b 61 2000 6401
m požek et al phys rev b 61 2000 r14964
i živković et al europhys lett 60 2002 917
h sakai et al phys rev b 67 2003 184409
y y xue et al phys rev b 67 2003 224511
y y xue et al phys rev b 67 2003 184507
k yoshida h kojima and h shimizu j phys soc jpn 72 2003 3254
i felner et al phys rev b 70 2004 094504
i felner e galstyan and i nowik phys rev b 71 2005 064510
a shengelaya et al phys rev b 69 2004 024517
c a cardoso et al phys rev b 67 2003 020407(r)
i živković et al phys rev b 65 2002 144420
f h salas and d weller j magn magn mater 128 209 1993 and references therein
j a mydosh spin glasses an experimental introduction taylor and francis london 1993
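the exponential saturation of the inverted peak position with the maximum dc field discussed above can be illustrated by a simple least squares fit the functional form used here a saturating exponential with a saturation field and a characteristic field is only an assumed example the actual expression is the one denoted xmath22 in the text and the routine below is ours rather than the authors analysis code

    import numpy as np
    from scipy.optimize import curve_fit

    def saturating_exponential(h_max, h_sat, h_char):
        # assumed exponential saturation law: linear growth at small fields,
        # approaching h_sat once h_max greatly exceeds the characteristic field
        return h_sat * (1.0 - np.exp(-h_max / h_char))

    def fit_inverted_peak(h_max_oe, h_peak_oe):
        # least squares fit of the inverted peak position versus the maximum
        # dc field; returns the saturation field and the characteristic field
        p0 = (h_peak_oe[-1], np.mean(h_max_oe))  # rough initial guess
        popt, _ = curve_fit(saturating_exponential, h_max_oe, h_peak_oe, p0=p0)
        return popt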
we report detailed studies of the ac susceptibility butterfly hysteresis of the ru1222 ruthenocuprate compounds two separate contributions to these hysteresis loops have been identified and studied one contribution is ferromagnetic like and is characterized by the coercive field maximum the other contribution represented by the so called inverted maximum is related to the unusual inverted loops a unique feature of the ru1222 butterfly hysteresis the different nature of the two identified magnetic contributions is demonstrated by their different temperature dependences on lowering the temperature the inverted peak gradually disappears while the coercive field slowly rises if the maximum dc field for the hysteresis is increased the size of the inverted part of the butterfly hysteresis grows monotonically while the position of the peak saturates the approach to saturation follows an exponential field dependence at t 78 k the saturation field is 42 oe
introduction, experimental details, experimental results, discussion, conclusion, acknowledgments
the origin and nature of the dark energy xcite is one of the most difficult challenges facing physicists and cosmologists now among all the proposed models to tackle this problem a scalar field is perhaps the most popular one up to now the scalar field denoted by xmath1 might only interact with other matter species through gravity or have a coupling to normal matter and therefore producing a fifth force on matter particles this latter idea has seen a lot of interests in recent years in the light that such a coupling could potentially alleviate the coincidence problem of dark energy xcite and that it is commonly predicted by low energy effective theories from a fundamental theory nevertheless if there is a coupling between the scalar field and baryonic particles then stringent experimental constraints might be placed on the fifth force on the latter provided that the scalar field mass is very light which is needed for the dark energy such constraints severely limit the viable parameter space of the model different ways out of the problem have been proposed of which the simplest one is to have the scalar field coupling to dark matter only but not to standard model particles therefore evading those constraints entirely this is certainly possible especially because both dark matter and dark energy are unknown to us and they may well have a common origin another interesting possibility is to have the chameleon mechanism xcite by virtue of which the scalar field acquires a large mass in high density regions and thus the fifth force becomes undetectablly short ranged and so also evades the constraints study of the cosmological effect of a chameleon scalar field shows that the fifth force is so short ranged that it has negligible effect in the large scale structure formation xcite for certain choices of the scalar field potential but it is possible that the scalar field has a large enough mass in the solar system to pass any constraints and at the same time has a low enough mass thus long range forces on cosmological scales producing interesting phenomenon in the structure formation this is the case of some xmath2 gravity models xcite which survives solar system tests thanks again to the chameleon effect xcite note that the xmath2 gravity model is mathematically equivalent to a scalar field model with matter coupling no matter whether the scalar field couples with dark matter only or with all matter species it is of general interests to study its effects in cosmology especially in the large scale structure formation indeed at the linear perturbation level there have been a lot of studies about the coupled scalar field and xmath2 gravity models which enable us to have a much clearer picture about their behaviors now but linear perturbation studies do not conclude the whole story because it is well known that the matter distribution at late times becomes nonlinear making the behavior of the scalar field more complex and the linear analysis insufficient to produce accurate results to confront with observations for the latter purpose the best way is to perform full xmath0body simulations xcite to evolve the individual particles step by step xmath0body simulations for scalar field and relevant models have been performed before xcite for example in xcite the simulation is about a specific coupled scalar field model this study however does not obtain a full solution to the spatial configuration of the scalar field but instead simplifies the simulation by assuming that the scalar field s effect is to 
change the value of the gravitational constant and presenting an justifying argument for such an approximation as discussed in xcite this approximation is only good in certain parameter spaces and for certain choices of the scalar field potential and therefore full simulations tare needed to study the scalar field behaviour more rigorously recently there have also appeared xmath0body simulations of the xmath2 gravity model xcite which do solve the scalar degree of freedom explicitly however embedded in the xmath2 framework there are some limitations in the generality of these works as a first thing xmath2 gravity model no matter what the form xmath3 is only corresponds to the couple scalar field models for a specific value of coupling strength xcite second in xmath2 models the correction to standard general relativity is through the modification to the poisson equation and thus to the gravitational potential as a whole xcite while in the coupled scalar field models we could clearly separate the scalar fifth force from gravity and analyze the former directly xcite also in xmath2 models as well as the scalar tensor theories the coupling between matter and the scalar field is universal the same to dark matter and baryons while in the couple scalar field models it is straightforward to switch on off the coupling to baryons and study the effects on baryonic and dark matter clusterings respectively as we will do in this paper correspondingly the general framework of xmath0body simulations in coupled scalar field models could also handle the situation where the chameleon effect is absent andor scalar field only couples to dark matter and thus provide a testbed for possible violations of the weak equivalence principle in this paper we shall go beyond xcite and consider the case where the chameleon scalar field couples differently to different species of matter to be explicit we consider two matter species and let one of them have no coupling to the scalar field because it is commonly believed that normal baryons being observable in a variety of experiments should have extremely weak if any coupling to scalar fields we call the uncoupled matter species in our simulation baryons it is however reminded here that this matter species is not really baryonic in the sense that it does not experience normal baryonic interactions the inclusion of true baryons will make the investigation more complicated and is thus beyond the scope of the present work the paper is organized as follows in sect eqns we list the essential equations to be implemented in the xmath0body simulations and describe briefly the difference from normal lcdm simulations sect simu is the main body of the paper in which subsect simudetail gives the details about our simulations such as code description and parameter set up subsect simuresult displays some preliminary results for visualization such as baryon cdm distribution potential scalar field configuration and the correlation between the fifth force for cdm particles and gravity subsect simups quantifies the nonlinear matter power spectrum of our model especially the difference from lcdm results and the bias between cdm and baryons subsect simumf briefly describes the essential modifications one must bear in mind when identifying virialized halos from the simulation outputs and shows the mass functions for our models subsect simuprof we pick out two halos from our simulation box and analyzes their total internal profiles as well as their baryonic cdm density profiles we finally 
summarize in sect con in this section we first describe the method to simulate structure formation with two differently coupled matter species and the appropriate equations to be used those equations for a single matter species have been discussed in detail previously in xcite but the inclusion of different matter species requires further modifications and we list all these for completeness the lagrangian for our coupled scalar field model is xmath4 which contains the gravitational and scalar field kinetic terms together with the potential xmath8 the coupling function xmath9 multiplying the dark matter lagrangian and the lagrangian of the remaining matter species here xmath5 is the ricci scalar xmath6 with xmath7 newton s constant xmath1 is the scalar field xmath8 is its potential energy and xmath9 its coupling to dark matter which is assumed to be cold and described by the lagrangian xmath10 xmath11 includes all other matter species in particular our baryons the contribution from photons and neutrinos in the xmath0body simulations for late times ie xmath12 is negligible but should be included when generating the matter power spectrum from which the initial conditions for our xmath0body simulations are obtained see below the dark matter lagrangian for a point like particle with bare mass xmath13 is xmath14 where xmath15 is the coordinate and xmath16 is the coordinate of the centre of the particle from this equation it can be easily derived that xmath17 also because xmath18 where xmath19 is the four velocity of the dark matter particle the lagrangian could be rewritten as xmath20 which will be used below eq dmemtparticle is just the energy momentum tensor for a single dark matter particle for a fluid with many particles the energy momentum tensor will be xmath21 in which xmath22 is a volume microscopically large and macroscopically small and we have extended the 3dimensional xmath23 function to a 4dimensional one by adding a time component here xmath19 is the averaged 4velocity of the collection of particles inside this volume and is not necessarily the same as the xmath24velocity of the observer meanwhile using xmath25 it is straightforward to show that the energy momentum tensor for the scalar field is given by xmath26 so the total energy momentum tensor is xmath27 ie the scalar field part plus the dark matter part weighted by the coupling xmath9 plus that of the other matter species where xmath28 xmath29 is the energy momentum tensor for all other matter species including baryons and the einstein equation is xmath30 where xmath31 is the einstein tensor note that due to the coupling between the scalar field xmath1 and the dark matter the energy momentum tensors for either will not be conserved separately and we have xmath32 where throughout this paper we shall use a xmath33 to denote the derivative with respect to xmath1 finally the scalar field equation of motion eom from the given lagrangian is xmath34 where xmath35 using eq eq dmlagrangian2 it can be rewritten as xmath36 eqs eq emttot eq einsteineq eq dmenergyconservation and eq phieom summarize all the physics that will be used in our analysis we will consider a special form for the scalar field potential xmath37 where xmath38 and xmath39 are dimensionless constants while xmath40 has mass dimension four as has been discussed in xcite xmath41 to evade observational constraints and xmath39 can be set to xmath42 without loss of generality since we can always rescale xmath1 as we wish meanwhile the coupling between the scalar field and dark matter particles is chosen as xmath43 where xmath44 is yet another dimensionless constant characterizing the strength of the coupling as discussed in xcite the two dimensionless parameters xmath38 and xmath45 have clear
physical meanings roughly speaking xmath38 controls the time when the scalar field becomes important in cosmology while xmath45 determines how important the scalar field would ultimately be in fact the potential given in eq eq potential is partly motivated by the xmath2 cosmology xcite in which the extra degree of freedom behaves as a coupled scalar field in the einstein frame as we can see from eq eq potential the potential xmath46 when xmath47 while xmath48 when xmath49 in the latter case however xmath50 so that the effective total potential xmath51 has a global minimum at some finite xmath1 if the total potential xmath52 is steep enough around this minimum then the scalar field becomes very heavy and thus follows its minimum dynamically as is in the case of the chameleon cosmology see eg if xmath53 is not steep enough at the minimum however the scalar field will experience a more complicated evolution these two different cases can be obtained by choosing appropriate values of xmath45 and xmath38 if xmath45 is very large or xmath38 is small then we run into the former situation and if xmath45 is small and xmath38 is large we have the second in reality the situation can get even more complicated because when xmath45 which characterizes the coupling strength increases the cdm evolution could also get severely affected which in turn has back reactions on the scalar field itself the xmath0body simulation only probes the motion of particles at late times and we are not interested in extreme conditions such as black hole formation evolution which mean that taking the non relativistic limit of the above equations should be a sufficient approximation for our purpose the existence of the scalar field and its different couplings to matter particles lead to the following changes to the xmath54cdm model firstly the energy momentum tensor has a new piece of contribution from the scalar field secondly the energy density of dark matter in gravitational field equations is multiplied by the function xmath9 which is because the coupling to scalar field essentially renormalizes the mass of dark matter particles thirdly dark matter particles will not follow geodesics in their motions as in xmath54cdm but rather the total force on them has a contribution the fifth force from the exchange of scalar field quanta finally cdm particles must be distinguished from baryons so that the fifth force only acts on the former and these two species only interact gravitationally this last point is one main difference between the present work and a previous one xcite these imply that the following things need to be modified or added 1 the scalar field xmath1 equation of motion which determines the value of the scalar field at any given time and position 2 the poisson equation which determines the gravitational potential and thus gravity at any given time and position according to the local energy density and pressure which include the contribution from the scalar field as obtained from xmath1 equation of motion 3 the total force on the dark matter particles which is determined by the spatial configuration of xmath1 just like gravity is determined by the spatial configuration of the gravitational potential 4 the cdm and baryonic particles must be tagged respectively so that the code knows to assign forces correctly to different species we shall describe these one by one now for the scalar field equation of motion we denote xmath55 as the background value of xmath1 and xmath56 as the scalar field perturbation then eq eq 
phieom could be rewritten as xmath57 by subtracting the corresponding background equation from it here xmath58 is the covariant spatial derivative with respect to the physical coordinate xmath59 with xmath60 the conformal coordinate and xmath61 xmath58 is essentially the xmath62 but because here we are working in the weak field limit we approximate it as xmath63 by assuming a flat background the minus sign is because our metric convention is xmath64 instead of xmath65 for the simulation here we will also work in the quasi static limit assuming that the spatial gradient is much larger than the time derivative xmath66 which will be justified below thus the above equation can be further simplified as xmath67 in which xmath68 is with respect to the conformal coordinate xmath60 so that xmath69 and we have restored the factor xmath70 in front of xmath71 the xmath1 here and in the remainder of this paper is xmath72 times the xmath1 in the original lagrangian unless otherwise stated note that here xmath22 and xmath73 both have the dimension of mass density rather than energy density next look at the poisson equation which is obtained from the einstein equation in the weak field and slow motion limits here the metric could be written as xmath74 from which we find that the time time component of the ricci curvature tensor is xmath75 and then the einstein equation xmath76 gives xmath77 where xmath78 and xmath79 are respectively the total energy density and pressure the quantity xmath80 can be expressed in terms of the comoving coordinate xmath60 as xmath81 where we have defined a new newtonian potential xmath82 and used xmath83 thus xmath84 where in the second step we have used eq eq einsteineqn and the raychaudhuri equation and an overbar means the background value of a quantity because the energy momentum tensor for the scalar field is given by eq eq phiemt it is easy to show that xmath85 and so xmath86 whose source involves 4 pi g a^3 times the background cdm density multiplied by the coupling at the background field value the background baryon density and a term built from the background scalar field kinetic energy and potential now in this equation xmath87 is negligible in the quasi static limit and so could be dropped safely so we finally have xmath88 which in addition contains the baryon density perturbation term 4 pi g a^3 (rho_b minus its background value) and the scalar field potential perturbation term 8 pi g a^3 (v(xmath1) minus its background value) finally for the equation of motion of the dark matter particle consider eq eq dmenergyconservation using eqs eq dmemtparticle and eq dmlagrangian2 this can be reduced to xmath89 obviously the left hand side is the conventional geodesic equation and the right hand side is the new fifth force due to the coupling to the scalar field note that because xmath90 is the projection tensor that projects any 4tensor into the 3space perpendicular to xmath19 so xmath91 is the spatial derivative in the 3space of the observer and perpendicular to xmath92 consequently the fifth force xmath93 has no component parallel to xmath92 ie no time component indicating that the energy density of cdm will be conserved and only the particle trajectories are modified as mentioned in xcite remember that xmath92 in eq eq dmeom is the 4velocity of individual particles but from eq eq wfphieom we see that xmath94 is computed in the fundamental observer s frame where the density perturbation is calculated so if we also want to work with eq eq dmeom in the fundamental observer s frame so that we can use the xmath94 from eq eq wfphieom directly then we must rewrite eq eq dmeom by substituting xmath95 up to first order in perturbations in which xmath96 is the 4velocity of the fundamental observer
and xmath97 is the peculiar velocity of the particle then the first term in the above expression is the gradient of xmath94 observed by the fundamental observer rather than by an observer comoving with the particle and the second term is a velocity dependent acceleration xcite in xcite it is claimed that the second term is of considerable importance in our simulations however this term will be neglected from here on because it depends on xmath98 which is very small due to the chameleon nature of the model we have checked in a linear perturbation computation that removing this term only changes the matter power spectrum by less than 00001 now in the non relativistic limit the spatial components of eq eq dmeom can be written as xmath99 where xmath100 is the physical time coordinate if we instead use the comoving coordinate xmath60 then this becomes xmath101 where we have used eq eq newphi the canonical momentum conjugate to xmath60 is xmath102 so we now have xmath103 in which eq eq wfdpdtcomov is for cdm particles and eq eq wfdpdtcomovb is for baryons note that according to eq eq wfdpdtcomov the quantity xmath104 acts as a new piece of potential the potential for the fifth force this is an important observation and we will come back to it later when we calculate the escape velocity of cdm particles within a virialized halo eqs eq wfphieom eq wfpoisson eq wfdxdtcomov eq wfdpdtcomov and eq wfdpdtcomovb will be used in the code to evaluate the forces on the dark matter particles and evolve their positions and momenta in time in our numerical simulation we use a modified version of mlapm xcite see subsect simudetail and we will have to change our above equations in accordance with the internal units used in that code here we briefly summarize the main features the mlapm code uses the following internal units with subscript xmath105 xmath106 in which xmath107 is the present size of the simulation box and xmath108 is the present hubble constant and xmath109 with a subscript can represent the density of either cdm xmath110 or baryons xmath111 using these newly defined quantities it is easy to check that eqs eq wfdxdtcomov eq wfdpdtcomov eq wfpoisson and eq wfphieom could be rewritten as xmath112 the code unit poisson equation whose source combines the coupling weighted cdm density contrast multiplied by 3/2 omega_cdm the baryon density contrast multiplied by 3/2 omega_b and a term proportional to the difference between the scalar field potential and its background value and xmath113 where xmath114 is the present cdm fractional energy density we have again restored the factor xmath70 and again the xmath1 is xmath72 times the xmath1 in the original lagrangian note that in eq eq intdpdtcomov the term in the bracket on the right hand side only applies to cdm but not to baryons also note that from here on we shall use xmath115 unless otherwise stated for simplicity we also define xmath116 to be used below making a discretized version of the above equations for xmath0body simulations is a non trivial task for example the use of the variable xmath117 instead of xmath1 appendix appen discret helps to prevent xmath118 which is unphysical but numerically possible due to discretization we refer the interested readers to appendix appen discret for the whole treatment with which we can now proceed to do xmath0body runs subsect eqnnonrel the full name of mlapm is multi level adaptive particle mesh code as the name suggests this code uses multilevel grids xcite to accelerate the convergence of the nonlinear gauss seidel relaxation method xcite in solving boundary value partial differential equations but more than this the code is also adaptive always refining
the grid in regions where the mass particle density exceeds a certain threshold each refinement level form a finer grid which the particles will be then relinked onto and where the field equations will be solved with a smaller time step thus mlapm has two kinds of grids the domain grid which is fixed at the beginning of a simulation and refined grids which are generated according to the particle distribution and which are destroyed after a complete time step one benefit of such a setup is that in low density regions where the resolution requirement is not high less time steps are needed while the majority of computing sources could be used in those few high density regions where high resolution is needed to ensure precision some technical issues must be taken care of however for example once a refined grid is created the particles in that region will be linked onto it and densities on it are calculated then the coarse grid values of the gravitational potential are interpolated to obtain the corresponding values on the finer grid when the gauss seidel iteration is performed on refined grids the gravitational potential on the boundary nodesare kept constant and only those on the interior nodes are updated according to eq eq gs just to ensure consistency between coarse and refined grids this point is also important in the scalar field simulation because like the gravitational potential the scalar field value is also evaluated on and communicated between multi grids note in particular that different boundary conditions lead to different solutions to the scalar field equation of motion in our simulation the domain grid the finest grid that is not a refined grid has xmath119 nodes and there are a ladder of coarser grids with xmath120 xmath121 xmath122 xmath123 xmath124 nodes respectively these grids are used for the multi grid acceleration of convergence for the gauss seidel relaxation method the convergence rate is high upon the first several iterations but quickly becomes very slow then this is because the convergence is only efficient for the high frequency short range fourier modes while for low frequency long range modes more iterations just do not help much to accelerate the solution process one then switches to the next coarser grid for which the low frequency modes of the finer grid are actually high frequency ones and thus converge fast the mlapm solver adopts the self adaptive scheme if convergence is achieved on a grid then interpolate the relevant quantities back to the finer grid provided that the latter is not on the refinements and solve the equation there again if convergence becomes slow on a grid then go to the next coarser grid this way it goes indefinitely except when converged solution on the domain grid is obtained or when one arrives at the coarsest grid normally with xmath125 nodes on which the equations can be solved exactly using other techniques for our scalar field model the equations are difficult to solve anyway and so we truncate the coarser grid series at the xmath124node one on which we simply iterate until convergence is achieved furthermore we find that with the self adaptive scheme in certain regimes the nonlinear gs solver tends to fall into oscillations between coarser and finer grids to avoid such situations we then use v cycle xcite instead for the refined gridsthe method is different here one just iterate eq eq gs until convergence without resorting to coarser grids for acceleration as is normal in the gauss seidel relaxation method convergence is deemed 
to be achieved when the numerical solution xmath126 after xmath127 iterations on grid xmath128 satisfies that the norm xmath129 mean or maximum value on a grid of the residual xmath130 is smaller than the norm of the truncation error xmath131endaligned by a certain amount or in the v cycle case the reduction of residual after a full cycle becomes smaller than a predefined threshold indeed the former is satisfied whenever the latter is note here xmath132 is the discretization of the differential operator eq eq diffop on grid xmath128 and xmath133 a similar discretization on grid xmath134 xmath135 is the source term xmath136 is the restriction operator to interpolate values from the grid xmath128 to the grid xmath134 in the modified codewe have used the full weighting restriction for xmath136 correspondingly there is a prolongation operator xmath137 to obtain values from grid xmath134 to grid xmath128 and we use a bilinear interpolation for it for more details see xcite mlapm calculates the gravitational forces on particles by centered difference of the potential xmath138 and propagate the forces to locations of particles by the so called triangular shaped cloud tsc scheme to ensure momentum conservation on all grids the tsc scheme is also used in the density assignment given the particle distribution the main modifications to the mlapm code for our model are 1 we have added a parallel solver for the scalar field based on eq eq uphieom the solver uses a nonlinear gauss seidel method and the same criterion for convergence as the linear gauss seidel poisson solver 2 the solved value of xmath117 is then used to calculate local mass density and thus the source term for the poisson equation which is solved using fast fourier transform 3 the fifth force is obtained by differentiating the xmath117 just like the calculation of gravity 4 the momenta and positions of particlesare then updated taking in account of both gravity and the fifth force there are a lot of additions and modifications to ensure smooth interface and the newly added data structures for the output as there are multilevel grids all of which host particles the composite grid is inhomogeneous and thus we choose to output the positions momenta of the particles plus the gravity fifth force and scalar field value at the positions of these particles we can of course easily read these data into the code calculate the corresponding quantities on each grid and output them if needed as mentioned above the most important difference of the present work from xcite is the inclusion of baryons the particles which do not couple to the scalar field the baryons do not contribute to the scalar field equation of motion and are not affected by the scalar fifth force at least directly so that it is important to make sure that they do not mess up the physics in the modified codewe distinguish baryons and cdm particles by tagging all of them we consider the situation where 20 of all matter particles are baryonic and 80 are cdm at the beginning of each simulation we loop over all particles and for each particle we generate a random number from a uniform distribution in xmath139 if this random number is less than 02 then we tag the particle as baryon and otherwise we tag it as cdm once these tags have been set up they will never been changed again and the code then determines whether the particle contributes to the scalar field evolution and feels the fifth force or not according to its tag all the simulations are started at the redshift xmath140 in 
principle modified initial conditions initial displacements and velocities of particles which is obtained given a linear matter power spectrum need to be generated for the coupled scalar field model because the zeldovich approximation xcite is also affected by the scalar field coupling xcite in practice however we have found in our linear perturbation calculation xcite that the effect on the linear matter power spectrum is negligible xmath141 for our choices of parameters xmath142 another way to see that the scalar field has really negligible effects on the matter power spectrum at early times is to look at fig fig figure2 below which shows that at those times the fifth force is just much weaker than gravity and therefore its impact ignorable considering these we simply use the xmath54cdm initial displacements velocities for the particles in these simulations which are generated using grafic2 xcite the physical parameters we use in the simulations are as follows the present day dark energy fractional energy density xmath143 and xmath144 xmath145 km s mpc xmath146 xmath147 the simulation box has a size of xmath148 mpc where xmath149 we simulate 4 models with parameters xmath150 equal to xmath151 xmath152 xmath153 and xmath154 respectively such parameters are chosen so that the deviation from xmath54cdm will be neither too small to be distinguishable or too large to be realistic for each modelwe make 5 runs with exactly the same initial condition but different seeds in generating the random number to tag baryons and cdm particles all the 4 models use the same 5 seeds so that results can be directly compared we hope the average of the results from 5 runs could reduce the scatter in all those simulations the mass resolution is xmath155 the particle number is xmath156 the domain grid is a xmath157 cubic and the finest refined grids have 16384 cells on one side corresponding to a force resolution of xmath158kpc we also make a run for the xmath54cdm model using the same parameters except for xmath159 which are not needed now and initial condition in table tab table1 we have listed some of the main results for the 20 runs we have made from which we could obtain some rough idea how the motions of baryons and cdm particles differ from each other we see that for the model xmath160 the cdm particles could be up to xmath161 times faster than baryons thanks to the enhancement by the fifth force we will come back to this point later when we argue for the necessity of a modified strategy of identifying virialized halos colsoptionsheader in fig fig figure1 we have shown some snapshots of the distribution of baryonic and cdm particles to give some idea about the hierachical structure formation and for comparisons with other figures below it shows clearly how some clustering objects develop with filaments connecting them together the baryons roughly follow the clustering of cdm particles but in some low density regions they become slightly separated to understand how the motion of the particles is altered by the coupling to the scalar field in fig fig figure2 we have shown the correlation between the magnitudes of the fifth force and gravity on the cdm particles remember that baryons do not feel the fifth force ref xcite has made a detailed qualitatively analysis about the general trend of this correction and here we just give a brief description from eqs eq intpoisson eq intphieom we could see that when the scalar field potential ie the last term of eqs eq intpoisson eq intphieom could be neglected then the 
scalar field xmath1 is simply proportional to the gravitational potential xmath138 and as a result eq eq intdpdtcomov tells us that the strength of the fifth force is just xmath162 times that of gravity in other words the effect of the scalar field is a rescaling of the gravitational constant by xmath163 this is because in this situation the effective mass of the scalar field which is given by xmath164 where xmath165 is the effective total potential is light and the fifth force is long range like gravity for comparison in fig fig figure2 we also plot this xmath162 proportion between the two forces as a straight line xmath166 where xmath167 denote respectively the magnitudes of the fifth force and gravity and the factor 08 in front of xmath168 comes from the fact that only 80 of the particles are cdm and thus contribute to the fifth force this scaling relation actually sets an upper limit on how strong the fifth force could be relative to gravity should it not be suppressed by other effects in contrast when the value of xmath1 is small the last term of eqs eq intpoisson eq intphieom is not negligible and the scalar field acquires a heavy mass making it short ranged as a result a particle outside a high density region might not feel the fifth force exerted by particles in that region even it is quite close to the region but because it can feel gravity from that region so the total fifth force on the particle becomes less than the xmath162 scaling in general the value of xmath1 is determined by xmath169 as well as its background value xmath55 which sets the boundary condition to solve the interior value at early timesxmath55 is very close to 0 and xmath73 is high everywhere making xmath1 small everywhere too and suppressing the fifth force so that it is significantly below the xmath162scaling first row of fig fig figure2 at later times xmath55 increases and xmath73 decreases weakening the above effect so that the fifth force becomes saturated ie approaches the xmath170 prediction and the points in the figure hit the straight lines last two rows because decreasing xmath38 and increasing xmath45 have the same effects of making xmath1 small in the models with xmath171 the fifth force saturates later than in the models with xmath172 in addition because high xmath73 tends to decrease xmath1 and increase the scalar field mass so in high density regions where gravity is stronger the fifth force also saturates later the agreement between the numerical solution of the fifth force and the xmath162scaling relation in cases of weak chameleon effect serves as an independent check of our numerical code fig figure3 plots the spatial configuration for the gravitational potential at the same position and output times as in fig fig figure1 as expected the potential is significantly deeper where there is significant clustering of matter cf fig figure1 we also show in fig fig figure4 the spatial configuration for the scalar field xmath1 at the same output position and times at early timeswhen xmath1 is small and the scalar field mass is heavy the fifth force is so short ranged that xmath1 only depends on the local density this means that the spatial configuration of xmath1 in this situation could well reflect the underlying dark matter distribution a fact which could be seen clearly in the first row as time passes by the mass of the scalar field decreases on average and xmath55 increases the value of xmath1 at one point is more and more influenced by the matter distribution in neighboring regions and such an 
averaging effect weakens the contrast and makes the plots blurring last two rows furthermore obviously decreasing xmath38 and increasing xmath45 could increase the scalar field s mass shorten the range of the fifth force make xmath1 less dependent on its value in neighboring regions and thus strengthen the contrast in the figures to have a more quantitative description about how the matter clustering property is modified with respect to the xmath54cdm model we consider the matter power spectra xmath173 in our simulation boxes the nonlinear matter power spectrum in the present work is measured using powmes xcite which is a public available code based on the taylor expansion of trigonometric functions and yields fourier modes from a number of fast fourier transforms controlled by the order of the expansion we also average the results from the 5 runs for each model and calculate the variance in fig fig figure5 shown are the fractional differences between the xmath173 for our 4 models and for xmath54cdm at two different output times at early times xmath174 upper solid curves in each panel the difference is generally small but still the 2 models with xmath172 show up to xmath175 deviation from xmath54cdm prediction this is because for larger xmath38 the scalar field is lighter and the fifth force less suppressed its influence in the structure formation therefore enhanced notice that on small scales the deviation from xmath54cdm decreases which is a desirable property of chameleon models which are designed to suppress the fifth force on small scale high density regions the lower solid curves in each panel of fig fig figure5 display the same quantities at xmath176 late times we can see the trend of increasing deviation from xmath54cdm for all 4 models because fifth force is essentially unsuppressed at the late epoch cf fig figure2 for example the deviation of the model xmath177 is significantly larger than that of the model xmath178 as navely expected thanks to the lack of suppression of fifth force in both models the xmath179 model obviously has a smaller saturated fifth force for comparisonwe also plot the xmath180 that is predicted by the linear perturbation theory for the 4 models under consideration the dashed curves as can be seen there at large scales small xmath128 where linear perturbation is considered as a good approximation the linear and nonlinear results agree pretty well especially for the xmath174 case the largest scale we can probe is limited by the size of our simulation boxes xmath181 and as a result we can not make plot beyond the point xmath182 where nonlinearity is expected to first become significant however in the case of xmath176 we can see the clear trend of the linear and nonlinear results merging towards xmath183 at vanishing xmath180 similar results can be found in fig 2 of xcite for fr gravity because we have two species of matter particles one uncoupled to the scalar field we are also interested in their respect power spectrum and the bias between them these are displayed in fig fig figure6 the results could be understood easily because a cdm particle always feel stronger total force than a baryon at the same position so the clustering of the former is identically stronger than the latter as well this could result in a significant bias between these two species at the present time especially for the models with xmath172 where the fifth force is less suppressed we identify halos in our xmath0body simulations using mhf mlapm halo finder xcite which is the default halo 
finder for mlapm mhf optimally utilizes the refinement structure of the simulation grids to pin down the regions where potential halos reside and organize the refinement hierarchy into a tree structure because mlapm refines grids according to the particle density on them so the boundaries of the refinements are simply isodensity contours mhf collect the particles within these isodensity contours as well as some particles outside it then performs the following operations i assuming spherical symmetry of the halo calculate the escape velocity xmath184 at the position of each particle ii if the velocity of the particle exceeds xmath184 then it does not belong to the virialized halo and is removed i and ii are then iterated until all unbound particles are removed from the halo or the number of particles in the halo falls below a pre defined threshold which is 20 in our simulations note that the removal of unbound particles is not used in some halo finders using the spherical overdensity so algorithm which includes the particles in the halo as long as they are within the radius of a virial density contrast another advantage of mhf is that it does not require a pre defined linking length in finding halos such as the friend of friend procedure when it comes to our coupled scalar field model the mhf algorithm also needs to be modified the reason is that as we mentioned above the scalar field xmath1 behaves as an extra potential which produces the fifth force and so the cdm particles experience a deeper total gravitational potential than what they do in the xmath54cdm model consequently the escape velocity for cdm particles increases compared with the newtonian prediction this is indeed important to bear in mind because as we have seen in table tab table1 the cdm particles are typically much faster than what they are in the xmath54cdm simulation and so if we underestimate xmath184 then some particles which should have remained in the halo would be incorrectly removed by mhf in general similar things should be taken care of in other theories involving modifications to gravity in the non relativistic limit such as mond and xmath2 gravity an exact calculation of the escape velocity in the coupled scalar field model is obviously difficult due to the complicated behaviour of the scalar field and thus here we introduce an approximated algorithm which is based on the mhf default method xcite to estimate it mhf works out xmath184 using the newtonian result xmath185 in which xmath138 is the gravitational potential under the assumption of spherical symmetry the poisson equation xmath186 could be integrated once to give xmath187 which is just the newtonian force law this equation can be integrated once again to obtain xmath188 where xmath189 is an integration constant and can be fixed xcite by requiring that xmath190 as xmath191 in which xmath192 is the virial radius of the halo and xmath193 is the mass enclosed in xmath192 when the fifth force acts on cdm particles the force law eq eq ahfnewtonlaw is modified and these particles feel a larger total gravitational potential to take this into account we need to have the knowledge about how the force law is modified and a simple rescaling of gravitational constant xmath7 has been shown to be not physical in certain regimes to solve the problem we notice that in the mhf code eq eq ahfnewtonlaw is used in the numerical integrations to obtain both xmath194 and xmath189 cf eqseq ahfphi eq ahfphi0 more explicitly the code loops over all particles in the halo in 
ascending order of the distance from the halo centre and when a particle is encountered its mass is uniformly distributed into the spherical shell between the particle and its previous particle the thickness of the shell is then the xmath195 of the integration obviously when the fifth force is added into eq eq ahfnewtonlaw we could use the same method to compute the total gravitational potential which includes the contribution from xmath1 call this contribution xmath196 because xmath197 so from eq eq wfdpdtcomov we can easily see xmath198 but now there is a subtlety here not all particles are cdm while only cdm particles contribute to xmath196 so in the modified mhf code we calculate xmath199 and xmath200 separately using all particles for the former and only cdm for the latter finally xmath201 is our estimate of the escape velocity because we have recorded the components of gravity and the fifth force for each particle in the simulation so the fifth force to gravity ratio can be computed at the position of each particle which is approximated to be xmath202 at that position in this way we have at least approximately taken into account the environment dependence of the fifth force and thus of xmath196 to see how our modification of the mhf code affects the final result on the mass function in fig fig figure7 we compare the mass functions for the model xmath177 calculated using three methods the modified mhf the original mhf by default for xmath54cdm simulations and another modified mhf in which we set xmath184 to be very large so that no particles will ever escape from any halo this is similar to the spherical overdensity algorithm mentioned above it could be seen that the difference between these methods can be up to a few percent for small halos where the potential is shallow particle number is small and the result is sensitive to whether more particles are removed as expected the second method gives least halos because there xmath184 is smallest and many particles are removed while the third method gives most halos because no particles are removed at all fig figure8 displays the mass functions for the 4 models as compared to the xmath54cdm result as expected the fifth force enhances the structure formation and thus produces more halos in the simulation box as our simulations have reached a force resolution of xmath203 kpc while the typical size of the large halos in the simulations is of order mpc we could ask what the internal profiles of the halos look like and how they have been modified by the coupling between cdm particles and the scalar field we have selected 2 typical halos from each simulation halo i is centred on xmath204 mpc which is slightly different for different simulations and has a virial mass xmath205 halo ii is centred on xmath206 mpc which is also slightly different for different simulations and has a virial mass xmath207 note here that the virial masses are for the xmath54cdm simulations and for scalar field simulations they can be and generally are slightly different the largest halos such as halo i generally reside in the higher density regions where the scalar field has a heavier mass and shows stronger chameleon effect consequently the fifth force inside them is severely suppressed so that we can expect small deviation from the xmath54cdm halo profile on the other hand the intermediate and small halos such as halo ii are mostly in relatively low density regions in which scalar field has a light mass and the fifth force tends to saturate this means that they should 
generally have a higher internal density than the same halos in the xmath54cdm simulation due to the enhanced growth by the fifth force this above analysis is qualitatively confirmed by fig fig figure9 where we can see that for the cases of xmath171 stronger chameleon the difference between the predictions of the coupled scalar field models and the xmath54cdm paradigm is really small for halo i furthermore this figure also shows some new and more interesting features for the cases of xmath172 considering halo i in the models xmath178 and xmath177 for example in the former case the coupled scalar field model produces an obviously consistent higher internal density profile than xmath54cdm from the inner to the outer regions of the halo while for the latter case the density profile of the coupled scalar field model is lower in the inner region but higher in the outer region we plan to make a detailed analysis of the complexities arising here regarding the effects of a coupled scalar field on the internal density profiles for halos in a future work and in this work we will only give a brief explanation for the new feature observed above fig fig figure11 shows the distributions of particle velocities and fifth force to gravity ratio at the positions of the particles in halo i for the two models xmath178 and xmath177 as we can see there for the model xmath177 the large value of xmath45 makes the chameleon effect strong so that the fifth force is generally much smaller than gravity in magnitude while at the same time the velocities of particles are more concentrated towards the high end implying a significantly higher mean speed than xmath54cdm as we have checked numerically as a result in the central region of the halo the particles have higher kinetic energy than in xmath54cdm but the potential is not significantly deeper so that particles tend to get far away from the centre of the halo producing a lower inner density profile as for the model with xmath178 the situation is just the opposite the fifth force is saturated and of the same order as gravity due to the weak chameleon effect so that the total potential is greatly deeper than its xmath54cdm counterpart while at the same time the particles are not as fast as those in xmath177 the consequence is that the halo accretes more particles towards its centre one might also have interests in how the cdm part and the baryonic part of the halo profile differ from each other and this is given in fig fig figure10 in which we compare the cdm and baryon density profiles of halos i and ii obviously the cdm density is higher than baryons everywhere again thanks to the boost from the fifth force for smaller xmath38 and larger xmath45 which help strengthen the chameleon effect and suppress the fifth force the difference between the two is smaller compare the halo i in the models xmath178 and xmath208 however in the situations where the fifth force is already saturated or unsuppressed such as in halo ii increasing xmath45 increases the saturated value of the fifth force and thus can instead magnify the difference compare the halo ii in the models xmath178 and xmath208 the above results again show the complexity of the chameleon scalar field model as compared to other coupled scalar field models we will study the environment and epoch dependence of the halo density profiles in a upcoming work to summarize in this paper we have investigated into a model where cdm and baryons couple differently to a chameleon like scalar field and performed full xmath0body 
simulations by directly solving the spatial distribution of the scalar field to study the nonlinear structure formation under this setup the new complexity introduced here compared to xcite is that we must distinguish between baryons and cdm so that we know how to calculate the force upon each particle we do this by tagging the initial almost homogeneously distributed particle randomly so that 80 of all particles are tagged as cdm we then only use cdm particles to calculate the scalar field and only applies the fifth scalar force on cdm particles the coupling function characterized by the coupling strength xmath45 and bare potential characterized by the parameter xmath38 which controls its steepness of the scalar field are chosen to be the same as in xcite as discussed there the coupling of the scalar field to cdm particles acts on the latter a fifth force when xmath38 is small and xmath45 is large the chameleon effect becomes stronger which gives the scalar field a heavy mass making the fifth force short ranged and the scalar field dependent only on the local matter density other ways to strengthen the chameleon effect includes increasing the density and decreasing the background value of the scalar field which itself is equivalent to increasing the background cdm energy density we have displayed in figs fig figure2 fig figure4 how changing the determining factors change the scalar field configuration and the strength of the fifth force we have also measured the nonlinear matter power spectrum from the simulation results and compared them with the xmath54cdm prediction depending on the values of xmath142 as well as the background cdm density the former can be up to xmath209 larger than the latter nonetheless when the chameleon effect is set to be strong the deviation gets suppressed and in particular decreases towards small scales showing the desirable property of chameleon models that they evade constraints on small scales the bias between cdm and baryons power spectra follows the same trend to identify virialized halos from the simulations we have modified mhf mlapm s default halo finder so that the calculation of the escape velocity includes the effect of the scalar field we find that such a modification leads to up to a few percent enhancement on the mass function compared with what is obtained using the default mhf code because in the latter case the escape velocity is underestimated and some particles are incorrectly removed from the virialized halos we find that the mass function in the coupled scalar field models is significantly larger than the xmath54cdm result because of the enhanced structure growth induced by the fifth force finally we have analyzed the internal profiles of the same two halos selected from each simulation we find that when chameleon effect is strong and fifth force is suppressed our result is very close to the xmath54cdm prediction this is the case for very large halos which generally reside in higher density regions and xmath171 for large halos and xmath172 the situation is more complicated because the competition between two effects of the coupled scalar field namely the speedup of the particles and the deepening of the total attractive potential has arrived a critical point if the former wins such as in the model xmath177 then the inner density profile can be lower than that in xmath54cdm if the latter wins such as in the model xmath177 then we expect the opposite for smaller halos which locate in lower density regions the fifth force is not suppressed 
that much and causes faster growth of the structure so the halo is more concentrated and has a higher internal density meanwhile the bias between baryons and cdm density profiles also increases as the fifth force becomes less suppressed which is as expected our results already show that new features can be quantitatively studied with the xmath0body method and the improving supercomputing techniques and that the chameleon model has rather different consequences from other coupled scalar field models the enhancement in the structure formation due to the fifth force is significant for some of our parameter space which means other observables such as weak lensing could place new constraints on the model also one might be interested in what the halo profiles would look like at different epochs of the cosmological evolution and in different environments these will be left as future work the work described here has been performed under the hpc europa project with the support of the european community research infrastructure action under the fp8 structuring the european research area programme the xmath0body simulations are performed on the huygens supercomputer in the netherlands and the post processing of data is performed on cosmos the uk national cosmology supercomputer we thank john barrow kazuya koyama andrea maccio and gong bo zhao for helpful discussions relevant to this work and lin jia for assistance in plotting the figures b li is supported by a research fellowship in applied mathematics at queens college university of cambridge and the stfc rolling grant in damtp in the mlapm code the partial differential equation eq eq intpoisson is solved on discretized grid points and in our modified code eq eq intphieom will also be solved on discretized grid points as such we must develop the discretized versions of eqs eq intdxdtcomov eq intphieom to be implemented into the code but before going on to the discretization we need to address a technical issue as the potential is highly nonlinear in the high density regime the value of the scalar field xmath210 will be very close to 0 and this is potentially a disaster because during the numerical solution process the value of xmath210 might easily go into the forbidden region xmath118 xcite one way of solving this problem is to define xmath211 in which xmath212 is the background value of xmath213 as in xcite then the new variable xmath117 takes values in xmath214 so that xmath215 is positive definite which ensures that xmath216 however since there are already exponentials of xmath213 in the potential this substitution will result in terms involving xmath217 which could potentially magnify any numerical error in xmath117 instead we can define a new variable xmath117 according to xmath218 by this xmath117 still takes values in xmath214 xmath219 and thus xmath220 which ensures that xmath213 is positive definite in numerical solutions besides xmath221beta so that there will be no exponential of exponential terms and the only exponential is what we have for the potential itself xmath222 above
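as a concrete illustration of the positivity problem discussed above, the following minimal python sketch (not taken from the paper's code) shows why the field is evolved through a new variable rather than directly; it uses the simpler mapping xmath211 mentioned above, i.e. the scalar field written as its background value times an exponential of the new variable, whereas the actual implementation uses the second mapping xmath218 to avoid exponential of exponential terms; the numerical values below are purely illustrative

```python
import numpy as np

# minimal sketch (not the paper's code): illustrate why the scalar field is
# evolved through a new variable u rather than directly. the mapping
# phi = phibar * exp(u), the first option discussed in the text, keeps
# phi > 0 for every real u, so a relaxation step can never enter the
# forbidden region phi <= 0; the mapping actually used in the paper
# (chosen to avoid nested exponentials) is not reproduced here.

phibar = 1.0e-6          # assumed background value of the scalar field (illustrative)

def naive_update(phi, step):
    """direct update of phi; a large correction can overshoot below zero."""
    return phi - step

def reparametrized_update(phi, step):
    """update u = log(phi/phibar) instead; phi stays strictly positive."""
    u = np.log(phi / phibar)
    u_new = u - step / phi          # same first-order change (chain rule: dphi = phi * du)
    return phibar * np.exp(u_new)

phi0 = 2.0e-7
big_step = 5.0e-7                   # deliberately too large
print("naive:         ", naive_update(phi0, big_step))          # negative -> forbidden region
print("reparametrized:", reparametrized_update(phi0, big_step)) # small but still positive
```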
then the poisson equation becomes xmath223nonumber frac32omegamathrmbleftrhocmathrmb1right frac3omegav0a3left1left1eurightbetarightmu 3baromegava3 endaligned where we have defined xmath224 which is determined by background cosmology the quantity xmath225 is also determined solely by background cosmology these background quantities should not bother us here the scalar field eom becomes xmath226mu1nonumber 3gammaomegamathrmcdmegammasqrtkappabarvarphi frac3mubetaomegav0a3ebetasqrtkappabarvarphi left1ebetasqrtkappabarvarphirightm1endaligned in which we have used the fact that xmath227 and moved all terms depending only on background cosmology the source terms to the right hand side so in terms of the new variable xmath117 the set of equations used in the xmath0body code should be xmath228endaligned plus eqs eq upoisson eq uphieom these equations will ultimately be used in the code among them eqs eq upoisson eq udpdt will use the value of xmath117 while eq eq uphieom solves for xmath117 in order that these equations can be integrated into mlapm we need to discretize eq eq uphieom for the application of newton gauss seidel iterations to discretize eq eq uphieom let us define xmath229 the discretization involves writing down a discretion version of this equation on a uniform grid with grid spacing xmath230 suppose we require second order precision as is in the standard poisson solver of mlapm then xmath231 in one dimension can be written as xmath232 where a subscript xmath233 means that the quantity is evaluated on the xmath234th point of course the generalization to three dimensions is straightforward xmath240nonumber frac1h2leftbi jfrac12kui j1k ui j kleftbi jfrac12kbi jfrac12kright bi jfrac12kui j1krightnonumber frac1h2leftbi j kfrac12ui j k1 ui j kleftbi j kfrac12bi j kfrac12right bi j kfrac12ui j k1rightendaligned xmath242nonumber frac1h2leftbi jfrac12kui j1k ui j kleftbi jfrac12kbi jfrac12kright bi jfrac12kui j1krightnonumber frac1h2leftbi j kfrac12ui j k1 ui j kleftbi j kfrac12bi j kfrac12right bi j kfrac12ui j k1rightnonumber fraclefth0bright2ac2left3gammaomegamathrmcdmrhomathrmcdmc i j k left1eui j krightgamma frac3mubetaomegav0a3left1eui j krightbeta left1left1eui j krightbetarightmu1rightnonumber fraclefth0bright2ac2left 3gammaomegamathrmcdmegammasqrtkappabarvarphi frac3mubetaomegav0a3ebetasqrtkappabarvarphi left1ebetasqrtkappabarvarphirightmu1rightendaligned then the newton gauss seidel iteration says that we can obtain a new and often more accurate solution of xmath117 xmath243 using our knowledge about the old and less accurate solution xmath244 as xmath245 the old solution will be replaced by the new solution to xmath246 once the new solution is ready using the red black gauss seidel sweeping scheme note that xmath247nonumber frac12h2leftbi1j kbi1j kbi j1k bi j1kbi j k1bi j k16bi j krightnonumber fraclefth0bright2ac23gamma2omegamathrmcdmrhomathrmcdmc i j k left1eui j krightgammabi j knonumber fraclefth0bright2ac2 frac3mubeta2omegav0a3left1eui j krightbeta left1left1eui j krightbetarightmu1bi j k left1mu1fracleft1eui j krightbeta1left1eui j krightbetarightendaligned in principle if we start from a high redshift then the initial guess of xmath246 could be such that the initial value of xmath213 in all the space is equal to the background value xmath212 because anyway at this time we expect this to be approximately true for subsequent time steps we could use the solution for xmath246 at the previous time step as our initial guess if the time step is small enough then we do not expect 
xmath117 to change significantly between consecutive times so that such a guess will be good enough for the iteration to converge fast
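to make the newton gauss seidel iteration with red black sweeping described above more concrete, here is a minimal sketch that applies the same relaxation scheme to a toy nonlinear model equation on a small periodic grid; the grid size and the model equation laplacian(u) = exp(u) - rho are illustrative choices and are not the paper's discretized scalar field equation

```python
import numpy as np

# minimal sketch of a newton-gauss-seidel relaxation with red-black sweeping,
# applied to a toy nonlinear equation  laplacian(u) = exp(u) - rho  on a
# periodic 2d grid; the structure (loop over two colours, local newton step
# u <- u - L/(dL/du)) mirrors the scheme described in the text, but the
# equation itself is only a stand-in for the full discretized field equation

def newton_gauss_seidel(u, rho, h, n_sweeps=200):
    n = u.shape[0]
    for sweep in range(n_sweeps):
        for color in (0, 1):                      # red-black ordering
            for i in range(n):
                for j in range(n):
                    if (i + j) % 2 != color:
                        continue
                    nb = (u[(i + 1) % n, j] + u[(i - 1) % n, j] +
                          u[i, (j + 1) % n] + u[i, (j - 1) % n])
                    # residual L(u) and its derivative dL/du at this node
                    L  = (nb - 4.0 * u[i, j]) / h**2 - (np.exp(u[i, j]) - rho[i, j])
                    dL = -4.0 / h**2 - np.exp(u[i, j])
                    u[i, j] -= L / dL             # local newton step
    return u

n, h = 32, 1.0 / 32
rho = np.ones((n, n)); rho[n // 2, n // 2] = 50.0   # a toy overdensity
u = np.zeros((n, n))                                 # initial guess: background value
u = newton_gauss_seidel(u, rho, h)
print("u at the overdensity:", u[n // 2, n // 2])
```

in the same spirit as described above, a good initial guess (the background value at early times, the previous time step's solution later on) keeps the number of sweeps needed for convergence small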
in this paper we present the results of xmath0body simulations with a scalar field coupled differently to cold dark matter cdm and baryons the scalar field potential and coupling function are chosen such that the scalar field acquires a heavy mass in regions with high cdm density and thus behaves like a chameleon we focus on how the existence of the scalar field affects the formation of nonlinear large scale structure and how the different couplings of the scalar field to baryons and cdm particles lead to different distributions and evolutions for these two matter species both on large scales and inside virialized halos as expected the baryon cdm segregation increases in regions where the fifth force is strong while there is little segregation in dense regions we also introduce an approximation method to identify the virialized halos in coupled scalar field models which takes into account the scalar field coupling and which is easy to implement numerically it is found that the chameleon nature of the scalar field makes the internal density profiles of halos dependent on the environment in a very nontrivial way
introduction the equations simulation and results discussion and conclusion discretized equations for our @xmath0-body simulations
in 1983 thouless xcite proposed a simple pumping mechanism to produce even in the absence of an external bias a quantized electron current through a quantum conductor by an appropriate time dependent variation of the system parameters experimental realizations of quantum pumps using quantum dots qds were already reported in the early 90 s xcite more recently due to the technological advances in nano lithography and control such experiments have risen to a much higher sophistication level making it possible to pump electron xcite and spin xcite currents through open nanoscale conductors as well as through single and double qds xcite early theoretical investigations where devoted to the adiabatic pumping regime within the single particle approximation xcite this is well justified for experiments with open qds where interaction effects are believed to be weak xcite and the typical pumping parameters are slow with respect the characteristic transport time scales such as the electron dwell time xmath0 this time scale separation enormously simplifies the analysis of the two time evolution of the system within the adiabatic regime inelastic and dissipation xcite effects of currents generated by quantum pumps were analyzed furthermore issues like counting statistics xcite memory effects xcite and generalizations of charge pumping to adiabatic quantum spin pumps were also proposed and studied xcite non adiabatic pumping has been theoretically investigated within the single particle picture either by using keldysh non equilibrium green s functions negf with an optimal parametrization of the carrier operators inspired by bosonization studies xcite or by a flouquet analysis of the xmath1matrix obtained from the scattering approach xcite while the first approach renders complicated integro differential equations for the green s functions associated to the transport the second one gives a set of coupled equations for the flouquet operator it is worth to stress that in both cases the single particle picture is crucial to make the solution possible and it is well established that both methods are equivalent xcite several works have provided a quite satisfactory description of quantum pumping for weakly interacting systems in contrast the picture is not so clear for situations where interaction effects are important different approximation schemes have been proposed to deal with pumping in the presence of interactions and to address charging effects which are not accounted for in a mean field approximation typically two limiting regimes have been studied namely the one of small pumping frequencies xmath2 such that xmath3 adiabatic limit xcite and the one of very high frequencies xmath4 sudden or diabatic limit xcite nonadiabatic pumping is mainly studied as a side effect of photon assisted tunneling xcite where xmath4 unfortunately it is quite cumbersome to calculate corrections to these limit cases for instance the analysis of higher order corrections to the adiabatic approximation for the current gives neither simple nor insightful expressions xcite in addition to the theoretical interest a comprehensive approach bridging the limits of xmath4 and xmath5 has also a strong experimental motivation most current experimental realizations of quantum pumping deal with qds in the coulomb blockade regime and xmath6 this regime was recently approached from below by means of a diagrammatic real time transport theory with a summation to all orders in xmath2 xcite however the derivation implied the weak tunnel 
coupling limit whereas experiments xcite typically rely on tunnel coupling variations which include both weak and strong coupling to address the above mentioned issues and to account for the different time scalesinvolved it is natural to use a propagation method in the time domain xcite in this workwe express the current operator in terms of density matrices in the heisenberg representation we obtain the pumped current by truncating the resulting equations of motion for the many body problem the time dependence is treated exactly by means of an auxiliary mode expansion xcite this approach provides a quite amenable path to circumvent the usual difficulties of dealing with two time green s functions xcite moreover it has been successfully applied to systems coupled to bosonic reservoirs xcite and to the description of time dependent electron transport using generalized quantum master equations for the reduced density matrix xcite since the auxiliary mode expansion is well controlled xcite the accuracy of our method is determined solely by the level of approximation used to treat the many body problem the formalism we put forward is illustrated by the study of the charge pumped through a qd in the coulomb blockade regime by varying its resonance energy and couplings to the leads the external drive is parametrized by a single pulse whose duration and amplitude can be arbitrarily varied by doing so the formalism is capable to reproduce all known results of the adiabatic limit and to explore transient effects beyond this simple limit the paper is organized as follows in sec sec model we present the resonant level model as well the theoretical framework employed in our analysis in sec sec prop we introduce the general propagation scheme suitable to calculate the pumping current at the adiabatic regime and beyond it next in sec sec app we discuss few applications of the method finally in sec sec conclusion we present our conclusions the standard model to address electron transport through qds is the anderson interacting single resonance model coupled to two reservoirs one acting as a source and the other as a drain despite its simplicity the model provides a good description for coulomb blockade qds and for qds at the kondo regime where the electrons are strongly correlated in this paperwe address the coulomb blockade regime for qds whose typical line width xmath7 is much smaller than the qd mean level spacing xmath8 justifying the use of the anderson single resonance model in addition in the coulomb blockade regime xmath7 is much smaller than the resonance charging energy xmath9 the total hamiltonian is given by the usual threefold decomposition into a quantum dot hamiltonian xmath10 a hamiltonian xmath11 representing the leads and a coupling term xmath12 namely eq hamiltonian xmath13 the qd is modeled by a single level of energy xmath14 which can be occupied by spin up and spin down electrons which interact through a contact interaction of strength xmath9 the qd hamiltonian reads xmath15 where xmath16 xmath17 and xmath18 stand for electron number creation and annihilation operator for the respective spin state xmath19 in the dot the two reservoirs labeled as xmath20 left and xmath21 right are populated by non interacting electrons whose hamiltonian reads xmath22 where xmath23 and xmath24 stand for the electron creation and annihilation operators for the xmath25reservoir state xmath26 respectively the reservoir single particle energies have the general form xmath27 with the xmath28 accounting for 
a time dependent bias the stationary current due to a time dependent bias was already addressed several years ago xcite for pumping we take xmath29 as usual finally the coupling hamiltonian is given by xmath30 with xmath31 denoting the coupling matrix element between the qd and the reservoir xmath25 we are interested in the electronic current from reservoir xmath25 to the qd state xmath32 which can be obtained from current operator xmath33endaligned here and in the following we use units where the elementary charge xmath34 and the reduced planck constant xmath35 unless otherwise indicated to calculate xmath36 we use the following equations of motion which are obtained from the hamiltonian eqs by means of the heisenberg equation eq eomtheisenberg xmath37labeleq eomtheisenberg3endaligned analogous equations hold for xmath38 and xmath39 in the spirit of the scheme introduced by caroli and co workers xcite we assume an initially uncorrelated density operator of the combined system ie we set xmath40 for xmath41 further we apply the so called wide band limit xcite where the square of the tunneling element xmath42 is inversely proportional to the density of states xmath43 at energy xmath44 by means of the lead green function xcite xmath45 we can define the decay rate eq gammawbl xmath46 which becomes local in time in the wide band limit namely xmath47 in the following we replace the sum in eq by the expression involving the xmath8function in eq the equation of motion for the reservoir operators xmath48 eq is now readily integrated yielding eq eomtresop xmath49 where we have used the lead green functions eq and introduced xmath50 equations are used to rewrite eq as xmath51 hatcst notag sumlimitsalpha k talpha kt hatbalpha k st labeleq ceomendaligned here the wide band limit eq is employed to obtain the decay term proportional to xmath52 similarly we can rewrite eq as xmath53 mathrmigammat hatnbarstnotagendaligned hereagain the time integral of xmath54 is reduced to a decay width due to the wide band limit the expression for the time dependent current is given by the expectation value of the current operator xmath55 defined in eq as will become clear later on it is useful to write this expectation value as xmath56 with the current matrices of the first order xmath57 these current matrices are an essential ingredient of our propagation scheme which is based on finding equations of motion for xmath58 such equations have been derived starting from a negf formalism for non interacting electrons xcite exactly as for the operator equations above we can use xmath59 from eq and employ the wide band limit for the current matrices defined in eq this leads to the following decomposition eq negfdefpi1 xmath60 having derived all relevant equations of motion for the operators we can specify the respective equations for the two contributions xmath61 and xmath62 the term xmath61 is the simplest and is basically given by the equation of motion for xmath63 cf the corresponding equation for the occupation xmath64 reads xmath65 the above relation can be viewed as the charge conservation equation for the qd the rate by which the charge in the qd changes is equal to the total electronic currents the first term at the rhs of the equation can be interpreted as the current flowing into the qd whereas the second term gives the current flowing out since we do not consider a spin dependent driving or spin polarized initial states it is xmath66 this relation is not explicitly used in the derivation but is employed as a 
consistency check throughout the analysis the evaluation of xmath62 requires the solutions for both the lead operator xmath48 and the dot operator xmath18 using those we write xmath67 herewe have introduced the abbreviation xmath68 and used that xmath69 with xmath70 the fermi function describing the equilibrium occupation of lead xmath25 the last term in eq uses the auxiliary current matrices of the second order xmath71 which will be subject to further approximations in the following before we turn to the approximations we would like to briefly discuss the physical meaning of xmath72 the equation of motion for the two electron density matrix xmath73 reads xmath74 which follows from eq the two electron density matrix may be interpreted as the occupation of one quantum dot level under the condition that the other one is occupied the rate of change of this conditional occupation is consequently given by tunneling into and out of the respective dot state under the same condition the latter process is described by the first term on the rhs of eq the former process is governed by the auxiliary current matrices xmath72 which can be rewritten in the suggestive form xmath75 consequently the current matrices xmath72 describe the conditional current from reservoir xmath25 into the quantum dot level with spin xmath32 the simplest approximation to xmath72 consists in using the following factorization xmath76 inserting this expression into eq results in the following equation of motion xmath77 pialpha k st notag talpha kt falpha kendaligned this result is equivalent to the hartree fock approximation applied to the anderson model standard two electron green function xcite as any mean field approach it does not lead to a double resonance green function which is required to properly account for charging effects hence as it is well known a good description of the coulomb blockade regime requires going beyond this level of truncation in the equations of motion instead of factorizing xmath72 directly we proceed by deriving its equation of motion by means of eqs we get xmath78 phialpha k st sumlimitsalphak talpha kt leftlangle hatbdaggeralpha k st hatbalpha k st hatnbarst rightranglenotag sumlimitsalphak left talpha kt leftlangle hatbdaggeralpha k st hatcst hatbdaggeralpha k barst hatcbarst rightrangle talpha kt leftlangle hatbdaggeralpha k st hatcst cdaggerbarst hatbalpha k barst rightrangle right labeleq phieomendaligned note that the term proportional to xmath9 has only four operators in the expectation values because of xmath79 the approximation consists in neglecting matrix elements involving opposite spins which renders the following factorizations xmath80 this approximation for the density matrices is equivalent to the truncation scheme employed in the negf approach used for the study of coulomb blockade regime high temperature limit of the anderson model xcite as a result of the factorization we obtain the following compact equation of motion for the approximated second order current matrices xmath81 tildephialpha k st notag talpha kt falpha k nbarst labeleq pieomhubbardendaligned the equations of motion for xmath82 eq xmath83 eq with xmath72replaced by xmath84 and for xmath85 eq form a closed set of equations which can be solved by means of an auxiliary mode expansion discussed below the general idea of the auxiliary mode expansion consists in making use of a contour integration and the residue theorem to perform the energy integration for instance in eq to this end the fermi function is expanded in a 
sum over simple poles or auxiliary modes and the respective integrals are given as finite sums cf appendix sec appexp the transition to auxiliary modes denoted by the index xmath86 is facilitated by the following set of rules eq rules xmath87 which are derived in appendix sec appexp the first rule replaces the reservoir energy xmath44 by the complex pole xmath88 of the expansion cf the second rule replaces the fermi function by the respective weight which is the same for all auxiliary modes finally the third rule provides the actual expansion for the current matrices applying these rules the current matrices become xmath89 the equation of motion for the auxiliary matrix xmath90 is obtained from eq one arrives at xmath91 pialpha s p t nonumber frac1betatalphat u phialpha s p t labeleq auxeomtpiendaligned the equations of motion for the auxiliary matrices xmath92 are quite similar to those of eq namely xmath93 tildephialpha s p t nonumber frac1betatalphat nbarst labeleq auxeomtphiendaligned the solution of the above equations still requires a complete description of the population dynamics given by xmath82 the latter can be directly obtained from eq in terms of the current matrices xmath94this concludes the derivation of the auxiliary mode propagation scheme the set of equations to with initial conditions xmath95 xmath96 and xmath97 can be solved numerically using standard algorithms before the desired time dependence of the parameters xmath98 and xmath99 sets in the system has to be propagated until a steady state is reached in this way transient effects arising from the choice of the initial state are avoided for conveniencewe derive in appendix sec appstat the expressions for the stationary occupations which may also be used as initial values for xmath100 in this section we present two applications of the formalism developed above as shown below one of the interesting features of non adiabatic pumping is an increasing delay in the current response to the external drive with growing driving speed hence in distinction to the adiabatic limit the current caused by a train of pulses can show interesting transient effects whenever the pulse period is shorter than the system response time to better understand non adiabatic driving effects we focus our analysis on single pulses and vary the speed by which their shape is changed it is worth stressing that our propagation method does not possess restrictions on the time dependence of the system driving parameters in other words the external time dependent drive can be just a single pulse or a train of pulses it can also be either fast or slow as compared with the system internal time scales let us begin by discussing the current generated by a single gaussian voltage pulse changing the resonance energy as xmath101endaligned here xmath102 sets the pumping time scale we take xmath103 to be time independent and equal for both leads xmath104 since thereby xmath105 we will consider only xmath106 in the following figure fig singlea shows the time dependence of the resonance energy according to eq the two bottom panels show the instantaneous current xmath106 as a function of time for both the non interacting xmath107 and the interacting xmath108 case in the limit of large xmath102 we use as a check for our results an analytical expression for the pumped current xmath106 obtained for xmath107 within the adiabatic approximation xcite for different pulse lengths xmath102 and xmath107 and xmath109 parameters used xmath110 xmath111 xmath112 xmath113 and 
xmath114 number of auxiliary modes dots denote the adiabatic limit xcite here due to the l r symmetry there is no net charge flowing through the qd at any given time both leads pump the same amount of charge in or out in the driving scheme defined by eq the qd is initially nearly empty at xmath115 the resonance energy favors an almost full occupation for very slowpumping large xmath116 the current xmath106 depends only on the resonance energy xmath14 as the resonance dives into the fermi sea the qd is loaded with charge and the process is reversed as xmath14 starts increasing this is no longer true when the drive is faster and xmath116 decreases now one observes a retardation effect namely the xmath106 depends not only on the resonance position but also on driving speed for fast driving one needs to integrate xmath106 over times much longer than xmath102 to observe a vanishing net charge per pulse the pumped currents xmath117 characterize the time dependent electron response to the external drive however in most applications one is only interested in the charge pumped per cycle xmath118 or per pulse xmath119 in the latter case xmath119 is given as time integral over the current which we write in a symmetric way xmath120 one of the beautiful lessons learned form the investigation of adiabatic pumping establishes a proportionality relation between xmath119 and the area swapped by the time dependent driving forces in parameter space xcite in other words the total charge flowing through a qd per cycle or per pulse in a single parameter adiabatic pump vanishes due to the constraints of single parameter pumps in most applications at least two parameters are used xcite on the other hand by using a single gate modulation one can realize a constrained two parameter pump xcite which implies that the time dependence of the parameters is ultimately coupled due to the modulation of only a single gate voltage in the followingwe will investigate the implications of this scenario for non adiabatic pumping upper row and the decay widths xmath121 lower row blue full line and xmath122 lower row red broken line for three different cases a xmath123 b xmath124 and c xmath125 the dotted lines indicate the chemical potential in the reservoirs and the shaded area shows the times when the resonance energy is below the chemical potential specifically let us consider voltage pulses of the form xmath126 here xmath102 measures the characteristic pulse time whereas xmath8 governs the time the pulse sets in the numerical factor xmath127 ensures that xmath102 is the full width at half maximum of the pulse which simplifies the following discussion by tuning the delayone can conveniently switch between a single parameter xmath128 and a two parameter setup xmath129 further the time dependence of the resonance energy and the coupling strengths decay widths are chosen as xmath130 gammarm lt fracgamma02 left 1 st deltarm l rightendaligned this choice takes into account that the coupling strengths depend exponentially on the gate voltage xcite the constraint is imposed by setting xmath131 and the specific value of xmath132 for this driving parameterization the resonance and the decay widths are xmath133 and xmath134 respectively for both asymptotic limits of xmath135 in the following the parameters are taken as xmath136 xmath137 xmath138 and interaction energy either xmath107 or xmath109 in fig fig appscheme3 the time dependence of xmath139 and xmath140 is illustrated for three cases xmath124 and xmath141 as mentioned above in 
each casethe coupling to the right reservoir xmath122 follows the time dependence of the resonance energy when the latter attains its minimal value at xmath115 which brings the energy well below the chemical potential of the reservoirs the coupling to the right reservoir is minimal on the other hand the behavior of the coupling to the left reservoir can be influenced by the value of xmath132 for xmath123 the maximum of xmath121 comes before xmath115 while for xmath142 it is attained after xmath115 in the casexmath124 the coupling to the left reservoir is maximal simultaneously with xmath122 being minimal at xmath115 in the following the response to these drivings will be investigated vs pulse shift xmath132 in the long pulse limit upper panel and at xmath143 lower panel the dashed lines indicate half of the non interacting result knowing the time dependence shown in fig fig appscheme3 one can readily predict the behavior of xmath119 in the adiabatic limit in this case electron flow occurs when the resonance energy matches the chemical potential of the reservoirs in our pulse scheme xmath14 equals the chemical potential at xmath144 and xmath145 corresponding to the onset of charging and de charging of the qd respectively further the direction of the net current is determined by the difference of the couplings to the reservoirs at these very times for example for xmath123 one finds xmath146 while charging and xmath147 while de charging consequently the net current is directed from left to right and xmath119 is expected to be positive for xmath142 the situation is opposite and xmath119 should be finally for xmath124 the couplings are equal at both instants of time and the net current is vanishing these expectations are confirmed by our results for the adiabatic regime xmath148 and different values of xmath149 which are shown in fig fig appmonoqpphia as already mentioned one observes xmath150 for xmath124 monoparametric pumping as xmath151 begins to increase xmath152 increases as well in this scenario when the resonance energy matches the chemical potential electrons load the dot from the left or right and later they are unloaded to the right or left depending on the sign on xmath8 for larger values of xmath151 the left reservoir participates less in the loading or unloading of the qd and the charge per pulse vanishes accordingly for interaction strengthsxmath153 the double occupation of the qd is suppressed and consequently in the adiabatic regime xmath119 is half the value of xmath119 for the non interacting case the numerical results indicate that within the hubbard i approximation xmath108 does not introduce new time scales to the problem for xmath148 and its major effect is to correct the spin degeneracy factor in the equations for the xmath107 case none of the aforementioned features are observed in the non adiabatic pumping regime figure fig appmonoqpphib shows for example that for short pulses there is no simple relation between xmath119 for xmath107 and for xmath108 moreover compared to the adiabatic regime the charge per pulse can be substantially larger in this regime unfortunately the behavior of xmath119 in this regime is not as easily predicted in general since the evolution of the parameters xmath154 xmath121 xmath155 after the onset of loading and unloading has to be taken into account this is because in the non adiabatic regime the qd charging and de charging is delayed with respect to the external system changes as it was shown in sec sec comparison taking for example the 
case xmath124 one finds from fig fig appscheme3b that xmath146 while the resonance energy is below the chemical potential and charging occurs during the de charging when xmath156 one finds xmath157 consequently the current is expected to flow mainly from left to right which leads to a positive charge per cycle this is confirmed by the results shown in fig fig appmonoqpphib the quantitative behavior depends on the precise magnitude of the delay which is determined by the pulse length however from the analysis presented above and for sufficiently short pulses one concludes that xmath119 has to be positive independent of xmath132 the interesting implications of this result will be discussed at the end of this section finally in fig fig appmonoqp we summarize and corroborate the discussion of the non adiabatic pumping it shows the charge pumped due the pulse as a function of pulse length xmath116 in the non interacting xmath107 and the coulomb blockade regime xmath158 in the latter case xmath159 for all pulse lengths as discussed above the amount of pumped charge xmath119 depends very strongly on the value of xmath116 in the limit of large pulse lengths xmath119 approaches the respective adiabatic value while for xmath160 the charge per pulse vanishes moreover one finds that xmath119 is indeed positive for small pulse lengths this has the intriguing consequence that the charge per pulse can change its sign sweeping from short to long pulses this is shown in fig fig appmonoqpb for xmath125 where xmath119 is negative in the adiabatic regime a more general and quantitative analysis of this effect is certainly desirable but beyond the scope of this article it may lead however to interesting new applications it is also worth to mention that by changing the pumping parameters it is possible to optimize the charge pumped per pulse and in particular to find situations where xmath161 which may be very interesting for metrology purposes xcite we presented a new method to analyze non adiabatic charge pumping through single level quantum dots that takes into account coulomb interactions the method is based on calculating the time evolution for single electron density matrices the many body aspects of the problem are approximated by truncating the equations of motion one order beyond mean field the novelty is the way the time evolution is treated by means of an auxiliary mode expansion we obtain a propagation scheme that allows for dealing with arbitrary driving parameters fast and slow the method presented in this papercan be applied to a wide range of coupling parameters xmath162 provided one avoids the kondo regime hence we are not restricted to the weak coupling limit where xmath119 the charge pumped per pulse is rather small the presented results for single pulses are also valid for pulse trains provided the time between the pulses is sufficiently long one can expect to find qualitatively new and interesting effects by decreasing the time lag the propagation scheme allows in principle for studying transient effects in addition by propagating over a periodic sequence of pulses it constitutes a complementary approach to the more familiar periodic driving in this regard our propagation scheme has the potential to be a valuable tool and provide deeper insights into non adiabatic quantum pumps this work is supported in part by cnpq brazil here we motivate the rules given in sect sec prop to begin with we introduce correlation functions which can be approximated by finite sums then we write the current 
matrices in terms of these finite sums as we will show later in the present case we have to consider the following reservoir correlation function xmath163 right labeleq corrfuncintendaligned where the line width function xmath164 is defined as usual xcite xmath165 in the second line we have used the wide band limit in order to perform the energy integration in eq we expand the fermi function xmath166 as a finite sum over simple poles xmath167 with xmath168 and xmath169 instead of using the matsubara expansion xcite with polesxmath170 we use a partial fraction decomposition of the fermi function xcite which converges much faster than the standard matsubara expansion in this casethe poles xmath171 are given by the eigenvalues xmath172 of a xmath173 matrix xcite the poles are arranged such that all poles xmath174 xmath175 are in the upper lower complex plane as in the matsubara expansion all poles have the same weight employing the expansion given by eq one can evaluate the energy integrals by contour integration in the upper or lower complex plane depending on the sign of xmath176 thereby the integral in eq becomes a finite sum of the residues for xmath177 one gets eq corrfuncexp xmath178 with the auxiliary modes for reservoir xmath25 given by xmath179xpbeta here xmath180 is the chemical potential and xmath181 is due to the time dependent single particle energies xmath182 of the reservoir hamiltonian eqeq reshamilop the set of equations and can be formally solved in order to write down these solutionswe define the following functions 11 xmath183 gus t t equiv exp mathrmiinttt dt leftvarepsilonst u mathrmifracgammat2right endaligned with these definitions the formal solution of eq reads xmath184endaligned where we have assumed xmath185 corresponding to our choice of an initially uncorrelated density matrix see sec sec setup an analogous equation holds for xmath85 again with xmath186 we can combine these two expressions to get for the second part of the current matrix xmath187 where we have used the definition of the correlation function xmath188 given by eq finally by means of the expansion of the correlation functions we obtain an expansion of the current matrices xmath189 which resembles the last rule of eqsusing the explicit expression for xmath90 and taking the time derivative one can easily verify the first two rules given by eqs similarly one also obtains an expression for xmath190 which reads xmath191 the time derivative of this expression is given by eq if neither the couplings xmath42 and thus xmath7 nor the levels xmath192 or xmath44 depend on time the level occupations xmath193 and the currents xmath194 converge to stationary values these values can be obtained by setting all time derivatives in the respective equations of motion to zero in order to simplify the notation we characterize the stationary values by omitting the time argument within the hartree fock approximation sec sec hfapp we get from eq xmath196 plugging this into eq changing the xmath197 summation into an integral over xmath198 and using the definition we get for the wide band limit eq xmath199 equation is a non linear equation for xmath193 and has to be solved numerically we obtain the stationary conditional current xmath200 for the hubbard i approximation sec sec hiapp from eq as xmath201this expression can be used for the stationary xmath62 in eq xmath202 endaligned we use eq and the definition and finally get for the occupation the following integral xmath203 avarepsilon equiv 
frac1varepsilonvarepsilons2leftfracgamma2right2 avarepsilon equiv avarepsilon fraculeft4varepsilonvarepsilonsuright varepsilonvarepsilonsu2leftfrac3gamma2right2 endaligned this time the equation is linear in xmath204 and can be solved explicitly in the limitsxmath205 and xmath206 it is xmath207 and xmath208 respectively the former limit corresponds to non interacting electrons and eq gives the correct expression for the occupation xcite the latter case describes the situation with very strong interactions
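the auxiliary mode expansion above rests on writing the fermi function as a finite sum over simple poles; as a minimal illustration we show the standard matsubara form of this expansion, which is the one mentioned above with poles xmath170, together with its slow convergence, the point being that the partial fraction decomposition actually used in the paper (whose poles follow from a matrix eigenvalue problem) reaches the same accuracy with far fewer auxiliary modes; the values of beta, mu and the test energy below are illustrative only

```python
import numpy as np

# minimal sketch: truncated matsubara expansion of the fermi function,
#   f(e) = 1/2 - (2/beta) * sum_p (e - mu) / ((e - mu)^2 + x_p^2),
# with poles x_p = (2p - 1) * pi / beta. the paper replaces this by a
# partial fraction decomposition with faster convergence (not shown here).

beta, mu = 10.0, 0.0            # illustrative inverse temperature and chemical potential

def fermi_exact(e):
    return 1.0 / (np.exp(beta * (e - mu)) + 1.0)

def fermi_poles(e, n_poles):
    x = e - mu
    p = np.arange(1, n_poles + 1)
    w = (2.0 * p - 1.0) * np.pi / beta          # matsubara frequencies
    return 0.5 - (2.0 / beta) * np.sum(x / (x**2 + w**2))

e_test = 0.5
for n_poles in (10, 100, 1000):
    err = abs(fermi_poles(e_test, n_poles) - fermi_exact(e_test))
    print(f"{n_poles:5d} poles: error = {err:.2e}")   # error falls off only slowly with the pole number
```

fewer poles translate directly into fewer auxiliary current matrices that have to be propagated in time, which is why the fast-converging decomposition is preferred in practice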
we study non adiabatic charge pumping through single level quantum dots taking into account coulomb interactions we show how a truncated set of equations of motion can be propagated in time by means of an auxiliary mode expansion this formalism is capable of treating the time dependent electronic transport for arbitrary driving parameters we verify that the proposed method describes very precisely the well known limit of adiabatic pumping through quantum dots without coulomb interactions as an example we discuss pumping driven by short voltage pulses for various interaction strengths such finite pulses are particularly suited to investigate transient non adiabatic effects which may also be important for periodic drivings where they are much more difficult to reveal
introduction time-dependent interacting resonant-level model auxiliary mode propagation scheme non-adiabatic pumping conclusions auxiliary-mode expansion stationary occupations
the supermassive black hole in the center of the milky way located in the radio source sgr a is one of the most interesting astronomical objects see ref xcite for an extensive review it is now in a state of relative inactivity xcite but there is no good reason for it to be stationary eg there are interesting hints for a much stronger emission few 100 years ago xcite on the time scale of 40000 years major variability episodes are expected xcite fermi bubbles xcite could be visible manifestations xcite of its intense activity therefore it is reasonable to expect that a past emission from the galactic center leads to observable effects such scenario was recently considered in ref xcite the latest observations by the hess observatory xcite that various regions around sgr a emit xmath0rays till many tens of tev are offering us new occasions to investigate this object these xmath0rays obey non thermal distributions which are moreover different in the closest vicinity of sgr a and in its outskirts in the latter case the xmath0rays seem to extend till very high energies xmath2 tev without a perceivable cut off the xmath0rays seen by hess can be attributed to cosmic ray collisions xcite this is a likely hypothesis but the proof of its correctness requires neutrino telescopes in this connection it is essential to derive reliable predictions for the search of a neutrino signal from sgr a and its surroundings and hess observations are very valuable in this respect remarkably the possibility that the galactic centre is a significant neutrino source is discussed since the first works xcite and it is largely within expectations indeed sgr a is one of the main point source targets already for the icecube observatory xcite in this work we discuss the implications of the findings of hess briefly reviewed in sect sec gammaray where we also explain our assumptions on the xmath0ray spectra at the source the effect of xmath0ray absorption due to the known radiation fields or to new ones close to the galactic center is examined in details in sect sec abs the expected signal in neutrino telescopes evaluated at the best of the present knowledge is shown in sect sec ripu and it is quantified in sect sec rates while sect sec ccc is devoted for the conclusion we argue that the pevatron hypothesis makes the case for a cubic kilometer class neutrino telescope located in the northern hemisphere more compelling than ever the excess of vhe xmath0rays reported by the hess collaboration xcite comes from two regions around the galactic center a point source hess j1745 290 identified by a circular region centered on the radio source sgr a with a radius of 01xmath3 and a diffuse emission coming from an annulus with inner and outer radii of 015xmath3 and 045xmath3 respectively the observed spectrum from the point source is described by a cut off power law distribution as xmath4 while in the case of diffuse emission an unbroken power law is preferred in the last case however also cut off power law fits are presented as expected from standard mechanisms of particle acceleration into the galaxy the hess collaboration has summarised its observations by means of the following parameter sets best fit of the point source ps region xmath5 xmath6 tevxmath7 xmath8 sxmath7 xmath9 tev best fit of the diffuse d region xmath10 xmath11tevxmath7 xmath8 sxmath7 the best fits of both the diffuse and the point source emission are shown in fig figabs right panel however in order to predict the neutrino spectrum the xmath0ray spectrum at the source 
ie the emission spectrum is needed we will discuss the implications of the assumption that the emitted spectra coincide with the observed spectra as described by the previous functional forms and furthermore we will discuss the assumption that the xmath0ray emission at the source is described by different model parameters namely point source emission with an increased value of the cut off ps xmath12 xmath13 tevxmath7 xmath8 sxmath7 xmath14 tev diffuse emission as a cut off dc power law with xmath15 xmath16 tevxmath7 xmath8 sxmath7 xmath17 pev the interest in considering an increased value of the cut off the case ps which is the only case that differs significantly from the spectra observed by hess is motivated in the next section instead the inclusion of a cut off for the emission from the diffuse region agrees with the observations of hess and is motivated simply by the expectation of a maximum energy available for particle acceleration note that the xmath0ray observations extend up to 20 40 tev this is an important region of energy but it covers only the lower part of the region that is relevant for neutrinos which extends up to 100 tev as is clear eg from fig2 and 3 of xcite and fig1 of xcite in other words it should be kept in mind that until xmath0ray observations up to a few 100 tev become available thanks to future measurements by hawc xcite and cta xcite the expectations for neutrinos will rely in part on extrapolation and/or on theoretical modeling in this work unless stated otherwise we rely on a minimal extrapolation assuming that the above functional descriptions of the xmath0ray spectrum are valid descriptions of the emission spectrum a precise upper limit on the expected neutrino flux can be determined from the hess measurement assuming a hadronic origin of the observed xmath0rays the presence of a significant leptonic component of the xmath0rays would imply a smaller neutrino flux in principle however also other regions close to the galactic center but not probed by hess could emit high energy xmath0rays and neutrino radiation leading to an interesting signal one reason is that the annulus chosen by hess for the analysis resembles a region selected for observational purposes more than an object with an evident physical meaning another reason is that the ice based neutrino telescope icecube integrates over an angular extension of about xmath18 which is 5 times larger than the angular region covered in xcite in view of these reasons the theoretical upper limit on the neutrino flux that we will derive is the minimum that is justified by the current xmath0ray data moreover there is also a specific phenomenon that increases the expected neutrino flux that can be derived from the xmath0ray flux currently measured by hess this is the absorption of xmath0rays by non standard radiation fields as discussed in the next section during their propagation in the background radiation fields of the milky way high energy photons are subject to absorption consider the observed xmath0ray spectrum as summarized by means of a certain functional form the corresponding emission spectrum is larger it is obtained by modeling and then removing the effect of absorption ie by de absorption the neutrino spectrum corresponds to the emission spectrum and thus it is larger than the one obtained by converting the observed xmath0ray spectrum instead note that the idea that the xmath0rays could suffer significant absorption already at hess energies was put forward in ref xcite here we examine it in detail
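As an illustration of the two ingredients just described, the minimal Python sketch below evaluates a cut-off power-law spectrum and applies the de-absorption correction F_em(E) = F_obs(E) exp(+tau(E)). It is a sketch under stated assumptions: the normalization, spectral index and cut-off values are placeholders chosen only to show the shape (they are not the HESS best-fit parameters quoted above), and the opacity is a toy function standing in for the detailed calculation of the next section.

```python
import numpy as np

def cutoff_power_law(E, phi0, gamma, E_cut=None, E0=1.0):
    """dN/dE = phi0 * (E/E0)**(-gamma) * exp(-E/E_cut); energies in TeV.
    With E_cut=None the spectrum is an unbroken power law."""
    spec = phi0 * (E / E0) ** (-gamma)
    if E_cut is not None:
        spec = spec * np.exp(-E / E_cut)
    return spec

def deabsorb(observed, tau):
    """Emission spectrum recovered from the observed one:
    F_em(E) = F_obs(E) * exp(+tau(E)), tau being the opacity."""
    return observed * np.exp(tau)

# illustrative placeholder parameters (not the HESS best-fit values)
E = np.logspace(-1, 2, 200)                                   # 0.1 - 100 TeV
point_source = cutoff_power_law(E, phi0=3e-12, gamma=2.1, E_cut=10.0)
diffuse      = cutoff_power_law(E, phi0=2e-12, gamma=2.3)     # unbroken power law
toy_tau      = 0.1 * (E / 100.0)                              # placeholder opacity
emission     = deabsorb(diffuse, toy_tau)                     # de-absorbed spectrum
```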
the existence of a cosmic microwave background cmb that pervades the whole space and is uniformly distributed is universally known it leads to absorption of xmath0rays of very high energies around a pev as for the interstellar radiation background the model by porter et al xcite adopted eg in the galprop simulation program xcite can be conveniently used to describe the xmath0ray absorption due to the infrared ir and starlight sl backgrounds see eg xcite which occurs at lower energies it is convenient to group these three radiation fields namely cmb ir and sl as known radiation fields since it is not possible to exclude that in the vicinity of the galactic center new intense ir fields exist we should also be ready to consider hypothetical or unknown radiation fields the formal description of their absorption effects can be simplified without significant loss of accuracy if the radiation background field is effectively parameterized in terms of a sum of thermal and quasi thermal distributions where the latter are just proportional to thermal distributions for the xmath19th component of the radiation background two parameters are introduced the temperature xmath20 and the coefficient of proportionality to the thermal distribution which we call the non thermic parameter and denote by xmath21 the reliability of this procedure for the description of the galactic absorption was already tested in xcite we found that the formalism can be simplified even further without significant loss of accuracy thanks to a fully analytical albeit approximate formula derived and discussed in detail in the appendix we have checked the excellent consistency with the other approach based on xcite by comparing our results with fig3 of xcite we emphasize a few advantages of this procedure 1 the results are exact in the case of the cmb distribution which is a thermal distribution 2 such a procedure allows one to vary the parameters of the radiation field very easily discussing the effect of errors and uncertainties 3 the very same formalism allows us to model the effect of new hypothetical radiation backgrounds a photon with energy xmath22 emitted from an astrophysical source can interact during its travel to the earth with ambient photons producing electron positron pairs the probability that it will reach the earth is xmath23 where xmath24 is the opacity in the interstellar medium different radiation fields can offer a target to astrophysical photons and determine their absorption the total opacity is therefore the sum of various contributions xmath25 where the index xmath19 labels the components of the background radiation field that cause absorption these include the cmb as well as the ir and sl backgrounds and possibly new ones present near the region of the galactic center
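The parameterization just introduced (a temperature and a "non thermic" rescaling factor per component, plus a typical path length) lends itself to a simple numerical sketch, shown below in Python. It computes the photon number density and mean photon energy of each (quasi-)thermal component and a rough order-of-magnitude estimate of its peak opacity, using the fact that the gamma-gamma pair-production cross section never exceeds roughly a quarter of the Thomson cross section near threshold. The 8.5 kpc path length is only an illustrative value for the distance to the galactic center; this sketch is not the analytical formula derived in the appendix.

```python
import numpy as np

# physical constants (cgs)
K_B     = 1.380649e-16    # erg/K
HBARC   = 3.1615e-17      # erg*cm  (hbar * c)
SIGMA_T = 6.6524e-25      # cm^2, Thomson cross section
ZETA3   = 1.2020569

def thermal_photon_density(T, zeta=1.0):
    """Number density (cm^-3) of a thermal photon gas at temperature T,
    rescaled by the 'non thermic' parameter zeta (zeta=1 for the CMB)."""
    return zeta * (2.0 * ZETA3 / np.pi**2) * (K_B * T / HBARC) ** 3

def typical_photon_energy(T):
    """Mean photon energy of a thermal distribution, <eps> ~ 2.70 k_B T (erg)."""
    return (np.pi**4 / (30.0 * ZETA3)) * K_B * T

def peak_opacity(T, zeta, L_cm):
    """Very rough peak opacity of one background component: the gamma-gamma
    cross section maximum is ~sigma_T/4, reached for gamma-ray energies
    around (m_e c^2)^2 / <eps>; the column density is n * L."""
    return 0.25 * SIGMA_T * thermal_photon_density(T, zeta) * L_cm

def survival_probability(opacities):
    """P = exp(-tau_tot), with tau_tot the sum over all components."""
    return np.exp(-sum(opacities))

# example: CMB only, over ~8.5 kpc to the galactic center (illustrative)
KPC = 3.086e21
tau_cmb = peak_opacity(T=2.725, zeta=1.0, L_cm=8.5 * KPC)
print(tau_cmb, survival_probability([tau_cmb]))
```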
table tab val gives the values of the parameters of the background radiation fields used for the computation of the absorption factor of the xmath0rays from the galactic center the black body temperature xmath20 the non thermic parameter xmath21 the typical length xmath26 namely the distance of the galactic center for the cmb and the exponential scales for the ir and sl radiation fields the total density of photons xmath27 obtained from eq nonno and the typical energy xmath28 obtained from eq nonna first we examine the behaviour of the integrand in xmath29 the function xmath30 if xmath31 on the contrary when xmath32 it diverges like xmath33 the divergence is compensated by the behavior of the function xmath34 that follows from xmath35 at high values of xmath36 finally xmath37 at small values of xmath36 at this point we study the behaviour of xmath38 in xmath39 for high xmath39 we can consider only the first term of the expansion of xmath34 so the function xmath38 is well approximated by xmath40 within an accuracy of 1% for xmath41 for small xmath39 the most important contribution to the integral is given where xmath42 diverges and xmath34 is not exponentially suppressed this condition is realized when xmath43 and in this region xmath44 for xmath42 we can use the asymptotic expression ie xmath45 the approximation of the function xmath46 is then given by xmath47 xmath48 this implies the behavior xmath49 to within an accuracy of about 3% in the interval xmath50 a global analytical approximation of xmath38 that respects the behavior for small and large values of xmath39 is given by eq analyticapp its accuracy is xmath51 in the interval xmath52 when xmath53 the function rapidly decreases as can also be seen from tab tab val where the values are obtained by numerical integration without any approximation
references
f aharonian et al hess collaboration nature 531 (2016) 476
r genzel f eisenhauer and s gillessen rev mod phys 82 (2010) 3121
g ponti m r morris r terrier and a goldwurm astrophys space sci 34 (2013) 331
m freitag p amaro seoane and v kalogera astrophys j 649 (2006) 91
m su t r slatyer and d p finkbeiner astrophys j 724 (2010) 1044
r m crocker and f aharonian phys rev lett 106 (2011) 101102
y fujita s s kimura and k murase phys rev d 92 (2015) 023001
the hypothesis of a pevatron in the galactic center which emerged with the recent xmath0ray measurements of hess xcite motivates the search for neutrinos from this source the effect of xmath0ray absorption is studied at the energies currently probed the known background radiation fields lead to small effects whereas it is not possible to exclude large effects due to new ir radiation fields near the very center precise upper limits on neutrino fluxes are derived and the underlying hypotheses are discussed the expected number of events for antares icecube and km3net based on the hess measurements is calculated it is shown that kmxmath1 class telescopes in the northern hemisphere have the potential to observe high energy neutrinos from this important astronomical object and can check the existence of a hadronic pev galactic accelerator
introduction; the @xmath0-ray spectra from the galactic center region [sec:gammaray]; absorption of @xmath0-rays
we would like to thank d jouan and zhang xiaofai for helpful discussions this work is partly supported by the national natural science foundation of china
references
t matsui and h satz phys lett b178 (1986) 416
na38 collaboration c baglin et al phys lett b220 (1989) 471; b251 (1990) 465
a capella et al phys lett b206 (1988) 47
c gerschel and j huefner phys lett b207 (1988) 253
j j aubert et al nucl phys b213 (1983) 1
j peng et al in proceedings of the workshop on nuclear physics on the light cone july 1988 world scientific p65
d kharzeev and h satz phys lett b366 (1996) 316
s gavin and r vogt nucl phys b345 (1990) 104
r vogt s j brodsky and p hoyer nucl phys b360 (1991) 67
r vogt nucl phys a544 (1992) 615c
w q chao and b liu z phys c72 (1996) 291
d kharzeev h satz a syamtomov and g zinoviev cern-th 96-72
na50 collaboration to appear in the proceedings of quark matter 96
r l anderson slac-pub-1741 (1976)
blaizot and j y ollitrault phys rev lett 77 (1996) 1703
wong ornl-ctp-9607 (1996)
wong introduction to high energy heavy ion collisions world scientific singapore 1994 p360
n s craigie phys rep 47 (1978) 1
b andersson g gustafson g ingelman and t sjostrand phys rep 97 (1983) 31
b andersson g gustafson and b nilsson-almqvist nucl phys b281 (1987) 289
b andersson g gustafson and h pi z phys c57 (1993) 485
g gustafson phys lett b175 (1986) 453
g gustafson and u pettersson nucl phys b306 (1988) 746
b andersson et al z phys c43 (1989) 625
l lonnblad ariadne version 4 a program for simulation of qcd cascades implementing the color dipole model desy 92-046
bengtsson and t sjostrand comput phys commun 46 (1987) 43
t sjostrand a manual to the lund monte carlo for jet fragmentation and xmath71 physics jetset version 7.3 available upon request to the author
g a schuler cern-th 7170/94
figure captions
figure 1 xmath0 cross sections divided by xmath49 as a function of xmath54 our results are compared with the experimental data from na51 na38 and na50 xcite
figure 2 xmath0 cross sections divided by drell yan cross sections as a function of xmath72 our results are compared with the experimental data from na38 and na50 xcite
table 1 xmath73 gev c xmath45 xmath74
when the cross section of xmath0 production is considered to vary with the energy of the nucleon nucleon interaction the production of xmath0 in pa and aa collisions has been studied using the fritiof model the calculation shows that the cross section of xmath0 production per nucleon nucleon collision decreases with increasing mass number and centrality as a consequence of the continuous energy loss of the projectile nucleons to the target nucleons in their successive binary nucleon nucleon collisions we have compared our model predictions with the experimental data on xmath0 production pacs number 2575q report no bthep-th96-42 dec 1996 tai an xmath1 chao wei qin xmath2 and yao xiao xia xmath3 a ccast world lab p o box 8730 beijing pr china b institute of high energy physics academia sinica p o box 918 4 beijing 100039 pr china suppression of xmath0 production in high energy heavy ion collisions was proposed as an effective signature of qgp formation ten years ago xcite the ensuing experimental data confirmed a significant suppression of xmath0 production in both pa and aa collisions xcite however alternative explanations of the xmath0 suppression exist based on the absorption of xmath0 in nuclear matter xcite the overall set of extensive data collected and analysed by na38 seems to support the absorption mechanism in pbxmath4 obxmath4 and su collisions nevertheless a rather large absorption cross section xmath5 62 mb has to be used in order to fit the experimental data about three times larger than the total xmath0n cross section from the emc collaboration xcite such a difference was already noted a long time ago in xcite recent calculations based on the colour octet model show that this phenomenological cross section could be understood as the absorption of pre resonance states xmath6 in nuclear matter xcite other sources of xmath0 suppression in nuclear collisions such as the interaction of xmath0 particles with the produced mesons called xmath7 xcite gluon shadowing in nuclei the intrinsic charm component energy degradation of the produced xmath8 pair etc xcite have been introduced besides the absorption due to the xmath0n interaction to explain the data in this paper we propose a simple model to investigate xmath0 suppression in pa and aa collisions focusing on the decrease of the cross section of xmath0 production with increasing mass number and centrality due to the continuous energy loss of the projectile nucleons to the target nucleons in their successive binary nucleon nucleon collisions and not on its later absorption in nuclear matter from the calculations of this model we conclude that the absorption of xmath0 particles in nuclear matter accounts for only part of the xmath0 suppression seen in pa and aa collisions so far since the probability for a nucleon nucleon collision to lead to xmath0 production is very small it is generally accepted that multiple xmath0 production processes in pa and aa collisions can be neglected the studies of xmath0 suppression based on the absorption mechanism actually assume that the probability of xmath0 production per nucleon nucleon collision is the same independent of the masses of the colliding nuclei at a given energy however each binary nucleon nucleon collision experienced by a projectile nucleon on its way out of the target in pa and aa collisions may not be the same if the projectile nucleon loses a fraction of its energy in each binary collision to the target nucleon such that the cms energy of each binary collision of this incoming nucleon with
a target nucleon is different since the cross sections of the hard qcd parton parton scatterings which are responsible for xmath0 production increase with increasing energy it is conceivable that the probability of xmath0 production would also depend on the energy of a nucleon nucleon collision in a similar way it is confirmed both theoretically and experimentally that the xmath0 photoproduction cross section exhibits a strong threshold behaviour in the low energy region and then increases with energy in the higher energy region xcite in a participant spectator model of nucleus nucleus collisions like fritiof each projectile nucleon may collide several times with the target nucleons if momentum transfers are assumed to take place in each binary nucleon nucleon collision then the cms energy of these binary collisions will decrease with time thereby reducing the probability to produce a xmath0 in each binary collision in this paper we have calculated how the cross section of xmath0 production varies with increasing mass number of the colliding nuclei and centrality using the fritiof model the calculations show that xmath0 production is suppressed in pa and aa collisions in comparison with the nucleon nucleon collision and that the larger the mass number of the colliding nuclei and the centrality the greater the suppression as a result of the fact that the probability of finding a qcd hard process in a binary nucleon nucleon collision decreases with increasing mass number of the colliding nuclei and centrality taking into account the xmath0 absorption by the xmath0n interaction our results are in good agreement with the experimental data with the exception of the latest data on pbxmath9pb collisions at 158 a gev c which show a further xmath0 suppression xcite the absorption cross section xmath10 needed to explain the data from pbxmath4 obxmath4 and su collisions is about 14 mb in this paper which is consistent with the experiments the emc collaboration xcite gives a total xmath0n cross section of 22 xmath11 07 mb and the xmath0n quasi elastic cross section is given in xcite as 079 xmath11 0012 mb furthermore our results also imply that some new mechanism of xmath0 suppression seems to be needed particularly for understanding the pbxmath9pb data the authors in xcite xcite have attributed the further xmath0 suppression in the pbpb data to the formation of a qgp state as we have mentioned before the cross section of xmath0 production will be different in each binary nucleon nucleon collision in pa and aa collisions if the energy loss of the projectile nucleons to the target nucleons in the successive binary collisions is taken into account assume that xmath12 is the mean cross section for the production of a xmath0 particle in a binary nucleon nucleon collision where the average is done over all the binary collisions at an impact parameter xmath13 then the total probability for producing a xmath0 particle in an a b collision at impact parameter xmath13 is the sum xcite xmath14 $= \sum_{n\ge 1} \binom{AB}{n}\left[\,T(b)\,\sigma^{J/\psi}_{NN}\right]^{n}\left[\,1-T(b)\,\sigma^{J/\psi}_{NN}\right]^{AB-n}$ (eq ff1) where $T(b)$ is the thickness function because xmath15 is a very small quantity xmath16 xmath17 xcite xmath18 xmath19 for central su collisions for instance the summation given by eq ff1 is dominated by the first term with n = 1 the terms with n xmath20 1 represent multiple xmath0 production processes and shadowing corrections which are very small and can be neglected then the probability for xmath0 production in ab collisions can be approximated as xmath21 (eq ff2) therefore the cross section of xmath0 production corresponding to a centrality bin xmath22 is given by the
following formula xmath23 where xmath24 is the mean cross section of xmath0 production per nucleon nucleon collision within the centrality bin xmath22 we know that qcd hard scatterings the gluon fusion and quark antiquark annihilation between partons are the main source of xmath0 production let xmath25 be the probability to have a hard scattering in a nucleon nucleon collision and xmath26 the probability to produce a xmath0 from the hard scattering then we can write out xmath12 to be xmath27 where xmath28 is the total cross section of a nucleon nucleon collision we have assumed that xmath29 is approximately a constant in the energy span that we are concerning so the product is the same for all the binary collisions combining eqff4 with eqff3 and replacing xmath30 by the ratio xmath31 xmath32 is the number of binary collisions in an ab collision and xmath33 the number of binary collisions with a hard scattering both of them are the function of the impact parameter xmath13 we finally obtain xmath34 and for the minimum bias events we have xmath35 we see from eqff6 that the dependence of the quantity xmath36 on the masses of colliding nuclei or centrality is solely determined by how the mean probability of having a hard scattering in a binary nucleon nucleon collision varies with the masses of colliding nuclei or centrality before calculating xmath37 in pa and aa collisions we will give a brief introduction of fritiof dynamics focusing on how a hard parton parton scattering is distinguished from a soft one fritiof is a string model based on the concepts of the lund string model xcite which started from the modeling of inelastic hadron hadron collisions and it has been successful in describing many experimental data from the low energies at the isr regime all the way to the top sps energies xcite xcite this has been achieved by the introduction of a particular longitudinal momentum transfer scenario gluon bremsstrahlung radiation the dipole cascade model dcm xcite and the soft radiation model srm xcite this is implemented by the use of ariadne xcite as well as hard parton scattering rutherford parton scattering rps this is implemented by the pythia routines xcite in fritiof during the collision two hadrons are excited due to longitudinal momentum transfers andor a rps it is further assumed that there is no net color exchange between the hadrons the highly excited states will emit bremsstrahlung gluons according to the srm they are afterwards treated as excitations or the lund strings and the string states are allowed to decay into final state hadrons according to the lund prescription as implemented by jetset xcite in the fritiof model a hadron is assumed to behave like a massless relativistic string mrs corresponding to a confined color force field of a vortex line character embedded in a type ii color superconducting vacuum a hadron hadron collision is pictured as the multi scatterings of the partons inside the two colliding hadrons this includes both the hard and the soft components depending on the four momentum transfers xmath38 or equivalently the transverse momentum transfers involved the soft part is described by a simple phenomenological model the hard scatterings can however be calculated from perturbative qcd and correspond to the rutherford parton parton scattering rps the divergence problem in rps is handled by introducing the sudakov factor there will be color separation in the model ie there will for each hadron be a color xmath39 a diquark continuing forward along the beam 
direction and a valence quark a color xmath40 moving in the opposite direction due to the longitudinal momentum transfer this will lead to bremsstrahlung of a dipole character a procedure therefore is adopted in fritiof that compares the hardness of the rutherford partons to that of the bremsstrahlung gluons the rps is accepted only if it is harder than the associated radiation if the rps is drowned which is to say that it is softer than the radiation then the rps is not acceptable and the collision proceeds as a purely soft collision with this prescription the rps spectrum is suppressed smoothly at small to medium transverse momentum region for the hadron nucleus and nucleus nucleus collisions the process has in the fritiof model been treated as a set of incoherent collisions on the nucleons thus a nucleon from the projectile interacts independently with the encountered target nucleons as it passes through the nucleus the probability distribution for the number of inelastic collisions xmath41 is taken from geometric calculations each of the sub collisions is treated in the same way as an ordinary hadron hadron collision although the momentum transfers will again be additive and every encounter will make the projectile more excited if it interacts with xmath41 nucleons in the target xmath42 excited string states will be formed as a result these string states will then independently emit associated bremsstrahlung radiation and then fragment into hadrons in the same way as individual strings this picture is supported by the fact that the global features of heavy ion collisions are satisfactorily explained by the collision geometry together with the independent hadron hadron collisions using fritiof it is straightforward to calculate the number of binary nucleon nucleon collisions xmath32 and the number of the binary collisions with a hard scattering xmath43 in pa and aa collisions at a given impact parameter xmath13 so that the mean probability to have a hard scattering in a binary nucleon nucleon collision xmath30 xmath44 can be obtained since we are mainly interested in xmath0 production in pa and aa collisions relative to that in the pp collision we do not need to know how a xmath0 is actually formed from the hard scatterings in order to investigate the dependence of xmath0 production cross sections on mass number and centrality we have calculated the quantity xmath45 for various pa and aa collisions and xmath46 for different centrality bins in su and pbpb collisions at xmath47200 gev c using fritiof after determining xmath29 by the data of the pp collision we plot our results of xmath48 as a function of xmath49 in figure 1 for the minimum bias events for the cross section of xmath0 production in different centrality bins we plot the results of xmath50 as a function of xmath51 in figure 2 where xmath52 is the number of participants from the projectile and xmath53 the number of participants from the target since the drell yan cross section in a given centrality bin is found in experiments proportional to an effective xmath54 xmath55 in the same way a constant has to be determined by the corresponding data of the pp collision the impact parameter bins are taken to be the same as those extracted by na38 and na50 xcite we decided not to use the absorption length xmath56 to be the longitudinal axis as used by na50 because xmath56 calculated from the geometry model is not sensitive to the change of impact parameter for very central pbpb collisions the results of our calculations show that the 
decrease of the quantity xmath57 or xmath50 ie the cross section of xmath0 production per nucleon nucleon collision is due to the fact that the probability of a qcd hard scattering per binary nucleon nucleon collision decreases with increasing mass number and centrality when the absorption of xmath0 by the xmath0n interaction is also taken into account ie when the previous results are multiplied by xmath58 with xmath59 014 n xmath60 xmath61 14 mb and xmath56 taken to be the same as those in xcite our model reproduces the data on xmath0 suppression with the exception of the latest data from pbpb collisions at 158 gev c which clearly show a further suppression one possibility which would bring about a further suppression of xmath0 production is that xmath62 the probability to produce a xmath0 from a hard process drops suddenly under certain conditions this is equivalent to saying that the xmath8 produced from the hard processes cannot form a bound state however at the moment our simple model cannot estimate when this would happen there are also other possible mechanisms of xmath0 suppression which are not included in our simple model and the xmath63 dependence of xmath0 suppression has not been investigated yet therefore it is hard to conclude now whether this further suppression in pbpb collisions is due to qgp formation it is known that there is no unique criterion to distinguish a hard process from a soft one in a nucleon nucleon collision usually a xmath64 is introduced as the minimum transverse momentum of the partons produced in a hard process a dynamic criterion is applied in fritiof to choose a hard process by comparing the hardness of a rps parton with the hardness of the bremsstrahlung gluons as mentioned before however the cross section of xmath0 production should not depend on which criterion is actually used in the calculation we have thus calculated all the results of this paper using the conventional xmath64 criterion in pythia xmath64 1 gev c just to check whether our conclusions rely on the specific criterion in fritiof the calculations show that the results in these two cases are in agreement with each other we have also checked whether the quantity xmath65 is energy independent as we have assumed a parametrization of the xmath0 cross section is given in xcite as xmath66 where xmath67 stands for the cms energy per nucleon therefore if xmath65 in eq ff6 is energy independent we should have the ratio xmath68 we have calculated xmath45 and this ratio at various energies from xmath47 60 gev c to xmath47 450 gev c for pp collisions xmath69 1 for a pp collision and the results are listed in tab 1 which show that this ratio in this energy region is not sensitive to the change of energy in comparison with xmath70 but a threshold behaviour may exist at lower energies as can be seen from the value at xmath47 60 gev c
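The counting logic used throughout, where the J/psi yield per binary nucleon-nucleon collision tracks the mean probability of a hard scattering per binary collision and is optionally multiplied by a nuclear-absorption survival factor, can be summarized in a short Python sketch. This is a toy illustration, not FRITIOF output: the event counts, the absorption length and the nuclear-density value below are placeholders, and the absorption factor is written in the simple exponential form used above.

```python
import numpy as np

def per_collision_hard_probability(n_bin, n_hard):
    """Mean probability that a binary nucleon-nucleon collision contains a
    hard (RPS) scattering, <n_hard / n_bin>, estimated from simulated events."""
    n_bin = np.asarray(n_bin, dtype=float)
    n_hard = np.asarray(n_hard, dtype=float)
    return np.mean(n_hard / n_bin)

def jpsi_suppression_ratio(n_bin_ab, n_hard_ab, p_hard_pp,
                           sigma_abs_mb=0.0, rho0_fm3=0.14, L_fm=0.0):
    """Ratio of the J/psi cross section per NN collision in AB relative to pp,
    optionally multiplied by a nuclear-absorption survival factor
    exp(-rho0 * sigma_abs * L); sigma_abs in mb, rho0 in fm^-3, L in fm."""
    ratio = per_collision_hard_probability(n_bin_ab, n_hard_ab) / p_hard_pp
    sigma_abs_fm2 = sigma_abs_mb * 0.1          # 1 mb = 0.1 fm^2
    survival = np.exp(-rho0_fm3 * sigma_abs_fm2 * L_fm)
    return ratio * survival

# toy event counts (not FRITIOF output): in a heavy system the energy loss
# makes hard scatterings rarer per binary collision than in pp
ratio = jpsi_suppression_ratio(n_bin_ab=[210, 250, 190],
                               n_hard_ab=[140, 160, 125],
                               p_hard_pp=0.8,
                               sigma_abs_mb=14.0, L_fm=5.0)
print(ratio)
```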
acknowledgments