\begin{document} \title{Circle Packing with Generalized Branching} \begin{center}{\it Dedicated to C. David Minda on the occasion of his retirement} \end{center} Classical analytic function theory is at the heart of David Minda's research and of many of the results in this volume. It has been a pleasure in recent years to find that simple patterns of circles called circle packings could find themselves in such tight company with this classical theory. David himself contributed to this topic in \cite{MR91} and on his retirement will surely have time to dive back into it. Let us briefly review the circle packing story line. It began with Bill Thurston's observation that for every abstract triangulation $K$ of a topological sphere there exists an essentially unique configuration of circles with mutually disjoint interiors on the Riemann sphere $\bP$ whose pattern of tangencies is encoded in $K$. That is, there is a circle packing $P$ for $K$ in $\bP$. Based on this rigidity and his intuition, Thurston made a remarkable proposal at the 1985 Conference in Celebration of de~Branges' Proof of the Bieberbach Conjecture: namely, that one could use such circle packings to approximate conformal mappings. The subsequent proof of his conjecture by Burt Rodin and Dennis Sullivan \cite{RS87} established circle packing as a topic and opened its most widely known aspect, the approximation of classical analytic functions. As this approximation theory developed, a second aspect, which we will call {\sl discrete analytic function theory}, began to emerge. For it became increasingly clear that classical phenomena were already at play within circle packing --- mappings between circle packings not only approximated analytic functions, they also mimicked them.
The literature shows an ever-growing list of conformal notions being realized discretely, often with remarkable geometric fidelity: moving circle packing into the hyperbolic geometry of $\bD$ led to infinite packings and the consequent classical type conditions --- the spherical, hyperbolic, and euclidean trichotomy --- and from that came the discrete uniformization theorem, discrete Riemann surfaces and covering theory, and random walks; notions of branch points and boundary conditions then allowed for discrete versions of familiar classes of functions: polynomials, exponentials, and the Blaschke products, Ahlfors functions, and Weierstrass functions that play their roles in this paper. Part and parcel of these developments has been a third aspect, computation. Circle packings demand to be seen; that has led to packing algorithms, followed by experiments, then new --- often surprising --- observations, augmented theory, more computations, on and on. The work here was motivated by computational challenges, and the images behind our work are produced with the open software package \CP, \cite{kS92}. Step after step in this story one can observe the remarkable faithfulness of the discrete theory to its continuous precedents, so that today one can claim a fairly comprehensive discrete world parallel to the classical world of analytic functions (and invariably converging to it in the limit as the combinatorics are refined). Yet this discrete world can never be fully comprehensive; one always faces ``discretization issues''. This paper is a preliminary description of new machinery for addressing the principal remaining gap in the foundation of discrete function theory, the existence and uniqueness of discrete meromorphic functions. The sphere is a difficult setting for circle packing.
On the practical side, there is no known algorithm for computing circle packings {\sl in situ}, restricting the experimental approach; essentially all circle packings on $\bP$ have been obtained {\sl via} the stereographic projection of hyperbolic or euclidean packings. More crucially, the compactness of the sphere brings conformal rigidity, with topologically mandated branching and no boundary to provide maneuvering room. Branching difficulties are the discretization issue we address here. We introduce generalized branching, which began with the thesis of the first author, \cite{jA12}. We believe generalized branching will provide the flexibility necessary to construct the full spectrum of discrete branched mappings while keeping two main objectives at the fore: (1) discrete analytic functions should display qualitative behaviors parallel to their classical counterparts, and (2) discrete analytic functions should converge under refinement to their classical counterparts. \section{Classical Models}\label{S:Classical} We use three types of classical functions to motivate this work: finite Blaschke products on the unit disc, Ahlfors functions on annuli, and Weierstrass functions on tori. We review these in preparation for their discrete versions. \noindent{\sl Blaschke Products:} A classical finite Blaschke product $\fB:\bD\rightarrow\bD$ is a proper analytic self-map of the unit disc $\bD$. In particular, $\fB$ has finite valence $N\ge 1$, it maps the unit circle $N$ times around itself, and it has $N-1$ branch points in $\bD$, counting multiplicities --- that is, $\fB'$ has $N-1$ zeros in $\bD$. The function $\fB$ is known as an $N$-fold Blaschke product. Topologically speaking, $\fB$ maps $\bD$ onto an $N$-sheeted complete branched covering of $\bD$. The images of the branch {\sl points} under $\fB$ are known as branch {\sl values}. As a concrete example, let us distinguish two points $p_1\not= p_2$ in $\bD$.
It is well known that there exists a $3$-fold Blaschke product $\fB$ with $p_1,p_2$ as simple branch points. It is convenient to assume a standard normalization, so by post-composing with a conformal automorphism (M\"obius transformation) of $\bD$ we may arrange further that $\fB(0)=0$ and $\fB(i)=i$. This is the function we will have in mind for discretization later. \noindent{\sl Ahlfors Functions:} Our next model is defined on a proper annulus $\Omega$. By standard conformal mapping arguments, we may take $\Omega$ to be a {\sl standard} annulus, $\Omega=\{z:r< |z|<1/r\}$, with $0<r<1$. Designating a point $z_0\in \Omega$, one may consider the extremal problem: maximize $|\fF'(z_0)|$ over all analytic functions $\fF:\Omega\rightarrow \bD$. The solution $\fA(z)$ is known to exist, is unique up to multiplication by a unimodular constant, and is referred to as an {\sl Ahlfors} function for $\Omega$. Ahlfors functions are also characterized, however, by their mapping properties. They are the proper analytic mappings $\fA:\Omega\rightarrow\bD$ which extend continuously to $\partial\Omega$ and map each component of $\partial \Omega$ 1-to-1 onto the unit circle. Any such map will be a branched double covering of $\bD$ with two simple branch points, $p_1, p_2\in\Omega$. The Ahlfors function is fundamental to function theory on $\Omega$ and is analogous to the 1-fold Blaschke products on $\bD$, i.e., M\"obius transformations. It is determined uniquely by $r$ (up to pre- and post-composition by conformal automorphisms). To have a concrete example in mind for discretization, let us suppose that $z_0$ is on the midline of $\Omega$, say $z_0=1$. From elementary symmetry considerations we deduce that $\fA(1)= \fA(-1)=0$ and that the branch points in $\Omega$ lie at $p_1=i$ and $p_2=-i$. A normalization in the range, $\bD$, will put the branch values on the imaginary axis, symmetric with respect to the origin.
\noindent{\sl Weierstrass Functions:} Our final model is the classical Weierstrass function $\fW$. This is a meromorphic function mapping a conformal torus $T$ to a branched double covering of the sphere. A fundamental domain for $T$ is the parallelogram in $\bC$ with corners $0,1,\tau, 1+\tau$, where $\tau$, a complex number in the upper half plane, is the so-called {\sl modulus} of $T$. The function $\fW$ has four simple branch points at $0,1/2,\tau/2,$ and $(1+\tau)/2$ and is determined uniquely by $\tau$ (up to pre- and post-composition by conformal automorphisms). Note that while all three classes of functions are characterized by their topological mapping properties, only with the Blaschke products do we get any choice in the branch points --- for Ahlfors and Weierstrass functions, branch point locations are (up to normalization) forced on us by the conformal geometry of the domain. \section{Discrete Versions}\label{S:DiscVersions} We will now describe and illustrate discrete versions of these classical functions. We assume a basic familiarity with circle packing, as presented in \cite{kS05} for example. However, a brief overview, together with the images here, should aid the intuition even for those not familiar with the details. A {\sl discrete analytic function} is a map between circle packings. The domain, rather than being a Riemann surface, will now be a triangulated topological surface with combinatorics encoded as a simplicial 2-complex $K$: thus, we will be selecting $K$ to be a combinatorial disc, a combinatorial annulus, or a combinatorial torus, as appropriate. A {\sl circle packing} for $K$ is a configuration $P$ of circles, $P=\{c_v\}$, with a circle $c_v$ associated with each vertex $v$ of $K$. The circle packing may live in the euclidean plane, $\bC$, in the hyperbolic plane, represented as the unit disc $\bD$, or on the Riemann sphere, $\bP$.
The only requirements are that whenever $\langle v,w\rangle$ is an edge of $K$, then circles $c_v,c_w$ must be (externally) tangent, and when $\langle v,u,w\rangle$ is an oriented face of $K$, then the circles $c_v,c_u,c_w$ must form an oriented triple of mutually tangent circles. The {\sl carrier} of $P$, denoted carr$(P)$, is the polyhedral surface formed by connecting the centers of tangent circles with geodesic segments; that is, carr$(P)$ is an immersion of the abstract triangulation $K$ as a concrete triangulated surface. At the foundation of the theory is the fact that each complex $K$ has a canonical {\sl maximal packing} $P_K=\{C_v:v\in K\}$. This is a univalent circle packing, meaning the circles have mutually disjoint interiors, which fills $\bD$, a conformal annulus, or a conformal torus, as the case may be. The packing $P_K$ serves as the domain for discrete analytic functions associated with $K$. The image will be a second circle packing $P$ for $K$ which lies in $\bD$ for discrete Blaschke products and discrete Ahlfors functions, or on the sphere $\bP$ for discrete Weierstrass functions. The discrete analytic function, then, will be the map $\ff:P_K\rightarrow P$ which identifies corresponding circles. (One may also treat $\ff$ as a topological mapping $\ff:\text{carr}(P_K)\rightarrow \text{carr}(P)$ by mapping circle centers to circle centers and extending {\sl via} barycentric coordinates to edges and faces.) We are now ready for the discrete constructions. Central to our work is the issue of branching, as we will see in this first discrete example. \subsection{Discrete Blaschke Product}\label{SS:DiscBl} In a sense, discrete function theory began with the introduction of discrete Blaschke products; see \cite{tD95} and \cite[\S13.3]{kS05}. The construction here will serve to remind the reader of basic notation and terminology while providing an example directly pertinent to our work.
A discrete finite Blaschke product $\fb$ is illustrated in \F{discBl}, with the domain circle packing $P_K$ on the left and the image circle packing $P$ on the right, both in $\bD$. There is nothing special in the underlying complex $K$, a combinatorial disc --- it is just a generic triangulation of a topological disc, though there are minor combinatorial side conditions to avoid pathologies. \begin{figure} \caption{A $3$-fold discrete Blaschke product $\fb$, domain and range.} \label{F:discBl} \end{figure} Begin with the domain packing for $\fb$ on the left, the maximal packing $P_K=\{C_v:v\in K\}$. The boundary circles are horocycles (euclidean circles internally tangent to $\partial\bD$). A designated interior vertex $\alpha$ has its circle $C_{\alpha}$ centered at the origin and a designated boundary vertex $\gamma$ has its circle $C_{\gamma}$ centered at $z=i$; the latter appears here as dark blue. The classical Blaschke product $\fB$ discussed earlier involved branch points $p_1,p_2$; these appear as the two black dots in the domain. To mimic this, we have identified interior circles $C_{v_1},C_{v_2}$, red circles, whose centers are nearest to $p_1,p_2$, respectively. Note that the unit disc is treated as the Poincar\'e model of the hyperbolic plane, so circle centers and radii are hyperbolic and the carrier faces are hyperbolic triangles. The boundary circles, as horocycles, are of infinite hyperbolic radius and have hyperbolic (ideal) centers at their points of tangency with the unit circle. The set of hyperbolic radii is denoted by $R_K=\{R_K(v)\}$. The existence of $P_K$ follows from the fundamental Koebe-Andreev-Thurston Theorem, \cite[Chp~6]{kS05}, as does its essential uniqueness up to conformal automorphisms of $\bD$. In practice, however, it is computed based on angle sum conditions.
The {\sl angle sum} $\theta_{R_K}(v)$ at a vertex $v$ is the sum of angles at $v$ in all the faces to which it belongs and is easily computed from the radii $R_K$ using basic hyperbolic trigonometry. Clearly, one must have $\theta_{R_K}(v)=2\pi$ for every interior $v$. This, along with the condition that $R_K(w)=\infty$ for boundary vertices $w$, is enough to solve for $R_K$. Let us now move to the more visually challenging range packing in \F{discBl}, denoted $P=\{c_v:v\in K\}$. This, too, is a hyperbolic circle packing for $K$, though it is clearly not univalent. We have arranged that the circle $c_{\alpha}$ is centered at the origin and that the circle $c_{\gamma}$ is a horocycle centered at $z=i$, just as in $P_K$. The boundary circles are again horocycles, and if one starts at $c_{\gamma}$ and follows the counterclockwise chain of successively tangent horocycles, one finds that it wraps three times about the unit circle. This mimics the behavior of our $3$-fold classical Blaschke product $\fB$. The packing $P$ is a bit too fussy for us to display its carrier, but that carrier is in fact a $3$-sheeted branched surface. Hidden among the interior circles of $P$ are the two associated with vertices $v_1,v_2$, the branch vertices. These circles, red in both domain and range, are difficult to pick out, but since branching is the central topic of the paper, we have blown up the local images at $v_1$ in \F{detailBl}. We now describe what you are seeing. \begin{figure} \caption{Isolated flowers for the branch vertex $v_1$ in the domain $P_K$ and range $P$ of the discrete Blaschke product $\fb$.} \label{F:detailBl} \end{figure} This branching will be termed {\sl traditional}; conceptually and computationally very simple, this method has, until now, provided all the branching for discrete function theory, \cite[\S11.3]{kS05}. The flower for a vertex $v$, the central circle (red) and its neighboring circles (its {\sl petals}), is shown for $P_K$ on the left and for $P$ on the right.
Whereas the six petals wrap once about $C_{v_1}$ in the domain, a careful check will show that they wrap twice around $c_{v_1}$ in the range. If $R$ denotes the set of hyperbolic radii for $P$, we may compute the angle sum $\theta_R(v_1)$ at $c_{v_1}$. Expressed in terms of angle sums, the branching is reflected in the fact that $\theta_{R_K}(v_1)=2\pi$ in the domain, while $\theta_{R}(v_1)=4\pi$ in the range. Mapping the faces about $C_{v_1}$ onto the corresponding faces about $c_{v_1}$ realizes a 2-fold branched cover in a neighborhood of the center of $c_{v_1}$ --- meaning a branched covering surface in the standard topological sense. Similar behavior could be observed locally at the other branch vertex, $v_2$, while at all other interior vertices the map between faces is locally univalent. In summary, the circle packing map $\fb:P_K\rightarrow P$ is called a {\sl discrete finite Blaschke product} because it displays the salient mapping features of the classical Blaschke product $\fB$: namely, $\fb$ is a self-map of $\bD$, a 3-fold branched covering, it maps the unit circle $3$ times about itself, and it harbors two interior branch points. We have even imposed the same normalization, $\fb(0)=0$ and $\fb(i)=i$. Additional features of such discrete analytic functions are developed in the relevant literature: Note in \F{detailBl} how much the circles for a branch vertex shrink under $\fb$; this ratio of radii mimics the vanishing of the derivative at a branch point. Note in \F{discBl} how $\fb$ draws the interior circles together; this is the discrete hyperbolic contraction principle. Note that the circles for $\alpha$ are centered at the origin in both $P_K$ and $P$, but the latter is much smaller: this reflects the discrete Schwarz Lemma. On the other hand, the horocycle associated with $\gamma$ (blue) is much larger in $P$ than in $P_K$, reflecting the behavior of angular derivatives at the boundary. 
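The angle sums just discussed are elementary to compute from a label of radii. As an illustration only (this is not the \CP\ implementation, and the function names are ours), here is a sketch for a hyperbolic tangency packing with finite radii, using the hyperbolic law of cosines on the carrier triangle with side lengths $r_v+r_u$, $r_v+r_w$, $r_u+r_w$:

```python
import math

def hyp_angle(rv, ru, rw):
    """Angle at the circle labeled rv in a triple of mutually tangent
    hyperbolic circles with finite radii rv, ru, rw.  The carrier
    triangle has sides rv+ru, rv+rw, ru+rw; the angle follows from
    the hyperbolic law of cosines."""
    a, b, c = ru + rw, rv + ru, rv + rw          # side a is opposite v
    cos_alpha = (math.cosh(b) * math.cosh(c) - math.cosh(a)) / (
        math.sinh(b) * math.sinh(c))
    return math.acos(max(-1.0, min(1.0, cos_alpha)))

def angle_sum(rv, petal_r):
    """theta_R(v) at an interior vertex: sum of face angles over the
    closed petal list (first petal repeated at the end)."""
    return sum(hyp_angle(rv, petal_r[i], petal_r[i + 1])
               for i in range(len(petal_r) - 1))
```

Shrinking the center label raises the angle sum, so a branch vertex with target $4\pi$ simply carries a much smaller label than its unbranched neighbors would suggest; this monotonicity is what drives the Perron computations mentioned throughout.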
Discrete analytic function theory is rife with such parallel phenomena for a wide variety of situations, including the Ahlfors and Weierstrass examples to come. \subsection{Discrete Ahlfors Function}\label{SS:DiscAhlfors} We build a clean example that mimics the classical Ahlfors function $\fA$ described earlier. Our complex $K$ triangulates a topological annulus. Its maximal packing $P_K$ is represented in \F{AnnulusK}. \begin{figure} \caption{Maximal packing $P_K$ for a combinatorial annulus $K$, represented in a fundamental domain in $\bD$.} \label{F:AnnulusK} \end{figure} A bit of explanation may help here: The maximal packing actually lives on a conformal annulus $\mathcal A$, with circles measured in its intrinsic hyperbolic metric. However, as $\bD$ is the universal cover of $\mathcal A$, we can lift the packing to lie in a fundamental domain within $\bD$ --- that is what we see in \F{AnnulusK}. The boundary edges in red represent the lifts of a cross-cut of $\mathcal A$ and are identified by the hyperbolic M\"obius transformation $\gamma$ of $\bD$ which generates the covering group for $\mathcal A$. Applying $\gamma$ to the circles of \F{AnnulusK}, one would get new circles which blend seamlessly along the cross-cut. We have chosen $K$ with foresight, as it displays two particularly helpful symmetries. The line in the center of \F{AnnulusK} marks the combinatorial midline of the annulus: $K$ is symmetric under reflection in this. Moreover, there is an order-two translational symmetry along this midline; the automorphism $\sqrt{\gamma}$ will (modulo $\gamma$) carry $P_K$ to itself. Topology demands, as with the classical Ahlfors function $\fA(z)$, that we have two simple branch points. Choose the midline vertices $v_1$ and $v_2$, whose circles are red in \F{AnnulusK}; these two are fixed by the reflective symmetry and interchanged by the translational symmetry.
Prescribing traditional branching at $v_1,v_2$ results in the branched circle packing, $P$, of \F{AnnulusP}. The mapping $\fa:P_K\rightarrow P$ is thus a discrete analytic function from $\mathcal A$ to $\bD$. \begin{figure} \caption{The branched packing $P$ for combinatorial annulus $K$.} \label{F:AnnulusP} \end{figure} Due to its mapping properties, we refer to $\fa$ as a {\sl discrete Ahlfors function}. In particular: The boundary circles of $P_K$ are horocycles; in \F{AnnulusK}, those on one boundary component are blue, those of the other, green. We would expect the boundary circles of $P$ to be horocycles as well, meaning that $\fa$ maps each boundary component to the unit circle. With a careful look in \F{AnnulusP}, one can disentangle the closed chain of blue horocycles reaching once around the unit circle and the second closed chain of green horocycles doing the same. The branch circles, $C_{v_1}, C_{v_2}$ in $P_K$, and their images, $c_{v_1},c_{v_2}$ in $P$, are red. We have normalized by applying an automorphism of $\bD$ that centers $c_{v_1}$ and $c_{v_2}$ on the imaginary axis, symmetric with respect to the origin. Thus, $P$ represents in a discrete way a double covering of $\bD$ branched over two points. These are all hallmarks of the image of an Ahlfors function and mimic the classical function $\fA$. For reference, we have drawn in red the edges of $P$ corresponding to the red cross-cut in $P_K$. The computation of $P$ deserves special attention. Standard Perron methods allow one to compute a hyperbolic packing label $R$ for $K$ so that $R(w)=\infty$ for each boundary vertex $w\in K$ and so that the angle sums satisfy $\theta_{R}(v_j)=4\pi$ for $v_1,v_2$. There is nothing special in computing $R$. There is a second step, however: with $R$ in hand, one then lays out the circles in sequence and normalizes to get the packing $P$ of \F{AnnulusP}. But {\sl why} does this second step work so nicely?
In circle packing, the laying out of circles is akin to analytic continuation of an analytic function element, and since $K$ is an annulus, its fundamental group is generated by some simple, closed, non\-null\-homotopic loop $\Gamma$. Analytic continuation along $\Gamma$ would generically lead to a non-trivial holonomy: that is, given a function element $\mathfrak f$ defined at a point of $\Gamma$, one would anticipate a non-trivial automorphism $m$ of $\bD$ so that analytic continuation of $\mathfrak f$ about $\Gamma$ would lead to a new element $m(\mathfrak f)$, $m(\mathfrak f)\not= \mathfrak f$. In discrete terms, after laying out the circle $c_v$ for some vertex $v$ of $\Gamma$, and then laying out successively tangent circles for the vertices along $\Gamma$, one would not expect that upon returning to $v$ one would lay out the same circle $c_v$. Generically, there is a non-trivial automorphism $m$ so that upon returning to $v$ one lays out $m(c_v)\not= c_v$. As it happens here, things work out because of the symmetries built into $K$ --- the holonomy $m$ is trivial, so the layout process results in a coherent branched circle packing $P$. The holonomy issue is key to later considerations. \subsection{Discrete Weierstrass Function}\label{SS:DiscWeier} For this example our complex $K$ triangulates a topological torus. Its maximal packing $P_K$ is shown in \F{WeierP}. Here again the maximal packing actually lives in a conformal torus $\mathcal T$ with its intrinsic euclidean metric. As $\bC$ is the universal cover of $\mathcal T$, we may lift the packing to $\bC$, and this lift is what we see on the left in \F{WeierP}. This packs a fundamental domain, delineated by the color-coded edges, which represent the side-pairings. We again have chosen $K$ with important symmetries. The four colored circles, symmetrically placed, are our chosen branch circles. 
The image packing $P$ on the sphere is shown on the right in \F{WeierP}; two branch circles are visible on the front, the other two are (due to a normalization) antipodal to these. The result is a discrete analytic function $\fw:P_K\rightarrow P$ which maps $\mathcal T$ to $\bP$, that is, a {\sl discrete meromorphic} function. Reprising its mapping properties, we are justified in calling $\fw$ a {\sl discrete Weierstrass function}. \begin{figure} \caption{$P_K$ packs the fundamental domain for a combinatorial torus $K$ and shows four designated branch vertices and color-coded side pairings. $P$ is the branched image packing for $K$ on $\bP$.} \label{F:WeierP} \end{figure} There is not yet a practical circle packing algorithm in spherical geometry, so the computation of $P$ takes a circuitous route. We puncture $K$ at one of the intended branch vertices, say $v_4$, and consider $K'=K\backslash\{v_4\}$. This has a single boundary component, and the usual Perron arguments yield a hyperbolic packing label $R'$ so that $R'(w)=\infty$ for every boundary vertex $w$ and so that $\theta_{R'}(v_j)=4\pi, j=1,2,3$. Since $K'$ has genus 1, its fundamental group is again an obstruction to the layout process and a risk of non-trivial holonomy. However, symmetry saves us once more, and we obtain a coherent branched packing $P'$ in $\bD$ for $K'$. Note, in particular, that the boundary circles of $P'$ are horocycles, and topological counting arguments show that the chain of boundary horocycles must wrap twice around the unit circle. Stereographically projected to $\bP$, $P'$ lies in one hemisphere. The other hemisphere, treated as the inside of a circle, is tangent to the circles for all the former neighbors of $v_4$, so we simply declare this to be the circle for $v_4$. The neighbors wrap twice around, so this is the fourth branch point and, after a normalization, we arrive at $P$.
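As a reminder of why four branch points appear here, and why exactly two appeared in the Ahlfors setting, one can run the Riemann--Hurwitz count for these degree-two maps, where $B$ denotes the number of simple branch points:
\[
\chi(\mathcal T) \;=\; 2\,\chi(\bP) \;-\; B, \qquad 0 \;=\; 2\cdot 2 - B, \qquad B = 4,
\]
while for the annulus, $\chi(\mathcal A)=2\,\chi(\bD)-B$ gives $0=2-B$, so $B=2$.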
Note: Our methods clearly yield a coherent branched packing $P$ in this case, and have done the same in literally scores of similarly structured complexes. The key seems to lie with the symmetries in $K$ and in the branch set. We leave this as a {\bf Conjecture:} {\sl If $K$ is a combinatorial torus with two commuting translational symmetries of order two and $\omega=\{v_1,v_2,v_3,v_4\}$ is an orbit of vertices under these symmetries, then traditional branching at the points of $\omega$ leads to trivial holonomy.} The good news is that we have successfully created discrete analogues for our three classical models: Blaschke products, Ahlfors functions, and Weierstrass functions. Let us now look into the bad news. \section{The Discretization Issue}\label{S:DiscIssue} Whenever a continuous theory is discretized, whether in geometry, topology, differential equations, or $p$-adic analysis, problems will crop up. Replacing a continuous surface by a triangulated one, for example, leads to combinatorial restrictions. Thus a branch vertex must be interior and have at least five neighbors. We expect this. However, we are after a starker discretization effect: {\sl there are only finitely many possible locations for discrete branching}. Our discrete Blaschke product could not branch precisely at the points $p_1,p_2$ prescribed for its classical model $\fB$, and we instead chose to branch using the nearby circles $C_{v_1}$ and $C_{v_2}$. This effect is admittedly minor --- the qualitative behavior of the discrete function is little affected by the misplaced branching. For the Ahlfors and Weierstrass cases, however, this problem is existential --- discrete versions may fail to exist. We will illustrate the problem in the Ahlfors case --- and return to fix it in \S\ref{S:FixAhlfors}. Nearly any break in the combinatorial symmetries of the complex $K$ behind \F{AnnulusK} will cause the subsequent Ahlfors construction to fail.
Most such failures will be difficult to fix, so we choose carefully: we make two small changes {\it via} edge flips so that we preserve the reflective symmetry but break the translational symmetry. The new complex will be denoted $K'$. Repeating the Ahlfors construction from \S\ref{SS:DiscAhlfors} with $K'$ and using the same $v_1,v_2$ as branch vertices gives the result of \F{FailedAhlfors}. \begin{figure} \caption{A failed attempt at an Ahlfors function using traditional branching. The non-trivial holonomy shows up in misalignment of the cross-cuts, and the failure of the gray circles to be tangent to one another.} \label{F:FailedAhlfors} \end{figure} There is no difficulty in computing the branched packing label $R'$ for $K'$; however, the layout process does not give a coherent circle packing. The problem might be difficult to see in \F{FailedAhlfors}, but look to the red edge paths, which correspond to layouts of the cross-cut: they are no longer coincident, as they were in \F{AnnulusP}. One is a shifted copy of the other, reflecting a non-trivial holonomy associated with the generator of the fundamental group for $K'$. More precisely, there is a non-trivial hyperbolic M\"obius transformation $m$ of $\bD$ which maps one of these red cross-cut curves onto the other. One would have to follow things very closely in the image to confirm the problem, but we illustrate with the two gray circles, which are supposed to be tangent to one another. As it happens, no matter what pair of vertices of $K'$ is chosen as branch points, the Ahlfors construction will fail --- there will be no coherent image packing. It has been a long road to get to this point, but this is where our work begins: our goal is to introduce {\sl generalized} branching with the flexibility to make the discrete theory whole. We will illustrate it in action in \S\ref{S:FixAhlfors} by creating an Ahlfors function for this modified complex $K'$.
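The layout process and its coherence requirement can be seen in miniature with a single flower. In the euclidean tangency case, petal centers are placed one after another around the center circle, the argument advancing by each face angle; the closed petal chain returns exactly to its starting circle precisely when the angle sum is an integer multiple of $2\pi$. The sketch below (our own notation, not \CP's layout code) illustrates this closure test; the global layout along a cross-cut is the same idea face by face, and a failure like that of \F{FailedAhlfors} is this closure test failing around a generator of the fundamental group.

```python
import cmath
import math

def euc_angle(rv, ru, rw):
    """Angle at the circle labeled rv in a euclidean tangent triple
    (ordinary law of cosines on sides rv+ru, rv+rw, ru+rw)."""
    a, b, c = ru + rw, rv + ru, rv + rw
    return math.acos(max(-1.0, min(1.0, (b * b + c * c - a * a) / (2 * b * c))))

def layout_flower(rv, petal_r):
    """Center circle at the origin; each petal center sits at distance
    rv + r_i from the origin, with the argument advanced by the face
    angle at the center.  Returns petal centers as complex numbers."""
    centers, arg = [], 0.0
    for i, r in enumerate(petal_r):
        centers.append(cmath.rect(rv + r, arg))
        if i + 1 < len(petal_r):
            arg += euc_angle(rv, r, petal_r[i + 1])
    return centers

# Regular hexagonal flower: six unit petals about a unit center give
# angle sum exactly 2*pi, so the closed petal list returns to its start.
centers = layout_flower(1.0, [1.0] * 7)
```

With any other center label the seventh (repeated) petal would land away from the first, the one-flower analogue of the shifted cross-cut and non-tangent gray circles in the failed example.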
\section{Generalized Branching}\label{S:GenBranching} Branching is perhaps most familiar in the analytic setting. Let $\ff:G\rightarrow \bC$ be a non-constant analytic function on an open domain $G\subset \bC$. Suppose $z\in G$ and $w=\ff(z)$. For $\delta>0$, consider the disc $D=D(w,\delta)=\{\zeta:|\zeta-w|<\delta\}$ and the component $U$ of the preimage ${\ff}^{-1}(D(w,\delta))$ containing $z$. For $\delta$ sufficiently small, $U$ will be a topological disc in $G$ and the restriction of $\ff$ to the punctured disc $U'=U\backslash\{z\}$ will be a locally 1-to-1 proper mapping onto the punctured disc $D'=D\backslash\{w\}$. In particular, one can prove the existence of some $N\ge 1$ so that every point of $D'$ has $N$ preimages in $U'$. In this analytic case, if $N>1$, then $\ff^{(k)}(z)=0$ for $k=1,2,\cdots, N-1$ and we say that $z$ is a {\sl branch point} of order $N-1$ for $\ff$. We refer to $w$ as its {\sl branch value}. This is, in fact, a topological phenomenon having little to do with analyticity: by Sto\"ilow's Theorem, \cite{sS38}, the same local behavior occurs whenever the map $\ff$ is an open, continuous, light-interior mapping. In particular, this applies to our maps between the carriers of circle packings. One sees it on display for the traditional branch point illustrated in \F{detailBl}. We can set the stage for generalized branching by simply enlarging the singleton set $\{z\}$ for a branch point to a compact topological disc $H$. If $H$ is small enough, then the mapping behavior in the neighborhood of $H$ is unchanged: that is, $\ff$ will be a locally 1-to-1 proper map, of valence $N$, from the annulus $U'=U\backslash H$ onto the annulus $D'=D\backslash \ff(H)$. When $N>1$ we will say that $\ff$ has {\sl generalized branching} of order $N-1$ in $H$. The point is that the branching is reflected in the mapping behavior between the annuli $U'$ and $D'$, even if the precise location of that branching is hidden within the hole $H$.
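For orientation, the local picture in the analytic case is the familiar normal form at a branch point of order $N-1$:
\[
\ff(\zeta) \;=\; w + a_N\,(\zeta - z)^N + O\!\big((\zeta-z)^{N+1}\big), \qquad a_N\neq 0,
\]
so small circles about $z$ map to curves winding $N$ times about $w$. It is exactly this winding on the annulus $U'$ that generalized branching preserves when the point $z$ is replaced by the hole $H$.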
Let us apply this notion to the classical Blaschke product $\fB$ discussed earlier. About each of its branch points $p_j,j=1,2,$ we can choose a small compact topological disc $H_j$ and open neighborhood $U_j$ of $H_j$ so that $\fB$ has generalized branching of order $1$ in $H_j$. Making $H_1,H_2$ smaller, if necessary, we may assume $U_1$ and $U_2$ have disjoint closures. This leaves a triply connected open set $\Omega=\bD\backslash\{\overline{U}_1\cup \overline{U}_2\}$. The restriction of $\fB$ to $\Omega$ is a $3$-valent map onto $\bD$ which maps $\partial U_j$ to a curve about the branch value $\fB(p_j),j=1,2$. If we were to perturb $p_j$ to a new point $\tilde p_j$ within $H_j, j=1,2$, then the associated finite Blaschke product $\widetilde{\fB}$ would be qualitatively indistinguishable from $\fB$ on $\Omega$: that is, it is very difficult to discern where the branch points actually lie in $H_1,H_2$. This is the type of flexibility we need for discrete finite Blaschke products and motivates our strategy. Given $K$ and its maximal packing $P_K$, we choose interior vertices $v_1,v_2$ so their circle centers are near $p_1,p_2$. Choose small combinatorial neighborhoods $H_1,H_2$ of $v_1,v_2$ and define $L=K\backslash\{H_1\cup H_2\}$, analogous to the open set $\Omega$ earlier. Requiring simple branching at $v_1$ and $v_2$ leads to the discrete Blaschke product $\fb$ we discussed earlier. However, we have developed machinery, discussed in the next section, that allows us to modify the combinatorics and packing parameters inside $H_1$ and $H_2$. Patching these new combinatorics into $L$ gives a new combinatorial disc $\wK$ on which we define a new discrete Blaschke product $\tilde{\fb}$. The parameters involved allow us to perturb the apparent branch locations. In other words, just as with $\fB$ and $\widetilde{\fB}$ defined on $\Omega$, both $\fb$ and $\tilde{\fb}$ are defined on $L\subset K\cap \wK$ and are qualitatively indistinguishable there.
Our global intention is to make adjustments in the small locales $H_1$ and $H_2$ so that $\tilde{\fb}$ behaves like a discrete Blaschke product having branch points precisely at $p_1,p_2$. Mimicking this individual Blaschke product $\fB$ may seem to be a lot of effort for little gain. However, if one thinks more broadly of the family of Blaschke products parameterized continuously by $p_1$ and $p_2$, the goal of continuously parameterized discrete versions makes more sense. The effort is also justified when the very existence of the discrete versions depends on this added flexibility, as with our broken Ahlfors example. Let us now describe the mechanics. \section{Local Mechanics}\label{S:Local} We describe discrete generalized branching, which takes two forms, termed {\sl singular} and {\sl shifted} branching. Each involves identifying a {\sl black hole} $H$, a small combinatorial locale to support the branching, and its {\sl event horizon} $\Gamma=\partial H$, the chain of surrounding edges. Outside of the event horizon, our circle packing mappings are defined in the usual way, so that in an annulus about the black hole one may observe the typical topological behavior described earlier. Adjustments hidden inside the black hole, however, allow our mapping to simulate simple branching at various points. \subsection{Background}\label{SS:Background} We have recalled some circle packing mechanics, but as our work involves new features, we review the basics. A complex $K$ is assumed to be given. The fundamental building blocks of $K$ are its {\sl triangles} and {\sl flowers}. The triangles are the faces $\langle v,u,w\rangle$. The flowers are sets $\{v;v_1,v_2,\cdots,v_{n+1}\}$ where $v$ is a vertex and $v_1,\cdots,v_{n+1}$ is the counterclockwise list of neighbors in $K$. These neighbors, the {\sl petals}, define the fan of faces containing $v$. Here $n$ is the number of faces; when $v$ is interior, $v_{n+1}=v_1$.
In talking about a circle packing $P$ for $K$, the radii and centers are, of course, the ultimate target. However, proofs of existence and uniqueness (and computations) depend on the standard Perron methods first deployed in \cite{BSt91a}. Given $K$, the fundamental data lies in three lists: the {\sl label} $R=\{R(v):v\in K\}$, edge overlaps $\Phi=\{\Phi(e): e=\langle u,v\rangle\}$, and target angle sums $A=\{A(v):v\in K\}$. Each will require some extension. \begin{itemize} \item{{\bf Labels:} The labels $R(v)$ are putative radii (they become actual radii only when a concrete packing is realized).} \item{{\bf Overlap Angles:} For an edge $e=\langle v,w\rangle$ of $K$, the overlap $\Phi(e)$ represents the desired (external) angle between the circles $c_v,c_w$ in $P$. Interest is often in ``tangency'' packings; in this case, $\Phi$ is identically zero and hence does not appear explicitly. However, from Thurston's first introduction of circle packing, non-tangency packings were included and we need them here.} \item{{\bf Target Angle Sums:} Given $R$ and $\Phi$, one can readily compute for any triangle $\langle u,v,w\rangle$ the angle which would be realized at $v$ if a triple of circles with the given labels (as radii) and edge overlaps were to be laid out. The {\sl angle sum} $\theta_{R,\Phi}(v)$ is the sum of such angles for all faces containing $v$. The target angle sum $A(v)$ is the intended value for $\theta_{R,\Phi}(v)$. It is typically prescribed only when $v$ is interior, and then it must be an integral multiple of $2\pi$, $A(v)=2\pi k$; this is precisely the result when the petal circles $c_{v_1},c_{v_2},\cdots,c_{v_n}$ wrap $\frac{A(v)}{2\pi}=k$ times about $c_v$.} \end{itemize} A circle packing for $K$ is computed by finding a label $R$, termed a {\sl packing label}, with the property that $\theta_{R,\Phi}(v)=A(v)$ for every interior vertex $v$. Typically, the values $R(w)$ for boundary vertices $w$ are prescribed in advance.
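To make these definitions concrete, here is a minimal euclidean stand-in (hedged: the paper's computations are hyperbolic, and the algorithms in \CP\ are more elaborate). It computes angle sums from labels via the law of cosines and finds the packing label at a single interior vertex by bisection, using the monotonicity of the angle sum in the label:

```python
import math

# Euclidean sketch: the center distance for an edge with overlap phi is
#     l^2 = R(u)^2 + R(v)^2 + 2 R(u) R(v) cos(phi),
# reducing to R(u) + R(v) for a tangency packing (phi = 0).
def edge_len(ru, rv, phi=0.0):
    return math.sqrt(ru*ru + rv*rv + 2.0*ru*rv*math.cos(phi))

def face_angle(rv, ru, rw):
    """Angle at v in the triangle of centers for a tangency face <v,u,w>."""
    a, b, c = edge_len(rv, ru), edge_len(rv, rw), edge_len(ru, rw)
    return math.acos((a*a + b*b - c*c) / (2.0*a*b))

def angle_sum(rv, petals):
    """theta_R(v) for an interior vertex with cyclic petal list."""
    return sum(face_angle(rv, petals[i], petals[(i+1) % len(petals)])
               for i in range(len(petals)))

def pack_label(petals, target=2*math.pi, lo=1e-9, hi=1e9, steps=200):
    """Bisect for R(v) with theta_R(v) = target; theta decreases in R(v)."""
    for _ in range(steps):
        mid = math.sqrt(lo * hi)          # log-space bisection over many decades
        if angle_sum(mid, petals) > target:
            lo = mid                      # label too small
        else:
            hi = mid
    return math.sqrt(lo * hi)

# A flower of six unit petals packs with a unit label at the center ...
assert abs(pack_label([1.0]*6) - 1.0) < 1e-9
# ... and for arbitrary petals the prescribed angle sum 2*pi is achieved.
petals = [1.0, 2.0, 1.5, 1.0, 2.5, 1.2]
assert abs(angle_sum(pack_label(petals), petals) - 2*math.pi) < 1e-9
```

The monotonicity used here is exactly what makes the Perron infimum computable by iterative adjustment.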
With label in hand, one can position the circles in the pattern of $K$ to get $P$. This positioning stage is a layout process analogous to analytic continuation for analytic functions. Only after the layout does one finally realize circle centers. Our work will be carried out in hyperbolic geometry, where we use the fact that boundary radii may be infinite when associated with horocycles. The various existence, uniqueness, and monotonicity results needed for our applications would hold in euclidean geometry as well. The Perron method for computing a packing label proceeds {\sl via} {\sl superpacking} labels, that is, labels $R$ for which the inequality $\theta_{R,\Phi}(v)\le A(v)$ holds for all interior $v$ and which have values no less than the designated values at boundary vertices. It is easy to show that the family of superpacking labels is nonempty and that the packing label is the family's infimum. This infimum may be approximated to any desired accuracy by an iterative adjustment process --- this is basically how \CP\ computations are carried out. The following condition ($\star$) is required to ensure non-degeneracy: If $\{e_1, e_2,\cdots,e_k\}$ is a simple closed edge path in $K$ which separates some edge-connected non-empty set $E$ of vertices from $\partial K$, then the following inequality must hold \begin{equation}\tag{$\star$}\label{E:blackhole} \sum_1^k(\pi-\Phi(e_j))\ge 2\pi+\sum_{v\in E}(A(v)-2\pi). \end{equation} Our work here requires the following extensions to the given data: \begin{itemize} \item{{\bf Zero Labels:} We will introduce situations in which labels for certain interior vertices go to zero, corresponding with circles that in the final configuration have degenerated to points, namely to their centers.
Zero radii actually fit quite naturally into the trigonometric computations, but we will only encounter them for isolated vertices.} \item{{\bf Deep Overlaps:} When introducing circle packing, Thurston included specified overlaps $\Phi(e)$, as we do. In general, however, the restriction $\Phi(e)\in [0,\pi/2]$ is required for existence. We will allow {\sl deep overlaps}, that is, overlaps in $(\pi/2,\pi]$. Note that overlaps may already be specified as part of the original packing problem under consideration, but these will remain in the range $[0,\pi/2]$. It is only in the modifications within black holes that deep overlaps may be needed, and these will carry clear restrictions.} \item{{\bf Branching:} Traditional branching, described earlier in the paper, is associated with target angle sums $A(v)=2\pi k$ for $k\ge 2$. These are subject to the condition ($\star$) noted above, which concerns interactions of combinatorics and angle sum prescriptions. It traces to the simple observation that it takes at least 5 petal circles to go twice around a circle. The tight conditions emerged first in work on branched tangency packings in \cite{tD93} and \cite{pB93}. These were modified to incorporate overlaps in \cite{BS96}; condition ($\star$) parallels the conditions there while allowing equality, which is associated with zero labels in black holes, as we see shortly.} \end{itemize} The monotonicities behind the Perron arguments depend on our ability to realize any face $\langle u,v,w\rangle$ with a triple of circles $\{c_v,c_u,c_w\}$ having prescribed radii and overlaps. It is relatively easy to see that accommodating deep overlaps and zero labels requires some side conditions on $\Phi$. What we need is given in the following lemma, a minor extension of the hyperbolic results in \cite{jA12}.
\begin{lemma}\label{L:Ashe} Given three hyperbolic radii, $r_1,r_2,r_3$, at least two of which are non-zero, and given three edge overlaps $\phi_{12},\phi_{23},\phi_{31}\in[0,\pi]$ satisfying $\phi_{12}+\phi_{23}+\phi_{31}\le \pi$, there exists a triple $\langle c_1,c_2,c_3\rangle$ of circles in $\bD$ realizing the given radii and overlaps, and the triple is unique up to orientation and conformal automorphisms of $\bD$. The angles $\alpha,\beta,\gamma$ of the triangle $T$ formed by the centers are continuous functions of the radii and overlaps. Moreover, $\alpha$ is strictly decreasing in $r_1$, while area$(T)$ is strictly increasing in $r_1$. Likewise, $\beta$ (resp.\ $\gamma$) is strictly increasing in $r_1$ (assuming $r_2$ (resp.\ $r_3$) is finite). \end{lemma} In our generalized branching, zero labels and deep overlaps are temporary devices only within black holes; we modify the combinatorics and set overlap parameters in there to control apparent branch locations. The results, however, are then used to lay out a circle packing $P$ for the original complex $K$; $P$ itself does not involve any zero labels or deep overlaps, and aside from ambiguity about one circle in the shifted branching case, $P$ is a normal circle packing configuration. We conclude these preparations by noting the two conditions which are necessary to guarantee existence and uniqueness of the packings: condition ($\star$) and the following condition ($\star\star$) \begin{equation}\tag{$\star\star$}\label{E:DeepCondition} \Phi(e_1)+\Phi(e_2)+\Phi(e_3)\le \pi\ \text{if edges } \{e_1,e_2,e_3\}\ \text{form a face of }K. \end{equation} With this, we may now describe our two discrete generalized branching mechanisms. \subsection{Singular Branching}\label{SS:SingBranching} Singular branching is used to simulate a branch point lying in an interstice of $P_K$. The interstice is defined by a face $\langle v_1,v_2,v_3\rangle$, corresponding to red, green, and blue circles, respectively, in our illustrations.
The black hole is the union of the target interstice and the three interstices sharing its edges. The combinatorics imposed and the event horizon are illustrated in \F{SingComb}. The complex $K$, modified inside the black hole, will be denoted $\wK$ and serves as our complex for subsequent computations. The circles of \F{SingComb} are a device for display only and are not part of the final circle configuration. Indeed, before computing the circles of the branched packing we need to prescribe target angle sums, $A$, and edge overlaps, $\Phi$. \begin{figure} \caption{Combinatorics for a singular black hole.} \label{F:SingComb} \end{figure} Interior to the event horizon we have introduced 4 additional vertices. Three of these, $h_1,h_2,h_3$, are termed {\sl chaperones} since they help guide the circles for $v_1,v_2,$ and $v_3$; we label $h_3$ in \F{SingComb}. A fourth vertex $g$, in the center, is called the {\sl fall guy}. Specify target angle sums $A(v)\equiv 2\pi$ for all interior vertices $v\in \wK$ with the exception of $g$, setting $A(g)=4\pi$. Singular branching is controlled {\sl via} overlap parameters associated with a partition of $\pi$, $\gamma_1+\gamma_2+\gamma_3=\pi$. For $i=1,2,3,$ the value $\gamma_i>0$ represents the overlap angle prescribed in $\Phi$ for the edges from $v_i$ to the chaperone circles on either side. These three pairs of edges are color coded in \F{SingComb}. We set $\Phi(e)= 0$ for all other edges of $\wK$. Before describing how these parameters are chosen, observe that we are assured of a circle packing $\wP$ for $\wK$ with label $\wR$, interior angle sums $A$, and overlaps $\Phi$. In particular, if $\Gamma$ denotes the closed edge path through the 6 colored faces surrounding the fall guy $g$, then condition ($\star$) holds whenever the angle sum prescription satisfies $A(g)\le 4\pi$, with equality in ($\star$) when $A(g)=4\pi$.
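The bookkeeping behind this equality can be checked directly (a hedged sketch; the helper below is a hypothetical construction of ours, not part of \CP). Assuming, as we read the figure, that the path around $g$ has six edges carrying the overlaps $\gamma_1,\gamma_1,\gamma_2,\gamma_2,\gamma_3,\gamma_3$, the slack in ($\star$) vanishes exactly when $A(g)=4\pi$:

```python
import math

# Non-degeneracy bookkeeping: for an edge path separating a vertex set E
# from the boundary, condition (*) asks
#     sum_j (pi - Phi(e_j)) >= 2*pi + sum_{v in E} (A(v) - 2*pi).
def star_slack(path_overlaps, interior_targets):
    lhs = sum(math.pi - phi for phi in path_overlaps)
    rhs = 2 * math.pi + sum(a - 2 * math.pi for a in interior_targets)
    return lhs - rhs          # >= 0 means (*) holds; 0 is the equality case

# Six edges with overlaps g1, g1, g2, g2, g3, g3, the gammas partitioning pi.
g1, g2, g3 = 0.22 * math.pi, 0.40 * math.pi, 0.38 * math.pi
assert abs((g1 + g2 + g3) - math.pi) < 1e-12
assert abs(star_slack([g1, g1, g2, g2, g3, g3], [4 * math.pi])) < 1e-12
# With A(g) = 2*pi (no branching) there is comfortable slack:
assert star_slack([g1, g1, g2, g2, g3, g3], [2 * math.pi]) > 0
```

The vanishing slack is what forces the zero label at $g$, as the text explains next.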
Traditional Perron and layout arguments imply the existence and uniqueness of the circle packing $\wP$ in which the circle for $g$ has radius zero. An example of the result is illustrated in \F{SingBranch}. For this we set roughly $\gamma_1=0.22\pi,\gamma_2=0.40\pi$, and $\gamma_3=0.37\pi$. \begin{figure} \caption{Image circle packing in the neighborhood of singular branching, and detail zoom.} \label{F:SingBranch} \end{figure} This image takes some time to understand. The circle for $g$ has degenerated to a point, the branch value, which is at the common intersection point of the circles for $v_1,v_2,v_3$ and also for chaperones $h_1,h_2,h_3$; it is labeled $w$ in the detail zoom. The branching is confirmed in the larger image by observing how the event horizon wraps twice about the branch value. If we disregard the chaperones and the fall guy, the remaining circles of $\wP$ realize a tangency circle packing for the original complex $K$. That is, the black hole structure was needed only to guide the layout of the original circles. This portion of the layout can best be understood as living on a two-sheeted surface $S$ branched above $w$. Note, for instance, that the overlap of the red and blue circles is only in their projections to the plane: in actuality, the red part of the intersection is on one sheet of $S$ and the blue is on the other. This shows in the orientation of the red, green, and blue circles, which in projection is the reverse of their orientation in $P_K$. Finally, what about choosing parameters $\gamma_1,\gamma_2,\gamma_3$ to get the desired branch point? \F{SingScheme} illustrates our scheme. We have isolated the interstice formed by circles for $v_1,v_2,v_3$ in $P_K$. The dashed circle is the common orthogonal circle through the intersection points and defines a disc $D$ which will be treated as a model of the hyperbolic plane. Point $p$ indicates a location where one might wish to have branching occur.
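The computation this scheme calls for can be sketched as follows (a hypothetical implementation of ours; we treat $D$ as the unit disc and write `z1, z2, z3` for the three ideal intersection points). A disc automorphism moving $p$ to the origin carries the hyperbolic geodesics from $p$ to radii, so the three angles at $p$ are the angular gaps between the images of the ideal points, and the overlap parameters are their supplements, which automatically partition $\pi$:

```python
import cmath, math

# The automorphism T : z -> (z - p)/(1 - conj(p) z) moves p to the origin;
# geodesics from the origin are radii, so the angles alpha_j at p are the
# angular gaps between the T(z_j), and the parameters are pi - alpha_j.
def singular_gammas(p, z1, z2, z3):
    T = lambda z: (z - p) / (1.0 - p.conjugate() * z)
    args = sorted(cmath.phase(T(z)) for z in (z1, z2, z3))
    alphas = (args[1] - args[0],
              args[2] - args[1],
              2 * math.pi - (args[2] - args[0]))
    return [math.pi - a for a in alphas]

zs = [cmath.exp(1j * t) for t in (0.0, 2.2, 4.1)]     # ideal points
gammas = singular_gammas(0.2 + 0.1j, *zs)
assert abs(sum(gammas) - math.pi) < 1e-12             # gammas partition pi
assert all(0.0 < g < math.pi for g in gammas)         # p inside the ideal triangle
```

The positivity condition on the $\gamma_j$ corresponds to $p$ lying inside the ideal triangle spanned by the three intersection points.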
Hyperbolic geodesics connecting $p$ to the three intersection points on $\partial D$ determine angles $\alpha_1,\alpha_2,\alpha_3$, indexed to correspond with the vertices $v_1,v_2,v_3$. We then define $\gamma_j=\pi-\alpha_j,j=1,2,3$. One has complete freedom to choose $\gamma_1$ and $\gamma_2$ in this scheme, subject to the conditions $\gamma_1,\gamma_2>0$ and $\gamma_1+\gamma_2<\pi$. We will be seeing examples for $p$ and the other three red branch points later, in \F{SingMovie}. \begin{figure} \caption{The parameter scheme for singular branching.} \label{F:SingScheme} \end{figure} \subsection{Shifted Branching}\label{SS:ShiftBranching} Shifted branching simulates a branch point lying within an interior circle of $P_K$. Of course, when that point is the center, traditional branching would be the easy choice. This case will be incorporated naturally in our parameterized version, however, so we need not treat it separately. Suppose $v$ is the interior vertex whose circle is to contain the shifted branch point. The black hole combinatorics shown in \F{ShiftComb} are imposed on the flower for $v$. (Note that once again, the circles here are used for display but are not part of our target packing.) \begin{figure} \caption{Combinatorics for a shifted black hole.} \label{F:ShiftComb} \end{figure} The event horizon is the chain of edges through the original petals of the flower for $v$ (seven petals, in this case, green and blue). Interior to this horizon, we split $v$, replacing it with the twin vertices, denoted $t_1$ and $t_2$ and corresponding to the circles in two shades of red. We introduce two chaperone vertices $h_1,h_2$, respectively green and blue, and a fall guy vertex $g$, black; we label only chaperone $h_2$ in the figure. With these combinatorics inside the event horizon, we again have a new complex $\wK$, for which we need target angle sums $A$ and edge overlaps $\Phi$. Each chaperone neighbors two original petals, denoted $w_i$ and $j_i$.
The petal $j_i$ is known as the {\sl jump} circle because its chaperone $h_i$ and an associated parameter $\gamma_i$ facilitate its detachment from one twin and its attachment to the other. The parameters here are $\gamma_1$ and $\gamma_2$, chosen independently within $[0,\pi]$, and used to define overlaps with the chaperones. In particular, for $i=1,2,$ prescribe $\Phi(\langle h_i,w_i\rangle)=\gamma_i$ and $\Phi(\langle h_i,j_i\rangle)= \pi-\gamma_i$; the edges are shown as solid and dashed lines, respectively, in \F{ShiftComb}. The other overlaps in $\Phi$ are zero, so condition ($\star\star$) holds. Target angle sums are defined as before, namely, $A= 2\pi$ at interior vertices of $\wK$, save for the fall guy, with $A(g)=4\pi$. Putting aside the choice of jump circles and parameters for now, we are assured of a circle packing $\wP$ for $\wK$ with label $\wR$, interior angle sums $A$, and overlaps $\Phi$. If $\Gamma$ is the chain of edges through the four neighbors of $g$, edges for which tangency is specified, then equality holds in condition ($\star$), so in $\wR$ the radius of the fall guy is necessarily zero. \F{Shift2pi} illustrates the circle packing for $\wK$ before we prescribe the branching, in other words, with the target angle sum at $g$ kept at $2\pi$. We abuse notation by referring to circles by their vertex indices. The original petal circles, starting with $j_1$ and ending at $w_2$, are shown in green: these are tangent to twin $t_2$. Likewise, those starting at $j_2$ and ending at $w_1$ are shown in blue: these are tangent to twin $t_1$. \begin{figure} \caption{The jump circles and parameters are set for a shifted black hole before the branching is imposed.} \label{F:Shift2pi} \end{figure} We consider the action at chaperone $h_1$. First, recall two facts: (1) When a triple of circles has edge overlaps summing to $\pi$, then the three share a common intersection point; and (2) when two circles overlap by $\pi$, one is interior to the other.
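Both facts follow from the distance formula between centers of overlapping circles and can be checked numerically. The sketch below (a euclidean stand-in of ours, not \CP\ code) realizes a triple with overlaps summing to $\pi$ and confirms the common point:

```python
import math

# Centers of circles with radii ri, rj and overlap angle phi lie at distance
#     d = sqrt(ri^2 + rj^2 + 2 ri rj cos(phi)).
def center_dist(ri, rj, phi):
    return math.sqrt(ri*ri + rj*rj + 2.0*ri*rj*math.cos(phi))

# Fact (2): overlap pi gives d = |r1 - r2|, internal tangency with one
# circle inside the other.
assert abs(center_dist(1.0, 0.4, math.pi) - 0.6) < 1e-12

# Fact (1): lay out three circles whose overlaps sum to pi ...
r1, r2, r3 = 1.0, 0.8, 0.6
p12, p23 = 0.5, 1.2
p31 = math.pi - p12 - p23
d12 = center_dist(r1, r2, p12)
d23 = center_dist(r2, r3, p23)
d31 = center_dist(r3, r1, p31)
c3x = (d12*d12 + d31*d31 - d23*d23) / (2.0*d12)   # center 3, with center 1
c3y = math.sqrt(d31*d31 - c3x*c3x)                # at origin, 2 at (d12, 0)
# ... then an intersection point of circles 1 and 2 lies on circle 3 too.
x = (d12*d12 + r1*r1 - r2*r2) / (2.0*d12)
ys = (math.sqrt(r1*r1 - x*x), -math.sqrt(r1*r1 - x*x))
assert any(abs(math.hypot(x - c3x, y - c3y) - r3) < 1e-9 for y in ys)
```

Checking both choices of sign for the intersection point accounts for the reflective ambiguity in the layout.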
Here is how the machinery works at $h_1$. The circle for $w_1$ is tangent to twin $t_1$, $j_1$ is tangent to twin $t_2$, while $h_1$ is tangent to both twins. When $\gamma_1=0$, the overlap of $\pi$ between $h_1$ and $j_1$ forces the jump circle $j_1$ to be tangent to $t_1$. As $\gamma_1$ increases, however, the jump circle separates from $t_1$ until, when $\gamma_1$ reaches $\pi$, $w_1$ has been pulled in to be tangent to $t_2$. In other words, $\gamma_1$ acts like a dial: when positive, it detaches the jump circle from $t_1$, and as it increases, it moves the jump further around $t_2$. The mechanism is similar for chaperone $h_2$, as $\gamma_2$ serves to detach the jump circle $j_2$ from $t_2$ and move it further around $t_1$. Typical parameters $\gamma_1=0.7\pi$ and $\gamma_2=0.4\pi$ were specified for \F{Shift2pi}. Maintaining these while adding branching at $g$, i.e., setting $A(g)=4\pi$, gives the configuration of \F{ShiftBranch}. As usual with branching, the image is rather difficult to interpret, so we point out the key features: The twin circles and chaperones are all tangent to $g$, and the radius for $g$ is zero, so these four circles meet at a single point. The twin circles (red) are nested, as are the chaperones (green and blue). The branch value is the white dot in the detail zoom, at the center of the small twin circle and labeled $w$; we explain this shortly. To confirm the topological behavior of generalized branching, note that the circles for the original petals of $v$ wrap twice around $w$ --- just follow the image of the event horizon in the larger image as it goes through the petal centers and tangency points. The petals are green and blue in the larger image, corresponding, as in \F{Shift2pi}, to which twin they are tangent to. The jump circles $j_1,j_2$ are also labeled.
\begin{figure} \caption{Image circle packing in the neighborhood of shifted branching, with detail zoom.} \label{F:ShiftBranch} \end{figure} As with the singular branch image, the configuration of \F{ShiftBranch} makes sense if one treats it as the projection of circles lying on a two-sheeted surface $S$ branched over $w$. To see this, consider the twins in the detail zoom: $t_1$ is the larger twin, with center at the black dot and radius $r_1$. The smaller twin has center at $w$ and radius $r_2<r_1$. Now imagine attaching a string of length $r_1$ at the black dot and using it to draw the circle for $t_1$ on $S$. As the string sweeps around, it will snag on the white dot at $w$ and, like a yo-yo, trace out the smaller twin on $S$ before finishing $t_1$. In other words, the union of the two twin circles is the projection of all points on $S$ which are distance $r_1$ from the center of $t_1$ (that is, distance {\sl within} $S$). Exactly this thought experiment was the genesis of shifted branching. If we disregard the chaperone circles and twins, the remaining circles constitute a traditional tangency circle packing $P$ for $K$, with the caveat that generically the circle for $v$ is ambiguous --- neither the circle for $t_1$ nor for $t_2$ alone can serve as $c_v$. We need to live with this ambiguity to achieve the branching behavior we want outside the event horizon. (Having said this, there are many settings which lead to identical twin circles, so $P$ then has this common circle as $c_v$. All such configurations coincide with the circle packing we get when we choose traditional branching at $v$.) This brings us to the matter of configuring black hole combinatorics and parameters for this shifted branching; that is, choosing the jump circles $j_1,j_2$ and their associated overlap parameters $\gamma_1,\gamma_2$. We describe our scheme by referring to \F{ShiftScheme}, which shows the flower for $C_v$ in $P_K$.
\begin{figure} \caption{Choosing jumps and overlap parameters for shifted branching.} \label{F:ShiftScheme} \end{figure} The ultimate goal is to simulate branching at some point within $C_v$, such as the indicated point $p$. In mapping to the branched image packing $P$, the image of the boundary of $C_v$ wraps continuously around the boundaries of both twin circles (as we described earlier in referring to the branched surface $S$). The jump circles and parameters serve to split the boundary of $C_v$ into two arcs, the blue one will be carried to $t_1$, the green, to $t_2$. Here we need to observe how the jump and its parameter work together. Recall that in the image packing, $\gamma_1\in[0,\pi]$ acts like a dial: The value $\gamma_1=0$ forces $j_1$ to be tangent to both twins. As $\gamma_1$ increases, it pushes $j_1$ away from $t_1$ and further onto $t_2$. When $\gamma_1$ reaches $\pi$, it forces the counterclockwise petal $w_1$ to become tangent to $t_2$. This is a transition point --- at this juncture, we could designate $w_1$ as the jump circle and reset $\gamma_1$ to $0$ without altering anything in the image packing. By then increasing the new $\gamma_1$ with the new jump circle, we could push yet more boundary onto $t_2$. In summary, then, our circle packing map pushes more of $C_v$ onto $t_2$ by increasing $\gamma_1$ and/or moving the designated jump $j_1$ clockwise. Likewise, on the other side it pushes more of $C_v$ onto $t_2$ by decreasing $\gamma_2$ and/or moving the designated jump $j_2$ counterclockwise. To illustrate with the point $p$ of \F{ShiftScheme}, the scheme uses the various labeled quantities: The point $x$ where the radial line $L$ from the center of $C_v$ through $p$ hits $C_v$; the distance $\rho$ from $p$ to $x$; the circular arc (dashed) through $p$ and orthogonal to $C_v$; and the diameter $L_{\perp}$ perpendicular to $L$. 
To inform our choice of jumps and parameters, we take inspiration from the properties of the branch value $w$ in the eventual image packing --- that is, the center of the smaller twin, $t_2$. In qualitative terms, the green arc of $C_v$ should map to $t_2$, the rest of $C_v$ to $t_1$. The point $x$ should map to the point of $t_2$ antipodal to the tangency point of $t_1$ and $t_2$. The ratio of $\rho$ to the radius of $C_v$ should reflect the ratio of the radii of the two twins. Thus, when $p$ moves close to $C_v$, twin $t_2$ gets smaller, while as $p$ approaches the center of $C_v$, the radius of $t_1$ approaches that of $t_2$. There is no way to ensure these outcomes precisely --- one cannot know them {\sl a priori}, as all the circles get new sizes during computation. We will not burden the reader with the messy details, but we have implemented methods which realize these qualitative behaviors. We illustrate for $p$ and the other three red branch points later, in \F{ShiftMovie}. \section{Fixing an Ahlfors Function}\label{S:FixAhlfors} After successfully constructing a discrete Ahlfors function $\fw$ for a combinatorial annulus $K$ in \S\ref{SS:DiscAhlfors}, we showed in \S\ref{S:DiscIssue} how easily that construction can fail. Making small modifications to $K$ that broke its translational symmetry, we obtained a new combinatorial annulus $K'$ which does not support a discrete Ahlfors function. The problem is non-trivial holonomy, and we illustrated in \F{FailedAhlfors} with an attempt at traditional branching using the same midline vertices $v_1,v_2$ we had used for $\fw$. It seems clear that for $K'$ the missing translational symmetry can be blamed for the failure. We now apply the flexibility of generalized branching to repair the damage. Since $K'$ still has a midline and reflective symmetry across it, we adopt the following strategy: proceed with traditional branching at vertex $v_1$, but use shifted branching near $v_2$.
Symmetry simplifies our search for the correct branching parameters in the black hole for $v_2$: namely, if we choose the jump $j_1$ to be symmetric with $w_2$ across the midline and, likewise, $j_2$ symmetric with $w_1$, and if we specify $\gamma_2=\pi-\gamma_1$, then the shifted branch value must remain on the midline. After some experimental tinkering, one can in fact annihilate the holonomy and replicate the success we saw for the original complex $K$ --- the process works. We do not show the image packing $P$ because it is essentially indistinguishable from \F{AnnulusP}. The point is that we are able to make the red cross-cuts coincident. Admittedly, the fix was (almost) in for this example: we depended on reflective symmetry to reduce the parameter search from a two- to a one-dimensional problem. Nonetheless, it demonstrates well the need and potential for generalized branching. We close by discussing the broader issues. \section{Parameter Space}\label{S:Parameter} This paper is a preliminary report on work in progress. We have focused on generalized branching at a single point $p$ in the interior of $P_K$. The location of $p$ is continuously parameterized --- e.g., by its $x$ and $y$ coordinates. We have defined discrete generalized branching which seems to handle patches of this parameter space. Thus, when $p$ lies in an interior interstice, singular branching involves two real parameters, $\gamma_1,\gamma_2$. When $p$ lies in an interior circle, shifted branching involves jump circles and parameters, but from our description of the mechanics it is clear that this, too, amounts to just two real parameters. The continuity of these parameterizations may be phrased in terms of the branched packing labels $R$ restricted to vertices on and outside of the event horizon. While a proof remains elusive, experiments strongly suggest that this continuity does hold.
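The one-dimensional search in the Ahlfors repair above can be sketched as a bisection. Everything here is a hypothetical stand-in of ours: `holonomy_gap` is a toy surrogate with a sign change, standing in for the signed cross-cut mismatch that an actual packing computation with parameters $(\gamma_1,\pi-\gamma_1)$ would return.

```python
import math

# Toy surrogate for the holonomy discrepancy: monotone through a sign
# change, with its root placed (arbitrarily) at 0.6 * pi.
def holonomy_gap(gamma1):
    return math.cos(gamma1 - 0.1 * math.pi)

def repair(lo=0.0, hi=math.pi, tol=1e-10):
    """Bisect gamma1 in [0, pi] to a sign change of the holonomy gap."""
    assert holonomy_gap(lo) * holonomy_gap(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if holonomy_gap(lo) * holonomy_gap(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

g1 = repair()
assert abs(holonomy_gap(g1)) < 1e-8          # holonomy annihilated
assert abs(g1 - 0.6 * math.pi) < 1e-8        # the surrogate's known root
```

A two-dimensional version of this search, without the symmetry reduction, is precisely the open problem discussed in the global considerations below.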
For example, \F{SingMovie} displays the branched circle packings associated with branching at the four red dots in \F{SingScheme}, progressing from lower left to upper right (the third of these is the packing for the distinguished point $p$ from \F{SingScheme}). The branch value is roughly at the center in each image. Subject to this and related normalizations, the radii and centers of $P$ appear to be continuous in $\gamma_1,\gamma_2$. \begin{figure} \caption{Singular branching for the four red branch points of \F{SingScheme}.} \label{F:SingMovie} \end{figure} \F{ShiftMovie} provides a similar sequence of shifted branched packings for the four red dots of \F{ShiftScheme} (caution: the chaperones play different roles now). Again we have positioned the branch values roughly at the center in each image; the third one corresponds to \F{ShiftBranch}. Here, too, experiments suggest continuity in radii and centers as we manipulate the two shifted branching parameters. \begin{figure} \caption{Shifted branching for the four red branch points of \F{ShiftScheme}.} \label{F:ShiftMovie} \end{figure} Concatenating the 8 frames in these last two figures highlights another parameterization issue: How are our various patches of parameter space sewn together? If $p$ lies on the mutual boundary of a circle and an interstice, for instance, its generalized branched packing may be treated as a limit of either singular branching from the interstice side or shifted branching from the circle side. We have {\sl ad hoc} methods for such transitions, though we have yet to formalize the details of parameter alignment. Nevertheless, our images may give a feel for the transition: The interstice formed by $\{C_{v_1},C_{v_2},C_{v_3}\}$ in \F{SingScheme} is contiguous to the circle $C_v$ of \F{ShiftScheme}; that is, $v=v_1$. So the 8 frames from \F{SingMovie} and \F{ShiftMovie} together are part of a movie as the branch point transitions from singular to shifted.
Image circles $\{c_{v_1},c_{v_2},c_{v_3}\}$ remain red, green, and blue, respectively, throughout these 8 frames. In the last frame from \F{SingMovie} note that these three appear to be in clockwise order (as we discussed earlier). Compare this to the first frame of \F{ShiftMovie}: the red circle has now split into twins, with the branch value in the smaller twin, so the (small) red, green, and blue are again correctly oriented --- the branch point has successfully punched through from the interstice to the circle, and in the last frame of \F{ShiftMovie}, it is nearing traditional branching at $v$. This is the type of experimental evidence supporting our contention that the two parameter patches can be aligned to maintain continuity. \section{Global Considerations}\label{S:Global} We stated in the introduction that our aim is to bridge the principal gap remaining in discrete function theory, namely the existence and uniqueness of discrete meromorphic functions. Although we have local machinery, we have not confronted the global problem head-on. A few words are in order. Naturally, one of the first goals would be a more complete theory for discrete rational functions, branched mappings from $\bP$ to itself. Here $K$ would be a combinatorial sphere and one would need $2n$ branch points for a mapping of valence $n+1$. There is a tantalizing approach based on Oded Schramm's metric packing theorem which has motivated some of our work. In \cite{oS91a}, Schramm proves remarkable existence and uniqueness results for packings of ``blunt disklike'' sets --- for instance, circles defined using a Riemannian metric on $\bP$. Suppose, then, that we are given a classical rational function $\fF:\bP\rightarrow \bP$. We can define a metric $d$ on $\bP$ as the pullback under $\fF$ of the spherical metric on $\bP$.
Finding a packing for $K$ by circles in this metric $d$ is tantamount to finding a normal circle packing $P$ on $\bP$, and the map from $P_K$ to this $P$ would be our discrete rational function. Unfortunately, at the critical values of $\fF$ the pullback metric $d$ is not Riemannian; the direct analogue of Schramm's result does not hold, as can be seen, for example, with circles that degenerate. There is still some hope, however, as our constructions demonstrate --- the twin circles of a shifted branch point are, after all, the image of a single circle in a pullback metric $d$. Our hands-on approach still faces many hurdles in practice. On the sphere, for instance, there is no packing algorithm --- Perron methods rely on the monotonicity of Lemma~\ref{L:Ashe}, which fails in the positive curvature setting. And in other settings, such as Ahlfors and Weierstrass, we have had to depend on symmetry. A generic combinatorial torus $K$ is likely to have no Weierstrass function using traditional branching. Though we believe generalized branching provides the flexibility to overcome the holonomy obstructions, early attempts have faltered due to the curse of (even small) dimension: we don't yet know how to search a two-dimensional space for parameters that will annihilate non-trivial holonomies. We succeeded in the Ahlfors case because partial symmetry reduced us to a one-dimensional search. We face other global difficulties as well. We list a few. We have restricted attention to {\sl simple} branching; at least in the case of shifted branching, one can see a chance to allow higher order branching --- replacing twins with triplets, etc. In general, one also needs to allow branching at more than one point, but the existence of branched packings then encounters global combinatorial issues. 
The notion of black holes will also need to be extended, since combinatorics may lead to patches of degenerate radii (versus isolated degenerate radii) for branch points in certain combinatorial environments. In other words, there is considerable work to be done. Nevertheless, we contend that discrete generalized branching addresses --- in theory if not in practice --- the key obstruction remaining in discrete analytic function theory. This obstruction, of course, is not the only one --- so get to work, David. \end{document}
The Mancos Grain Elevator was built in 1934 by Grady Clampitt. Mr. Clampitt and Mr. Luellen, a neighboring farmer, grew dryland wheat in fields bordering Mesa Verde National Park on the southeast side; those same fields are used to grow dryland wheat today, and it is the only wheat grown in the Valley at this time. The Elevator was put into use upon completion and remained in use for an indeterminate number of years following Mr. Clampitt's retirement. With the farm no longer in use, the Kennedy/Mancos Grain Elevator currently serves as a storage unit for family belongings. The Elevator stands in a prominent location highly visible from Highway 160 and is considered a landmark. Although agriculture still plays a large role in the economy of the Mancos Valley, few structures remain that evidence the historic heritage of farming that once made the Valley famous. Today ranching and irrigated hay production have replaced dryland grain crops. The Elevator is a statement of the determination of the agricultural community in place at the time it was built. Few examples of this unique workmanship survive anywhere in Montezuma County today. The Kennedy/Mancos Grain Elevator exemplifies some of the issues facing grain elevators across the State, in particular those located in the Western region, where grain production and farming resources are rarer. The Grain Elevator had been deteriorating due to a failing roof and drainage issues that caused portions of the iconic stacked-plank construction to fail. Since listing, the family has raised the necessary funds and hired a contractor to restore the roof, and it is now considered a SAVE. Save the Mancos Grain Elevator!
\begin{document}
\title{Characterization of digital $(0,m,3)$-nets and digital $(0,2)$-sequences in base $2$\thanks{subclass 11K31, 11K38}}
\author{Roswitha Hofer\thanks{Institute of Financial Mathematics and Applied Number Theory, Johannes Kepler University Linz, Altenbergerstr. 69, 4040 Linz, Austria. e-mail: [email protected]} and Kosuke Suzuki\thanks{Graduate School of Science, Hiroshima University. 1-3-1 Kagamiyama, Higashi-Hiroshima, 739-8526, Japan. JSPS Research Fellow. e-mail: [email protected]}}
\date{\today}
\maketitle

\begin{abstract}
We give a characterization of all matrices $A,B,C \in \mathbb{F}_2Mat$ which generate a $(0,m,3)$-net in base $2$ and a characterization of all matrices $B,C\in{\mathbb F}_2^{\mathbb N\times\mathbb N}$ which generate a $(0,2)$-sequence in base $2$.
\end{abstract}

\section{Introduction and main results}

The algorithms for constructing digital $(t,m,s)$-nets and digital $(t,s)$-sequences, which were introduced by Niederreiter \cite{Niederreiter1987pss}, are well-established methods to obtain low-discrepancy point sets and low-discrepancy sequences. Low-discrepancy point sets and sequences are the main ingredients of quasi-Monte Carlo quadrature rules for numerical integration (see for example \cite{Dick2010dna,Niederreiter1992rng} for details). The purpose of this paper is to characterize digital nets and sequences in base $2$ with the best possible quality parameter $t$.

We start the paper by introducing the algorithms for digital $(t,m,s)$-nets and digital $(t,s)$-sequences in base $2$ and by defining the quality parameter $t$. Let $\mathbb N$ be the set of all positive integers and $\mathbb{F}_2 = \{0,1\}$ the field of two elements. For a positive integer $m$, $\mathbb{F}_2Mat$ denotes the set of all $m \times m$ matrices over $\mathbb{F}_2$.
For a nonnegative integer $n$, we write the $2$-adic expansion of $n$ as $n = \sum_{i=1}^\infty z_i(n) 2^{i-1}$ with $z_i(n) \in \mathbb{F}_2$, where all but finitely many $z_i(n)$ equal zero. For $k \in \mathbb N \cup \{\infty\}$, we define the function $\phi_k \colon \mathbb{F}_2^k \to [0,1]$ as \[ \phi_k((y_1, \dots, y_{k})^\top) := \sum_{i=1}^k \frac{y_{i}}{2^i}. \] Let $s$, $m\in\mathbb N$, and $C_1, \dots, C_s \in \mathbb{F}_2Mat$. The digital net generated by $(C_1, \dots, C_s)$ is a set of $2^m$ points in $[0,1)^s$ that is constructed as follows. We define ${\boldsymbol{y}}_{n,j} \in \mathbb{F}_2^m$ for $0 \leq n < 2^m$ and $1 \leq j \leq s$ as \[ {\boldsymbol{y}}_{n,j} := C_j \cdot (z_1(n), \dots, z_m(n))^\top \in \mathbb{F}_2^m. \] Then we obtain the $n$-th point ${\boldsymbol{x}}_n$ by applying $\phi_m$ componentwise to the ${\boldsymbol{y}}_{n,j}$, i.e., \begin{equation*} {\boldsymbol{x}}_n := (\phi_m({\boldsymbol{y}}_{n,1}), \dots, \phi_m({\boldsymbol{y}}_{n,s})). \end{equation*} Finally letting $n$ range between $0$ and $2^m-1$ we obtain the point set $\{{\boldsymbol{x}}_0, \dots, {\boldsymbol{x}}_{2^m-1}\} \subset [0,1)^s$ that is called the digital net generated by $(C_1, \dots, C_s)$. In a similar way, for $C_1, \dots, C_s \in \mathbb{F}_2^{\mathbb N \times \mathbb N}$ the digital sequence generated by $(C_1, \dots, C_s)$ is the sequence of points in $[0,1]^s$ that is constructed as follows. We define ${\boldsymbol{y}}_{n,j} \in \mathbb{F}_2^\mathbb N$ for $n\in\mathbb N\cup\{0\}$ and $1 \leq j \leq s$ as \( {\boldsymbol{y}}_{n,j} := C_j \cdot(z_1(n), z_2(n) ,\dots)^\top \in \mathbb{F}_2^\mathbb N. \) This matrix-vector multiplication is well-defined since almost all $z_i(n)$ equal zero. Then we obtain the $n$-th point ${\boldsymbol{x}}_n$ by setting \( {\boldsymbol{x}}_n := (\phi_\infty({\boldsymbol{y}}_{n,1}), \dots, \phi_\infty({\boldsymbol{y}}_{n,s})). 
\) The digital sequence generated by $(C_1, \dots, C_s)$ is the sequence of points $\{{\boldsymbol{x}}_0, {\boldsymbol{x}}_1, \dots \} \subset [0,1]^s$.

The nonnegative integer $t$ in the notions of $(t,m,s)$-nets and $(t,s)$-sequences quantifies in a certain sense the uniformity of digital nets and sequences. A set $\mathcal{P}$ of $2^m$ points in $[0,1)^s$ is said to be a $(t,m,s)$-net in base $2$ if every subinterval of the form
\[ \prod_{i=1}^s [a_{i}/2^{c_{i}}, (a_{i} +1)/2^{c_{i}}) \qquad \text{with integers $c_i \geq 0$ and $0 \leq a_i < 2^{c_i}$} \]
and of volume $2^{t-m}$ contains exactly $2^t$ points from $\mathcal{P}$. For the definition of $(t,s)$-sequences in base $2$, we need to introduce the truncation operator. For $x \in [0,1]$ with the prescribed $2$-adic expansion $x = \sum_{i=1}^\infty {x_{i}}/{2^i}$ (where the case $x_i = 1$ for almost all $i$ is allowed), we define the $m$-digit truncation $[x]_m := \sum_{i=1}^m {x_{i}}/{2^i}$. For ${\boldsymbol{x}} = (x_1, \dots, x_s) \in [0,1]^s$, the coordinate-wise $m$-digit truncation of ${\boldsymbol{x}}$ is defined as $[{\boldsymbol{x}}]_m := ([x_1]_m, \dots, [x_s]_m)$. A sequence $\mathcal{S} = \{{\boldsymbol{x}}_0, {\boldsymbol{x}}_1, \dots \}$ of points in $[0,1]^s$ with prescribed $2$-adic expansions is said to be a $(t,s)$-sequence in base $2$ if, for all nonnegative integers $k$ and $m$, the set $\{[{\boldsymbol{x}}_{k2^m}]_m, \dots, [{\boldsymbol{x}}_{(k+1)2^m-1}]_m\}$ is a $(t,m,s)$-net in base $2$. Analogously, we define $(t,m,s)$-nets and $(t,s)$-sequences in base $b$ with $b\in\mathbb N\setminus\{1\}$ by substituting $b$ for $2$ in the definitions above.

By the definitions of $(t,m,s)$-nets and $(t,s)$-sequences, a smaller $t$ imposes more conditions on the uniformity of the points and of the sequences. Indeed, a smaller $t$ corresponds to a smaller discrepancy bound (cf.\ \cite{Niederreiter1987pss}). Hence a smaller $t$ is preferable, and $t=0$ is best possible.
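The digital-net construction and the net property above are easy to realize directly. The following sketch is our illustration (helper names are ours, and `numpy` is assumed); it builds the digital net generated by given $m \times m$ matrices and tests the $(t,m,s)$-condition by brute force:

```python
import numpy as np
from itertools import product

def digital_net(mats):
    """Digital net in base 2 generated by m x m matrices over F_2.

    Follows the construction in the text: y_{n,j} = C_j (z_1(n),...,z_m(n))^T
    over F_2, with z_1(n) the least significant bit, and x_{n,j} = phi_m(y_{n,j}).
    """
    m = mats[0].shape[0]
    w = 0.5 ** np.arange(1, m + 1)           # weights 2^{-i} of phi_m
    pts = []
    for n in range(2 ** m):
        z = np.array([(n >> i) & 1 for i in range(m)], dtype=np.uint8)
        pts.append([float(w @ ((C @ z) % 2)) for C in mats])
    return pts

def is_net(pts, m, s, t=0):
    """Brute-force (t,m,s)-net test in base 2: every dyadic box of volume
    2^(t-m) must contain exactly 2^t of the 2^m points."""
    for cs in product(range(m - t + 1), repeat=s):
        if sum(cs) != m - t:                  # only boxes of volume 2^(t-m)
            continue
        counts = {}
        for p in pts:
            key = tuple(int(p[i] * 2 ** cs[i]) for i in range(s))
            counts[key] = counts.get(key, 0) + 1
        if any(c != 2 ** t for c in counts.values()):
            return False
    return True

# The identity matrix alone yields the van der Corput points, a (0,m,1)-net.
m = 3
vdc = digital_net([np.eye(m, dtype=np.uint8)])
print(sorted(p[0] for p in vdc))   # [0.0, 0.125, 0.25, ..., 0.875]
print(is_net(vdc, m, 1))           # True
```

For instance, the pair $(I,P)$ with the upper-triangular Pascal matrix $P$ (introduced below) passes this test as a $(0,m,2)$-net, while $(I,I)$ fails, since its two coordinates coincide.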
Having the lowest possible value $t=0$ has another merit: the randomized quasi-Monte Carlo estimator of a scrambled $(0,m,s)$-net in base $b$ is asymptotically normal \cite{Loh}. However, $t=0$ cannot be attained when $s$ is large. It is well known that $(0,m,s)$-nets in any base $b$ exist only if $s \leq b+1$ and $(0,s)$-sequences in base $b$ exist only if $s \leq b$ \cite[Corollary~4.24]{Niederreiter1992rng}. On the other hand, there are many known digital $(0,b)$-sequences in prime base $b$, including the two-dimensional Sobol$'$ sequence for $b=2$ \cite{Sobolcprime1967dpi}, Faure sequences \cite{Faure1982dds}, generalized Faure sequences \cite{Tezuka}, and their reorderings \cite{FaureTezuka}. From these sequences we can construct digital $(0,m,b+1)$-nets in base $b$; see \cite[Lemma~4.22]{Niederreiter1992rng} or Lemma~\ref{lem:seq-to-net}.

A characterization of the $(0,m,3)$-nets in base $2$ generated by $(I,B,B^2)$ with some $B \in \mathbb{F}_2Mat$ was given in \cite{Kajiura}, and a characterization of the $(0,2)$-sequences in base $2$ generated by NUT matrices $(C_1,C_2)$ was given in \cite{Hofer2010edc}. Our contribution in this note is to characterize all generating matrices of digital $(0,m,3)$-nets and digital $(0,2)$-sequences in base $2$.

For the statements of our results, we introduce some notation. Let $I_m$ be the $m \times m$ identity matrix in ${\mathbb F}_2^{m\times m}$. Let $J_m$ be the $m \times m$ anti-diagonal matrix in ${\mathbb F}_2^{m\times m}$ whose anti-diagonal entries are all $1$, and $P_m$ be the $m \times m$ upper-triangular Pascal matrix in ${\mathbb F}_2^{m\times m}$, i.e.,
\[ J_m = \begin{pmatrix} 0 & & 1\\ & \iddots \\ 1 & & 0 \end{pmatrix}, \qquad P_m = \left(\binom{j-1}{i-1}\right)_{i,j=1}^m = \begin{pmatrix} \binom{0}{0} & \binom{1}{0} & \dots & \binom{m-1}{0}\\ & \binom{1}{1} & & \vdots \\ & & \ddots & \vdots \\ & & & \binom{m-1}{m-1} \end{pmatrix}, \]
which are considered modulo $2$.
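The matrices $J_m$ and $P_m$ are straightforward to generate; the following sketch (our illustration, `numpy` assumed) also checks the classical fact that the binary Pascal matrix is an involution modulo $2$:

```python
import numpy as np
from math import comb

def pascal_mod2(m):
    """Upper-triangular Pascal matrix P_m = (binom(j-1, i-1)) mod 2
    (entry in row i, column j, indices 1-based as in the text)."""
    return np.array([[comb(j, i) % 2 for j in range(m)] for i in range(m)],
                    dtype=np.uint8)

def antidiag(m):
    """Anti-diagonal matrix J_m with ones on the anti-diagonal."""
    return np.eye(m, dtype=np.uint8)[::-1]

P = pascal_mod2(4)
print(P)
# [[1 1 1 1]
#  [0 1 0 1]
#  [0 0 1 1]
#  [0 0 0 1]]
print(np.array_equal((P @ P) % 2, np.eye(4, dtype=np.uint8)))   # True: P^2 = I mod 2
```

The rows of $P_m$ modulo $2$ show the familiar Sierpinski pattern, and the involution $P_m^2 \equiv I_m \pmod 2$ follows from $\sum_k \binom{k}{i}\binom{j}{k} = \binom{j}{i}2^{j-i}$; we note it only as a sanity check on the implementation.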
If there is no confusion, we omit the subscripts and simply write $I$, $J$, and $P$. Let $\Lset$ (resp.\ $\Uset$) be the set of non-singular lower- (resp.\ upper-) triangular $m \times m$ matrices over $\mathbb{F}_2$. Note that $\Lset \cap \Uset = \{I\}$ holds. Let $\Lset[\infty]$ (resp.\ $\Uset[\infty]$) be the set of non-singular lower (resp.\ upper) triangular infinite matrices over $\mathbb{F}_2$. Let $P_\infty$ be the infinite Pascal matrix, i.e., whose $m \times m$ upper-left submatrix is $P_m$ for all $m \geq 1$. Note that for $C\in{\mathbb F}_2^{\mathbb N\times\mathbb N}$ and $L\in\Lset[\infty],\,U\in\Uset[\infty]$ the products $LC$ and $CU$ are well defined and $(LC)U=L(CU)$. For a finite or infinite matrix $C$ and for $k \in \mathbb N$ we write $C^{(k)} \in \mathbb{F}_2^{k \times k}$ for the upper left $k \times k$ submatrix of $C$. We are now ready to state our main results. \begin{theorem} \label{thm:char-0m3met-general} Let $m \geq 1$ be an integer and $A,\,B,\,C \in \mathbb{F}_2Mat$. Then the following are equivalent. \begin{enuroman} \item \label{eq:0m3net-general-equiv1} $(A,B,C)$ generates a digital $(0,m,3)$-net in base $2$. \item \label{eq:0m3net-general-equiv2} There exist $L_1, L_2 \in \Lset$, $U \in \Uset$, and non-singular $M \in \mathbb{F}_2Mat$ such that \[ (A,B,C) = (JM, L_1UM, L_2PUM). \] \end{enuroman} \end{theorem} \begin{theorem}\label{thm:char-02seq} Let $B,C\in{\mathbb F}_2^{\mathbb N\times\mathbb N}$. Then the following are equivalent. \begin{enuroman} \item \label{item:thmBC} $(B,C)$ generates a digital $(0,2)$-sequence in base $2$. \item \label{item:thmLPU} There exist $L_1,L_2\in\mathcal{L}_\infty$ and $U\in\mathcal{U}_\infty$ such that $B=L_1U$ and $C=L_2P_\infty U$. \end{enuroman} \end{theorem} In the rest of the paper, we give auxiliary results in Section~\ref{sec:auxiliary} and prove the above theorems in Section~\ref{sec:0m3net}. \section{Auxiliary results}\label{sec:auxiliary} We start with $t$-value-preserving operations. 
\begin{lemma}[{\cite[Lemma~2.2]{Kajiura}}]\label{lem:tval-invariant}
Let $C_1, \dots, C_s \in \mathbb{F}_2Mat$ and $L_1, \dots, L_s \in \Lset$. Let $G \in \mathbb{F}_2Mat$ be non-singular. Then the following are equivalent.
\begin{enuroman}
\item \label{eq:t-preserving-net-equiv1} $(C_1, \ldots, C_s)$ generates a digital $(t,m,s)$-net.
\item \label{eq:t-preserving-net-equiv2} $(L_1C_1G, \ldots, L_sC_sG)$ generates a digital $(t,m,s)$-net.
\end{enuroman}
\end{lemma}

\begin{lemma}\label{lem:tval-invariant-sequence}
Let $C_1, \dots, C_s \in \mathbb{F}_2^{\mathbb N \times\mathbb N}$, $L_1, \dots, L_s \in \Lset[\infty]$ and $U \in \mathcal{U}_\infty$. Then the following are equivalent.
\begin{enuroman}
\item \label{eq:t-preserving-equiv1} $(C_1, \ldots, C_s)$ generates a digital $(t,s)$-sequence.
\item \label{eq:t-preserving-equiv2} $(L_1C_1U, \ldots, L_sC_sU)$ generates a digital $(t,s)$-sequence.
\end{enuroman}
\end{lemma}

\begin{proof}
A slight adaptation of the proof of \cite[Proposition~1]{FaureTezuka} (resp.\ \cite[Theorem~1]{Tezuka}) shows that multiplying by $L_i$ from the left (resp.\ by $U$ from the right) does not change the $t$-value. Note that here we used that $L_i^{-1}$ exists in $\Lset[\infty]$ and $U^{-1}$ exists in $\Uset[\infty]$.
\end{proof}

The following results point out relations between digital nets and sequences.

\begin{lemma}[{\cite[Lemma~4.22]{Niederreiter1992rng}}]\label{lem:seq-to-net}
Let $\{{\boldsymbol{x}}_i\}_{i \geq 0}$ be a $(t,s)$-sequence in base $2$. Then $\{({\boldsymbol{x}}_i, i 2^{-m})\}_{i=0}^{2^m -1}$ is a $(t,m,s+1)$-net in base $2$.
\end{lemma}

\begin{lemma}\label{lem:BCandJBC}
Let $C_1, \dots, C_s \in{\mathbb F}_2^{\mathbb N\times\mathbb N}$. Then the following are equivalent.
\begin{enuroman}
\item\label{item:BC} $(C_1, \dots, C_s)$ generates a digital $(t,s)$-sequence.
\item \label{item:Cm} $(C_1^{(m)}, \dots, C_s^{(m)})$ generates a digital $(t,m,s)$-net for every $m\in\mathbb{N}$.
\item \label{item:JBC} $(J_m, C_1^{(m)}, \dots, C_s^{(m)})$ generates a digital $(t,m,s+1)$-net for every $m\in\mathbb{N}$.
\end{enuroman}
\end{lemma}

\begin{proof}
\eqref{item:Cm} implies \eqref{item:BC} by \cite[Theorem~4.36]{Niederreiter1992rng}. Clearly \eqref{item:JBC} shows \eqref{item:Cm}. \eqref{item:BC} implies \eqref{item:JBC} by Lemma~\ref{lem:seq-to-net}.
\end{proof}

Having $t=0$ is related to LU decomposability. In particular, we have a characterization of digital $(0,1)$-sequences and digital $(0,m,2)$-nets.

\begin{lemma}\label{lem:IB-0m2net-structure}
Let $B \in \mathbb{F}_2Mat$. Then $(J_m,B)$ generates a digital $(0,m,2)$-net if and only if there exist $L \in \Lset$ and $U \in \Uset$ such that $B = LU$.
\end{lemma}

\begin{proof}
This is essentially proved in \cite[Lemma~3.1]{Kajiura}.
\end{proof}

\begin{lemma} \label{lem:char-0m2met-general}
Let $m \geq 1$ be an integer and $A,B \in \mathbb{F}_2Mat$. Then $(A,B)$ generates a $(0,m,2)$-net if and only if there exist $L \in \Lset$, $U \in \Uset$ and non-singular $M \in \mathbb{F}_2Mat$ such that $(A,B) = (JM, LUM).$
\end{lemma}

\begin{proof}
This lemma can be reduced to Lemma~\ref{lem:IB-0m2net-structure} by a similar argument to the one in Section~\ref{sec:0m3net} that reduces Theorem~\ref{thm:char-0m3met-general} to Proposition~\ref{prop:char-0m3met}.
\end{proof}

\begin{lemma}\label{lem:BLUinfty}
Let $B \in {\mathbb F}_2^{\mathbb N\times\mathbb N}$. Then $B$ generates a digital $(0,1)$-sequence if and only if there exist $L\in\mathcal{L}_\infty$ and $U\in\mathcal{U}_\infty$ such that $B=LU.$
\end{lemma}

\begin{proof}
First we assume that $B$ generates a digital $(0,1)$-sequence. Then it follows from Lemma~\ref{lem:BCandJBC} that $(J_m,B^{(m)})$ generates a $(0,m,2)$-net for every $m\in\mathbb{N}$.
Thus by Lemma~\ref{lem:IB-0m2net-structure} there exist $L_m \in\mathcal{L}_m$ and $U_m\in\mathcal{U}_m$ such that $B^{(m)}=L_mU_m.$ By comparing the upper left $n \times n$ submatrices of this equation for $n \leq m$, we have $L_m^{(n)} = L_n$ and $U_m^{(n)} = U_n$ for all $n \leq m$. This implies that there exist unique $L \in \Lset[\infty]$ and $U \in \Uset[\infty]$ such that $L^{(m)} = L_m$ and $U^{(m)} = U_m$ hold for all $m$, and hence we have $B=LU$. This shows the ``only if'' part. The converse follows from Lemma~\ref{lem:tval-invariant-sequence}.
\end{proof}

\begin{remark}{\rm
Lemmas~\ref{lem:tval-invariant}--\ref{lem:BLUinfty}, with appropriate modifications, hold for digital nets and sequences over an arbitrary finite field, since the proofs are based on general linear algebra. The results below use that the base field is $\mathbb{F}_2$.}
\end{remark}

The first author and Larcher essentially determined all digital $(0,2)$-sequences in base $2$ generated by non-singular infinite upper-triangular matrices \cite[Proposition~4]{Hofer2010edc}.

\begin{lemma}\label{lem:alternative}
Let $U_1, U_2 \in\mathcal{U}_m$. Then $(J_m,U_1,U_2)$ generates a $(0,m,3)$-net in base $2$ if and only if $U_2=P_mU_1$ holds.
\end{lemma}

\begin{proof}
The ``only if'' part is essentially derived in the proof of \cite[Proposition~4]{Hofer2010edc}. Now we assume $U_2=P_mU_1$. From the construction in \cite{Faure1982dds} and Lemma~\ref{lem:seq-to-net}, $(J,I,P)$ generates a $(0,m,3)$-net. Then it follows from Lemma~\ref{lem:tval-invariant} with $(L_1,L_2,L_3) = (JU_1^{-1}J,I,I)$ and $G=U_1$ that $((JU_1^{-1}J)JU_1,IU_1,PU_1) = (J,U_1,PU_1)$ also generates a $(0,m,3)$-net. This proves the converse.
\end{proof}

\begin{proposition}\label{thm:Hofer}
Let $U_1, U_2 \in \Uset[\infty]$. Then the following are equivalent:
\begin{enuroman}
\item \label{eq:02seq-equiv1} $(U_1, U_2)$ generates a digital $(0,2)$-sequence in base $2$.
\item \label{eq:02seq-equiv2} $U_2=P_\infty U_1$ holds.
\end{enuroman}
\end{proposition}

\begin{proof}
\eqref{eq:02seq-equiv1} implies \eqref{eq:02seq-equiv2} by \cite[Proposition~4]{Hofer2010edc}. The converse follows from Lemma~\ref{lem:tval-invariant-sequence} and the construction in \cite{Faure1982dds}.
\end{proof}

\section{Proofs of Theorems~\ref{thm:char-0m3met-general} and \ref{thm:char-02seq}}\label{sec:0m3net}

Having all the auxiliary results of the previous section at hand, the proofs of our theorems are rather short.

\begin{proof}[Proof of Theorem~\ref{thm:char-0m3met-general}]
Let $M=JA$. Setting $B' = BM^{-1}$ and $C' = CM^{-1}$, we see that $t(A,B,C)=0$ is equivalent to $t(JM,B'M,C'M)=0$, which in turn is equivalent to $t(J,B',C')=0$ by Lemma~\ref{lem:tval-invariant}. Hence Theorem~\ref{thm:char-0m3met-general} reduces to the case $A=I$, i.e., it suffices to show the following claim.

\begin{proposition} \label{prop:char-0m3met}
Let $m \geq 1$ be an integer and $B,C \in \mathbb{F}_2Mat$. Then the following are equivalent.
\begin{enuroman}
\item \label{eq:0m3net-equiv1} $(J,B,C)$ generates a $(0,m,3)$-net in base $2$.
\item \label{eq:0m3net-equiv2} There exist $L_1, L_2 \in \Lset$ and $U \in \Uset$ such that $B=L_1U$ and $C=L_2PU$.
\end{enuroman}
\end{proposition}

We now prove Proposition~\ref{prop:char-0m3met}. In this proof, for matrices $Q,R,S \in \mathbb{F}_2Mat$ we let $t(Q,R,S)$ denote the $t$-value of the digital net generated by $(Q,R,S)$. First we assume \eqref{eq:0m3net-equiv2}. By Lemma~\ref{lem:tval-invariant} with $(L_1,L_2,L_3) = (I,L_1^{-1},L_2^{-1})$ and $G=I$ we have
\begin{equation*}\label{eq:JBC}
t(J,B,C)=t(J,L_1U,L_2PU)=t(J,U,PU) = 0,
\end{equation*}
where the last equality follows from Lemma~\ref{lem:alternative}. Thus we have \eqref{eq:0m3net-equiv1}. We now assume \eqref{eq:0m3net-equiv1}. By Lemma~\ref{lem:IB-0m2net-structure}, there exist $L_1, L_2 \in \Lset$ and $U_1,U_2 \in \Uset$ such that $B=L_1U_1$ and $C=L_2U_2$.
Hence, by Lemma~\ref{lem:tval-invariant} with $(L_1,L_2,L_3) = (I,L_1,L_2)$ and $G=I$ we have
\[ t(J,U_1,U_2)=t(J,L_1U_1,L_2U_2)=t(J,B,C)=0. \]
Finally, Lemma~\ref{lem:alternative} implies $U_2 = PU_1$, which shows \eqref{eq:0m3net-equiv2}.
\end{proof}

\begin{proof}[Proof of Theorem~\ref{thm:char-02seq}]
\eqref{item:thmLPU} implies \eqref{item:thmBC} by Lemma~\ref{lem:tval-invariant-sequence} and Proposition~\ref{thm:Hofer}. Let us now assume \eqref{item:thmBC}. Then by Lemma~\ref{lem:BLUinfty} there exist $L_1,\,L_2\in\mathcal{L}_\infty$ and $U_1,\,U_2\in\mathcal{U}_\infty$ such that $B=L_1U_1$ and $C=L_2U_2$. We apply Lemma~\ref{lem:tval-invariant-sequence} and obtain that $(U_1,U_2)$ generates a $(0,2)$-sequence in base $2$. Finally, Proposition~\ref{thm:Hofer} yields $U_2=P_\infty U_1$ and the result follows.
\end{proof}

\section*{Acknowledgments}
The first author is supported by the Austrian Science Fund (FWF): Project F5505-N26, which is a part of the Special Research Program ``Quasi-Monte Carlo Methods: Theory and Applications''. The second author is supported by Grant-in-Aid for JSPS Fellows (No.\ 17J00466).

\end{document}
module Bacon
  module Subject
    ##
    # Syntactic sugar for creating an anonymous class in a before block.
    def subject( base=Object, &block )
      block_given? ? before { @subject = Class.new( base, &block ).new } : @subject
    end
  end
end

Bacon::Context.send :include, Bacon::Subject
The canal that carries water from the dam of the Srinagar hydroelectric project to the powerhouse is leaking at several points, and it could put villagers' lives in peril. Professor Yashpal Sundriyal, head of the Department of Geology at Garhwal University, says in plain terms that the design and quality of the canal are very poor.

Srinagar. The 330 MW hydroelectric project built on the Alaknanda could become the cause of a major loss of life at any time. The canal carrying water from the dam to the powerhouse is leaking in several places, which could endanger the lives of villagers. Geologists confirm this fear, and a committee formed on the Chief Minister's instructions has even investigated the matter, but GVK, the company operating the project, does not care in the least. The seepage that has started again in the power channel of the 330 MW Srinagar hydroelectric project has robbed villagers of their sleep. After the makeshift patch-up treatment of the canal carried out last year failed, leakage from the canal has now opened wide cracks in the footpaths. Soil erosion has created a threat to the villages of Mangsu, Gugli and Surasu. Local villagers are now demanding resettlement from the area, because they live in constant fear that the canal's leakage could claim their lives at any time.

Poor design, poor quality. Questions have been raised about the canal's design, construction style and quality ever since it was built. Investigations by geologists of the Wadia Institute and Garhwal University have also confirmed this. Professor Yashpal Sundriyal, head of the Department of Geology at Garhwal University, says plainly that the canal's design and quality are very poor and that the danger has now increased greatly. On the Chief Minister's instructions a high-level four-member committee investigated the matter last year, but what its report contains has not come to light.

Harihar Uniyal, the tehsildar of Kirtinagar, says that the inquiry report is with the Chief Minister and that he therefore cannot say anything about the matter.

GVK does not care. The astonishing thing is that for the past three years the canal has repeatedly either leaked or been filled to the brim and made to overflow. This puts the public's lives at risk, yet GVK, the body operating the project, cares neither for the directives of the government and administration, nor for any public movement, nor for the geologists' warnings.

Video: People entered the Alaknanda river to protest against dam construction

33 hydroelectric projects to restart, assurance received from the Centre
package fr.mickmouette.core.elements.exception.convertion;

/**
 * This project is a library developed for java object developers. It provides a way
 * to easily manipulate infix expressions with an object representation.
 * Copyright (C) 2017 Mickael Alvarez
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 * Contact : [email protected]
 *
 * @author Mickael Alvarez
 */
public class ConvertValueOperatorException extends ConvertionException {

    private static final long serialVersionUID = -387074208486576697L;
}
/***************************************************************************************************************************************************************/
Print '--- ' + CONVERT (VARCHAR(19), SYSDATETIME()) + ' ==> script 2.002 ---------------------------------------------------------------------------------------- '
/**************************************************************************************************************************************************/
/***                                                                                                                                            ***/
/***   >>>>> This is the script used to create the table view [GRSHRcode].[dbo].[AllLongData] <<<<<                                             ***/
/***         (extracted data from [forum] and filter fields)                                                                                    ***/
/***                                                                                                                                            ***/
/**************************************************************************************************************************************************/

USE [GRSHRcode]
GO

/**************************************************************************************************************************************************/
---- NOTES:
-- this code should be modified after...
-- these variables, calculated as persisted during coding period, should be also used in the aggregated data
--    currently used from [forum_ResAnal].[dbo].[vr_06w_LongData_ALL] but from final [forum] data once loaded...
--    [GRI_19_x]
--    [SHI_04_x]
--    [SHI_05_x]
--    [SHI_01_x]
--    [SHI_01_summary_b] --- should be rescaled to 1-6 for coding or scaled 1-6 for lookup table?
/**************************************************************************************************************************************************/
IF OBJECT_ID ('tempdb..#LCD_DB') IS NOT NULL
    DROP TABLE #LCD_DB
/**************************************************************************************************************************************************/
/* > 1st long SET: current data in database *****************************************************/
SELECT    [entity]
        , [link_fk]
        , [Nation_fk]
        , [Locality_fk]
        , [Religion_fk]
        , [Region5]
        , [Region6]
        , [Ctry_EditorialName]
        , [Locality]
        , [Religion]
        , [Question_Year]
        , [QS_fk] = [Question_Std_pk]
        , [QA_std] =             /* [QA_std] is provisionally recoded here... */
            CASE WHEN [QA_std] = 'SHI_11' THEN 'SHI_11_b'
                 ELSE [QA_std]
            END
        , [QW_std] =             /* [QW_std] is provisionally recoded here... */
            CASE WHEN [QA_std] = 'GRI_19_x'  THEN '0 - Total N of incidents resulting from government force'
                 WHEN [QA_std] = 'SHI_11_a' THEN 'Were religious women harassed for violating secular dress norms?'
                 ELSE [QW_std]
            END
        , [Answer_value] =       /* [Answer_value] is provisionally recoded here, before we actually recode it
                                    in the database, in order to have a consistent distribution.
                                    Other variables will be added, if needed, for catching nuances. */
            CASE WHEN [QA_std] IN ('GRI_08') AND CAST([Answer_value] as decimal (38,2)) = '0.50'
                      THEN CAST([Answer_value] as decimal (38,2)) + 0.50
                 WHEN [QA_std] IN ('SHI_11') AND CAST([Answer_value] as decimal (38,2)) = '0.50'
                      THEN CAST([Answer_value] as decimal (38,2)) - 0.50
                 ELSE CAST([Answer_value] as decimal (38,2))
            END
        , [answer_wording]
        , [answer_wording_std] = /* [AW_std] is provisionally recoded here... */
            CASE WHEN [QA_std] = 'SHI_11' AND CAST([Answer_value] as decimal (38,2)) = '0.50' THEN 'No'
                 ELSE [answer_wording_std]
            END
        , [Question_fk]
        , [Answer_fk]
        , [Notes]
        , [DB] = 1
/**************************************************************************************************************************************************/
INTO [#LCD_DB]
/**************************************************************************************************************************************************/
FROM [forum_ResAnal].[dbo].[vr_06w_LongData_ALL]   /* NOTICE THIS IS 2014 WORKING DATA => DATA FROM DB AFTER RLS REMOVED */
/*------------------------------------------------------------------------------------------------------------------------------------------------*/
LEFT JOIN [forum].[dbo].[Pew_Question_Std]         /* NOTICE WE'LL INCLUDE QS_fk IN FUTURE VERSION OF [...LongData_ALL] */
       ON [QA_std] = [Question_abbreviation_std]
/*------------------------------------------------------------------------------------------------------------------------------------------------*/
/**************************************************************************************************************************************************/
WHERE [Ctry_EditorialName] NOT LIKE 'All count%'
  AND [QA_std] NOT IN
    ( 'GFI'                     -- aggregated/calculated
    , 'GFI_rd_1d'               -- aggregated/calculated
    , 'GFI_scaled'               -- aggregated/calculated
    , 'GRI'                     -- aggregated/calculated
    , 'GRI_01_x2'               -- never used
    , 'GRI_08_for_index'        -- will be calculated
    , 'GRI_11_14'               -- no longer used
    , 'GRI_11_xG1'              -- aggregated/calculated
    , 'GRI_11_xG2'              -- aggregated/calculated
    , 'GRI_11_xG3'              -- aggregated/calculated
    , 'GRI_11_xG4'              -- aggregated/calculated
    , 'GRI_11_xG5'              -- aggregated/calculated
    , 'GRI_11_xG6'              -- aggregated/calculated
    , 'GRI_11_xG7'              -- aggregated/calculated
    , 'GRI_16_ny1'              -- aggregated/calculated
    , 'GRI_16_ny2'              -- aggregated/calculated
    , 'GRI_19_b__p'             -- provincial data no longer coded
    , 'GRI_19_c__p'             -- provincial data no longer coded
    , 'GRI_19_d__p'             -- provincial data no longer coded
    , 'GRI_19_da'               -- no longer used
    , 'GRI_19_da__p'            -- no longer used
    , 'GRI_19_db'               -- no longer used
    , 'GRI_19_db__p'            -- no longer used
    , 'GRI_19_e__p'             -- provincial data no longer coded
    , 'GRI_19_f__p'             -- provincial data no longer coded
    , 'GRI_19_filter'           -- aggregated/calculated & DIFFERENT WORDING
    , 'GRI_19_ny1'              -- aggregated/calculated
    , 'GRI_19_ny2'              -- aggregated/calculated
    , 'GRI_19_summ_ny1'         -- aggregated/calculated
    , 'GRI_19_summ_ny2'         -- aggregated/calculated
    , 'GRI_19_summ_ny3'         -- aggregated/calculated
    , 'GRI_19_summ_ny4'         -- aggregated/calculated
    , 'GRI_19_summ_ny5'         -- aggregated/calculated
    , 'GRI_19_summ_ny6'         -- aggregated/calculated
    , 'GRI_20'                  -- aggregated/calculated
    , 'GRI_20_03_top'           -- aggregated/calculated
    , 'GRI_20_05_x1'            -- no longer used
    , 'GRI_20_05_x1__p'         -- no longer used
    , 'GRI_20_top'              -- aggregated/calculated
    , 'GRI_rd_1d'               -- aggregated/calculated
    , 'GRI_scaled'              -- aggregated/calculated
    , 'GRX_21_01'               -- no longer used
    , 'GRX_21_02'               -- no longer used
    , 'GRX_21_03'               -- no longer used
    , 'GRX_22'                  -- no longer used
    , 'GRX_22_01_ny1'           -- aggregated/calculated
    , 'GRX_22_01_ny2'           -- aggregated/calculated
    , 'GRX_22_02_ny1'           -- aggregated/calculated
    , 'GRX_22_02_ny2'           -- aggregated/calculated
    , 'GRX_22_03_ny1'           -- aggregated/calculated
    , 'GRX_22_03_ny2'           -- aggregated/calculated
    , 'GRX_22_04_ny1'           -- aggregated/calculated
    , 'GRX_22_04_ny2'           -- aggregated/calculated
    , 'GRX_22_ny1'              -- no longer used
    , 'GRX_23'                  -- no longer used
    , 'GRX_24'                  -- no longer used
    , 'GRX_24_ny1'              -- no longer used
    , 'GRX_24_ny2'              -- no longer used
    , 'GRX_25_01'               -- no longer used
    , 'GRX_25_02'               -- no longer used
    , 'GRX_25_03'               -- no longer used
    , 'GRX_25_ny1'              -- no longer used
    , 'GRX_25_ny2'              -- no longer used
    , 'GRX_25_ny3'              -- no longer used
    , 'GRX_26_01'               -- no longer used
    , 'GRX_26_02'               -- no longer used
    , 'GRX_26_03'               -- no longer used
    , 'GRX_26_04'               -- no longer used
    , 'GRX_26_05'               -- no longer used
    , 'GRX_26_06'               -- no longer used
    , 'GRX_26_07'               -- no longer used
    , 'GRX_26_08'               -- no longer used
    , 'GRX_27_01'               -- no longer used
    , 'GRX_27_02'               -- no longer used
    , 'GRX_27_03'               -- no longer used
    , 'GRX_28_01'               -- no longer used
    , 'GRX_28_02'               -- no longer used
    , 'GRX_28_03'               -- no longer used
    , 'GRX_33'                  -- no longer used (summer 2015)
    , 'SHI'                     -- aggregated/calculated
    , 'SHI_01'                  -- aggregated/calculated
    , 'SHI_01_a_dummy'          -- aggregated/calculated
    , 'SHI_01_b__p'             -- provincial data no longer coded
    , 'SHI_01_b_dummy'          -- aggregated/calculated
    , 'SHI_01_c__p'             -- provincial data no longer coded
    , 'SHI_01_c_dummy'          -- aggregated/calculated
    , 'SHI_01_d__p'             -- provincial data no longer coded
    , 'SHI_01_d_dummy'          -- aggregated/calculated
    , 'SHI_01_da'               -- aggregated/calculated
    , 'SHI_01_da__p'            -- aggregated/calculated
    , 'SHI_01_db'               -- aggregated/calculated
    , 'SHI_01_db__p'            -- aggregated/calculated
    , 'SHI_01_e__p'             -- provincial data no longer coded
    , 'SHI_01_e_dummy'          -- aggregated/calculated
    , 'SHI_01_f__p'             -- provincial data no longer coded
    , 'SHI_01_f_dummy'          -- aggregated/calculated
    , 'SHI_01_summary_a_ny0'    -- aggregated/calculated
    , 'SHI_01_summary_a_ny1'    -- aggregated/calculated
    , 'SHI_01_summary_a_ny2'    -- aggregated/calculated
    , 'SHI_01_summary_a_ny3'    -- aggregated/calculated
    , 'SHI_01_summary_a_ny4'    -- aggregated/calculated
    , 'SHI_01_summary_a_ny5'    -- aggregated/calculated
    , 'SHI_01_summary_a_ny6'    -- aggregated/calculated
    , 'SHI_01_summary_b'        -- calculated as persisted
    , 'SHI_01_x_14'             -- aggregated/calculated
    , 'SHI_01_xG1'              -- aggregated/calculated
    , 'SHI_01_xG2'              -- aggregated/calculated
    , 'SHI_01_xG3'              -- aggregated/calculated
    , 'SHI_01_xG4'              -- aggregated/calculated
    , 'SHI_01_xG5'              -- aggregated/calculated
    , 'SHI_01_xG6'              -- aggregated/calculated
    , 'SHI_01_xG7'              -- aggregated/calculated
    , 'SHI_04_b__p'             -- provincial data no longer coded
    , 'SHI_04_c__p'             -- provincial data no longer coded
    , 'SHI_04_d__p'             -- provincial data no longer coded
    , 'SHI_04_da'               -- no longer used
    , 'SHI_04_da__p'            -- no longer used
    , 'SHI_04_db'               -- no longer used
    , 'SHI_04_db__p'            -- no longer used
    , 'SHI_04_e__p'             -- provincial data no longer coded
    , 'SHI_04_f__p'             -- provincial data no longer coded
    , 'SHI_04_ny0'              -- aggregated/calculated
    , 'SHI_04_ny1'              -- aggregated/calculated
    , 'SHI_05_b'                -- aggregated/calculated
    , 'SHI_05_b__p'             -- provincial data no longer coded
    , 'SHI_05_c__p'             -- provincial data no longer coded
    , 'SHI_05_d__p'             -- provincial data no longer coded
    , 'SHI_05_da'               -- aggregated/calculated
    , 'SHI_05_da__p'            -- no longer used
    , 'SHI_05_db'               -- no longer used
    , 'SHI_05_db__p'            -- no longer used
    , 'SHI_05_e__p'             -- no longer used
    , 'SHI_05_f__p'             -- provincial data no longer coded
    , 'SHI_05_ny0'              -- aggregated/calculated
    , 'SHI_05_ny1'              -- aggregated/calculated
    , 'SHI_07_ny0'              -- aggregated/calculated
    , 'SHI_07_ny1'              -- aggregated/calculated
--  , 'SHI_11'                  -- will be calculated
--  , 'SHI_11_a'                -- will be modified later to include DES (summer 2015)
    , 'SHI_11_for_index'        -- will be calculated
    , 'SHI_11_x'                -- no longer used AS IT IS will be modified later to exclude 11_a (summer 2015)
    , 'SHI_rd_1d'               -- aggregated/calculated
    , 'SHI_scaled'              -- aggregated/calculated
    , 'SHX_14_01'               -- no longer used
    , 'SHX_14_02'               -- no longer used
    , 'SHX_14_03'               -- no longer used
    , 'SHX_14_04'               -- no longer used
    , 'SHX_15_01'               -- no longer used
    , 'SHX_15_02'               -- no longer used
    , 'SHX_15_03'               -- no longer used
    , 'SHX_15_04'               -- no longer used
    , 'SHX_15_05'               -- no longer used
    , 'SHX_15_06'               -- no longer used
    , 'SHX_15_07'               -- no longer used
    , 'SHX_15_08'               -- no longer used
    , 'SHX_15_09'               -- no longer used
    , 'SHX_15_10'               -- no longer used
    , 'SHX_25'                  -- no longer used
    , 'SHX_25_ny1'              -- no longer used
    , 'SHX_25_ny2'              -- no longer used
    , 'SHX_26'                  -- no longer used
    , 'SHX_26_ny1'              -- no longer used
    , 'SHX_26_ny2'              -- no longer used
    , 'SHX_27_01'               -- no longer used
    , 'SHX_27_02'               -- no longer used
    , 'SHX_27_03'               -- no longer used
    , 'SHX_27_ny1'              -- no longer used
    , 'SHX_27_ny2'              -- no longer used
    , 'SHX_27_ny3'              -- no longer used
    , 'SHX_28_01'               -- no longer used
    , 'SHX_28_02'
-- no longer used , 'SHX_28_03' -- no longer used , 'SHX_28_04' -- no longer used , 'SHX_28_05' -- no longer used , 'SHX_28_06' -- no longer used , 'SHX_28_07' -- no longer used , 'SHX_28_08' -- no longer used , 'XSG_01_xG1' -- aggregated/calculated , 'XSG_01_xG2' -- aggregated/calculated , 'XSG_01_xG3' -- aggregated/calculated , 'XSG_01_xG4' -- aggregated/calculated , 'XSG_01_xG5' -- aggregated/calculated , 'XSG_01_xG6' -- aggregated/calculated , 'XSG_01_xG7' -- aggregated/calculated , 'XSG_24' -- no longer used , 'XSG_242526_ny0' -- no longer used , 'XSG_242526_ny1' -- no longer used , 'XSG_25n27_ny1' -- no longer used , 'XSG_25n27_ny2' -- no longer used , 'XSG_25n27_ny3' -- no longer used ) /* < 1st long SET: current data in database *****************************************************************************************************/ --select * from [#LCD_DB] where [QA_std] IN ( 'GRI_08' , 'SHI_11' ) AND [Answer_value] = 0.50 /**************************************************************************************************************************************************/ /**************************************************************************************************************************************************/ IF OBJECT_ID ('tempdb..#LND_CC') IS NOT NULL DROP TABLE #LND_CC /**************************************************************************************************************************************************/ /* > 2nd long SET: new data to be entered comparable to data in database ************************************************************************/ SELECT [entity] = 'Not Yet Stored in DB' , [link_fk] = 0 , [Nation_fk] , [Locality_fk] , [Religion_fk] , [Region5] , [Region6] , [Ctry_EditorialName] , [Locality] , [Religion] , [Question_Year] = (SELECT DISTINCT MAX([Question_Year]) FROM [#LCD_DB] ) + 1 -- past year + 1 , [QS_fk] , [QA_std] , [QW_std] , [Answer_value] = CASE 
/*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'GRI_08_a' OR [QA_std] LIKE 'GRI_10_0[1-3]' OR [QA_std] LIKE 'GRI_11_01[a-b]' OR [QA_std] LIKE 'GRI_11_[0-1][0-9]' OR [QA_std] LIKE 'GRI_19_[b-f&x]' -- no x OR [QA_std] LIKE 'GRI_20_01x_01[a-b]' OR [QA_std] LIKE 'GRI_20_01x_[0-1][0-9]' -- OR [QA_std] LIKE 'GRI_0[1-2]_filter' -- OR [QA_std] LIKE 'GRX_34_02_[a-g]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ OR [QA_std] LIKE 'SHI_01_[b-f&x]' -- no x OR [QA_std] LIKE 'SHI_01_x_01[a-b]' OR [QA_std] LIKE 'SHI_01_x_[0-1][0-9]' OR [QA_std] LIKE 'SHI_04_[b-f&x]' -- no x OR [QA_std] LIKE 'SHI_05_[c-f&x]' -- no x -- OR [QA_std] LIKE 'SHI_09_n' -- OR [QA_std] LIKE 'SHI_10_n' -- OR [QA_std] LIKE 'SHI_11_[a-b]_n' OR [QA_std] LIKE 'XSG_S_99_0[1-6]' -- NEW ROW/STATEMENT THEN 0.00 /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'GRX_29_0[2-5]' THEN 1.00 /*------------------------------------------------------------------------------------------------------------------------------------------------*/ ELSE NULL END /*------------------------------------------------------------------------------------------------------------------------------------------------*/ , [answer_wording] = '' , [answer_wording_std] = CASE /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'GRI_08_a' OR [QA_std] LIKE 'GRI_10_0[1-3]' OR [QA_std] LIKE 'GRI_11_01[a-b]' OR [QA_std] LIKE 'GRI_11_[0-1][0-9]' OR [QA_std] LIKE 'GRI_20_01x_01[a-b]' OR [QA_std] LIKE 'GRI_20_01x_[0-1][0-9]' -- OR [QA_std] LIKE 'GRI_0[1-2]_filter' -- OR [QA_std] LIKE 'GRX_34_02_[a-g]' 
/*------------------------------------------------------------------------------------------------------------------------------------------------*/ OR [QA_std] LIKE 'SHI_01_x_01[a-b]' OR [QA_std] LIKE 'SHI_01_x_[0-1][0-9]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ OR [QA_std] LIKE 'GRX_29_0[2-5]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ THEN 'No' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'GRI_19_[b-f&x]' -- no x OR [QA_std] LIKE 'SHI_01_[b-f&x]' -- no x OR [QA_std] LIKE 'SHI_04_[b-f&x]' -- no x OR [QA_std] LIKE 'SHI_05_[c-f&x]' -- no x -- OR [QA_std] LIKE 'SHI_09_n' -- OR [QA_std] LIKE 'SHI_10_n' -- OR [QA_std] LIKE 'SHI_11_[a-b]_n' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ THEN '0 to N incidents' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'XSG_S_99_0[1-6]' -- NEW ROW/STATEMENT /*------------------------------------------------------------------------------------------------------------------------------------------------*/ THEN 'No supplemental source' -- ADD AS a new STANDARD ANSWER /*------------------------------------------------------------------------------------------------------------------------------------------------*/ ELSE '' END /*------------------------------------------------------------------------------------------------------------------------------------------------*/ , [Question_fk] = 0 , [Answer_fk] = 0 , [Notes] = '' , [DB] = 1 
/**************************************************************************************************************************************************/ INTO [#LND_CC] /**************************************************************************************************************************************************/ FROM [#LCD_DB] /**************************************************************************************************************************************************/ WHERE [Question_Year] = (SELECT DISTINCT MAX([Question_Year]) FROM [#LCD_DB] ) -- included questions for past year /* < 2nd long SET: new data to be entered comparable to data in database ************************************************************************/ /**************************************************************************************************************************************************/ IF OBJECT_ID ('tempdb..#LYD_CC') IS NOT NULL DROP TABLE #LYD_CC /**************************************************************************************************************************************************/ /* > 3rd long SET: former year's data to be used if necessary in GRI_01 & GRI_02 ****************************************************************/ SELECT [entity] = '' , [link_fk] = 0 , [Nation_fk] , [Locality_fk] , [Religion_fk] , [Region5] , [Region6] , [Ctry_EditorialName] , [Locality] , [Religion] , [Question_Year] = [Question_Year] + 1 -- past year + 1 , [QS_fk] , [QA_std] = [QA_std] + '_yBe' , [QW_std] = [QW_std] + ' (*the previous year)' , [Answer_value] , [answer_wording] , [answer_wording_std] , [Question_fk] = 0 , [Answer_fk] = 0 , [Notes] = '' , [DB] = -1 /**************************************************************************************************************************************************/ INTO [#LYD_CC] /**************************************************************************************************************************************************/ FROM 
[#LCD_DB] /**************************************************************************************************************************************************/ WHERE [Question_Year] = (SELECT DISTINCT MAX([Question_Year]) FROM [#LCD_DB] ) -- included questions for past year AND [QA_std] LIKE 'GRI_0[1-2]' /* < 3rd long SET: former year's data to be used if necessary in GRI_01 & GRI_02 ****************************************************************/ /**************************************************************************************************************************************************/ IF OBJECT_ID ('tempdb..#LND_NQ') IS NOT NULL DROP TABLE #LND_NQ /**************************************************************************************************************************************************/ /* > 4th long SET: new data to be entered from added questions **********************************************************************************/ SELECT [entity] = 'Not Yet Stored in DB' , [link_fk] = 0 , [Nation_fk] , [Locality_fk] = NULL , [Religion_fk] = NULL , [Region5] , [Region6] , [Ctry_EditorialName] , [Locality] = 'not detailed' , [Religion] = 'not detailed' , [Question_Year] , [QS_fk] = 0 , [QA_std] , [QW_std] , [Answer_value] = CASE /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'GRI_0[1-2]_filter' -- OR [QA_std] LIKE 'GRI_08_a' -- OR [QA_std] LIKE 'GRI_10_0[1-3]' -- OR [QA_std] LIKE 'GRI_11_01[a-b]' -- OR [QA_std] LIKE 'GRI_11_[0-1][0-9]' -- OR [QA_std] LIKE 'GRI_19_[b-f&x]' -- OR [QA_std] LIKE 'GRI_20_01x_01[a-b]' -- OR [QA_std] LIKE 'GRI_20_01x_[0-1][0-9]' OR [QA_std] LIKE 'GRX_34_02_[a-g]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ OR [QA_std] LIKE 'SHI_01_[b-f&x]' -- x -- OR [QA_std] LIKE 'SHI_01_x_01[a-b]' -- OR [QA_std] LIKE 
'SHI_01_x_[0-1][0-9]' OR [QA_std] LIKE 'SHI_04_[b-f&x]' -- x OR [QA_std] LIKE 'SHI_05_[c-f&x]' -- x OR [QA_std] LIKE 'SHI_09_n' OR [QA_std] LIKE 'SHI_10_n' OR [QA_std] LIKE 'SHI_11_[a-b]_n' THEN 0.00 /*------------------------------------------------------------------------------------------------------------------------------------------------*/ -- WHEN [QA_std] LIKE 'GRX_29_0[2-5]' -- THEN 1.00 /*------------------------------------------------------------------------------------------------------------------------------------------------*/ ELSE NULL END /*------------------------------------------------------------------------------------------------------------------------------------------------*/ , [answer_wording] = '' , [answer_wording_std] = CASE /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 'GRI_0[1-2]_filter' -- OR [QA_std] LIKE 'GRI_08_a' -- OR [QA_std] LIKE 'GRI_10_0[1-3]' -- OR [QA_std] LIKE 'GRI_11_01[a-b]' -- OR [QA_std] LIKE 'GRI_11_[0-1][0-9]' -- OR [QA_std] LIKE 'GRI_19_[b-f&x]' -- OR [QA_std] LIKE 'GRI_20_01x_01[a-b]' -- OR [QA_std] LIKE 'GRI_20_01x_[0-1][0-9]' OR [QA_std] LIKE 'GRX_34_02_[a-g]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ -- OR [QA_std] LIKE 'SHI_01_x_01[a-b]' -- OR [QA_std] LIKE 'SHI_01_x_[0-1][0-9]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ -- OR [QA_std] LIKE 'GRX_29_0[2-5]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ THEN 'No' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE 
'SHI_01_[b-f&x]' -- x OR [QA_std] LIKE 'SHI_04_[b-f&x]' -- x OR [QA_std] LIKE 'SHI_05_[c-f&x]' -- x OR [QA_std] LIKE 'SHI_09_n' OR [QA_std] LIKE 'SHI_10_n' OR [QA_std] LIKE 'SHI_11_[a-b]_n' -- OR [QA_std] LIKE 'GRI_19_[b-f]' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ THEN '0 to N incidents' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ ELSE '' END /*------------------------------------------------------------------------------------------------------------------------------------------------*/ , [Question_fk] = 0 , [Answer_fk] = 0 , [Notes] = '' , [DB] = CASE /*------------------------------------------------------------------------------------------------------------------------------------------------*/ WHEN [QA_std] LIKE '%_filter' THEN -1 ELSE 0 END /*------------------------------------------------------------------------------------------------------------------------------------------------*/ /**************************************************************************************************************************************************/ INTO [#LND_NQ] /**************************************************************************************************************************************************/ FROM /*------------------------------------------------------------------------------------------------------------------------------------------------*/ (SELECT DISTINCT [Nation_fk] , [Region5] , [Region6] , [Ctry_EditorialName] , [Question_Year] FROM [#LYD_CC] ) LYD /*------------------------------------------------------------------------------------------------------------------------------------------------*/ CROSS JOIN 
/*------------------------------------------------------------------------------------------------------------------------------------------------*/ ( /*--- filter questions ---------------------------------------------------------------------------------------------------------------------------*/ SELECT QA_std = 'GRI_01_filter' , QW_std = '(If GRI_01_x is "yes") Did the change in the constitution alter any statement ' + 'of specific provision "freedom of religion"?' UNION ALL SELECT QA_std = 'GRI_02_filter' , QW_std = '(If GRI_01_x is "yes") Did the change in the constitution alter any ' + 'stipulation that appears to qualify or substantially contradict the concept ' + 'of "religious freedom"?' UNION ALL SELECT QA_std = 'GRI_19_filter' , QW_std = 'Were there incidents when any level of government used force toward religious ' + 'groups that resulted in individuals being killed, physically abused, ' + 'imprisoned, detained or displaced from their homes, or having their personal ' + 'or religious properties damaged or destroyed?' UNION ALL SELECT QA_std = 'SHI_04_filter' , QW_std = 'Were religion-related terrorist groups active in the country?' UNION ALL SELECT QA_std = 'SHI_05_filter' , QW_std = 'Was there a religion-related war or armed conflict in the country (including ' + 'ongoing displacements from previous wars)?' UNION ALL SELECT QA_std = 'XSG_S_99_filter' , QW_std = 'How many supplemental sources have been used?' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ /*--- persisted questions ------------------------------------------------------------------------------------------------------------------------*/ -- UNION ALL SELECT QA_std = 'GRI_19_x' , QW_std = '0 - Total N of incidents resulting from government force' /* already there... should others be there? 
*/ UNION ALL SELECT QA_std = 'SHI_01_x' , QW_std = '0 - Total N of incidents motivated by religious hatred or bias' UNION ALL SELECT QA_std = 'SHI_01_summary_b' , QW_std = '0-6 Types of incidents motivated by religious hatred or bias' UNION ALL SELECT QA_std = 'SHI_04_x' , QW_std = '0 - Total N of incidents resulting from religion related terrorism' UNION ALL SELECT QA_std = 'SHI_05_x' , QW_std = '0 - Total N of incidents resulting from religiously-related war' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ /*--- NEW questions ------------------------------------------------------------------------------------------------------------------------------*/ UNION ALL SELECT QA_std = 'GRX_34_01' , QW_std = 'Does the government provide exemptions for religious groups from otherwise ' + 'universal laws?' UNION ALL SELECT QA_std = 'GRX_34_02' , QW_std = 'If so, what areas do the exemptions cover? 
(check all that apply)' UNION ALL SELECT QA_std = 'GRX_34_02_a' , QW_std = 'Work on religious holidays (for example, Christians being allowed to take off work ' + 'on Sundays)' UNION ALL SELECT QA_std = 'GRX_34_02_b' , QW_std = 'Anti-discrimination laws (for example, giving religious businesses the right to not ' + 'serve or provide benefits to same-sex couples)' UNION ALL SELECT QA_std = 'GRX_34_02_c' , QW_std = 'Military service (for example, allowing religious groups that oppose war or national ' + 'service to not participate in military training or activities)' UNION ALL SELECT QA_std = 'GRX_34_02_d' , QW_std = 'Taxes (for example, allowing religious groups opposed to supporting states to not' + ' pay taxes)' UNION ALL SELECT QA_std = 'GRX_34_02_e' , QW_std = 'Health care provision (for example, allowing doctors to opt out of providing ' + 'contraception or abortion services, or allowing religious organizations to exclude ' + 'contraception or abortion services from health insurance coverage)' UNION ALL SELECT QA_std = 'GRX_34_02_f' , QW_std = 'Education (for example, sending children to public schools)' UNION ALL SELECT QA_std = 'GRX_34_02_g' , QW_std = 'Unknown (the sources indicate religious groups are granted exemptions, but provide ' + 'no details)' UNION ALL SELECT QA_std = 'GRX_34_03' , QW_std = 'Were there any cases in which individuals challenged the lack of religious' + ' exemptions from otherwise universal laws?' UNION ALL SELECT QA_std = 'GRX_35' , QW_std = 'Does the government restrict individuals'' access to the internet?' UNION ALL SELECT QA_std = 'GRX_35_01' , QW_std = 'Does the government restrict individuals'' access to the internet through arrests based' + ' on internet activity?' UNION ALL SELECT QA_std = 'GRX_35_02' , QW_std = 'Does the government restrict individuals'' access to the internet through censoring ' + 'of websites?' 
UNION ALL SELECT QA_std = 'SHI_09_n' , QW_std = 'How many incidents in which individuals used violence or the threat of violence' + '—including so-called honor killings—to try to enforce religious norms were directed' + ' against women?' UNION ALL SELECT QA_std = 'SHI_10_n' , QW_std = 'How many incidents in which individuals were assaulted or displaced from their ' + 'homes in retaliation for religious activities considered offensive or threatening ' + 'to the majority faith were directed against women?' UNION ALL SELECT QA_std = 'SHI_11_a_n' , QW_std = 'How many incidents occurred in which women were harassed for violating secular ' + 'dress norms?' UNION ALL SELECT QA_std = 'SHI_11_b_n' , QW_std = 'How many incidents occurred in which women were harassed for violating religious ' + 'dress codes?' /*------------------------------------------------------------------------------------------------------------------------------------------------*/ ) Q /*------------------------------------------------------------------------------------------------------------------------------------------------*/ /* < 4th long SET: new data to be entered from added questions **********************************************************************************/ /**************************************************************************************************************************************************/ /**************************************************************************************************************************************************/ /**************************************************************************************************************************************************/ IF (SELECT COUNT([TABLE_NAME]) FROM [INFORMATION_SCHEMA].[TABLES] WHERE [TABLE_NAME] = 'AllLongData' ) = 1 DROP TABLE [AllLongData] /*** All Long Data *******************************************************************************************************************************/ SELECT 
[entity] , [link_fk] , [Nation_fk] , [Locality_fk] , [Religion_fk] , [Region5] , [Region6] , [Ctry_EditorialName] , [Locality] , [Religion] , [Question_Year] , [QS_fk] , [QA_std] , [QW_std] , [Answer_value] , [answer_wording] = CASE WHEN [answer_wording] = '' THEN NULL ELSE [answer_wording] END , [answer_wording_std] = CASE WHEN [answer_wording_std] = '' THEN NULL ELSE [answer_wording_std] END , [Question_fk] , [Answer_fk] , [Notes] , [DB] /**************************************************************************************************************************************************/ INTO [AllLongData] /**************************************************************************************************************************************************/ FROM /**************************************************************************************************************************************************/ ( SELECT * FROM [#LCD_DB] UNION SELECT * FROM [#LND_CC] UNION SELECT * FROM [#LYD_CC] UNION SELECT * FROM [#LND_NQ] ) THE4SETS /**************************************************************************************************************************************************/ /*** All Long Data *******************************************************************************************************************************/ /**************************************************************************************************************************************************/ GO -- select * from [AllLongData]
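The 2nd and 4th SETs above derive default [Answer_value] entries by matching [QA_std] against T-SQL LIKE patterns inside a CASE expression. As a minimal, hypothetical sketch (not part of the script), that defaulting rule can be mirrored in Python by approximating LIKE with fnmatch; the pattern list below is an illustrative subset only:

```python
from fnmatch import fnmatchcase

def tsql_like_to_fnmatch(pattern: str) -> str:
    """Translate a T-SQL LIKE pattern into an fnmatch pattern:
    '%' -> '*' and '_' -> '?'; bracket classes such as [0-1] carry over unchanged."""
    out, in_class = [], False
    for ch in pattern:
        if ch == '[':
            in_class = True
            out.append(ch)
        elif ch == ']':
            in_class = False
            out.append(ch)
        elif not in_class and ch == '%':
            out.append('*')
        elif not in_class and ch == '_':
            out.append('?')
        else:
            out.append(ch)
    return ''.join(out)

# Illustrative subset of the patterns used in the CASE expression of the 2nd SET.
ZERO_PATTERNS = ['GRI_08_a', 'GRI_10_0[1-3]', 'GRI_11_[0-1][0-9]']
ONE_PATTERNS = ['GRX_29_0[2-5]']

def default_answer_value(qa_std: str):
    """Mirror of: CASE WHEN ... THEN 0.00 WHEN ... THEN 1.00 ELSE NULL END.
    (Case-sensitive here, unlike a typical case-insensitive SQL Server collation.)"""
    if any(fnmatchcase(qa_std, tsql_like_to_fnmatch(p)) for p in ZERO_PATTERNS):
        return 0.00
    if any(fnmatchcase(qa_std, tsql_like_to_fnmatch(p)) for p in ONE_PATTERNS):
        return 1.00
    return None  # NULL
```

Note that '_' is a single-character wildcard in LIKE, so a pattern such as 'GRI_08_a' also matches codes where the underscore positions hold other characters; the script relies on the underscores in the stored codes matching themselves.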
code
Nkawie (Ash), Dec. 4, GNA - The Special Voting in the Atwima-Nwabiagya and Atwima-Mponua Constituencies, both in Ashanti, started smoothly, without any hitches. At the Nkawie Fire Service Polling Station, about 104 security personnel, election officials and some observers had cast their votes by 1000 hours. At Nyinahin in the Atwima-Mponua Constituency, 50 special voters had cast their votes by 1000 hours. Unlike the Atwima-Nwabiagya Constituency, where only New Patriotic Party (NPP) and National Democratic Congress (NDC) polling agents were present at the polling station, all three parties contesting the parliamentary seat, namely the NDC, NPP and Convention People's Party (CPP), had their polling agents at the polling station. Meanwhile, the Atwima-Mponua District Office of the Electoral Commission (EC) has organized a day's workshop for political party agents on the need to ensure violence-free, fair and transparent elections next Tuesday. The People's National Convention (PNC), CPP, NDC and NPP party agents attended the workshop, and Mr Yaw Antwi Afoakwa, the Deputy Returning Officer, who addressed them, advised the polling agents to comport themselves during and after the elections. He also impressed upon them the need to sign all documents to avoid any unpleasant consequences and told them not to consume any alcohol before coming to the polling stations.
english
TRENTON – Attorney General Gurbir S. Grewal has joined a multi-state amicus brief filed in federal court that supports the city and county of San Francisco, as well as the County of Santa Clara, in their legal fight against the Trump Administration’s threatened reduction in federal funding to jurisdictions with fair and welcoming policies. Presently, the federal government is prohibited – by virtue of a permanent injunction issued by a U.S. District Court Judge in California – from enforcing Executive Order provisions that threaten to broadly strip federal funds from states and localities the Trump Administration designates as “sanctuary jurisdictions” for immigrants. The Trump Administration is now appealing that permanent injunction before the U.S. Court of Appeals for the Ninth Circuit. The multi-state brief joined by Attorney General Grewal asks the Ninth Circuit to affirm the district court’s decision. It urges the Ninth Circuit to uphold the permanent injunction and, in doing so, reject what it describes as an attempt by the Administration to “coerce” state and local jurisdictions into lockstep with the federal government on immigration enforcement policy. In Fiscal Year 2017, New Jersey received billions of dollars in federal funding that could potentially be threatened by the Executive Order, including millions of dollars in federal Byrne Grant funds – some used by the State, but most “passed through” to local jurisdictions – to fund public safety initiatives. “Threatening to withhold federal funding that helps protect communities across this state is a dangerous political game that undermines public safety for all New Jerseyans. So is the practice of trying to incite fear by arguing, as the Administration has done, that fair and welcoming jurisdictions invite crime,” said Attorney General Grewal. 
“As a former County Prosecutor, I’ve seen firsthand that you can keep people safe while also treating your immigrant communities with dignity and respect, and I’m committed to doing just that,” Grewal said. The Attorney General said he is disappointed to see the federal government trying to use federal grant funding – much of it expended each year to pay for initiatives that protect both communities and police officers – as leverage in the immigration enforcement debate. Noting that the participating states – New Jersey included – collectively receive billions of dollars in federal grants each year that could be affected, the multi-state amicus brief supports San Francisco and Santa Clara’s claims that the Trump Executive Order is unconstitutional. It argues that continuing the permanent injunction barring Trump’s Executive Order is essential to avoiding harm to the states and local jurisdictions. In addition to highlighting flaws in the federal government’s arguments, the multi-state amicus brief highlights the public safety benefits arising from policies focused on crime prevention as opposed to enforcement of federal immigration law. In addition to California Attorney General Xavier Becerra and Attorney General Grewal, the Attorneys General of Connecticut, Hawaii, Illinois, Maryland, Massachusetts, New Mexico, New York, Oregon, Vermont, Washington and the District of Columbia are party to the amicus brief.
english
Gogunda shamed: five-year-old girl raped by her own grandfather (udaipurtimes.com, Hindi crime section). In Naya Guda village of Gogunda in Udaipur district on Thursday evening, a depraved man shamed and stained Gogunda by raping a five-year-old girl who is his granddaughter by relation. The bleeding child has been admitted to MB Hospital in Udaipur, while the police have arrested the accused and registered a case under the POCSO Act. According to the information received, the girl had been with her parents at their field, from where she came home with her grandmother. She was playing there with other children when the brother of her grandfather arrived, picked the child up and carried her to a secluded spot, where he molested her and committed the heinous act, leaving the child bleeding. On learning of the incident, her parents brought her to Udaipur and admitted her to MB Hospital, where she has been kept under the care of senior doctors. In view of the seriousness of the case, ASP Anant Kumar, Deputy Superintendent Prem Dhande and Gogunda station house officer Dhanpat Singh came to the hospital, took details of the incident from the child and her parents, conducted a raid the same night and took the 44-year-old accused into custody. The child also identified the accused. The police have registered a case against him under the POCSO Act and begun their investigation.
hindi
Chris Powell’s Blues travel to Yorkshire this weekend to take on Bradford City at the Northern Commercials Stadium with eyes on a league double over the Bantams. Southend won the reverse fixture against Bradford 2-0 and will be hoping for the same scoreline again tomorrow afternoon, which would be a repeat of their last visit to Bradford back in April. But they come up against a Bradford side who have found some form recently, winning four of their last six league games to help ease their relegation fears. Bradford still sit in the relegation zone, one point off safety, but they’ll be keen to return to winning ways with a fifth straight home win after last weekend’s 3-0 defeat to Barnsley at Oakwell. Blues’ first signing of the January transfer window, Sam Hart, will go straight into the squad to face the Bantams and provide a much-needed boost to Chris Powell’s side. Powell is also hopeful that Taylor Moore will be available again after he returned to training this week following a hamstring injury that saw him miss out last weekend. Fellow defenders Harry Lennon and Jason Demetriou are continuing their recoveries but are unlikely to feature this month, while goalkeeper Mark Oxley remains out with a back problem. Bradford will have new signing Jermaine Anderson available after the midfielder joined this week on loan from Peterborough United until the end of the season. Adam Chicksen is unavailable as he serves the first of a two-game ban, with Sean Scannell and Kevin Mellor out long-term with back and knee injuries respectively. “We need to make sure we are fully focused and I’m sure we’ll bounce back from the defeat. We have been given an initial allocation of 505 tickets, with more available if required. These tickets are situated in the East Stand. There are also three pairs of wheelchair and carer tickets for this fixture. Disabled fans pay the age relevant price and must be in receipt of DLA at the higher level, or middle rate for mobility. 
Tickets will come off sale on Friday 18 January at midday. Please note that adult tickets increase to £25 on the day so we encourage supporters to buy in advance. It will be cash only at the turnstiles. Travel details for the game can be found here.
english
महाराष्ट्र के चंद्रपुर जिले में राज्य के मंत्री सुधीर मुनगंटीवार की कार को बांस से लदे एक ट्रक ने टक्कर मार दी, हालांकि घटना में मंत्री बाल-बाल बच गये. पुलिस ने शनिवार को यह जानकारी दी. अधिकारी ने बताया कि घटना शुक्रवार शाम की है और उस वक्त मंत्री कार में थे. हालांकि घटना में मंत्री या कोई और हताहत नहीं हुआ. उन्होंने बताया कि मंत्री की कार उमरी पोतदार और बल्लारशाह (बल्लारपुर) के तेंभुरना की ओर जा रही थी तभी यह घटना हुई. उन्होंने बताया, बांस से लदा ट्रक बमनी फाटा से आ रहा था. घटना में कोई हताहत नहीं हुआ, हालांकि वाहन का पिछला हिस्सा क्षतिग्रस्त हो गया.' उन्होंने बताया, घटना के बाद मुनगंटीवार अपने आधिकारिक कार्यक्रम के लिये रवाना हो गये. ड्राइवर को गिरफ्तार कर लिया गया है.' इलेक्शन रेसल्ट्स २०१९ लाइव उपकेट्स: चुनाव नतीजों में कौन किस पर पड़ेगा भारी, आज आएंगे चुनाव परिणाम
hindi
/* * Copyright (c) 2012, United States Government, as represented by the Secretary of Health and Human Services. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above * copyright notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * * Neither the name of the United States Government nor the * names of its contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL THE UNITED STATES GOVERNMENT BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ package gov.hhs.fha.nhinc.patientdiscovery.entity.proxy; import gov.hhs.fha.nhinc.common.nhinccommon.AssertionType; import gov.hhs.fha.nhinc.common.nhinccommon.NhinTargetCommunitiesType; import org.hl7.v3.PRPAIN201305UV02; import org.hl7.v3.RespondingGatewayPRPAIN201306UV02ResponseType; /** * * @author Neil Webb */ public class EntityPatientDiscoveryProxyNoOpImpl implements EntityPatientDiscoveryProxy { public RespondingGatewayPRPAIN201306UV02ResponseType respondingGatewayPRPAIN201305UV02(PRPAIN201305UV02 pdRequest, AssertionType assertion, NhinTargetCommunitiesType targetCommunities) { return new RespondingGatewayPRPAIN201306UV02ResponseType(); } }
code
تمہ پتہ چھۆل نہ تمۍ سندۍ روٗدن
kashmiri
आलिया भट्ट से शादी से पहले रणबीर कपूर कर रहे हैं ये अहम काम? मॉम नीतू सिंह ने दी सलाह! | लाटेस्ली हिन्दी रणबीर कपूर (रंजबीर कपूर) और आलिया भट्ट (एलिया भट्ट) का प्यार इन दिनों बॉलीवुड की गलियारों में चर्चा का विषय बना हुआ है. ये दोनों भले ही अपने वर्क कमिटमेंट्स को पूरा करने में व्यस्त हैं लेकिन एक दूसरे के लिए ये समय निकालकर पार्टी या फिर आउटडोर एक साथ टाइम स्पेंड करते हुए नजर आते हैं. रणबीर और आलिया के रिलेशनशिप की खबरों के बाद अब इनकी शादी को लेकर भी कयास लगाया जा रहा है. अब मुंबई मिरर में छपी खबर के अनुसार, रणबीर और आलिया इस साल शादी नहीं कर रहे हैं. साल २०१९ में रणबीर और आलिया अपनी फिल्म 'ब्रह्मास्त्र' (ब्रह्मस्त्र) पर फोकस कर रहे हैं और ऐसे में वो शादी की प्लानिंग नहीं कर रहे हैं. दिसंबर, २०१९ में 'ब्रह्मास्त्र' रिलीज होनी है और ऐसे में ये दोनों इसके काम के जुटे हुए हैं. अब बात करें रणबीर कपूर की तो वो बांद्रा के अपार्टमेंट में अपने माता-पिता के साथ रह रहे हैं. इन दिनों वो पिता ऋषि कपूर (ऋषि कपूर) का उपचार करवाने न्यूयॉर्क में हैं. मीडिया रिपोर्ट के अनुसार, रणबीर आलिया के साथ शादी करने से पहले अपना एक अलग घर बनवाना चाहते हैं और ऐसे में वो एक नए अपार्टमेंट की तलाश कर रहे हैं. इतना ही नहीं, रणबीर की मॉम नीतू सिंह (नीतु सिंह) भी उन्हें इसे लेकर सलाह दे रही हैं. आलिया से शादी करने से पहले अपना नया मकान लेने की सलाह भी मॉम नीतू सिंह ने ही उन्हें दी है.ऐसे में अब ये बात साफ है कि रणबीर अपनी पर्सनल लाइफ में भी चीजों को सेट करने के बाद ही आलिया के साथ सात फेरे लेंगे. ११ महीने ११ दिन बाद न्यूयॉर्क से इलाज कराके भारत लौटे ऋषि कपूर, मुंबई एअरपोर्ट पर पत्नी के साथ आए नजर
hindi
package net.minecraft.world.gen; import java.util.Random; public class NoiseGeneratorPerlin extends NoiseGenerator { private NoiseGeneratorSimplex[] field_151603_a; private int field_151602_b; private static final String __OBFID = "CL_00000536"; public NoiseGeneratorPerlin(Random p_i45470_1_, int p_i45470_2_) { this.field_151602_b = p_i45470_2_; this.field_151603_a = new NoiseGeneratorSimplex[p_i45470_2_]; for (int var3 = 0; var3 < p_i45470_2_; ++var3) { this.field_151603_a[var3] = new NoiseGeneratorSimplex(p_i45470_1_); } } public double func_151601_a(double p_151601_1_, double p_151601_3_) { double var5 = 0.0D; double var7 = 1.0D; for (int var9 = 0; var9 < this.field_151602_b; ++var9) { var5 += this.field_151603_a[var9].func_151605_a(p_151601_1_ * var7, p_151601_3_ * var7) / var7; var7 /= 2.0D; } return var5; } public double[] func_151599_a(double[] p_151599_1_, double p_151599_2_, double p_151599_4_, int p_151599_6_, int p_151599_7_, double p_151599_8_, double p_151599_10_, double p_151599_12_) { return this.func_151600_a(p_151599_1_, p_151599_2_, p_151599_4_, p_151599_6_, p_151599_7_, p_151599_8_, p_151599_10_, p_151599_12_, 0.5D); } public double[] func_151600_a(double[] p_151600_1_, double p_151600_2_, double p_151600_4_, int p_151600_6_, int p_151600_7_, double p_151600_8_, double p_151600_10_, double p_151600_12_, double p_151600_14_) { if (p_151600_1_ != null && p_151600_1_.length >= p_151600_6_ * p_151600_7_) { for (int var16 = 0; var16 < p_151600_1_.length; ++var16) { p_151600_1_[var16] = 0.0D; } } else { p_151600_1_ = new double[p_151600_6_ * p_151600_7_]; } double var21 = 1.0D; double var18 = 1.0D; for (int var20 = 0; var20 < this.field_151602_b; ++var20) { this.field_151603_a[var20].func_151606_a(p_151600_1_, p_151600_2_, p_151600_4_, p_151600_6_, p_151600_7_, p_151600_8_ * var18 * var21, p_151600_10_ * var18 * var21, 0.55D / var21); var18 *= p_151600_12_; var21 *= p_151600_14_; } return p_151600_1_; } }
code
Get the Healthiest and Most Thorough House Cleaning Services Available in Somerset County NJ! When people look for a professional House cleaning services, it's often after they've realized that they just don't have enough time to maintain the cleanliness of their home themselves. Their lifestyles and work schedules keep them constantly on the go, yet they do not want their home's untidy appearance to reflect their way of life. At the end of the busy work day, they want to come home to a clean and well-maintained house.
english
- दीपक के घर देर रात सिपाहियों पर फायरिंग, अलीगढ़ न्यूज इन हिन्दी -अमर उजाला बेहतर अनुभव के लिए अपनी सेटिंग्स में जाकर हाई मोड चुनें। अलीगढ़ दीपक के घर देर रात सिपाहियों पर फायरिंग अलीगढ़। महानगर के विष्णुपुरी इलाके में हुई दीपक मित्तल की हत्या अभी इलाका पुलिस के लिए सिरदर्द बनी हुई। शुक्रवार की सुबह दीपक की पत्नी ने खुद की सुरक्षा को खतरा बताया तो देर शाम उसके घर सुरक्षा ड्यूटी दे रहे सिपाहियों पर फायरिंग हो गई। इस हमले में एक सिपाही खुद के घायल होने की बात कह रहा है मगर उसके बाजू में जो चोट का निशान है, वह काफी पुराना है। घटना से इलाके में अफरातफरी मच गई। खबर पर एसओ विनोद पायल भी पहुंच गए, जो फायरिंग की घटना को सिरे से संदिग्ध बता रहे हैं। क्वार्सी थाने के सिपाही छत्रपाल सिंह और बलवीर सिंह की ड्यूटी बेगमबाग में मृतक दीपक मित्तल के घर पर सुरक्षा की दृष्टि से लगी है। शुक्रवार देर रात दोनों सिपाही घर के बाहर बनी चक्की में शटर खोलकर लेटे हुए थे, रात को करीब साढ़े ग्यारह बजे अचानक कु छ लोगों ने चक्की में फायरिंग कर दी। सिपाहियों के मुताबिक एक साथ चार फायर किए गए और हमलावर भाग गए। फायरिंग की आवाज पर दीपक के परिजन और आसपास के लोग भी जाग गए। खबर मिलने पर पुलिस भी पहुंच गई। सिपाही छत्रपाल अपने सीधे हाथ की बाजू में चोट का निशान बता रहा है, मगर उसका एक भी कपड़ा फटा नहीं है। थाना प्रभारी विनोद पायल का कहना है कि यह सिपाही हाल ही में अपनी शादी होने के कारण लगातार छुट्टी मांग रहा था, जिसे छुट्टी नहीं दी गई थी, आशंका है कि इसी के चलते फायरिंग की घटना की साजिश रची गई। चक्की के कमरे में एक भी कार्टेज का खोखा नहीं मिला है। दूसरे सिपाही का कहना है कि पटाखे की आवाज सुनी है, न तो फायर करने वालों को देखा और न बाहर से देखा है।
hindi
JK Wranglers are so popular that they're getting to be ho-hum. We even catch ourselves commenting, "Hmmm, just another lifted JK." James Fonnesbeck, owner of Expedition One, wanted to build a JK two-door that would attract attention at shows and on the trail. He did this by building his 2011 JK Wrangler as a modern "Rat Patrol" Jeep, complete with minigun. You might remember the 1960s TV series The Rat Patrol starring Christopher George about WWII Long Range Desert Group allied soldiers driving two 50 Browning AN/M2-equipped Jeeps and raising havoc behind the lines of Rommel's Afrika Korps. The JK started as a 2011 Sport model with a 3.8L V-6 and 42RE automatic transmission. A TeraFlex Pro LCG long-arm suspension with Fox 2.0 reservoir shocks allows the Jeep to clear the 37-inch Goodyear Wrangler MT/R with Kevlar tires and to work well on the trail. The weak OE Dana 30 frontend was replaced with TeraFlex's stellar new Tera44 front housing with RCV CV axles, 5.13 gears, and Eaton ELocker. A full complement of Expedition One products protects the Rat Patrol JK. The full-width front bumper easily carries a Superwinch Talon 12.5i winch. The Rubi Skid protects a Rubicon SmartBar sway-bar disconnect that isn't on the Rat Patrol JK. Expedition One Rocker Guard steps protect the rocker panels. The Expedition One rear bumper with swingaway tire carrier was the original one-touch carrier. By opening the tailgate latch, the tire swings away with the gate. Expedition One Geri cans mount securely in their can carrier. The P3 Airsoft Minigun is based on the GE Vulcan Minigun, fires 50 rounds per second, and is lethal within 50 feet when it's loaded with BBs. We saw it easily destroy a junkyard television set. Magpul Masada ACR Airsoft rifles are carried in Kydex holsters and complete the modern-day Rat Patrol image. Of course, if the Zombie Apocalypse happens, all the airsoft weapons can be exchanged for ones that are more powerful. 
James' Expedition One Rat Patrol JK is a real eye catcher. Just driving it to shoot the photos showed us that everybody does a double take and stares as the Jeep passes by. We're sure its show performance is the same. This isn't your normal, ho-hum JK Wrangler.
english
\begin{document} \title{ Forecasting Stock Options Prices via the Solution of an Ill-Posed Problem for the Black-Scholes Equation} \author{Michael V. Klibanov$^{\ast }$, Aleksander A. Shananin$^{\circ }$, Kirill V. Golubnichiy$^{\bullet }$ \\ and Sergey M. Kravchenko$^{\circ }$ \and $^{\ast }$Department of Mathematics and Statistics, \and University of North Carolina at Charlotte, Charlotte, NC 28223, USA \and $^{\circ }$Department of Analysis of Systems and Solutions \and Moscow Institute of Physics and Technology, Moscow, 117303, Russia \and $^{\bullet }$Department of Mathematics \and University of Washington, Seattle, WA 98195, USA \and Emails: [email protected], [email protected], \and [email protected], \and [email protected]} \date{} \maketitle \begin{abstract} In the previous paper (Inverse Problems, 32, 015010, 2016), a new heuristic mathematical model was proposed for accurate forecasting of prices of stock options for 1-2 trading days ahead of the present one. This new technique uses the Black-Scholes equation supplemented with new intervals for the underlying stock price and new initial and boundary conditions for option prices. The Black-Scholes equation was solved in the positive direction of the time variable. This ill-posed initial boundary value problem was solved by the so-called Quasi-Reversibility Method (QRM). This approach with an added trading strategy was tested on the market data for 368 stock options, and good forecasting results were demonstrated. In the current paper, we use the geometric Brownian motion to provide an explanation of that effectiveness using computationally simulated data for European call options. We also provide a convergence analysis for QRM. The key tool of that analysis is a Carleman estimate. \end{abstract} \textbf{\text{Key words: }}Black-Scholes equation, European call options, geometric Brownian motion, probability theory, ill-posed problem, quasi-reversibility method, Carleman estimate, trading strategy. 
\section{Introduction} \label{sec:1} A new heuristic mathematical algorithm designed to forecast prices of stock options was proposed in \cite{KlibGol}. This algorithm is based on the so-called Quasi-Reversibility Method (QRM). QRM is a regularization method for an ill-posed problem for the Black-Scholes equation. The goal of this paper is to address both analytically and numerically the following question: \emph{Why has this algorithm worked well for real market data in \cite{KlibGol,Nik}?} Our explanations are based on our new analytical results in the probability theory and are supported by our numerical results for the computationally simulated data generated by the geometric Brownian motion. A significant advantage of the technique of \cite{KlibGol} is that it uses historical data about stock and option prices only over short time intervals. This is practically valuable since the formations of those prices are random processes, and the information used in the algorithm therefore possesses stable probabilistic characteristics. The mathematical model of \cite{KlibGol} was supplied with a trading strategy. Results of \cite[Table 4]{KlibGol} for real market data of \cite{Bloom} indicate that a combination of that mathematical model with that trading strategy has resulted in 72.83\% profitable options out of 368 options for real market data. More recently, the model of \cite{KlibGol} was used in \cite{Nik} to forecast stock option prices in the case when results of QRM are enhanced by the machine learning approach, which was applied at the second stage of the price forecasting procedure. Market data of \cite{Bloom} for a total of 169,862 European call stock options were used in \cite{Nik}. Following the machine learning approach, these data were divided into three sets \cite[Table 1]{Nik}: training (132,912 options), validation (13,401 options) and testing (23,549 options). 
Total 23,549 options were tested by QRM, and good results on predictions of options with profits were obtained in \cite[first lines in Tables 2,3]{Nik}. Later, the authors of \cite{Nik} tested the performance of QRM for all those 169,862 options, and the results were almost the same as those of \cite[first lines in Tables 2,3]{Nik}. However, since the latter results are not yet published, we do not discuss them here. \textbf{Remark 1.1: }\emph{Without further specifications, we consider in this paper only European call options. The mathematical model of \cite{KlibGol} uses neither the payoff function at the expiry time nor the strike price.} We now present in Tables 1,2 the most recent results of \cite{Nik}, which were obtained using the method of \cite{KlibGol} for the data consisting of 23,549 historical trades collected in 2018. The same market data of \cite{Bloom}\textbf{\ }were used in Tables 1,2. Option prices for one trading day ahead of the present day were forecasted. Definitions of accuracy, precision and recall are well known, see, e.g. \cite{Iri}. \begin{center} \textbf{Table 1. Results of QRM for market data of \cite{Bloom} for 23,549 options \cite[Table 2]{Nik}} \vspace{0.5cm} \begin{tabular}{|l|l|l|l|l|} \hline Method & Accuracy & Precision & Recall & Error \\ \hline QRM & 49.77\% & 55.77\% & 52.43\% & 12\% \\ \hline \end{tabular} \end{center} In Table 1, \textquotedblleft Error" means the average relative error of predictions of option prices, i.e. \begin{equation*} \text{Error}=\frac{1}{N}\sum_{n=1}^{N}\left\vert \frac{p_{n,\text{corr} }-p_{n,\text{fc}}}{p_{n,\text{corr}}}\right\vert \cdot 100\%, \end{equation*} where $N=23,549$ is the total number of tested options, $p_{n,\text{corr}}$ and $p_{n,\text{fc}}$ are the correct and forecasted prices respectively of the option number $n.$ \textbf{Table 2. 
The percentage of options with correct predictions of profits via the Quasi-Reversibility Method for the market data of \cite{Bloom} for 23,549 options \cite[Table 3]{Nik}} \begin{center} \begin{tabular}{|l|c|} \hline Method & Correctly Predicted Profitable Options \\ \hline QRM & 55.77\% \\ \hline \end{tabular} \end{center} A perfect financial market does not allow a winning strategy \cite{Fama}. This means that to address the above question, we need to assume that the market is imperfect. The present article considers a model situation, in which there is a difference between the volatility $\sigma $ of the underlying stock and traders' opinion $\hat{\sigma}$ of the volatility of a European call option generated by this stock. We prove analytically that, theoretically, this allows one to design a winning strategy. First, we back up this theory numerically for the ideal case when both volatilities are known. In practice, however, only $\hat{\sigma}$ is approximately known from \cite{Bloom}, where the implied volatility $\sigma _{\text{impl}}$ of option prices is posted. It is reasonable to conjecture that $\hat{\sigma}\approx \sigma _{\text{impl}}.$ Second, to address the question posed in the first paragraph of this section for the non-ideal case, we consider a mathematical model in which the dynamics of the stock prices are generated by the stochastic differential equation of the geometric Brownian motion. This allows us to computationally generate the time series of stock prices. At the same time, we assume that the price of the corresponding stock option is governed by the Black-Scholes equation, in which the volatility coefficient is $\hat{\sigma}$. Hence, using that time series of stock prices, we apply the Black-Scholes formula to get the time series for prices of the corresponding options. Next, we apply QRM to predict the prices of these options for one trading day ahead of the current one. 
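The simulation pipeline just described (a stock-price series generated by the geometric Brownian motion $ds=\sigma s\,dW$, with option prices then computed from it by the Black-Scholes formula using the traders' volatility $\hat{\sigma}$) can be sketched in a few lines of Python. All numerical parameter values below (volatilities, strike, maturity, path length) are illustrative assumptions, not data from the paper:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(z):
    # standard normal distribution function Phi via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(s, K, tau, sigma_hat, r=0.0):
    # Black-Scholes European call price; the paper takes the risk-free rate r = 0
    if tau <= 0:
        return max(s - K, 0.0)           # payoff f(s) = max(s - K, 0) at maturity
    d_plus = (log(s / K) + (r + 0.5 * sigma_hat**2) * tau) / (sigma_hat * sqrt(tau))
    d_minus = d_plus - sigma_hat * sqrt(tau)
    return s * norm_cdf(d_plus) - K * exp(-r * tau) * norm_cdf(d_minus)

def gbm_path(s0, sigma, n_days, rng, dt=1.0 / 255.0):
    # geometric Brownian motion ds = sigma * s * dW, one step per trading day
    z = rng.standard_normal(n_days)
    log_increments = -0.5 * sigma**2 * dt + sigma * sqrt(dt) * z
    return s0 * np.exp(np.cumsum(np.concatenate(([0.0], log_increments))))

rng = np.random.default_rng(0)
sigma, sigma_hat = 0.30, 0.20            # stock volatility vs. traders' opinion (assumed)
stock = gbm_path(100.0, sigma, n_days=20, rng=rng)
T = 60.0 / 255.0                         # maturity 60 trading days ahead (assumed)
options = [bs_call(s, K=100.0, tau=T - n / 255.0, sigma_hat=sigma_hat)
           for n, s in enumerate(stock)]
```

Running the last four lines many times with $\sigma>\hat{\sigma}$ (resp. $\sigma<\hat{\sigma}$) exhibits, on average, the positive (resp. negative) drift of the option price predicted by (2.5) of section 2.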
Next, we formulate the winning strategy for the non-ideal case. Both the theory and the numerical studies of this paper support our two hypotheses formulated in subsection 6.3. Our first hypothesis is that the heuristic algorithm of \cite{KlibGol} actually figures out in many cases the sign of the difference $\sigma -\hat{\sigma}$. Our second hypothesis is also based on our results below as well as on the \textquotedblleft Precision" column of Table 1 and the second column of Table 2. More precisely, the second hypothesis is that probably about 56\% of the tested 23,549 options of \cite{Nik} with the real market data had $\sigma -\hat{\sigma}<0.$ The algorithm of \cite{KlibGol} is based on the solution of a new initial boundary value problem (IBVP) for the Black-Scholes equation, see, e.g. \cite{Black,W} for this equation. Since the Black-Scholes equation is actually a 1-D parabolic Partial Differential Equation (PDE) with the reversed time, that IBVP is ill-posed, see, e.g. \cite{KlibGol} for an example of a high instability of a similar problem. The ill-posedness of that IBVP is the main mathematical obstacle for that algorithm. Therefore, we solve that IBVP both here and in \cite{KlibGol} by a specially designed version of QRM. QRM stably solves this problem forwards in time for two consecutive trading days after the current one. QRM is a version of the Tikhonov regularization method \cite{T} adapted to ill-posed problems for PDEs. We refer to \cite{LL} for the first publication on QRM as well as to \cite{Bourg1,Bourg2,Kl,KlibYag,KL} for some more recent ones. We provide a convergence analysis for QRM applied to the above problem. The main new element of this analysis is that we lift a restrictive assumption of \cite{KlibGol} of a sufficiently small time interval. 
We note that the smallness assumption imposed on the time interval is a traditional one for initial boundary value problems for parabolic PDEs with the reversed time, see \cite{Kl}, \cite[Theorem 1 of section 2 in Chapter 4]{LRS}, where a certain Carleman estimate was used. However, a new Carleman estimate was derived in \cite{KlibYag} for a general parabolic operator of the second order with variable coefficients in the $n-$D case. This estimate enables one to lift that smallness assumption. We simplify here the Carleman estimate of \cite{KlibYag} as well as some other results of \cite{KlibYag} via adapting them to our simpler 1$-$D case, as compared with the $n-$D case of \cite{KlibYag}. The Black-Scholes equation describes the dependence of the price $v\left( s,t\right) $ of a stock option on the price of the underlying stock $s$ and time $t$ \cite{Bjork,Black,W}. In fact, this is a parabolic Partial Differential Equation with the reversed time. Let $t=T$ be the maturity time and $t=0$ be the present time \cite{W}. Traditionally, initial boundary value problems for the Black-Scholes equation are solved backwards with respect to time $t\in \left( 0,T\right) $ with the initial condition at $ \left\{ t=T\right\} $. The latter is a well-posed problem, for which the classic theory of parabolic PDEs works, see, e.g. the book \cite{Lad} for this theory. However, the maturity time $T$ is usually a few months away from the present time. It is obviously impossible to accurately predict the future behavior of the volatility coefficient of the Black-Scholes equation on such a large time interval. Since the formations of both stock and option prices are stochastic processes, it is intuitively clear that a good accuracy of forecasting of stock option prices for long time periods is unlikely. Thus, we focus in this paper on forecasting of option prices for a short time period of just one trading day ahead of the current one. Let the time variable $t$ count trading days. 
Since there are 255 trading days annually, we introduce the dimensionless time $t^{\prime }$ as \begin{equation} t^{\prime }=\frac{t}{255}. \label{1.1} \end{equation} Hence, \begin{equation} \text{one (1) dimensionless trading day }=1/255\approx 0.00392\ll 1\text{.} \label{1.2} \end{equation} \textbf{Remark 1.2.} \emph{To simplify notations, we still use everywhere below the notation }$t$\emph{\ for the dimensionless time }$t^{\prime }$ \emph{of (\ref{1.1}).} \textbf{Remark 1.3}. \emph{There are many important questions about the technique of \cite{KlibGol}, which are not addressed in this paper, such as, e.g. the performance of this technique for some \textquotedblleft stress" tests, its performance for significantly larger sets of market data, its performance for the case when the transaction cost is taken into account, and many others. However, addressing any of those questions would require a significant additional effort. Therefore, those questions are outside of the scope of this publication. Still, the question of the transaction cost might probably be addressed using a threshold number }$\eta >0$\emph{\ in our trading strategy for the non-ideal case, see subsection 6.3.} This paper is organized as follows. In section 2 we show that a winning strategy on an infinitesimal time interval might be possible if $\sigma \neq \hat{\sigma}.$ In section 3 we present the heuristic mathematical model of \cite{KlibGol}. In section 4 we present a convergence analysis for our version of QRM. In section 5 we use arguments of the probability theory to justify our trading strategy in the ideal case when both volatilities $\sigma $ and $\hat{\sigma}$ are known. In section 6 we describe our numerical studies and end up with a trading strategy for the non-ideal case when only the volatility $\hat{\sigma}$ is known. In addition, we formulate in section 6 our two hypotheses mentioned above. Concluding remarks are given in section 7. \textbf{Disclaimer}. 
\emph{This paper is written for academic purposes only. The authors do not provide any assurance that the technique of this paper would result in a successful trading on a real financial market.} \section{A Possible Winning Strategy} \label{sec:2} Let $\sigma $ be the volatility of a certain stock and $s$ be the price of this stock. Consider an option corresponding to this stock. Let $\hat{\sigma} $ be the idea of the volatility of that option which has been developed among the agents who trade this option on the market. If $\sigma \neq \hat{ \sigma}$, then the financial market is imperfect, and an opportunity for designing a winning strategy exists. At a given time $t$, the time remaining until maturity is $\tau ,$ \begin{equation} \tau =T-t. \label{2.0} \end{equation} Let $f(s)$ be the payoff function of that option at the maturity time $t=T.$ We assume that the risk-free interest rate is zero. Let $u\left( s,\tau \right) $\ be the price of that option, where the variable $\tau $\ is the one defined in (\ref{2.0}). We assume that the function $u\left( s,\tau \right) $ satisfies the Black-Scholes equation with the volatility coefficient $\hat{\sigma}$ \cite[Chapter 7, Theorem 7.7] {Bjork}\textbf{:} \begin{equation} \frac{\partial u\left( s,\tau \right) }{\partial \tau }=\frac{\hat{\sigma} ^{2}}{2}s^{2}\frac{\partial ^{2}u\left( s,\tau \right) }{\partial s^{2}}, \text{ }s>0, \label{2.1} \end{equation} \begin{equation*} u\left( s,0\right) =f\left( s\right) . \end{equation*} The specific formula for the payoff function is $f(s)=\max \left( s-K,0\right) $, where $K$ is the strike price \cite{Bjork}. 
Then the price function $u(s,\tau )$ of the option is given by the Black-Scholes formula \cite{Bjork}: \begin{equation} u(s,\tau )=s\Phi (\Theta _{+}(s,\tau ))-e^{-r\tau }K\Phi (\Theta _{-}(s,\tau )), \label{2.2} \end{equation} where $r=0$ and \begin{equation*} \Theta _{\pm }\left( s,\tau \right) =\frac{1}{\hat{\sigma}\sqrt{\tau }}\left[ \ln \left( \frac{s}{K}\right) \pm \frac{\hat{\sigma}^{2}\tau }{2}\right] , \text{ } \end{equation*} \begin{equation} \Phi \left( z\right) =\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}e^{-\xi ^{2}/2}d\xi ,\text{ }z\in \mathbb{R} \label{2.200} \end{equation} is the standard normal distribution function. Let $v\left( s,t\right) =u\left( s,T-t\right) .$ The stochastic equation of the geometric Brownian motion for the stock price $s$\ with the volatility $ \sigma $\ has the form $ds=\sigma sdW,$\ where $W$\ is the Wiener process. The It\^{o} formula implies \begin{equation} dv=\left( -\frac{\partial u(s,T-t)}{\partial \tau }+\frac{{\sigma }^{2}}{2} s^{2}\frac{\partial ^{2}u(s,T-t)}{\partial s^{2}}\right) dt+\sigma s\frac{ \partial u(s,T-t)}{\partial s}dW, \label{2.3} \end{equation} where $dv$ is the option price change on an infinitesimal time interval and $dW$ is the increment of the Wiener process. Replacing in (\ref{2.3}) $\partial _{\tau }u(s,T-t)$ with the right hand side of (\ref{2.1}), we obtain \begin{equation} dv=\frac{(\sigma ^{2}-\hat{\sigma}^{2})}{2}s^{2}\frac{\partial ^{2}u(s,T-t)}{ \partial s^{2}}dt+\sigma s\frac{\partial u(s,T-t)}{\partial s}dW. \label{2.4} \end{equation} The mathematical expectation of $dW$ is zero \cite[Chapter 4]{Bjork}. Therefore, we find that the expected value of the increment of the option price on an infinitesimal time interval is \begin{equation} \frac{(\sigma ^{2}-\hat{\sigma}^{2})}{2}s^{2}\frac{\partial ^{2}u(s,T-t)}{ \partial s^{2}}dt. 
\label{2.5} \end{equation} In mathematical finance, the second derivative \begin{equation} \frac{\partial ^{2}u(s,\tau )}{\partial s^{2}} \label{2.6} \end{equation} is called the Greek $\Gamma (s,\tau ).$ For a European call option \cite[ Chapter 9]{Bjork} \begin{equation} \Gamma (s,\tau )=\frac{1}{\hat{\sigma}s\sqrt{2\pi \tau }}\exp \left[ -\frac{ (\Theta _{+}(s,\tau ))^{2}}{2}\right] >0. \label{2.7} \end{equation} Therefore, it follows from (\ref{2.5})-(\ref{2.7}) that the sign of the mathematical expectation of the increment of the option price on an infinitesimal time interval is determined by the sign of the difference $ \sigma ^{2}-\hat{\sigma}^{2}.$ Thus, if $\sigma ^{2}>\hat{\sigma}^{2},$ then a possible winning strategy involves buying an option at the present time and selling it in the next trading period. If $\sigma ^{2}<\hat{\sigma}^{2},$ then the winning strategy is to take the short position at the present time and to close the short position in the next trading period. \section{The Mathematical Model } \label{sec:3} \subsection{The model} \label{sec:3.1} We now describe the mathematical model of \cite{KlibGol}. We use this model here for computationally simulated data. Also, it was used in \cite{Nik} for real market data to obtain the above Tables 1,2. We do not differentiate in this model between the volatilities $\sigma $ and $\hat{\sigma}$ and just use the time dependent volatility $\sigma \left( t\right) .$ Everywhere below we still use, for brevity, the notation $t$ for the dimensionless time $t^{\prime }$ in (\ref{1.1}). Let $\sigma (t)$ be the volatility of the option at the moment of time $t$. When working with the market data in \cite{KlibGol,Nik}, we have used the historical implied volatility listed in the market data of \cite{Bloom}. Let $v_{b}(t)$ and $ v_{a}(t)$ be respectively the bid and ask prices of the option and $s_{b}(t)$ and $s_{a}(t)$ be the bid and ask prices of the stock. 
It is known that \begin{equation*} v_{b}(t)<v_{a}(t)\text{ and }s_{b}(t)<s_{a}(t). \end{equation*} For brevity, we simplify notations as $s_{b}=s_{b}(0)$, $s_{a}=s_{a}(0)$. We impose a natural assumption that $0<s_{b}<s_{a}.$ It was observed on the market data in \cite[formulas (2.3)-(2.6)]{KlibGol} that the relative differences are usually small, \begin{equation} \left\vert \frac{s_{a}(t)}{s_{b}(t)}-1\right\vert \leq 0.03,\text{ } \left\vert \frac{v_{a}(t)}{v_{b}(t)}-1\right\vert \leq 0.27. \label{3.0} \end{equation} Hence, we define the initial condition $q\left( s\right) $ at $t=0$ of the function $v\left( s,t\right) $ as the linear interpolation on the interval $ s\in \left( s_{b},s_{a}\right) $ between $v_{b}\left( 0\right) $ and $ v_{a}\left( 0\right) ,$ \begin{equation} v\left( s,0\right) =q\left( s\right) =-\frac{s-s_{a}}{s_{a}-s_{b}} v_{b}\left( 0\right) +\frac{s-s_{b}}{s_{a}-s_{b}}v_{a}\left( 0\right) ,\text{ }s\in \left( s_{b},s_{a}\right) . \label{3.1} \end{equation} Define the domain $Q_{T}=\left\{ \left( s,t\right) \in \left( s_{b},s_{a}\right) \times \left( 0,T\right) \right\} .$ We assume that the volatility of the option depends only on $t$, i.e. $\sigma =\sigma \left( t\right) \geq \sigma _{0}=const.>0.$ Let $L$ be the partial differential operator of the Black-Scholes equation, \begin{equation} Lv=\frac{\partial v}{\partial t}+\frac{\sigma ^{2}\left( t\right) }{2}s^{2} \frac{\partial ^{2}v}{\partial s^{2}}=0\text{ in }Q_{T}. \label{3.2} \end{equation} We impose the following initial and boundary conditions on the function $ v\left( s,t\right) :$ \begin{equation} v\left( s,0\right) =q\left( s\right) ,\text{ }s\in \left( s_{b},s_{a}\right) , \label{3.3} \end{equation} \begin{equation} v\left( s_{b},t\right) =v_{b}\left( t\right) ,v\left( s_{a},t\right) =v_{a}\left( t\right) ,\text{ }t\in \left( 0,T\right) . 
\label{3.4} \end{equation} Conditions (\ref{3.2})-(\ref{3.4}) represent the heuristic mathematical model of \cite[formulas (2.3)-(2.6)]{KlibGol}. Also, (\ref{3.2})-(\ref{3.4}) is our IBVP for the Black-Scholes equation. We now formulate this as Problem 1: \textbf{Problem 1.} \emph{Find the function }$v\in H^{2}\left( Q_{T}\right) $ \emph{\ satisfying conditions (\ref{3.2})-(\ref{3.4}).} Problem 1 is ill-posed since we need to solve equation (\ref{3.2}) forwards in time. \textbf{Remarks 3.1:} \begin{enumerate} \item \emph{The conventional model for the Black-Scholes equation focuses on the maturity time }$T$\emph{\ via considering the function }$ u\left( s,t\right) =v\left( s,T-t\right) $\emph{\ instead of the function }$ v\left( s,t\right) $\emph{. Unlike this, we are not doing so in (\ref{3.2})-(\ref{3.4}) since we do not need the maturity time; see also Remark 1.1. } \item \emph{As is conventional in the theory of ill-posed problems, we increase here the required smoothness of the solution from }$ H^{2,1}\left( Q_{T}\right) $\emph{\ to }$H^{2}\left( Q_{T}\right) .$ \end{enumerate} \subsection{Three steps} \label{sec:3.2} In order to solve Problem 1, we first need to define the time dependent option's volatility $\sigma \left( t\right) $ and the boundary conditions $ v_{b}\left( t\right) $, $v_{a}\left( t\right) .$ Then the initial condition $ q\left( s\right) $ in (\ref{3.3}) is found via (\ref{3.1}). We explain these in Steps 1,2 of this subsection 3.2. In our computations of \cite{KlibGol,Nik} we have used the Implied Volatility of the options in the Last Trade Price (IVOL) of the day for $ \sigma \left( t\right) $ \cite{Bloom}. As to $s_{b}$ and $s_{a},$ we have used the End of the Day Underlying Price Ask and the End of Day Underlying Price Bid of \cite{Bloom}. 
Similarly for $v_{b}\left( t\right) $ and $ v_{a}\left( t\right) ,$ in which case the End of the Day Option Price Ask and the End of the Day Option Price Bid of \cite{Bloom} were used. The moment of time $\left\{ t=0\right\} $ is the End of the Present Day Time, and similarly for the following two trading days $t=y,2y$ and for the preceding two trading days $t=-y,-2y.$ Naturally, the question arises of how we found the future values of the boundary conditions $v_{b}\left( t\right) $ and $v_{a}\left( t\right) $ for $t\in \left( 0,2y\right) $ in ( \ref{3.4}), and the same for $\sigma \left( t\right) .$ This question is addressed in Step 2 below. Our method for the solution of Problem 1 consists of three steps: \textbf{Step 1 }(introducing dimensionless variables)\textbf{. }First, we make equation (\ref{3.2}) dimensionless. Recall that $s_{b}<s_{a}.$ Introduce the dimensionless variable $x$ for $s$ as: \begin{equation*} s\Leftrightarrow x=\frac{s-s_{b}}{s_{a}-s_{b}}. \end{equation*} Let $y$ denote one dimensionless trading day. By (\ref{1.2}) \begin{equation} y=\frac{1}{255}\approx 0.00392. \label{3.100} \end{equation} By (\ref{3.1}) the function $q\left( s\right) $ is transformed into the function $g\left( x\right) ,$ \begin{equation} g\left( x\right) =\left( 1-x\right) v_{b}\left( 0\right) +xv_{a}\left( 0\right) . \label{3.5} \end{equation} The operator $L$ in (\ref{3.2}) is transformed into the operator $M$, \begin{equation} Mv=v_{t}+\sigma ^{2}\left( t\right) A(x)v_{xx}, \label{3.6} \end{equation} \begin{equation} A(x)=\frac{255}{2}\frac{\left[ x(s_{a}-s_{b})+s_{b}\right] ^{2}}{(s_{a}-s_{b})^{2}}, \label{3.7} \end{equation} \begin{equation} G_{2y}=\left\{ \left( x,t\right) \in \left( 0,1\right) \times \left( 0,2y\right) \right\} .
\label{3.8} \end{equation} Problem 1 is transformed into Problem 2: \textbf{Problem 2.} \emph{Assume that the functions } \begin{equation} v_{b}\left( t\right) ,v_{a}\left( t\right) \in H^{2}\left[ 0,2y\right] ,\sigma \left( t\right) \in C^{1}\left[ 0,2y\right] . \label{3.9} \end{equation} \emph{Find the solution }$v\in H^{2}\left( G_{2y}\right) $\emph{\ of the following initial boundary value problem:} \begin{equation} Mv=0\text{ in }G_{2y}, \label{3.10} \end{equation} \begin{equation} v\left( 0,t\right) =v_{b}\left( t\right) ,v\left( 1,t\right) =v_{a}\left( t\right) ,t\in \left( 0,2y\right) , \label{3.12} \end{equation} \begin{equation} v\left( x,0\right) =g\left( x\right) ,x\in \left( 0,1\right) , \label{3.11} \end{equation} \emph{where the partial differential operator }$M$\emph{\ is defined in (\ref {3.6}), the function }$A\left( x\right) $\emph{\ is defined in (\ref{3.7}), the initial condition }$g\left( x\right) $\emph{\ is defined in (\ref{3.5}), and the domain }$G_{2y}$\emph{\ is defined in (\ref{3.8}).} \textbf{Step 2 (interpolation and extrapolation).} Having the historical market data for an option up to \textquotedblleft today", we forecast the option price for \textquotedblleft tomorrow" and \textquotedblleft the day after tomorrow", with 255 trading days annually. \textquotedblleft One day" corresponds to $y=1/255.$ \textquotedblleft Today" means $t=0.$ \textquotedblleft Tomorrow" means $t=y.$ \textquotedblleft The day after tomorrow" means $t=2y.$ We forecast these prices for the $s-$interval $ s\in \left[ s_{b}\left( 0\right) ,s_{a}\left( 0\right) \right] $ via the solution of problem (\ref{3.10})-(\ref{3.12}). To do this, however, we need to know the functions $v_{b}(t),$ $v_{a}(t)$ and $\sigma (t)$ in the \textquotedblleft future", i.e. for $t\in \left( 0,2y\right) .$ We obtain approximate values of these functions via the interpolation and extrapolation procedures described in the next paragraph.
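The interpolation-and-extrapolation recipe of Step 2 (a quadratic polynomial fitted through the values at the three most recent days $t=-2y,-y,0$, then evaluated at $t=y,2y$) can be sketched as follows; the sample values of $d\left( t\right) $ are hypothetical.

```python
import numpy as np

y = 1.0 / 255.0                       # one dimensionless trading day, (3.100)

# Hypothetical values of d(t) (any of v_b, v_a, sigma) on the three past days.
t_past = np.array([-2.0 * y, -y, 0.0])
d_past = np.array([5.10, 5.04, 5.00])

# Quadratic d(t) = a t^2 + b t + c through the three points, as in (3.13).
a, b, c = np.polyfit(t_past, d_past, 2)
d = lambda t: a * t**2 + b * t + c

# The fit interpolates the past data exactly ...
assert np.allclose(d(t_past), d_past)
# ... and the same polynomial is extrapolated to "tomorrow" and beyond.
d_tomorrow, d_after_tomorrow = d(y), d(2.0 * y)
```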
Let $t=-2y$ be \textquotedblleft the day before yesterday", $t=-y$ be \textquotedblleft yesterday" and $t=0$ be \textquotedblleft today". Let $ d\left( t\right) $ be any of the three functions $v_{b}(t),v_{a}(t),\sigma (t)$. First, we interpolate the function $d\left( t\right) $ by a quadratic polynomial for $t\in \left[ -2y,0\right] $ using the values $d\left( -2y\right) ,d\left( -y\right) ,d\left( 0\right) .$ We obtain \begin{equation} d\left( t\right) =at^{2}+bt+c\text{ for }t\in \left[ -2y,0\right] . \label{3.13} \end{equation} Next, we extrapolate (\ref{3.13}) to the interval $t\in \left[ 0,2y\right] $ via setting \begin{equation*} d\left( t\right) =at^{2}+bt+c\text{ for }t\in \left[ 0,2y\right] . \end{equation*} The functions $v_{b}(t),$ $v_{a}(t),\sigma (t)$ defined in this way were used to numerically solve problem (\ref{3.10})-(\ref{3.12}) both for the computationally simulated data below and for the real market data of Tables 1 and 2 above, as well as in \cite{KlibGol}. \textbf{Step 3 (Numerical solution of Problem 2. Regularization).} Since problem (\ref{3.2})-(\ref{3.4}) is ill-posed, we apply a regularization method to obtain an approximate solution. More precisely, we solve the following problem: \textbf{Minimization Problem 1}. \emph{Let }$J_{\alpha }:H^{2}\left( G_{2y}\right) \rightarrow \mathbb{R}$\emph{\ be the Tikhonov regularization functional defined as:} \begin{equation} J_{\alpha }\left( v\right) =\int_{G_{2y}}\left( Mv\right) ^{2}dxdt+\alpha \left\Vert v\right\Vert _{H^{2}\left( G_{2y}\right) }^{2}, \label{3.14} \end{equation} \emph{where }$\alpha \in \left( 0,1\right) $\emph{\ is the regularization parameter. Minimize functional (\ref{3.14}) on the set }$S,$ \emph{where} \begin{equation} S=\left\{ v\in H^{2}\left( G_{2y}\right) :v\left( 0,t\right) =v_{b}\left( t\right) ,v\left( 1,t\right) =v_{a}\left( t\right) ,v\left( x,0\right) =g\left( x\right) \right\} .
\label{3.15} \end{equation} Minimization Problem 1 is a version of the QRM for Problem 2, i.e. for problem (\ref{3.10})-(\ref{3.12}). In section 4 we present the theory of this specific version of the QRM. In particular, Theorem 4.2 of section 4 implies uniqueness of the solution $v\in H^{2}\left( G_{2y}\right) $ of Problem 2 and provides an estimate of the stability of this solution with respect to the noise in the data. Theorem 4.3 of section 4 implies existence and uniqueness of the minimizer $ v_{\alpha }\in H^{2}\left( G_{2y}\right) $ of the functional $J_{\alpha }\left( v\right) $ on the set $S$ defined in (\ref{3.15}). Following the theory of Ill-Posed problems, we call such a minimizer a \textquotedblleft regularized solution" \cite{T}. Theorem 4.4 estimates the convergence rate of regularized solutions to the exact solution of Problem 2 with noiseless data. These estimates depend on the noise level in the data. \section{Convergence Analysis} \label{sec:4} In this section, we provide the convergence analysis for Problem 2 of subsection 3.2. This problem is the IBVP for the parabolic equation (\ref{3.10}) with reversed time, see (\ref{3.6}). The QRM for this problem, for a more general parabolic operator in $\mathbb{R}^{n}$ with arbitrary variable coefficients, was proposed in \cite{Kl}, where the convergence analysis was also carried out. The corresponding theorems were then reformulated in \cite{KlibGol}. Although a stability estimate was not a part of \cite{KlibGol}, such an estimate was proven in \cite{Kl}. It was pointed out in the Introduction, however, that traditional stability estimates for this problem were proven, using a certain Carleman estimate, only under the assumption that the time interval is sufficiently small. The same is true for the convergence theorems of the QRM in \cite{Kl,KlibGol}. The smallness assumption, however, was lifted in \cite{KlibYag} via a new Carleman estimate.
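As an aside, the QRM minimization (\ref{3.14}) of Step 3 can be prototyped with a crude finite-difference discretization. The sketch below is an illustration of the structure of the method, not the actual scheme of this paper: the $H^{2}$ penalty is simplified to a plain $\ell ^{2}$ penalty on the unknown grid values, and all data are hypothetical.

```python
import numpy as np

# Crude QRM sketch: minimize ||Mv||^2 + alpha*||v||^2 over grid values of v,
# with the boundary and initial data of (3.15) imposed exactly.
nx, nt = 8, 6
x = np.linspace(0.0, 1.0, nx); dx = x[1] - x[0]
T = 2.0 / 255.0                               # the interval (0, 2y)
t = np.linspace(0.0, T, nt); dt = t[1] - t[0]

s_b, s_a, sigma = 100.0, 102.0, 0.2           # hypothetical market values
A = 255.0 / 2.0 * (x * (s_a - s_b) + s_b) ** 2 / (s_a - s_b) ** 2   # (3.7)

v_b = lambda tt: 5.00 + 2.0 * tt              # hypothetical boundary data
v_a = lambda tt: 5.14 + 2.0 * tt
g = (1.0 - x) * v_b(0.0) + x * v_a(0.0)       # initial condition (3.5)

# Unknowns: interior grid values with j >= 1; everything else is fixed data.
idx = -np.ones((nx, nt), dtype=int)
unknown = [(i, j) for j in range(1, nt) for i in range(1, nx - 1)]
for k, (i, j) in enumerate(unknown):
    idx[i, j] = k

def known(i, j):
    if j == 0: return g[i]
    if i == 0: return v_b(t[j])
    if i == nx - 1: return v_a(t[j])
    return None

rows, rhs = [], []
for j in range(nt - 1):        # residual of Mv = v_t + sigma^2 A(x) v_xx
    for i in range(1, nx - 1):
        row = np.zeros(len(unknown)); b = 0.0
        stencil = [((i, j + 1), 1.0 / dt), ((i, j), -1.0 / dt),
                   ((i - 1, j), sigma ** 2 * A[i] / dx ** 2),
                   ((i, j), -2.0 * sigma ** 2 * A[i] / dx ** 2),
                   ((i + 1, j), sigma ** 2 * A[i] / dx ** 2)]
        for (ii, jj), w in stencil:
            kv = known(ii, jj)
            if kv is None: row[idx[ii, jj]] += w
            else: b -= w * kv
        rows.append(row); rhs.append(b)

alpha = 1.0e-6                 # regularization parameter
M = np.vstack(rows + [np.sqrt(alpha) * np.eye(len(unknown))])
r = np.array(rhs + [0.0] * len(unknown))
z, *_ = np.linalg.lstsq(M, r, rcond=None)     # regularized solution values
```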
In this section, we significantly modify the results of \cite{KlibYag} for the simpler 1-D case. Recall (see the Introduction) that this modification allows us to obtain more accurate estimates in the 1-D case, as compared with the $n$-D case of \cite {KlibYag}. We note that even though our computations below work on the small time interval $\left( 0,2y\right) =\left( 0,0.00784\right) $ (see (\ref {3.100}) and (\ref{3.8})), the smallness assumption of \cite{Kl,KlibGol}, \cite[Theorem 1 of section 2 in Chapter 4]{LRS} might require an even smaller length of that interval. \subsection{Problem statement} \label{sec:4.1} Consider a number $T_{1}>0$ and denote \begin{equation*} Q_{T_{1}}=\left\{ \left( x,t\right) \in \left( 0,1\right) \times \left( 0,T_{1}\right) \right\} . \end{equation*} Let $a_{0},a_{1}>0$ be two numbers with $a_{0}<a_{1}.$ Let the function $a\left( x,t\right) \in C^{1}\left( \overline{Q}_{T_{1}}\right) $ satisfy: \begin{equation} \text{ }\left\Vert a\right\Vert _{C^{1}\left( \overline{Q}_{T_{1}}\right) }\leq a_{1},\text{ }a\left( x,t\right) \geq a_{0}\text{ in }Q_{T_{1}}. \label{7.1} \end{equation} Let the functions $\varphi _{0}\left( t\right) ,\varphi _{1}\left( t\right) \in H^{2}\left( 0,T_{1}\right) .$ In the above case of subsection 3.2, \begin{equation*} T_{1}=2y,a\left( x,t\right) =\sigma ^{2}\left( t\right) A(x),\varphi _{0}\left( t\right) =v_{b}\left( t\right) ,\varphi _{1}\left( t\right) =v_{a}\left( t\right) . \end{equation*} We now formulate Problem 3, which is a slight generalization of Problem 2. \textbf{Problem 3}.
\emph{Find a solution }$w\in H^{2}\left( Q_{T_{1}}\right) $\emph{\ of the following initial boundary value problem (IBVP):} \begin{equation} Pw=w_{t}+a\left( x,t\right) w_{xx}=0\text{ in }Q_{T_{1}}, \label{7.3} \end{equation} \begin{equation} w\left( 0,t\right) =\varphi _{0}\left( t\right) ,w\left( 1,t\right) =\varphi _{1}\left( t\right) ,\text{ }t\in \left( 0,T_{1}\right) , \label{7.4} \end{equation} \begin{equation} w\left( x,0\right) =q\left( x\right) =\varphi _{0}\left( 0\right) \left( 1-x\right) +\varphi _{1}\left( 0\right) x,\text{ }x\in \left( 0,1\right) . \label{7.5} \end{equation} \textbf{Remark 4.1.} \emph{Since Problem 3 is more general than Problem 2, our convergence analysis for Problem 3, which we provide below, is also valid for Problem 2.} We use the linear function for $w\left( x,0\right) $ in (\ref {7.5}) to simplify the presentation, since, in the case of Problem 2, the initial condition in (\ref{3.11}) is the linear function defined in (\ref{3.5}). Problem 3 is an IBVP\ for the parabolic equation (\ref{7.3}) with reversed time. Therefore, this problem is ill-posed. As is always the case in the theory of Ill-Posed problems \cite{T}, we assume that the boundary data in (\ref{7.4}) are given with noise of level $\delta >0,$ where $\delta $ is a sufficiently small number, i.e. \begin{equation} \left\Vert \varphi _{0}-\varphi _{0}^{\ast }\right\Vert _{H^{1}\left( 0,T_{1}\right) }<\delta ,\left\Vert \varphi _{1}-\varphi _{1}^{\ast }\right\Vert _{H^{1}\left( 0,T_{1}\right) }<\delta , \label{7.6} \end{equation} where the functions $\varphi _{0}^{\ast },\varphi _{1}^{\ast }\in H^{2}\left( 0,T_{1}\right) $ are \textquotedblleft ideal" noiseless data.
Following one of the postulates of the theory of Ill-Posed problems, we assume that there exists an exact solution $w^{\ast }\in H^{2}\left( Q_{T_{1}}\right) $ of problem (\ref{7.3})-(\ref{7.5}) with these noiseless data. We estimate below how this noise affects the accuracy of the solution of Problem 3 (if this solution exists) and also establish the convergence rate of numerical solutions obtained by the QRM to the exact one as $\delta \rightarrow 0.$ Consider the following analog of functional (\ref{3.14}): \begin{equation} I_{\alpha }\left( w\right) =\int_{Q_{T_{1}}}\left( Pw\right) ^{2}dxdt+\alpha \left\Vert w\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}. \label{7.7} \end{equation} Introduce the set $Y\subset H^{2}\left( Q_{T_{1}}\right) ,$ \begin{equation} Y=\left\{ w\in H^{2}\left( Q_{T_{1}}\right) :w\left( 0,t\right) =\varphi _{0}\left( t\right) ,w\left( 1,t\right) =\varphi _{1}\left( t\right) ,w\left( x,0\right) =q\left( x\right) \right\} . \label{7.8} \end{equation} We construct an approximate solution of Problem 3 via solving the following problem: \textbf{Minimization Problem 2}. \emph{Minimize the functional }$I_{\alpha }\left( w\right) $\emph{\ on the set }$Y$\emph{\ given in (\ref{7.8}).} Similarly to Minimization Problem 1, Minimization Problem 2 is the QRM for Problem 3. \subsection{Theorems} \label{sec:4.2} In this subsection, we formulate four theorems for Problem 3. Let $\lambda >2 $ be a parameter. Introduce the Carleman Weight Function $\psi _{\lambda }\left( t\right) $ for the operator $\partial _{t}+a\left( x,t\right) \partial _{x}^{2}$ as: \begin{equation} \psi _{\lambda }\left( t\right) =e^{\left( T_{1}+1-t\right) ^{\lambda }}, \text{ }t\in \left( 0,T_{1}\right) .
\label{7.9} \end{equation} Hence, the function $\psi _{\lambda }\left( t\right) $ is decreasing on $ \left[ 0,T_{1}\right] $, $\psi _{\lambda }^{\prime }\left( t\right) <0,$ \begin{equation} \max_{\left[ 0,T_{1}\right] }\psi _{\lambda }\left( t\right) =\psi _{\lambda }\left( 0\right) =e^{\left( T_{1}+1\right) ^{\lambda }},\text{ }\min_{\left[ 0,T_{1}\right] }\psi _{\lambda }\left( t\right) =\psi _{\lambda }\left( T_{1}\right) =e. \label{7.10} \end{equation} Denote \begin{equation} H_{0}^{2}\left( Q_{T_{1}}\right) =\left\{ u\in H^{2}\left( Q_{T_{1}}\right) :u\left( 0,t\right) =u\left( 1,t\right) =0\right\} . \label{7.99} \end{equation} \begin{equation} H_{0,0}^{2}\left( Q_{T_{1}}\right) =\left\{ u\in H_{0}^{2}\left( Q_{T_{1}}\right) :u\left( x,0\right) =0\right\} . \label{7.100} \end{equation} \textbf{Theorem 4.1} (Carleman estimate). \emph{Let the coefficient }$ a\left( x,t\right) $\emph{\ of the operator }$P$\emph{\ satisfy conditions (\ref{7.1}). Then there exist a sufficiently large number }$\lambda _{0}=\lambda _{0}\left( T_{1},a_{0},a_{1}\right) >2$\emph{\ and a constant }$ C=C\left( T_{1},a_{0},a_{1}\right) >0,$\emph{\ both depending only on the listed parameters, such that the following Carleman estimate holds for the operator }$P:$\emph{\ } \begin{equation*} \int_{Q_{T_{1}}}\left( Pu\right) ^{2}\psi _{\lambda }^{2}dxdt\geq C\sqrt{ \lambda }\int_{Q_{T_{1}}}u_{x}^{2}\psi _{\lambda }^{2}dxdt+C\lambda ^{2}\int_{Q_{T_{1}}}u^{2}\psi _{\lambda }^{2}dxdt \end{equation*} \begin{equation} -C\sqrt{\lambda }\left\Vert u\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}-C\lambda \left( T_{1}+1\right) ^{\lambda }e^{2\left( T_{1}+1\right) ^{\lambda }}\left\Vert u\left( x,0\right) \right\Vert _{L_{2}\left( 0,1\right) }^{2}, \label{7.11} \end{equation} \begin{equation*} \forall \lambda \geq \lambda _{0},\forall u\in H_{0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Carleman estimate (\ref{7.11}) is the key to the proofs of Theorems 4.2 and 4.4.
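The elementary properties (\ref{7.10}) of the Carleman Weight Function are easy to machine-check; $T_{1}$ and $\lambda $ below are sample values only.

```python
import math

# Sanity checks on the Carleman weight psi_lambda(t) = exp((T1 + 1 - t)^lam)
# from (7.9): it is decreasing on [0, T1], with the max/min values of (7.10).
T1, lam = 0.5, 3.0                    # sample values
psi = lambda t: math.exp((T1 + 1.0 - t) ** lam)

ts = [k * T1 / 100.0 for k in range(101)]
vals = [psi(tt) for tt in ts]
assert all(vals[k] > vals[k + 1] for k in range(100))          # decreasing
assert math.isclose(psi(0.0), math.exp((T1 + 1.0) ** lam))     # max at t = 0
assert math.isclose(psi(T1), math.e)                           # min value e
```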
\textbf{Theorem 4.2} (H\"{o}lder stability estimate for Problem 3 and uniqueness). \emph{Let the coefficient }$a\left( x,t\right) $\emph{\ of the operator }$P$\emph{\ satisfy conditions (\ref{7.1}). Assume that the functions }$w\in H^{2}\left( Q_{T_{1}}\right) $\emph{\ and }$w^{\ast }\in H^{2}\left( Q_{T_{1}}\right) $\emph{\ are solutions of Problem 3 with the vectors of data }$\left( \varphi _{0}\left( t\right) ,\varphi _{1}\left( t\right) \right) $\emph{\ and }$\left( \varphi _{0}^{\ast }\left( t\right) ,\varphi _{1}^{\ast }\left( t\right) \right) $\emph{\ respectively, where }$ \varphi _{0},\varphi _{1},\varphi _{0}^{\ast },\varphi _{1}^{\ast }\in H^{2}\left( 0,T_{1}\right) .$\emph{\ Also, assume that error estimates (\ref {7.6}) of the boundary data hold. Choose an arbitrary number }$\rho \in \left( 0,T_{1}\right) $\emph{. Denote } \begin{equation} \mu =\mu \left( T_{1},\rho \right) =\frac{\ln \left( T_{1}+1-\rho \right) }{ \ln \left( T_{1}+1\right) }\in \left( 0,1\right) . \label{7.110} \end{equation} \emph{Then there exist a sufficiently small number }$\delta _{0}=\delta _{0}\left( T_{1},a_{0},a_{1}\right) \in \left( 0,1\right) $\emph{\ and a constant }$C_{1}=C_{1}\left( T_{1},a_{0},a_{1},\rho \right) >0,$ \emph{both depending only on the listed parameters, such that the following stability estimate holds for all }$\delta \in \left( 0,\delta _{0}\right) :$ \begin{equation} \left\Vert w_{x}-w_{x}^{\ast }\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }+\left\Vert w-w^{\ast }\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }\leq \label{7.12} \end{equation} \begin{equation*} \leq C_{1}\left( 1+\left\Vert w-w^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\right) \exp \left[ -\left( \ln \delta ^{-1/2}\right) ^{\mu }\right] . \end{equation*} Below $C=C\left( T_{1},a_{0},a_{1}\right) >0$ and $C_{1}=C_{1}\left( T_{1},a_{0},a_{1}\right) >0$ denote different constants depending only on the listed parameters.
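To get a feel for the H\"{o}lder-type modulus in this stability estimate, one can evaluate the exponent $\mu $ of (\ref{7.110}) and the factor $\exp \left[ -\left( \ln \delta ^{-1/2}\right) ^{\mu }\right] $ of (\ref{7.12}) at sample values; the numbers below are illustrative only.

```python
import math

# Sample evaluation of the stability modulus of Theorem 4.2.
# T1, rho and the noise levels delta are illustrative values only.
T1, rho = 0.5, 0.1
mu = math.log(T1 + 1.0 - rho) / math.log(T1 + 1.0)
assert 0.0 < mu < 1.0                      # as stated in (7.110)

deltas = (1e-2, 1e-4, 1e-8)
bounds = [math.exp(-math.log(d ** -0.5) ** mu) for d in deltas]
assert bounds[0] > bounds[1] > bounds[2]   # smaller noise, tighter bound
```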
\textbf{Corollary 4.1} (uniqueness). \emph{Let the coefficient }$a\left( x,t\right) $\emph{\ of the operator }$P$\emph{\ satisfy conditions (\ref {7.1}). Then Problem 3 has at most one solution (uniqueness)}. \textbf{Proof}. If $\delta =0,$ then (\ref{7.12}) implies that $ w\left( x,t\right) =w^{\ast }\left( x,t\right) $\ in $Q_{T_{1}-\rho }.$ Since $\rho \in \left( 0,T_{1}\right) $\ is an arbitrary number, then $ w\left( x,t\right) \equiv w^{\ast }\left( x,t\right) $\ in $Q_{T_{1}}.$ $ \square $ \textbf{Theorem 4.3 }(existence and uniqueness of the minimizer)\textbf{. } \emph{Let the functions }$\varphi _{0}\left( t\right) ,\varphi _{1}\left( t\right) \in H^{2}\left( 0,T_{1}\right) .$\emph{\ Let }$Y$\emph{\ be the set defined in (\ref{7.8}). Then there exists a unique minimizer }$w_{\min }\in Y$ \emph{\ of functional (\ref{7.7}) and } \begin{equation} \left\Vert w_{\min }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\leq \frac{C }{\sqrt{\alpha }}\left( \left\Vert \varphi _{0}\right\Vert _{H^{2}\left( 0,T_{1}\right) }+\left\Vert \varphi _{1}\right\Vert _{H^{2}\left( 0,T_{1}\right) }\right) . \label{7.13} \end{equation} In the theory of Ill-Posed Problems, this minimizer $w_{\min }$ is called the \textquotedblleft regularized solution" of Problem 3 \cite{T}. According to the theory of Ill-Posed problems, it is important to establish the convergence rate of regularized solutions to the exact one $w^{\ast }.$ In doing so, one should always choose a dependence of the regularization parameter $\alpha $ on the noise level $\delta ,$ i.e. $\alpha =\alpha \left( \delta \right) \in \left( 0,1\right) $ \cite{T}. \textbf{Theorem 4.4} (convergence rate of regularized solutions).
\emph{Let } $w^{\ast }\in H^{2}\left( Q_{T_{1}}\right) $\emph{\ be the solution of Problem 3 with the noiseless data }$\left( \varphi _{0}^{\ast }\left( t\right) ,\varphi _{1}^{\ast }\left( t\right) \right) .$\emph{\ Let the functions }$\varphi _{0},\varphi _{1},\varphi _{0}^{\ast },\varphi _{1}^{\ast }\in H^{2}\left( 0,T_{1}\right) .$\emph{\ Let }$w_{\min }\in Y$ \emph{\ be the unique minimizer of functional (\ref{7.7}) on the set }$Y$ \emph{. Assume that error estimates (\ref{7.6}) hold. Choose an arbitrary number }$\rho \in \left( 0,T_{1}\right) $\emph{. Let }$\mu =\mu \left( T_{1},\rho \right) \in \left( 0,1\right) $ \emph{be the number defined in ( \ref{7.110}) and let} \begin{equation} \alpha =\alpha \left( \delta \right) =\delta ^{2}. \label{7.140} \end{equation} \emph{Then there exists a sufficiently small number }$\delta _{0}=\delta _{0}\left( T_{1},a_{0},a_{1}\right) \in \left( 0,1\right) $\emph{\ depending only on the listed parameters such that the following convergence rate of regularized solutions }$w_{\min }$ \emph{holds for all }$\delta \in \left( 0,\delta _{0}\right) :$\emph{\ } \begin{equation} \left\Vert \partial _{x}w_{\min }-\partial _{x}w^{\ast }\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }+\left\Vert w_{\min }-w^{\ast }\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) } \label{7.14} \end{equation} \begin{equation*} \leq C_{1}\left( 1+\left\Vert w^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }+\left\Vert \varphi _{0}^{\ast }\right\Vert _{H^{2}\left( 0,T_{1}\right) }+\left\Vert \varphi _{1}^{\ast }\right\Vert _{H^{2}\left( 0,T_{1}\right) }\right) \exp \left[ -\left( \ln \delta ^{-1/2}\right) ^{\mu } \right] . \end{equation*} \subsection{Proof of Theorem 4.1} \label{sec:4.3} We assume in this proof that $u\in C^{2}\left( \overline{Q}_{T_{1}}\right) .$ The case $u\in H^{2}\left( Q_{T_{1}}\right) $ can be obtained via density arguments.
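The substitution $v=u\,\psi _{\lambda }$ on which the proof below rests, together with the resulting formula for $u_{t}$, can be checked numerically; the sample function $u$ and the parameter values are arbitrary.

```python
import math

# Numerical check of the derivative identity used in the proof of Theorem 4.1:
# with v(x,t) = u(x,t) * psi(t), psi(t) = exp((T1+1-t)**lam), one has
#   u_t = (v_t + lam*(T1+1-t)**(lam-1) * v) * exp(-(T1+1-t)**lam).
T1, lam = 0.5, 3.0                                # sample parameter values
psi = lambda t: math.exp((T1 + 1.0 - t) ** lam)
u = lambda x, t: math.sin(x) * (1.0 + t * t)      # arbitrary smooth sample u
v = lambda x, t: u(x, t) * psi(t)

x0, t0, h = 0.7, 0.2, 1e-5
d = lambda f, t: (f(x0, t + h) - f(x0, t - h)) / (2.0 * h)  # central difference

lhs = d(u, t0)
rhs = (d(v, t0) + lam * (T1 + 1.0 - t0) ** (lam - 1.0) * v(x0, t0)) \
      * math.exp(-(T1 + 1.0 - t0) ** lam)
assert abs(lhs - rhs) < 1e-6
```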
It is assumed in this proof that $\lambda \geq \lambda _{0}=\lambda _{0}\left( T_{1},a_{0},a_{1}\right) >2$ and $\lambda _{0}$ is sufficiently large. Recall that $C=C\left( T_{1},a_{0},a_{1}\right) >0$ denotes different constants depending only on the listed parameters. Change variables as \begin{equation} v\left( x,t\right) =u\left( x,t\right) \psi _{\lambda }\left( t\right) =u\left( x,t\right) e^{\left( T_{1}+1-t\right) ^{\lambda }}. \label{7.16} \end{equation} Hence, \begin{equation*} u\left( x,t\right) =v\left( x,t\right) e^{-\left( T_{1}+1-t\right) ^{\lambda }}, \end{equation*} \begin{equation*} u_{t}=\left( v_{t}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v\right) e^{-\left( T_{1}+1-t\right) ^{\lambda }},\text{ } \end{equation*} \begin{equation*} u_{x}=v_{x}e^{-\left( T_{1}+1-t\right) ^{\lambda }},\text{ } u_{xx}=v_{xx}e^{-\left( T_{1}+1-t\right) ^{\lambda }}. \end{equation*} Hence, \begin{equation*} \left( Pu\right) ^{2}\psi _{\lambda }^{2}=\left[ v_{t}+\left( a\left( x,t\right) v_{xx}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v\right) \right] ^{2} \end{equation*} \begin{equation} \geq v_{t}^{2}+2v_{t}\left( a\left( x,t\right) v_{xx}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v\right) . \label{7.17} \end{equation} We have used here $\left( a+b\right) ^{2}\geq a^{2}+2ab,$ $\forall a,b\in \mathbb{R}.$ We now estimate from below the terms in the second line of (\ref {7.17}). \textbf{Step 1}. Estimate $2a\left( x,t\right) v_{xx}v_{t}$ from below. We have: \begin{equation*} 2a\left( x,t\right) v_{xx}v_{t}=\left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}-2a\left( x,t\right) v_{x}v_{xt}-2a_{x}\left( x,t\right) v_{x}v_{t} \end{equation*} \begin{equation*} =\left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}+\left( -a\left( x,t\right) v_{x}^{2}\right) _{t}-a_{t}\left( x,t\right) v_{x}^{2}-2a_{x}\left( x,t\right) v_{x}v_{t}.
\end{equation*} Thus, \begin{equation} 2a\left( x,t\right) v_{xx}v_{t}\geq \left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}+\left( -a\left( x,t\right) v_{x}^{2}\right) _{t}-Cv_{x}^{2}-C\left\vert v_{x}\right\vert \left\vert v_{t}\right\vert . \label{7.18} \end{equation} \textbf{Step 2.} Estimate $2\lambda \left( T_{1}+1-t\right) ^{\lambda -1}vv_{t}$ from below. We have: \begin{equation*} 2\lambda \left( T_{1}+1-t\right) ^{\lambda -1}vv_{t}=\left( \lambda \left( T_{1}+1-t\right) ^{\lambda -1}v^{2}\right) _{t}+\lambda \left( \lambda -1\right) \left( T_{1}+1-t\right) ^{\lambda -2}v^{2} \end{equation*} \begin{equation} \geq \left( \lambda \left( T_{1}+1-t\right) ^{\lambda -1}v^{2}\right) _{t}+ \frac{\lambda ^{2}}{2}\left( T_{1}+1-t\right) ^{\lambda -2}v^{2}. \label{7.19} \end{equation} \textbf{Step 3}. Estimate the entire second line of (\ref{7.17}) from below. Using (\ref{7.18}), (\ref{7.19}) and the Cauchy-Schwarz inequality \textquotedblleft with $\varepsilon ",$ \begin{equation} 2ab\geq -\varepsilon a^{2}-\frac{1}{\varepsilon }b^{2},\text{ }\forall a,b\in \mathbb{R},\text{ }\forall \varepsilon >0, \label{7.190} \end{equation} we obtain \begin{equation*} v_{t}^{2}+2v_{t}\left( a\left( x,t\right) v_{xx}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v\right) \geq \end{equation*} \begin{equation*} \geq v_{t}^{2}-Cv_{x}^{2}-C\left\vert v_{x}\right\vert \left\vert v_{t}\right\vert +\frac{\lambda ^{2}}{2}\left( T_{1}+1-t\right) ^{\lambda -2}v^{2} \end{equation*} \begin{equation*} +\left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}+\left( -a\left( x,t\right) v_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v^{2}\right) _{t} \end{equation*} \begin{equation*} \geq \frac{1}{2}v_{t}^{2}-Cv_{x}^{2}+\frac{\lambda ^{2}}{2}\left( T_{1}+1-t\right) ^{\lambda -2}v^{2} \end{equation*} \begin{equation*} +\left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}+\left( -a\left( x,t\right) v_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v^{2}\right) _{t}.
\end{equation*} Thus, we have obtained that \begin{equation*} v_{t}^{2}+2v_{t}\left( a\left( x,t\right) v_{xx}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v\right) \geq \end{equation*} \begin{equation} \geq \frac{1}{2}v_{t}^{2}-Cv_{x}^{2}+\frac{\lambda ^{2}}{2}\left( T_{1}+1-t\right) ^{\lambda -2}v^{2} \label{7.20} \end{equation} \begin{equation*} +\left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}+\left( -a\left( x,t\right) v_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v^{2}\right) _{t}. \end{equation*} Using (\ref{7.17}) and (\ref{7.20}) as well as dropping the non-negative term $v_{t}^{2}/2$ in the right hand side of (\ref{7.20}), we obtain \begin{equation} \left( Pu\right) ^{2}\psi _{\lambda }^{2}\geq -Cv_{x}^{2}+\frac{\lambda ^{2} }{2}\left( T_{1}+1-t\right) ^{\lambda -2}v^{2} \label{7.21} \end{equation} \begin{equation*} +\left( 2a\left( x,t\right) v_{x}v_{t}\right) _{x}+\left( -a\left( x,t\right) v_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}v^{2}\right) _{t}. \end{equation*} \textbf{Step 4}. Using (\ref{7.16}), change variables in the right hand side of (\ref{7.21}). We have $v^{2}=u^{2}\psi _{\lambda }^{2},v_{x}^{2}=u_{x}^{2}\psi _{\lambda }^{2}.$ Thus, \begin{equation} \left( Pu\right) ^{2}\psi _{\lambda }^{2}\geq -Cu_{x}^{2}\psi _{\lambda }^{2}+\frac{\lambda ^{2}}{2}\left( T_{1}+1-t\right) ^{\lambda -2}u^{2}\psi _{\lambda }^{2} \label{7.22} \end{equation} \begin{equation*} +\left( 2a\left( x,t\right) u_{x}\left( u_{t}-\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u\right) \psi _{\lambda }^{2}\right) _{x}+\left( \left( -a\left( x,t\right) u_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u^{2}\right) \psi _{\lambda }^{2}\right) _{t}.
\end{equation*} \textbf{Step 5.} Estimate $-Pu\cdot u\psi _{\lambda }^{2}$ from below. We have \begin{equation*} -Pu\cdot u\psi _{\lambda }^{2}=\left( -u_{t}-a\left( x,t\right) u_{xx}\right) ue^{2\left( T_{1}+1-t\right) ^{\lambda }} \end{equation*} \begin{equation} =\left( -\frac{1}{2}u^{2}e^{2\left( T_{1}+1-t\right) ^{\lambda }}\right) _{t}-\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u^{2}e^{2\left( T_{1}+1-t\right) ^{\lambda }} \label{7.23} \end{equation} \begin{equation*} +\left( -a\left( x,t\right) u_{x}ue^{2\left( T_{1}+1-t\right) ^{\lambda }}\right) _{x}+a\left( x,t\right) u_{x}^{2}e^{2\left( T_{1}+1-t\right) ^{\lambda }}+a_{x}\left( x,t\right) u_{x}ue^{2\left( T_{1}+1-t\right) ^{\lambda }}. \end{equation*} Using (\ref{7.1}) and (\ref{7.190}), we obtain \begin{equation*} a\left( x,t\right) u_{x}^{2}+a_{x}\left( x,t\right) u_{x}u\geq a_{0}u_{x}^{2}-a_{1}\left\vert u_{x}\right\vert \left\vert u\right\vert \geq \frac{a_{0}}{2}u_{x}^{2}-Cu^{2} \end{equation*} \begin{equation*} \geq \frac{a_{0}}{2}u_{x}^{2}-\lambda \left( T_{1}+1-t\right) ^{\lambda -2}u^{2}. \end{equation*} Hence, multiplying (\ref{7.23}) by $\sqrt{\lambda }$, we obtain \begin{equation} -\sqrt{\lambda }Pu\cdot u\psi _{\lambda }^{2}\geq \frac{a_{0}}{2}\sqrt{ \lambda }u_{x}^{2}e^{2\left( T_{1}+1-t\right) ^{\lambda }}-2\lambda ^{3/2}\left( T_{1}+1-t\right) ^{\lambda -2}u^{2}e^{2\left( T_{1}+1-t\right) ^{\lambda }} \label{7.24} \end{equation} \begin{equation*} +\left( -\frac{\sqrt{\lambda }}{2}u^{2}e^{2\left( T_{1}+1-t\right) ^{\lambda }}\right) _{t}+\left( -\sqrt{\lambda }a\left( x,t\right) u_{x}ue^{2\left( T_{1}+1-t\right) ^{\lambda }}\right) _{x}. \end{equation*} \textbf{Step 6}.
Estimate $\left( Pu\right) ^{2}\psi _{\lambda }^{2}-\sqrt{\lambda }Pu\cdot u\psi _{\lambda }^{2}$ from below. Using (\ref {7.22}) and (\ref{7.24}), we obtain \begin{equation*} \left( Pu\right) ^{2}\psi _{\lambda }^{2}-\sqrt{\lambda }Pu\cdot u\psi _{\lambda }^{2}\geq \end{equation*} \begin{equation*} \geq \frac{a_{0}}{2}\sqrt{\lambda }\left( 1-\frac{2C}{\sqrt{\lambda }} \right) u_{x}^{2}\psi _{\lambda }^{2}+\frac{\lambda ^{2}}{2}\left( T_{1}+1-t\right) ^{\lambda -2}\left( 1-\frac{4}{\sqrt{\lambda }}\right) u^{2}\psi _{\lambda }^{2} \end{equation*} \begin{equation} +\frac{\partial }{\partial t}\left[ \left( -a\left( x,t\right) u_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u^{2}-\frac{\sqrt{ \lambda }}{2}u^{2}\right) \psi _{\lambda }^{2}\right] \label{7.25} \end{equation} \begin{equation*} +\frac{\partial }{\partial x}\left[ \left( 2a\left( x,t\right) u_{x}\left( u_{t}-\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u\right) -\sqrt{\lambda } a\left( x,t\right) u_{x}u\right) \psi _{\lambda }^{2}\right] . \end{equation*} \textbf{Step 7}. Estimate from below the integral \begin{equation*} \int_{Q_{T_{1}}}\left( Pu\right) ^{2}\psi _{\lambda }^{2}dxdt. \end{equation*} We have \begin{equation*} \left( Pu\right) ^{2}\psi _{\lambda }^{2}-\sqrt{\lambda }Pu\cdot u\psi _{\lambda }^{2}\leq \frac{3}{2}\left( Pu\right) ^{2}\psi _{\lambda }^{2}+ \frac{\lambda }{2}u^{2}\psi _{\lambda }^{2}.
\end{equation*} Combining this with (\ref{7.25}), we obtain \begin{equation*} \left( Pu\right) ^{2}\psi _{\lambda }^{2}\geq C\sqrt{\lambda }u_{x}^{2}\psi _{\lambda }^{2}+C\lambda ^{2}u^{2}\psi _{\lambda }^{2} \end{equation*} \begin{equation} +\frac{\partial }{\partial t}\left[ \left( -a\left( x,t\right) u_{x}^{2}+\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u^{2}-\frac{\sqrt{ \lambda }}{2}u^{2}\right) \psi _{\lambda }^{2}\right] \label{7.26} \end{equation} \begin{equation*} +\frac{\partial }{\partial x}\left[ \left( 2a\left( x,t\right) u_{x}\left( u_{t}-\lambda \left( T_{1}+1-t\right) ^{\lambda -1}u\right) -\sqrt{\lambda } a\left( x,t\right) u_{x}u\right) \psi _{\lambda }^{2}\right] . \end{equation*} Integrate (\ref{7.26}) using $u\in H_{0}^{2}\left( Q_{T_{1}}\right) $ and also using (\ref{7.10}). We obtain \begin{equation*} \int_{Q_{T_{1}}}\left( Pu\right) ^{2}\psi _{\lambda }^{2}dxdt\geq C\sqrt{ \lambda }\int_{Q_{T_{1}}}u_{x}^{2}\psi _{\lambda }^{2}dxdt+C\lambda ^{2}\int_{Q_{T_{1}}}u^{2}\psi _{\lambda }^{2}dxdt \end{equation*} \begin{equation} -C\sqrt{\lambda }\left\Vert u\left( x,T_{1}\right) \right\Vert _{H^{1}\left( 0,1\right) }^{2}-C\lambda \left( T_{1}+1\right) ^{\lambda }e^{2\left( T_{1}+1\right) ^{\lambda }}\left\Vert u\left( x,0\right) \right\Vert _{L_{2}\left( 0,1\right) }^{2}. \label{7.27} \end{equation} Finally, applying the trace theorem to the second line of (\ref{7.27}), we obtain the desired estimate (\ref{7.11}) of this theorem.
$\square $ \subsection{Proof of Theorem 4.2} \label{sec:4.4} Introduce the following functions: \begin{equation} \widetilde{\varphi }_{0}\left( t\right) =\varphi _{0}\left( t\right) -\varphi _{0}^{\ast }\left( t\right) ,\widetilde{\varphi }_{1}\left( t\right) =\varphi _{1}\left( t\right) -\varphi _{1}^{\ast }\left( t\right) , \label{7.28} \end{equation} \begin{equation} F\left( x,t\right) =\varphi _{0}\left( t\right) \left( 1-x\right) +\varphi _{1}\left( t\right) x,\text{ }F^{\ast }\left( x,t\right) =\varphi _{0}^{\ast }\left( t\right) \left( 1-x\right) +\varphi _{1}^{\ast }\left( t\right) x, \label{7.29} \end{equation} \begin{equation} \widetilde{F}\left( x,t\right) =F\left( x,t\right) -F^{\ast }\left( x,t\right) =\widetilde{\varphi }_{0}\left( t\right) \left( 1-x\right) + \widetilde{\varphi }_{1}\left( t\right) x, \label{7.30} \end{equation} \begin{equation} \widehat{w}\left( x,t\right) =w\left( x,t\right) -F\left( x,t\right) , \widehat{w}^{\ast }\left( x,t\right) =w^{\ast }\left( x,t\right) -F^{\ast }\left( x,t\right) , \label{7.31} \end{equation} \begin{equation} \overline{w}\left( x,t\right) =\widehat{w}\left( x,t\right) -\widehat{w} ^{\ast }\left( x,t\right) . \label{7.32} \end{equation} It follows from (\ref{7.5}), (\ref{7.6}) and (\ref{7.28})-(\ref{7.32}) that: \begin{equation} \widehat{w}\left( x,0\right) =\widehat{w}^{\ast }\left( x,0\right) = \overline{w}\left( x,0\right) =0, \label{7.34} \end{equation} \begin{equation} F_{xx}\left( x,t\right) =F_{xx}^{\ast }\left( x,t\right) =\widetilde{ F}_{xx}\left( x,t\right) =0, \label{7.340} \end{equation} \begin{equation} \left\Vert \widetilde{F}_{t}\right\Vert _{L_{2}\left( Q_{T_{1}}\right) },\left\Vert \widetilde{F}\right\Vert _{L_{2}\left( Q_{T_{1}}\right) }\leq C\delta .
\label{7.35} \end{equation} By (\ref{7.3})-(\ref{7.5}) and (\ref{7.28})-(\ref{7.34}) \begin{equation} \overline{w}_{t}+a\left( x,t\right) \overline{w}_{xx}=-\widetilde{F}_{t}\text{ in }Q_{T_{1}}, \label{7.36} \end{equation} \begin{equation} \overline{w}\left( 0,t\right) =\overline{w}\left( 1,t\right) =0,\text{ }t\in \left( 0,T_{1}\right) , \label{7.37} \end{equation} \begin{equation} \overline{w}\left( x,0\right) =0,\text{ }x\in \left( 0,1\right) . \label{7.38} \end{equation} Also, by (\ref{7.99}), (\ref{7.100}) and (\ref{7.34}) \begin{equation} \widehat{w},\widehat{w}^{\ast },\overline{w}\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \label{7.39} \end{equation} Square both sides of equation (\ref{7.36}), multiply by the function $\psi _{\lambda }^{2}\left( t\right) $ and integrate over the domain $Q_{T_{1}}.$ Using (\ref{7.10}) and (\ref{7.35}), we obtain \begin{equation} \int_{Q_{T_{1}}}\left( \overline{w}_{t}+a\left( x,t\right) \overline{w}_{xx}\right) ^{2}\psi _{\lambda }^{2}\left( t\right) dxdt\leq C\delta ^{2}e^{2\left( T_{1}+1\right) ^{\lambda }}. \label{7.390} \end{equation} Hence, applying Carleman estimate (\ref{7.11}) to the left hand side of (\ref{7.390}) and taking into account (\ref{7.10})-(\ref{7.100}), we obtain \begin{equation} \int_{Q_{T_{1}}}\overline{w}_{x}^{2}\psi _{\lambda }^{2}dxdt+\lambda ^{3/2}\int_{Q_{T_{1}}}\overline{w}^{2}\psi _{\lambda }^{2}dxdt\leq \label{7.41} \end{equation} \begin{equation*} \leq C\delta ^{2}e^{2\left( T_{1}+1\right) ^{\lambda }}+C\left\Vert \overline{w}\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2},\text{ }\forall \lambda \geq \lambda _{0}.
\end{equation*} Since $Q_{T_{1}-\rho }\subset Q_{T_{1}}$ and also since by (\ref{7.9}) $\psi _{\lambda }^{2}\left( t\right) \geq e^{2\left( T_{1}+1-\rho \right) ^{\lambda }}$ in $Q_{T_{1}-\rho },$ then (\ref{7.41}) implies \begin{equation} \left\Vert \overline{w}_{x}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }^{2}+\left\Vert \overline{w}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }^{2}\leq \label{7.42} \end{equation} \begin{equation*} \leq C\delta e^{\left( T_{1}+1\right) ^{\lambda }}+Ce^{-\left( T_{1}+1-\rho \right) ^{\lambda }}\left\Vert \overline{w}\right\Vert _{H^{2}\left( Q_{T_{1}}\right) },\text{ }\forall \lambda \geq \lambda _{0}. \end{equation*} Choose $\delta _{0}=\delta _{0}\left( T_{1},a_{0},a_{1}\right) \in \left( 0,1\right) $ so small that \begin{equation} \ln \left[ \ln \left( \delta _{0}^{-1/2}\right) ^{1/\ln \left( T_{1}+1\right) }\right] >\lambda _{0}. \label{7.43} \end{equation} Let $\delta \in \left( 0,\delta _{0}\right) .$ We now choose $\lambda =\lambda \left( \delta \right) $ so large that \begin{equation} e^{\left( T_{1}+1\right) ^{\lambda }}=\frac{1}{\sqrt{\delta }}. \label{7.430} \end{equation} Hence, \begin{equation} \lambda =\lambda \left( \delta \right) =\ln \left[ \ln \left( \delta ^{-1/2}\right) ^{1/\ln \left( T_{1}+1\right) }\right] >\lambda _{0},\text{ }\forall \delta \in \left( 0,\delta _{0}\right) . \label{7.44} \end{equation} Then \begin{equation} e^{-\left( T_{1}+1-\rho \right) ^{\lambda }}=\exp \left[ -\left( \ln \delta ^{-1/2}\right) ^{\mu }\right] , \label{7.45} \end{equation} where the number $\mu \in \left( 0,1\right) $ is defined in (\ref{7.110}). We have \begin{equation} \frac{e^{-\left( \ln \delta ^{-1/2}\right) ^{\mu }}}{\sqrt{\delta }}=\exp \left[ -\frac{1}{2}\ln \delta \left( 1+\frac{2\left( \ln \delta ^{-1/2}\right) ^{\mu }}{\ln \delta }\right) \right] .
\label{1} \end{equation} Since $\mu \in \left( 0,1\right) ,$ l'H\^{o}pital's rule implies \begin{equation*} \lim_{\delta \rightarrow 0}\frac{2\left( \ln \delta ^{-1/2}\right) ^{\mu }}{\ln \delta }=\lim_{\delta \rightarrow 0}\left( -\mu \left( \ln \delta ^{-1/2}\right) ^{\mu -1}\right) =0. \end{equation*} Hence, \begin{equation} \lim_{\delta \rightarrow 0}\left[ -\frac{1}{2}\ln \delta \left( 1+\frac{2\left( \ln \delta ^{-1/2}\right) ^{\mu }}{\ln \delta }\right) \right] =\lim_{\delta \rightarrow 0}\left( \ln \delta ^{-1/2}\right) =\infty . \label{2} \end{equation} It follows from (\ref{1}) and (\ref{2}) that \begin{equation*} \lim_{\delta \rightarrow 0}\frac{e^{-\left( \ln \delta ^{-1/2}\right) ^{\mu }}}{\sqrt{\delta }}=\infty . \end{equation*} Hence, \begin{equation} \sqrt{\delta }\leq C_{1}e^{-\left( \ln \delta ^{-1/2}\right) ^{\mu }},\text{ }\forall \delta \in \left( 0,1\right) . \label{7.450} \end{equation} Using (\ref{7.42})-(\ref{7.45}) and (\ref{7.450}), we obtain \begin{equation} \left\Vert \overline{w}_{x}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }+\left\Vert \overline{w}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }\leq \label{7.46} \end{equation} \begin{equation*} \leq C_{1}\left( 1+\left\Vert \overline{w}\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\right) \exp \left( -\left( \ln \delta ^{-1/2}\right) ^{\mu }\right) ,\text{ }\forall \delta \in \left( 0,\delta _{0}\right) . \end{equation*} By (\ref{7.28})-(\ref{7.32}) $\overline{w}=\left( w-w^{\ast }\right) -\widetilde{F}.$ Hence, the triangle inequality, (\ref{7.6}), (\ref{7.28})-(\ref{7.30}), (\ref{7.35}) and (\ref{7.46}) imply (\ref{7.12}), which is the target estimate of this theorem.
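The choice (\ref{7.44}) of $\lambda \left( \delta \right) $ and the asymptotic inequality (\ref{7.450}) can be checked numerically. The following short Python sketch (with illustrative values $T_{1}=1$ and $\mu =0.5$, which are our assumptions and not taken from the paper) verifies that $e^{\left( T_{1}+1\right) ^{\lambda \left( \delta \right) }}=\delta ^{-1/2}$ and that the ratio in the last display above grows as $\delta \rightarrow 0$:

```python
import math

T1 = 1.0   # illustrative value; the argument only needs T1 > 0
mu = 0.5   # any mu in (0, 1), cf. (7.110)

def lam(delta):
    # lambda(delta) = ln[ (ln delta^{-1/2})^{1/ln(T1+1)} ], formula (7.44)
    return math.log(math.log(delta ** -0.5)) / math.log(T1 + 1.0)

delta = 1e-6
lhs = math.exp((T1 + 1.0) ** lam(delta))   # e^{(T1+1)^lambda}
rhs = delta ** -0.5                        # 1/sqrt(delta), cf. (7.430)

def ratio(delta):
    # e^{-(ln delta^{-1/2})^mu} / sqrt(delta); by (2) this blows up as delta -> 0
    return math.exp(-math.log(delta ** -0.5) ** mu) / math.sqrt(delta)

r_coarse, r_fine = ratio(1e-4), ratio(1e-8)
```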
$\square $ \subsection{Proof of Theorem 4.3} \label{sec:4.5} Denote by $\left[ \cdot ,\cdot \right] $ the scalar product in the space $H^{2}\left( Q_{T_{1}}\right) .$ Let $\widehat{w}\in H_{0,0}^{2}\left( Q_{T_{1}}\right) $ be the function defined in (\ref{7.31}). Then, using (\ref{7.7}) and (\ref{7.340}), consider the functional \begin{equation} I_{\alpha }\left( \widehat{w}+F\right) =\int_{Q_{T_{1}}}\left( P\widehat{w}+F_{t}\right) ^{2}dxdt+\alpha \left\Vert \widehat{w}+F\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}. \label{7.47} \end{equation} Suppose that the function $\widehat{w}_{\min }\in H_{0,0}^{2}\left( Q_{T_{1}}\right) $ is a minimizer of the functional $I_{\alpha }\left( \widehat{w}+F\right) $ on the space $H_{0,0}^{2}\left( Q_{T_{1}}\right) ,$ i.e. \begin{equation} I_{\alpha }\left( \widehat{w}_{\min }+F\right) \leq I_{\alpha }\left( \widehat{w}+F\right) ,\text{ }\forall \widehat{w}\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \label{7.48} \end{equation} Consider the function $w_{\min }=\widehat{w}_{\min }+F.$ Then it follows from (\ref{7.100}) and (\ref{7.29}) that $w_{\min }\in Y,$ where the set $Y$ is defined in (\ref{7.8}). Also, $w=\widehat{w}+F\in Y,$ $\forall \widehat{w}\in H_{0,0}^{2}\left( Q_{T_{1}}\right) .$ Hence, (\ref{7.48}) implies that the function $w_{\min }$ is a minimizer of the functional $I_{\alpha }\left( w\right) $ on the set $Y$. We now prove the converse. Suppose that a function $w^{\min }\in Y$ is a minimizer of the functional $I_{\alpha }\left( w\right) $ on the set $Y$, i.e. \begin{equation} I_{\alpha }\left( w^{\min }\right) \leq I_{\alpha }\left( w\right) ,\text{ }\forall w\in Y.
\label{7.49} \end{equation} Consider the function $\widehat{w}^{\min }=w^{\min }-F,$ and for every function $\widehat{w}\in H_{0,0}^{2}\left( Q_{T_{1}}\right) $ consider the function $w=\widehat{w}+F\in Y.$ Then by (\ref{7.49}) \begin{equation*} I_{\alpha }\left( \widehat{w}^{\min }+F\right) =I_{\alpha }\left( w^{\min }\right) \leq I_{\alpha }\left( w\right) =I_{\alpha }\left( \widehat{w}+F\right) ,\text{ }\forall \widehat{w}\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Hence, $\widehat{w}^{\min }$ is a minimizer of the functional (\ref{7.47}) on the space $H_{0,0}^{2}\left( Q_{T_{1}}\right) .$ Therefore, it is sufficient to find a minimizer of the functional $I_{\alpha }\left( \widehat{w}+F\right) $ on the space $H_{0,0}^{2}\left( Q_{T_{1}}\right) .$ By the variational principle the function $\widehat{w}_{\min }\in H_{0,0}^{2}\left( Q_{T_{1}}\right) $ is a minimizer of the functional $I_{\alpha }\left( \widehat{w}+F\right) $ if and only if the following integral identity is satisfied: \begin{equation} \int_{Q_{T_{1}}}\left( P\widehat{w}_{\min }\cdot Ph\right) dxdt+\alpha \left[ \widehat{w}_{\min },h\right] =-\int_{Q_{T_{1}}}F_{t}\cdot Phdxdt-\alpha \left[ F,h\right] ,\text{ } \label{7.50} \end{equation} \begin{equation*} \forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Consider a new scalar product in $H_{0,0}^{2}\left( Q_{T_{1}}\right) $ defined as \begin{equation*} \left\{ u,v\right\} =\int_{Q_{T_{1}}}\left( Pu\cdot Pv\right) dxdt+\alpha \left[ u,v\right] ,\text{ }\forall u,v\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Recall that $\alpha \in \left( 0,1\right) .$ Obviously, \begin{equation*} \alpha \left\Vert u\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\leq \left\{ u,u\right\} \leq C\left\Vert u\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2},\forall u\in H_{0,0}^{2}\left( Q_{T_{1}}\right) .
\end{equation*} Hence, the norms $\sqrt{\left\{ u,u\right\} }$ and $\left\Vert u\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }$ are equivalent, so that one can consider $\left\{ u,v\right\} $ as a scalar product in $H_{0,0}^{2}\left( Q_{T_{1}}\right) .$ Hence, we can rewrite (\ref{7.50}) as \begin{equation} \left\{ \widehat{w}_{\min },h\right\} =-\int_{Q_{T_{1}}}F_{t}\cdot Phdxdt-\alpha \left[ F,h\right] ,\text{ }\forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \label{7.51} \end{equation} The right hand side of (\ref{7.51}) can be estimated as \begin{equation*} \left\vert -\int_{Q_{T_{1}}}F_{t}Phdxdt-\alpha \left[ F,h\right] \right\vert \leq C\left\Vert F\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\left\Vert h\right\Vert _{H^{2}\left( Q_{T_{1}}\right) },\text{ }\forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Hence, the right hand side of (\ref{7.51}) can be considered as a bounded linear functional of $h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) .$ Hence, by the Riesz representation theorem there exists a unique function $W\in H_{0,0}^{2}\left( Q_{T_{1}}\right) $ such that \begin{equation*} -\int_{Q_{T_{1}}}F_{t}Phdxdt-\alpha \left[ F,h\right] =\left\{ W,h\right\} ,\text{ }\forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Comparing this with (\ref{7.51}), we obtain \begin{equation*} \left\{ \widehat{w}_{\min },h\right\} =\left\{ W,h\right\} ,\text{ }\forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) .
\end{equation*} Therefore, $\widehat{w}_{\min }=W.$ Thus, we have proven existence and uniqueness of the minimizer $\widehat{w}_{\min }$ of the functional $I_{\alpha }\left( \widehat{w}+F\right) $ on the space $H_{0,0}^{2}\left( Q_{T_{1}}\right) .$ Therefore, it follows from the discussion in the beginning of this proof that there exists a unique minimizer of the functional $I_{\alpha }\left( w\right) $ on the set $Y$ and this minimizer is $w_{\min }=\widehat{w}_{\min }+F.$ We now estimate the norm $\left\Vert w_{\min }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }.$ Setting in (\ref{7.50}) $h=\widehat{w}_{\min }$ and using the Cauchy-Schwarz inequality, we obtain \begin{equation*} \int_{Q_{T_{1}}}\left( P\widehat{w}_{\min }\right) ^{2}dxdt+\alpha \left\Vert \widehat{w}_{\min }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\leq \end{equation*} \begin{equation*} \leq \frac{1}{2}\left\Vert F_{t}\right\Vert _{L_{2}\left( Q_{T_{1}}\right) }^{2}+\frac{1}{2}\left\Vert P\widehat{w}_{\min }\right\Vert _{L_{2}\left( Q_{T_{1}}\right) }^{2}+\frac{\alpha }{2}\left\Vert F\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}+\frac{\alpha }{2}\left\Vert \widehat{w}_{\min }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}. \end{equation*} Hence, \begin{equation*} \left\Vert \widehat{w}_{\min }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\leq \frac{C}{\sqrt{\alpha }}\left\Vert F\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }. \end{equation*} This estimate, the triangle inequality and (\ref{7.29}) imply the target estimate (\ref{7.13}) of Theorem 4.3. $\square $ \subsection{Proof of Theorem 4.4} \label{sec:4.6} We still use notations (\ref{7.28})-(\ref{7.32}). By Corollary 4.1 Problem 3 has at most one solution. Hence, there exists a unique exact solution $w^{\ast }\in H^{2}\left( Q_{T_{1}}\right) $ of Problem 3 with the data $\varphi _{0}^{\ast },\varphi _{1}^{\ast }\in H^{2}\left( 0,T_{1}\right) $ in (\ref{7.4}) and (\ref{7.5}).
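The $\alpha $-dependence of the bound $\left\Vert \widehat{w}_{\min }\right\Vert _{H^{2}}\leq \left( C/\sqrt{\alpha }\right) \left\Vert F\right\Vert _{H^{2}}$ has a simple finite-dimensional analog which may clarify the mechanism: for the one-variable Tikhonov functional $J(x)=(ax-b)^{2}+\alpha x^{2}$ the unique minimizer is $x_{\alpha }=ab/(a^{2}+\alpha )$, and $\left\vert x_{\alpha }\right\vert \leq \left\vert b\right\vert /(2\sqrt{\alpha })$ by the AM-GM inequality. The following Python check is purely illustrative and not part of the paper's argument:

```python
import math

def tikhonov_min(a, b, alpha):
    # Unique minimizer of J(x) = (a*x - b)^2 + alpha*x^2:
    # J'(x) = 2a(ax - b) + 2*alpha*x = 0  =>  x = a*b / (a^2 + alpha)
    return a * b / (a * a + alpha)

checks = []
for a in (0.1, 1.0, 10.0):
    for b in (-3.0, 2.0):
        for alpha in (1e-4, 1e-2, 0.5):
            x = tikhonov_min(a, b, alpha)
            # discrete analog of ||w_min|| <= (C/sqrt(alpha)) ||F|| with C = 1/2,
            # since a/(a^2 + alpha) <= 1/(2*sqrt(alpha)) by AM-GM
            checks.append(abs(x) <= abs(b) / (2.0 * math.sqrt(alpha)) + 1e-15)
ok = all(checks)
```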
Hence, we have the following analog of integral identity (\ref{7.50}) \begin{equation} \int_{Q_{T_{1}}}\left( P\widehat{w}^{\ast }\cdot Ph\right) dxdt+\alpha \left[ \widehat{w}^{\ast },h\right] = \label{7.53} \end{equation} \begin{equation*} =-\int_{Q_{T_{1}}}F_{t}^{\ast }\cdot Phdxdt+\alpha \left[ \widehat{w}^{\ast },h\right] ,\text{ }\forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Subtract (\ref{7.53}) from (\ref{7.50}). Then, using (\ref{7.30}), (\ref{7.32}) and (\ref{7.52}), we obtain \begin{equation} \int_{Q_{T_{1}}}\left( P\overline{w}\cdot Ph\right) dxdt+\alpha \left[ \overline{w},h\right] = \label{7.54} \end{equation} \begin{equation*} =-\int_{Q_{T_{1}}}\widetilde{F}_{t}\cdot Phdxdt+\alpha \left[ \widehat{w}^{\ast },h\right] ,\text{ }\forall h\in H_{0,0}^{2}\left( Q_{T_{1}}\right) . \end{equation*} Set in (\ref{7.54}) $h=\overline{w}.$ Then, using (\ref{7.35}) and the Cauchy-Schwarz inequality, we obtain \begin{equation} \int_{Q_{T_{1}}}\left( P\overline{w}\right) ^{2}dxdt\leq C\left( \delta ^{2}+\alpha \left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\right) , \label{7.55} \end{equation} \begin{equation} \left\Vert \overline{w}\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\leq C\left( \frac{\delta }{\sqrt{\alpha }}+\left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\right) . \label{7.56} \end{equation} Inequality (\ref{7.55}) is equivalent to \begin{equation*} \int_{Q_{T_{1}}}\left( P\overline{w}\right) ^{2}\psi _{\lambda }^{2}\psi _{\lambda }^{-2}dxdt\leq C\left( \delta ^{2}+\alpha \left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\right) .
\end{equation*} Since by (\ref{7.10}) $\psi _{\lambda }^{-2}\left( t\right) \geq e^{-2\left( T_{1}+1\right) ^{\lambda }}$ in $Q_{T_{1}},$ then (\ref{7.55}) implies \begin{equation} \int_{Q_{T_{1}}}\left( P\overline{w}\right) ^{2}\psi _{\lambda }^{2}dxdt\leq C\left( \delta ^{2}+\alpha \left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\right) e^{2\left( T_{1}+1\right) ^{\lambda }}. \label{7.57} \end{equation} Hence, applying Carleman estimate (\ref{7.11}) to the left hand side of (\ref{7.57}) and recalling again that $\alpha \in \left( 0,1\right) $, we obtain \begin{equation*} \int_{Q_{T_{1}}}\overline{w}_{x}^{2}\psi _{\lambda }^{2}dxdt+\lambda ^{3/2}\int_{Q_{T_{1}}}\overline{w}^{2}\psi _{\lambda }^{2}dxdt\leq \end{equation*} \begin{equation*} \leq C\left( \delta ^{2}+\alpha \left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\right) e^{2\left( T_{1}+1\right) ^{\lambda }}+C\left\Vert \overline{w}\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2},\text{ }\forall \lambda \geq \lambda _{0}. \end{equation*} Hence, similarly to (\ref{7.42}) we obtain \begin{equation*} \left\Vert \overline{w}_{x}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }^{2}+\left\Vert \overline{w}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }^{2}\leq C\left( \delta ^{2}+\alpha \left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2}\right) e^{2\left( T_{1}+1\right) ^{\lambda }} \end{equation*} \begin{equation*} +Ce^{-2\left( T_{1}+1-\rho \right) ^{\lambda }}\left\Vert \overline{w}\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }^{2},\text{ }\forall \lambda \geq \lambda _{0}.
\end{equation*} Combining this with (\ref{7.56}), we obtain \begin{equation} \left\Vert \overline{w}_{x}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }+\left\Vert \overline{w}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }\leq C\left( \delta +\sqrt{\alpha }\left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\right) e^{\left( T_{1}+1\right) ^{\lambda }} \label{7.58} \end{equation} \begin{equation*} +C\frac{\delta }{\sqrt{\alpha }}e^{-\left( T_{1}+1-\rho \right) ^{\lambda }}+\left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }e^{-\left( T_{1}+1-\rho \right) ^{\lambda }},\text{ }\forall \lambda \geq \lambda _{0}. \end{equation*} Suppose now that $\alpha =\alpha \left( \delta \right) =\delta ^{2},$ as stated in (\ref{7.140}). Choose $\delta _{0}=\delta _{0}\left( T_{1},a_{0},a_{1}\right) \in \left( 0,1\right) $ as in (\ref{7.43}) and $\lambda =\lambda \left( \delta \right) $ as in (\ref{7.44}). Then (\ref{7.430}), (\ref{7.45}), (\ref{7.450}) and (\ref{7.58}) imply \begin{equation} \left\Vert \overline{w}_{x}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }+\left\Vert \overline{w}\right\Vert _{L_{2}\left( Q_{T_{1}-\rho }\right) }\leq \label{7.64} \end{equation} \begin{equation*} \leq C_{1}\left( 1+\left\Vert \widehat{w}^{\ast }\right\Vert _{H^{2}\left( Q_{T_{1}}\right) }\right) \exp \left[ -\left( \ln \delta ^{-1/2}\right) ^{\mu }\right] ,\text{ }\forall \delta \in \left( 0,\delta _{0}\right) . \end{equation*} The target estimate (\ref{7.14}) of this theorem follows immediately from the triangle inequality, (\ref{7.6}), (\ref{7.28})-(\ref{7.32}) and (\ref{7.64}). $\square $ \section{Probabilistic Arguments for a Trading Strategy} \label{sec:5} The heuristic algorithm of section 3 can be used as the basis for an options trading strategy. The algorithm predicts the option price change relative to the current price.
The fact that this algorithm uses the information about stock and option prices only over a small time period makes realistic the assumptions of the model of Section 2 about the volatilities being independent of time. Formulas (\ref{2.4}) and (\ref{2.7}) indicate that the sign of the mathematical expectation of the option price increment should likely define the trading strategy. In addition to the mathematical expectation of the option price increment, it is necessary to take into account indicators that reflect the risk of using that trading strategy. This is because the option price dynamics is described by a random process. Based on the model of Section 2, we construct in this section such indicators for a \textquotedblleft perfect\textquotedblright\ trading strategy, which always correctly predicts the sign of the mathematical expectation of the option price increment. We assume in this section that both the volatility $\sigma $ of the stock and the market's perception $\hat{\sigma}$ of the volatility of the call option, which has been developed among the participants involved in trading of this option, are known. Recall that we have assumed in Section 2 that the dynamics of the stock price is described by a stochastic differential equation of the geometric Brownian motion $ds=\sigma sdW$ with the initial condition $s(t_{0})=s_{0}$, and also that the corresponding option price is $v(s(t),t)=u(s(t),T-t)$, where $\tau =T-t\in \left( 0,T\right) $ and the function $u(s,\tau )$ can be found by the Black-Scholes formula (\ref{2.3}). The option price expected by option market participants is described by a stochastic process $v(\hat{s}(t),t)=u(\hat{s}(t),T-t)$, where the expected stock price satisfies the stochastic differential equation of the geometric Brownian motion $d\hat{s}=\hat{\sigma}\hat{s}dW_{1}$ with the initial condition \begin{equation} \hat{s}(t_{0})=s_{0}=s(t_{0}). \label{4.00} \end{equation} Here $W_{1}$ is a Wiener process, and the processes $W$ and $W_{1}$ are independent.
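As an illustration of this model (not taken from the paper), the two geometric Brownian motions can be simulated with the exact solution $s(t)=s_{0}\exp \left( \sigma W(t)-\sigma ^{2}t/2\right) $, using independent Gaussian increments for $W$ and $W_{1}$; the parameter values below are our own illustrative choices:

```python
import math
import random

def gbm_path(s0, sigma, dt, n_steps, rng):
    """Exact simulation of ds = sigma * s * dW:
    s(t+dt) = s(t) * exp(sigma*dW - sigma^2*dt/2), with dW ~ N(0, dt)."""
    path = [s0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] * math.exp(sigma * dw - 0.5 * sigma ** 2 * dt))
    return path

rng = random.Random(0)
s0, dt, n_steps = 100.0, 1.0 / 255.0, 255          # illustrative values
s_true = gbm_path(s0, 0.2, dt, n_steps, rng)       # stock with volatility sigma
s_expected = gbm_path(s0, 0.3, dt, n_steps, rng)   # market's process with sigma-hat
```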
Let $t_{0}\in \left( 0,T\right) $ be a certain moment of time and $\varepsilon >0$ be a sufficiently small number. The true option price at the moment of time $t_{0}+\varepsilon $ is $v(s(t_{0}+\varepsilon ),t_{0}+\varepsilon ).$ On the other hand, at the same moment of time $t_{0}+\varepsilon $ the price of this option expected by the participants of the market is $v(\hat{s}(t_{0}+\varepsilon ),t_{0}+\varepsilon )$. It follows from (\ref{2.4}) that, on the small time interval $(t_{0},t_{0}+\varepsilon ),$ a winning options trading strategy should be based on an estimate of the probability that $v(s(t_{0}+\varepsilon ),t_{0}+\varepsilon )>v(\hat{s}(t_{0}+\varepsilon ),t_{0}+\varepsilon )$. This probability is given in Theorem 5.1. \textbf{Theorem 5.1}. \emph{Let }$\varepsilon >0$\emph{\ be a sufficiently small number and }$\Phi \left( z\right) ,z\in \mathbb{R}$\emph{\ be the function defined in (\ref{2.200}). The probability that at the time }$t_{0}+\varepsilon $\emph{\ the true option price }$v(s(t_{0}+\varepsilon ),t_{0}+\varepsilon )$\emph{\ is greater than the price expected by the participants of the options market }$v(\hat{s}(t_{0}+\varepsilon ),t_{0}+\varepsilon )$\emph{\ is } \begin{equation} p=\Phi \left( \frac{(\hat{\sigma}^{2}-\sigma ^{2})\sqrt{\varepsilon }}{2\sqrt{(\hat{\sigma}^{2}+\sigma ^{2})}}\right) . \label{4.1} \end{equation} \textbf{Proof.} The derivative $\partial u(s,\tau )/\partial s$ is called the Greek delta. This parameter for a call option is \begin{equation} \Delta =\frac{\partial u(s,\tau )}{\partial s}=\Phi (\Theta _{+}(s,\tau ))>0. \label{4.2} \end{equation} Since by (\ref{4.2}) $\partial _{s}u(s,\tau )>0,$ the inequality $v(s(t_{0}+\varepsilon ),t_{0}+\varepsilon )>v(\hat{s}(t_{0}+\varepsilon ),t_{0}+\varepsilon )$ is equivalent to the inequality $s(t_{0}+\varepsilon )>\hat{s}(t_{0}+\varepsilon )$.
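Formula (\ref{4.1}) can be checked by direct Monte Carlo simulation of the two log-returns: for independent $X\in N\left( -\sigma ^{2}\varepsilon /2,\sigma ^{2}\varepsilon \right) $ and $Y\in N\left( -\hat{\sigma}^{2}\varepsilon /2,\hat{\sigma}^{2}\varepsilon \right) $, the empirical frequency of $\left\{ X>Y\right\} $ should approach the value $p$ of (\ref{4.1}). The sketch below uses deliberately large illustrative values of $\varepsilon $ and $\hat{\sigma}$ (our assumptions, chosen only to make the effect visible), and is not part of the proof:

```python
import math
import random

def Phi(z):
    # standard normal CDF, the function Phi of (2.200)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, sigma_hat, eps = 0.2, 0.6, 1.0   # illustrative values
p_formula = Phi((sigma_hat**2 - sigma**2) * math.sqrt(eps)
                / (2.0 * math.sqrt(sigma_hat**2 + sigma**2)))

rng = random.Random(1)
n = 200_000
hits = 0
for _ in range(n):
    x = rng.gauss(-0.5 * sigma**2 * eps, sigma * math.sqrt(eps))          # ln(s(t0+eps)/s(t0))
    y = rng.gauss(-0.5 * sigma_hat**2 * eps, sigma_hat * math.sqrt(eps))  # ln(s_hat(t0+eps)/s_hat(t0))
    hits += x > y
p_mc = hits / n
```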
It follows from (\ref{4.00}) that the latter inequality is equivalent to \begin{equation*} \ln \left( {\frac{s(t_{0}+\varepsilon )}{s(t_{0})}}\right) >\ln \left( {\frac{\hat{s}(t_{0}+\varepsilon )}{\hat{s}(t_{0})}}\right) . \end{equation*} It follows from the properties of the geometric Brownian motion, see, e.g. \cite[Chapter 5, section 5.1]{Bernt}, that the random variables \begin{equation*} \ln \left( \frac{s(t_{0}+\varepsilon )}{s(t_{0})}\right) \in N\left( -\frac{\sigma ^{2}}{2}\varepsilon ,\sigma ^{2}{\varepsilon }\right) , \end{equation*} \begin{equation*} \ln \left( \frac{\hat{s}(t_{0}+\varepsilon )}{\hat{s}(t_{0})}\right) \in N\left( -\frac{\hat{\sigma}^{2}}{2}\varepsilon ,\hat{\sigma}^{2}{\varepsilon }\right) \end{equation*} are normally distributed. Hence, the difference of these two random variables is also a normally distributed random variable, see, e.g. \cite[Chapter 9, section 9.3]{KorSin}, i.e. \begin{equation*} \left[ \ln \left( \frac{s(t_{0}+\varepsilon )}{s(t_{0})}\right) -\ln \left( \frac{\hat{s}(t_{0}+\varepsilon )}{\hat{s}(t_{0})}\right) \right] \in N\left( \frac{\hat{\sigma}^{2}-\sigma ^{2}}{2}\varepsilon ,(\hat{\sigma}^{2}+\sigma ^{2})\varepsilon \right) . \end{equation*} Therefore, the value given by formula (\ref{4.1}) is indeed the probability that the true option price $v(s(t_{0}+\varepsilon ),t_{0}+\varepsilon )$ is greater than the expected option price $v(\hat{s}(t_{0}+\varepsilon ),t_{0}+\varepsilon ).$ $\square $ Theorem 5.1 implies that the operation of buying an option at the time moment $t_{0}$ and selling it at the time moment $t_{0}+\varepsilon $ will be profitable with the probability $p$ given in (\ref{4.1}), and this operation will be non-profitable with the probability $1-p.$ By (\ref{2.200}) and (\ref{4.1}) \begin{equation*} p=\left\{ \begin{array}{c} >1/2\text{ if }\sigma >\hat{\sigma}, \\ \leq 1/2\text{ if }\sigma \leq \hat{\sigma}. \end{array} \right.
\end{equation*} Hence, if $\sigma >\hat{\sigma}$, then it is reasonable to buy an option at the moment of time $t_{0}$ and sell it at the moment of time $t_{0}+\varepsilon .$ Otherwise, it is reasonable to take a short position in the option at $t=t_{0}$. Suppose that $p>1/2$. A winning strategy, which takes into account risks, should involve repeating the operation multiple times with the same independent probabilities of outcomes. Consider $n$ non-overlapping small time intervals $[t_{j},t_{j}+\varepsilon ],$ $j=1,...,n,$ of the same duration $\varepsilon >0,$ on which the option purchase operations are carried out at the moment of time $t_{j}$ with the subsequent sale at the moment of time $t_{j}+\varepsilon $. Consider random variables $\left\{ \xi _{j}\right\} _{j=1}^{n},$ \begin{equation} \xi _{j}= \begin{cases} 1, & \text{if }v(s(t_{j}+\varepsilon ),t_{j}+\varepsilon )\geq v(\hat{s}(t_{j}),t_{j}), \\ 0, & \text{if }v(s(t_{j}+\varepsilon ),t_{j}+\varepsilon )<v(\hat{s}(t_{j}),t_{j}), \end{cases} j=1,...,n. \label{3.99} \end{equation} If $\xi _{j}=1,$ then the operation of buying that option at the moment of time $t_{j}$ and selling it at the moment of time $t_{j}+\varepsilon $ was profitable. If $\xi _{j}=0,$ then that operation was non-profitable. The random variables $\xi _{1},...,\xi _{n}$ are independent identically distributed random variables \cite[Chapter 18, section 18.1]{KorSin}. The frequency of profitable trading operations is characterized by the random variable $\zeta ,$ \begin{equation} \zeta =\frac{1}{n}\sum_{j=1}^{n}\xi _{j}. \label{4.0} \end{equation} It follows from \cite[Chapter 9, section 9.3]{KorSin} that the variable $\zeta $ has a binomial distribution with the mathematical expectation $p$ given in (\ref{4.1}) and with the dispersion $D$, where \begin{equation} D=\frac{p(1-p)}{n}.
\label{4.01} \end{equation} By the de Moivre-Laplace central limit theorem, the probability that more than half of the trades are profitable is estimated as \cite[Chapter 2, section 2.2]{KorSin}: \begin{equation} \sum_{k=\left[ \frac{n}{2}\right] +1}^{n}\frac{n!}{k!(n-k)!}p^{k}(1-p)^{n-k}=\Phi \left( \frac{(2p-1)\sqrt{n}}{2\sqrt{p(1-p)}}\right) +\delta (n), \label{4.3} \end{equation} \begin{equation} \lim_{n\rightarrow \infty }\delta (n)=0. \label{4.4} \end{equation} In our trading strategy, we decide to make transactions if and only if the probability of profitable trading is not less than a given value $\alpha >1/2.$ Hence, by (\ref{4.3}) and (\ref{4.4}) \begin{equation} \frac{(2p-1)\sqrt{n}}{2\sqrt{p(1-p)}}>\Phi ^{-1}\left( \alpha -\delta (n)\right) , \label{4.40} \end{equation} where $\Phi ^{-1}$ is the inverse function of the function $\Phi $ of (\ref{2.200}). By (\ref{4.40}) \begin{equation*} p^{2}-p+\frac{n}{4\left\{ n+\left[ \Phi ^{-1}\left( \alpha -\delta (n)\right) \right] ^{2}\right\} }>0. \end{equation*} Thus, we should have \begin{equation} p\geq \frac{1}{2}\left( 1+\sqrt{\frac{\left[ \Phi ^{-1}\left( \alpha -\delta (n)\right) \right] ^{2}}{n+\left[ \Phi ^{-1}\left( \alpha -\delta (n)\right) \right] ^{2}}}\right) . \label{4.5} \end{equation} To fulfill inequality (\ref{4.5}), the imperfection of the stock market must be significant. More precisely, it follows from (\ref{4.1}) and (\ref{4.5}) that the difference between the volatilities $\sigma $ and $\hat{\sigma}$ must satisfy the following inequality: \begin{equation} \sigma -\hat{\sigma}\geq \frac{2\sqrt{\sigma ^{2}+\hat{\sigma}^{2}}}{\sqrt{\varepsilon }(\sigma +\hat{\sigma})}\left\vert \Phi ^{-1}\left( \frac{1}{2}\left( 1+\sqrt{\frac{\left[ \Phi ^{-1}\left( \alpha -\delta (n)\right) \right] ^{2}}{n+\left[ \Phi ^{-1}\left( \alpha -\delta (n)\right) \right] ^{2}}}\right) \right) \right\vert .
\label{4.6} \end{equation} Based on this estimate of the difference $\sigma -\hat{\sigma}$, we design a trading strategy in the ideal case. \textquotedblleft Ideal\textquotedblright\ means that we know both volatilities $\sigma $ and $\hat{\sigma}.$ \textbf{Trading Strategy for the Ideal Case:} Let $\beta _{1}>0$ and $\beta _{2}<0$ be two threshold numbers. Our trading strategy considers three possible scenarios: \begin{enumerate} \item If $\sigma -\hat{\sigma}\geq \beta _{1}>0$, then it is recommended to buy a call option at the current moment of time $t_{j}$ with the subsequent sale at the next moment of time $t_{j}+\varepsilon $. \item If $\sigma -\hat{\sigma}\leq \beta _{2}$, then it is recommended to go short at the current moment of time $t_{j}$, followed by closing the short position at time $t_{j}+\varepsilon $. \item If $\beta _{2}<\sigma -\hat{\sigma}<\beta _{1}$, then it is recommended to refrain from trading. \end{enumerate} The threshold values $\beta _{1}$ and $\beta _{2}$ can probably be estimated via numerical simulations using the method of section 3, combined with formula (\ref{4.6}). \section{Numerical Studies} \label{sec:6} \subsection{Some numerical details for the algorithm of section 3} \label{sec:6.1} We have computationally simulated the market data as described in subsection 6.2. These data gave us initial and boundary conditions (\ref{3.1}), (\ref{3.3}) and (\ref{3.4}), which, in turn, led us to (\ref{3.11}), (\ref{3.12}), see Steps 1, 2 of subsection 3.2. Next, we have solved Minimization Problem (\ref{3.14}), (\ref{3.15}). To minimize functional (\ref{3.14}), we wrote $Mv$ and $\left\Vert v\right\Vert _{H^{2}\left( G_{2y}\right) }^{2}$ in finite differences and minimized the resulting discrete functional with respect to the values of the function $v$ at grid points by the conjugate gradient method. The starting point of the minimization procedure was $v=0$.
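The minimization step just described can be sketched in Python. The sketch below is illustrative only: a generic symmetric second-difference operator stands in for the actual discretized operator $M$ of the paper, and the grid is far coarser than the one used in the computations. It minimizes a discrete Tikhonov functional $\left\Vert Lv-f\right\Vert ^{2}+\gamma \left\Vert v\right\Vert ^{2}$ by the conjugate gradient method applied to the normal equations, starting from $v=0$ as in the text:

```python
import math

def lap(v, h):
    # symmetric second-difference operator with zero Dirichlet data
    # (a stand-in for the discretized operator M of the paper)
    n = len(v)
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append((left - 2.0 * v[i] + right) / h**2)
    return out

def apply_A(v, h, gamma):
    # normal-equations operator A = L^T L + gamma*I (L = lap is symmetric)
    return [x + gamma * y for x, y in zip(lap(lap(v, h), h), v)]

def cg(matvec, b, iters=500, tol=1e-20):
    # conjugate gradient method for SPD systems; starting point v = 0
    x = [0.0] * len(b)
    r, p = b[:], b[:]
    rs = sum(t * t for t in r)
    for _ in range(iters):
        Ap = matvec(p)
        a = rs / sum(t * u for t, u in zip(p, Ap))
        x = [t + a * u for t, u in zip(x, p)]
        r = [t - a * u for t, u in zip(r, Ap)]
        rs_new = sum(t * t for t in r)
        if rs_new < tol:
            break
        p = [t + (rs_new / rs) * u for t, u in zip(r, p)]
        rs = rs_new
    return x

n, h, gamma = 19, 0.05, 0.01   # toy grid; gamma = 0.01 as in the text
f = [math.sin(math.pi * (i + 1) * h) for i in range(n)]  # synthetic data
b = lap(f, h)                  # right hand side L^T f (L symmetric)
v = cg(lambda u: apply_A(u, h, gamma), b)
res = max(abs(x - y) for x, y in zip(apply_A(v, h, gamma), b))
```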
The regularization parameter $\gamma =0.01$ was the same as in \cite{KlibGol}, and it was chosen by trial and error. The step sizes $h_{x}$ and $h_{t}$ of the finite difference scheme with respect to $x$ and $t$ were $h_{x}=0.01$ and $h_{t}=0.0000784$ respectively. Since by (\ref{3.100}) and (\ref{3.8}) $G_{2y}=\left\{ \left( x,t\right) \in \left( 0,1\right) \times \left( 0,0.00784\right) \right\} ,$ we had 100 grid points with respect to each of the variables $x$ and $t.$ \subsection{The data} \label{sec:6.2} We construct the stock price trajectory $s(t)$ as a solution to the stochastic differential equation $ds=\sigma sdW$ with the initial condition $s(0)=100,$ where $\sigma =0.2.$ We model the stock prices and then the prices of 90-day European call options on this stock during the life of the stock, assuming that the options are reissued many times with the same maturity of 90 days. Thus, we obtain a time series $\left\{ s\left( t_{k}\right) \right\} _{k=1}^{N},N\geq n=2000.$ We set the payoff function for each option as $f(s)=(s-100)_{+}.$ The generated stock price trajectory is shown in Figure~\ref{fig: An example of the time dependent behavior of stock prices generated by the geometric Brownian motion}. \begin{figure} \caption{\emph{An example of the time dependent behavior of stock prices generated by the geometric Brownian motion with $\protect\sigma =0.2$.}} \label{fig: An example of the time dependent behavior of stock prices generated by the geometric Brownian motion} \end{figure} The probabilistic analysis of the random variable $\zeta $ in section 5 characterizes the effectiveness of an \textquotedblleft ideal\textquotedblright\ trading strategy. Thus, we consider the ideal case first. Recall that \textquotedblleft ideal\textquotedblright\ means here that this strategy is based on the knowledge of the information about the imperfection of the financial market, i.e. on the knowledge of both volatilities $\sigma $ and $\hat{\sigma}$.
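For the ideal strategy, the threshold (\ref{4.5}) on $p$ can be evaluated directly. The following hedged sketch uses illustrative values $n=2000$ and $\alpha =0.55$ (our choices) and drops the asymptotically vanishing term $\delta (n)$; the inverse $\Phi ^{-1}$ is computed by bisection, since the model only provides $\Phi $ of (\ref{2.200}):

```python
import math

def Phi(z):
    # standard normal CDF, cf. (2.200)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(q, lo=-10.0, hi=10.0):
    # inverse of Phi by bisection (sufficient for this illustration)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n, alpha = 2000, 0.55                          # illustrative values; delta(n) dropped
q = Phi_inv(alpha) ** 2                        # [Phi^{-1}(alpha)]^2
p_min = 0.5 * (1.0 + math.sqrt(q / (n + q)))   # right hand side of (4.5)
```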
However, in the real market data only approximate values of $\hat{\sigma}$ are available \cite{Bloom}. We test a total of thirty three (33) values of $\hat{\sigma}.$ More precisely, in our computational simulations, we took the discrete values of $\hat{\sigma},$ where \begin{equation} \hat{\sigma}\in \lbrack 0.05,0.38]\text{ with the step size }h_{\hat{\sigma}}=0.01. \label{6.1} \end{equation} We now generate the function that describes the dependence of the mathematical expectation of the random variable $\zeta $ in (\ref{4.0}) on $\hat{\sigma}$. Keeping in mind that $\sigma =0.2$ in all cases, we compute for each of the discrete values of $\hat{\sigma}$ in (\ref{6.1}) the mathematical expectation $p\left( \hat{\sigma}\right) $ of the random variable $\zeta $. We use formula (\ref{4.1}) for $p\left( \hat{\sigma}\right) ,$ also, see (\ref{2.200}). This way we obtain the function $p\left( \hat{\sigma}\right) ,$ which is the above dependence for the ideal case. Second, we consider a non-ideal case. More precisely, we test how our heuristic algorithm works for the computationally simulated data described in this subsection. We choose $n=2000$ non-overlapping time intervals $[t_{j},t_{j}+y],$ $j=1,...,n,$ where $y=1/255$ corresponds to one dimensionless trading day, see (\ref{1.2}) and (\ref{3.100}). We still use the dimensionless time as in (\ref{1.1}), while keeping the same notation $t$ for brevity. For every fixed value of $\hat{\sigma}$ indicated in (\ref{6.1}), we calculate the option price $v(s\left( t_{j}\right) ,t_{j})=u(s\left( t_{j}\right) ,T-t_{j}),$ where the function $u\left( s,\tau \right) =u\left( s,T-t\right) $ is given by Black-Scholes formula (\ref{2.2}). Thus, the numbers $v(s\left( t_{j}\right) ,t_{j})$ form the option price trajectory.
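The ideal-case curve $p\left( \hat{\sigma}\right) $ can be tabulated directly from (\ref{4.1}) over the grid (\ref{6.1}). In the sketch below, $\varepsilon $ is taken to be one trading day $y=1/255$; this particular choice of $\varepsilon $ is our assumption for the illustration:

```python
import math

def Phi(z):
    # standard normal CDF of (2.200)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_ideal(sigma_hat, sigma=0.2, eps=1.0 / 255.0):
    # formula (4.1) with sigma = 0.2 fixed, as in the experiments
    return Phi((sigma_hat**2 - sigma**2) * math.sqrt(eps)
               / (2.0 * math.sqrt(sigma_hat**2 + sigma**2)))

grid = [round(0.05 + 0.01 * k, 2) for k in range(34)]   # 0.05, 0.06, ..., 0.38
curve = [(sh, p_ideal(sh)) for sh in grid]
p_at_sigma = p_ideal(0.2)   # p = 1/2 exactly when sigma-hat = sigma
```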
Figure~\ref{fig: The trajectory of the option price with} displays a sample of the trajectory of the option price for $\hat{\sigma}=0.1.$ Based on (\ref{3.0}), we set bid and ask stock prices as well as the corresponding bid and ask option prices as:
\begin{equation*}
s_{b}\left( t_{j}\right) =0.99\cdot s\left( t_{j}\right) \text{ and }s_{a}\left( t_{j}\right) =1.01\cdot s\left( t_{j}\right) ,
\end{equation*}
\begin{equation*}
v_{b}(s\left( t_{j}\right) ,t_{j})=0.99\cdot v(s\left( t_{j}\right) ,t_{j})\text{ and }v_{a}(s\left( t_{j}\right) ,t_{j})=1.01\cdot v(s\left( t_{j}\right) ,t_{j}).
\end{equation*}
Next, we solve Problem 2 of section 3 for each $j$ on the time interval $\left[ t_{j},t_{j}+2y\right] $ by the algorithm of that section. When doing so, we take in (\ref{3.6}) $\sigma ^{2}\left( t\right) =\hat{\sigma}^{2}$ for $t\in \left[ t_{j},t_{j}+2y\right] $ for all $j=1,...,2000.$ In particular, this solution via QRM gives us the function $v_{\text{comp}}\left( s,t_{j}+y\right) ,$ $s\in \left( s_{b}\left( t_{j}\right) ,s_{a}\left( t_{j}\right) \right) .$ We set the predicted price of the option at the moment of time $t_{j}+y$ as:
\begin{equation}
v_{\text{pred}}\left( t_{j}+y\right) =v_{\text{comp}}\left( \frac{s_{b}\left( t_{j}\right) +s_{a}\left( t_{j}\right) }{2},t_{j}+y\right) .  \label{6.2}
\end{equation}

\begin{figure}
\caption{\emph{The trajectory of the option price with $\hat{\sigma}=0.1.$}}
\label{fig: The trajectory of the option price with}
\end{figure}

\subsection{Results}

\label{sec:6.3}

For every discrete value of $\hat{\sigma}$ in (\ref{6.1}), we introduce the sequence $\left\{ \overline{\xi }_{j}\left( \hat{\sigma}\right) \right\} _{j=1}^{n}.$ This sequence is similar to the sequence $\left\{ \xi _{j}\right\} _{j=1}^{n}$ in (\ref{3.99}). Recall that $v(s(t_{j}),t_{j})$ and $v(s(t_{j}+y),t_{j}+y)$ are the true prices of the option at the moments of time $t_{j}$ and $t_{j}+y,$ respectively.
We set
\begin{equation}
\overline{\xi }_{j}\left( \hat{\sigma}\right) =\left\{
\begin{array}{c}
1\text{ if }v_{\text{pred}}\left( t_{j}+y\right) \geq v(s(t_{j}),t_{j})\text{ and }v(s(t_{j}+y),t_{j}+y)\geq v(s(t_{j}),t_{j}), \\
0\text{ otherwise.}
\end{array}
\right.   \label{6.3}
\end{equation}
Next, we introduce the function $\overline{\zeta }\left( \hat{\sigma}\right) $ of the discrete variable $\hat{\sigma}$ as:
\begin{equation}
\overline{\zeta }\left( \hat{\sigma}\right) =\frac{1}{n}\sum_{j=1}^{n}\overline{\xi }_{j}\left( \hat{\sigma}\right) ,\text{ }n=2000.  \label{6.4}
\end{equation}
It follows from (\ref{6.3}) and (\ref{6.4}) that $\overline{\zeta }\left( \hat{\sigma}\right) $ is the frequency of correctly predicted profitable cases for trading this option under the market's opinion $\hat{\sigma}$ of the volatility of the option. Predictions are performed by our algorithm of section 3. Comparison of (\ref{3.99}) and (\ref{4.0}) with (\ref{6.3}) and (\ref{6.4}) shows that $\overline{\zeta }\left( \hat{\sigma}\right) $ is the analog, for our simulated data, of the ideal-case random variable $\zeta .$

The boldfaced curve on Figure~\ref{fig:Results of our computations} depicts the graph of the function $\overline{\zeta }\left( \hat{\sigma}\right) .$ The middle non-horizontal curve on Figure \ref{fig:Results of our computations} depicts the graph of the function $p\left( \hat{\sigma}\right) ,$ which was constructed in subsection 6.2 for the ideal case. The upper and the lower curves on Figure~\ref{fig:Results of our computations} display the shifts of the ideal curve up and down by $\sqrt{D},$ where $D$ is the dispersion of $\zeta $ given in (\ref{4.01}). In other words, with high probability the values of $\zeta $ are contained in the trust corridor between these two curves. The vertical line indicates the \textquotedblleft critical" value $\hat{\sigma}=\sigma =0.2,$ where $\sigma =0.2$ is the volatility of the stock.
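For concreteness, the indicator (\ref{6.3}) and the frequency (\ref{6.4}) can be computed from a price trajectory as in the following sketch; the function name and the list-based interface are illustrative assumptions of ours:

```python
def prediction_frequency(v_true, v_pred):
    """Frequency (6.4) of the indicators (6.3): the fraction of days j on
    which both the predicted price v_pred[j] (for time t_j + y) and the true
    next-day price v_true[j + 1] are >= the current true price v_true[j]."""
    n = len(v_pred)  # v_true must hold n + 1 true option prices
    hits = sum(
        1
        for j in range(n)
        if v_pred[j] >= v_true[j] and v_true[j + 1] >= v_true[j]
    )
    return hits / n
```

Evaluating this quantity over the grid (\ref{6.1}) of $\hat{\sigma}$ values yields the boldfaced curve discussed above.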
\begin{figure}
\caption{\emph{Results of our computations. The vertical line indicates the value $\hat{\sigma}=\sigma =0.2.$}}
\label{fig:Results of our computations}
\end{figure}

One can see from the boldfaced curve of Figure~\ref{fig:Results of our computations} that as long as $\hat{\sigma}$ is either rather close to $\sigma $ or $\hat{\sigma}<\sigma ,$ the short position in this option represents a significant risk. However, when $\hat{\sigma}$ becomes less than $\approx 0.7\sigma ,$ the probability of profit in the short position increases. This coincides with the prediction of our theory. The boldfaced curve is an analog of the middle curve of the mathematical expectation of the random variable $\zeta $ in the ideal case. Since the boldfaced curve on Figure \ref{fig:Results of our computations} lies within the trust corridor of the ideal algorithm, we conclude that our prediction accuracy of profitable cases is comparable with the ideal one.

Unlike the ideal case, in a realistic scenario of financial market data, e.g. that of \cite{Bloom}, only approximate values of $\hat{\sigma}$ are available. It is this information that was used in \cite{KlibGol,Nik} and, in particular, in Tables 1 and 2. Thus, our results support the following trading strategy in the non-ideal case:

\textbf{Trading Strategy for the Non-Ideal Case:} Let $\eta >0$ be a threshold number, which should be determined numerically by trial and error. For example, $\eta $ might be linked to the transaction cost. Let $v_{\text{pred}}\left( t_{j}+y\right) $ be the number defined in (\ref{6.2}).
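The three-case threshold rule stated next can be sketched in code as follows; the function name and the returned string labels are illustrative assumptions of ours:

```python
def trade_signal(v_pred_next, v_current, eta):
    """Threshold decision rule with eta > 0 (e.g. linked to the transaction
    cost): compare the predicted next-day option price with the current one."""
    if v_pred_next >= v_current + eta:
        return "buy"    # buy at t_j, sell at the next trading day t_j + y
    if v_pred_next < v_current - eta:
        return "short"  # go short at t_j, close the position at t_j + y
    return "hold"       # refrain from trading
```

The dead zone of width $2\eta$ around the current price prevents trading on predictions too small to cover costs.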
\begin{enumerate}
\item If $v_{\text{pred}}\left( t_{j}+y\right) \geq v\left( s\left( t_{j}\right) ,t_{j}\right) +\eta ,$ then it is recommended to buy the option at the current trading day $t_{j}$ and to sell it at the next trading day $t_{j}+y.$

\item If $v_{\text{pred}}\left( t_{j}+y\right) <v\left( s\left( t_{j}\right) ,t_{j}\right) -\eta ,$ then it is recommended to go short at the current trading day $t_{j}$ and to close the short position at the trading day $t_{j}+y$.

\item If $v\left( s\left( t_{j}\right) ,t_{j}\right) -\eta \leq v_{\text{pred}}\left( t_{j}+y\right) <v\left( s\left( t_{j}\right) ,t_{j}\right) +\eta ,$ then it is recommended to refrain from trading.
\end{enumerate}

We believe that our results support the following two hypotheses:

\textbf{Hypothesis 1:} \emph{The reason why the heuristic algorithm of \cite{KlibGol} and section 3 performs well is that it likely forecasts in many cases the signs of the differences }$\sigma -\hat{\sigma}$\emph{\ for the next trading day ahead of the current one.}

\textbf{Hypothesis 2:} Since the maximal value of $\overline{\zeta }\left( \hat{\sigma}\right) $ on the boldfaced curve of Figure~\ref{fig:Results of our computations} is 0.515, which is rather close to the value 0.5577 in the \textquotedblleft Precision" column of Table 1 and in the second column of Table 2, we probably had in those tested real market data about 56\% of options in which $\sigma -\hat{\sigma}<0.$

\section{Concluding Remarks}

\label{sec:7}

We have considered a mathematical model in which two markets are in place: the stock market and the options market. We have assumed that the market is imperfect. More precisely, we have assumed that agents of the options market have their own idea about the volatility $\hat{\sigma}$ of the option, and this idea might be different from the volatility $\sigma $ of the stock. We have proven that if $\sigma \neq \hat{\sigma}$, then there is an opportunity for a winning strategy.
A rigorous probabilistic analysis was carried out. This analysis has shown that the mathematical expectation of the frequency of correctly guessed option price movements can be obtained, and it depends on the difference between $\sigma $ and $\hat{\sigma}$.

We have considered both ideal and non-ideal cases. In the ideal case, both volatilities $\sigma $ and $\hat{\sigma}$ are known. In the more realistic non-ideal case, however, only the volatility $\hat{\sigma}\approx \sigma _{\text{impl}}$ of the option is known from the market data; see, e.g. \cite{Bloom}. We have demonstrated in our numerical simulations that the accuracy of our prediction of profitable cases by the algorithm of \cite{KlibGol} for the non-ideal case is comparable with that accuracy for the ideal case.

These results led us to two hypotheses. The first hypothesis is that our algorithm of \cite{KlibGol} actually forecasts in many cases the signs of the differences $\sigma -\hat{\sigma}$ for the next trading day ahead of the current one. Our second hypothesis is based on our above results as well as on the \textquotedblleft Precision" column in Table 1 and the second column in Table 2. This second hypothesis tells one that probably about 56\% of the 23,549 tested options of \cite{Nik} with the real market data had $\sigma -\hat{\sigma}<0.$

A new convergence analysis of our algorithm was carried out. To do this, the technique of \cite{KlibYag} was modified and simplified for our specific case of the 1-D parabolic equation with reversed time. We have lifted here the assumption of \cite{KlibGol} that the time interval $\left( 0,2y\right) $ is sufficiently small. Indeed, even though we actually work with a small number $2y$ in our computations, that assumption might require even smaller values. In addition, we have derived a stability estimate for Problem 3 of subsection 4.1, which was not done in \cite{KlibGol}.

\begin{center}
\textbf{Acknowledgment}
\end{center}

The work of A.A.
Shananin was supported by the Russian Foundation for Basic Research, grant number 20-57-53002. \end{document}
math
पुस्तक का साइज़ : ५.५9 म्ब गोस्वामी श्री राम चरण पुरी - गोस्वामी श्री राम चरण पूरी के बारे में कोई जानकारी उपलब्ध नहीं है | जानकारी जोड़ें | भाषाटीकासाहिता पटल १ ् हे उस कर्मकाण्डमें दो प्रकार हैं एक निषेष दूसरा विधि निषेध कम करनेसे निश्चय पाप होता है विधान करें करनेते निश्वय करके पुण्य होता है ॥ २० ॥ ॥ २१ निविधो विधिकूटः स्यान्नित्यनैमित्ति- - काम्यतः ॥ _नित्येश्कते किल्विष॑ स्यात्काम्ये नैमितिके फलम् ॥ २२ ॥ टीका-विधि कमेंमें तीन प्रकारका भेद कहा हे नित्य १ नेमित्तिक २ सकाम ३ नित्यकमे संध्या देवा- चैन आदि न करनेसे पाप होता है सकाम अथोत जो कमे फ़ठके इच्छासे किया जाता हे और नेमि- त्तिक जो तीथींमें पवोदिकमे स्रानादिक करते हैं इनके न करनेसे पाप नहीं होता परन्तु करनेसे फठ इोताहै॥ र२॥ द्विविषं तु फ् जय स्वर्ग नरकमेव च ॥ स्वर्ग नानाविष॑ चेव नरके च तथा मवत् ॥ .९३ ॥ टीका-फ दो प्रकारका होता हे स्वगे और नरक स्वगे नाना प्रकारका है ऐसेही नरकभी बहुत प्रकारका हे तात्पये यह हे कि जेसा जो मनुष्य झुभाझुभ करे करता हे वेसेही नरक वा स्वगेमें जाता हे ॥ ९३ ॥ . पुण्यकर्माण व॑ स्वगा नरक पापक-
hindi
बुधवार, २२ मई २०१९ | समय ०५:२१ हर्स(इस्ट) योगी सरकार को भी उखाड़ फेंकने तक चुप नहीं बैठेगा गठबंधन: मायावती मायावती ने जनता का आह्वान किया कि आज आप लोग यह तय करके जाएं कि चंदौली से चुनाव लड़ रहे प्रदेश भाजपा अध्यक्ष महेन्द्र नाथ पाण्डेय की जमानत जब्त कराएंगे। मिर्जापुर/चंदौली (उप्र)। बसपा प्रमुख मायावती ने शुक्रवार को कहा कि सपा और रालोद के साथ उनका महागठबंधन विचारों का गठजोड़ है और यह केन्द्र की नरेन्द्र मोदी सरकार के अलावा उत्तर प्रदेश की योगी आदित्यनाथ सरकार को भी सत्ता से उखाड़ फेंकने तक चुप नहीं बैठेगा। मायावती ने मिर्जापुर और चंदौली में महागठबंधन प्रत्याशियों के समर्थन में आयोजित संयुक्त महारैलियों में कहा कि लोकसभा चुनाव के पिछले छह चरणों में अपनी बुरी हालत से हताश भाजपा नेताओं के चेहरे बता रहे हैं कि मोदी सरकार विदा होने वाली है। उनके बुरे दिन २३ मई से शुरू हो जाएंगे। उसके बाद योगी के मठ में वापस जाने की भी पूरी तैयारी शुरू हो जाएगी। उन्होंने आरोप लगाया कि चुनाव में अपनी खराब स्थिति को देखते हुए भाजपा एण्ड कम्पनी के लोगों ने सपा और बसपा के बीच भ्रम पैदा करने की पूरी कोशिश की है, जिसमें उन्हें कामयाबी नहीं मिली है। हमारा गठबंधन बहुत सोच समझकर बना है। यह लम्बा चलेगा। उन्होंने कहा, यह भाजपा के लोगों की तरह महामिलावटी नहीं, बल्कि विचारों का गठबंधन है। हम तब तक चुप नहीं बैठेंगे, जब तक हम योगी सरकार को भी यहां सत्ता से उखाड़ नहीं फेंकते। समाजवादी पार्टी के राष्ट्रीय अध्यक्ष एवं पूर्व मुख्यमंत्री श्री अखिलेश यादव और बसपा अध्यक्ष सुश्री बहन मायावती जी ने मिर्जापुर और चंदौली में संयुक्त जनसभाओं को संबोधित करते हुए... 
पिक.ट्विटर.कॉम/४७ऐत्र्ज६यब मायावती ने जनता का आह्वान किया कि आज आप लोग यह तय करके जाएं कि चंदौली से चुनाव लड़ रहे प्रदेश भाजपा अध्यक्ष महेन्द्र नाथ पाण्डेय की जमानत जब्त कराएंगे। उन्होंने भाजपा के लोगों की बात शौचालय से शुरू होकर शौचालय पर ही खत्म होने के अखिलेश के बयान को स्पष्ट करते हुए कहा कि भाजपा इस बयान को गलत तरीके से पेश कर रही है। जहां तक बहन-बेटियों के सम्मान की रक्षा के लिये जितना काम सपा-बसपा ने किया है, वह और किसी ने नहीं किया है। उन्होंने कहा कि बसपा की सरकार ने नक्सलवाद से निपटने के लिये प्रभावित क्षेत्रों को विकास कार्यों से जोड़ा है। हमने नक्सलवादियों का कत्लेआम करने के बजाय उन्हें रोजीरोटी से जोड़ा। हमने क्षेत्र की ज्यादातर नौकरियां सोनभद्र के गरीब लोगों को दी। उसके बाद आगे सपा सरकार ने भी इस दिशा में काफी काम किया है। मायावती ने कहा कि सपा प्रमुख अखिलेश यादव ने गठबंधन के तहत बसपा के प्रत्याशियों के पक्ष में प्रचार में काफी मेहनत की है। ऐसे में बसपा कार्यकर्ताओं की नैतिक जिम्मेदारी बनती है तो वह भी चंदौली से सपा प्रत्याशी को जिताएं। यह समझकर काम करें कि यहां से बसपा उम्मीदवार ही खड़े हैं। अखिलेश ने इस मौके पर दावा किया कि चुनाव के शुरुआती छह चरणों में इस बार गठबंधन के लोगों ने भाजपा का पूरा सफाया कर दिया है। भाजपा नेताओं को अब नींद नहीं आ रही है, जैसे-जैसे २३ मई नजदीक आ रही है, वैसे-वैसे उनमें खौफ बढ़ रहा है। सपा प्रमुख ने कहा कि प्रधानमंत्री मोदी हमें समाजवाद सिखाना चाहते हैं, कहते हैं कि समाजवादियों को समाजवाद के बारे में कुछ नहीं पता। हम प्रधानमंत्री से कहना चाहते हैं कि दो दिन के बाद आपके पास बहुत फुरसत है। हम आपको डॉक्टर राम मनोहर लोहिया के कुछ किताबें भिजवा देते हैं। लोहिया की किताब हिन्दू बनाम हिन्दू और इतिहास चक्र पढ़ लेना आपको भारत के बारे में समझ में आ जाएगा। उन्होंने कहा कि अगर नहीं पढ़ा तो आप उन बातों को नहीं समझते, जिनके आंदोलन को लेकर यह गठबंधन बना है। गरीब, दलित, पिछड़े और अल्पसंख्यक जिन्हें हक सम्मान नहीं मिला, उन्हें यह दिलाने के लिये ही गठबंधन बना है। जो लोग इस गठबंधन को तोड़ना चाहते हैं, वे खुद ही टूट जाएंगे। अखिलेश ने कहा कि मोदी ने पांच साल में इस देश को विकास की दौड़ में पीछे किया है। अर्थव्यवस्था खराब की है। इस देश पर जिस पर 
कभी ३५ लाख करोड़ रुपये कर्ज था, वह आज ७० लाख करोड़ रुपये पहुंच गया है। अगर ३५ लाख करोड़ रुपये गरीब, किसान और नौजवान तक नहीं पहुंचा तो वह पैसा गया कहां। मोदी हमारे आपके प्रधानमंत्री नहीं हैं, वह देश के एक प्रतिशत अमीरों के प्रधानमंत्री हैं।
hindi
/* WrappedPlainView.java -- Copyright (C) 2005, 2006 Free Software Foundation, Inc. This file is part of GNU Classpath. GNU Classpath is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. GNU Classpath is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GNU Classpath; see the file COPYING. If not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Linking this library statically or dynamically with other modules is making a combined work based on this library. Thus, the terms and conditions of the GNU General Public License cover the whole combination. As a special exception, the copyright holders of this library give you permission to link this library with independent modules to produce an executable, regardless of the license terms of these independent modules, and to copy and distribute the resulting executable under terms of your choice, provided that you also meet, for each linked independent module, the terms and conditions of the license of that module. An independent module is a module which is not derived from or based on this library. If you modify this library, you may extend this exception to your version of the library, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version. 
*/ package javax.swing.text; import java.awt.Color; import java.awt.Container; import java.awt.FontMetrics; import java.awt.Graphics; import java.awt.Rectangle; import java.awt.Shape; import javax.swing.event.DocumentEvent; import javax.swing.text.Position.Bias; /** * @author Anthony Balkissoon abalkiss at redhat dot com * */ public class WrappedPlainView extends BoxView implements TabExpander { /** The color for selected text **/ Color selectedColor; /** The color for unselected text **/ Color unselectedColor; /** The color for disabled components **/ Color disabledColor; /** * Stores the font metrics. This is package private to avoid synthetic * accessor method. */ FontMetrics metrics; /** Whether or not to wrap on word boundaries **/ boolean wordWrap; /** A ViewFactory that creates WrappedLines **/ ViewFactory viewFactory = new WrappedLineCreator(); /** The start of the selected text **/ int selectionStart; /** The end of the selected text **/ int selectionEnd; /** The height of the line (used while painting) **/ int lineHeight; /** * The base offset for tab calculations. */ private int tabBase; /** * The tab size. */ private int tabSize; /** * The instance returned by {@link #getLineBuffer()}. */ private transient Segment lineBuffer; public WrappedPlainView (Element elem) { this (elem, false); } public WrappedPlainView (Element elem, boolean wordWrap) { super (elem, Y_AXIS); this.wordWrap = wordWrap; } /** * Provides access to the Segment used for retrievals from the Document. * @return the Segment. */ protected final Segment getLineBuffer() { if (lineBuffer == null) lineBuffer = new Segment(); return lineBuffer; } /** * Returns the next tab stop position after a given reference position. * * This implementation ignores the <code>tabStop</code> argument. 
* * @param x the current x position in pixels * @param tabStop the position within the text stream that the tab occured at */ public float nextTabStop(float x, int tabStop) { int next = (int) x; if (tabSize != 0) { int numTabs = ((int) x - tabBase) / tabSize; next = tabBase + (numTabs + 1) * tabSize; } return next; } /** * Returns the tab size for the Document based on * PlainDocument.tabSizeAttribute, defaulting to 8 if this property is * not defined * * @return the tab size. */ protected int getTabSize() { Object tabSize = getDocument().getProperty(PlainDocument.tabSizeAttribute); if (tabSize == null) return 8; return ((Integer)tabSize).intValue(); } /** * Draws a line of text, suppressing white space at the end and expanding * tabs. Calls drawSelectedText and drawUnselectedText. * @param p0 starting document position to use * @param p1 ending document position to use * @param g graphics context * @param x starting x position * @param y starting y position */ protected void drawLine(int p0, int p1, Graphics g, int x, int y) { try { // We have to draw both selected and unselected text. 
There are // several cases: // - entire range is unselected // - entire range is selected // - start of range is selected, end of range is unselected // - start of range is unselected, end of range is selected // - middle of range is selected, start and end of range is unselected // entire range unselected: if ((selectionStart == selectionEnd) || (p0 > selectionEnd || p1 < selectionStart)) drawUnselectedText(g, x, y, p0, p1); // entire range selected else if (p0 >= selectionStart && p1 <= selectionEnd) drawSelectedText(g, x, y, p0, p1); // start of range selected, end of range unselected else if (p0 >= selectionStart) { x = drawSelectedText(g, x, y, p0, selectionEnd); drawUnselectedText(g, x, y, selectionEnd, p1); } // start of range unselected, end of range selected else if (selectionStart > p0 && selectionEnd > p1) { x = drawUnselectedText(g, x, y, p0, selectionStart); drawSelectedText(g, x, y, selectionStart, p1); } // middle of range selected else if (selectionStart > p0) { x = drawUnselectedText(g, x, y, p0, selectionStart); x = drawSelectedText(g, x, y, selectionStart, selectionEnd); drawUnselectedText(g, x, y, selectionEnd, p1); } } catch (BadLocationException ble) { // shouldn't happen } } /** * Renders the range of text as selected text. Just paints the text * in the color specified by the host component. Assumes the highlighter * will render the selected background. 
* @param g the graphics context * @param x the starting X coordinate * @param y the starting Y coordinate * @param p0 the starting model location * @param p1 the ending model location * @return the X coordinate of the end of the text * @throws BadLocationException if the given range is invalid */ protected int drawSelectedText(Graphics g, int x, int y, int p0, int p1) throws BadLocationException { g.setColor(selectedColor); Segment segment = getLineBuffer(); getDocument().getText(p0, p1 - p0, segment); return Utilities.drawTabbedText(segment, x, y, g, this, p0); } /** * Renders the range of text as normal unhighlighted text. * @param g the graphics context * @param x the starting X coordinate * @param y the starting Y coordinate * @param p0 the starting model location * @param p1 the end model location * @return the X location of the end off the range * @throws BadLocationException if the range given is invalid */ protected int drawUnselectedText(Graphics g, int x, int y, int p0, int p1) throws BadLocationException { JTextComponent textComponent = (JTextComponent) getContainer(); if (textComponent.isEnabled()) g.setColor(unselectedColor); else g.setColor(disabledColor); Segment segment = getLineBuffer(); getDocument().getText(p0, p1 - p0, segment); return Utilities.drawTabbedText(segment, x, y, g, this, p0); } /** * Loads the children to initiate the view. Called by setParent. * Creates a WrappedLine for each child Element. */ protected void loadChildren (ViewFactory f) { Element root = getElement(); int numChildren = root.getElementCount(); if (numChildren == 0) return; View[] children = new View[numChildren]; for (int i = 0; i < numChildren; i++) children[i] = new WrappedLine(root.getElement(i)); replace(0, 0, children); } /** * Calculates the break position for the text between model positions * p0 and p1. Will break on word boundaries or character boundaries * depending on the break argument given in construction of this * WrappedPlainView. 
Used by the nested WrappedLine class to determine * when to start the next logical line. * @param p0 the start model position * @param p1 the end model position * @return the model position at which to break the text */ protected int calculateBreakPosition(int p0, int p1) { Segment s = new Segment(); try { getDocument().getText(p0, p1 - p0, s); } catch (BadLocationException ex) { assert false : "Couldn't load text"; } int width = getWidth(); int pos; if (wordWrap) pos = p0 + Utilities.getBreakLocation(s, metrics, tabBase, tabBase + width, this, p0); else pos = p0 + Utilities.getTabbedTextOffset(s, metrics, tabBase, tabBase + width, this, p0, false); return pos; } void updateMetrics() { Container component = getContainer(); metrics = component.getFontMetrics(component.getFont()); tabSize = getTabSize()* metrics.charWidth('m'); } /** * Determines the preferred span along the given axis. Implemented to * cache the font metrics and then call the super classes method. */ public float getPreferredSpan (int axis) { updateMetrics(); return super.getPreferredSpan(axis); } /** * Determines the minimum span along the given axis. Implemented to * cache the font metrics and then call the super classes method. */ public float getMinimumSpan (int axis) { updateMetrics(); return super.getMinimumSpan(axis); } /** * Determines the maximum span along the given axis. Implemented to * cache the font metrics and then call the super classes method. */ public float getMaximumSpan (int axis) { updateMetrics(); return super.getMaximumSpan(axis); } /** * Called when something was inserted. Overridden so that * the view factory creates WrappedLine views. */ public void insertUpdate (DocumentEvent e, Shape a, ViewFactory f) { // Update children efficiently. updateChildren(e, a); // Notify children. Rectangle r = a != null && isAllocationValid() ? 
getInsideAllocation(a) : null; View v = getViewAtPosition(e.getOffset(), r); if (v != null) v.insertUpdate(e, r, f); } /** * Called when something is removed. Overridden so that * the view factory creates WrappedLine views. */ public void removeUpdate (DocumentEvent e, Shape a, ViewFactory f) { // Update children efficiently. updateChildren(e, a); // Notify children. Rectangle r = a != null && isAllocationValid() ? getInsideAllocation(a) : null; View v = getViewAtPosition(e.getOffset(), r); if (v != null) v.removeUpdate(e, r, f); } /** * Called when the portion of the Document that this View is responsible * for changes. Overridden so that the view factory creates * WrappedLine views. */ public void changedUpdate (DocumentEvent e, Shape a, ViewFactory f) { // Update children efficiently. updateChildren(e, a); } /** * Helper method. Updates the child views in response to * insert/remove/change updates. This is here to be a little more efficient * than the BoxView implementation. * * @param ev the document event * @param a the shape */ private void updateChildren(DocumentEvent ev, Shape a) { Element el = getElement(); DocumentEvent.ElementChange ec = ev.getChange(el); if (ec != null) { Element[] removed = ec.getChildrenRemoved(); Element[] added = ec.getChildrenAdded(); View[] addedViews = new View[added.length]; for (int i = 0; i < added.length; i++) addedViews[i] = new WrappedLine(added[i]); replace(ec.getIndex(), removed.length, addedViews); if (a != null) { preferenceChanged(null, true, true); getContainer().repaint(); } } updateMetrics(); } class WrappedLineCreator implements ViewFactory { // Creates a new WrappedLine public View create(Element elem) { return new WrappedLine(elem); } } /** * Renders the <code>Element</code> that is associated with this * <code>View</code>. Caches the metrics and then calls * super.paint to paint all the child views. 
* * @param g the <code>Graphics</code> context to render to * @param a the allocated region for the <code>Element</code> */ public void paint(Graphics g, Shape a) { Rectangle r = a instanceof Rectangle ? (Rectangle) a : a.getBounds(); tabBase = r.x; JTextComponent comp = (JTextComponent)getContainer(); // Ensure metrics are up-to-date. updateMetrics(); selectionStart = comp.getSelectionStart(); selectionEnd = comp.getSelectionEnd(); selectedColor = comp.getSelectedTextColor(); unselectedColor = comp.getForeground(); disabledColor = comp.getDisabledTextColor(); selectedColor = comp.getSelectedTextColor(); lineHeight = metrics.getHeight(); g.setFont(comp.getFont()); super.paint(g, a); } /** * Sets the size of the View. Implemented to update the metrics * and then call super method. */ public void setSize (float width, float height) { updateMetrics(); if (width != getWidth()) preferenceChanged(null, true, true); super.setSize(width, height); } class WrappedLine extends View { /** Used to cache the number of lines for this View **/ int numLines = 1; public WrappedLine(Element elem) { super(elem); } /** * Renders this (possibly wrapped) line using the given Graphics object * and on the given rendering surface. */ public void paint(Graphics g, Shape s) { Rectangle rect = s.getBounds(); int end = getEndOffset(); int currStart = getStartOffset(); int currEnd; int count = 0; // Determine layered highlights. Container c = getContainer(); LayeredHighlighter lh = null; JTextComponent tc = null; if (c instanceof JTextComponent) { tc = (JTextComponent) c; Highlighter h = tc.getHighlighter(); if (h instanceof LayeredHighlighter) lh = (LayeredHighlighter) h; } while (currStart < end) { currEnd = calculateBreakPosition(currStart, end); // Paint layered highlights, if any. if (lh != null) { // Exclude trailing newline in last line. 
if (currEnd == end) lh.paintLayeredHighlights(g, currStart, currEnd - 1, s, tc, this); else lh.paintLayeredHighlights(g, currStart, currEnd, s, tc, this); } drawLine(currStart, currEnd, g, rect.x, rect.y + metrics.getAscent()); rect.y += lineHeight; if (currEnd == currStart) currStart ++; else currStart = currEnd; count++; } if (count != numLines) { numLines = count; preferenceChanged(this, false, true); } } /** * Calculates the number of logical lines that the Element * needs to be displayed and updates the variable numLines * accordingly. */ private int determineNumLines() { int nLines = 0; int end = getEndOffset(); for (int i = getStartOffset(); i < end;) { nLines++; // careful: check that there's no off-by-one problem here // depending on which position calculateBreakPosition returns int breakPoint = calculateBreakPosition(i, end); if (breakPoint == i) i = breakPoint + 1; else i = breakPoint; } return nLines; } /** * Determines the preferred span for this view along the given axis. * * @param axis the axis (either X_AXIS or Y_AXIS) * * @return the preferred span along the given axis. * @throws IllegalArgumentException if axis is not X_AXIS or Y_AXIS */ public float getPreferredSpan(int axis) { if (axis == X_AXIS) return getWidth(); else if (axis == Y_AXIS) { if (metrics == null) updateMetrics(); return numLines * metrics.getHeight(); } throw new IllegalArgumentException("Invalid axis for getPreferredSpan: " + axis); } /** * Provides a mapping from model space to view space. * * @param pos the position in the model * @param a the region into which the view is rendered * @param b the position bias (forward or backward) * * @return a box in view space that represents the given position * in model space * @throws BadLocationException if the given model position is invalid */ public Shape modelToView(int pos, Shape a, Bias b) throws BadLocationException { Rectangle rect = a.getBounds(); // Throwing a BadLocationException is an observed behavior of the RI. 
if (rect.isEmpty()) throw new BadLocationException("Unable to calculate view coordinates " + "when allocation area is empty.", pos); Segment s = getLineBuffer(); int lineHeight = metrics.getHeight(); // Return a rectangle with width 1 and height equal to the height // of the text rect.height = lineHeight; rect.width = 1; int currLineStart = getStartOffset(); int end = getEndOffset(); if (pos < currLineStart || pos >= end) throw new BadLocationException("invalid offset", pos); while (true) { int currLineEnd = calculateBreakPosition(currLineStart, end); // If pos is between currLineStart and currLineEnd then just find // the width of the text from currLineStart to pos and add that // to rect.x if (pos >= currLineStart && pos < currLineEnd) { try { getDocument().getText(currLineStart, pos - currLineStart, s); } catch (BadLocationException ble) { // Shouldn't happen } rect.x += Utilities.getTabbedTextWidth(s, metrics, rect.x, WrappedPlainView.this, currLineStart); return rect; } // Increment rect.y so we're checking the next logical line rect.y += lineHeight; // Increment currLineStart to the model position of the start // of the next logical line if (currLineEnd == currLineStart) currLineStart = end; else currLineStart = currLineEnd; } } /** * Provides a mapping from view space to model space. * * @param x the x coordinate in view space * @param y the y coordinate in view space * @param a the region into which the view is rendered * @param b the position bias (forward or backward) * * @return the location in the model that best represents the * given point in view space */ public int viewToModel(float x, float y, Shape a, Bias[] b) { Segment s = getLineBuffer(); Rectangle rect = a.getBounds(); int currLineStart = getStartOffset(); // Although calling modelToView with the last possible offset will // cause a BadLocationException in CompositeView it is allowed // to return that offset in viewToModel. 
int end = getEndOffset(); int lineHeight = metrics.getHeight(); if (y < rect.y) return currLineStart; if (y > rect.y + rect.height) return end - 1; // Note: rect.x and rect.width do not represent the width of painted // text but the area where text *may* be painted. This means the width // is most of the time identical to the component's width. while (currLineStart != end) { int currLineEnd = calculateBreakPosition(currLineStart, end); // If we're at the right y-position that means we're on the right // logical line and we should look for the character if (y >= rect.y && y < rect.y + lineHeight) { try { getDocument().getText(currLineStart, currLineEnd - currLineStart, s); } catch (BadLocationException ble) { // Shouldn't happen } int offset = Utilities.getTabbedTextOffset(s, metrics, rect.x, (int) x, WrappedPlainView.this, currLineStart); // If the calculated offset is the end of the line (in the // document (= start of the next line) return the preceding // offset instead. This makes sure that clicking right besides // the last character in a line positions the cursor after the // last character and not in the beginning of the next line. return (offset == currLineEnd) ? offset - 1 : offset; } // Increment rect.y so we're checking the next logical line rect.y += lineHeight; // Increment currLineStart to the model position of the start // of the next logical line. currLineStart = currLineEnd; } return end; } /** * <p>This method is called from insertUpdate and removeUpdate.</p> * * <p>If the number of lines in the document has changed, just repaint * the whole thing (note, could improve performance by not repainting * anything above the changes). 
       * If the number of lines hasn't changed, just repaint the given
       * Rectangle.</p>
       *
       * <p>Note that the <code>Rectangle</code> argument may be
       * <code>null</code> when the allocation area is empty.</p>
       *
       * @param a the Rectangle to repaint if the number of lines hasn't changed
       */
      void updateDamage(Rectangle a)
      {
        int nLines = determineNumLines();
        if (numLines != nLines)
          {
            numLines = nLines;
            preferenceChanged(this, false, true);
            getContainer().repaint();
          }
        else if (a != null)
          getContainer().repaint(a.x, a.y, a.width, a.height);
      }

      /**
       * This method is called when something is inserted into the Document
       * that this View is displaying.
       *
       * @param changes the DocumentEvent for the changes
       * @param a the allocation of the View
       * @param f the ViewFactory used to rebuild
       */
      public void insertUpdate(DocumentEvent changes, Shape a, ViewFactory f)
      {
        Rectangle r = a instanceof Rectangle ? (Rectangle) a : a.getBounds();
        updateDamage(r);
      }

      /**
       * This method is called when something is removed from the Document
       * that this View is displaying.
       *
       * @param changes the DocumentEvent for the changes
       * @param a the allocation of the View
       * @param f the ViewFactory used to rebuild
       */
      public void removeUpdate(DocumentEvent changes, Shape a, ViewFactory f)
      {
        // Note: This method is not called when characters from the
        // end of the document are removed. The reason for this
        // can be found in the implementation of View.forwardUpdate:
        // the document event will denote offsets which no longer exist,
        // getViewIndex() will therefore return -1, and this makes
        // View.forwardUpdate() skip this method call. However this seems
        // to cause no trouble, and as it reduces the number of method
        // calls it can stay this way.
        Rectangle r = a instanceof Rectangle ? (Rectangle) a : a.getBounds();
        updateDamage(r);
      }
    }
  }
code
Barack Hussein Obama (English: Barack Hussein Obama II) was the President of the United States.
kashmiri
After the month passes, winter comes.
kashmiri
/*
 * This file is part of the coreboot project.
 *
 * Copyright (C) 2011 Sven Schnelle <[email protected]>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <arch/io.h>
#include <cpu/x86/tsc.h>
#include "i82801jx.h"

static void store_initial_timestamp(void)
{
	/*
	 * We have two 32-bit scratchpad registers available:
	 * D0:F0 0xdc (SKPAD)
	 * D31:F2 0xd0 (SATA SP)
	 */
	tsc_t tsc = rdtsc();

	pci_write_config32(PCI_DEV(0, 0x00, 0), 0xdc, tsc.lo);
	pci_write_config32(PCI_DEV(0, 0x1f, 2), 0xd0, tsc.hi);
}

static void enable_spi_prefetch(void)
{
	u8 reg8;
	pci_devfn_t dev;

	dev = PCI_DEV(0, 0x1f, 0);

	reg8 = pci_read_config8(dev, 0xdc);
	reg8 &= ~(3 << 2);
	reg8 |= (2 << 2); /* Prefetching and Caching Enabled */
	pci_write_config8(dev, 0xdc, reg8);
}

static void bootblock_southbridge_init(void)
{
	store_initial_timestamp();
	enable_spi_prefetch();

	/* Enable RCBA */
	pci_write_config32(PCI_DEV(0, 0x1f, 0), D31F0_RCBA,
			   (uintptr_t)DEFAULT_RCBA | 1);
}
code
What he says is always printed | count that as lofty patronage | However much weight your words may carry, | they find no favour in their eyes | They say: think and write as we do, | what a miserliness of vision | Touch their feet, bow low, offer salutes, | that is something you cannot stomach | New thinking they keep at arm's length, | as though it were something cheap | It will sink the whole of society, | this thinking, a rickety boat | Let country and society go to ruin, | their own merry-making is all they care for | Their arms are long, their reach goes right to the top, | you too, 'Shyam', are somebody after all ||
hindi
Basically, compared with urban areas, an even smaller share of the population, less than 15 per cent, lives in rural communities.
kashmiri
L: 75" x W: 39" x H: 11.5"
King Koil Perfect Response Elite Windmere Plush Twin Mattress
Get a good night's sleep that's tailored to your needs with the King Koil Perfect Response Elite Windmere twin mattress. This plush mattress has an AdvantaGel™ quilt layer that helps regulate your body temperature as you sleep. Plus, latex lumbar support provides fast-acting support and pressure relief in the lower back and lumbar area.
L: 75" x W: 39" x H: 16.5"
L: 75" x W: 39" x H: 20.5"
We like the mattress... it is very comfortable. I have slept very well since we got the new mattress.
Great mattress. However, we did not realize until after delivery that the mattress did not need the additional box spring. Could have saved $100.
A very plush mattress; molds to your body when you lie down, in a good way.
english
प्रश्न २४: तुम यह प्रमाण देते हो कि सर्वशक्तिमान परमेश्वर देहधारी परमेश्वर है, जो अभी अंतिम दिनों में अपने न्याय के कार्य को कर रहा है। लेकिन धार्मिक पादरियों और प्राचीन लोगों का कहना है कि सर्वशक्तिमान परमेश्वर का कार्य वास्तव में मनुष्य का काम है, और बहुत से लोग जो प्रभु यीशु पर विश्वास नहीं करते, वे कहते हैं कि ईसाई धर्म एक मनुष्य में विश्वास है। हम अभी भी यह नहीं समझ पाए हैं कि परमेश्वर के कार्य और मनुष्य के काम के बीच में क्या अंतर है, इसलिए कृपया हमारे लिए यह सहभागिता करो। | राज्य के अवरोहण का सुसमाचार
परमेश्वर का कार्य और मनुष्य का कार्य निश्चित रूप से अलग हैं। अगर हम सावधानी से जाँच करें तो हम लोग इसे देख पाएँगे। मिसाल के तौर पर, अगर हम प्रभु यीशु के कथनों और कार्य को देखें और फिर प्रेरितों के कथनों और कार्य पर नज़र डालें, तो हम कह सकते हैं कि अंतर एकदम स्पष्ट है। प्रभु यीशु द्वारा कहा गया हर वचन सत्य है और उसमें अधिकार है, और बहुत से रहस्यों को प्रकट कर सकता है। ये सभी ऐसी चीज़ें हैं जिन्हें मनुष्यजाति कभी नहीं कर सकती है। यही वजह है कि बहुत से लोग हैं जो प्रभु यीशु का अनुसरण करते हैं। हालाँकि, प्रेरित सिर्फ सुसमाचार को प्रसारित कर सकते हैं, परमेश्वर की गवाही दे सकते हैं और कलीसिया को आपूर्ति कर सकते हैं। परिणाम सभी बहुत सीमित होते हैं। परमेश्वर के कार्य और मनुष्य के कार्य के बीच अंतर एकदम साफ है। फिर लोग इसकी सच्ची प्रकृति का पता क्यों नहीं लगा पाते हैं? क्या कारण है? 
ऐसा इसलिए है क्योंकि भ्रष्ट मनुष्यजाति परमेश्वर को नहीं जानती है और उसमें किसी तरह का कोई सत्य नहीं है। इसलिए, हममें इसका परिणाम यह होता है कि हम परमेश्वर के कार्य और इंसान के कार्य के बीच अंतर नहीं कर पाते हैं, और इससे देहधारी परमेश्वर के कार्य को आसानी से इंसान के कार्य, और हमारे पसंदीदा व्यक्ति के कार्य को और दुष्ट आत्माओं के कार्य को, झूठे मसीहों और झूठे नबियों के कार्य को स्वीकार और अनुसरण करने लायक परमेश्वर का कार्य मान लेना आसान बन जाता है। ये सच्चे मार्ग से डिगना और परमेश्वर का विरोध करना है, और इसे मनुष्य को अत्यधिक स्नेह करना, शैतान का अनुसरण और शैतान की आराधना करना माना जाता है। यह परमेश्वर के स्वभाव के खिलाफ एक गंभीर अपराध है और परमेश्वर द्वारा शापित किया जाएगा। इस तरह के लोग बचाए जाए का मौका गँवा देंगे। यही कारण है कि यह प्रश्न हमारे सच्चे मार्ग की जाँच करने और अंत के दिनों के परमेश्वर के कार्य को जानने के लिये बहुत महत्वपूर्ण है। बाहर से देखने पर, देहधारी परमेश्वर का कार्य और परमेश्वर द्वारा उपयोग किए जाने वाले लोगों का कार्य, दोनों ही ऐसे लगते हैं जैसे कि इंसान ही कार्य कर रहा हो और बोल रहा हो। लेकिन उनके सार और उनके कार्य की प्रकृति में ज़मीन-आसमान का अंतर है। आज सर्वशक्तिमान परमेश्वर ने आकर सारे सत्य और रहस्य प्रकट कर दिए हैं और परमेश्वर के कार्य और इंसान के कार्य में अंतर को प्रकट कर दिया है। अब जाकर हमें परमेश्वर के कार्य और इंसान के कार्य का ज्ञान हुआ है और उसको समझ पाए हैं। आइए सर्वशक्तिमान परमेश्वर के वचनों पर नज़र डालें। सर्वशक्तिमान परमेश्वर कहते हैं, "स्वयं परमेश्वर के कार्य में सम्पूर्ण मनुष्यजाति का कार्य समाविष्ट है, और यह सम्पूर्ण युग के कार्य का भी प्रतिनिधित्व करता है। कहने का तात्पर्य है कि, परमेश्वर का स्वयं का कार्य पवित्र आत्मा के सभी कार्य की चाल और प्रवृत्ति का प्रतिनिधित्व करता है, जबकि प्रेरितों का कार्य परमेश्वर के स्वयं के कार्य का अनुसरण करता है और युग की अगुवाई नहीं करता है, न ही यह सम्पूर्ण युग में पवित्र आत्मा के कार्य करने की प्रवृत्ति का प्रतिनिधित्व करता है। वे केवल वही कार्य करते हैं जो मनुष्य को अवश्य करना चाहिए, जो प्रबंधन कार्य को बिलकुल भी समाविष्ट नहीं करता है। परमेश्वर का स्वयं का कार्य प्रबंधन कार्य के भीतर एक 
परियोजना है। मनुष्य का कार्य केवल उपयोग किए जा रहे मनुष्यों का कर्तव्य है और इसका प्रबंधन कार्य से कोई सम्बन्ध नहीं है" ("वचन देह में प्रकट होता है" में "परमेश्वर का कार्य और मनुष्य का कार्य")। "देहधारी परमेश्वर का कार्य एक नये विशेष युग का आरम्भ करता है, और जो लोग उसके कार्य को निरन्तर जारी रखते हैं वे ऐसे मनुष्य हैं जिन्हें उसके द्वारा उपयोग किया जाता है। मनुष्य के द्वारा किया गया समस्त कार्य देहधारी परमेश्वर की सेवकाई के भीतर होता है, और वह इस दायरे के परे जाने में असमर्थ होता है। यदि देहधारी परमेश्वर अपने कार्य को करने के लिए नहीं आए, तो मनुष्य पुराने युग का समापन करने और नए विशेष युग की शुरुआत करने में समर्थ नहीं है। मनुष्य के द्वारा किया गया कार्य मात्र उसके कर्तव्य के दायरे के भीतर होता है जो मानवीय रूप से संभव है, और परमेश्वर के कार्य का प्रतिनिधित्व नहीं करता है। केवल देहधारी परमेश्वर ही आकर उस कार्य को पूरा कर सकता है जो उसे करना चाहिए, और उसके अलावा, कोई भी उसकी ओर से इस कार्य को नहीं कर सकता है। निस्संदेह, मैं जिस बारे में बात करता हूँ वह देधारण के कार्य के सम्बन्ध में है" ("वचन देह में प्रकट होता है" में "भ्रष्ट मनुष्यजाति को देहधारी परमेश्वर के उद्धार की अधिक आवश्यकता है")। "जो देहधारी परमेश्वर है, वह परमेश्वर का सार धारण करेगा, और जो देहधारी परमेश्वर है, वह परमेश्वर की अभिव्यक्ति धारण करेगा। चूँकि परमेश्वर देहधारी हुआ, वह उस कार्य को प्रकट करेगा जो उसे अवश्य करना चाहिए, और चूँकि परमेश्वर ने देह धारण किया, तो वह उसे अभिव्यक्त करेगा जो वह है, और मनुष्यों के लिए सत्य को लाने में समर्थ होगा, मनुष्यों को जीवन प्रदान करने, और मनुष्य को मार्ग दिखाने में सक्षम होगा। जिस शरीर में परमेश्वर का सार नहीं है, निश्चित रूप से वह देहधारी परमेश्वर नहीं है; इस बारे में कोई संदेह नहीं है। ... "... 
देहधारी परमेश्वर के वचन एक नया युग आरंभ करते हैं, समस्त मानवजाति का मार्गदर्शन करते हैं, रहस्यों को प्रकट करते हैं, और एक नये युग में मनुष्य को दिशा दिखाते हैं। मनुष्य द्वारा प्राप्त की गई प्रबुद्धता मात्र एक आसान अभ्यास या ज्ञान है। वह समस्त मानवजाति को एक नये युग में मार्गदर्शन नहीं दे सकती है या स्वयं परमेश्वर के रहस्य को प्रकट नहीं कर सकती है। परमेश्वर आखिरकार परमेश्वर है, और मनुष्य मनुष्य ही है। परमेश्वर में परमेश्वर का सार है और मनुष्य में मनुष्य का सार है" ("वचन देह में प्रकट होता है" के लिए प्रस्तावना)। "देहधारी परमेश्वर उन लोगों से मौलिक रूप से भिन्न है जो परमेश्वर द्वारा उपयोग किए जाते हैं। देहधारी परमेश्वर दिव्यता का कार्य करने में समर्थ है, जबकि परमेश्वर के द्वारा उपयोग किए जाने वाले लोग समर्थ नहीं हैं। प्रत्येक युग के आरंभ में, नया युग शुरू करने के लिए और मनुष्य को एक नई शुरुआत में लाने के लिए परमेश्वर का आत्मा व्यक्तिगत रूप से बोलता है। जब वह वचन बोलना पूरा कर लेता है, तो यह प्रकट करता है कि परमेश्वर की दिव्यता के भीतर उसका कार्य पूरा हो गया है। उसके बाद, सभी लोग अपने जीवन अनुभव में प्रवेश करने के लिए उन लोगों के पदचिह्नों का अनुसरण करते हैं जो परमेश्वर के द्वारा उपयोग किए जाते हैं" ("वचन देह में प्रकट होता है" में "देहधारी परमेश्वर और परमेश्वर द्वारा उपयोग किए गए लोगों के बीच महत्वपूर्ण अंतर")। "जो कुछ परमेश्वर प्रकट करता है परमेश्वर स्वयं वही है, और यह मनुष्य की पहुँच से परे, अर्थात्, मनुष्य की सोच से परे है। वह सम्पूर्ण मानवजाति की अगुवाई करने के अपने कार्य को व्यक्त करता है, और यह मानव अनुभव के विवरणों के प्रासंगिक नहीं है, बल्कि इसके बजाए यह उसके अपने प्रबंधन से सम्बन्धित है। मनुष्य अपने अनुभव को व्यक्त करता है, जबकि परमेश्वर अपने अस्तित्व को व्यक्त करता हैयह अस्तित्व उसका अंतर्निहित स्वभाव है और यह मनुष्य की पहुँच से परे है। मनुष्य का अनुभव उसका देखना और परमेश्वर की अपने अस्तित्व की अभिव्यक्ति के आधार पर प्राप्त किया गया उसका ज्ञान है। ऐसा देखना और ज्ञान मनुष्य का अस्तित्व कहलाता है। ये मनुष्य के अंतर्निहित स्वभाव और उसकी वास्तविक क्षमता के आधार पर व्यक्त होते हैं; इसलिए इन्हें मनुष्य का अस्तित्व भी कहा जाता है। ...देहधारी परमेश्वर के देह द्वारा 
कहे गए वचन पवित्रात्मा की प्रत्यक्ष अभिव्यक्ति हैं और उस कार्य को अभिव्यक्त करते हैं जो पवित्रात्मा के द्वारा किया गया है। देह ने इसे अनुभव किया या देखा नहीं है, परन्तु तब भी उसके अस्तित्व को व्यक्त करता है क्योंकि देह का सार पवित्रात्मा है, और वह पवित्रात्मा के कार्य को व्यक्त करता है" ("वचन देह में प्रकट होता है" में "परमेश्वर का कार्य और मनुष्य का कार्य")। "जिस कार्य को परमेश्वर करता है वह उसके देह के अनुभव का प्रतिनिधित्व नहीं करता है; जिस कार्य को मनुष्य करता है वह मनुष्य के अनुभव का प्रतिनिधित्व करता है। हर कोई अपने व्यक्तिगत अनुभव के बारे में बात करता है। परमेश्वर सीधे तौर पर सत्य को व्यक्त कर सकता है, जबकि मनुष्य केवल सत्य का अनुभव करने के पश्चात् ही तदनुरूप अनुभव को व्यक्त कर सकता है। परमेश्वर के कार्य में कोई नियम नहीं हैं और यह समय या भौगोलिक अवरोधों के अधीन नहीं है। वह जो है उसे वह किसी भी समय, कहीं पर भी प्रकट कर सकता है। जो वह है उसे वह किसी भी समय और किसी भी स्थान पर व्यक्त कर सकता है। उसे जैसा अच्छा लगता है वह वैसा ही करता है। मनुष्य के कार्य में परिस्थितियाँ और सन्दर्भ होते हैं; अन्यथा, वह कार्य करने में असमर्थ होता है और वह परमेश्वर के बारे में अपने ज्ञान को व्यक्त करने या सत्य के बारे में अपने अनुभव को व्यक्त करने में असमर्थ होता है। यह बताने के लिए कि क्या यह परमेश्वर का अपना कार्य है या मनुष्य का कार्य है तुम्हें बस उनके बीच अन्तर की तुलना करनी है" ("वचन देह में प्रकट होता है" में "परमेश्वर का कार्य और मनुष्य का कार्य")। "अगर मनुष्य को यह कार्य करना पड़ता, तो यह बहुत ही सीमित होता; यह मनुष्य को एक निश्चित स्थिति तक ले जा सकता था, किन्तु यह मनुष्य को शाश्वत मंज़िल पर ले जाने में सक्षम नहीं होता। मनुष्य की नियति को निर्धारित करने में मनुष्य समर्थ नहीं है, इसके अतिरिक्त, न ही वह मनुष्य के भविष्य की संभावनाओं और भविष्य की मंज़िल को सुनिश्चित करने में समर्थ है। हालाँकि, परमेश्वर के द्वारा किया गया कार्य भिन्न होता है। चूँकि उसने मनुष्य को सृजित किया है, इसलिए वह मनुष्य की अगुवाई करता है; चूँकि वह मनुष्य को बचाता है, इसलिए वह उसे पूरी तरह से बचाएगा, और उसे पूरी तरह प्राप्त करेगा; चूँकि वह मनुष्य की अगुवाई करता है, इसलिए वह उसे उस उपयुक्त मंज़िल पर ले 
जाएगा, और चूँकि उसने मनुष्य को सृजित किया है और उसका प्रबंध करता है, इसलिए उसे मनुष्य के भाग्य और उसकी भविष्य की संभावनाओं की ज़िम्मेदारी लेनी होगी। यही वह कार्य है जिसे सृजनकर्ता के द्वारा किया जाता है" ("वचन देह में प्रकट होता है" में "मनुष्य के सामान्य जीवन को पुनःस्थापित करना और उसे एक अद्भुत मंज़िल पर ले जाना")। सर्वशक्तिमान परमेश्वर के वचनों ने परमेश्वर के कार्य और इंसान के कार्य के बीच के अंतर को एकदम साफ कर दिया है। चूँकि देहधारी परमेश्वर का सार और परमेश्वर द्वारा उपयोग किए जाने वाले इंसान का सार अलग-अलग होता है, इसलिए वे जो काम करते हैं वह भी बहुत अलग होता है। परमेश्वर बाहर से एक साधारण, सामान्य व्यक्ति दिखाई देता है, लेकिन वह देह में साकार हुआ परमेश्वर का आत्मा है। इसलिए उसमें एक दिव्य सार है, उसमें परमेश्वर का अधिकार, सामर्थ्य, सर्वशक्तिमत्ता और बुद्धि है। इसलिए देहधारी परमेश्वर अपने कार्य में सीधे सत्य को व्यक्त कर सकता है और परमेश्वर के धार्मिक स्वभाव को और अपने स्वरूप को व्यक्त कर सकता है, और एक नए युग का आरंभ और पुराने युग का अंत कर सकता है, परमेश्वर के इरादों और मनुष्यजाति से परमेश्वर की अपेक्षाओं को व्यक्त करते हुए, परमेश्वर की प्रबंधन योजना के सभी रहस्यों को प्रकट कर सकता है। देहधारी परमेश्वर द्वारा व्यक्त सभी वचन सत्य हैं और इंसान का जीवन बन सकते हैं और इंसान के जीवन स्वभाव को बदल सकते हैं। परमेश्वर का कार्य मनुष्य को जीत सकता है, उसे शुद्ध कर सकता है इंसान को शैतान के प्रभाव से बचा सकता है, और मनुष्यजाति को एक ख़ूबसूरत मंज़िल तक ले जा सकता है। ऐसे कार्य का प्रभाव कुछ ऐसा है जिसे कोई इंसान नहीं कर सकता है। देहधारी परमेश्वर का कार्य स्वयं परमेश्वर का कार्य है, और उनकी जगह कोई नहीं ले सकता है। दूसरी तरफ, परमेश्वर द्वारा उपयोग किए जाने वाले मनुष्य का सार मनुष्य है। उनमें केवल मानवता होती है और उनमें मसीह का दिव्य सार नहीं होता है, इसलिए वे सत्य या परमेश्वर के स्वभाव को और उसके स्वरूप को व्यक्त नहीं कर सकते हैं। वे परमेश्वर के कथनों और कार्य की बुनियाद पर परमेश्वर के वचनों के अपने निजी ज्ञान का ही संवाद कर सकते हैं, या अपने अनुभवों और गवाही के बारे में बात कर सकते हैं। उनका सारा ज्ञान और गवाही परमेश्वर के वचनों की उनकी व्यक्तिगत समझ को दर्शाती है। 
उनका ज्ञान कितना भी ऊँचा या उनके वचन कितने भी सटीक क्यों न हों, वे जो कुछ भी कहते हैं उसे सत्य नहीं कहा जा सकता है, और इसके अलावा उन्हें परमेश्वर के वचन तो नहीं कहा जा सकता है, इसलिये ये बातें इंसान का जीवन नहीं हो सकती हैं इंसान को सिर्फ़ सहायता, आपूर्ति, समर्थन और नसीहत दे सकती हैं, इंसान को शुद्ध करने, बचाने और सिद्ध बनाने के परिणाम हासिल नहीं कर सकती हैं। इसलिए परमेश्वर द्वारा उपयोग किया जाने वाला इंसान स्वयं परमेश्वर का कार्य नहीं कर सकता है और मनुष्य के कर्तव्यों को पूरा करने के लिए केवल परमेश्वर के साथ समन्वय कर सकता है। जब परमेश्वर के कार्य और मनुष्य के कार्य में अंतर की बात आती है, तो हम इस बात को हर किसी के लिए और ज़्यादा स्पष्ट करने के लिए एक वास्तविक उदाहरण दे सकते हैं। अनुग्रह के युग में, प्रभु यीशु ने स्वर्ग के राज्य के रहस्यों को प्रकट करते हुए, प्रायश्चित के मार्ग का उपदेश दिया, "मन फिराओ क्योंकि स्वर्ग का राज्य निकट आया है," और इंसान से उसके पापों को स्वीकार करवाते हुए, प्रायश्चित करवाते हुए और मनुष्यों के पापों को क्षमा करते हुए, लोगों को अपराध और कानून के अभिशाप से मुक्त करवाते हुए, उसे मनुष्य के लिए पापबलि के रूप में सूली पर चढ़ा दिया गया ताकि हम परमेश्वर से प्रार्थना करने और उसके साथ संगति करने के लिए उनके सामने आने की योग्यता हासिल कर सकें, परमेश्वर का समृद्ध अनुग्रह और सत्य पा सकें, हमें परमेश्वर के प्रेममय और दयालु स्वभाव को देखने दिया। प्रभु यीशु के कार्य ने अनुग्रह के युग का आरंभ और व्यवस्था के युग का अंत किया। यह अनुग्रह के युग में परमेश्वर के कार्य का हिस्सा है। जब प्रभु यीशु ने अपना कार्य पूरा कर लिया उसके बाद, उसके प्रेरितों ने परमेश्वर के कथनों और कार्य की बुनियाद पर, उसके चुने हुए लोगों की प्रभु यीशु के वचनों का अनुभव करने और अभ्यास करने में अगुवाई की, उसके उद्धार की गवाही को फैलाया और पूरी धरती पर मनुष्यजाति को छुटकारा दिलाने वाले उसके सुसमाचार का प्रचार किया। यह अनुग्रह के युग में प्रेरितों का कार्य है और साथ ही परमेश्वर के द्वारा उपयोग किये गए लोगों का कार्य है। इससे हम यह देख पाते हैं कि प्रभु यीशु के कार्य और प्रेरितों के कार्य के सार में अंतर है। अंत के दिनों में सर्वशक्तिमान परमेश्वर ने मनुष्यजाति को शुद्ध करने और बचाने के 
लिए, मनुष्य को परमेश्वर के धार्मिक, प्रतापी, कोपपूर्ण, अपमान न किये जा सकने योग्य स्वभाव के दर्शन कराते हुए मनुष्यजाति को शैतान की भ्रष्टता और असर से पूरी तरह से बचाने के लिए, सभी सत्य व्यक्त किये, परमेश्वर की ६,००० साल की प्रबंधन योजना के सभी रहस्यों को प्रकट किया, परमेश्वर के घर से आरंभ होने वाला न्याय का कार्य किया, ताकि भ्रष्ट मनुष्यजाति पाप-मुक्त हो सके, शुद्धता प्राप्त कर सके, और परमेश्वर द्वारा प्राप्त की जा सके। सर्वशक्तिमान परमेश्वर के कार्य ने राज्य के युग का आरंभ और अनुग्रह के युग का अंत किया। यह राज्य के युग के लिए परमेश्वर का कार्य है। सर्वशक्तिमान परमेश्वर के कार्य और वचनों की बुनियाद पर, परमेश्वर द्वारा उपयोग किए गए इंसान का कार्य परमेश्वर के चुने हुए लोगों को सींचना है और उनकी चरवाही करना है, उन्हें परमेश्वर के वचनों की वास्तविकता में प्रवेश करने और परमेश्वर में विश्वास करने के लिए सही मार्ग में प्रवेश करने की ओर उनकी अगुवाई करना है, और राज्य के उतरने के सर्वशक्तिमान परमेश्वर के सुसमाचार को फैलाना और उसकी गवाही देना है। यह राज्य के युग में परमेश्वर द्वारा उपयोग किए जाने वाले व्यक्ति का कार्य है। इससे हम यह देख पाते हैं कि दोनों बार जब परमेश्वर ने देहधारण किया तो परमेश्वर का कार्य एक युग का आरंभ करना और एक युग का अंत करना है। उसका कार्य समस्त मानवजाति की ओर प्रवृत्त है और यह सब परमेश्वर की प्रबंधन योजना को पूरा करने के लिए कार्य का एक चरण है। यह ठीक मनुष्यजाति को छुड़ाने और बचाने का कार्य है। दोनों बार परमेश्वर के देहधारण करने के कार्य से यह सत्यापित होता है कि मनुष्यजाति को शुद्ध करने और बचाने के अपने कार्य में सिर्फ़ परमेश्वर ही सत्य को व्यक्त कर सकता है। कोई भी इंसान परमेश्वर का कार्य नहीं कर सकता। केवल देहधारी परमेश्वर ही परमेश्वर का कार्य कर सकते हैं। इसलिए दोनों बार जब परमेश्वर देहधारी बना, तो उसने गवाही दी कि मसीह ही सत्य, मार्ग और जीवन हैं। देहधारी परमेश्वर के अलावा, और कोई स्वयं परमेश्वर का कार्य नहीं कर सकता। वे न तो नए युग का आरंभ कर सकते हैं, न पुराने युग का अंत कर सकते हैं और इसके अलावा न ही मनुष्यजाति को बचा सकते हैं। परमेश्वर द्वारा उपयोग किए जाने वाले इंसान का कार्य परमेश्वर के चुने हुए लोगों की अगुवाई और चरवाही करने के लिए और इंसान 
का कर्तव्य पूरा करने के लिये, केवल परमेश्वर के कार्य के साथ समन्वय कर सकता है। चाहे इंसान ने कितने भी साल काम क्यों न किया हो या कितने ही वचन क्यों न बोले हों, या बाहर से उसके कार्य कितने ही विशाल क्यों न प्रतीत होते हों, इसका सार है कि समस्त इंसान का कार्य है। यह एक सच्चाई है। देहधारी परमेश्वर के कार्य में और परमेश्वर द्वारा उपयोग किए गए इंसानों के काम में यही मुख्य अंतर है। सर्वशक्तिमान परमेश्वर के वचनों ने हमें परमेश्वर के कार्य और मनुष्य के कार्य में मूलभूत अंतर का एहसास कराया है। हमें अब जाकर समझ में आया है कि जब देहधारी परमेश्वर कार्य करता है तो वह सत्य को व्यक्त कर सकता है, परमेश्वर के स्वभाव और परमेश्वर के स्वरूप को व्यक्त कर सकता है। अगर हम परमेश्वर के कार्य को स्वीकार कर लें और अनुभव कर लें तो हम सत्य को समझ पाएँगे, और परमेश्वर के पवित्र और धार्मिक स्वभाव को, परमेश्वर के सार को, मनुष्यजाति को बचाने के परमेश्वर के इरादों को, मनुष्यजाति को बचाने के परमेश्वर के तरीकों को, और मनुष्यजाति के लिए परमेश्वर के प्यार को ज़्यादा से ज़्यादा समझ पाएँगे। साथ ही हम शैतान द्वारा हमें भ्रष्ट किए जाने के सार, प्रकृति और सत्य की समझ को भी पा लेंगे। इस तरह, हमारा भ्रष्ट स्वभाव शुद्धिकरण और परिवर्तन हासिल कर सकता है, हम परमेश्वर के प्रति सच्ची आज्ञाकारिता और भय मानना पैदा कर सकते हैं, और परमेश्वर द्वारा उद्धार को पा सकते हैं। हालाँकि मनुष्य का कार्य और परमेश्वर का कार्य पूरी तरह से अलग हैं। क्योंकि इंसान सत्य को व्यक्त नहीं कर सकता है और परमेश्वर के वचनों के बारे में केवल अपने निजी अनुभव और ज्ञान की चर्चा ही कर सकता है, भले ही ये सत्य के अनुरूप हों, इसलिए ये केवल परमेश्वर के चुने हुए लोगों का मार्गदर्शन, चरवाही, समर्थन और सहायता ही कर सकते हैं। इससे पता चलता है कि अगर यह परमेश्वर द्वारा अनुमोदित व्यक्ति है, तो उसका काम सिर्फ परमेश्वर के कार्य के साथ समन्वय करना और इंसान का कर्तव्य पूरा करना है। अगर यह व्यक्ति परमेश्वर द्वारा उपयोग नहीं किया गया है, पवित्र आत्मा के कार्य से रहित व्यक्ति है, तो वह ऐसा इंसान है जो मनुष्य की योग्यता, प्रतिभा और प्रसिद्धि का उत्कर्ष करने वाला व्यक्ति है। अगर ऐसे लोग बाइबल की व्याख्या भी करते हैं, तो वे बाइबल में मनुष्य के वचनों का ही उत्कर्ष करते 
हैं, परमेश्वर के वचनों को अप्रासंगिक बना देते हैं और परमेश्वर के वचनों को बदलने के लिए मनुष्य के वचनों का उपयोग करते हैं। ऐसे लोगों का काम फरीसियों का कार्य है और परमेश्वर का विरोध करने का कार्य है। इंसानों का काम ख़ास तौर से इन दो अलग-अलग स्थितियों में आता है। चाहे कुछ भी हो, इंसान के कार्य और परमेश्वर के कार्य के बीच सबसे बड़ा अंतर यह है: अगर यह सिर्फ़ इंसान का कार्य है, तो ये इंसानों को शुद्ध करने और बचाने के परिणाम हासिल नहीं कर सकता है। केवल परमेश्वर ही अपने कार्य में सत्य को व्यक्त करने में सक्षम है और केवल उसका कार्य की इंसान को शुद्ध करने और बचाने के परिणाम हासिल कर सकता है। यह एक सच्चाई है। हम जिस मुख्य बात की यहाँ चर्चा कर रहे हैं, वह है परमेश्वर के कार्य और परमेश्वर द्वारा उपयोग किए गए लोगों के कार्य के बीच अंतर। परमेश्वर द्वारा उपयोग नहीं किए गए उन धार्मिक अगुवों का कार्य एक अन्य मामला है। जब परमेश्वर के कार्य और इंसान के काम में इतना साफ़ अंतर है, तो फिर भी हम परमेश्वर पर विश्वास करने के साथ-साथ इंसान की पूजा और उसका अनुसरण क्यों करते हैं? अभी भी इतने सारे ऐसे लोग क्यों हैं जो उन प्रसिद्ध आध्यात्मिक हस्तियों और धार्मिक अगुवाओं के कार्यो को परमेश्वर के कार्यों की तरह मानते हैं जिनकी वे आराधना करते हैं? ऐसे लोग क्यों हैं जो झूठे मसीहों और दुष्ट आत्माओं के धोखे तक को परमेश्वर का कार्य मानते हैं? 
उसकी वजह यह है कि हमारे पास सत्य नहीं है और हम परमेश्वर के कार्य और इंसान के काम में अंतर नहीं कर सकते। हम देहधारी परमेश्वर और इंसान के सार को नहीं जानते हैं, हम नहीं जानते कि कैसे पहचाने कि सत्य क्या है और कौनसी बात सत्य के अनुरूप है। हम परमेश्वर की वाणी और इंसान के कथनों में अंतर नहीं कर सकते हैं, इसके साथ ही, हमें शैतान द्वारा भ्रष्ट कर दिया गया है और हम सब ज्ञान और योग्यता की आराधना करते हैं, इसलिए बाइबल के उस ज्ञान को, धार्मिक सिद्धांतों को और आध्यात्मिक मतों को सत्य मान लेना बड़ा आसान होता है जो इंसान से आता है। इन बातों को स्वीकार करना जो सत्य नहीं हैं और जो इंसान से आती हैं, वे बातें हमारे ज्ञान को बढ़ाने का काम तो कर सकता है, किन्तु ये हमारे जीवन को किसी भी तरह की आपूर्ति प्रदान नहीं करती हैं, और इसके अलावा, परमेश्वर को जानने और परमेश्वर का आदर करने का प्रभाव हासिल नहीं कर सकती हैं। इस सच्चाई को नकारा नहीं जा सकता है। इसलिए, इंसान चाहे कितना भी काम क्यों न कर ले, कितने भी वचन क्यों न बोल ले, कितने भी समय तक काम क्यों न कर ले, या कितना भी बड़ा काम क्यों न कर ले, यह इंसान को शुद्ध करने और बचाने का परिणाम हासिल नहीं कर सकता है। इंसान का जीवन नहीं बदलेगा। इससे ये ज़ाहिर होता है कि इंसान का काम परमेश्वर के कार्य की जगह नहीं ले सकता है। सिर्फ़ परमेश्वर का कार्य ही इंसान को बचा सकता है। परमेश्वर के कार्य की अवधि कितनी भी छोटी क्यों न हो, उसके द्वारा बोले गए वचन चाहे कितने ही कम क्यों न हों, ये किसी युग का आरंभ और अंत कर सकते हैं, और मानवजाति को छुड़ाने और बचाने के परिणामों को हासिल कर सकते हैं। परमेश्वर और इंसान के कार्य के बीच यह एकदम साफ़ अंतर है। परमेश्वर और इंसान के कार्य के बीच अंतर को समझने के बाद ही हम आँख बंद करके इंसान की आराधना और उसका अनुसरण नहीं करेंगे, और झूठे मसीहों और मसीहविरोधियों के धोखों में और नियंत्रण को समझ और अस्वीकार कर पाएँगे। इस तरह, हम अंत के दिनों के सर्वशक्तिमान परमेश्वर के कार्य को स्वीकार और उसका पालन कर पाएँगे, और परमेश्वर से उद्धार पाने के लिए परमेश्वर के न्याय और शुद्धिकरण को हासिल कर पाएँगे। अगर इन्सान परमेश्वर के कार्य और मनुष्य के काम में अंतर न कर सकते हैं, तो हम झूठे मसीहों और मसीहविरोधियों के धोखों और नियंत्रण से नहीं निकल 
पाएँगे। इस तरह के लोग परमेश्वर में नाममात्र को विश्वास करते हैं, बल्कि वास्तव में इन्सान में विश्वास करते हैं, इन्सान का अनुसरण करते हैं और इन्सान की आराधना करते हैं; वे मूर्तियों की पूजा कर रहे हैं। यह परमेश्वर का विरोध करना और परमेश्वर के साथ विश्वासघात करना है। अगर हम अभी भी अपने मार्ग की त्रुटि का एहसास करने से इनकार करते हैं, तो आख़िरकार परमेश्वर के स्वभाव को नाराज़ करने के कारण परमेश्वर द्वारा हमें शाप दिया जाएगा और हटा दिया जाएगा। अगर हम परमेश्वर के कार्य और इंसान के कार्य में अंतर न कर सकते हैं, या उन लोगों के कार्य में जिनका उपयोग परमेश्वर करते हैं और उन पाखंडी फरीसियों के कार्य में अंतर न कर सकते हैं, तो हम इंसानों की पूजा करने और उनका अनुसरण करने की ओर प्रवृत्त होंगे, और आसानी से सच्चे मार्ग से भटक जाएँगे! यह ठीक वैसा ही होगा जैसा जब प्रभु यीशु अपना कार्य करने आए थे, और यहूदी धर्म में परमेश्वर के चुने हुए लोग उन्हें त्यागकर पाखंडी फरीसियों का अनुसरण करते थे, और उसका परित्याग करते थे। अंत के दिनों में, सर्वशक्तिमान परमेश्वर न्याय का कार्य करता है। धार्मिक जगत में, पादरी और अगुवा और आधुनिक-दिनों-के फरीसी, बहुत से लोगों को धोखा देते हैं, काबू में कर लेते हैं, उन्हें कैद कर लेते हैं, अंत के दिनों के मसीह का परित्याग करने के लिए प्रेरित करते हैं। यह एक गंभीर सबक है जो हमें अवश्य सीखना चाहिए। परमेश्वर का अनुसरण करने के लिए, हमें धार्मिक अगुवाओं और पाखंडी फरीसियों के सार को समझने में अवश्य समर्थ होना चाहिए। वे अपनी योग्यताओं और प्रतिभा के माध्यम से कार्य करते हैं, बाइबल की अपनी स्वयं की आवधारणा, कल्पना और तर्क के अनुसार व्याख्या करते हैं। दरअसल वे धर्मशास्त्र और बाइबल के सिद्धांतों के उपदेश देते हैं। वे बाइबल में दिए गए परमेश्वर के वचनों का उत्कर्ष करने और उनकी गवाही देने की बजाय, बाइबल में दिये इंसान के वचनों की व्याख्या करते हैं। वे प्रभु यीशु के वचनों की जगह इंसान के वचनों का उपयोग करके प्रभु को मात्र कठपुलती बना देते हैं। ऐसा कार्य परमेश्वर की इच्छा के बिल्कुल उलट है। फरीसियों के परमेश्वर के विरोध का मूल यहीं निहित है। तमाम धार्मिक लोग फरीसियों की अगुवाई और चरवाही में पड़ते हैं और वे आँख बंद करके अनुसरण करते हैं। वे बरसों तक परमेश्वर का अनुसरण करते हैं, 
लेकिन उन्हें सत्य या जीवन का कोई पोषण कभी नहीं मिलता है। ज़्यादा से ज़्यादा, वे केवल बाइबल और धार्मिक सिद्धांतों का ज्ञान हासिल करने की ही उम्मीद कर सकते हैं। वे अधिक से अधिक घमंडी, आत्म-तुष्ट और घृणित स्वभाव के होते जाते हैं, और उनमें परमेश्वर के लिये ज़रा भी श्रद्धा नहीं होती है। धीरे-धीरे, उनके हृदय से परमेश्वर का स्वभाव दूर हो जाता है, और उनके अनजाने में, वे परमेश्वर से ठीक विपरीत फरीसियों के मार्ग पर चल पड़ते हैं। विशेष रूप से, ऐसे बहुत से धार्मिक अगुवा और हस्तियाँ हैं जो संदर्भ से परे जाकर बाइबल की ग़लत व्याख्या करते हैं, लोगों को धोखा देने, क़ैद करने और फँसाने के लिये ऐसे पाखंड और भ्रांतियाँ फैलाते हैं जो मनुष्य की अवधारणाओं और कल्पनाओं से मेल खाती हैं और उनकी महत्वाकांक्षाओं और इच्छाओं को पूरा करती हैं। बहुत से लोग इस पाखंडों और भ्रांतियों को परमेश्वर का वचन और सत्य मान लेते हैं। वे ग़लत रास्ते पर ले जाए जाते हैं। ये धार्मिक अगुवा और तथाकथित प्रसिद्ध हस्तियाँ वास्तव में ऐसे मसीह-विरोधी हैं जिन्हें परमेश्वर ने अंत के दिनों में अपने न्याय के कार्य के माध्यम से उजागर किया है। ये तथ्य इस बात को साबित करने के लिए पर्याप्त हैं कि इन तथाकथित धार्मिक अगुवाओं और आध्यात्मिक लोगों का कार्य पवित्र आत्मा के कार्य से नहीं आता है। बल्कि ये मात्र फरीसी और मसीह-विरोधी हैं जो हमें धोखा दे और नुकसान पहुँचा रहे हैं। ये सब परमेश्वर के विरोध में खड़े होकर उसके साथ विश्वासघात करते हैं। यही लोग हैं जो परमेश्वर को एक बार और सूली पर चढ़ाते हैं, और परमेश्वर ने इन्हें शाप दिया है! 
There are three main differences between God's work and man's work. The first difference is that God's work involves beginning and ending ages. Thus, His work is directed at all of mankind; it is not aimed at just one country, the people of one race, or a certain group of people, but at the whole of mankind. All of God's work essentially affects all of mankind. This is the greatest difference between God's work and man's work. During the Age of Grace, God became flesh as the Lord Jesus and did the work of redemption in that particular stage. When the Lord Jesus was nailed to the cross, He completed the work of redemption. Afterward, the Holy Spirit began to lead God's chosen people to bear witness to the Lord Jesus. In the end, this testimony reached all of mankind. Thus the gospel of the work of redemption done by the Lord Jesus spread to every corner of the earth, showing that it is the work of God. Had it been the work of man, it certainly would not have spread to every corner of the earth. A gap of two thousand years separates the Age of Grace from the Age of Kingdom. In these two thousand years, we have not seen anyone who was able to do the work of launching a new age. Moreover, there was no one capable of doing anything that would spread to every country of the world. There was no instance of this until God became flesh in the last days and began to do the work of judgment and chastisement. The pilot project for this stage of God's work has already succeeded in China. God's great project has already been completed. Now, God's work has begun to expand to every corner of the world. From this, we can be all the more certain that all of God's work is directed at mankind. In the beginning, God carries out His work in one country as a trial. After completing it successfully, God's work begins to spread and reach all of mankind. This is the greatest difference between God's work and man's work. ... 
The second difference between God's work and man's work is that God's work expresses everything that God is. It is fully representative of God's disposition. All that God expresses is entirely the truth, the way, and the life. Those who experience God's work come to understand God's righteousness, holiness, almightiness, wisdom, wondrousness, and unfathomable depth. Man's work expresses man's experience and understanding. It is representative of man's humanity. No matter how much work man does or how great it is, it is still man's experience and understanding. It is certainly not the truth. Man can only have some understanding or experience of the truth. He cannot utter the truth, nor can he represent the truth. ... Let us examine the third aspect of the difference between God's work and man's work. God's work has the power to conquer people, to change people, to transform people's dispositions, and to free people from Satan's influence. No matter how much experience and understanding of God's word a man has, his work cannot save people. Moreover, it is unable to change people's dispositions. This is because God's word is the truth, and only the truth can be a person's life. At its very best, man's word can be something that conforms to the truth; it can only temporarily help and edify other people. It cannot be anyone's life. That is why God's work is able to save man, while man's work is unable to grant anyone salvation. God's work can change people's dispositions; man's work is unable to change anyone's disposition. All those who have experience can see this clearly. In fact, no matter how much of the Holy Spirit's work a person has, even if he has been working among people for some years, his work is still unable to change their dispositions. It is unable to help them attain true salvation. This is an absolute truth. Only God's work can accomplish this. If man truly experiences the truth and 
seeks it, then he will be able to obtain the work of the Holy Spirit and will be able to change his life disposition. He will gain a true understanding of his corrupt essence. In the end, he will be able to free himself from Satan's influence and attain salvation through God. This is the greatest difference between man's work and God's work. The greatest difference between God's work and man's work is that God can begin and end ages. Only God can do the work of beginning and ending ages; humans cannot. Why can humans not do this work? Because humans do not possess the truth and are not the truth; only God is the truth. No matter how close man's words come to the truth, no matter how excellent his preaching, or no matter how much he understands, it is nothing more than a little experience and a little knowledge of God's words or of the truth, and it is only a limited thing gained through experiencing God's work. It is not the precise truth. Therefore, no matter how well a person understands the truth, he cannot do the work of beginning and ending ages. This is determined by the true essence of human beings. ... And what is the second issue? 
Man does not possess the truth, and man is not the truth; what man has and what man is, no matter how lofty or good his humanity, is still only that limited thing which normal humanity ought to contain, and it cannot be compared with what God is, nor with the truth expressed by God. The difference is that between heaven and earth; therefore man cannot do the work that God does. ... No matter how great your work, no matter how many years you work, no matter how many more years you work than God incarnate, or how many more words you speak than He does, the real things your words describe are still only things that belong to man and are human; they are nothing more than man's little experience and little knowledge of the truth. They cannot be any person's life. So even if a person delivers many sermons that people find excellent, and no matter how much work he does, what he presents is not the truth, nor is it the most precise expression of the truth; still less is it capable of leading the whole of mankind forward. Although a person's speech may contain the enlightenment and illumination of the Holy Spirit, it will only edify people a little and provide a little sustenance for them. It can bring some help to the people of a certain period of time and can do nothing more. This is the result that man's work can achieve. Why, then, can man's work not attain the result that God's work attains? 
It is because the essence of humans is not the truth; there is in them something of the makeup of normal humanity, and it is far removed from what God is and what God has, far removed from the truth expressed by God. In other words, if man departs from God's work and the Holy Spirit stops working, man's work becomes progressively less beneficial to man himself, and man is left without a path. There are certain clear results that only God's work can achieve and that man's work can never achieve: no matter what man does, man's life disposition cannot be changed; no matter what man does, man cannot be made to truly know God; no matter what man does, man cannot be made pure. All this is certain. Some people say: "This is because of the short duration of the work." That is wrong! Never mind a long duration; even a hundred years would be of no use. Can man's work allow man to gain knowledge of God? (No.) No matter how many years you lead other people, you cannot make them able to know God. Let us consider an example. Can Paul's work enable people to know God? (No.) Can the many epistles of the New Testament apostles enable people to know God? (No.) Can the many prophets and servants of God of the Old Testament enable people to know God? (No.) None of them can. The results that man's work can achieve are extremely limited. They can do nothing more than sustain one period of God's work. ... What does this show? 
Man's work cannot help man to know God, man's work cannot change man's disposition, and man's work cannot bring man to purity. This is proven. As for God's work during the last days, there is ever more evidence of it: so many people have been conquered, so many people have borne resounding testimony, so many people have written of their life experiences, so many people have been conquered to the point of forsaking everything else to follow God; thus there is testimony for every aspect. Some people with eight or ten years of experience bear very good testimony, and others with only three to five years of experience also bear very good testimony. Then, if these people who have borne some testimony had ten or twenty more years of experience, what would their testimony be like? Would it not be an even more resounding and more glorious testimony? (Yes.) Is this result achieved by God's work? (Yes.) Is it not true that the result obtained from ten years of God's work surpasses the result of a hundred or even thousands of years of man's work? ... What does this show? Only God's work can achieve the goal and result of saving man, changing man, and perfecting man, whereas man's work, no matter how many years it lasts, will not achieve such a result. What, ultimately, can any man's work achieve? Only that people praise that person, support him, and imitate him. At most, people gain some good behavior, and nothing more; no change in life disposition can be brought about, submission to God and knowledge of God cannot be attained, revering God and shunning evil cannot be attained, true purity and beholding God cannot be attained. The results in all of these important aspects cannot be achieved. Do you understand? 
(Yes, we understand.) On the one hand, in God's work we can discover what God is and what He has, we can see God's disposition, and we can know God's wisdom and His almightiness. This is the result that can be obtained directly from God's words. Moreover, God's word can become man's life. When we have true experience and understanding of God's word, a God-revering heart will develop within us; then we can continually draw our supply of the water of life from God's word, and as God's word takes root within us, we become able always to live out the testimony that God expects of us; that is, His word becomes our life. God's word becomes the eternal and boundless wellspring of our life. And what of man's work? No matter how correct man's words are, or how much they conform to the truth, they cannot serve as man's life; they can only provide a person with temporary help and edification. You understand this, yes? (Yes, we understand.) ... As for God's word, if one day you experience it and enter into it, and gain an understanding of it, you will see that even an entire lifetime would not be enough time to experience those words. It is boundless and infinite, and it becomes the wellspring of life. Is this not the distinction between man's work and God's work? 
(Yes.) That is, man expresses what man is, and God expresses what God is. Man brings man only some benefit and some edification; God is an infinite supply of life for man; this distinction is immense. If we set man aside, we can still carry on; without God's word, we would lose the very source of life. Therefore, God said, "Christ is the truth, the way, and the life." These words are our treasure, our lifeblood, which no one can do without. With these words our life has direction, our life has a goal, and our life has an essential source; these words are the principles of our life. Next, Question 25: You testify that the Lord Jesus has already returned as Almighty God, that He expresses the entire truth so that people may be purified and saved, and that at present He is doing the work of judgment beginning from the house of God, yet we dare not accept this. This is because the religious pastors and elders frequently instruct us that all of God's words and work are recorded in the Bible, that there can be no other words or work of God outside the Bible, and that anything that goes against or beyond the Bible is heresy. We cannot understand this problem, so please explain it to us. 1. How can one know the divine essence of Christ? Bible verses for reference: "Jesus said to him, I am the way, the truth, and the life: no man ... How many righteous people will return to God amid the calamities?
mpm legal is a niche employment law firm. Our practice was created out of an awareness of the changes taking place within the legal market, and a willingness to commit to the level of service and value that the current and future markets require. All of our lawyers are specialists, and we undertake only employment law advice. This allows us to ensure that we are focused on resolving your employment law issues in a way that is both cost-effective and consistent with current best practice and thinking. We are not constrained by convention, nor by management committees that might otherwise make our approach to our work more complicated than it needs to be. Simply, we like to work with employers of all sizes, and are free to do so in the manner that is most suitable, both in terms of our approach to charging and the way in which we deliver our advice. We won’t baffle you with legal jargon, nor complicate a simple issue. Rather, we establish at the outset the outcome that our clients seek, and then work with them within the law to achieve their expectations. We are aware of the ever increasing pressures imposed on legal budgets, and not only provide additional value with our aggressively priced fees, but also offer value-added services such as manager training and support. We think our clients benefit from the type of service that we offer, and would like to do the same for you.
/*
 * Copyright (c) 2014 Spotify AB.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package com.spotify.helios.servicescommon;

import com.google.common.base.Strings;

import com.spotify.helios.serviceregistration.NopServiceRegistrar;
import com.spotify.helios.serviceregistration.NopServiceRegistrarFactory;
import com.spotify.helios.serviceregistration.ServiceRegistrar;
import com.spotify.helios.serviceregistration.ServiceRegistrarFactory;
import com.spotify.helios.serviceregistration.ServiceRegistrarLoader;
import com.spotify.helios.serviceregistration.ServiceRegistrarLoadingException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.nio.file.Path;

/**
 * Loads in the service registrar factory plugin (if specified) and returns an
 * appropriate {@link ServiceRegistrar}.
 */
public class ServiceRegistrars {

  private static final Logger log = LoggerFactory.getLogger(ServiceRegistrars.class);

  /**
   * Create a registrar. Attempts to load it from a plugin if path is not null or using the app
   * class loader otherwise. If no registrar plugin was found, returns a nop registrar.
   *
   * @param path    Path to plugin jar.
   * @param address The address of the registry the registrar should connect to.
   * @param domain  The domain that the registry should be managing.
   * @return The ServiceRegistrar object.
   */
  public static ServiceRegistrar createServiceRegistrar(final Path path,
                                                        final String address,
                                                        final String domain) {
    // Get a registrar factory
    final ServiceRegistrarFactory factory;
    if (path == null) {
      factory = createFactory();
    } else {
      factory = createFactory(path);
    }

    // Create the registrar
    if (address != null) {
      log.info("Creating service registrar with address: {}", address);
      return factory.create(address);
    } else if (!Strings.isNullOrEmpty(domain)) {
      log.info("Creating service registrar for domain: {}", domain);
      // TODO: localhost:4999 is pretty specific to Spotify's registrar, this should be
      // handled in createForDomain there, rather than here. Localhost should just pass through.
      return domain.equals("localhost")
             ? factory.create("tcp://localhost:4999")
             : factory.createForDomain(domain);
    } else {
      log.info("No address nor domain configured, not creating service registrar.");
      return new NopServiceRegistrar();
    }
  }

  /**
   * Get a registrar factory from a plugin.
   *
   * @param path The path to the plugin jar.
   * @return The ServiceRegistrarFactory object.
   */
  private static ServiceRegistrarFactory createFactory(final Path path) {
    final ServiceRegistrarFactory factory;
    final Path absolutePath = path.toAbsolutePath();
    try {
      factory = ServiceRegistrarLoader.load(absolutePath);
      final String name = factory.getClass().getName();
      log.info("Loaded service registrar plugin: {} ({})", name, absolutePath);
    } catch (ServiceRegistrarLoadingException e) {
      throw new RuntimeException("Unable to load service registrar plugin: " + absolutePath, e);
    }
    return factory;
  }

  /**
   * Get a registrar factory from the application class loader.
   *
   * @return The ServiceRegistrarFactory object.
   */
  private static ServiceRegistrarFactory createFactory() {
    final ServiceRegistrarFactory factory;
    final ServiceRegistrarFactory installed;
    try {
      installed = ServiceRegistrarLoader.load();
    } catch (ServiceRegistrarLoadingException e) {
      throw new RuntimeException("Unable to load service registrar", e);
    }
    if (installed == null) {
      log.debug("No service registrar plugin configured");
      factory = new NopServiceRegistrarFactory();
    } else {
      factory = installed;
      final String name = factory.getClass().getName();
      log.info("Loaded installed service registrar: {}", name);
    }
    return factory;
  }
}
\begin{document} \title{Neuron-based Pruning of Deep Neural Networks with Better Generalization using Kronecker Factored Curvature Approximation} \begin{abstract} \par Existing methods of pruning deep neural networks focus on removing unnecessary parameters of the trained network and fine-tuning the model afterwards to find a good solution that recovers the initial performance of the trained model. Unlike other works, our method pays special attention to the quality of the solution in the compressed model and to inference computation time, by pruning neurons. The proposed algorithm directs the parameters of the compressed model toward a flatter solution by exploring the spectral radius of the Hessian, which results in better generalization on unseen data. Moreover, the method does not require a pre-trained network; it performs training and pruning simultaneously. Our results improve on the state of the art in neuron compression. The method is able to achieve very small networks with small accuracy degradation across different neural network models. \end{abstract} \keywords{Neural Network Compression \and Network Pruning \and Flat minimum.} \section{Introduction} \label{introduction} Deep neural networks (DNNs) are now widely used in areas like object detection and natural language processing due to their unprecedented success. It is known that the performance of DNNs often improves by increasing the number of layers and neurons per layer. Training and deploying these deep networks sometimes call for devices with excessive computational power and high memory. In this situation, the idea of pruning parameters of DNNs with minimal drop in performance is crucial for real-time applications on devices such as cellular phones with limited computational and memory resources. In this paper, we propose an algorithm for pruning neurons in DNNs with special attention to the performance of the pruned network. 
In other words, our algorithm directs the parameters of the pruned network toward flatter areas and therefore better generalization on unseen data. In all network pruning studies, the goal is to learn a network with a much smaller number of parameters which is able to virtually match the performance of the overparameterized network. Most of the works in this area have focused on pruning the weights in the network rather than pruning neurons. As \cite{molchanov2016pruning} and \cite{singh2019play} discuss in their work, pruning weights by setting them to zero does not necessarily translate to a faster inference time since specialized software designed to work with sparse tensors is needed to achieve this. Our work, however, is focused on pruning neurons from the network which directly translates to a faster inference time using current regular software since neuron pruning results in lower dimensional tensors. The pruning framework proposed herein iterates the following two steps: (1) given a subset of neurons, we train the underlying network with the aim of low loss and “flatness,” (2) given a trained network on a subset of neurons, we compute the gradient of each neuron with respect to being present or not in the next subset of neurons. Regarding the first step, we consider flatness as being measured by the spectral radius of Hessian. This requires computing the spectral radius which is approximated by computing the spectral radius of its Kronecker-factored Approximate Curvature (K-FAC) block diagonal approximation. The flatness concept comes from \cite{adam} but using K-FAC is brand new. In the second step, the selection of a neuron is binary which is approximated as continuous in the interval [0,1]. The gradient is then computed with respect to loss plus spectral radius of K-FAC. Neurons with large gradient values are selected for the next network. 
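The two-step loop described above can be sketched in a few lines of NumPy. This is only an illustrative toy, not the authors' implementation: `train_step` stands in for the augmented-loss minimization of step (1), and `neuron_saliency` stands in for the mask-gradient criterion of step (2); both function names and the weight-magnitude proxy are assumptions made for the sketch.

```python
import numpy as np

def train_step(weights, mask, lr=0.1):
    # Placeholder "training": shrink masked weights toward a low-loss point.
    # In the paper this step minimizes loss + mu * spectral-radius penalty.
    return weights - lr * weights * mask

def neuron_saliency(weights, mask):
    # Proxy for |d(loss)/d(mask_j)|: here, the magnitude of each neuron's
    # weight. The paper instead differentiates the augmented loss with
    # respect to a continuous relaxation of each mask variable in [0, 1].
    return np.abs(weights) * mask

def prune_train(weights, k, rounds=3):
    mask = np.ones_like(weights)
    for _ in range(rounds):
        for _ in range(10):                   # step (1): train the subnetwork
            weights = train_step(weights, mask)
        sal = neuron_saliency(weights, mask)  # step (2): score the neurons
        keep = np.argsort(sal)[-k:]           # keep the k most salient ones
        mask = np.zeros_like(weights)
        mask[keep] = 1.0
    return weights, mask

w, m = prune_train(np.array([3.0, 0.1, 2.0, 0.05]), k=2)
print(int(m.sum()))  # prints 2
```

The point of the sketch is the alternation itself: training never sees the pruned neurons, and scoring is always done on the currently trained subnetwork.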
The proposed methodology improves generalization accuracy on validation datasets in comparison with state-of-the-art results. In all four experiments on different architectures, our results are superior when the sparsity level is below a threshold value (around a $50\%$ sparsity level for three of the datasets). The area under the curve with respect to sparsity and accuracy is higher for our approach by around $1\%$ and up to $9\%$. There is a vast body of work on network pruning and compression (see \cite{cheng2017survey} for a full list of works); however, its focus is on devising algorithms that produce smaller networks without a substantial drop in accuracy. None of those works tries to find solutions with flatter minima in the compressed network. To the best of our knowledge, our work is the first study that takes this into account and seeks solutions with a better generalization property. Specifically, we make the following contributions. \begin{itemize} \item We build upon the work of \cite{adam} and improve it by using K-FAC to compute the spectral radius of the Hessian and its corresponding eigenvector. Our algorithm can easily be parallelized for faster computation. This avoids the use of a power iteration algorithm, which might not converge in a reasonable number of iterations and amount of time. \item We provide an algorithm for learning a small network with a low spectral radius from a much bigger network. Despite pruning a large portion of the neurons, the accuracy remains almost the same as in the bigger network across different architectures. \item Our method allows for aggressive pruning of neurons of the neural network in each pruning epoch, whereas other methods such as \cite{molchanov2016pruning} are conservative and prune just one neuron per pruning epoch. 
\item Our algorithm is able to achieve very small networks with very small accuracy degradation across different neural network models and outperforms the existing literature on neuron pruning across most of the network architectures we experiment with. \end{itemize} The rest of this paper is organized as follows. In Section \ref{sec:lr}, we review the related work. We formally define our problem, model it as an optimization problem, and provide an algorithm to solve it in Section \ref{sec:model_algorithm}. Finally, in Section \ref{sec:computation}, we apply the algorithm across different modern architectures and conduct an ablation study to demonstrate the effectiveness of the method and discuss the results. \section{Literature Review} \label{sec:lr} Our work is closely related to two different streams of research: 1) network pruning and compression; and 2) better generalization in deep learning. We review each of the two streams in the following paragraphs. \textbf{Network pruning and compression.} \quad The study \cite{denil2013predicting} demonstrates significant redundancy of parameters across several different architectures. This redundancy wastes computation and memory and often culminates in overfitting. This opens up a huge opportunity to develop ideas for shrinking overparameterized networks. Most of the works in this area focus on pruning weights. Early work on weight pruning started with the Optimal Brain Damage \cite{lecun1990optimal} and Optimal Brain Surgeon \cite{hassibi1993second} algorithms. These methods focus on the Taylor expansion of the loss function and prune those weights that increase the loss function by the smallest amount. They need to calculate the Hessian of the loss function, which can be cumbersome in big networks with millions of parameters. To avoid this, they use a diagonal approximation of the Hessian. Regularization is another common way of pruning DNNs and coping with overfitting. 
\cite{ishikawa1996structural} and \cite{chauvin1989back} augment the loss function with $L_0$ or $L_1$ regularization terms. That drives some of the weights very close to zero. Eventually, one can remove the weights with values smaller than a threshold. \cite{han2015learning} is one of the most successful works on weight pruning. They simply take a trained model, drop the weights with values below a threshold, and fine-tune the resulting network. This procedure can be repeated until a specific sparsity level is achieved. However, this alternation between pruning and training can be long and tedious. To avoid this, \cite{lee2018snip} develops an algorithm that prunes the weights before training the network. They use a mask variable for each weight, calculate the derivative of the loss function with respect to those mask variables, and use that as a proxy for the sensitivity of the loss function to that weight. Our pruning scheme is somewhat related to this work. We consider the sensitivity of the loss function, including the spectral radius, with respect to neurons rather than weights. On the other hand, some compression works focus on pruning neurons. Our work falls into this group of methods. As \cite{molchanov2016pruning} discusses, given current software, pruning neurons is more effective at producing a faster inference time, and that is our main reason for focusing on pruning neurons in this paper. \cite{srinivas2015data} uses a data-free pruning method based on the idea of similar neurons. Their method is only applicable to fully connected networks. \cite{hu2016network} observes that most neurons in large networks have activation values close to zero regardless of the input to those neurons and do not play a substantial part in prediction accuracy. This observation leads to a scheme for pruning these zero-activation neurons. To recover the performance of the network after pruning these neurons, they alternate between pruning neurons and training. 
Some of the works on neuron pruning are specifically designed for Convolutional Neural Networks (CNNs), as they are much more computationally demanding than fully connected layers \cite{li2016pruning}. There is abundant work on pruning filters (feature maps) in CNNs (see \cite{singh2019play, li2016pruning, anwar2017structured} as examples). Our neuron pruning scheme is inspired by the work of \cite{molchanov2016pruning} on CNN filter pruning. They adopt a Taylor expansion of the loss function to determine how much pruning each filter in a convolution layer changes the loss function and prune those that contribute the least to the change. Although we use the same pruning criterion, our work differs from theirs in the following three substantial ways. First, they work with a fully trained model and apply pruning to it. Instead, our work integrates the training and pruning parts together. Second, we augment the loss function with a term related to the spectral radius of the Hessian, and thus our neuron pruning scheme is sensitive to the change in flatness of a solution in addition to the change in the loss function. Finally, our pruning scheme is not applied only to prune filters in CNNs. It is more general and can be used to prune other types of layers, such as fully connected layers. \textbf{Better generalization in deep learning.}\quad The initial work \cite{lecun2012efficient} observes that the Stochastic Gradient Descent (SGD) algorithm and similar methods such as RMSProp \cite{hinton2012neural} and ADAM \cite{kingma2014adam} generalize better on new data when the network is trained with smaller mini-batch sizes. \cite{keskar2016large} experimentally demonstrates that using a smaller mini-batch size with SGD tends to converge to flatter minima and that this is the main reason for better generalization. \cite{jastrzebski2018finding} later shows that using larger learning rates also plays a key role in converging to flatter minima. 
Their work essentially studies the effect of the ratio of the learning rate to the mini-batch size on the generalization of the found solutions on unseen data. Almost all works in this area focus on investigating the effect of hyperparameters such as the mini-batch size and learning rate on the curvature of the achieved solution. Despite these findings, there has been little effort to devise algorithms that are guaranteed to converge to flatter minima. The study \cite{adam} is the only work that explores this problem in deep learning settings and tries to find a flatter minimum solution algorithmically. To achieve flatter solutions, they augment the loss function with the spectral radius of the Hessian of the loss function. To compute the latter, they use power iteration along with an efficient procedure known as R-Op \cite{pearlmutter1994fast} to calculate the Hessian-vector product. This implies that they need to run the power iteration algorithm in every optimization step of their algorithm. One potential drawback of using power iteration is that it might not converge in a reasonable number of iterations. To avoid this, we borrow ideas from K-FAC, which was originally developed by \cite{martens2015optimizing} and \cite{grosse2016kronecker} to apply natural gradient descent \cite{amari1998natural} to deep learning optimization. Our proposed method adds the same penalty term; however, we use K-FAC as a different approach to find an approximate spectral radius of the Hessian. In addition, we use the augmented loss function not only for training but also for pruning. K-FAC uses the Kronecker product to approximate the Fisher information matrix, which is a positive semi-definite approximation of the Hessian. K-FAC approximates the Fisher information matrix by a block diagonal matrix where each block is associated with a layer in the neural network architecture. Another positive semi-definite approximation that is widely used in place of the Hessian is the Gauss-Newton (GN) matrix. 
The work \cite{botev2017practical} shows that when the activation functions used in the neural network are piecewise linear, e.g., the standard rectified linear unit (ReLU) and the leaky rectified linear unit (Leaky ReLU), the diagonal blocks of the Hessian and of the GN matrix are the same. On the other hand, \cite{martens2014new} and \cite{pascanu2013revisiting} prove that for typical loss functions in machine learning, such as cross-entropy and squared error, the Fisher information and GN matrices are the same. These observations justify using K-FAC to approximate the Hessian of each layer in a deep learning setting, as most state-of-the-art neural network architectures use ReLU as their activation function. \section{Proposed Approach} \label{sec:model_algorithm} Our goal here is to design an algorithm that, given a data set and a neural network architecture, carefully prunes redundant neurons during training and produces a smaller network with a flat minimum solution. To this end, we first introduce the underlying optimization problem and then discuss the algorithm to solve it. \subsection{Optimization Problem for Integrated Pruning and Training} We are given training data $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i = 1}^{n}$, where $\mathbf{x}$ represents features and $y$ represents target values, and a neural network with a total of $\mathcal{N}$ neurons. We define the set of all the parameters of the neural network with $\ell$ layers and a total number of parameters $d$ as $\mathcal{W} = (\mathbf{w}_1^1, \mathbf{w}_1^2, ..., \mathbf{w}_1^{n_1} , ..., \mathbf{w}_\ell ^{1}, \mathbf{w}_\ell ^{2}, ..., \mathbf{w}_\ell ^{n_\ell}) \in \mathbb{R}^d$. Here, $n_l$ represents the number of neurons in layer $l\in [\ell]$ (where $[x]:=\{1, 2, ..., x\}$) and $\mathbf{w}_l^j$ represents the parameters associated with neuron $j \in [n_l]$ in layer $l \in [\ell]$. 
Since we need the optimization problem to identify and prune redundant neurons, we introduce the set of binary (mask) variables $\mathcal{M_B} = (m_1^1, m_1^2, ..., m_1^{n_1} , ..., m_\ell ^{1}, m_\ell ^{2}, ..., m_\ell ^{n_\ell}) \in \{0, 1\}^\mathcal{N}$. We also define $\mathcal{M} = (m_1^1 e_{\alpha(1,1)}, m_1^2e_{\alpha(1,2)}, ..., m_1^{n_1}e_{\alpha(1,n_1)} , ..., m_\ell ^{1}e_{\alpha(\ell,1)}, m_\ell ^{2}e_{\alpha(\ell,2)}, ..., m_\ell ^{n_\ell}e_{\alpha(\ell,n_\ell)}) \in \{0, 1\}^d$, where $e_s = (1, ..., 1) \in \mathbb{R}^s$ and $\alpha(i,j)$ is the dimension of $\mathbf{w}_i^j$. Each element in $\mathcal{M}_B$ is associated with one of the neurons and identifies whether a neuron fires (value 1) or needs to be pruned (value 0) in the neural network architecture. To achieve a solution with flatter curvature, we denote the spectral radius of the Hessian (or its approximation) of the loss function $\mathcal{C}$ (e.g., cross entropy) at point $\mathcal{W}$ by $\rho_\mathcal{C}(H(\mathcal{W}))$. Then, given the maximum allowed number of neurons $K$ and an upper bound $B$ on the spectral radius of the Hessian, the optimization problem can be written as the following constrained problem: \begin{equation} \label{dfn:our_model1} \begin{aligned} \min_{\mathcal{W},\mathcal{M}_B} \ & \mathcal{C}(\mathcal{W} \odot \mathcal{M};\mathcal{D}), \\ \text{s.t. }\ & \rho_{\mathcal{C}}(H(\mathcal{\mathcal{W} \odot \mathcal{M}})) \leq B,\\ & \mathcal{M}^T_B\mathbf{e}_\mathcal{N} \leq K, \end{aligned} \end{equation} where $\odot$ is the Hadamard product. One drawback of formulation (\ref{dfn:our_model1}) is the intractability of a direct derivation of the Hessian of the loss function, and subsequently its spectral radius, in DNNs with millions of parameters. 
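The masked parameterization $\mathcal{W} \odot \mathcal{M}$ can be made concrete with a toy layer (all numbers below are made up for illustration): zeroing one entry of $\mathcal{M}_B$ zeroes the entire parameter block $\mathbf{w}_l^j$ of that neuron, so the neuron drops out of the forward pass.

```python
import numpy as np

# Toy layer with 3 neurons, 2 inputs each, so alpha(l, j) = 2 for every neuron.
W = np.array([[0.5, -1.0],    # one row of parameters per neuron
              [2.0, 0.3],
              [-0.7, 0.9]])
m = np.array([1.0, 0.0, 1.0])                  # binary mask M_B: prune neuron 1
M = np.repeat(m[:, None], W.shape[1], axis=1)  # expand each m_j by e_{alpha(l,j)}
x = np.array([1.0, 2.0])

out = (W * M) @ x   # Hadamard product W ⊙ M, then the usual forward pass
print(out)          # the pruned neuron (row 1) contributes exactly 0
```

Because the mask multiplies whole rows, pruning is equivalent to deleting the row, which is what yields lower-dimensional tensors and faster inference.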
To cope with this, we follow \cite{adam} and rewrite $\rho_{\mathcal{C}}(H(\mathcal{\mathcal{W} \odot \mathcal{M}}))$ in terms of the vector $\mathbf{v}(\mathcal{W},\mathcal{M})$, where $\norm{\mathbf{v}(\mathcal{W},\mathcal{M})} = 1$ and it denotes the eigenvector corresponding to the eigenvalue of $H(\mathcal{\mathcal{W} \odot \mathcal{M}})$ with the largest absolute value. We also move the constraint on the spectral radius into the objective function as follows: \begin{equation} \label{dfn:our_model2} \begin{aligned} \min_{\mathcal{W},\mathcal{M}_B} \ & L(\mathcal{W}, \mathcal{M}_B; \mathcal{D}) = \mathcal{C}(\mathcal{W} \odot \mathcal{M};\mathcal{D}) + \mu g(\mathcal{W},\mathcal{M}) \\ \text{s.t. }\ & \mathcal{M}_B^T\mathbf{e}_\mathcal{N} \leq K, \end{aligned} \end{equation} where $g(\mathcal{W},\mathcal{M}) := \max\{0, \mathbf{v}^T(\mathcal{W},\mathcal{M})H(\mathcal{\mathcal{W} \odot \mathcal{M}})\mathbf{v}(\mathcal{W},\mathcal{M}) - B\}$ and $\mu$ is a hyperparameter that needs to be tuned. The larger $\mu$ is, the more the optimization problem penalizes large spectral radii, and the flatter the curvature of the loss function at the solution point. To calculate $g(\mathcal{W},\mathcal{M})$, one needs to be able to calculate $\mathbf{v}(\mathcal{W},\mathcal{M})$. To this end, we follow \cite{martens2015optimizing} and approximate the Hessian of the loss function with a block diagonal matrix where each block is the second-order derivative of the loss function with respect to the parameters of one layer. 
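The hinge penalty $g = \max\{0, \mathbf{v}^T H \mathbf{v} - B\}$ can be checked on a tiny symmetric matrix. The matrix $H$, bound $B$, and weight $\mu$ below are illustrative stand-ins, not values from the paper; for the top eigenvector, $\mathbf{v}^T H \mathbf{v}$ is exactly the spectral radius.

```python
import numpy as np

# Toy 2x2 "Hessian" (symmetric), illustrative values only.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
eigvals, eigvecs = np.linalg.eigh(H)
v = eigvecs[:, np.argmax(np.abs(eigvals))]  # eigenvector of largest |eigenvalue|

B = 2.0                                     # upper bound on the spectral radius
g = max(0.0, v @ H @ v - B)                 # hinge penalty; zero once rho(H) <= B
mu = 0.1
penalty = mu * g                            # the mu * g term added to the loss
print(round(g, 3))                          # prints 2.618
```

Once training drives the spectral radius below $B$, the penalty vanishes and the objective reduces to the plain loss $\mathcal{C}$.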
In other words, denoting the activations and inputs (a.k.a. pre-activations) of layer $l \in [\ell]$ by $\mathbf{a}_l(\mathcal{W}, \mathcal{M})$ and $\mathbf{s}_l(\mathcal{W}, \mathcal{M})$, respectively, and defining $\mathbf{g}_l(\mathcal{W}, \mathcal{M}):= \frac{\partial L(\mathcal{W}, \mathcal{M}; \mathcal{D})}{\partial \mathbf{s}_l(\mathcal{W}, \mathcal{M})}$, we have: \begin{equation} \label{hessmatrix} H(\mathcal{W} \odot \mathcal{M}) \approx \left( \begin{array}{ccccc} \mathbf{\Psi}_0(\mathcal{W} \odot \mathcal{M})\otimes\mathbf{\Gamma}_1(\mathcal{W} \odot \mathcal{M}) & & \textbf{0}\\ & \ddots \\ \textbf{0} & & \mathbf{\Psi}_{\ell-1}(\mathcal{W} \odot \mathcal{M})\otimes\mathbf{\Gamma}_\ell(\mathcal{W} \odot \mathcal{M}) \end{array} \right). \end{equation} In (\ref{hessmatrix}), $\mathbf{\Psi}_l(\mathcal{W} \odot \mathcal{M}) := \mathbb{E}[\mathbf{a}_l(\mathcal{W}, \mathcal{M})\mathbf{a}_l^T(\mathcal{W}, \mathcal{M})]$ and $\mathbf{\Gamma}_l(\mathcal{W} \odot \mathcal{M}) := \mathbb{E}[\mathbf{g}_l(\mathcal{W}, \mathcal{M})\mathbf{g}_l^T(\mathcal{W}, \mathcal{M})]$ denote the second-moment matrices of the activations and of the pre-activation derivatives for layer $l$, respectively. Notice that no additional computational resources are needed to derive $\mathbf{g}_l(\mathcal{W}, \mathcal{M})$ and $\mathbf{a}_l(\mathcal{W}, \mathcal{M})$, as they are byproducts of the back-propagation procedure. Let $\mathbf{v}_l(\mathcal{W}, \mathcal{M})$ denote the eigenvector corresponding to the eigenvalue $\lambda_l(\mathcal{W}, \mathcal{M})$ of largest absolute value for block (layer) $l$. 
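The second-moment factors can be estimated from mini-batch statistics. A minimal sketch, with randomly generated stand-ins for the activations $\mathbf{a}_l$ and pre-activation gradients $\mathbf{g}_l$ (the sample count and layer widths are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_act, n_pre = 256, 5, 4

# Stand-ins for quantities collected during back-propagation:
# a_l: activations feeding layer l, g_l: gradients w.r.t. pre-activations.
a_l = rng.standard_normal((n_samples, n_act))
g_l = rng.standard_normal((n_samples, n_pre))

# Second-moment factors of the K-FAC block Psi_{l-1} ⊗ Gamma_l.
Psi = a_l.T @ a_l / n_samples      # E[a a^T], shape (n_act, n_act)
Gamma = g_l.T @ g_l / n_samples    # E[g g^T], shape (n_pre, n_pre)

block = np.kron(Psi, Gamma)        # approximate Hessian block for layer l
```

Both factors are symmetric positive semidefinite by construction, so each Kronecker block is as well; only the small factors, never the full block, need to be stored.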
We analogously define eigenvalues $ \lambda^\Psi_l(\mathcal{W}, \mathcal{M})$ and $ \lambda^\Gamma_l(\mathcal{W}, \mathcal{M})$, along with their corresponding eigenvectors $ \mathbf{v}^\Psi_l(\mathcal{W}, \mathcal{M})$ and $ \mathbf{v}^\Gamma_l(\mathcal{W}, \mathcal{M})$, for the matrices $\mathbf{\Psi}_{l}(\mathcal{W} \odot \mathcal{M})$ and $\mathbf{\Gamma}_l(\mathcal{W} \odot \mathcal{M})$, respectively. Then, we can use properties of the Kronecker product (see \cite{schacke2004kronecker}) to obtain: \begin{equation} \label{model:kfac} \begin{aligned} \mathbf{v}_l(\mathcal{W}, \mathcal{M}) &= \mathbf{v}^\Psi_{l-1}(\mathcal{W}, \mathcal{M}) \otimes \mathbf{v}^\Gamma_l(\mathcal{W}, \mathcal{M}) \\ \lambda_l(\mathcal{W}, \mathcal{M}) &= \lambda^\Psi_{l-1}(\mathcal{W}, \mathcal{M}) \lambda^\Gamma_l(\mathcal{W}, \mathcal{M}). \end{aligned} \end{equation} Once we have $\mathbf{v}_l(\mathcal{W}, \mathcal{M})$ for each block $l$, approximating $\mathbf{v}(\mathcal{W},\mathcal{M})$ is trivial given the properties of block diagonal matrices. Optimization problem (\ref{dfn:our_model2}) has $\mathcal{N}$ additional variables $\mathcal{M}_B$ and is not directly solvable using continuous optimization methods like SGD due to the discrete mask variables $\mathcal{M}_B$. However, separating each neuron's parameters from its mask facilitates measuring the contribution of each neuron to the loss function. In other words, we consider the sensitivity of the loss function to each neuron as a measure of importance and only keep the neurons with the highest sensitivity. 
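The Kronecker eigenpair identity in (\ref{model:kfac}) can be checked numerically on random positive semidefinite stand-ins for $\mathbf{\Psi}_{l-1}$ and $\mathbf{\Gamma}_l$ (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_eigpair(M):
    """Eigenpair of largest-magnitude eigenvalue of a symmetric matrix."""
    w, V = np.linalg.eigh(M)
    i = np.argmax(np.abs(w))
    return w[i], V[:, i]

# Random symmetric PSD factors standing in for Psi_{l-1} and Gamma_l.
A = rng.standard_normal((3, 3)); Psi = A @ A.T
B = rng.standard_normal((4, 4)); Gamma = B @ B.T

lam_P, v_P = top_eigpair(Psi)
lam_G, v_G = top_eigpair(Gamma)

# Property used in (4): the top eigenpair of Psi ⊗ Gamma is
# (lam_P * lam_G, v_P ⊗ v_G), up to the sign of the eigenvector.
lam_block, v_block = top_eigpair(np.kron(Psi, Gamma))
```

This is why only the small per-factor eigenproblems ever need to be solved: the block eigenpair is assembled from them at the cost of one Kronecker product of vectors.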
To this end, since the variables in $\mathcal{M}_B$ are binary, the sensitivity of the loss function with respect to neuron $j \in [n_l]$ in layer $l \in [\ell]$ is defined as: \begin{equation} \label{dfn:our_model3} \Delta \bar{L}_l^j(\mathcal{W}, \mathcal{M}_B) := \: \mid L(\mathcal{W}, \mathcal{M}_B| m_l^j = 0 ; \mathcal{D}) - L(\mathcal{W}, \mathcal{M}_B| m_l^j = 1 ; \mathcal{D})\mid, \end{equation} where $\mathcal{M}_B| m_l^j = 0$ and $\mathcal{M}_B| m_l^j = 1$ are two vectors with the same values for all neurons except for neuron $j$ in layer $l$. Following \cite{molchanov2016pruning} and using a first order Taylor approximation of $L(\mathcal{W}, \mathcal{M}_B| m_l^j = 0)$ around $\mathbf{a}_l^j(\mathcal{W}, \mathcal{M}) = 0$, we estimate (\ref{dfn:our_model3}) by \begin{equation} \label{dfn:equation} \Delta \bar{L} _l^j(\mathcal{W}, \mathcal{M}_B) \approx \: \mid \frac{\partial L(\mathcal{W}, \mathcal{M}_B; \mathcal{D})}{\partial \mathbf{a}_l^j(\mathcal{W}, \mathcal{M})}\mathbf{a}_l^j(\mathcal{W}, \mathcal{M}) \mid. \end{equation} This intuitively means that the importance of a neuron in the neural network depends on two quantities: the magnitude of the gradient of the loss function with respect to the neuron's activation and the magnitude of the activation itself. It is worth mentioning that the numeric value of $\frac{\partial L(\mathcal{W}, \mathcal{M}_B; \mathcal{D})}{\partial \mathbf{a}_l^j(\mathcal{W}, \mathcal{M})}$ is also derived during the back-propagation procedure, so no further computation is necessary. Following the discussion above, we now provide the algorithm to solve minimization problem (\ref{dfn:our_model2}). \subsection{Algorithm} \label{Algorithm} The algorithm, Higher Generalization Neuron Pruning (HGNP), solves optimization problem (\ref{dfn:our_model2}) by updating $\mathcal{W}$ and $\mathcal{M}_B$ alternately. When the parameters $\mathcal{W}$ are being updated, we fix the binary mask vector $\mathcal{M}_B$, and vice versa. 
The alternation procedure between updating the mask vector $\mathcal{M}_B$ and the weights $\mathcal{W}$ happens gradually during training to avoid a substantial drop in the performance of the model. Pruning an excessive number of neurons in one shot might result in an irrecoverable performance drop and a low validation accuracy of the final pruned model. To update the parameters $\mathcal{W}$, we fix the mask variables $\mathcal{M}_B = \hat{\mathcal{M}}_B$ for a certain number of iterations and update the weights $\mathcal{W}$ following Algorithm \ref{algo1}. This algorithm essentially solves the optimization problem $\min_\mathcal{W} L(\mathcal{W}, \hat{\mathcal{M}}_B; \mathcal{D})$ for fixed $\hat{\mathcal{M}}_B$. \begin{algorithm}[H] \textbf{Input:} $\mathcal{D}$ : training data, $\hat{\mathcal{M}}_B$: given mask vector, $\hat{\mathcal{W}}$: given initialization weights, $\alpha$ : learning rate, $\mu$ : coefficient for spectral radius, $B$: bound on spectral radius.\\ \textbf{Output:} $\mathcal{W}$ : the updated weights.\\ Initialize $\mathcal{W} \leftarrow \hat{\mathcal{W}}$.\\ \While{convergence criteria not met}{ \For{each batch of data $\mathcal{D}^b$} { Compute $\nabla_\mathcal{W} \mathcal{C} (\mathcal{W}\odot \hat{\mathcal{M}} ; \mathcal{D}^b)$\\ \tcc{Spectral radius term operations} \For{$l = 1, ..., \ell$} { Derive $\mathbf{v}^\Psi_{l-1}(\mathcal{W}, \hat{\mathcal{M}})$, $\mathbf{v}^\Gamma_l(\mathcal{W}, \hat{\mathcal{M}})$ and eigenvalues $\lambda^\Psi_{l-1}(\mathcal{W}, \hat{\mathcal{M}})$, $\lambda^\Gamma_l(\mathcal{W}, \hat{\mathcal{M}})$.\\ Compute $\mathbf{v}_l(\mathcal{W}, \hat{\mathcal{M}})$ and $\lambda_l(\mathcal{W}, \hat{\mathcal{M}})$ using (\ref{model:kfac}).} Approximate $\mathbf{v}(\mathcal{W}, \hat{\mathcal{M}})$ and its corresponding eigenvalue $\rho_\mathcal{C}(H(\mathcal{W} \odot \hat{\mathcal{M}}))$ using the eigenvectors and eigenvalues derived for each block.\\ Compute using R$^2$-Op from \cite{adam}: 
$\nabla_\mathcal{W}\rho_\mathcal{C}(H(\mathcal{W} \odot \hat{\mathcal{M}}))=\frac{1}{|\mathcal{D}^b|}\sum\limits_{i\in \mathcal{D}^b} \mathbf{v}(\mathcal{W}, \hat{\mathcal{M}})^T\nabla_\mathcal{W} H_i(\mathcal{W} \odot \hat{\mathcal{M}}) \mathbf{v}(\mathcal{W}, \hat{\mathcal{M}})$ where the index $i$ indicates that the Hessian is computed with respect to the single sample $i$.\\ Derive $\nabla_\mathcal{W} g(\mathcal{W}, \hat{\mathcal{M}})$ using $\nabla_ \mathcal{W}\rho_\mathcal{C}(H(\mathcal{W} \odot \hat{\mathcal{M}}))$.\\ \tcc{Gradient descent} Set $\mathcal{W}=\mathcal{W}-\alpha \left(\nabla_\mathcal{W} \mathcal{C} (\mathcal{W}\odot \hat{\mathcal{M}} ; \mathcal{D}^b) +\mu \nabla_\mathcal{W} g(\mathcal{W}, \hat{\mathcal{M}})\right)$ }} \caption{Mini-Batch SGD Algorithm to update $\mathcal{W}$ for a fixed mask vector $\hat{\mathcal{M}}_B$} \label{algo1} \end{algorithm} To update $\mathcal{M}_B$, we apply (\ref{dfn:equation}) to a random mini-batch to find the neurons that the loss function is least sensitive to. We update $\mathcal{M}_B$ by setting the mask variables associated with such neurons to zero. This means we can remove such neurons from the neural network and recreate a new neural network with the remaining neurons and their associated weights in the parameter vector $\mathcal{W}$. We can now use this concept, along with Algorithm \ref{algo1}, to propose Algorithm \ref{main_algo}. The algorithm starts with a random initialization of the parameters $\mathcal{W}_0$. We first fix the mask variables $\mathcal{M}_B$ to all ones and update $\mathcal{W}$ using SGD for $E_1$ training epochs. After taking the pre-pruning training steps, we prune the neural network for the first time by pruning the $N$ neurons with the smallest effect on the loss function. We then alternate between training and pruning every $E_2$ epochs and continue pruning until a specific sparsity level $\bar{\kappa}$ is achieved. 
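The mask-update step can be sketched as follows, with randomly generated stand-ins for the per-neuron activations and gradients that back-propagation would supply. It scores neurons by (\ref{dfn:equation}), applies the per-layer $\ell_2$ normalization used in Algorithm \ref{main_algo}, and zeroes the masks of the $N$ least sensitive neurons; the layer widths and $N$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for per-neuron activations a_l^j and gradients dL/da_l^j
# gathered from back-propagation on one random mini-batch.
acts = {0: rng.standard_normal(6), 1: rng.standard_normal(4)}
grads = {0: rng.standard_normal(6), 1: rng.standard_normal(4)}
masks = {l: np.ones_like(a) for l, a in acts.items()}

def prune_least_sensitive(acts, grads, masks, N):
    """Zero the masks of the N neurons with the smallest layer-normalized
    Taylor sensitivity |dL/da_l^j * a_l^j|."""
    scores = []
    for l in acts:
        s = np.abs(grads[l] * acts[l])
        s = s / np.sqrt(np.sum(s ** 2))          # per-layer normalization
        scores += [(s[j], l, j) for j in range(len(s))]
    for _, l, j in sorted(scores)[:N]:           # N smallest scores
        masks[l][j] = 0.0
    return masks

masks = prune_least_sensitive(acts, grads, masks, N=3)
```

In the full algorithm the zeroed neurons, together with all their weights, are then physically removed before training resumes.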
Here, we define the sparsity level $\kappa := \frac{\|\mathcal{W} \odot \mathcal{M}\|_0}{\|\mathcal{W} \odot e_d\|_0}$. Although we stop the algorithm using the same metric as other works in weight pruning, i.e. the sparsity level, there is a fundamental difference in the actual network architecture after pruning: in our case, we remove all the weights associated with the pruned neurons, whereas in other works such weights are merely set to zero. Once the desired sparsity level $\bar{\kappa}$ is achieved, i.e. $\kappa \leq \bar{\kappa}$, we train the network for a final $E_3$ epochs. Note that the values $E_1$, $E_2$, $E_3$ and $N$ are hyperparameters that need to be tuned and vary across neural network architectures. We summarize the above procedure in Algorithm \ref{main_algo}, where we drop the parameters $(\mathcal{W}, \mathcal{M})$ from $\Delta \bar{L} _l^j(\mathcal{W}, \mathcal{M})$ and refer to it as $\Delta \bar{L} _l^j$ for simplicity. \begin{algorithm}[ht] \ \textbf{Input:} $\mathcal{D}$ : training data, $\mathcal{W}_0$: given initialization weights, $\alpha$ : learning rate, $\mu$ : coefficient for spectral radius, $B$: bound on spectral radius, $\bar{\kappa}$: desired sparsity level, $N$: number of pruned neurons in each round, $E_1, E_2, E_3$: numbers of training epochs pre-pruning, between prunings, and post-pruning.\\ \textbf{Output:} $\mathcal{W}$ : the updated weights, $\mathcal{M}$ : the final mask vector.\\ Initialize $\mathcal{W} \leftarrow {\mathcal{W}_0}$, $\mathcal{M} \leftarrow \mathbf{e}_d$, $\kappa = 1$, $epoch = 0$.\\ \While{$\kappa > \bar{\kappa}$}{ \If{$epoch \geq E_1$ and $epoch\bmod E_2 = 0$}{ Pick a random mini-batch $\mathcal{D}^b$.\\ Compute $\Delta \bar{L} _l^j$ using (\ref{dfn:equation}) for mini-batch $\mathcal{D}^b$.\\ Normalize $\Delta \bar{L} _l^j$ per layer using: $\Delta\bar{L} _l^j = \frac{\Delta \bar{L} _l^j}{\sqrt{\sum_{k=1}^{n_l}(\Delta \bar{L} _l^k)^2}}$\\ Select $N$ neurons with smallest 
$\Delta \bar{L} _l^j$ and set their corresponding mask values in the vector $\mathcal{M}_B$ to zero.\\ Update: $\kappa \gets \frac{\|\mathcal{W} \odot \mathcal{M}\|_0}{\|\mathcal{W} \odot e_d\|_0}$ } Complete one epoch of training to solve $\min_\mathcal{W} L(\mathcal{W}, \mathcal{M}_B; \mathcal{D})$ using Algorithm \ref{algo1} and update $\mathcal{W}$.\\ Update: $epoch \gets epoch + 1$ } Train the neural network for another $E_3$ epochs using Algorithm \ref{algo1}. \caption{HGNP Algorithm to update the parameters $\mathcal{W}$ and mask vector $\mathcal{M}_B$ for solving optimization problem (\ref{dfn:our_model2}).} \label{main_algo} \end{algorithm} \section{Experimental Studies} \label{sec:computation} In this section, we apply the HGNP algorithm to common deep learning models on several widely used datasets and analyze the results. The datasets used in the experiments are as follows. \textbf{Cifar-10}: Collected by \cite{krizhevsky2009learning}, this dataset contains 60,000 $32\times32$ color images with 10 different labels. We use 50,000 samples for training and hold out the remainder for validation. We use conventional techniques such as random horizontal flips to augment the data. \textbf{Oxford Flowers 102 dataset}: Introduced by \cite{Nilsback08}, this dataset has 2,040 training samples and 6,149 validation samples. The number of flower categories is 102 and each image is RGB with size $224 \times 224$. We use random horizontal flips for data augmentation. \textbf{Caltech-UCSD Birds 200 (CUB-200)}: This dataset, introduced in \cite{WelinderEtal2010}, contains 5,994 training samples and 5,794 test samples of 200 species of birds. For the experiment on the birds dataset, we crop all images to $160 \times 160$ to avoid memory issues while training the model with the HGNP algorithm. We compare the quality of our pruned network and its minimum solution against Taylor \cite{molchanov2016pruning}. 
We chose Taylor as the only benchmark since it is very recent and outperforms the other neuron pruning methods mentioned in Section \ref{sec:lr}. Unless otherwise stated, we use the training data for training the model and applying the pruning scheme. In all experiments, accuracy is measured and reported on the same validation dataset to compare our method with Taylor. We apply various common architectures: AlexNet \cite{krizhevsky2012imagenet}, ResNet-18 \cite{he2016deep} and VGG-16 \cite{simonyan2014very}. We choose AlexNet and VGG-16 as they are used in the experiments of \cite{molchanov2016pruning}. To show that our method works well with newer architectures, we select ResNet-18, as it has the power of residual networks and can be trained in a reasonable time. We use Pytorch \cite{paszke2017automatic} for training neural networks and automatic differentiation on GPU. The AlexNet experiments are conducted using one and three Tesla K80 GPUs for the HGNP and Taylor methods, respectively. All other Taylor experiments use a single GeForce RTX 2080. For the HGNP method, we use four, four and five GeForce RTX 2080 GPUs to run the VGG-16/Cifar-10, ResNet-18/Cifar-10 and ResNet-18/Birds 200 experiments, respectively. \begin{figure} \caption{AlexNet pruned network accuracy results on the Flowers dataset.} \label{alexnet} \end{figure} \subsection{Various Common Architectures} \textbf{AlexNet experiment:} \quad For the first experiment, we follow the Taylor paper and adopt AlexNet to predict different species of flowers on the Oxford Flowers dataset. Training the network from scratch does not result in acceptable validation accuracy, so we use the optimal parameters of the model trained on ImageNet \cite{imagenet_cvpr09} to initialize the network parameters, except for the last fully-connected layer, which is initialized randomly. 
We use the same hyperparameters as \cite{molchanov2016pruning} and train the model with learning rate $\alpha = 0.001$ and weight decay $0.0001$. The training data is processed by the network in mini-batches of size $32$ and we use SGD with momentum $0.9$ to update the parameters. We use $\mu = 0.001$ and $B = 0.5$ to penalize the spectral radius of the Hessian. The hyperparameters exactly follow the values chosen in the Taylor paper. We train the network for 5 epochs initially ($E_1$) and then prune $N=100$ neurons after every $E_2 = 5$ epochs. Once we hit the desired sparsity level, we train the network for $E_3 = 50$ epochs. For the Taylor method results, we use a pre-trained network which is trained for 20 epochs and prune one neuron every 30 gradient descent steps afterwards. These values have been optimized by grid search. Figure \ref{alexnet} plots the relative validation accuracy percentage difference of the HGNP and Taylor methods against the sparsity level. It shows that the HGNP and Taylor methods perform closely on less sparse models, but as more neurons are pruned, the relative gap between the accuracy of HGNP and Taylor becomes large ($120\%$ when both models are at the $35\%$ sparsity level). For sparsity above $55\%$, the performance of both methods is very similar. For sparsity of $50\%$, $40\%$ and $35\%$, the accuracy is $(78.9\%, 71.1\%)$, $(77.0\%, 60\%)$ and $(75.5\%, 34\%)$, respectively, where the first value corresponds to HGNP and the second to Taylor. \begin{figure} \caption{ResNet-18 pruned network accuracy results on the Cifar-10 dataset.} \label{resnet} \end{figure} \textbf{ResNet-18 experiment on Cifar-10:} \quad For the second experiment, we train and prune ResNet-18 on the Cifar-10 dataset. We use mini-batch size $128$, along with weight decay $0.0005$ and initial learning rate $\alpha = 0.1$. SGD with momentum $0.9$ is used for a smoother update of the parameters. 
We set $\mu = 0.001$ and $B = 0.5$ for the spectral radius term. The model is trained without any pruning for $E_1 = 50$ epochs. We alternate between pruning $N=100$ neurons and training every $E_2 = 5$ epochs, and fine-tune the model at the desired sparsity level for another $E_3 = 20$ epochs. Moreover, due to the specific structure of residual networks and the existence of residual connections, some layers must have the same number of neurons to avoid dimension incompatibility during the feed-forward pass. To comply with dimension compatibility, we group those specific layers together and prune the same number of neurons from each layer by averaging the number of neurons to prune suggested by our pruning method for those layers. For the Taylor method, we train the model for $200$ epochs. Then we start pruning one neuron from the network and fine-tune the parameters with one epoch of the data. We continue alternating between pruning and fine-tuning until the desired sparsity level is reached. Figure \ref{resnet} plots the relative percentage difference in accuracy on the validation dataset between the HGNP and Taylor methods. The HGNP method consistently outperforms at sparsity levels of $85\%$ and below, although the improvement is not drastic. \textbf{ResNet-18 experiment on Birds 200 dataset:} \quad The third experiment trains ResNet-18 on the Birds 200 dataset. As in the first experiment, we rely on transfer learning and utilize the pretrained convolutional weights of ResNet-18 on ImageNet. The learning and weight decay rates are set to $0.001$ and $0.0001$, respectively. We use mini-batch size 32 along with momentum $0.9$, $\mu= 0.001$ and $B=0.5$ for training the network. We set $E_1 = 25$ and prune 100 neurons every 5 epochs afterwards. As our last fine-tuning step, we train for $E_3=20$ more epochs after achieving the desired sparsity level. 
\begin{figure} \caption{ResNet-18 pruned network accuracy results on the Birds 200 dataset.} \label{resnet-birds} \end{figure} For the Taylor method, we initially train the full model for 60 epochs with learning rate $0.01$ and weight decay rate $0.0001$. The hyperparameters are optimized to achieve the best performance. We then prune one neuron at each pruning step and train for an epoch afterwards to recover the performance. Figure \ref{resnet-birds} suggests that our model outperforms at very high and very low sparsity levels. At sparsity levels of $14\%$ and below, our model starts to outperform again. At the $5\%$ sparsity level, the relative accuracy gap is $6\%$. The accuracies range from $70.5\%$ to $51.9\%$ for HGNP and from $70.5\%$ to $49.0\%$ for Taylor. \textbf{VGG-16 experiment:} \quad Our last experiment trains the VGG-16 neural network with batch normalization \cite{ioffe2015batch} on the Cifar-10 dataset. The mini-batch size and the initial learning rate are set to $128$ and $0.1$, respectively. We decay the learning rate as training progresses and add a regularization term to the loss function with coefficient $0.0005$. The values of $\mu$ and $B$ are set to $0.001$ and $0.5$, respectively. \begin{figure} \caption{VGG-16 pruned network accuracy results on the Cifar-10 dataset.} \label{vgg} \end{figure} For Taylor, we pre-train the model for 200 epochs and then alternate between pruning one neuron and training for one epoch. For HGNP, the pre-pruning number of training epochs is $E_1 = 50$; we prune $N=100$ neurons at each pruning epoch and train the model for $E_2 = 5$ epochs between pruning epochs. Finally the model is fine-tuned for another $E_3 = 20$ epochs. Figure \ref{vgg} compares the two methods. At the $5\%$ sparsity level, the relative gap between the accuracy of HGNP and Taylor is greater than $8\%$. Our model outperforms at all sparsity levels below $90\%$. 
At sparsity of $10\%$, the accuracy of Taylor is $87.3\%$ while HGNP is at $91.4\%$. \subsection{Train and Inference Times} In this section, we compare the training and inference times of the HGNP and Taylor methods. The total training and pruning time for the HGNP method is longer than for the Taylor method, as it needs to calculate the largest eigenvalue and its corresponding eigenvector for each block. The times for the experiments are summarized in Table \ref{tab:my_label}. \begin{table}[ht] \centering \small \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio} \begin{tabular}{c|c|S|S|S} \textbf{Model/Dataset} & \textbf{Hardware} & \textbf{Target Sparsity \%} & \textbf{HGNP (hours)} & \textbf{Taylor (hours)}\\ \hline \multirow{1}{*}{AlexNet/Flowers 102} & TESLA K80 & 35.1 & 57.3 & 3.8\\ \hline \multirow{1}{*}{ResNet-18/Cifar-10} & GeForce RTX 2080 & 4.5 & 121.0 & 10.7\\ \hline \multirow{1}{*}{VGG-16/Cifar-10} & GeForce RTX 2080 & 4.5 & 115.1 & 12.7\\ \hline \multirow{1}{*}{ResNet-18/Birds 200 dataset} & GeForce RTX 2080 & 5.0 & 129.9 & 27.3\\ \end{tabular} \end{adjustbox} \caption{Comparison of the total training and pruning time across different models/datasets for HGNP vs. Taylor.} \label{tab:my_label} \end{table} To compare the actual speed-up of our pruned models, we measure the inference time of one mini-batch for the neuron-pruned (using HGNP), weight-pruned and full models. To calculate the inference time for weight-pruned models, we initialize the network with random weights, randomly keep a percentage of them (the sparsity-level percentage) and set the others to zero. We believe that using optimal weights along with state-of-the-art methods in weight pruning would not have a significant effect on inference time compared to the aforementioned strategy. The actual inference time can depend on many different parameters, including the hardware. 
Table \ref{chap1:tab} compares the inference times of all three models using different GPUs and mini-batch sizes across various models. The full model corresponds to the $100 \%$ sparsity level for both neurons and weights. Our results show that the inference time of neuron-pruned models is lower than that of the full model, as expected. In addition, at the same sparsity level, neuron-pruned models are faster than weight-pruned models in feed-forward.\\ \begin{table}[ht] \centering \small \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio} \begin{tabular}{c|c|S|S|S|S} \textbf{Model/Dataset} & \textbf{Hardware} & \textbf{Sparsity (Neurons) \%} & \textbf{Sparsity (Weights) \%} &\textbf{Mini-batch} & \textbf{Time(ms)} \\ \hline \multirow{2}{*}{AlexNet/Flowers} & GPU:TESLA K80 & 100.0 & 100.0 & 5000 & 913\\ & GPU:TESLA K80 & & 30.4 & 5000 & 808\\ & GPU:TESLA K80 & 30.4 & & 5000 & 570\\ \hline \multirow{4}{*}{ResNet-18/Cifar-10} & CPU:3.4 GHz Intel Core i5 & 100.0 & 100.0 & 128 & 3437\\ & CPU:3.4 GHz Intel Core i5 & 4.5 & & 128 & 1013\\ & CPU:3.4 GHz Intel Core i5 & & 4.5 & 128 & 2402\\ & GPU:GeForce RTX 2080 & 100.0 & 100.0 & 2000 & 192\\ & GPU:GeForce RTX 2080 & & 4.5 & 2000 & 206\\ & GPU:GeForce RTX 2080 & 4.5 & & 2000 & 90\\ \hline \multirow{4}{*}{VGG-16/Cifar-10} & CPU:3.4 GHz Intel Core i5 & 100.0& 100.0 & 128 & 2251\\ & CPU:3.4 GHz Intel Core i5 & 4.5 & & 128 & 482\\ & CPU:3.4 GHz Intel Core i5 & & 4.5 & 128 & 1709\\ & GPU:GeForce RTX 2080 & 100.0 & 100.0 & 2000 & 132\\ & GPU:GeForce RTX 2080 & & 4.5 & 2000 & 187\\ & GPU:GeForce RTX 2080 & 4.5 & & 2000 & 39\\ \end{tabular} \end{adjustbox} \caption{Comparison of inference time for neuron-pruned, weight-pruned and full networks.} \label{chap1:tab} \end{table} We also compare neuron-pruned models with weight-pruned models that produce the same level of accuracy. According to \cite{lee2018snip}, the VGG model on the Cifar-10 dataset has accuracy of $92\%$ at $3\%$ sparsity. 
This level of accuracy translates to $14.4\%$ sparsity in our neuron-pruned model. The inference time comparison is summarized in Table \ref{chap1:tab__}. The results show that even though neuron-pruned models require a higher sparsity level than weight-pruned models to produce the same level of accuracy, they are still faster at inference. \begin{table}[ht] \centering \small \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio} \begin{tabular}{c|c|S|S|S|S} \textbf{Model/Dataset} & \textbf{Hardware} & \textbf{Sparsity (Neurons) \%} & \textbf{Sparsity (Weights) \%} &\textbf{Mini-batch} & \textbf{Time(ms)} \\ \hline \multirow{4}{*}{VGG-16/Cifar-10} & CPU:3.4 GHz Intel Core i5 & 14.4 & & 128 & 1414\\ & CPU:3.4 GHz Intel Core i5 & & 3 & 128 & 1605\\ & GPU:GeForce RTX 2080 & 14.4 & & 2000 & 102\\ & GPU:GeForce RTX 2080 & & 3 & 2000 & 184\\ \end{tabular} \end{adjustbox} \caption{Inference time comparison of neuron-pruned and weight-pruned models that achieve $92\%$ accuracy.} \label{chap1:tab__} \end{table} \subsection{Ablation Study} We conduct ablation experiments to show the effectiveness of our proposed components for pruning. To this end, we remove two components of HGNP one at a time and compare the results with the full algorithm: 1) We remove the component related to the spectral radius (set $\mu=0$) and train the model only using the cross-entropy loss. However, we use the whole loss function, including the spectral radius part, for deciding which neurons to prune. 2) We use the loss function with $\mu=0$ to specify which neurons to prune. However, the complete loss function is used for training and fine-tuning the model. We conduct the ablation experiments on the AlexNet model trained on the Flowers dataset since it trains fast. The results are shown in Figure \ref{ablation}. Full HGNP is consistently better for sparsity levels below $58\%$, which shows the effectiveness of the algorithm. 
\begin{figure} \caption{Ablation study results for AlexNet on the Flowers dataset.} \label{ablation} \end{figure} \subsection{Heuristic Method to Switch between Taylor and HGNP} In all the experiments, the benchmark method (Taylor) performs better than HGNP above a certain sparsity level, and below that level our algorithm outperforms it in accuracy. For instance, for AlexNet this threshold is around the $91\%$ sparsity level (see Figure \ref{alexnet}). If we are after an algorithm that excels at all sparsity levels, this observation leads to a strategy of using Taylor for large sparsity levels and HGNP for low levels. The challenge is identifying the switching level. To this end, we use a linear regression model with 4 samples (each experiment is a sample) where the response variable is the threshold sparsity level specifying which model outperforms. We consider two types of independent variables: the first type is related to the dataset and the second characterizes the neural network architecture. The dataset variables are: the number of training and validation samples, the dimension of the input images, and the number of classes to predict. For the architecture-related variables we use: the number of convolutional layers, the number of linear layers, the number of trainable parameters of the neural network model, the average kernel size across all convolutional layers, and the number of kernels. We use all these predictors and the response variable to perform Lasso regression. The best model based on $R^2$, with all p-values below $0.05$, contains only two significant predictors: the number of classes and the number of training samples. Using the model, we predict the threshold sparsity level for each experiment and use it to decide when to switch from Taylor to HGNP. 
We then create accuracy plots and compare the area under the curve (AUC) for the method that switches between HGNP and Taylor (i.e. the hybrid method) versus the Taylor method, in addition to HGNP versus Taylor, and summarize the results in Table \ref{table_rel}. Based on the results, the AUC is improved for HGNP compared to Taylor across all model/dataset pairs except ResNet-18/Birds. We observe a substantial improvement of the hybrid method compared to Taylor for AlexNet/Flowers 102 and VGG-16/Cifar-10. The hybrid method outperforms Taylor in every experiment. \begin{table}[ht] \centering \small \begin{tabular}{ c|c|c|c } \textbf{Model/Dataset} & \textbf{Hybrid vs Taylor} & \textbf{HGNP vs Taylor} & \textbf{Predicted Threshold}\\ \hline AlexNet/Flowers 102 & 9.33 & 9.20 & 91.0 \\ ResNet-18/Birds & 0.08 & -0.29 & 48.0\\ VGG-16/Cifar-10 & 1.63 & 1.59 & 87.8\\ ResNet-18/Cifar-10 & 0.47 & 0.42 & 87.8\\ \end{tabular} \caption{Relative percentage of improvement in accuracy across different architectures/datasets.} \label{table_rel} \end{table} \section{Discussion} We propose an algorithm that prunes neurons in a DNN architecture with special attention to the final solution of the pruned model. The algorithm is able to achieve flatter minima, which generalize better on unseen data. The experiments show that the HGNP algorithm outperforms the existing literature across different neural network architectures and datasets. Our compact model achieves an actual improvement in inference time with current software by pruning neurons instead of weights. \end{document}
math
These 4 Low-Cost Screening Options Can Help You Avoid Cervical Cancer. A frightening study released earlier this month shows that a woman's chance of dying from cervical cancer is higher than previously thought. But don't panic. The good news: the Centers for Disease Control and Prevention (CDC) says cervical cancer is preventable. The study shows that 4.7 out of every 100,000 white women die from cervical cancer (up from 3.2). For African-American women, the rate reaches 10.1 per 100,000 (up from 5.7). Why the disparity between races? Although the study offers no explanation, many in the medical community say a lack of access to proper health screening may be partly to blame. "I have long known that health disparities exist between social groups," says Dr. Jennifer Caudle. "This is not a new phenomenon, and it is not unique to cervical cancer, but it still takes my breath away that racial disparity in health is so strong." Cervical cancer screening is the key to preventing it. According to the American Cancer Society, there are a few things you can do to lower your chance of getting cervical cancer: avoid exposure to human papillomavirus (HPV) and get the HPV vaccine.
The Office on Women's Health, part of the U.S. Department of Health and Human Services, recommends that women ages 21 to 65 get regular Pap tests (also called Pap smears) to screen for early signs of cervical cancer. Where to find affordable cervical cancer screening: Most insurance plans cover the full cost of Pap tests. If yours doesn't, or you don't have health insurance, here are four affordable ways to get screened for cervical cancer. 1. Medicare: If you qualify for Medicare Part B, you are eligible for a free cervical cancer screening once every 24 months. Women at high risk of developing cervical cancer, or who are of childbearing age and have had an abnormal Pap test within the past 36 months, are eligible for a free screening once every 12 months. 2. Planned Parenthood: Here in Florida, a Pap test at Planned Parenthood will set you back $228. The good news is that most locations offer services on a sliding fee scale based on your income. Eligibility levels and procedure costs vary by state, so contact your local Planned Parenthood office for details. 3. Community health clinics: Community health clinics across the country offer low-cost women's health services, including cervical cancer screening. Use the U.S. Department of Health clinic locator to find a location near you. 4. CDC-funded cervical cancer screening providers: The CDC underwrites the National Breast and Cervical Cancer Early Detection Program to provide screenings to low-income, uninsured, and underinsured women across the country. Find a provider through the CDC's interactive map. What my doctor told me about cervical cancer: Pap tests are no fun. I mean, could they get any more uncomfortable and awkward?
Please, though, don't put it off. I apologize in advance to my editors and indeed the entire TPH office, but I'm going to talk about my uterus. I get a Pap smear every year. A few years ago, my results came back abnormal, and I completely freaked out because I was afraid it meant I had cervical cancer. My doctor patiently explained that cervical cancer is not something that comes on suddenly with no warning; it takes years and years to develop. She told me that regular screenings can pick up plenty of warning signs, and that treating those issues promptly keeps people from developing cervical cancer down the road. My doctor also noted that in 20 years of practice, the only women she had ever treated for advanced cervical cancer were women who had not gotten regular Pap tests. Fortunately, my follow-up tests came back negative. In the end, my uterus and I were fine, but I have never forgotten what she said about the importance of regular screening. Pap tests are a drag, but the alternative is far worse. Use one of the options above to find out where to get screened for cervical cancer. Then make that appointment. Your turn: How much does cervical cancer screening cost in your area? Lisa McGreevy is a staff writer at The Penny Hoarder. She promises to only talk about her uterus when it's really important.
hindi
Born 1945, in England; daughter of Peter Powell (an author and theatre director) and Joan Alexander Carnwath (a writer). Education: Educated in England and Switzerland; studied painting at Richmond College; University of Surrey (Roehampton, England), M.A. Hobbies and other interests: Animals, reading, travel, food, friends, film, children, music. Agent—c/o Author Mail, Walker Books, Ltd., 87 Vauxhall Walk, London SE11 5HJ, England. Freelance writer and illustrator of children's books, 1986—; previously worked as an interior designer and nursery school teacher; with designer Gerald Scarfe, creator of papier-mache and cloth sculptures; actor in stage productions of Mr William Shakespeare's Plays and Bravo, Mr William Shakespeare! The First Christmas, Random House (New York, NY), 1987, reprinted, Walker Books (London, England), 2000. The Amazing Story of Noah's Ark, Walker Books (London, England), 1988. When I Was Little, Walker Books (London, England), 1989. Jonah and the Whale, Random House (New York, NY), 1989. Not a Worry in the World, Walker Books (London, England), 1990, Crown (New York, NY), 1991. Joseph and His Magnificent Coat of Many Colors, Walker Books (London, England), 1990, Candlewick Press (Cambridge, MA), 1992. Greek Myths for Young Children, Walker Books (London, England), 1991, Candlewick Press (Cambridge, MA), 1992. (Reteller) Miguel de Cervantes, Don Quixote, Candlewick Press (Cambridge, MA), 1993. (Reteller) Sinbad the Sailor, Candlewick Press (Cambridge, MA), 1994. (Reteller) The Adventures of Robin Hood, Candlewick Press (Cambridge, MA), 1995. King Arthur and the Knights of the Round Table, Candlewick Press (Cambridge, MA), 1996. (Reteller) The Iliad and the Odyssey, Candlewick Press (Cambridge, MA), 1996. Mr. William Shakespeare's Plays, Walker Books (London, England), 1998, published as Tales From Shakespeare: Seven Plays, Candlewick Press (Cambridge, MA), 1998. Psyche and Eros, Cambridge University Press (Cambridge, England), 1998.
Fabulous Monsters, Candlewick Press (Cambridge, MA), 1999. Bravo, Mr. William Shakespeare!, Candlewick Press (Cambridge, MA), 2000. No Worries!, Walker Books (London, England), 2000. (Reteller) Charles Dickens and Friends, Candlewick Press (Cambridge, MA), 2002. (Reteller) God and His Creations: Tales From the Old Testament, Candlewick Press (Cambridge, MA), 2004. Mr William Shakespeare's Plays and Bravo, Mr William Shakespeare! were adapted for the stage by Alan Durant. Picture-book author and illustrator Marcia Williams is a writer by tradition as well as by inclination. The daughter of a writer, she grew up with books readily available and saw firsthand the discipline needed to successfully write for a living. Now a popular author and illustrator, British-born Williams has brought the classic stories she recalls from her own childhood to life for young children: from Sinbad the Sailor and Robin Hood to Noah and the animals and the gods of Greek mythology, people from many ages and cultures live for modern readers through her tales. Born in England, Williams spent much of her childhood in boarding school, away from her parents. Homesick, she sent her mother and father self-illustrated letters recounting her day-to-day experiences. Sometimes she even wrote a poem to add to her letters. "This is where my career began," she would later quip to Something about the Author (SATA). Williams's mother, also a writer, had a passion for books, and when the two were together she would often read her daughter excerpts from classics and mythology. "I found Marcel Proust and the Greek myths a little hard going," the author recalled. "I was delighted, therefore, to discover later that many of these stories were exciting and amusing. I think this is why I enjoy making classic tales accessible to young children." Moving from school to school did not make Williams exactly fall in love with reading. 
"Always the first thing that happened in a new classroom was having to stand up and read in front of your peers to make sure you had reached the required level," she remembered. "I even find the memory of it a torture. Also, there were very few picture books available, so most classics were read from adult versions, not for pleasure but as preparation for a test. It was only when I had my own children that I came to realize the joy of books. So I think I create books now to make up for all those lost years of pleasure, and to give books to others like Alice and myself who can't see the point of books 'without pictures and conversation.'" While she had always enjoyed writing and illustrating stories and cards for friends, Williams never received formal art training; she viewed her creative outlet as a hobby rather than as a potential career. That would all change after the birth of her second child in the late 1980s. "I was very lucky to visit Walker Books with a Christmas picture on a day they were looking for someone to write and illustrate the Christmas story," she remembered. "When I look back on it now, I find it hard to believe that I had the nerve to present myself, or that the art designer had the nerve to give a book to a complete novice. Maybe he never realized!" The relationship Williams established with Walker Books has continued, and as the illustrator notes, "creating picture books has become as important to me as breathing." Published in 1988, Williams's interpretation of The First Christmas follows the traditional story while also adding images of the way the holiday is celebrated in different parts of the world. Called an "energetic and appealing book" by a School Library Journal reviewer, The First Christmas was the first of several books its author has created featuring Biblical themes. 
Her The Amazing Story of Noah's Ark closely follows the story from the Book of Genesis and is chock-full of animals and activity, with Williams's colorful drawings augmented by "text … ingeniously distributed over the illustrations," according to a Junior Bookshelf critic. Calling the book an "exuberant folk-art treatment," School Library Journal contributor Patricia Dooley praised Williams's adaptation of Joseph and His Magnificent Coat of Many Colors for a preschool audience responsive to bright colors, animals, and a sense of magic. Williams returns to Biblical tales in her 2004 volume God and His Creations: Tales from the Old Testament. The book features eleven well-known Bible stories, including "The Garden of Eden," "Noah's Ark," "David and Goliath," and "Daniel in the Lions' Den," all in under forty pages. As Wendy Lukehart noted in School Library Journal, Williams's retellings of these tales are "humorous, succinct, and rooted in traditional elements." Williams places funny and sometimes insightful comments in the mouths of the tales' human, angelic, and animal characters; in one frequently praised forty-panel, one-page spread in "Noah's Ark," the animals complain ever more strenuously about their not-so-varied daily menu as the trip progresses, while in another, the angels debate whether Adam and Eve or the serpent are to blame for human sin. However, God's words are drawn straight from the Bible (using the New International Version translation). The results, according to a Publishers Weekly reviewer, are "taut and trenchant renditions" of these well-known tales. Williams's signature style of illustration was also praised by critics; "The artistic details … are wonderfully clever as they flow from Williams' pen and paint-brush," Francisca Goldsmith wrote in Booklist. From Bible stories, Williams moved to the myths of Greece she recalled from her childhood.
In the highly praised Greek Myths for Young Children she introduces youngsters to the timeless stories of Pandora's Box, Hercules, Daedalus and Icarus, Arachne, and the Minotaur, among others, using a lighthearted tone to dilute some of the tales' darker moments—such as when Icarus drowns in the sea after flying too close to the sun. Betsy Hearne, writing in Bulletin of the Center for Children's Books, applauded Williams's collection for inducing "a broad range of kids to become culturally literate as they pore over her comic-strip versions" of otherwise offputting classic myths. The author/illustrator's "brightly colored cartoon figures and the witty asides they trade emphasize the vitality and down-to-earth character of the tales," in the opinion of School Library Journal essayist Patricia Dooley. The author continues her lighthearted approach in her retelling of Homer's epic stories in The Iliad and the Odyssey, published in 1996. Peter F. Neumeyer, in a Boston Globe review, lauded her use of the comic-panel format, through which he estimated Williams provided over two hundred illustrations, with endpapers comprising another forty-two panels, telling a "wartime story both serious and witty." He praised her ability to juggle "a sober, straightforward running narrative" with a "modern-lingo, ironic, and iconoclastic repartee," together with "illustrations that not only elucidate but themselves editorialize with wit and irony." Spanning the ages and the continents, Williams has also turned her attention to the legends of her native Great Britain. In The Adventures of Robin Hood she recounts numerous escapades of the outlaw of Sherwood Forest in her characteristic witty fashion, making "this rendition of the Robin Hood legend both an easy laugh and an easy read," according to Booklist contributor Julie Walton.
Praising Williams's use of earthy greens, golds, and browns rather than her usual brilliant colors, a Publishers Weekly critic noted that The Adventures of Robin Hood "may well be her most child-appreciated work yet." The regal King Arthur comes in for much the same treatment at Williams's hands, as the adventures of the sturdy knights of the round table are augmented by quips, jokes, and a steady stream of one-liners. While Booklist reviewer Carolyn Phelan noted that the presentation "is not for every taste," critic Deborah Stevenson praised King Arthur and the Knights of the Round Table in her review for Bulletin of the Center for Children's Books as "an amiable and breezily told introduction to a durable legend, with adventure, broad comedy, and atmosphere aplenty." Williams combined legends from around the world in her thematic volume Fabulous Monsters. The five monsters of the title come from the legends of ancient Greece (the Chimera), Aboriginal Australia (Bunyips), the Bantus of Africa (Isikukumanderu), Atlantic islands (Basilisks), and the Vikings (Grendel from Beowulf), but all but one of the stories share the common mythical theme of a hero come to slay the beast. "The smiling, festively colored monsters," as John Peters described them in Booklist, make funny comments—declaring "Tasty tidbits!" as they chomp down on their victims and the like—and "have an unthreatening comic look." Although Williams does depict her monsters killing and eating humans (as mythological monsters are wont to do), "the hyperbolic humor works to soften any images of violence," thought a Publishers Weekly reviewer. Moving forward a bit in time, Williams retells fourteen of the plays of the classic English playwright William Shakespeare in Tales from Shakespeare—published in Great Britain as Mr. William Shakespeare's Plays—and Bravo, Mr. William Shakespeare!
In both titles, Williams captions scenes from well-known plays with simple, modern-English subtitles while a rowdy Elizabethan peanut gallery provides a humorous running commentary on the action. "I don't think this is quite suitable for children," one such spectator declares during a bloody scene from Macbeth in the former title, while in the latter book another playgoer shouts in frustration at fellow viewers, "They're mummies, you dummies!" as Antony and Cleopatra are buried. Reviewing Bravo, Mr. William Shakespeare! in Booklist, Shelle Rosenfeld declared it "an enjoyable, accessible vehicle to help children experience and appreciate Shakespeare," while Booklist reviewer John Peters commented that Tales from Shakespeare "offers an inviting taste of the Shakespearean buffet, as well as a rare glimpse into the character of Elizabethan theater." Williams retells the stories of another classic English author, nineteenth-century novelist Charles Dickens, in Charles Dickens and Friends. The book contains condensed versions of five of Dickens' novels: Oliver Twist, Great Expectations, A Tale of Two Cities, David Copperfield, and A Christmas Carol. Forcing such long novels into a mere six to ten illustrated pages "requires judicious use of words," Marie Orlando noted in School Library Journal, "and Williams rises to the challenge, providing the salient events in a reasonably smooth narrative flow." Still, as Francisca Goldsmith wrote in Booklist, "This recap of classics is best for an audience already familiar with Dickens' stories." Williams considers herself a very disciplined worker; she has been known to devote seven days a week, ten hours a day, to complete a book project. "I work in my bedroom, so it is sometimes difficult to shut off, but one day I hope to have a studio," the author-illustrator explained to SATA. 
"I spend a long time getting the story right as, although my books are short on texts, I believe this means the story has to be even stronger to hold the weight of the illustrations." She employs a style of illustration called "comic-strip" style, wherein inked drawings tinted with watercolor flow from scene to scene along a linear "strip," with captions printed below and "bubbles" within each picture providing additional dialogue. This style grew out of her desire to communicate with young readers on more than one level. "A child recently told me that he understood my books perfectly," Williams noted. "The main text and pictures were for him to share with his Mum and Dad, but the speech bubbles were just for him. I was delighted at his perception and the feeling that we had formed this special bond, and of course he was right. The speech bubbles are also a wonderful opportunity to add a bit of anarchic humor and animation to the stories," Williams added, "helping to make them accessible by bringing them into the child's own orbit of experience." Because of the comic-book style she employs, there is a sense of theatricality about Williams's books that is intentional on the part of the author/illustrator. "I have always loved the theatre and in many ways I see my books as theatre on the page, and I am the lucky one who gets all the parts!" she explained. "Sheer greed and sheer delight. I hope the delight communicates itself to the reader." Text and illustration remain equally important to Williams: "I strive … to weave them together to build up character and atmosphere until they become a satisfying whole." "I love my work and can't imagine any other career," the author/illustrator readily admitted to SATA. "I enjoy every part of making a book and also enjoy visiting schools and talking to children who are an endless inspiration and always manage to look at things from unexpected angles." Williams, Marcia, Bravo, Mr.
William Shakespeare!, Candlewick Press (Cambridge, MA), 2000. Williams, Marcia, Fabulous Monsters, Candlewick Press (Cambridge, MA), 1999. Williams, Marcia, Mr. William Shakespeare's Plays, Walker Books (London, England), 1998, published as Tales from Shakespeare: Seven Plays, Candlewick Press (Cambridge, MA), 1998. Booklist, June 15, 1992, Ilene Cooper, review of Joseph and His Magnificent Coat of Many Colors, p. 1843; March 1, 1994, Mary Harris Veeder, review of Sinbad the Sailor, p. 1267; March 15, 1995, Julie Walton, review of The Adventures of Robin Hood, p. 1327; April 15, 1996, Carolyn Phelan, review of King Arthur and the Knights of the Round Table, p. 1438; November 1, 1998, John Peters, review of Tales from Shakespeare, p. 490; January 1, 2000, John Peters, review of Fabulous Monsters, p. 935; March 1, 2001, Shelle Rosenfeld, review of Bravo, Mr. William Shakespeare!, p. 1274; October 15, 2002, Francisca Goldsmith, review of Charles Dickens and Friends, p. 405; March 15, 2004, Francisca Goldsmith, review of God and His Creations: Tales from the Old Testament, p. 1308. Books for Your Children, spring, 1992, p. 13. Boston Globe, September 7, 1997, Peter F. Neumeyer, review of The Classics Illustrated. Bulletin of the Center for Children's Books, November, 1992, Betsy Hearne, review of Greek Myths for Young Children, pp. 94-95; May, 1995, p. 326; March, 1996, Deborah Stevenson, review of King Arthur and the Knights of the Round Table, p. 247; February, 1997, p. 227. Horn Book, January-February, 1997, Amy Chamberlain, review of The Iliad and the Odyssey, p. 82. Junior Bookshelf, February, 1989, review of The Amazing Story of Noah's Ark, p. 15; December, 1989, p. 269; February, 1992, p. 35; April, 1993, p. 72. Kirkus Reviews, March 1, 1994, p. 312; October 15, 2002, review of Charles Dickens and Friends: Five Lively Retellings, p. 1540; February 15, 2004, review of God and His Creations, p. 187. Magpies, July, 1995, review of The Adventures of Robin Hood, p.
8; September, 1995, pp. 16-17. Publishers Weekly, August 26, 1988, review of The First Christmas, p. 85; April 27, 1992, review of Joseph and His Magnificent Coat of Many Colors, p. 267; October 19, 1992, review of Greek Myths for Young Children, p. 78; March 1, 1993, review of Don Quixote, p. 57; January 30, 1995, review of The Adventures of Robin Hood, p. 100; March 11, 1996, review of King Arthur and the Knights of the Round Table, p. 66; November 25, 1996, review of The Iliad and the Odyssey, p. 75; November 22, 1999, review of Fabulous Monsters, p. 55; November 6, 2000, review of Bravo for the Bard, p. 93; January 26, 2004, review of God and His Creations, p. 251. School Librarian, May, 1992, review of Greek Myths for Young Children, p. 58. School Library Journal, October, 1988, review of The First Christmas, p. 38; July, 1989, Celia A. Huffman, review of Jonah and the Whale, p. 81; April, 1992, Patricia Dooley, review of Joseph and His Magnificent Coat of Many Colors, p. 102; June, 1994, Patricia Dooley, "Beyond Cultural Literacy: The Enduring Power of Myths," pp. 52-53; April, 1995, JoAnn Rees, review of The Adventures of Robin Hood, p. 148; December, 2000, Chapman Collaghan, review of Bravo, Mr. William Shakespeare!, p. 167; February, 2003, Marie Orlando, review of Charles Dickens and Friends, p. 150; May, 2004, Wendy Lukehart, review of God and His Creations, p. 138. Times Educational Supplement, September 9, 1994, review of Sinbad the Sailor, p. 20. Candlewick Press Web site, http://www.candlewick.com/ (April 2, 2005), "Marcia Williams."
english
/* FUSE: Filesystem in Userspace Copyright (C) 2001-2007 Miklos Szeredi <[email protected]> This program can be distributed under the terms of the GNU LGPLv2. See the file COPYING.LIB */ /* For pthread_rwlock_t */ #define _GNU_SOURCE #include "fuse_i.h" #include "fuse_lowlevel.h" #include "fuse_opt.h" #include "fuse_misc.h" #include "fuse_common_compat.h" #include "fuse_compat.h" #include "fuse_kernel.h" #include <stdio.h> #include <string.h> #include <stdlib.h> #include <stddef.h> #include <unistd.h> #include <time.h> #include <fcntl.h> #include <limits.h> #include <errno.h> #include <signal.h> #include <dlfcn.h> #include <assert.h> #include <sys/param.h> #include <sys/uio.h> #include <sys/time.h> #define FUSE_DEFAULT_INTR_SIGNAL SIGUSR1 #define FUSE_UNKNOWN_INO 0xffffffff #define OFFSET_MAX 0x7fffffffffffffffLL struct fuse_config { unsigned int uid; unsigned int gid; unsigned int umask; double entry_timeout; double negative_timeout; double attr_timeout; double ac_attr_timeout; int ac_attr_timeout_set; int noforget; int debug; int hard_remove; int use_ino; int readdir_ino; int set_mode; int set_uid; int set_gid; int direct_io; int kernel_cache; int auto_cache; int intr; int intr_signal; int help; char *modules; }; struct fuse_fs { struct fuse_operations op; struct fuse_module *m; void *user_data; int compat; int debug; #ifdef __APPLE__ struct fuse *fuse; #endif /* __APPLE__ */ }; struct fusemod_so { void *handle; int ctr; }; struct lock_queue_element { struct lock_queue_element *next; pthread_cond_t cond; }; struct fuse { struct fuse_session *se; struct node **name_table; size_t name_table_size; struct node **id_table; size_t id_table_size; fuse_ino_t ctr; unsigned int generation; unsigned int hidectr; pthread_mutex_t lock; struct fuse_config conf; int intr_installed; struct fuse_fs *fs; int nullpath_ok; int curr_ticket; struct lock_queue_element *lockq; }; struct lock { int type; off_t start; off_t end; pid_t pid; uint64_t owner; struct lock *next; }; struct node 
{ struct node *name_next; struct node *id_next; fuse_ino_t nodeid; unsigned int generation; int refctr; struct node *parent; char *name; uint64_t nlookup; int open_count; struct timespec stat_updated; struct timespec mtime; off_t size; struct lock *locks; unsigned int is_hidden : 1; unsigned int cache_valid : 1; int treelock; int ticket; }; struct fuse_dh { pthread_mutex_t lock; struct fuse *fuse; fuse_req_t req; char *contents; int allocated; unsigned len; unsigned size; unsigned needlen; int filled; uint64_t fh; int error; fuse_ino_t nodeid; }; /* old dir handle */ struct fuse_dirhandle { fuse_fill_dir_t filler; void *buf; }; struct fuse_context_i { struct fuse_context ctx; fuse_req_t req; }; static pthread_key_t fuse_context_key; static pthread_mutex_t fuse_context_lock = PTHREAD_MUTEX_INITIALIZER; static int fuse_context_ref; static struct fusemod_so *fuse_current_so; static struct fuse_module *fuse_modules; static int fuse_load_so_name(const char *soname) { struct fusemod_so *so; so = calloc(1, sizeof(struct fusemod_so)); if (!so) { fprintf(stderr, "fuse: memory allocation failed\n"); return -1; } fuse_current_so = so; so->handle = dlopen(soname, RTLD_NOW); fuse_current_so = NULL; if (!so->handle) { fprintf(stderr, "fuse: %s\n", dlerror()); goto err; } if (!so->ctr) { fprintf(stderr, "fuse: %s did not register any modules\n", soname); goto err; } return 0; err: if (so->handle) dlclose(so->handle); free(so); return -1; } static int fuse_load_so_module(const char *module) { int res; char *soname = malloc(strlen(module) + 64); if (!soname) { fprintf(stderr, "fuse: memory allocation failed\n"); return -1; } sprintf(soname, "libfusemod_%s.so", module); res = fuse_load_so_name(soname); free(soname); return res; } static struct fuse_module *fuse_find_module(const char *module) { struct fuse_module *m; for (m = fuse_modules; m; m = m->next) { if (strcmp(module, m->name) == 0) { m->ctr++; break; } } return m; } static struct fuse_module *fuse_get_module(const char 
*module) { struct fuse_module *m; pthread_mutex_lock(&fuse_context_lock); m = fuse_find_module(module); if (!m) { int err = fuse_load_so_module(module); if (!err) m = fuse_find_module(module); } pthread_mutex_unlock(&fuse_context_lock); return m; } static void fuse_put_module(struct fuse_module *m) { pthread_mutex_lock(&fuse_context_lock); assert(m->ctr > 0); m->ctr--; if (!m->ctr && m->so) { struct fusemod_so *so = m->so; assert(so->ctr > 0); so->ctr--; if (!so->ctr) { struct fuse_module **mp; for (mp = &fuse_modules; *mp;) { if ((*mp)->so == so) *mp = (*mp)->next; else mp = &(*mp)->next; } dlclose(so->handle); free(so); } } pthread_mutex_unlock(&fuse_context_lock); } static struct node *get_node_nocheck(struct fuse *f, fuse_ino_t nodeid) { size_t hash = nodeid % f->id_table_size; struct node *node; for (node = f->id_table[hash]; node != NULL; node = node->id_next) if (node->nodeid == nodeid) return node; return NULL; } static struct node *get_node(struct fuse *f, fuse_ino_t nodeid) { struct node *node = get_node_nocheck(f, nodeid); if (!node) { fprintf(stderr, "fuse internal error: node %llu not found\n", (unsigned long long) nodeid); abort(); } return node; } static void free_node(struct node *node) { free(node->name); free(node); } static void unhash_id(struct fuse *f, struct node *node) { size_t hash = node->nodeid % f->id_table_size; struct node **nodep = &f->id_table[hash]; for (; *nodep != NULL; nodep = &(*nodep)->id_next) if (*nodep == node) { *nodep = node->id_next; return; } } static void hash_id(struct fuse *f, struct node *node) { size_t hash = node->nodeid % f->id_table_size; node->id_next = f->id_table[hash]; f->id_table[hash] = node; } static unsigned int name_hash(struct fuse *f, fuse_ino_t parent, const char *name) { unsigned int hash = *name; if (hash) for (name += 1; *name != '\0'; name++) hash = (hash << 5) - hash + *name; return (hash + parent) % f->name_table_size; } static void unref_node(struct fuse *f, struct node *node); static void 
unhash_name(struct fuse *f, struct node *node) { if (node->name) { size_t hash = name_hash(f, node->parent->nodeid, node->name); struct node **nodep = &f->name_table[hash]; for (; *nodep != NULL; nodep = &(*nodep)->name_next) if (*nodep == node) { *nodep = node->name_next; node->name_next = NULL; unref_node(f, node->parent); free(node->name); node->name = NULL; node->parent = NULL; return; } fprintf(stderr, "fuse internal error: unable to unhash node: %llu\n", (unsigned long long) node->nodeid); abort(); } } static int hash_name(struct fuse *f, struct node *node, fuse_ino_t parentid, const char *name) { size_t hash = name_hash(f, parentid, name); struct node *parent = get_node(f, parentid); node->name = strdup(name); if (node->name == NULL) return -1; parent->refctr ++; node->parent = parent; node->name_next = f->name_table[hash]; f->name_table[hash] = node; return 0; } static void delete_node(struct fuse *f, struct node *node) { if (f->conf.debug) fprintf(stderr, "DELETE: %llu\n", (unsigned long long) node->nodeid); assert(node->treelock == 0); assert(!node->name); unhash_id(f, node); free_node(node); } static void unref_node(struct fuse *f, struct node *node) { assert(node->refctr > 0); node->refctr --; if (!node->refctr) delete_node(f, node); } static fuse_ino_t next_id(struct fuse *f) { do { f->ctr = (f->ctr + 1) & 0xffffffff; if (!f->ctr) f->generation ++; } while (f->ctr == 0 || f->ctr == FUSE_UNKNOWN_INO || get_node_nocheck(f, f->ctr) != NULL); return f->ctr; } static struct node *lookup_node(struct fuse *f, fuse_ino_t parent, const char *name) { size_t hash = name_hash(f, parent, name); struct node *node; for (node = f->name_table[hash]; node != NULL; node = node->name_next) if (node->parent->nodeid == parent && strcmp(node->name, name) == 0) return node; return NULL; } static struct node *find_node(struct fuse *f, fuse_ino_t parent, const char *name) { struct node *node; pthread_mutex_lock(&f->lock); if (!name) node = get_node(f, parent); else node = 
lookup_node(f, parent, name);
	if (node == NULL) {
		node = (struct node *) calloc(1, sizeof(struct node));
		if (node == NULL)
			goto out_err;

		if (f->conf.noforget)
			node->nlookup = 1;
		node->refctr = 1;
		node->nodeid = next_id(f);
		node->generation = f->generation;
		node->open_count = 0;
		node->is_hidden = 0;
		node->treelock = 0;
		node->ticket = 0;
		if (hash_name(f, node, parent, name) == -1) {
			free(node);
			node = NULL;
			goto out_err;
		}
		hash_id(f, node);
	}
	node->nlookup++;
out_err:
	pthread_mutex_unlock(&f->lock);
	return node;
}

static char *add_name(char **buf, unsigned *bufsize, char *s, const char *name)
{
	size_t len = strlen(name);

	if (s - len <= *buf) {
		unsigned pathlen = *bufsize - (s - *buf);
		unsigned newbufsize = *bufsize;
		char *newbuf;

		while (newbufsize < pathlen + len + 1) {
			if (newbufsize >= 0x80000000)
				newbufsize = 0xffffffff;
			else
				newbufsize *= 2;
		}

		newbuf = realloc(*buf, newbufsize);
		if (newbuf == NULL)
			return NULL;

		*buf = newbuf;
		s = newbuf + newbufsize - pathlen;
		memmove(s, newbuf + *bufsize - pathlen, pathlen);
		*bufsize = newbufsize;
	}
	s -= len;
	strncpy(s, name, len);
	s--;
	*s = '/';

	return s;
}

static void unlock_path(struct fuse *f, fuse_ino_t nodeid, struct node *wnode,
			struct node *end, int ticket)
{
	struct node *node;

	if (wnode) {
		assert(wnode->treelock == -1);
		wnode->treelock = 0;
		if (!wnode->ticket)
			wnode->ticket = ticket;
	}

	for (node = get_node(f, nodeid);
	     node != end && node->nodeid != FUSE_ROOT_ID; node = node->parent) {
		assert(node->treelock > 0);
		node->treelock--;
		if (!node->ticket)
			node->ticket = ticket;
	}
}

static void release_tickets(struct fuse *f, fuse_ino_t nodeid,
			    struct node *wnode, int ticket)
{
	struct node *node;

	if (wnode) {
		if (wnode->ticket != ticket)
			return;
		wnode->ticket = 0;
	}

	for (node = get_node(f, nodeid); node->nodeid != FUSE_ROOT_ID;
	     node = node->parent) {
		if (node->ticket != ticket)
			return;
		node->ticket = 0;
	}
}

static int try_get_path(struct fuse *f, fuse_ino_t nodeid, const char *name,
			char **path, struct node **wnodep, int ticket)
{
	unsigned bufsize = 256;
	char *buf;
	char *s;
	struct node *node;
	struct node *wnode = NULL;
	int err;

	*path = NULL;

	err = -ENOMEM;
	buf = malloc(bufsize);
	if (buf == NULL)
		goto out_err;

	s = buf + bufsize - 1;
	*s = '\0';

	if (name != NULL) {
		s = add_name(&buf, &bufsize, s, name);
		err = -ENOMEM;
		if (s == NULL)
			goto out_free;
	}

	if (wnodep) {
		assert(ticket);
		wnode = lookup_node(f, nodeid, name);
		if (wnode) {
			if (wnode->treelock != 0 ||
			    (wnode->ticket && wnode->ticket != ticket)) {
				if (!wnode->ticket)
					wnode->ticket = ticket;
				err = -EAGAIN;
				goto out_free;
			}
			wnode->treelock = -1;
			wnode->ticket = 0;
		}
	}

	err = 0;
	for (node = get_node(f, nodeid); node->nodeid != FUSE_ROOT_ID;
	     node = node->parent) {
		err = -ENOENT;
		if (node->name == NULL || node->parent == NULL)
			goto out_unlock;

		err = -ENOMEM;
		s = add_name(&buf, &bufsize, s, node->name);
		if (s == NULL)
			goto out_unlock;

		if (ticket) {
			err = -EAGAIN;
			if (node->treelock == -1 ||
			    (node->ticket && node->ticket != ticket))
				goto out_unlock;
			node->treelock++;
			node->ticket = 0;
		}
	}

	if (s[0])
		memmove(buf, s, bufsize - (s - buf));
	else
		strcpy(buf, "/");

	*path = buf;
	if (wnodep)
		*wnodep = wnode;

	return 0;

out_unlock:
	if (ticket)
		unlock_path(f, nodeid, wnode, node, ticket);
out_free:
	free(buf);
out_err:
	if (ticket && err != -EAGAIN)
		release_tickets(f, nodeid, wnode, ticket);

	return err;
}

static void wake_up_first(struct fuse *f)
{
	if (f->lockq)
		pthread_cond_signal(&f->lockq->cond);
}

static void wake_up_next(struct lock_queue_element *qe)
{
	if (qe->next)
		pthread_cond_signal(&qe->next->cond);
}

static int get_ticket(struct fuse *f)
{
	do
		f->curr_ticket++;
	while (f->curr_ticket == 0);

	return f->curr_ticket;
}

static void debug_path(struct fuse *f, const char *msg, fuse_ino_t nodeid,
		       const char *name, int wr)
{
	if (f->conf.debug) {
		struct node *wnode = NULL;

		if (wr)
			wnode = lookup_node(f, nodeid, name);

		if (wnode)
			fprintf(stderr, "%s %li (w)\n", msg, wnode->nodeid);
		else
			fprintf(stderr, "%s %li\n", msg, nodeid);
	}
}

static void queue_path(struct fuse *f, struct lock_queue_element *qe,
		       fuse_ino_t nodeid, const char *name, int wr)
{
	struct lock_queue_element **qp;

	debug_path(f, "QUEUE PATH", nodeid, name, wr);
	pthread_cond_init(&qe->cond, NULL);
	qe->next = NULL;
	for (qp = &f->lockq; *qp != NULL; qp = &(*qp)->next);
	*qp = qe;
}

static void dequeue_path(struct fuse *f, struct lock_queue_element *qe,
			 fuse_ino_t nodeid, const char *name, int wr)
{
	struct lock_queue_element **qp;

	debug_path(f, "DEQUEUE PATH", nodeid, name, wr);
	pthread_cond_destroy(&qe->cond);
	for (qp = &f->lockq; *qp != qe; qp = &(*qp)->next);
	*qp = qe->next;
}

static void wait_on_path(struct fuse *f, struct lock_queue_element *qe,
			 fuse_ino_t nodeid, const char *name, int wr)
{
	debug_path(f, "WAIT ON PATH", nodeid, name, wr);
	pthread_cond_wait(&qe->cond, &f->lock);
}

static int get_path_common(struct fuse *f, fuse_ino_t nodeid, const char *name,
			   char **path, struct node **wnode)
{
	int err;
	int ticket;

	pthread_mutex_lock(&f->lock);
	ticket = get_ticket(f);
	err = try_get_path(f, nodeid, name, path, wnode, ticket);
	if (err == -EAGAIN) {
		struct lock_queue_element qe;

		queue_path(f, &qe, nodeid, name, !!wnode);
		do {
			wait_on_path(f, &qe, nodeid, name, !!wnode);
			err = try_get_path(f, nodeid, name, path, wnode,
					   ticket);
			wake_up_next(&qe);
		} while (err == -EAGAIN);
		dequeue_path(f, &qe, nodeid, name, !!wnode);
	}
	pthread_mutex_unlock(&f->lock);

	return err;
}

static int get_path(struct fuse *f, fuse_ino_t nodeid, char **path)
{
	return get_path_common(f, nodeid, NULL, path, NULL);
}

static int get_path_nullok(struct fuse *f, fuse_ino_t nodeid, char **path)
{
	int err = get_path_common(f, nodeid, NULL, path, NULL);

	if (err == -ENOENT && f->nullpath_ok)
		err = 0;

	return err;
}

static int get_path_name(struct fuse *f, fuse_ino_t nodeid, const char *name,
			 char **path)
{
	return get_path_common(f, nodeid, name, path, NULL);
}

static int get_path_wrlock(struct fuse *f, fuse_ino_t nodeid, const char *name,
			   char **path, struct node **wnode)
{
	return get_path_common(f, nodeid, name, path, wnode);
}

static int try_get_path2(struct fuse *f, fuse_ino_t nodeid1, const char *name1,
			 fuse_ino_t nodeid2, const char *name2,
			 char **path1, char **path2,
			 struct node **wnode1, struct node **wnode2,
			 int ticket)
{
	int err;

	/* FIXME: locking two paths needs deadlock checking */
	err = try_get_path(f, nodeid1, name1, path1, wnode1, ticket);
	if (!err) {
		err = try_get_path(f, nodeid2, name2, path2, wnode2, ticket);
		if (err) {
			struct node *wn1 = wnode1 ? *wnode1 : NULL;

			unlock_path(f, nodeid1, wn1, NULL, ticket);
			/* free the allocated path buffer, not the
			   pointer-to-pointer argument */
			free(*path1);
			if (ticket && err != -EAGAIN)
				release_tickets(f, nodeid1, wn1, ticket);
		}
	}
	return err;
}

static int get_path2(struct fuse *f, fuse_ino_t nodeid1, const char *name1,
		     fuse_ino_t nodeid2, const char *name2,
		     char **path1, char **path2,
		     struct node **wnode1, struct node **wnode2)
{
	int err;
	int ticket;

	pthread_mutex_lock(&f->lock);
	ticket = get_ticket(f);
	err = try_get_path2(f, nodeid1, name1, nodeid2, name2,
			    path1, path2, wnode1, wnode2, ticket);
	if (err == -EAGAIN) {
		struct lock_queue_element qe;

		queue_path(f, &qe, nodeid1, name1, !!wnode1);
		debug_path(f, " path2", nodeid2, name2, !!wnode2);
		do {
			wait_on_path(f, &qe, nodeid1, name1, !!wnode1);
			debug_path(f, " path2", nodeid2, name2, !!wnode2);
			err = try_get_path2(f, nodeid1, name1, nodeid2, name2,
					    path1, path2, wnode1, wnode2,
					    ticket);
			wake_up_next(&qe);
		} while (err == -EAGAIN);
		dequeue_path(f, &qe, nodeid1, name1, !!wnode1);
		debug_path(f, " path2", nodeid2, name2, !!wnode2);
	}
	pthread_mutex_unlock(&f->lock);

	return err;
}

static void free_path_wrlock(struct fuse *f, fuse_ino_t nodeid,
			     struct node *wnode, char *path)
{
	pthread_mutex_lock(&f->lock);
	unlock_path(f, nodeid, wnode, NULL, 0);
	wake_up_first(f);
	pthread_mutex_unlock(&f->lock);
	free(path);
}

static void free_path(struct fuse *f, fuse_ino_t nodeid, char *path)
{
	if (path)
		free_path_wrlock(f, nodeid, NULL, path);
}

static void free_path2(struct fuse *f, fuse_ino_t nodeid1, fuse_ino_t nodeid2,
		       struct node *wnode1, struct node *wnode2,
		       char *path1, char *path2)
{
	pthread_mutex_lock(&f->lock);
	unlock_path(f, nodeid1, wnode1, NULL, 0);
	unlock_path(f, nodeid2, wnode2, NULL, 0);
	wake_up_first(f);
	pthread_mutex_unlock(&f->lock);
	free(path1);
	free(path2);
}

static void forget_node(struct fuse *f, fuse_ino_t nodeid, uint64_t nlookup)
{
	struct node *node;

	if (nodeid == FUSE_ROOT_ID)
		return;

	pthread_mutex_lock(&f->lock);
	node = get_node(f, nodeid);

	/*
	 * Node may still be locked due to interrupt idiocy in open,
	 * create and opendir
	 */
	while (node->nlookup == nlookup && node->treelock) {
		struct lock_queue_element qe;

		queue_path(f, &qe, node->nodeid, NULL, 0);
		do {
			wait_on_path(f, &qe, node->nodeid, NULL, 0);
			wake_up_next(&qe);
		} while (node->nlookup == nlookup && node->treelock);
		dequeue_path(f, &qe, node->nodeid, NULL, 0);
	}

	assert(node->nlookup >= nlookup);
	node->nlookup -= nlookup;
	if (!node->nlookup) {
		unhash_name(f, node);
		unref_node(f, node);
	}
	pthread_mutex_unlock(&f->lock);
}

static void unlink_node(struct fuse *f, struct node *node)
{
	if (f->conf.noforget) {
		assert(node->nlookup > 1);
		node->nlookup--;
	}
	unhash_name(f, node);
}

static void remove_node(struct fuse *f, fuse_ino_t dir, const char *name)
{
	struct node *node;

	pthread_mutex_lock(&f->lock);
	node = lookup_node(f, dir, name);
	if (node != NULL)
		unlink_node(f, node);
	pthread_mutex_unlock(&f->lock);
}

static int rename_node(struct fuse *f, fuse_ino_t olddir, const char *oldname,
		       fuse_ino_t newdir, const char *newname, int hide)
{
	struct node *node;
	struct node *newnode;
	int err = 0;

	pthread_mutex_lock(&f->lock);
	node = lookup_node(f, olddir, oldname);
	newnode = lookup_node(f, newdir, newname);
	if (node == NULL)
		goto out;

	if (newnode != NULL) {
		if (hide) {
			fprintf(stderr, "fuse: hidden file got created during hiding\n");
			err = -EBUSY;
			goto out;
		}
		unlink_node(f, newnode);
	}

	unhash_name(f, node);
	if (hash_name(f, node, newdir, newname) == -1) {
		err = -ENOMEM;
		goto out;
	}

	if (hide)
		node->is_hidden = 1;

out:
	pthread_mutex_unlock(&f->lock);
	return err;
}

static void set_stat(struct fuse *f, fuse_ino_t nodeid, struct stat *stbuf)
{
	if (!f->conf.use_ino)
		stbuf->st_ino = nodeid;
	if (f->conf.set_mode)
		stbuf->st_mode = (stbuf->st_mode & S_IFMT) |
				 (0777 & ~f->conf.umask);
	if (f->conf.set_uid)
		stbuf->st_uid = f->conf.uid;
	if (f->conf.set_gid)
		stbuf->st_gid = f->conf.gid;
}

static struct fuse *req_fuse(fuse_req_t req)
{
	return (struct fuse *) fuse_req_userdata(req);
}

static void fuse_intr_sighandler(int sig)
{
	(void) sig;
	/* Nothing to do */
}

struct fuse_intr_data {
	pthread_t id;
	pthread_cond_t cond;
	int finished;
};

static void fuse_interrupt(fuse_req_t req, void *d_)
{
	struct fuse_intr_data *d = d_;
	struct fuse *f = req_fuse(req);

	if (d->id == pthread_self())
		return;

	pthread_mutex_lock(&f->lock);
	while (!d->finished) {
		struct timeval now;
		struct timespec timeout;

		pthread_kill(d->id, f->conf.intr_signal);
		gettimeofday(&now, NULL);
		timeout.tv_sec = now.tv_sec + 1;
		timeout.tv_nsec = now.tv_usec * 1000;
		pthread_cond_timedwait(&d->cond, &f->lock, &timeout);
	}
	pthread_mutex_unlock(&f->lock);
}

static void fuse_do_finish_interrupt(struct fuse *f, fuse_req_t req,
				     struct fuse_intr_data *d)
{
	pthread_mutex_lock(&f->lock);
	d->finished = 1;
	pthread_cond_broadcast(&d->cond);
	pthread_mutex_unlock(&f->lock);
	fuse_req_interrupt_func(req, NULL, NULL);
	pthread_cond_destroy(&d->cond);
}

static void fuse_do_prepare_interrupt(fuse_req_t req, struct fuse_intr_data *d)
{
	d->id = pthread_self();
	pthread_cond_init(&d->cond, NULL);
	d->finished = 0;
	fuse_req_interrupt_func(req, fuse_interrupt, d);
}

static inline void fuse_finish_interrupt(struct fuse *f, fuse_req_t req,
					 struct fuse_intr_data *d)
{
	if (f->conf.intr)
		fuse_do_finish_interrupt(f, req, d);
}

static inline void fuse_prepare_interrupt(struct fuse *f, fuse_req_t req,
					  struct fuse_intr_data *d)
{
	if (f->conf.intr)
		fuse_do_prepare_interrupt(req, d);
}

#if (!__FreeBSD__ && !__APPLE__)
static int fuse_compat_open(struct fuse_fs *fs, const char *path,
			    struct fuse_file_info *fi)
{
	int err;

	if (!fs->compat || fs->compat >= 25)
		err = fs->op.open(path, fi);
	else if (fs->compat == 22) {
		struct fuse_file_info_compat tmp;
		memcpy(&tmp, fi, sizeof(tmp));
		err = ((struct fuse_operations_compat22 *) &fs->op)
			->open(path, &tmp);
		memcpy(fi, &tmp, sizeof(tmp));
		fi->fh = tmp.fh;
	} else
		err = ((struct fuse_operations_compat2 *) &fs->op)
			->open(path, fi->flags);
	return err;
}

static int fuse_compat_release(struct fuse_fs *fs, const char *path,
			       struct fuse_file_info *fi)
{
	if (!fs->compat || fs->compat >= 22)
		return fs->op.release(path, fi);
	else
		return ((struct fuse_operations_compat2 *) &fs->op)
			->release(path, fi->flags);
}

static int fuse_compat_opendir(struct fuse_fs *fs, const char *path,
			       struct fuse_file_info *fi)
{
	if (!fs->compat || fs->compat >= 25)
		return fs->op.opendir(path, fi);
	else {
		int err;
		struct fuse_file_info_compat tmp;
		memcpy(&tmp, fi, sizeof(tmp));
		err = ((struct fuse_operations_compat22 *) &fs->op)
			->opendir(path, &tmp);
		memcpy(fi, &tmp, sizeof(tmp));
		fi->fh = tmp.fh;
		return err;
	}
}

static void convert_statfs_compat(struct fuse_statfs_compat1 *compatbuf,
				  struct statvfs *stbuf)
{
	stbuf->f_bsize = compatbuf->block_size;
	stbuf->f_blocks = compatbuf->blocks;
	stbuf->f_bfree = compatbuf->blocks_free;
	stbuf->f_bavail = compatbuf->blocks_free;
	stbuf->f_files = compatbuf->files;
	stbuf->f_ffree = compatbuf->files_free;
	stbuf->f_namemax = compatbuf->namelen;
}

static void convert_statfs_old(struct statfs *oldbuf, struct statvfs *stbuf)
{
	stbuf->f_bsize = oldbuf->f_bsize;
	stbuf->f_blocks = oldbuf->f_blocks;
	stbuf->f_bfree = oldbuf->f_bfree;
	stbuf->f_bavail = oldbuf->f_bavail;
	stbuf->f_files = oldbuf->f_files;
	stbuf->f_ffree = oldbuf->f_ffree;
	stbuf->f_namemax = oldbuf->f_namelen;
}

static int fuse_compat_statfs(struct fuse_fs *fs, const char *path,
			      struct statvfs *buf)
{
	int err;

	if (!fs->compat || fs->compat >= 25) {
		err = fs->op.statfs(fs->compat == 25 ? "/" : path, buf);
	} else if (fs->compat > 11) {
		struct statfs oldbuf;
		err = ((struct fuse_operations_compat22 *) &fs->op)
			->statfs("/", &oldbuf);
		if (!err)
			convert_statfs_old(&oldbuf, buf);
	} else {
		struct fuse_statfs_compat1 compatbuf;
		memset(&compatbuf, 0, sizeof(struct fuse_statfs_compat1));
		err = ((struct fuse_operations_compat1 *) &fs->op)
			->statfs(&compatbuf);
		if (!err)
			convert_statfs_compat(&compatbuf, buf);
	}
	return err;
}

#else /* !__FreeBSD__ && !__APPLE__ */

static inline int fuse_compat_open(struct fuse_fs *fs, const char *path,
				   struct fuse_file_info *fi)
{
	return fs->op.open(path, fi);
}

static inline int fuse_compat_release(struct fuse_fs *fs, const char *path,
				      struct fuse_file_info *fi)
{
	return fs->op.release(path, fi);
}

static inline int fuse_compat_opendir(struct fuse_fs *fs, const char *path,
				      struct fuse_file_info *fi)
{
	return fs->op.opendir(path, fi);
}

static inline int fuse_compat_statfs(struct fuse_fs *fs, const char *path,
				     struct statvfs *buf)
{
	return fs->op.statfs(fs->compat == 25 ? "/" : path, buf);
}

int fuse_fs_setattr_x(struct fuse_fs *fs, const char *path,
		      struct setattr_x *attr)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.setattr_x)
		return fs->op.setattr_x(path, attr);
	else
		return -ENOSYS;
}

int fuse_fs_fsetattr_x(struct fuse_fs *fs, const char *path,
		       struct setattr_x *attr, struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.fsetattr_x)
		return fs->op.fsetattr_x(path, attr, fi);
	else
		return -ENOSYS;
}

#endif /* !__FreeBSD__ && !__APPLE__ */

int fuse_fs_getattr(struct fuse_fs *fs, const char *path, struct stat *buf)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.getattr) {
		if (fs->debug)
			fprintf(stderr, "getattr %s\n", path);
		return fs->op.getattr(path, buf);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_fgetattr(struct fuse_fs *fs, const char *path, struct stat *buf,
		     struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.fgetattr) {
		if (fs->debug)
			fprintf(stderr, "fgetattr[%llu] %s\n",
				(unsigned long long) fi->fh, path);
		return fs->op.fgetattr(path, buf, fi);
	} else if (path && fs->op.getattr) {
		if (fs->debug)
			fprintf(stderr, "getattr %s\n", path);
		return fs->op.getattr(path, buf);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_rename(struct fuse_fs *fs, const char *oldpath,
		   const char *newpath)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.rename) {
		if (fs->debug)
			fprintf(stderr, "rename %s %s\n", oldpath, newpath);
		return fs->op.rename(oldpath, newpath);
	} else {
		return -ENOSYS;
	}
}

#ifdef __APPLE__

int fuse_fs_setvolname(struct fuse_fs *fs, const char *volname)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.setvolname)
		return fs->op.setvolname(volname);
	else
		return -ENOSYS;
}

int fuse_fs_exchange(struct fuse_fs *fs, const char *path1, const char *path2,
		     unsigned long options)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.exchange)
		return fs->op.exchange(path1, path2, options);
	else
		return -ENOSYS;
}

int fuse_fs_getxtimes(struct fuse_fs *fs, const char *path,
		      struct timespec *bkuptime, struct timespec *crtime)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.getxtimes)
		return fs->op.getxtimes(path, bkuptime, crtime);
	else
		return -ENOSYS;
}

int fuse_fs_setbkuptime(struct fuse_fs *fs, const char *path,
			const struct timespec *tv)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.setbkuptime)
		return fs->op.setbkuptime(path, tv);
	else
		return -ENOSYS;
}

int fuse_fs_setchgtime(struct fuse_fs *fs, const char *path,
		       const struct timespec *tv)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.setchgtime)
		return fs->op.setchgtime(path, tv);
	else
		return -ENOSYS;
}

int fuse_fs_setcrtime(struct fuse_fs *fs, const char *path,
		      const struct timespec *tv)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.setcrtime)
		return fs->op.setcrtime(path, tv);
	else
		return -ENOSYS;
}

#endif /* __APPLE__ */

int fuse_fs_unlink(struct fuse_fs *fs, const char *path)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.unlink) {
		if (fs->debug)
			fprintf(stderr, "unlink %s\n", path);
		return fs->op.unlink(path);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_rmdir(struct fuse_fs *fs, const char *path)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.rmdir) {
		if (fs->debug)
			fprintf(stderr, "rmdir %s\n", path);
		return fs->op.rmdir(path);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_symlink(struct fuse_fs *fs, const char *linkname, const char *path)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.symlink) {
		if (fs->debug)
			fprintf(stderr, "symlink %s %s\n", linkname, path);
		return fs->op.symlink(linkname, path);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_link(struct fuse_fs *fs, const char *oldpath, const char *newpath)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.link) {
		if (fs->debug)
			fprintf(stderr, "link %s %s\n", oldpath, newpath);
		return fs->op.link(oldpath, newpath);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_release(struct fuse_fs *fs, const char *path,
		    struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.release) {
		if (fs->debug)
			fprintf(stderr, "release%s[%llu] flags: 0x%x\n",
				fi->flush ? "+flush" : "",
				(unsigned long long) fi->fh, fi->flags);
		return fuse_compat_release(fs, path, fi);
	} else {
		return 0;
	}
}

int fuse_fs_opendir(struct fuse_fs *fs, const char *path,
		    struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.opendir) {
		int err;

		if (fs->debug)
			fprintf(stderr, "opendir flags: 0x%x %s\n",
				fi->flags, path);
		err = fuse_compat_opendir(fs, path, fi);
		if (fs->debug && !err)
			fprintf(stderr, " opendir[%lli] flags: 0x%x %s\n",
				(unsigned long long) fi->fh, fi->flags, path);
		return err;
	} else {
		return 0;
	}
}

int fuse_fs_open(struct fuse_fs *fs, const char *path,
		 struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.open) {
		int err;

		if (fs->debug)
			fprintf(stderr, "open flags: 0x%x %s\n",
				fi->flags, path);
		err = fuse_compat_open(fs, path, fi);
		if (fs->debug && !err)
			fprintf(stderr, " open[%lli] flags: 0x%x %s\n",
				(unsigned long long) fi->fh, fi->flags, path);
		return err;
	} else {
		return 0;
	}
}

int fuse_fs_read(struct fuse_fs *fs, const char *path, char *buf, size_t size,
		 off_t off, struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.read) {
		int res;

		if (fs->debug)
			fprintf(stderr,
				"read[%llu] %lu bytes from %llu flags: 0x%x\n",
				(unsigned long long) fi->fh,
				(unsigned long) size,
				(unsigned long long) off, fi->flags);

		res = fs->op.read(path, buf, size, off, fi);

		if (fs->debug && res >= 0)
			fprintf(stderr, " read[%llu] %u bytes from %llu\n",
				(unsigned long long) fi->fh, res,
				(unsigned long long) off);
		if (res > (int) size)
			fprintf(stderr, "fuse: read too many bytes\n");
		return res;
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_write(struct fuse_fs *fs, const char *path, const char *buf,
		  size_t size, off_t off, struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.write) {
		int res;

		if (fs->debug)
			fprintf(stderr,
				"write%s[%llu] %lu bytes to %llu flags: 0x%x\n",
				fi->writepage ? "page" : "",
				(unsigned long long) fi->fh,
				(unsigned long) size,
				(unsigned long long) off, fi->flags);

		res = fs->op.write(path, buf, size, off, fi);

		if (fs->debug && res >= 0)
			fprintf(stderr, " write%s[%llu] %u bytes to %llu\n",
				fi->writepage ? "page" : "",
				(unsigned long long) fi->fh, res,
				(unsigned long long) off);
		if (res > (int) size)
			fprintf(stderr, "fuse: wrote too many bytes\n");
		return res;
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_fsync(struct fuse_fs *fs, const char *path, int datasync,
		  struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.fsync) {
		if (fs->debug)
			fprintf(stderr, "fsync[%llu] datasync: %i\n",
				(unsigned long long) fi->fh, datasync);
		return fs->op.fsync(path, datasync, fi);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_fsyncdir(struct fuse_fs *fs, const char *path, int datasync,
		     struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.fsyncdir) {
		if (fs->debug)
			fprintf(stderr, "fsyncdir[%llu] datasync: %i\n",
				(unsigned long long) fi->fh, datasync);
		return fs->op.fsyncdir(path, datasync, fi);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_flush(struct fuse_fs *fs, const char *path,
		  struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.flush) {
		if (fs->debug)
			fprintf(stderr, "flush[%llu]\n",
				(unsigned long long) fi->fh);
		return fs->op.flush(path, fi);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_statfs(struct fuse_fs *fs, const char *path, struct statvfs *buf)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.statfs) {
		if (fs->debug)
			fprintf(stderr, "statfs %s\n", path);
		return fuse_compat_statfs(fs, path, buf);
	} else {
		buf->f_namemax = 255;
		buf->f_bsize = 512;
		return 0;
	}
}

int fuse_fs_releasedir(struct fuse_fs *fs, const char *path,
		       struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.releasedir) {
		if (fs->debug)
			fprintf(stderr, "releasedir[%llu] flags: 0x%x\n",
				(unsigned long long) fi->fh, fi->flags);
		return fs->op.releasedir(path, fi);
	} else {
		return 0;
	}
}

static int fill_dir_old(struct fuse_dirhandle *dh, const char *name, int type,
			ino_t ino)
{
	int res;
	struct stat stbuf;

	memset(&stbuf, 0, sizeof(stbuf));
	stbuf.st_mode = type << 12;
	stbuf.st_ino = ino;

	res = dh->filler(dh->buf, name, &stbuf, 0);
	return res ? -ENOMEM : 0;
}

int fuse_fs_readdir(struct fuse_fs *fs, const char *path, void *buf,
		    fuse_fill_dir_t filler, off_t off,
		    struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.readdir) {
		if (fs->debug)
			fprintf(stderr, "readdir[%llu] from %llu\n",
				(unsigned long long) fi->fh,
				(unsigned long long) off);
		return fs->op.readdir(path, buf, filler, off, fi);
	} else if (fs->op.getdir) {
		struct fuse_dirhandle dh;

		if (fs->debug)
			fprintf(stderr, "getdir[%llu]\n",
				(unsigned long long) fi->fh);
		dh.filler = filler;
		dh.buf = buf;
		return fs->op.getdir(path, &dh, fill_dir_old);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_create(struct fuse_fs *fs, const char *path, mode_t mode,
		   struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.create) {
		int err;

		if (fs->debug)
			fprintf(stderr,
				"create flags: 0x%x %s 0%o umask=0%03o\n",
				fi->flags, path, mode,
				fuse_get_context()->umask);
		err = fs->op.create(path, mode, fi);
		if (fs->debug && !err)
			fprintf(stderr, " create[%llu] flags: 0x%x %s\n",
				(unsigned long long) fi->fh, fi->flags, path);
		return err;
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_lock(struct fuse_fs *fs, const char *path,
		 struct fuse_file_info *fi, int cmd, struct flock *lock)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.lock) {
		if (fs->debug)
			fprintf(stderr,
				"lock[%llu] %s %s start: %llu len: %llu pid: %llu\n",
				(unsigned long long) fi->fh,
				(cmd == F_GETLK ? "F_GETLK" :
				 (cmd == F_SETLK ? "F_SETLK" :
				  (cmd == F_SETLKW ? "F_SETLKW" : "???"))),
				(lock->l_type == F_RDLCK ? "F_RDLCK" :
				 (lock->l_type == F_WRLCK ? "F_WRLCK" :
				  (lock->l_type == F_UNLCK ? "F_UNLCK" :
				   "???"))),
				(unsigned long long) lock->l_start,
				(unsigned long long) lock->l_len,
				(unsigned long long) lock->l_pid);

		return fs->op.lock(path, fi, cmd, lock);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_chown(struct fuse_fs *fs, const char *path, uid_t uid, gid_t gid)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.chown) {
		if (fs->debug)
			fprintf(stderr, "chown %s %lu %lu\n", path,
				(unsigned long) uid, (unsigned long) gid);
		return fs->op.chown(path, uid, gid);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_truncate(struct fuse_fs *fs, const char *path, off_t size)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.truncate) {
		if (fs->debug)
			fprintf(stderr, "truncate %s %llu\n", path,
				(unsigned long long) size);
		return fs->op.truncate(path, size);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_ftruncate(struct fuse_fs *fs, const char *path, off_t size,
		      struct fuse_file_info *fi)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.ftruncate) {
		if (fs->debug)
			fprintf(stderr, "ftruncate[%llu] %s %llu\n",
				(unsigned long long) fi->fh, path,
				(unsigned long long) size);
		return fs->op.ftruncate(path, size, fi);
	} else if (path && fs->op.truncate) {
		if (fs->debug)
			fprintf(stderr, "truncate %s %llu\n", path,
				(unsigned long long) size);
		return fs->op.truncate(path, size);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_utimens(struct fuse_fs *fs, const char *path,
		    const struct timespec tv[2])
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.utimens) {
		if (fs->debug)
			fprintf(stderr, "utimens %s %li.%09lu %li.%09lu\n",
				path, tv[0].tv_sec, tv[0].tv_nsec,
				tv[1].tv_sec, tv[1].tv_nsec);
		return fs->op.utimens(path, tv);
	} else if (fs->op.utime) {
		struct utimbuf buf;

		if (fs->debug)
			fprintf(stderr, "utime %s %li %li\n", path,
				tv[0].tv_sec, tv[1].tv_sec);
		buf.actime = tv[0].tv_sec;
		buf.modtime = tv[1].tv_sec;
		return fs->op.utime(path, &buf);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_access(struct fuse_fs *fs, const char *path, int mask)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.access) {
		if (fs->debug)
			fprintf(stderr, "access %s 0%o\n", path, mask);
		return fs->op.access(path, mask);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_readlink(struct fuse_fs *fs, const char *path, char *buf,
		     size_t len)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.readlink) {
		if (fs->debug)
			fprintf(stderr, "readlink %s %lu\n", path,
				(unsigned long) len);
		return fs->op.readlink(path, buf, len);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_mknod(struct fuse_fs *fs, const char *path, mode_t mode,
		  dev_t rdev)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.mknod) {
		if (fs->debug)
			fprintf(stderr, "mknod %s 0%o 0x%llx umask=0%03o\n",
				path, mode, (unsigned long long) rdev,
				fuse_get_context()->umask);
		return fs->op.mknod(path, mode, rdev);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_mkdir(struct fuse_fs *fs, const char *path, mode_t mode)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.mkdir) {
		if (fs->debug)
			fprintf(stderr, "mkdir %s 0%o umask=0%03o\n",
				path, mode, fuse_get_context()->umask);
		return fs->op.mkdir(path, mode);
	} else {
		return -ENOSYS;
	}
}

#ifdef __APPLE__
int fuse_fs_setxattr(struct fuse_fs *fs, const char *path, const char *name,
		     const char *value, size_t size, int flags,
		     uint32_t position)
#else
int fuse_fs_setxattr(struct fuse_fs *fs, const char *path, const char *name,
		     const char *value, size_t size, int flags)
#endif /* __APPLE__ */
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.setxattr) {
		if (fs->debug)
#ifdef __APPLE__
			fprintf(stderr, "setxattr %s %s %lu 0x%x %lu\n",
				path, name, (unsigned long) size, flags,
				(unsigned long) position);
#else
			fprintf(stderr, "setxattr %s %s %lu 0x%x\n",
				path, name, (unsigned long) size, flags);
#endif /* __APPLE__ */
#ifdef __APPLE__
		return fs->op.setxattr(path, name, value, size, flags,
				       position);
#else
		return fs->op.setxattr(path, name, value, size, flags);
#endif /* __APPLE__ */
	} else {
		return -ENOSYS;
	}
}

#ifdef __APPLE__
int fuse_fs_getxattr(struct fuse_fs *fs, const char *path, const char *name,
		     char *value, size_t size, uint32_t position)
#else
int fuse_fs_getxattr(struct fuse_fs *fs, const char *path, const char *name,
		     char *value, size_t size)
#endif /* __APPLE__ */
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.getxattr) {
		if (fs->debug)
#ifdef __APPLE__
			fprintf(stderr, "getxattr %s %s %lu %lu\n",
				path, name, (unsigned long) size,
				(unsigned long) position);
#else
			fprintf(stderr, "getxattr %s %s %lu\n",
				path, name, (unsigned long) size);
#endif /* __APPLE__ */
#ifdef __APPLE__
		return fs->op.getxattr(path, name, value, size, position);
#else
		return fs->op.getxattr(path, name, value, size);
#endif /* __APPLE__ */
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_listxattr(struct fuse_fs *fs, const char *path, char *list,
		      size_t size)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.listxattr) {
		if (fs->debug)
			fprintf(stderr, "listxattr %s %lu\n",
				path, (unsigned long) size);
		return fs->op.listxattr(path, list, size);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_bmap(struct fuse_fs *fs, const char *path, size_t blocksize,
		 uint64_t *idx)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.bmap) {
		if (fs->debug)
			fprintf(stderr,
				"bmap %s blocksize: %lu index: %llu\n",
				path, (unsigned long) blocksize,
				(unsigned long long) *idx);
		return fs->op.bmap(path, blocksize, idx);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_removexattr(struct fuse_fs *fs, const char *path, const char *name)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.removexattr) {
		if (fs->debug)
			fprintf(stderr, "removexattr %s %s\n", path, name);
		return fs->op.removexattr(path, name);
	} else {
		return -ENOSYS;
	}
}

int fuse_fs_ioctl(struct fuse_fs *fs, const char *path, int cmd, void *arg,
		  struct fuse_file_info *fi, unsigned int flags, void *data)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.ioctl) {
		if (fs->debug)
			fprintf(stderr, "ioctl[%llu] 0x%x flags: 0x%x\n",
				(unsigned long long) fi->fh, cmd, flags);
		return fs->op.ioctl(path, cmd, arg, fi, flags, data);
	} else
		return -ENOSYS;
}

int fuse_fs_poll(struct fuse_fs *fs, const char *path,
		 struct fuse_file_info *fi, struct fuse_pollhandle *ph,
		 unsigned *reventsp)
{
	fuse_get_context()->private_data = fs->user_data;
	if (fs->op.poll) {
		int res;

		if (fs->debug)
			fprintf(stderr, "poll[%llu] ph: %p\n",
				(unsigned long long) fi->fh, ph);

		res = fs->op.poll(path, fi, ph, reventsp);

		if (fs->debug && !res)
			fprintf(stderr, " poll[%llu] revents: 0x%x\n",
				(unsigned long long) fi->fh, *reventsp);
		return res;
	} else
		return -ENOSYS;
}

static int is_open(struct fuse *f, fuse_ino_t dir, const char *name)
{
	struct node *node;
	int isopen = 0;

	pthread_mutex_lock(&f->lock);
	node = lookup_node(f, dir, name);
	if (node && node->open_count > 0)
		isopen = 1;
	pthread_mutex_unlock(&f->lock);

	return isopen;
}

static char *hidden_name(struct fuse *f, fuse_ino_t dir, const char *oldname,
			 char *newname, size_t bufsize)
{
	struct stat buf;
	struct node *node;
	struct node *newnode;
	char *newpath;
	int res;
	int failctr = 10;

	do {
		pthread_mutex_lock(&f->lock);
		node = lookup_node(f, dir, oldname);
		if (node == NULL) {
			pthread_mutex_unlock(&f->lock);
			return NULL;
		}
		do {
			f->hidectr++;
			snprintf(newname, bufsize, ".fuse_hidden%08x%08x",
				 (unsigned int) node->nodeid, f->hidectr);
			newnode = lookup_node(f, dir, newname);
		} while (newnode);

		try_get_path(f, dir, newname, &newpath, NULL, 0);
		pthread_mutex_unlock(&f->lock);

		if (!newpath)
			break;

		res = fuse_fs_getattr(f->fs, newpath, &buf);
		if (res == -ENOENT)
			break;
		free(newpath);
		newpath = NULL;
	} while (res == 0 && --failctr);

	return newpath;
}

static int hide_node(struct fuse *f, const char *oldpath,
		     fuse_ino_t dir, const char *oldname)
{
	char newname[64];
	char *newpath;
	int err = -EBUSY;

	newpath = hidden_name(f, dir, oldname, newname, sizeof(newname));
	if (newpath) {
		err = fuse_fs_rename(f->fs, oldpath, newpath);
		if (!err)
			err = rename_node(f, dir, oldname, dir, newname, 1);
		free(newpath);
	}
	return err;
}

static int mtime_eq(const struct stat *stbuf, const struct timespec *ts)
{
	return stbuf->st_mtime == ts->tv_sec &&
		ST_MTIM_NSEC(stbuf) == ts->tv_nsec;
}

#ifndef CLOCK_MONOTONIC
#define CLOCK_MONOTONIC CLOCK_REALTIME
#endif

static void curr_time(struct timespec *now)
{
#ifdef __APPLE__
#define FUSE4X_TIMEVAL_TO_TIMESPEC(tv, ts) {	\
	(ts)->tv_sec = (tv)->tv_sec;		\
	(ts)->tv_nsec = (tv)->tv_usec * 1000;	\
}
	struct timeval tp;
	gettimeofday(&tp, NULL);
	/* XXX: TBD: We are losing resolution here. */
	FUSE4X_TIMEVAL_TO_TIMESPEC(&tp, now);
#else
	static clockid_t clockid = CLOCK_MONOTONIC;
	int res = clock_gettime(clockid, now);
	if (res == -1 && errno == EINVAL) {
		clockid = CLOCK_REALTIME;
		res = clock_gettime(clockid, now);
	}
	if (res == -1) {
		perror("fuse: clock_gettime");
		abort();
	}
#endif /* __APPLE__ */
}

static void update_stat(struct node *node, const struct stat *stbuf)
{
	if (node->cache_valid &&
	    (!mtime_eq(stbuf, &node->mtime) || stbuf->st_size != node->size))
		node->cache_valid = 0;
	node->mtime.tv_sec = stbuf->st_mtime;
	node->mtime.tv_nsec = ST_MTIM_NSEC(stbuf);
	node->size = stbuf->st_size;
	curr_time(&node->stat_updated);
}

static int lookup_path(struct fuse *f, fuse_ino_t nodeid,
		       const char *name, const char *path,
		       struct fuse_entry_param *e, struct fuse_file_info *fi)
{
	int res;

	memset(e, 0, sizeof(struct fuse_entry_param));
	if (fi)
		res = fuse_fs_fgetattr(f->fs, path, &e->attr, fi);
	else
		res = fuse_fs_getattr(f->fs, path, &e->attr);
	if (res == 0) {
		struct node *node;

		node = find_node(f, nodeid, name);
		if (node == NULL)
			res = -ENOMEM;
		else {
			e->ino = node->nodeid;
			e->generation = node->generation;
			e->entry_timeout = f->conf.entry_timeout;
			e->attr_timeout = f->conf.attr_timeout;
			if (f->conf.auto_cache) {
				pthread_mutex_lock(&f->lock);
				update_stat(node, &e->attr);
pthread_mutex_unlock(&f->lock); } set_stat(f, e->ino, &e->attr); if (f->conf.debug) fprintf(stderr, " NODEID: %lu\n", (unsigned long) e->ino); } } return res; } static struct fuse_context_i *fuse_get_context_internal(void) { struct fuse_context_i *c; c = (struct fuse_context_i *) pthread_getspecific(fuse_context_key); if (c == NULL) { c = (struct fuse_context_i *) malloc(sizeof(struct fuse_context_i)); if (c == NULL) { /* This is hard to deal with properly, so just abort. If memory is so low that the context cannot be allocated, there's not much hope for the filesystem anyway */ fprintf(stderr, "fuse: failed to allocate thread specific data\n"); abort(); } pthread_setspecific(fuse_context_key, c); } return c; } static void fuse_freecontext(void *data) { free(data); } static int fuse_create_context_key(void) { int err = 0; pthread_mutex_lock(&fuse_context_lock); if (!fuse_context_ref) { err = pthread_key_create(&fuse_context_key, fuse_freecontext); if (err) { fprintf(stderr, "fuse: failed to create thread specific key: %s\n", strerror(err)); pthread_mutex_unlock(&fuse_context_lock); return -1; } } fuse_context_ref++; pthread_mutex_unlock(&fuse_context_lock); return 0; } static void fuse_delete_context_key(void) { pthread_mutex_lock(&fuse_context_lock); fuse_context_ref--; if (!fuse_context_ref) { free(pthread_getspecific(fuse_context_key)); pthread_key_delete(fuse_context_key); } pthread_mutex_unlock(&fuse_context_lock); } static struct fuse *req_fuse_prepare(fuse_req_t req) { struct fuse_context_i *c = fuse_get_context_internal(); const struct fuse_ctx *ctx = fuse_req_ctx(req); c->req = req; c->ctx.fuse = req_fuse(req); c->ctx.uid = ctx->uid; c->ctx.gid = ctx->gid; c->ctx.pid = ctx->pid; c->ctx.umask = ctx->umask; return c->ctx.fuse; } static inline void reply_err(fuse_req_t req, int err) { /* fuse_reply_err() uses non-negated errno values */ fuse_reply_err(req, -err); } static void reply_entry(fuse_req_t req, const struct fuse_entry_param *e, int err) { if (!err) 
    {
        struct fuse *f = req_fuse(req);
        if (fuse_reply_entry(req, e) == -ENOENT) {
            /* Skip forget for negative result */
            if (e->ino != 0)
                forget_node(f, e->ino, 1);
        }
    } else
        reply_err(req, err);
}

void fuse_fs_init(struct fuse_fs *fs, struct fuse_conn_info *conn)
{
    fuse_get_context()->private_data = fs->user_data;
    if (fs->op.init)
        fs->user_data = fs->op.init(conn);
}

static void fuse_lib_init(void *data, struct fuse_conn_info *conn)
{
    struct fuse *f = (struct fuse *) data;
    struct fuse_context_i *c = fuse_get_context_internal();

    memset(c, 0, sizeof(*c));
    c->ctx.fuse = f;
    conn->want |= FUSE_CAP_EXPORT_SUPPORT;
    fuse_fs_init(f->fs, conn);
}

void fuse_fs_destroy(struct fuse_fs *fs)
{
    fuse_get_context()->private_data = fs->user_data;
    if (fs->op.destroy)
        fs->op.destroy(fs->user_data);
    if (fs->m)
        fuse_put_module(fs->m);
    free(fs);
}

static void fuse_lib_destroy(void *data)
{
    struct fuse *f = (struct fuse *) data;
    struct fuse_context_i *c = fuse_get_context_internal();

    memset(c, 0, sizeof(*c));
    c->ctx.fuse = f;
    fuse_fs_destroy(f->fs);
    f->fs = NULL;
}

static void fuse_lib_lookup(fuse_req_t req, fuse_ino_t parent,
                            const char *name)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_entry_param e;
    char *path;
    int err;
    struct node *dot = NULL;

    if (name[0] == '.') {
        int len = strlen(name);

        if (len == 1 || (name[1] == '.'
                         && len == 2)) {
            pthread_mutex_lock(&f->lock);
            if (len == 1) {
                if (f->conf.debug)
                    fprintf(stderr, "LOOKUP-DOT\n");
                dot = get_node_nocheck(f, parent);
                if (dot == NULL) {
                    pthread_mutex_unlock(&f->lock);
                    reply_entry(req, &e, -ESTALE);
                    return;
                }
                dot->refctr++;
            } else {
                if (f->conf.debug)
                    fprintf(stderr, "LOOKUP-DOTDOT\n");
                parent = get_node(f, parent)->parent->nodeid;
            }
            pthread_mutex_unlock(&f->lock);
            name = NULL;
        }
    }

    err = get_path_name(f, parent, name, &path);
    if (!err) {
        struct fuse_intr_data d;
        if (f->conf.debug)
            fprintf(stderr, "LOOKUP %s\n", path);
        fuse_prepare_interrupt(f, req, &d);
        err = lookup_path(f, parent, name, path, &e, NULL);
        if (err == -ENOENT && f->conf.negative_timeout != 0.0) {
            e.ino = 0;
            e.entry_timeout = f->conf.negative_timeout;
            err = 0;
        }
        fuse_finish_interrupt(f, req, &d);
        free_path(f, parent, path);
    }
    if (dot) {
        pthread_mutex_lock(&f->lock);
        unref_node(f, dot);
        pthread_mutex_unlock(&f->lock);
    }
    reply_entry(req, &e, err);
}

static void fuse_lib_forget(fuse_req_t req, fuse_ino_t ino,
                            unsigned long nlookup)
{
    struct fuse *f = req_fuse(req);
    if (f->conf.debug)
        fprintf(stderr, "FORGET %llu/%lu\n", (unsigned long long) ino,
                nlookup);
    forget_node(f, ino, nlookup);
    fuse_reply_none(req);
}

static void fuse_lib_getattr(fuse_req_t req, fuse_ino_t ino,
                             struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct stat buf;
    char *path;
    int err;

    memset(&buf, 0, sizeof(buf));

    if (fi != NULL)
        err = get_path_nullok(f, ino, &path);
    else
        err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        if (fi)
            err = fuse_fs_fgetattr(f->fs, path, &buf, fi);
        else
            err = fuse_fs_getattr(f->fs, path, &buf);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    if (!err) {
        if (f->conf.auto_cache) {
            pthread_mutex_lock(&f->lock);
            update_stat(get_node(f, ino), &buf);
            pthread_mutex_unlock(&f->lock);
        }
        set_stat(f, ino, &buf);
        fuse_reply_attr(req, &buf, f->conf.attr_timeout);
    } else
        reply_err(req, err);
}

int
fuse_fs_chmod(struct fuse_fs *fs, const char *path, mode_t mode)
{
    fuse_get_context()->private_data = fs->user_data;
    if (fs->op.chmod)
        return fs->op.chmod(path, mode);
    else
        return -ENOSYS;
}

#ifdef __APPLE__

int fuse_fs_chflags(struct fuse_fs *fs, const char *path, uint32_t flags)
{
    fuse_get_context()->private_data = fs->user_data;
    if (fs->op.chflags)
        return fs->op.chflags(path, flags);
    else
        return -ENOSYS;
}

static void fuse_lib_setattr_x(fuse_req_t req, fuse_ino_t ino,
                               struct setattr_x *attr, int valid,
                               struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct stat buf;
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = 0;
        if (!err && valid) {
            if (fi)
                err = fuse_fs_fsetattr_x(f->fs, path, attr, fi);
            else
                err = fuse_fs_setattr_x(f->fs, path, attr);
            if (err == -ENOSYS)
                err = 0;
            else
                goto done;
        }
        if (!err && (valid & FUSE_SET_ATTR_FLAGS)) {
            err = fuse_fs_chflags(f->fs, path, attr->flags);
            /* XXX: don't complain if flags couldn't be written */
            if (err == -ENOSYS)
                err = 0;
        }
        if (!err && (valid & FUSE_SET_ATTR_BKUPTIME)) {
            err = fuse_fs_setbkuptime(f->fs, path, &attr->bkuptime);
        }
        if (!err && (valid & FUSE_SET_ATTR_CHGTIME)) {
            err = fuse_fs_setchgtime(f->fs, path, &attr->chgtime);
        }
        if (!err && (valid & FUSE_SET_ATTR_CRTIME)) {
            err = fuse_fs_setcrtime(f->fs, path, &attr->crtime);
        }
        if (!err && (valid & FUSE_SET_ATTR_MODE))
            err = fuse_fs_chmod(f->fs, path, attr->mode);
        if (!err && (valid & (FUSE_SET_ATTR_UID | FUSE_SET_ATTR_GID))) {
            uid_t uid = (valid & FUSE_SET_ATTR_UID) ?
                attr->uid : (uid_t) -1;
            gid_t gid = (valid & FUSE_SET_ATTR_GID) ?
                attr->gid : (gid_t) -1;
            err = fuse_fs_chown(f->fs, path, uid, gid);
        }
        if (!err && (valid & FUSE_SET_ATTR_SIZE)) {
            if (fi)
                err = fuse_fs_ftruncate(f->fs, path, attr->size, fi);
            else
                err = fuse_fs_truncate(f->fs, path, attr->size);
        }
        if (!err && (valid & FUSE_SET_ATTR_MTIME)) {
            struct timespec tv[2];
            if (valid & FUSE_SET_ATTR_ATIME) {
                tv[0] = attr->acctime;
            } else {
                struct timeval now;
                gettimeofday(&now, NULL);
                tv[0].tv_sec = now.tv_sec;
                tv[0].tv_nsec = now.tv_usec * 1000;
            }
            tv[1] = attr->modtime;
            err = fuse_fs_utimens(f->fs, path, tv);
        }
done:
        if (!err)
            err = fuse_fs_getattr(f->fs, path, &buf);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    if (!err) {
        if (f->conf.auto_cache) {
            pthread_mutex_lock(&f->lock);
            update_stat(get_node(f, ino), &buf);
            pthread_mutex_unlock(&f->lock);
        }
        set_stat(f, ino, &buf);
        fuse_reply_attr(req, &buf, f->conf.attr_timeout);
    } else
        reply_err(req, err);
}

#endif /* __APPLE__ */

static void fuse_lib_setattr(fuse_req_t req, fuse_ino_t ino,
                             struct stat *attr, int valid,
                             struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct stat buf;
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = 0;
#ifdef __APPLE__
        if (!err && (valid & FUSE_SET_ATTR_FLAGS)) {
            err = fuse_fs_chflags(f->fs, path, attr->st_flags);
            /* XXX: don't complain if flags couldn't be written */
            if (err == -ENOSYS)
                err = 0;
        }
        if (!err && (valid & FUSE_SET_ATTR_BKUPTIME)) {
            struct timespec tv;
            tv.tv_sec = (uint64_t)(attr->st_qspare[0]);
            tv.tv_nsec = (uint32_t)(attr->st_lspare);
            err = fuse_fs_setbkuptime(f->fs, path, &tv);
        }
        if (!err && (valid & FUSE_SET_ATTR_CHGTIME)) {
            struct timespec tv;
            tv.tv_sec = (uint64_t)(attr->st_ctime);
            tv.tv_nsec = (uint32_t)(attr->st_ctimensec);
            err = fuse_fs_setchgtime(f->fs, path, &tv);
        }
        if (!err && (valid & FUSE_SET_ATTR_CRTIME)) {
            struct timespec tv;
            tv.tv_sec = (uint64_t)(attr->st_qspare[1]);
            tv.tv_nsec = (uint32_t)(attr->st_gen);
            err =
                fuse_fs_setcrtime(f->fs, path, &tv);
        }
#endif /* __APPLE__ */
        if (!err && (valid & FUSE_SET_ATTR_MODE))
            err = fuse_fs_chmod(f->fs, path, attr->st_mode);
        if (!err && (valid & (FUSE_SET_ATTR_UID | FUSE_SET_ATTR_GID))) {
            uid_t uid = (valid & FUSE_SET_ATTR_UID) ?
                attr->st_uid : (uid_t) -1;
            gid_t gid = (valid & FUSE_SET_ATTR_GID) ?
                attr->st_gid : (gid_t) -1;
            err = fuse_fs_chown(f->fs, path, uid, gid);
        }
        if (!err && (valid & FUSE_SET_ATTR_SIZE)) {
            if (fi)
                err = fuse_fs_ftruncate(f->fs, path, attr->st_size, fi);
            else
                err = fuse_fs_truncate(f->fs, path, attr->st_size);
        }
#ifdef __APPLE__
        if (!err && (valid & FUSE_SET_ATTR_MTIME)) {
            struct timespec tv[2];
            if (valid & FUSE_SET_ATTR_ATIME) {
                tv[0].tv_sec = attr->st_atime;
                tv[0].tv_nsec = ST_ATIM_NSEC(attr);
            } else {
                struct timeval now;
                gettimeofday(&now, NULL);
                tv[0].tv_sec = now.tv_sec;
                tv[0].tv_nsec = now.tv_usec * 1000;
            }
            tv[1].tv_sec = attr->st_mtime;
            tv[1].tv_nsec = ST_MTIM_NSEC(attr);
            err = fuse_fs_utimens(f->fs, path, tv);
        }
#else
        if (!err &&
            (valid & (FUSE_SET_ATTR_ATIME | FUSE_SET_ATTR_MTIME)) ==
            (FUSE_SET_ATTR_ATIME | FUSE_SET_ATTR_MTIME)) {
            struct timespec tv[2];
            tv[0].tv_sec = attr->st_atime;
            tv[0].tv_nsec = ST_ATIM_NSEC(attr);
            tv[1].tv_sec = attr->st_mtime;
            tv[1].tv_nsec = ST_MTIM_NSEC(attr);
            err = fuse_fs_utimens(f->fs, path, tv);
        }
#endif /* __APPLE__ */
        if (!err)
            err = fuse_fs_getattr(f->fs, path, &buf);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    if (!err) {
        if (f->conf.auto_cache) {
            pthread_mutex_lock(&f->lock);
            update_stat(get_node(f, ino), &buf);
            pthread_mutex_unlock(&f->lock);
        }
        set_stat(f, ino, &buf);
        fuse_reply_attr(req, &buf, f->conf.attr_timeout);
    } else
        reply_err(req, err);
}

static void fuse_lib_access(fuse_req_t req, fuse_ino_t ino, int mask)
{
    struct fuse *f = req_fuse_prepare(req);
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_access(f->fs, path, mask);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    reply_err(req, err);
}

static void fuse_lib_readlink(fuse_req_t req, fuse_ino_t ino)
{
    struct fuse *f = req_fuse_prepare(req);
    char linkname[PATH_MAX + 1];
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_readlink(f->fs, path, linkname, sizeof(linkname));
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    if (!err) {
        linkname[PATH_MAX] = '\0';
        fuse_reply_readlink(req, linkname);
    } else
        reply_err(req, err);
}

static void fuse_lib_mknod(fuse_req_t req, fuse_ino_t parent,
                           const char *name, mode_t mode, dev_t rdev)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_entry_param e;
    char *path;
    int err;

    err = get_path_name(f, parent, name, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = -ENOSYS;
        if (S_ISREG(mode)) {
            struct fuse_file_info fi;

            memset(&fi, 0, sizeof(fi));
            fi.flags = O_CREAT | O_EXCL | O_WRONLY;
            err = fuse_fs_create(f->fs, path, mode, &fi);
            if (!err) {
                err = lookup_path(f, parent, name, path, &e, &fi);
                fuse_fs_release(f->fs, path, &fi);
            }
        }
        if (err == -ENOSYS) {
            err = fuse_fs_mknod(f->fs, path, mode, rdev);
            if (!err)
                err = lookup_path(f, parent, name, path, &e, NULL);
        }
        fuse_finish_interrupt(f, req, &d);
        free_path(f, parent, path);
    }
    reply_entry(req, &e, err);
}

static void fuse_lib_mkdir(fuse_req_t req, fuse_ino_t parent,
                           const char *name, mode_t mode)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_entry_param e;
    char *path;
    int err;

    err = get_path_name(f, parent, name, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_mkdir(f->fs, path, mode);
        if (!err)
            err = lookup_path(f, parent, name, path, &e, NULL);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, parent, path);
    }
    reply_entry(req, &e, err);
}

static void fuse_lib_unlink(fuse_req_t req, fuse_ino_t parent,
                            const char *name)
{
    struct fuse *f = req_fuse_prepare(req);
    struct node *wnode;
    char *path;
    int err;

    err = get_path_wrlock(f, parent, name, &path, &wnode);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        if (!f->conf.hard_remove && is_open(f, parent, name)) {
            err = hide_node(f, path, parent, name);
        } else {
            err = fuse_fs_unlink(f->fs, path);
            if (!err)
                remove_node(f, parent, name);
        }
        fuse_finish_interrupt(f, req, &d);
        free_path_wrlock(f, parent, wnode, path);
    }
    reply_err(req, err);
}

static void fuse_lib_rmdir(fuse_req_t req, fuse_ino_t parent,
                           const char *name)
{
    struct fuse *f = req_fuse_prepare(req);
    struct node *wnode;
    char *path;
    int err;

    err = get_path_wrlock(f, parent, name, &path, &wnode);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_rmdir(f->fs, path);
        fuse_finish_interrupt(f, req, &d);
        if (!err)
            remove_node(f, parent, name);
        free_path_wrlock(f, parent, wnode, path);
    }
    reply_err(req, err);
}

static void fuse_lib_symlink(fuse_req_t req, const char *linkname,
                             fuse_ino_t parent, const char *name)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_entry_param e;
    char *path;
    int err;

    err = get_path_name(f, parent, name, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_symlink(f->fs, linkname, path);
        if (!err)
            err = lookup_path(f, parent, name, path, &e, NULL);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, parent, path);
    }
    reply_entry(req, &e, err);
}

static void fuse_lib_rename(fuse_req_t req, fuse_ino_t olddir,
                            const char *oldname, fuse_ino_t newdir,
                            const char *newname)
{
    struct fuse *f = req_fuse_prepare(req);
    char *oldpath;
    char *newpath;
    struct node *wnode1;
    struct node *wnode2;
    int err;

    err = get_path2(f, olddir, oldname, newdir, newname,
                    &oldpath, &newpath, &wnode1, &wnode2);
    if (!err) {
        struct fuse_intr_data d;
        err = 0;
        fuse_prepare_interrupt(f, req, &d);
        if (!f->conf.hard_remove && is_open(f, newdir, newname))
            err = hide_node(f, newpath, newdir, newname);
        if (!err) {
            err = fuse_fs_rename(f->fs, oldpath, newpath);
            if
            (!err)
                err = rename_node(f, olddir, oldname, newdir, newname, 0);
        }
        fuse_finish_interrupt(f, req, &d);
        free_path2(f, olddir, newdir, wnode1, wnode2, oldpath, newpath);
    }
    reply_err(req, err);
}

#ifdef __APPLE__

static int exchange_node(struct fuse *f, fuse_ino_t olddir,
                         const char *oldname, fuse_ino_t newdir,
                         const char *newname,
                         __unused unsigned long options)
{
    struct node *node;
    struct node *newnode;
    int err = 0;

    pthread_mutex_lock(&f->lock);
    node = lookup_node(f, olddir, oldname);
    newnode = lookup_node(f, newdir, newname);
    if (node == NULL)
        goto out;

    if (newnode != NULL) {
        off_t tmpsize;
        struct timespec tmpspec;

        tmpsize = node->size;
        node->size = newnode->size;
        newnode->size = tmpsize;

        tmpspec.tv_sec = node->mtime.tv_sec;
        tmpspec.tv_nsec = node->mtime.tv_nsec;
        node->mtime.tv_sec = newnode->mtime.tv_sec;
        node->mtime.tv_nsec = newnode->mtime.tv_nsec;
        newnode->mtime.tv_sec = tmpspec.tv_sec;
        newnode->mtime.tv_nsec = tmpspec.tv_nsec;

        node->cache_valid = newnode->cache_valid = 0;
        curr_time(&node->stat_updated);
        curr_time(&newnode->stat_updated);
    }

out:
    pthread_mutex_unlock(&f->lock);
    return err;
}

static void fuse_lib_setvolname(fuse_req_t req, const char *volname)
{
    struct fuse *f = req_fuse_prepare(req);
    int err;

    pthread_mutex_lock(&f->lock);
    struct fuse_intr_data d;
    fuse_prepare_interrupt(f, req, &d);
    err = fuse_fs_setvolname(f->fs, volname);
    pthread_mutex_unlock(&f->lock);
    fuse_finish_interrupt(f, req, &d);

    reply_err(req, err);
}

static void fuse_lib_exchange(fuse_req_t req, fuse_ino_t olddir,
                              const char *oldname, fuse_ino_t newdir,
                              const char *newname, unsigned long options)
{
    struct fuse *f = req_fuse_prepare(req);
    char *oldpath;
    char *newpath;
    int err;

    err = get_path_name(f, olddir, oldname, &oldpath);
    if (!err) {
        err = get_path_name(f, newdir, newname, &newpath);
        if (!err) {
            struct fuse_intr_data d;
            if (f->conf.debug)
                fprintf(stderr, "EXCHANGE %s -> %s\n", oldpath, newpath);
            err = 0;
            fuse_prepare_interrupt(f, req, &d);
            if (!err) {
                err = fuse_fs_exchange(f->fs,
                                       oldpath, newpath, options);
                if (!err)
                    err = exchange_node(f, olddir, oldname, newdir,
                                        newname, options);
            }
            fuse_finish_interrupt(f, req, &d);
            free_path(f, newdir, newpath);
        }
        free_path(f, olddir, oldpath);
    }
    reply_err(req, err);
}

static void fuse_lib_getxtimes(fuse_req_t req, fuse_ino_t ino,
                               struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct timespec bkuptime;
    struct timespec crtime;
    char *path;
    int err;
    (void) fi;

    memset(&bkuptime, 0, sizeof(bkuptime));
    memset(&crtime, 0, sizeof(crtime));

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_getxtimes(f->fs, path, &bkuptime, &crtime);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    if (!err) {
        fuse_reply_xtimes(req, &bkuptime, &crtime);
    } else
        reply_err(req, err);
}

#endif /* __APPLE__ */

static void fuse_lib_link(fuse_req_t req, fuse_ino_t ino,
                          fuse_ino_t newparent, const char *newname)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_entry_param e;
    char *oldpath;
    char *newpath;
    int err;

    err = get_path2(f, ino, NULL, newparent, newname,
                    &oldpath, &newpath, NULL, NULL);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_link(f->fs, oldpath, newpath);
        if (!err)
            err = lookup_path(f, newparent, newname, newpath, &e, NULL);
        fuse_finish_interrupt(f, req, &d);
        free_path2(f, ino, newparent, NULL, NULL, oldpath, newpath);
    }
    reply_entry(req, &e, err);
}

static void fuse_do_release(struct fuse *f, fuse_ino_t ino,
                            const char *path, struct fuse_file_info *fi)
{
    struct node *node;
    int unlink_hidden = 0;

    fuse_fs_release(f->fs, (path || f->nullpath_ok) ?
                    path : "-", fi);

    pthread_mutex_lock(&f->lock);
    node = get_node(f, ino);
    assert(node->open_count > 0);
    --node->open_count;
    if (node->is_hidden && !node->open_count) {
        unlink_hidden = 1;
        node->is_hidden = 0;
    }
    pthread_mutex_unlock(&f->lock);

    if (unlink_hidden && path)
        fuse_fs_unlink(f->fs, path);
}

static void fuse_lib_create(fuse_req_t req, fuse_ino_t parent,
                            const char *name, mode_t mode,
                            struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_intr_data d;
    struct fuse_entry_param e;
    char *path;
    int err;

    err = get_path_name(f, parent, name, &path);
    if (!err) {
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_create(f->fs, path, mode, fi);
        if (!err) {
            err = lookup_path(f, parent, name, path, &e, fi);
            if (err)
                fuse_fs_release(f->fs, path, fi);
            else if (!S_ISREG(e.attr.st_mode)) {
                err = -EIO;
                fuse_fs_release(f->fs, path, fi);
                forget_node(f, e.ino, 1);
            } else {
                if (f->conf.direct_io)
                    fi->direct_io = 1;
                if (f->conf.kernel_cache)
                    fi->keep_cache = 1;
            }
        }
        fuse_finish_interrupt(f, req, &d);
    }
    if (!err) {
        pthread_mutex_lock(&f->lock);
        get_node(f, e.ino)->open_count++;
        pthread_mutex_unlock(&f->lock);
        if (fuse_reply_create(req, &e, fi) == -ENOENT) {
            /* The open syscall was interrupted, so it
               must be cancelled */
            fuse_prepare_interrupt(f, req, &d);
            fuse_do_release(f, e.ino, path, fi);
            fuse_finish_interrupt(f, req, &d);
            forget_node(f, e.ino, 1);
        }
    } else {
        reply_err(req, err);
    }
    free_path(f, parent, path);
}

static double diff_timespec(const struct timespec *t1,
                            const struct timespec *t2)
{
    return (t1->tv_sec - t2->tv_sec) +
        ((double) t1->tv_nsec - (double) t2->tv_nsec) / 1000000000.0;
}

static void open_auto_cache(struct fuse *f, fuse_ino_t ino,
                            const char *path, struct fuse_file_info *fi)
{
    struct node *node;

    pthread_mutex_lock(&f->lock);
    node = get_node(f, ino);
    if (node->cache_valid) {
        struct timespec now;

        curr_time(&now);
        if (diff_timespec(&now, &node->stat_updated) >
            f->conf.ac_attr_timeout) {
            struct stat stbuf;
            int err;
            pthread_mutex_unlock(&f->lock);
            err = fuse_fs_fgetattr(f->fs, path, &stbuf, fi);
            pthread_mutex_lock(&f->lock);
#ifdef __APPLE__
            if (!err) {
                if (stbuf.st_size != node->size)
                    fi->purge_attr = 1;
                update_stat(node, &stbuf);
            } else
                node->cache_valid = 0;
#else
            if (!err)
                update_stat(node, &stbuf);
            else
                node->cache_valid = 0;
#endif /* __APPLE__ */
        }
    }
    if (node->cache_valid)
        fi->keep_cache = 1;
#ifdef __APPLE__
    else
        fi->purge_ubc = 1;
#endif /* __APPLE__ */

    node->cache_valid = 1;
    pthread_mutex_unlock(&f->lock);
}

static void fuse_lib_open(fuse_req_t req, fuse_ino_t ino,
                          struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_intr_data d;
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_open(f->fs, path, fi);
        if (!err) {
            if (f->conf.direct_io)
                fi->direct_io = 1;
            if (f->conf.kernel_cache)
                fi->keep_cache = 1;
            if (f->conf.auto_cache)
                open_auto_cache(f, ino, path, fi);
        }
        fuse_finish_interrupt(f, req, &d);
    }
    if (!err) {
        pthread_mutex_lock(&f->lock);
        get_node(f, ino)->open_count++;
        pthread_mutex_unlock(&f->lock);
        if (fuse_reply_open(req, fi) == -ENOENT) {
            /* The open syscall was interrupted, so it
               must be cancelled */
            fuse_prepare_interrupt(f, req, &d);
            fuse_do_release(f, ino, path, fi);
            fuse_finish_interrupt(f, req, &d);
        }
    } else
        reply_err(req, err);
    free_path(f, ino, path);
}

static void fuse_lib_read(fuse_req_t req, fuse_ino_t ino, size_t size,
                          off_t off, struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    char *path;
    char *buf;
    int res;

    buf = (char *) malloc(size);
    if (buf == NULL) {
        reply_err(req, -ENOMEM);
        return;
    }

    res = get_path_nullok(f, ino, &path);
    if (res == 0) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        res = fuse_fs_read(f->fs, path, buf, size, off, fi);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }

    if (res >= 0)
        fuse_reply_buf(req, buf, res);
    else
        reply_err(req, res);
    free(buf);
}

static void fuse_lib_write(fuse_req_t req,
                           fuse_ino_t ino, const char *buf, size_t size,
                           off_t off, struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    char *path;
    int res;

    res = get_path_nullok(f, ino, &path);
    if (res == 0) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        res = fuse_fs_write(f->fs, path, buf, size, off, fi);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }

    if (res >= 0)
        fuse_reply_write(req, res);
    else
        reply_err(req, res);
}

static void fuse_lib_fsync(fuse_req_t req, fuse_ino_t ino,
                           int datasync, struct fuse_file_info *fi)
{
    struct fuse *f = req_fuse_prepare(req);
    char *path;
    int err;

    err = get_path_nullok(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_fsync(f->fs, path, datasync, fi);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    reply_err(req, err);
}

static struct fuse_dh *get_dirhandle(const struct fuse_file_info *llfi,
                                     struct fuse_file_info *fi)
{
    struct fuse_dh *dh = (struct fuse_dh *) (uintptr_t) llfi->fh;
    memset(fi, 0, sizeof(struct fuse_file_info));
    fi->fh = dh->fh;
    fi->fh_old = dh->fh;
    return dh;
}

static void fuse_lib_opendir(fuse_req_t req, fuse_ino_t ino,
                             struct fuse_file_info *llfi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_intr_data d;
    struct fuse_dh *dh;
    struct fuse_file_info fi;
    char *path;
    int err;

    dh = (struct fuse_dh *) malloc(sizeof(struct fuse_dh));
    if (dh == NULL) {
        reply_err(req, -ENOMEM);
        return;
    }
    memset(dh, 0, sizeof(struct fuse_dh));
    dh->fuse = f;
    dh->contents = NULL;
    dh->len = 0;
    dh->filled = 0;
    dh->nodeid = ino;
    fuse_mutex_init(&dh->lock);

    llfi->fh = (uintptr_t) dh;

    memset(&fi, 0, sizeof(fi));
    fi.flags = llfi->flags;

    err = get_path(f, ino, &path);
    if (!err) {
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_opendir(f->fs, path, &fi);
        fuse_finish_interrupt(f, req, &d);
        dh->fh = fi.fh;
    }
    if (!err) {
        if (fuse_reply_open(req, llfi) == -ENOENT) {
            /* The opendir syscall was interrupted, so it
               must be cancelled */
            fuse_prepare_interrupt(f,
                                   req, &d);
            fuse_fs_releasedir(f->fs, path, &fi);
            fuse_finish_interrupt(f, req, &d);
            pthread_mutex_destroy(&dh->lock);
            free(dh);
        }
    } else {
        reply_err(req, err);
        pthread_mutex_destroy(&dh->lock);
        free(dh);
    }
    free_path(f, ino, path);
}

static int extend_contents(struct fuse_dh *dh, unsigned minsize)
{
    if (minsize > dh->size) {
        char *newptr;
        unsigned newsize = dh->size;
        if (!newsize)
            newsize = 1024;
        while (newsize < minsize) {
            if (newsize >= 0x80000000)
                newsize = 0xffffffff;
            else
                newsize *= 2;
        }

        newptr = (char *) realloc(dh->contents, newsize);
        if (!newptr) {
            dh->error = -ENOMEM;
            return -1;
        }
        dh->contents = newptr;
        dh->size = newsize;
    }
    return 0;
}

static int fill_dir(void *dh_, const char *name, const struct stat *statp,
                    off_t off)
{
    struct fuse_dh *dh = (struct fuse_dh *) dh_;
    struct stat stbuf;
    size_t newlen;

    if (statp)
        stbuf = *statp;
    else {
        memset(&stbuf, 0, sizeof(stbuf));
        stbuf.st_ino = FUSE_UNKNOWN_INO;
    }

    if (!dh->fuse->conf.use_ino) {
        stbuf.st_ino = FUSE_UNKNOWN_INO;
        if (dh->fuse->conf.readdir_ino) {
            struct node *node;
            pthread_mutex_lock(&dh->fuse->lock);
            node = lookup_node(dh->fuse, dh->nodeid, name);
            if (node)
                stbuf.st_ino = (ino_t) node->nodeid;
            pthread_mutex_unlock(&dh->fuse->lock);
        }
    }

    if (off) {
        if (extend_contents(dh, dh->needlen) == -1)
            return 1;

        dh->filled = 0;
        newlen = dh->len +
            fuse_add_direntry(dh->req, dh->contents + dh->len,
                              dh->needlen - dh->len, name, &stbuf, off);
        if (newlen > dh->needlen)
            return 1;
    } else {
        newlen = dh->len +
            fuse_add_direntry(dh->req, NULL, 0, name, NULL, 0);
        if (extend_contents(dh, newlen) == -1)
            return 1;

        fuse_add_direntry(dh->req, dh->contents + dh->len,
                          dh->size - dh->len, name, &stbuf, newlen);
    }
    dh->len = newlen;
    return 0;
}

static int readdir_fill(struct fuse *f, fuse_req_t req, fuse_ino_t ino,
                        size_t size, off_t off, struct fuse_dh *dh,
                        struct fuse_file_info *fi)
{
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        dh->len = 0;
        dh->error = 0;
        dh->needlen = size;
        dh->filled = 1;
        dh->req = req;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_readdir(f->fs, path, dh, fill_dir, off, fi);
        fuse_finish_interrupt(f, req, &d);
        dh->req = NULL;
        if (!err)
            err = dh->error;
        if (err)
            dh->filled = 0;
        free_path(f, ino, path);
    }
    return err;
}

static void fuse_lib_readdir(fuse_req_t req, fuse_ino_t ino, size_t size,
                             off_t off, struct fuse_file_info *llfi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_file_info fi;
    struct fuse_dh *dh = get_dirhandle(llfi, &fi);

    pthread_mutex_lock(&dh->lock);
    /* According to SUS, directory contents need to be refreshed on
       rewinddir() */
    if (!off)
        dh->filled = 0;

    if (!dh->filled) {
        int err = readdir_fill(f, req, ino, size, off, dh, &fi);
        if (err) {
            reply_err(req, err);
            goto out;
        }
    }
    if (dh->filled) {
        if (off < dh->len) {
            if (off + size > dh->len)
                size = dh->len - off;
        } else
            size = 0;
    } else {
        size = dh->len;
        off = 0;
    }
    fuse_reply_buf(req, dh->contents + off, size);
out:
    pthread_mutex_unlock(&dh->lock);
}

static void fuse_lib_releasedir(fuse_req_t req, fuse_ino_t ino,
                                struct fuse_file_info *llfi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_intr_data d;
    struct fuse_file_info fi;
    struct fuse_dh *dh = get_dirhandle(llfi, &fi);
    char *path;

    get_path(f, ino, &path);

    fuse_prepare_interrupt(f, req, &d);
    fuse_fs_releasedir(f->fs, (path || f->nullpath_ok) ?
                       path : "-", &fi);
    fuse_finish_interrupt(f, req, &d);
    free_path(f, ino, path);

    pthread_mutex_lock(&dh->lock);
    pthread_mutex_unlock(&dh->lock);
    pthread_mutex_destroy(&dh->lock);
    free(dh->contents);
    free(dh);
    reply_err(req, 0);
}

static void fuse_lib_fsyncdir(fuse_req_t req, fuse_ino_t ino,
                              int datasync, struct fuse_file_info *llfi)
{
    struct fuse *f = req_fuse_prepare(req);
    struct fuse_file_info fi;
    char *path;
    int err;

    get_dirhandle(llfi, &fi);

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_fsyncdir(f->fs, path, datasync, &fi);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    reply_err(req, err);
}

static void fuse_lib_statfs(fuse_req_t req, fuse_ino_t ino)
{
    struct fuse *f = req_fuse_prepare(req);
    struct statvfs buf;
    char *path = NULL;
    int err = 0;

    memset(&buf, 0, sizeof(buf));
    if (ino)
        err = get_path(f, ino, &path);

    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_statfs(f->fs, path ?
                             path : "/", &buf);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }

    if (!err)
        fuse_reply_statfs(req, &buf);
    else
        reply_err(req, err);
}

#ifdef __APPLE__
static void fuse_lib_setxattr(fuse_req_t req, fuse_ino_t ino,
                              const char *name, const char *value,
                              size_t size, int flags, uint32_t position)
#else
static void fuse_lib_setxattr(fuse_req_t req, fuse_ino_t ino,
                              const char *name, const char *value,
                              size_t size, int flags)
#endif /* __APPLE__ */
{
    struct fuse *f = req_fuse_prepare(req);
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
#ifdef __APPLE__
        err = fuse_fs_setxattr(f->fs, path, name, value, size, flags,
                               position);
#else
        err = fuse_fs_setxattr(f->fs, path, name, value, size, flags);
#endif /* __APPLE__ */
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    reply_err(req, err);
}

#ifdef __APPLE__
static int common_getxattr(struct fuse *f, fuse_req_t req, fuse_ino_t ino,
                           const char *name, char *value, size_t size,
                           uint32_t position)
#else
static int common_getxattr(struct fuse *f, fuse_req_t req, fuse_ino_t ino,
                           const char *name, char *value, size_t size)
#endif /* __APPLE__ */
{
    int err;
    char *path;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
#ifdef __APPLE__
        err = fuse_fs_getxattr(f->fs, path, name, value, size, position);
#else
        err = fuse_fs_getxattr(f->fs, path, name, value, size);
#endif /* __APPLE__ */
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    return err;
}

#ifdef __APPLE__
static void fuse_lib_getxattr(fuse_req_t req, fuse_ino_t ino,
                              const char *name, size_t size,
                              uint32_t position)
#else
static void fuse_lib_getxattr(fuse_req_t req, fuse_ino_t ino,
                              const char *name, size_t size)
#endif /* __APPLE__ */
{
    struct fuse *f = req_fuse_prepare(req);
    int res;

    if (size) {
        char *value = (char *) malloc(size);
        if (value == NULL) {
            reply_err(req, -ENOMEM);
            return;
        }
#ifdef __APPLE__
        res =
            common_getxattr(f, req, ino, name, value, size, position);
#else
        res = common_getxattr(f, req, ino, name, value, size);
#endif /* __APPLE__ */
        if (res > 0)
            fuse_reply_buf(req, value, res);
        else
            reply_err(req, res);
        free(value);
    } else {
#ifdef __APPLE__
        res = common_getxattr(f, req, ino, name, NULL, 0, position);
#else
        res = common_getxattr(f, req, ino, name, NULL, 0);
#endif /* __APPLE__ */
        if (res >= 0)
            fuse_reply_xattr(req, res);
        else
            reply_err(req, res);
    }
}

static int common_listxattr(struct fuse *f, fuse_req_t req, fuse_ino_t ino,
                            char *list, size_t size)
{
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_listxattr(f->fs, path, list, size);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    return err;
}

static void fuse_lib_listxattr(fuse_req_t req, fuse_ino_t ino, size_t size)
{
    struct fuse *f = req_fuse_prepare(req);
    int res;

    if (size) {
        char *list = (char *) malloc(size);
        if (list == NULL) {
            reply_err(req, -ENOMEM);
            return;
        }
        res = common_listxattr(f, req, ino, list, size);
        if (res > 0)
            fuse_reply_buf(req, list, res);
        else
            reply_err(req, res);
        free(list);
    } else {
        res = common_listxattr(f, req, ino, NULL, 0);
        if (res >= 0)
            fuse_reply_xattr(req, res);
        else
            reply_err(req, res);
    }
}

static void fuse_lib_removexattr(fuse_req_t req, fuse_ino_t ino,
                                 const char *name)
{
    struct fuse *f = req_fuse_prepare(req);
    char *path;
    int err;

    err = get_path(f, ino, &path);
    if (!err) {
        struct fuse_intr_data d;
        fuse_prepare_interrupt(f, req, &d);
        err = fuse_fs_removexattr(f->fs, path, name);
        fuse_finish_interrupt(f, req, &d);
        free_path(f, ino, path);
    }
    reply_err(req, err);
}

static struct lock *locks_conflict(struct node *node, const struct lock *lock)
{
    struct lock *l;

    for (l = node->locks; l; l = l->next)
        if (l->owner != lock->owner &&
            lock->start <= l->end && l->start <= lock->end &&
            (l->type == F_WRLCK || lock->type == F_WRLCK))
            break;

    return l;
}

static void
delete_lock(struct lock **lockp)
{
    struct lock *l = *lockp;
    *lockp = l->next;
    free(l);
}

static void insert_lock(struct lock **pos, struct lock *lock)
{
    lock->next = *pos;
    *pos = lock;
}

static int locks_insert(struct node *node, struct lock *lock)
{
    struct lock **lp;
    struct lock *newl1 = NULL;
    struct lock *newl2 = NULL;

    if (lock->type != F_UNLCK || lock->start != 0 ||
        lock->end != OFFSET_MAX) {
        newl1 = malloc(sizeof(struct lock));
        newl2 = malloc(sizeof(struct lock));

        if (!newl1 || !newl2) {
            free(newl1);
            free(newl2);
            return -ENOLCK;
        }
    }

    for (lp = &node->locks; *lp;) {
        struct lock *l = *lp;
        if (l->owner != lock->owner)
            goto skip;

        if (lock->type == l->type) {
            if (l->end < lock->start - 1)
                goto skip;
            if (lock->end < l->start - 1)
                break;
            if (l->start <= lock->start && lock->end <= l->end)
                goto out;
            if (l->start < lock->start)
                lock->start = l->start;
            if (lock->end < l->end)
                lock->end = l->end;
            goto delete;
        } else {
            if (l->end < lock->start)
                goto skip;
            if (lock->end < l->start)
                break;
            if (lock->start <= l->start && l->end <= lock->end)
                goto delete;
            if (l->end <= lock->end) {
                l->end = lock->start - 1;
                goto skip;
            }
            if (lock->start <= l->start) {
                l->start = lock->end + 1;
                break;
            }
            *newl2 = *l;
            newl2->start = lock->end + 1;
            l->end = lock->start - 1;
            insert_lock(&l->next, newl2);
            newl2 = NULL;
        }
    skip:
        lp = &l->next;
        continue;

    delete:
        delete_lock(lp);
    }
    if (lock->type != F_UNLCK) {
        *newl1 = *lock;
        insert_lock(lp, newl1);
        newl1 = NULL;
    }
out:
    free(newl1);
    free(newl2);
    return 0;
}

static void flock_to_lock(struct flock *flock, struct lock *lock)
{
    memset(lock, 0, sizeof(struct lock));
    lock->type = flock->l_type;
    lock->start = flock->l_start;
    lock->end = flock->l_len ?
        flock->l_start + flock->l_len - 1 : OFFSET_MAX;
    lock->pid = flock->l_pid;
}

static void lock_to_flock(struct lock *lock, struct flock *flock)
{
    flock->l_type = lock->type;
    flock->l_start = lock->start;
    flock->l_len = (lock->end == OFFSET_MAX) ?
0 : lock->end - lock->start + 1; flock->l_pid = lock->pid; } static int fuse_flush_common(struct fuse *f, fuse_req_t req, fuse_ino_t ino, const char *path, struct fuse_file_info *fi) { struct fuse_intr_data d; struct flock lock; struct lock l; int err; int errlock; fuse_prepare_interrupt(f, req, &d); memset(&lock, 0, sizeof(lock)); lock.l_type = F_UNLCK; lock.l_whence = SEEK_SET; err = fuse_fs_flush(f->fs, path, fi); errlock = fuse_fs_lock(f->fs, path, fi, F_SETLK, &lock); fuse_finish_interrupt(f, req, &d); if (errlock != -ENOSYS) { flock_to_lock(&lock, &l); l.owner = fi->lock_owner; pthread_mutex_lock(&f->lock); locks_insert(get_node(f, ino), &l); pthread_mutex_unlock(&f->lock); /* if op.lock() is defined FLUSH is needed regardless of op.flush() */ if (err == -ENOSYS) err = 0; } return err; } static void fuse_lib_release(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi) { struct fuse *f = req_fuse_prepare(req); struct fuse_intr_data d; char *path; int err = 0; get_path(f, ino, &path); if (fi->flush) { err = fuse_flush_common(f, req, ino, path, fi); if (err == -ENOSYS) err = 0; } fuse_prepare_interrupt(f, req, &d); fuse_do_release(f, ino, path, fi); fuse_finish_interrupt(f, req, &d); free_path(f, ino, path); reply_err(req, err); } static void fuse_lib_flush(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi) { struct fuse *f = req_fuse_prepare(req); char *path; int err; get_path(f, ino, &path); err = fuse_flush_common(f, req, ino, path, fi); free_path(f, ino, path); reply_err(req, err); } static int fuse_lock_common(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi, struct flock *lock, int cmd) { struct fuse *f = req_fuse_prepare(req); char *path; int err; err = get_path_nullok(f, ino, &path); if (!err) { struct fuse_intr_data d; fuse_prepare_interrupt(f, req, &d); err = fuse_fs_lock(f->fs, path, fi, cmd, lock); fuse_finish_interrupt(f, req, &d); free_path(f, ino, path); } return err; } static void fuse_lib_getlk(fuse_req_t req, fuse_ino_t 
ino, struct fuse_file_info *fi, struct flock *lock) { int err; struct lock l; struct lock *conflict; struct fuse *f = req_fuse(req); flock_to_lock(lock, &l); l.owner = fi->lock_owner; pthread_mutex_lock(&f->lock); conflict = locks_conflict(get_node(f, ino), &l); if (conflict) lock_to_flock(conflict, lock); pthread_mutex_unlock(&f->lock); if (!conflict) err = fuse_lock_common(req, ino, fi, lock, F_GETLK); else err = 0; if (!err) fuse_reply_lock(req, lock); else reply_err(req, err); } static void fuse_lib_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi, struct flock *lock, int sleep) { int err = fuse_lock_common(req, ino, fi, lock, sleep ? F_SETLKW : F_SETLK); if (!err) { struct fuse *f = req_fuse(req); struct lock l; flock_to_lock(lock, &l); l.owner = fi->lock_owner; pthread_mutex_lock(&f->lock); locks_insert(get_node(f, ino), &l); pthread_mutex_unlock(&f->lock); } reply_err(req, err); } static void fuse_lib_bmap(fuse_req_t req, fuse_ino_t ino, size_t blocksize, uint64_t idx) { struct fuse *f = req_fuse_prepare(req); struct fuse_intr_data d; char *path; int err; err = get_path(f, ino, &path); if (!err) { fuse_prepare_interrupt(f, req, &d); err = fuse_fs_bmap(f->fs, path, blocksize, &idx); fuse_finish_interrupt(f, req, &d); free_path(f, ino, path); } if (!err) fuse_reply_bmap(req, idx); else reply_err(req, err); } static void fuse_lib_ioctl(fuse_req_t req, fuse_ino_t ino, int cmd, void *arg, struct fuse_file_info *fi, unsigned int flags, const void *in_buf, size_t in_bufsz, size_t out_bufsz) { struct fuse *f = req_fuse_prepare(req); struct fuse_intr_data d; char *path, *out_buf = NULL; int err; err = -EPERM; if (flags & FUSE_IOCTL_UNRESTRICTED) goto err; if (out_bufsz) { err = -ENOMEM; out_buf = malloc(out_bufsz); if (!out_buf) goto err; } assert(!in_bufsz || !out_bufsz || in_bufsz == out_bufsz); if (out_buf) memcpy(out_buf, in_buf, in_bufsz); err = get_path(f, ino, &path); if (err) goto err; fuse_prepare_interrupt(f, req, &d); err = 
fuse_fs_ioctl(f->fs, path, cmd, arg, fi, flags, out_buf ?: (void *)in_buf); fuse_finish_interrupt(f, req, &d); free_path(f, ino, path); fuse_reply_ioctl(req, err, out_buf, out_bufsz); goto out; err: reply_err(req, err); out: free(out_buf); } static void fuse_lib_poll(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi, struct fuse_pollhandle *ph) { struct fuse *f = req_fuse_prepare(req); struct fuse_intr_data d; char *path; int ret; unsigned revents = 0; ret = get_path(f, ino, &path); if (!ret) { fuse_prepare_interrupt(f, req, &d); ret = fuse_fs_poll(f->fs, path, fi, ph, &revents); fuse_finish_interrupt(f, req, &d); free_path(f, ino, path); } if (!ret) fuse_reply_poll(req, revents); else reply_err(req, ret); } static struct fuse_lowlevel_ops fuse_path_ops = { .init = fuse_lib_init, .destroy = fuse_lib_destroy, .lookup = fuse_lib_lookup, .forget = fuse_lib_forget, .getattr = fuse_lib_getattr, .setattr = fuse_lib_setattr, .access = fuse_lib_access, .readlink = fuse_lib_readlink, .mknod = fuse_lib_mknod, .mkdir = fuse_lib_mkdir, .unlink = fuse_lib_unlink, .rmdir = fuse_lib_rmdir, .symlink = fuse_lib_symlink, .rename = fuse_lib_rename, .link = fuse_lib_link, .create = fuse_lib_create, .open = fuse_lib_open, .read = fuse_lib_read, .write = fuse_lib_write, .flush = fuse_lib_flush, .release = fuse_lib_release, .fsync = fuse_lib_fsync, .opendir = fuse_lib_opendir, .readdir = fuse_lib_readdir, .releasedir = fuse_lib_releasedir, .fsyncdir = fuse_lib_fsyncdir, .statfs = fuse_lib_statfs, .setxattr = fuse_lib_setxattr, .getxattr = fuse_lib_getxattr, .listxattr = fuse_lib_listxattr, .removexattr = fuse_lib_removexattr, .getlk = fuse_lib_getlk, .setlk = fuse_lib_setlk, .bmap = fuse_lib_bmap, .ioctl = fuse_lib_ioctl, .poll = fuse_lib_poll, #ifdef __APPLE__ .setvolname = fuse_lib_setvolname, .exchange = fuse_lib_exchange, .getxtimes = fuse_lib_getxtimes, .setattr_x = fuse_lib_setattr_x, #endif }; int fuse_notify_poll(struct fuse_pollhandle *ph) { return 
fuse_lowlevel_notify_poll(ph); } static void free_cmd(struct fuse_cmd *cmd) { free(cmd->buf); free(cmd); } void fuse_process_cmd(struct fuse *f, struct fuse_cmd *cmd) { fuse_session_process(f->se, cmd->buf, cmd->buflen, cmd->ch); free_cmd(cmd); } int fuse_exited(struct fuse *f) { return fuse_session_exited(f->se); } struct fuse_session *fuse_get_session(struct fuse *f) { return f->se; } static struct fuse_cmd *fuse_alloc_cmd(size_t bufsize) { struct fuse_cmd *cmd = (struct fuse_cmd *) malloc(sizeof(*cmd)); if (cmd == NULL) { fprintf(stderr, "fuse: failed to allocate cmd\n"); return NULL; } cmd->buf = (char *) malloc(bufsize); if (cmd->buf == NULL) { fprintf(stderr, "fuse: failed to allocate read buffer\n"); free(cmd); return NULL; } return cmd; } struct fuse_cmd *fuse_read_cmd(struct fuse *f) { struct fuse_chan *ch = fuse_session_next_chan(f->se, NULL); size_t bufsize = fuse_chan_bufsize(ch); struct fuse_cmd *cmd = fuse_alloc_cmd(bufsize); if (cmd != NULL) { int res = fuse_chan_recv(&ch, cmd->buf, bufsize); if (res <= 0) { free_cmd(cmd); if (res < 0 && res != -EINTR && res != -EAGAIN) fuse_exit(f); return NULL; } cmd->buflen = res; cmd->ch = ch; } return cmd; } int fuse_loop(struct fuse *f) { if (f) return fuse_session_loop(f->se); else return -1; } int fuse_invalidate(struct fuse *f, const char *path) { (void) f; (void) path; return -EINVAL; } void fuse_exit(struct fuse *f) { fuse_session_exit(f->se); } struct fuse_context *fuse_get_context(void) { return &fuse_get_context_internal()->ctx; } /* * The size of fuse_context got extended, so need to be careful about * incompatibility (i.e. a new binary cannot work with an old * library). 
*/ struct fuse_context *fuse_get_context_compat22(void); struct fuse_context *fuse_get_context_compat22(void) { return &fuse_get_context_internal()->ctx; } FUSE_SYMVER(".symver fuse_get_context_compat22,fuse_get_context@FUSE_2.2"); int fuse_getgroups(int size, gid_t list[]) { fuse_req_t req = fuse_get_context_internal()->req; return fuse_req_getgroups(req, size, list); } int fuse_interrupted(void) { return fuse_req_interrupted(fuse_get_context_internal()->req); } void fuse_set_getcontext_func(struct fuse_context *(*func)(void)) { (void) func; /* no-op */ } enum { KEY_HELP, }; #define FUSE_LIB_OPT(t, p, v) { t, offsetof(struct fuse_config, p), v } static const struct fuse_opt fuse_lib_opts[] = { FUSE_OPT_KEY("-h", KEY_HELP), FUSE_OPT_KEY("--help", KEY_HELP), FUSE_OPT_KEY("debug", FUSE_OPT_KEY_KEEP), FUSE_OPT_KEY("-d", FUSE_OPT_KEY_KEEP), FUSE_LIB_OPT("debug", debug, 1), FUSE_LIB_OPT("-d", debug, 1), FUSE_LIB_OPT("hard_remove", hard_remove, 1), FUSE_LIB_OPT("use_ino", use_ino, 1), FUSE_LIB_OPT("readdir_ino", readdir_ino, 1), FUSE_LIB_OPT("direct_io", direct_io, 1), FUSE_LIB_OPT("kernel_cache", kernel_cache, 1), FUSE_LIB_OPT("auto_cache", auto_cache, 1), FUSE_LIB_OPT("noauto_cache", auto_cache, 0), FUSE_LIB_OPT("umask=", set_mode, 1), FUSE_LIB_OPT("umask=%o", umask, 0), FUSE_LIB_OPT("uid=", set_uid, 1), FUSE_LIB_OPT("uid=%d", uid, 0), FUSE_LIB_OPT("gid=", set_gid, 1), FUSE_LIB_OPT("gid=%d", gid, 0), FUSE_LIB_OPT("entry_timeout=%lf", entry_timeout, 0), FUSE_LIB_OPT("attr_timeout=%lf", attr_timeout, 0), FUSE_LIB_OPT("ac_attr_timeout=%lf", ac_attr_timeout, 0), FUSE_LIB_OPT("ac_attr_timeout=", ac_attr_timeout_set, 1), FUSE_LIB_OPT("negative_timeout=%lf", negative_timeout, 0), FUSE_LIB_OPT("noforget", noforget, 1), FUSE_LIB_OPT("intr", intr, 1), FUSE_LIB_OPT("intr_signal=%d", intr_signal, 0), FUSE_LIB_OPT("modules=%s", modules, 0), FUSE_OPT_END }; static void fuse_lib_help(void) { fprintf(stderr, " -o hard_remove immediate removal (don't hide files)\n" " -o use_ino let 
filesystem set inode numbers\n" " -o readdir_ino try to fill in d_ino in readdir\n" " -o direct_io use direct I/O\n" " -o kernel_cache cache files in kernel\n" " -o [no]auto_cache enable caching based on modification times (off)\n" " -o umask=M set file permissions (octal)\n" " -o uid=N set file owner\n" " -o gid=N set file group\n" " -o entry_timeout=T cache timeout for names (1.0s)\n" " -o negative_timeout=T cache timeout for deleted names (0.0s)\n" " -o attr_timeout=T cache timeout for attributes (1.0s)\n" " -o ac_attr_timeout=T auto cache timeout for attributes (attr_timeout)\n" " -o intr allow requests to be interrupted\n" " -o intr_signal=NUM signal to send on interrupt (%i)\n" " -o modules=M1[:M2...] names of modules to push onto filesystem stack\n" "\n", FUSE_DEFAULT_INTR_SIGNAL); } static void fuse_lib_help_modules(void) { struct fuse_module *m; fprintf(stderr, "\nModule options:\n"); pthread_mutex_lock(&fuse_context_lock); for (m = fuse_modules; m; m = m->next) { struct fuse_fs *fs = NULL; struct fuse_fs *newfs; struct fuse_args args = FUSE_ARGS_INIT(0, NULL); if (fuse_opt_add_arg(&args, "") != -1 && fuse_opt_add_arg(&args, "-h") != -1) { fprintf(stderr, "\n[%s]\n", m->name); newfs = m->factory(&args, &fs); assert(newfs == NULL); } fuse_opt_free_args(&args); } pthread_mutex_unlock(&fuse_context_lock); } static int fuse_lib_opt_proc(void *data, const char *arg, int key, struct fuse_args *outargs) { (void) arg; (void) outargs; if (key == KEY_HELP) { struct fuse_config *conf = (struct fuse_config *) data; fuse_lib_help(); conf->help = 1; } return 1; } int fuse_is_lib_option(const char *opt) { return fuse_lowlevel_is_lib_option(opt) || fuse_opt_match(fuse_lib_opts, opt); } static int fuse_init_intr_signal(int signum, int *installed) { struct sigaction old_sa; if (sigaction(signum, NULL, &old_sa) == -1) { perror("fuse: cannot get old signal handler"); return -1; } if (old_sa.sa_handler == SIG_DFL) { struct sigaction sa; memset(&sa, 0, sizeof(struct 
sigaction)); sa.sa_handler = fuse_intr_sighandler; sigemptyset(&sa.sa_mask); if (sigaction(signum, &sa, NULL) == -1) { perror("fuse: cannot set interrupt signal handler"); return -1; } *installed = 1; } return 0; } static void fuse_restore_intr_signal(int signum) { struct sigaction sa; memset(&sa, 0, sizeof(struct sigaction)); sa.sa_handler = SIG_DFL; sigaction(signum, &sa, NULL); } static int fuse_push_module(struct fuse *f, const char *module, struct fuse_args *args) { struct fuse_fs *fs[2] = { f->fs, NULL }; struct fuse_fs *newfs; struct fuse_module *m = fuse_get_module(module); if (!m) return -1; newfs = m->factory(args, fs); if (!newfs) { fuse_put_module(m); return -1; } newfs->m = m; f->fs = newfs; f->nullpath_ok = newfs->op.flag_nullpath_ok && f->nullpath_ok; return 0; } struct fuse_fs *fuse_fs_new(const struct fuse_operations *op, size_t op_size, void *user_data) { struct fuse_fs *fs; if (sizeof(struct fuse_operations) < op_size) { fprintf(stderr, "fuse: warning: library too old, some operations may not not work\n"); op_size = sizeof(struct fuse_operations); } fs = (struct fuse_fs *) calloc(1, sizeof(struct fuse_fs)); if (!fs) { fprintf(stderr, "fuse: failed to allocate fuse_fs object\n"); return NULL; } fs->user_data = user_data; #ifdef __APPLE__ fs->fuse = NULL; #endif /* __APPLE__ */ if (op) memcpy(&fs->op, op, op_size); return fs; } struct fuse *fuse_new_common(struct fuse_chan *ch, struct fuse_args *args, const struct fuse_operations *op, size_t op_size, void *user_data, int compat) { struct fuse *f; struct node *root; struct fuse_fs *fs; struct fuse_lowlevel_ops llop = fuse_path_ops; if (fuse_create_context_key() == -1) goto out; f = (struct fuse *) calloc(1, sizeof(struct fuse)); if (f == NULL) { fprintf(stderr, "fuse: failed to allocate fuse object\n"); goto out_delete_context_key; } fs = fuse_fs_new(op, op_size, user_data); if (!fs) goto out_free; fs->compat = compat; f->fs = fs; f->nullpath_ok = fs->op.flag_nullpath_ok; /* Oh f**k, this is ugly! 
*/ if (!fs->op.lock) { llop.getlk = NULL; llop.setlk = NULL; } f->conf.entry_timeout = 1.0; f->conf.attr_timeout = 1.0; f->conf.negative_timeout = 0.0; f->conf.intr_signal = FUSE_DEFAULT_INTR_SIGNAL; if (fuse_opt_parse(args, &f->conf, fuse_lib_opts, fuse_lib_opt_proc) == -1) goto out_free_fs; if (f->conf.modules) { char *module; char *next; for (module = f->conf.modules; module; module = next) { char *p; for (p = module; *p && *p != ':'; p++); next = *p ? p + 1 : NULL; *p = '\0'; if (module[0] && fuse_push_module(f, module, args) == -1) goto out_free_fs; } } if (!f->conf.ac_attr_timeout_set) f->conf.ac_attr_timeout = f->conf.attr_timeout; #if ( __FreeBSD__ || __APPLE__ ) /* * In FreeBSD, we always use these settings as inode numbers * are needed to make getcwd(3) work. */ f->conf.readdir_ino = 1; #endif if (compat && compat <= 25) { if (fuse_sync_compat_args(args) == -1) goto out_free_fs; } f->se = fuse_lowlevel_new_common(args, &llop, sizeof(llop), f); if (f->se == NULL) { if (f->conf.help) fuse_lib_help_modules(); goto out_free_fs; } fuse_session_add_chan(f->se, ch); if (f->conf.debug) fprintf(stderr, "nullpath_ok: %i\n", f->nullpath_ok); /* Trace topmost layer by default */ f->fs->debug = f->conf.debug; f->ctr = 0; f->generation = 0; /* FIXME: Dynamic hash table */ f->name_table_size = 14057; f->name_table = (struct node **) calloc(1, sizeof(struct node *) * f->name_table_size); if (f->name_table == NULL) { fprintf(stderr, "fuse: memory allocation failed\n"); goto out_free_session; } f->id_table_size = 14057; f->id_table = (struct node **) calloc(1, sizeof(struct node *) * f->id_table_size); if (f->id_table == NULL) { fprintf(stderr, "fuse: memory allocation failed\n"); goto out_free_name_table; } fuse_mutex_init(&f->lock); root = (struct node *) calloc(1, sizeof(struct node)); if (root == NULL) { fprintf(stderr, "fuse: memory allocation failed\n"); goto out_free_id_table; } root->name = strdup("/"); if (root->name == NULL) { fprintf(stderr, "fuse: memory 
allocation failed\n"); goto out_free_root; } if (f->conf.intr && fuse_init_intr_signal(f->conf.intr_signal, &f->intr_installed) == -1) goto out_free_root_name; root->parent = NULL; root->nodeid = FUSE_ROOT_ID; root->generation = 0; root->refctr = 1; root->nlookup = 1; hash_id(f, root); return f; out_free_root_name: free(root->name); out_free_root: free(root); out_free_id_table: free(f->id_table); out_free_name_table: free(f->name_table); out_free_session: fuse_session_destroy(f->se); out_free_fs: /* Horrible compatibility hack to stop the destructor from being called on the filesystem without init being called first */ fs->op.destroy = NULL; fuse_fs_destroy(f->fs); free(f->conf.modules); out_free: free(f); out_delete_context_key: fuse_delete_context_key(); out: return NULL; } struct fuse *fuse_new(struct fuse_chan *ch, struct fuse_args *args, const struct fuse_operations *op, size_t op_size, void *user_data) { return fuse_new_common(ch, args, op, op_size, user_data, 0); } void fuse_destroy(struct fuse *f) { size_t i; if (f->conf.intr && f->intr_installed) fuse_restore_intr_signal(f->conf.intr_signal); if (f->fs) { struct fuse_context_i *c = fuse_get_context_internal(); memset(c, 0, sizeof(*c)); c->ctx.fuse = f; for (i = 0; i < f->id_table_size; i++) { struct node *node; for (node = f->id_table[i]; node != NULL; node = node->id_next) { if (node->is_hidden) { char *path; if (try_get_path(f, node->nodeid, NULL, &path, NULL, 0) == 0) { fuse_fs_unlink(f->fs, path); free(path); } } } } } for (i = 0; i < f->id_table_size; i++) { struct node *node; struct node *next; for (node = f->id_table[i]; node != NULL; node = next) { next = node->id_next; free_node(node); } } free(f->id_table); free(f->name_table); pthread_mutex_destroy(&f->lock); fuse_session_destroy(f->se); free(f->conf.modules); free(f); fuse_delete_context_key(); } static struct fuse *fuse_new_common_compat25(int fd, struct fuse_args *args, const struct fuse_operations *op, size_t op_size, int compat) { struct 
fuse *f = NULL; struct fuse_chan *ch = fuse_kern_chan_new(fd); if (ch) f = fuse_new_common(ch, args, op, op_size, NULL, compat); return f; } /* called with fuse_context_lock held or during initialization (before main() has been called) */ void fuse_register_module(struct fuse_module *mod) { mod->ctr = 0; mod->so = fuse_current_so; if (mod->so) mod->so->ctr++; mod->next = fuse_modules; fuse_modules = mod; } #if (!__FreeBSD__ && !__APPLE__ ) static struct fuse *fuse_new_common_compat(int fd, const char *opts, const struct fuse_operations *op, size_t op_size, int compat) { struct fuse *f; struct fuse_args args = FUSE_ARGS_INIT(0, NULL); if (fuse_opt_add_arg(&args, "") == -1) return NULL; if (opts && (fuse_opt_add_arg(&args, "-o") == -1 || fuse_opt_add_arg(&args, opts) == -1)) { fuse_opt_free_args(&args); return NULL; } f = fuse_new_common_compat25(fd, &args, op, op_size, compat); fuse_opt_free_args(&args); return f; } struct fuse *fuse_new_compat22(int fd, const char *opts, const struct fuse_operations_compat22 *op, size_t op_size) { return fuse_new_common_compat(fd, opts, (struct fuse_operations *) op, op_size, 22); } struct fuse *fuse_new_compat2(int fd, const char *opts, const struct fuse_operations_compat2 *op) { return fuse_new_common_compat(fd, opts, (struct fuse_operations *) op, sizeof(struct fuse_operations_compat2), 21); } struct fuse *fuse_new_compat1(int fd, int flags, const struct fuse_operations_compat1 *op) { const char *opts = NULL; if (flags & FUSE_DEBUG_COMPAT1) opts = "debug"; return fuse_new_common_compat(fd, opts, (struct fuse_operations *) op, sizeof(struct fuse_operations_compat1), 11); } FUSE_SYMVER(".symver fuse_exited,__fuse_exited@"); FUSE_SYMVER(".symver fuse_process_cmd,__fuse_process_cmd@"); FUSE_SYMVER(".symver fuse_read_cmd,__fuse_read_cmd@"); FUSE_SYMVER(".symver fuse_set_getcontext_func,__fuse_set_getcontext_func@"); FUSE_SYMVER(".symver fuse_new_compat2,fuse_new@"); FUSE_SYMVER(".symver fuse_new_compat22,fuse_new@FUSE_2.2"); #endif 
/* !__FreeBSD__ && !__APPLE__ */ struct fuse *fuse_new_compat25(int fd, struct fuse_args *args, const struct fuse_operations_compat25 *op, size_t op_size) { return fuse_new_common_compat25(fd, args, (struct fuse_operations *) op, op_size, 25); } FUSE_SYMVER(".symver fuse_new_compat25,fuse_new@FUSE_2.5");
\begin{document} \title{Multistage Online Maxmin Allocation of \\ Indivisible Entities\thanks{Research supported by the Research Grants Council, Hong Kong, China (project no.~16207419).}} \author{Siu-Wing Cheng\footnote{Department of Computer Science and Engineering, HKUST, Hong Kong, China.}} \date{} \maketitle \begin{abstract} We consider an online allocation problem that involves a set $P$ of $n$ players and a set $E$ of $m$ indivisible entities over discrete time steps $1,2,\ldots,\tau$. At each time step $t \in [1,\tau]$, for every entity $e \in E$, there is a restriction list $L_t(e)$ that prescribes the subset of players to whom $e$ can be assigned and a non-negative value $v_t(e,p)$ of $e$ to every player $p \in P$. The sets $P$ and $E$ are fixed beforehand. The sets $L_t(\cdot)$ and values $v_t(\cdot,\cdot)$ are given in an online fashion. An allocation is a distribution of $E$ among $P$, and we are interested in the minimum total value of the entities received by a player according to the allocation. In the static case, it is NP-hard to find an optimal allocation that maximizes this minimum value. On the other hand, $\rho$-approximation algorithms have been developed for certain values of $\rho \in (0,1]$. We propose a $w$-lookahead algorithm for the multistage online maxmin allocation problem for any fixed $w \geqslant 1$ in which the restriction lists and values of entities may change between time steps, and there is a fixed stability reward for an entity to be assigned to the same player from one time step to the next. The objective is to maximize the sum of the minimum values and stability rewards over the time steps $1, 2, \ldots, \tau$. Our algorithm achieves a competitive ratio of $(1-c)\rho$, where $c$ is the positive root of the equation $wc^2 = \rho (w+1)(1-c)$.
When $w = 1$, it is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$, which improves upon the previous ratio of $\frac{\rho}{4\rho+2 - 2^{1-\tau}(2\rho+1)}$ obtained for the case of 1-lookahead. \end{abstract} \section{Introduction} Distributing a set $E$ of indivisible \emph{entities} to a set $P$ of \emph{players} is a very common optimization problem. The problem can model an assignment of non-preemptable computer jobs to machines, a division of tasks among workers, an allocation of classrooms to lectures, etc. The \emph{value} of an entity $e \in E$ to a player $p \in P$ is usually measured by a non-negative real number. In the single-shot case, the problem is to assign the entities to the players in order to optimize some function of the values of entities received by the players. Every entity is assigned to at most one player. The problem of maximizing the minimum total value of entities assigned to a player is known as the \emph{maxmin fair allocation} or the \emph{Santa Claus} problem. No polynomial-time algorithm can achieve an approximation ratio greater than $\frac{1}{2}$ unless P = NP~\cite{BD05}. An LP relaxation of the problem, called the configuration LP, has been developed; although its size is exponential, it can be solved by the ellipsoid method in polynomial time without an explicit construction of the entire LP~\cite{BS06}. A polynomial-time algorithm was developed to round the fractional solution of the configuration LP to obtain an $\Omega(n^{-1/2}\log^{-3} n)$ approximation ratio~\cite{AS10}. Subsequently, the approximation ratio was improved to $\Omega((n\log n)^{-1/2})$~\cite{SS}. A tradeoff was obtained in~\cite{CCK09} between the approximation ratio and the exponent in the polynomial running time:~for any $\eps \geqslant 9\log\log n/\log n$, an $\Omega(n^{-\eps})$-approximate allocation can be computed in $n^{O(1/\eps)}$ time.
An important special case, the \emph{restricted} maxmin allocation problem, is that for every entity $e$, the value of $e$ is the same for players who want it and zero for the other players. In this case, the configuration LP can be used to give an~$\Omega(\log\log\log n/\log\log n)$-approximate allocation~\cite{BS06}. Later, it was shown that the approximation ratio can be bounded by a large, unspecified constant~\cite{F,HSS}. Subsequently, for any $\delta \in (0,1)$, the approximation ratio has been improved to $\frac{1}{6 + 2\sqrt{10} + \delta}$ in~\cite{AKS}, $\frac{1}{6+\delta}$ in~\cite{CM18,DRZ18}, and $\frac{1}{4+\delta}$ in~\cite{CM19,DRZ20}. Recently, there has been interest in solving online optimization problems in a way that balances the optimality at each time step and the stability of the solutions between successive time steps~\cite{BEM18,BCN14,BCNS12,CCDL16,GTW14}. In the context of allocating indivisible entities, the following setting has been proposed in~\cite{BEM18}. The sets of players and entities are fixed over a time horizon $t = 1, 2, \ldots, \tau$. The value of $\tau$ may not be given in advance. At the current time $t$, for every entity $e$, we are given the restriction list $L_t(e)$ of players to whom $e$ can be assigned and the value $v_t(e,p)$ of $e$ for every player $p \in P$. We assume that $v_t(e,p) = 0$ if $p \not\in L_t(e)$. In the strict online setting, no further information is provided. In the $w$-lookahead setting for any $w \geqslant 1$, we are given $L_{t+i}(\cdot)$ and $v_{t+i}(\cdot,\cdot)$ for every $i \in [0,w]$ at time $t$. Note that $L_t(e) \subseteq P$; if $p \not= q$, $v_t(e,p)$ and $v_t(e,q)$ may be different; if $s \not= t$, $L_s(e)$ and $L_t(e)$ may be different and so may $v_s(e,p)$ and $v_t(e,p)$. At current time $t$, we need to decide irrevocably an allocation $A_t$ of the entities to the players so that the constraints given in $L_t(\cdot)$ are satisfied. 
The objective is to maximize $\sum_{t=1}^\tau \min_{p \in P} \bigl\{\sum_{(e,p) \in A_t} v_t(e,p)\bigr\} + \sum_{t=1}^{\tau-1} \Delta \cdot \bigl|A_t \cap A_{t+1}\bigr|$, where $\Delta$ is some fixed non-negative value specified by the user. The first term is the sum of the minimum total value of entities assigned to a player at each time $t$. A stability reward of $\Delta$ is given for keeping an entity at the same player between two successive time steps. The second term is the sum of all stability rewards over all entities and all pairs of successive time steps. The following results are obtained in~\cite{BEM18} for the multistage online maxmin allocation problem. Let $\mathcal{A}$ be a $\rho$-approximation algorithm for some $\rho \leqslant 1$ for the single-shot maxmin allocation problem. If $L_t(e) = P$ for every $e \in E$ and every $t \in [1,\tau]$, one can use $\mathcal{A}$ to obtain a competitive ratio of $\frac{\rho}{\rho+1}$. It takes $O(mn+T(m,n))$ time at each time step, where $T(m,n)$ denotes the running time of $\mathcal{A}$. When the restriction lists $L_t(\cdot)$ are arbitrary subsets of $P$, it is impossible to achieve a bounded competitive ratio in the strict online setting. On the other hand, using 1-lookahead, one can obtain a competitive ratio of $\frac{\rho}{4\rho + 2 - 2^{1-\tau}(2\rho+1)}$. It takes $O(mn + T(m,n))$ time at each time step. Two examples for the multistage online maxmin allocation problem are as follows. Given a set of computing servers and some daily analytic tasks, the goal is to assign the executions of these tasks to the servers so that the minimum utilization of a server is maximized. On each day, a task may only be executable at a particular subset of the servers due to resource requirements and data availability. Moreover, there is a fixed gain in system efficiency by executing the same task at the same server on two successive days.
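To make the objective concrete, the following Python sketch (our illustration, not code from~\cite{BEM18}) evaluates a fixed sequence of allocations under this objective; representing an allocation as a dictionary from entities to players is an assumption made here for clarity.

```python
def min_value(alloc, values, players):
    """nu(A_t): the minimum over players of the total value received.

    alloc maps entity -> player; values maps (entity, player) -> value.
    """
    totals = {p: 0.0 for p in players}
    for e, p in alloc.items():
        totals[p] += values.get((e, p), 0.0)
    return min(totals.values())

def multistage_objective(allocs, values, players, delta):
    """Sum of per-step minimum values plus the stability rewards
    delta * |A_t intersect A_{t+1}| over successive time steps."""
    obj = sum(min_value(A, v, players) for A, v in zip(allocs, values))
    for A, B in zip(allocs, allocs[1:]):
        obj += delta * sum(1 for e, p in A.items() if B.get(e) == p)
    return obj
```

For instance, keeping every entity at the same player across two successive steps earns $\Delta$ per entity on top of the two per-step minimum values.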
As the allocation of the daily analytic tasks to servers has to be performed quickly, one may choose $\mathcal{A}$ to be a polynomial-time approximation algorithm. Nevertheless, in some planning problems, one may have enough time to solve the single-shot maxmin allocation problem exactly. Consider an example in which a construction company is to produce assignments of engineers to different construction sites on an annual basis. Due to expertise and other considerations, an engineer can only work at a subset of the sites in the coming year. The company wants to maximize the minimum annual progress of a site, and there is a fixed gain in efficiency in keeping an engineer at the same site from one year to the next. If only a moderate number of engineers are involved, there may be enough time to take $\mathcal{A}$ to be an exact algorithm for solving the single-shot maxmin allocation problem. In this paper, we improve the competitive ratio for the multistage online maxmin allocation problem and generalize to the case of $w$-lookahead for any fixed $w \geqslant 1$. We design a new online algorithm that achieves a competitive ratio of $(1-c)\rho$, where $c$ is the positive root of the equation $wc^2 = \rho(w+1)(1-c)$. Our algorithm takes $O(wmn\log (wn) + w\cdot T(m,n))$ time at each time step. The total time spent in invoking $\mathcal{A}$ for the entire time horizon $[1,\tau]$ is $O(\tau \cdot T(m,n))$. If $w = 1$, our competitive ratio is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$, which is better than the ratio of $\frac{\rho}{4\rho+2 - 2^{1-\tau}(2\rho+1)}$ for the case of 1-lookahead in~\cite{BEM18}. \section{Notation} Let $I_t$ denote the input instance at time $t$, which specifies $L_t(\cdot)$ and $v_t(\cdot,\cdot)$. Let $I_{a:b}$ denote the set of input instances $I_a, I_{a+1},\ldots, I_b$.
An allocation $C_t$ for $I_t$ is a set of ordered pairs $(e,p)$ for some $e \in E$ and $p \in P$ such that $p \in L_t(e)$ and every $e$ belongs to at most one pair in $C_t$. We use $C_{a:b}$ to denote the set of allocations $C_a,\ldots, C_b$ for the input instances $I_{a:b}$. For every entity~$e$, $C_t[e]$ denotes the assignment of $e$ at time $t$ specified in $C_t$. It is possible that $e$ is unassigned at time $t$. For any interval $[a,b] \subseteq [1,\tau]$ and any entity $e$, we use $C_{a:b}[e]$ to denote the set of assignments $C_a[e],\ldots,C_b[e]$. An alternative way to view $C_{1:\tau}$ is that it specifies a sequence of disjoint time intervals for every entity $e$. In each time interval, $e$ is assigned to a single player. We call these intervals \emph{assignment intervals}. Our online algorithm generates a set of allocations $C_{1:\tau}$ by specifying these assignment intervals for the entities. Because our algorithm does not know all the future instances, it may generate two consecutive assignment intervals in which $e$ is assigned to the same player. Ideally, we would like to merge such a pair of intervals; however, it is more convenient for our analysis to keep them separate. Therefore, we do not assume that an assignment interval is a maximal interval such that $e$ is assigned to the same player, although this would be the case for the optimal offline solution for $I_{1:\tau}$. Given $C_{1:\tau}$ and any $[a,b] \subseteq [1,\tau]$, the assignment interval endpoints in $C_{a:b}$ refer to the endpoints of the assignment intervals in $C_{1:\tau}$ that lie in $[a,b]$. The assignment intervals in $C_{a:b}$ refer to the assignment intervals in $C_{1:\tau}$ that are contained in $[a,b]$. For every entity $e$, we can similarly interpret the notions of assignment interval endpoints in $C_{a:b}[e]$ and assignment intervals in $C_{a:b}[e]$.
Due to the constraints posed by $L_t(\cdot)$, it is possible that an entity $e$ is unassigned at some time step, so there may be a gap between an assignment interval end time and the next assignment interval start time in $C_{1:\tau}[e]$. Take any set of allocations $C_{1:\tau}$ for $I_{1:\tau}$. Define the following quantities: \begin{align*} \nu(C_t) & = \min_{p \in P} \left\{\sum_{(e,p) \in C_t} v_t(e,p)\right\}, \\ \nu(C_{a:b}) & = \sum_{t=a}^b \nu(C_t), \\ \lambda(C_{t:t+1}[e]) & = \left\{\begin{array}{lcl} \Delta, & & \mbox{if $[t,t+1]$ is contained in an assignment interval of $e$}; \\ 0, & & \mbox{otherwise}, \end{array}\right. \\ \lambda(C_{a:b}[e]) & = \sum_{t=a}^{b-1} \lambda(C_{t:t+1}[e]), \\ \lambda(C_{a:b}) & = \sum_{e \in E} \lambda(C_{a:b}[e]). \end{align*} We call $\lambda(C_{a:b})$ the \emph{stability value} of $C_{a:b}$. The value $\lambda(C_{t:t+1}[e])$ is the \emph{stability reward of $e$ from $t$ to $t+1$}. Our online algorithm requires a $w$-lookahead for any fixed $w \geqslant 1$. That is, the input instances $I_{t+i}$ for all $i \in [0,w]$ are given at the current time step $t$. We assume that $I_{\tau+j}$ for any $j \geqslant 1$ is an empty input instance (i.e., an instance in which the restriction lists are empty sets and all values are zero) so that we can talk about the $w$-lookahead at $\tau - i$ for any $i \in [0,w-1]$. \section{Multistage online maxmin allocation} \subsection{Overview and periods} We start off by initializing a set $S_{1:\tau}$ of empty allocations. Then, we use a greedy algorithm to update $S_{1:1+w}$ to be a set of allocations that maximize the stability value with respect to $I_{1:1+w}$. We also use $S_{1:1+w}$ to generate the first \emph{period} as follows. The time step 1 is taken as a default \emph{period start time}. In general, suppose that the current time step $s$ is a period start time.
Then, we use a greedy algorithm to compute the assignment intervals for some entities for $I_{s:s+w}$ provided by the $w$-lookahead. This gives an updated $S_{s:s+w}$. Every assignment interval end time in $S_{s:s+w}$ is a \emph{candidate end time}. The time step $s+w$ is a default candidate end time. For every assignment interval start time $i$ in $S_{s+1:s+w}$, $i-1$ is also a candidate end time. (If $s$ itself is an assignment interval start time, it does not induce $s-1$ as a candidate end time.) Let $t$ be the smallest candidate end time within $[s,s+w]$. Then, $[s,t]$ is the next period. It is possible that $t = s+w$. It is also possible that $t$ lies inside an assignment interval of an entity $e$ in $S_{1:s+w}[e]$. To determine the allocations for $[s,t]$, we compute a set of allocations $B_{s:t}$ by running the $\rho$-approximation algorithm $\mathcal{A}$ on the instances $I_{s:t}$. By a judicious comparison of $\nu(B_{s:t})$ and $\lambda(S_{s:t})$, we set the allocations $A_{s:t}$ to be $S_{s:t}$ or $B_{s:t}$. $A_s$ will be returned at the current time step $s$; for each future time step $i \in [s+1,t]$, $A_i$ will be returned. The next period start time is $t+1$, and we will repeat the above at that time. There are two main reasons for our improvement over the result in~\cite{BEM18}. First, we do not recompute after some waiting time that is fixed beforehand. The periods are dynamically generated and updated using the allocations produced by a greedy algorithm. This allows us to make better use of the stability values offered by these greedy allocations. Second, the greedy allocations and the $\rho$-approximate allocations are also compared in~\cite{BEM18} in determining the allocation for the current time step; however, our comparison is different because it allows us to reap the potential stability reward from the previous period to the current period, and at the same time, the potential stability reward from the current period to the next.
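As a concrete aid (our own illustration, not part of the algorithm), the quantities $\nu$ and $\lambda$ defined in the previous section can be computed directly from a sequence of allocations. The sketch below represents each $C_t$ as a Python dict from entities to players (with `None` for unassigned); it credits a stability reward for every consecutive pair of steps with the same player, i.e., it assumes assignment intervals have been merged into maximal ones.

```python
def nu(allocation, values, players):
    """nu(C_t): the minimum, over all players in P, of the total value of
    the entities assigned to that player at time t.
    allocation: dict entity -> player (or None);
    values: dict (entity, player) -> v_t(e, p)."""
    load = {p: 0.0 for p in players}
    for e, p in allocation.items():
        if p is not None:
            load[p] += values[(e, p)]
    return min(load.values())

def stability(allocations, delta):
    """lambda(C_{1:T}): sum, over entities and consecutive time steps,
    of delta whenever an entity keeps the same (non-None) player from
    t to t+1."""
    total = 0.0
    entities = set().union(*(a.keys() for a in allocations))
    for e in entities:
        for t in range(len(allocations) - 1):
            p0 = allocations[t].get(e)
            p1 = allocations[t + 1].get(e)
            if p0 is not None and p0 == p1:
                total += delta
    return total
```

Note that the minimum in `nu` ranges over all players, so a player with no assigned entities forces $\nu(C_t) = 0$, matching the maxmin objective.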
\subsection{Greedy allocations} \begin{algorithm}[b] \begin{algorithmic}[1] \caption{StableAllocate$(a,b)$} \label{alg:2} \FOR{every entity $e$} \IF{no assignment interval in $S_{1:\tau}[e]$ starts before $a$ and contains $a$} \STATE{$S_{a:b}[e] \leftarrow \mathrm{StableEntity}(e,a,b)$} \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} During the execution of our online algorithm, we maintain a set of allocations $S_{1:\tau}$ that are initially set to be empty allocations. At any time step, the allocations in $S_{1:\tau}$ are possibly empty beyond some time in the future due to our limited knowledge of the future. Suppose that $a$ is the start time of the next period. Our online algorithm will call StableAllocate$(a,a+w)$, which is shown in Algorithm~\ref{alg:2}, to update $S_{a:a+w}$. StableAllocate works by running a greedy algorithm for some of the entities. Given an entity $e$ and an interval $[a,b]$, a greedy algorithm is described in~\cite{BEM18} to compute some assignment intervals of $e$ within $[a,b]$ that have the maximum stability value with respect to the instances $I_{a:b}$. We give the pseudocode, StableEntity, of this algorithm in Algorithm~\ref{alg:2-2}. For every entity $e$, if no assignment interval in $S_{1:\tau}[e]$ starts before $a$ and contains $a$, we call StableEntity$(e,a,a+w)$ to recompute $S_{a:a+w}[e]$. The updated allocations $S_{a:a+w}$ will serve two purposes. First, they will determine the end time of the next period that starts at $a$. Second, they will help us to determine the allocations that will be returned for the next period.
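To make the greedy scan concrete, here is a Python transcription of StableEntity for a single entity (our own sketch; `restrictions[t]` plays the role of $L_t(e)$, and intervals are returned as `(start, end, player)` triples). At each time step $i$, it extends each admissible player as far as possible and commits to the one reaching furthest, exactly as in the pseudocode.

```python
def stable_entity(restrictions, a, b):
    """Greedy computation of assignment intervals for one entity over
    [a, b]. restrictions: dict mapping a time step t to the set L_t(e)
    of admissible players at t (missing keys mean the empty set)."""
    intervals = []
    i = a
    while i <= b:
        best_player, best_k = None, i - 1
        for q in restrictions.get(i, set()):
            # k_q: the largest k with q admissible at every step in [i, k].
            k = i
            while k + 1 <= b and q in restrictions.get(k + 1, set()):
                k += 1
            if k > best_k:
                best_player, best_k = q, k
        if best_player is not None:
            intervals.append((i, best_k, best_player))
            i = best_k + 1
        else:
            # No player is admissible at i; e stays unassigned.
            i += 1
    return intervals
```

For instance, with $L_1(e) = \{p\}$, $L_2(e) = \{p,q\}$, $L_3(e) = \{q\}$, $L_4(e) = \emptyset$, $L_5(e) = \{q\}$, the scan produces the intervals $[1,2]$ with $p$, $[3,3]$ with $q$, and $[5,5]$ with $q$.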
\begin{algorithm}[t] \begin{algorithmic}[1] \caption{StableEntity$(e,a,b)$} \label{alg:2-2} \STATE{initialize $C_{a:b}[e]$ to be empty allocations} \STATE{$i \leftarrow a$} \WHILE{$i \leqslant b$} \FOR{every player $q$} \IF{$q \in L_i(e)$} \STATE{$k_q \leftarrow \max\{ k \in [i,b] : q \in \bigcap_{j=i}^{k} L_j(e)\}$} \ELSE \STATE{$k_q \leftarrow 0$} \ENDIF \ENDFOR \STATE{$p \leftarrow \mathrm{argmax}\{k_q : q \in P \}$} \IF{$k_p \geqslant i$} \STATE{add $[i,k_p]$ as an assignment interval to $C_{a:b}[e]$ and assign $e$ to $p$ during $[i,k_p]$} \STATE{$i \leftarrow k_{p} + 1$} \ELSE \STATE{$i \leftarrow i + 1$} \ENDIF \ENDWHILE \STATE{return $C_{a:b}[e]$} \end{algorithmic} \end{algorithm} The following result gives some structural conditions under which the stability value of some allocations $Y_{a:b}[e]$ is at least the stability value of some other allocations $X_{a:b}[e]$. The greediness of StableEntity ensures that these conditions are satisfied by its output when compared with any $X_{a:b}[e]$. As a result, Lemma~\ref{lem:1} proves the optimality of the stability value of the output of StableEntity. The optimality of the greedy algorithm was also proved in~\cite{BEM18}, but we make the structural conditions more explicit in Lemma~\ref{lem:1}. \begin{lemma} \label{lem:1} Let $[a,b]$ be any time interval. Let $X_{a:b}[e]$ and $Y_{a:b}[e]$ be two sets of assignment intervals for $e$ that are contained in $[a,b]$. If the following conditions hold, then for every $j \in [0,b-a]$, $\lambda(X_{a:a+j}[e]) \leqslant \lambda(Y_{a:a+j}[e])$. \begin{enumerate}[{\em (i)}] \item The first assignment interval in $Y_{a:b}[e]$ starts no later than the first assignment interval in $X_{a:b}[e]$. \item If the start time of an assignment interval $J$ in $Y_{a:b}[e]$ lies in an assignment interval $J'$ in $X_{a:b}[e]$, the end time of $J$ is not less than the end time of $J'$. \item For every $t \in [a,b]$, if $e$ is assigned in $X_t[e]$, then $e$ is also assigned in $Y_t[e]$. 
\end{enumerate} \end{lemma} \begin{proof} We show that $\lambda(X_{a:a+j}[e]) \leqslant \lambda(Y_{a:a+j}[e])$ for $j \in [0,b-a]$ by induction on $j$. The base case of $j = 0$ is trivial as both $\lambda(X_{a:a}[e])$ and $\lambda(Y_{a:a}[e])$ are zero by definition. Consider $a+j$ for some $j \in [1,b-a]$. There are two cases depending on the value of $\lambda(X_{a+j-1:a+j}[e])$. Case~1: $\lambda(X_{a+j-1:a+j}[e]) = 0$. Then, $\lambda(X_{a:a+j}[e]) = \lambda(X_{a:a+j-1}[e]) \leqslant \lambda(Y_{a:a+j-1}[e]) \leqslant \lambda(Y_{a:a+j}[e])$. Case~2: $\lambda(X_{a+j-1:a+j}[e]) = \Delta$. Some assignment interval $J'$ in $X_{a:b}[e]$ contains $[a+j-1,a+j]$ in this case. Let $p$ be the player to whom $e$ is assigned during $J'$. Let $a+i$ be the start time of $J'$. Note that $i \in [0,j-1]$. If $i = 0$, then $[a,a+j] \subseteq J'$ and $J'$ is the first assignment interval in $X_{a:b}[e]$. By conditions~(i)~and~(ii), the first assignment interval in $Y_{a:b}[e]$ starts at $a$ and ends no earlier than $a+j$. Therefore, $\lambda(X_{a:a+j}[e]) = \lambda(Y_{a:a+j}[e])$. Suppose that $i > 0$. Because $e$ is assigned to $p$ at $a+i$ in $X_{a:b}[e]$, by condition~(iii), there exists an assignment interval $J$ in $Y_{a:b}[e]$ that contains $a+i$. So the start time of $J$ is less than or equal to $a+i$. There are two cases. \begin{itemize} \item If the end time of $J$ is at least $a+j$, then $\lambda(X_{a+i:a+j}[e]) = \lambda(Y_{a+i:a+j}[e])$ and hence \begin{align*} \lambda(X_{a:a+j}[e]) & = \lambda(X_{a:a+i}[e]) + \lambda(X_{a+i:a+j}[e]) \\ & \leqslant \lambda(Y_{a:a+i}[e]) + \lambda(X_{a+i:a+j}[e]) & (\because \text{induction assumption}) \\ & = \lambda(Y_{a:a+i}[e]) + \lambda(Y_{a+i:a+j}[e]) \\ & = \lambda(Y_{a:a+j}[e]). \end{align*} \item The other case is that $J$ ends at some time $t \in [a+i,a+j-1]$. By condition~(ii), the start time of $J$ cannot be $a+i$, which means that the start time of $J$ is less than or equal to $a+i-1$.
Since $e$ is assigned to $p$ from $a+i$ to $a+j$ in $X_{a:b}[e]$, condition~(iii) implies that $e$ is assigned in $Y_{a:b}[e]$ at every time step in $[a+i,a+j]$. Therefore, there is another assignment interval $K$ in $Y_{a:b}[e]$ that starts at $t+1$. Condition~(ii) implies that the end time of $K$ is at least $a+j$. There is thus a loss of a stability reward of $\Delta$ for $e$ from $t$ to $t+1$ in $Y_{a+i-1:a+j}[e]$, which matches the loss of a stability reward of $\Delta$ for $e$ from $a+i-1$ to $a+i$ in $X_{a+i-1:a+j}[e]$. As a result, $\lambda(X_{a+i-1:a+j}[e]) = \lambda(Y_{a+i-1:a+j}[e])$. Hence, \begin{align*} \lambda(X_{a:a+j}[e]) & = \lambda(X_{a:a+i-1}[e]) + \lambda(X_{a+i-1:a+j}[e]) \\ & \leqslant \lambda(Y_{a:a+i-1}[e]) + \lambda(X_{a+i-1:a+j}[e]) & (\because \text{induction assumption}) \\ & = \lambda(Y_{a:a+i-1}[e]) + \lambda(Y_{a+i-1:a+j}[e]) \\ & = \lambda(Y_{a:a+j}[e]). \end{align*} \end{itemize} \end{proof} \subsection{Online algorithm} The pseudocode of our online algorithm, MSMaxmin, is shown in Algorithm~\ref{alg:3}. The parameter $c_0$ in line~\ref{code:3} is a real number chosen from the range $(0,1)$ that will be specified later when we analyze the performance of MSMaxmin. MSMaxmin initializes $S_{1:\tau}$ to be a set of empty allocations and then iteratively computes $A_1,A_2,\ldots,A_\tau$. At the $s$-th time step, MSMaxmin calls StableAllocate$(s,s+w)$ if $s$ is the start time of the next period. (By default, 1 is the start time of the first period.) This call of StableAllocate updates $S_{1:\tau}$ by changing $S_{s:s+w}$. Afterwards, we determine the period end time $t$ using the assignment interval start and end times in $S_{s:s+w}$. The next task is to determine the allocations $A_{s:t}$ for $I_{s:t}$.
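The determination of the period end time just described can be sketched as follows (our own illustration; `intervals` lists the assignment intervals in $S_{s:s+w}$ as `(start, end, player)` triples). The candidates are $s+w$, every assignment interval end time in $[s,s+w]$, and $\beta-1$ for every assignment interval start time $\beta$ in $[s+1,s+w]$.

```python
def next_period_end(s, w, intervals):
    """Smallest candidate end time in [s, s+w], given the assignment
    intervals of S_{s:s+w} as (start, end, player) triples."""
    t = s + w  # default candidate end time
    for (start, end, _player) in intervals:
        if s <= end <= s + w:
            t = min(t, end)
        if s + 1 <= start <= s + w:
            # A start time beta > s induces the candidate beta - 1.
            t = min(t, start - 1)
    return t
```

For example, with $s=1$, $w=4$ and a single interval $[2,6]$, the interval's start time $2$ induces the candidate $1$, so the period is $[1,1]$; the period end may thus fall strictly inside an assignment interval, as noted above.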
\begin{algorithm}[t] \begin{algorithmic}[1] \caption{MSMaxmin} \label{alg:3} \STATE{$S_{1:\tau} \leftarrow \text{empty allocations}$} \STATE{$\mathit{period}\_\mathit{start} \leftarrow 1$} \FOR{$s$ = 1 to $\tau$} \IF{$s = \mathit{period}\_\mathit{start}$} \IF{$s > 1$} \STATE{$[r,s-1] \leftarrow \mathit{period}$ \quad\quad\quad\quad\quad /* $[r,s-1]$ is the previous period */} \ENDIF \STATE{$\mathrm{StableAllocate}(s,s+w)$} \STATE{$t \leftarrow \min\bigl(s+w,\min\{\beta : \text{assignment interval end time $\beta$ in $S_{s:s+w}$}\}\bigr)$} \label{code:1} \STATE{$t \leftarrow \min\bigl(t, \min\{\beta-1: \text{assignment start time $\beta$ in $S_{s+1:s+w}$}\}\bigr)$} \label{code:1-1} \STATE{$\mathit{period} \leftarrow [s,t]$ \quad\quad\quad\quad\quad\quad\quad\quad /* $[s,t]$ is the next period */} \FOR{$j$ = $s$ to $t$} \label{code:1-3} \STATE{$B_j\leftarrow \text{$\rho$-approximate maxmin allocation for $I_j$}$} \ENDFOR \label{code:1-4} \STATE{$L \leftarrow 0$} \label{code:2-1} \IF{$s > 1$ and $s \leqslant r+w$ and $A_{s-1} = S_{s-1}$} \STATE{$L \leftarrow \lambda(S_{s-1:s})$} \ENDIF \STATE{$R \leftarrow 0$} \IF{$t < s+w$} \STATE{$R \leftarrow \lambda(S_{t:t+1})$} \ENDIF \IF{$\nu(B_{s:t}) \geqslant L + \lambda(S_{s:t}) + c_0\cdot R$} \label{code:3} \STATE{$A_{s:t} \leftarrow B_{s:t}$} \ELSE \STATE{$A_{s:t} \leftarrow S_{s:t}$} \ENDIF \STATE{$\mathit{period}\_\mathit{start} \leftarrow t+1$} \label{code:2} \ENDIF \STATE{output $A_s$} \ENDFOR \end{algorithmic} \end{algorithm} We invoke the $\rho$-approximation algorithm for the single-shot maxmin allocation problem. Specifically, we compute the $\rho$-approximate allocations $B_{s:t}$ for the instances $I_{s:t}$. That is, for each $i \in [s,t]$, $\nu(B_i) \geqslant \rho \cdot\nu(X_i)$ for any allocation $X_i$ for $I_i$. The $\rho$-approximation algorithm may not take the restriction lists into account. 
Nevertheless, since we assume that $v_i(e,p) = 0$ if $p \not\in L_i(e)$, we can remove such an assignment $(e,p)$ from $B_i$ without affecting $\nu(B_i)$. Therefore, we assume without loss of generality that every $B_i$ respects the restriction lists $L_i(\cdot)$. We set $A_{s:t}$ to be $S_{s:t}$ or $B_{s:t}$. It is natural to check whether $\lambda(S_{s:t})$ is larger than $\nu(B_{s:t})$. However, if $s \leqslant r+w$ and $A_{s-1} = S_{s-1}$, where $r$ is the start time of the previous period, then $S_s$ was computed at time $r$ and it is possible that $\lambda(S_{s-1:s}[e]) = \Delta$ for some entity $e$. The call StableAllocate$(s,s+w)$ does not invoke StableEntity for such an entity $e$, and so $S_s[e]$ will be preserved. Therefore, if we set $A_{s:t}$ to be $S_{s:t}$, we will gain the stability reward of $\lambda(S_{s-1:s})$. Similarly, if $t < s+w$, then setting $A_{s:t}$ to be $S_{s:t}$ provides the opportunity to gain the stability reward of $\lambda(S_{t:t+1})$ in the future. On the other hand, if $t = s+w$, we do not know $I_{t+1}$ at time $s$, and we do not compute $S_{t+1}$ at time $s$. In this case, after calling StableAllocate at time $s$, all assignment intervals in the current $S_{1:\tau}$ must end at or before $t = s+w$, implying that there is no stability reward in $S_{t:t+1}$ irrespective of how we will set $S_{t+1}$ in the future. Hence, we compare $\nu(B_{s:t})$ with $\lambda(S_{s:t})$ and possibly $\lambda(S_{s-1:s})$ and $\lambda(S_{t:t+1})$ depending on the situation. \section{Analysis} Because MSMaxmin calls StableAllocate from time to time to update $S_{1:\tau}$, the set of allocations $S_{a:b}$ for any $[a,b] \subseteq [1,\tau]$ may change over time. To differentiate these allocations computed at different times, we introduce the notation $S_{i|s}$ to denote $S_i$ at the end of the time step $s$, i.e., at the end of the $s$-th iteration of the for-loop in MSMaxmin. Similarly, $S_{a:b|s}$ denotes $S_{a:b}$ at the end of the time step $s$.
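To summarize the choice made in each period, the comparison in line~\ref{code:3} of MSMaxmin can be written as a small predicate (our own sketch; the variable names are hypothetical): the $\rho$-approximate allocations win only if their maxmin value beats the stability value of the greedy allocations together with the rewards reachable from the previous period ($L$) and into the next period ($R$).

```python
def choose_allocation(nu_B, lam_S, L, R, c0):
    """Decide between the rho-approximate allocations ('B') and the
    greedy stable allocations ('S') for a period [s, t]:
    B wins iff nu(B_{s:t}) >= L + lambda(S_{s:t}) + c0 * R,
    where L = lambda(S_{s-1:s}) if it is collectible (else 0) and
    R = lambda(S_{t:t+1}) if t < s + w (else 0), with 0 < c0 < 1."""
    return 'B' if nu_B >= L + lam_S + c0 * R else 'S'
```

Only the potential future reward $R$ is discounted by $c_0$; the reward $L$ from the previous period is certain once $A_{s-1} = S_{s-1}$, so it is counted in full.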
First, we give some properties of the start and end times of periods and assignment intervals. \begin{lemma} \label{lem:period} Let $s$ be a period start time. For every entity $e$, \begin{enumerate}[{\em (i)}] \item if $\alpha$ is an assignment interval end time in $S_{s:s+w|s}[e]$, then $\alpha$ will remain an assignment interval end time for $e$ in the future and $\alpha$ will be a period end time; \item if $\alpha$ is an assignment interval start time in $S_{s+1:s+w|s}[e]$, then $\alpha-1$ will be a period end time, $\alpha$ will remain an assignment interval start time for $e$ in the future, and $\alpha$ will be a period start time. \end{enumerate} \end{lemma} \begin{proof} Take any entity $e$. For any period start time $s$, the call StableAllocate$(s,s+w)$ invokes StableEntity$(e,s,s+w)$ only if $e$ is unassigned at time $s$ in $S_{1:\tau|s-1}[e]$ or $s$ is an assignment interval start time in $S_{1:\tau|s-1}[e]$. Therefore, by the greediness of StableEntity, if a time step $\alpha\in [s,s+w]$ was already determined by some previous call of StableEntity on $e$ as an assignment interval start or end time, that decision will remain the same in this call StableEntity$(e,s,s+w)$. Therefore, if $\alpha$ is an assignment interval end time for $e$ in $[s,s+w]$ after calling StableAllocate$(s,s+w)$, it will remain an assignment interval end time for $e$ in the future. It will also be a candidate end time used in line~\ref{code:1} of MSMaxmin in the $s$-th iteration of the for-loop and thereafter until $\alpha$ becomes the next period end time. We can similarly argue that if $\alpha$ is an assignment interval start time for $e$ in $[s,s+w]$ after calling StableAllocate$(s,s+w)$, it will remain an assignment interval start time for $e$ in the future. Also, $\alpha-1$ will be a candidate end time used in line~\ref{code:1-1} of MSMaxmin until it becomes the next period end time. 
When this happens, $\alpha$ will be made the subsequent period start time in line~\ref{code:2} of MSMaxmin. \end{proof} Next, we show that for every entity $e$, the stability value of any set of allocations $X_{1:t}[e]$ cannot be much larger than that of $S_{1:t|\alpha}[e]$, where $\alpha$ is the largest assignment interval start time in $S_{1:\tau|t}[e]$ that is less than or equal to $t$, provided that $t \leqslant \alpha+w$. \begin{lemma} \label{lem:2} Let $X_{1:\tau}$ be a set of allocations for $I_{1:\tau}$. Let $t$ be any time step. Let $\alpha$ be the largest assignment interval start time in $S_{1:\tau|t}[e]$ that is less than or equal to $t$. If $t \leqslant \alpha+w$, then $\lambda(X_{1:t}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:t|\alpha}[e])$. \end{lemma} \begin{proof} We prove the lemma by induction on the assignment interval start time $\alpha$. In the base case, $\alpha$ is the smallest assignment interval start time for $e$ computed by StableEntity. It follows from the greediness of StableEntity that $L_s(e) = \emptyset$ for all $s \in [1,\alpha-1]$, which implies that $X_{1:\alpha-1}[e]$ is empty. By Lemma~\ref{lem:period}, $\alpha$ is a period start time, so MSMaxmin calls StableAllocate$(\alpha,\alpha+w)$ at $\alpha$ which calls StableEntity$(e,\alpha,\alpha+w)$. By the greediness of StableEntity and Lemma~\ref{lem:1}, for every time step $t \in [\alpha,\alpha+w]$, $\lambda(X_{\alpha:t}[e]) \leqslant \lambda(S_{\alpha:t|\alpha}[e])$. Hence, $\lambda(X_{1:t}[e]) = \lambda(X_{\alpha:t}[e]) \leqslant \lambda(S_{\alpha:t|\alpha}[e]) =\lambda(S_{1:t|\alpha}[e])$, i.e., the base case is true. Consider the induction step. Let $\gamma$ be the largest assignment interval start time in $S_{1:\tau|t}[e]$ that is less than or equal to $t$. To prove the lemma, we are only concerned with the case of $t \leqslant \gamma+w$. 
By Lemma~\ref{lem:period}, $\gamma$ is a period start time, so MSMaxmin calls StableAllocate$(\gamma,\gamma+w)$ at $\gamma$ which calls StableEntity$(e,\gamma,\gamma+w)$. By the greediness of StableEntity and Lemma~\ref{lem:1}, we get \begin{equation} \lambda(X_{\gamma:t}[e]) \leqslant \lambda(S_{\gamma:t|\gamma}[e]). \label{eq:2-4} \end{equation} Let $[\alpha,\beta]$ be the assignment interval in $S_{1:\tau|\gamma}[e]$ before $\gamma$. Note that $\beta \leqslant \alpha+w$. By Lemma~\ref{lem:period}, $\alpha$ is a period start time. So MSMaxmin calls StableAllocate$(\alpha,\alpha+w)$ at $\alpha$ which calls StableEntity$(e,\alpha,\alpha+w)$. The call StableEntity$(e,\gamma,\gamma+w)$ at $\gamma$ cannot modify assignment intervals that end before $\gamma$. Also, as StableAllocate does not call StableEntity for $e$ within $[\alpha+1,\beta]$, $\alpha$ is the last time before $\gamma$ at which the assignment intervals for $e$ were updated by a call of StableEntity. Therefore, \begin{equation} S_{1:\beta|\alpha}[e] = S_{1:\beta|\gamma}[e]. \label{eq:2-2-1} \end{equation} We claim that: \begin{equation} \forall\, s \in [\alpha,\gamma-1], \quad \lambda(X_{1:s}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:s|\alpha}[e]). \label{eq:2-1} \end{equation} For $s \in [\alpha,\beta]$, we have $\lambda(X_{1:s}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:s|\alpha}[e])$ by the induction assumption. If $\beta+1\leqslant \gamma-1$, by the greediness of StableEntity, it must be the case that $L_s(e) = \emptyset$ for all $s \in [\beta+1,\gamma-1]$ so that StableEntity does not assign $e$ to any player during $[\beta+1,\gamma-1]$.
Therefore, $X_{\beta+1:s}[e]$ is empty for all $s \in [\beta+1,\gamma-1]$, which means that $\lambda(X_{1:s}[e]) = \lambda(X_{1:\beta}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) = \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:s|\alpha}[e])$. Hence, \eqref{eq:2-1} holds. Suppose that $\gamma \leqslant \alpha+w$. In this case, we know the instances $I_{\alpha:\gamma}$ at time $\alpha$. By the greediness of StableEntity and Lemma~\ref{lem:1}, we have \begin{equation} \lambda(X_{\alpha:\gamma}[e]) \leqslant \lambda(S_{\alpha:\gamma|\alpha}[e]). \label{eq:2-2} \end{equation} As $\gamma \leqslant \alpha+w$, after calling StableEntity$(e,\alpha,\alpha+w)$ at $\alpha$, we already know that $\gamma$ is the start time of the next assignment interval for $e$. Therefore, there is no stability reward for $e$ from $\beta$ to $\gamma$ in $S_{\alpha:\gamma|\alpha}[e]$. Then, it follows from \eqref{eq:2-2-1} that \begin{equation} \lambda(S_{\alpha:\gamma|\alpha}[e]) = \lambda(S_{\alpha:\gamma|\gamma}[e]). \label{eq:2-3} \end{equation} Hence, \begin{align*} \lambda(X_{1:t}[e]) & = \lambda(X_{1:\alpha}[e]) + \lambda(X_{\alpha:\gamma}[e]) + \lambda(X_{\gamma:t}[e]) \\ & \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\gamma|\alpha}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-4}, \eqref{eq:2-1}, \text{and}~\eqref{eq:2-2}) \\ & = \frac{w+1}{w}\lambda(S_{1:\alpha|\gamma}[e]) + \lambda(S_{\alpha:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-2-1}~\text{and}~\eqref{eq:2-3}) \\ & \leqslant \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]). \end{align*} Suppose that $\gamma \geqslant \alpha+w+1$. If $\beta < \alpha+w$, there is a gap $[\beta+1,\gamma-1]$ during which StableEntity does not assign $e$ to any player. It means that $L_s(e) = \emptyset$ for all $s \in [\beta+1,\gamma-1]$. 
Therefore, $e$ is also unassigned in $X_{\beta+1:\gamma-1}$ and we can conclude that \begin{align*} \lambda(X_{1:t}[e]) & = \lambda(X_{1:\beta}[e]) +\lambda(X_{\gamma:t}[e]) \\ & \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-4}~\text{and}~\eqref{eq:2-1}) \\ & \leqslant \frac{w+1}{w}\lambda(S_{1:\beta|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-2-1}) \\ & = \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]). & (\because \forall s \in [\beta+1,\gamma-1],\, L_s(e) = \emptyset) \end{align*} The remaining case is that $\beta \geqslant \alpha+w$. At time $\alpha$, MSMaxmin computes assignment intervals up to time $\alpha+w$ only. It follows that $\beta = \alpha+w$, which implies that $\lambda(S_{\alpha:\beta|\alpha}[e]) = w\Delta$. If the interval $[\beta+1,\gamma-1]$ is not empty, $e$ must be unassigned in $X_{\beta+1:\gamma-1}$ as we argued previously. We can thus conclude as in the above that $\lambda(X_{1:t}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e])$. Suppose that $[\beta+1,\gamma-1]$ is empty. It means that $\gamma = \beta+1$. 
Then, \begin{align*} & \lambda(X_{1:t}[e]) \\ & = \lambda(X_{1:\beta}[e]) + \lambda(X_{\beta:\beta+1}[e]) + \lambda(X_{\gamma:t}[e]) \\ & \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) + \lambda(X_{\beta:\beta+1}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-4}~\text{and}~\eqref{eq:2-1})\\ & \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) + \Delta + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \lambda(X_{\beta:\beta+1}[e]) \leqslant \Delta) \\ & = \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \frac{w+1}{w}\lambda(S_{\alpha:\beta|\alpha}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \lambda(S_{\alpha:\beta|\alpha}[e]) = w\Delta) \\ & = \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]). & (\because \eqref{eq:2-2-1}) \end{align*} \end{proof} We are ready to analyze the performance of MSMaxmin. It depends on the parameter $c_0$ in line~\ref{code:3} of MSMaxmin which will be set based on the values of $\rho$ and $w$. \begin{theorem} \label{thm:main} \emph{MSMaxmin} takes $O(wmn\log (wn) + w \cdot T(m,n))$ time at each period start time and $O(m)$ time at any other time step. The total time taken by \emph{MSMaxmin} in running $\mathcal{A}$ for the entire time horizon $[1,\tau]$ is $O(\tau \cdot T(m,n))$. Let $A_{1:\tau}$ be the solution returned by \emph{MSMaxmin}. Then, $\lambda(A_{1:\tau}) + \nu(A_{1:\tau}) \geqslant \frac{wc_0^2}{w+1}\cdot\lambda(O_{1:\tau}) + (1-c_0)\rho \cdot \nu(O_{1:\tau})$, where $O_{1:\tau}$ is the optimal offline solution. Hence, the competitive ratio is $(1-c_0)\rho$, where $c_0$ is the positive root of the equation $wc^2 = \rho (w+1)(1-c)$, that is, \[ c_0 = \frac{\sqrt{\rho^2(w+1)^2 + 4\rho w(w+1)} - \rho(w+1)}{2w}. \] \end{theorem} \begin{proof} Let $s$ be the current time step. If $s$ is not a period start time, MSMaxmin spends $O(m)$ time just to output $A_s$. Suppose that $s$ is a period start time. 
After calling StableAllocate$(s,s+w)$, we obtain $O(wm)$ assignment interval start and end times in $S_{s:s+w|s}$. Selecting the next period end time in lines~\ref{code:1} and~\ref{code:1-1} can be done in $O(wm)$ time. Running $\mathcal{A}$ in lines~\ref{code:1-3}--\ref{code:1-4} takes $O(w \cdot T(m,n))$ time. Lines~\ref{code:2-1}--\ref{code:2} clearly take $O(wm)$ time. It remains to analyze the running time of the call StableAllocate$(s,s+w)$. Take an entity $e$ for which StableAllocate will call StableEntity. We describe an efficient implementation of StableEntity as follows. For every player $p$, we can construct the maximal interval(s) within $[s,s+w]$ in which $e$ can be assigned to $p$. There are at most $w$ intervals for $p$. We store these intervals for all players in a priority search tree $T_e$~\cite{mccrieght85}. The tree $T_e$ uses $O(wn)$ space and can be built in $O(wn\log (wn))$ time. Given a time step~$i$, $T_e$ can be queried in $O(\log(wn))$ time to retrieve the interval that contains $i$ and has the largest right endpoint. This capability is exactly what we need for determining $k_p$ in lines~4--11 of StableEntity. It follows that the while-loop in StableEntity takes $O(w\log(wn))$ time. The total running time of StableEntity for $e$ is thus $O(wn\log(wn))$, implying that StableAllocate takes $O(wmn\log(wn))$ time. This completes the running time analysis. We analyze the competitive ratio as follows. Consider a period $[s,t]$. Let $[r,s-1]$ be the period before $[s,t]$. There are several cases. \begin{itemize} \item Case~1: $s > 1$ and $s \leqslant r+w$ and $A_{s-1} = S_{s-1|r}$. \begin{itemize} \item Case~1.1: $t < s+w$. MSMaxmin compares $\lambda(S_{s-1:t|s}) + c_0\cdot\lambda(S_{t:t+1|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s-1:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) > \nu(B_{s:t})$, MSMaxmin sets $A_{s:t}$ to be $S_{s:t|s}$.
Because $s \leqslant r+w$, $A_{s} = S_{s|s}$ allows us to collect the stability reward of $\lambda(S_{s-1:s|s})$, which makes the contribution of $A_{s:t}$ to $\lambda(A_{1:\tau}) + \nu(A_{1:\tau})$ greater than or equal to \begin{equation} \lambda(S_{s-1:t|s}) > (1-c_1)\cdot\lambda(S_{s-1:t|s}) - c_0c_1\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm0} \end{equation} If $\lambda(S_{s-1:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) \leqslant \nu(B_{s:t})$, MSMaxmin sets $A_{s:t}$ to be $B_{s:t}$ and the contribution of $A_{s:t}$ to $\lambda(A_{1:\tau}) + \nu(A_{1:\tau})$ is at least \begin{equation} \nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s-1:t|s}) + c_0(1- c_1)\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm1} \end{equation} \item Case~1.2: $t = s+w$. In this case, MSMaxmin compares $\lambda(S_{s-1:t|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s-1:t|s}) > \nu(B_{s:t})$, the contribution of $A_{s:t} = S_{s:t|s}$ to $\lambda(A_{1:\tau}) + \nu(A_{1:\tau})$ is greater than or equal to \begin{equation} \lambda(S_{s-1:t|s}) > (1-c_1)\cdot\lambda(S_{s-1:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm2} \end{equation} If $\lambda(S_{s-1:t|s}) \leqslant \nu(B_{s:t})$, the contribution of $A_{s:t} = B_{s:t}$ is at least \begin{equation} \nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s-1:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm3} \end{equation} \end{itemize} \item Case~2: $s = 1$, or $s = r+w+1$, or $A_{s-1} = B_{s-1}$. \begin{itemize} \item Case~2.1: $t < s+w$. MSMaxmin compares $\lambda(S_{s:t|s}) + c_0\cdot\lambda(S_{t:t+1|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) > \nu(B_{s:t})$, the contribution of $A_{s:t} = S_{s:t|s}$ is greater than or equal to \begin{equation} \lambda(S_{s:t|s}) > (1-c_1)\cdot\lambda(S_{s:t|s}) - c_0c_1\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). 
\label{eq:thm4} \end{equation} If $\lambda(S_{s:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) \leqslant \nu(B_{s:t})$, the contribution of $A_{s:t} = B_{s:t}$ is at least \begin{equation} \nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s:t|s}) + c_0(1-c_1)\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm5} \end{equation} \item Case~2.2: $t = s+w$. In this case, MSMaxmin compares $\lambda(S_{s:t|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s:t|s}) > \nu(B_{s:t})$, the contribution of $A_{s:t} = S_{s:t|s}$ is greater than or equal to \begin{equation} \lambda(S_{s:t|s}) > (1-c_1)\cdot\lambda(S_{s:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm6} \end{equation} If $\lambda(S_{s:t|s}) \leqslant \nu(B_{s:t})$, the contribution of $A_{s:t} = B_{s:t}$ is at least \begin{equation} \nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm7} \end{equation} \end{itemize} \end{itemize} Let $O_{1:\tau}$ be the optimal offline solution for $I_{1:\tau}$. In the sum of the applications of \eqref{eq:thm0}--\eqref{eq:thm7} to all the periods, the $\nu(\cdot)$ terms sum to $c_1\cdot\nu(B_{1:\tau})$, which is at least $c_1\rho\cdot\nu(O_{1:\tau})$. Consider the sum of the $\lambda(\cdot)$ terms. Let $[r,s-1]$ and $[s,t]$ be two consecutive periods. If \eqref{eq:thm6} or \eqref{eq:thm7} is applicable to $[r,s-1]$, then $s-1 = r+w$ and one of the inequalities \eqref{eq:thm4}--\eqref{eq:thm7} is applicable to $[s,t]$, implying that the sum of the $\lambda(\cdot)$ terms does not include $\lambda(S_{s-1:s|s})$. Nevertheless, as $s-1 = r+w$, all assignment intervals computed at or before time $r$ do not extend beyond $s-1$. Therefore, $\lambda(S_{s-1:s|s}) = 0$ and there is no harm done. For all other kinds of transition from $s-1$ to $s$, the sum of the $\lambda(\cdot)$ terms includes the stability reward of the entities from $s-1$ to $s$ multiplied by a coefficient that is less than 1. 
We analyze the smallest coefficient of the $\lambda(\cdot)$ terms as follows. If \eqref{eq:thm0} or \eqref{eq:thm4} is applicable to $[r,s-1]$, then $s-1 < r+w$ and we get a $-c_0c_1\cdot\lambda(S_{s-1:s|r})$ term. In this case, one of the inequalities~\eqref{eq:thm0}--\eqref{eq:thm3} must be applicable to $[s,t]$, which contains the term $(1-c_1)\cdot\lambda(S_{s-1:s|s})$. We claim that these two terms combine into $(1-c_1-c_0c_1)\cdot\lambda(S_{s-1:s|s})$. Take any entity $e$. If StableEntity is not invoked for $e$ at $s$, then $S_{s-1:s|r}[e] = S_{s-1:s|s}[e]$. If StableEntity is invoked for $e$ at $s$, no assignment interval in $S_{r:r+w|r}[e]$ or $S_{r:r+w|s}[e]$ contains $[s-1,s]$ and so $\lambda(S_{s-1:s|r}[e]) = \lambda(S_{s-1:s|s}[e]) = 0$. This proves our claim. If \eqref{eq:thm1} or \eqref{eq:thm5} is applicable to $[r,s-1]$, we get the term $c_0(1-c_1)\cdot \lambda(S_{s-1:s|r})$ which is equal to $c_0(1-c_1)\cdot\lambda(S_{s-1:s|s})$ as explained in the previous paragraph. Among the coefficients of the $\lambda(\cdot)$ terms, the smallest ones are $1-c_1-c_0c_1$ and $c_0(1-c_1)$. Balancing $1-c_1-c_0c_1$ and $c_0(1-c_1)$ gives the relation $c_0+c_1 = 1$. As a result, $\lambda(A_{1:\tau}) + \nu(A_{1:\tau}) \geqslant c_0^2\cdot\lambda(S_{1:\tau|\tau}) + c_1\cdot\nu(B_{1:\tau}) \geqslant c_0^2\cdot\lambda(S_{1:\tau|\tau}) + (1-c_0)\rho \cdot \nu(O_{1:\tau})$. Here, we use the fact that $S_{s:t|s}$ for a period $[s,t]$ will not be changed after $s$, and so $S_{s:t|s} = S_{s:t|\tau}$. By Lemma~\ref{lem:2}, we get $\lambda(A_{1:\tau}) + \nu(A_{1:\tau}) \geqslant \frac{wc_0^2}{w+1} \cdot \lambda(O_{1:\tau|\tau}) + (1-c_0)\rho\cdot\nu(O_{1:\tau})$. To maximize the competitive ratio, we balance the coefficients $\frac{wc_0^2}{w+1}$ and $(1-c_0)\rho$. The only positive root of the quadratic equation $wc_0^2 = \rho(w+1)(1-c_0)$ is \[ \frac{\sqrt{\rho^2(w+1)^2 + 4\rho w(w+1)} - \rho(w+1)}{2w}. 
\] This positive root is less than $\bigl((\rho(w+1) + 2w) - \rho(w+1)\bigr)/(2w) = 1$. \end{proof} Suppose that we keep $\rho$ general and set $w = 1$. Then, $c_0 = \sqrt{\rho^2 + 2\rho} - \rho$ and our competitive ratio is $(\rho+1-\sqrt{\rho^2+2\rho})\rho$. To compare with the $\frac{\rho}{4\rho+2}$ bound in~\cite{BEM18}, we consider the difference in the coefficients $\rho+1 - \sqrt{\rho^2+2\rho} - 1/(4\rho+2)$. Treating this as a function in $\rho$, the derivative of this difference is $1 - (\rho+1)(\rho^2+2\rho)^{-1/2} + (2\rho+1)^{-2}$. This derivative is negative for $\rho \in (0,1]$, so the smallest difference is roughly $0.2679 - 0.1667 > 0.1$ when $\rho=1$. Therefore, our competitive ratio is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$. \section{Conclusion} We presented a $w$-lookahead online algorithm for the multistage online maxmin allocation problem for any fixed $w \geqslant 1$. It is more general than the 1-lookahead online algorithm in the literature~\cite{BEM18}. For the case of $w=1$, our competitive ratio is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$, which improves upon the previous ratio of $\frac{\rho}{4\rho+2 - 2^{1-\tau}(2\rho+1)}$ in~\cite{BEM18}. It is unclear whether our analysis of MSMaxmin is tight. When we set $A_{s:t}$ to be $S_{s:t}$, we only analyze $\lambda(S_{s:t})$ and ignore $\nu(S_{s:t})$. Conversely, when we set $A_{s:t}$ to be $B_{s:t}$, we only analyze $\nu(B_{s:t})$ and ignore $\lambda(B_{s:t})$. There may be some opportunities for improvement. \end{document}
\begin{document} \title{Numerical analysis of nonlinear eigenvalue problems} \author{Eric Canc\`es\footnote{Universit\'e Paris-Est, CERMICS, Project-team Micmac, INRIA-Ecole des Ponts, 6 \& 8 avenue Blaise Pascal, 77455 Marne-la-Vall\'ee Cedex 2, France.}, Rachida Chakir\footnote{UPMC Univ Paris 06, UMR 7598 LJLL, Paris, F-75005 France ; CNRS, UMR 7598 LJLL, Paris, F-75005 France} $\,$ and Yvon Maday$^\dag$\footnote{Division of Applied Mathematics, Brown University, Providence, RI, USA} } \maketitle \begin{abstract} We provide {\it a priori} error estimates for variational approximations of the ground state energy, eigenvalue and eigenvector of nonlinear elliptic eigenvalue problems of the form $-\mbox{div} (A\nabla u) + Vu + f(u^2) u = \lambda u$, $\|u\|_{L^2}=1$. We focus in particular on the Fourier spectral approximation (for periodic problems) and on the $\mathbb{P}_1$ and $\mathbb{P}_2$ finite-element discretizations. Denoting by $(u_\delta,\lambda_\delta)$ a variational approximation of the ground state eigenpair $(u,\lambda)$, we are interested in the convergence rates of $\|u_\delta-u\|_{H^1}$, $\|u_\delta-u\|_{L^2}$, $|\lambda_\delta-\lambda|$, and the ground state energy, when the discretization parameter $\delta$ goes to zero. We prove in particular that if $A$, $V$ and $f$ satisfy certain conditions, $|\lambda_\delta-\lambda|$ goes to zero as $\|u_\delta-u\|_{H^1}^2+\|u_\delta-u\|_{L^2}$. We also show that under more restrictive assumptions on $A$, $V$ and $f$, $|\lambda_\delta-\lambda|$ converges to zero as $\|u_\delta-u\|_{H^1}^2$, thus recovering a standard result for {\em linear} elliptic eigenvalue problems. For the latter analysis, we make use of estimates of the error $u_\delta-u$ in negative Sobolev norms. \end{abstract} \section{Introduction} Many mathematical models in science and engineering give rise to nonlinear eigenvalue problems. 
Let us mention for instance the calculation of the vibration modes of a mechanical structure in the framework of nonlinear elasticity, the Gross-Pitaevskii equation describing the steady states of Bose-Einstein condensates~\cite{GrossPitaevskii}, or the Hartree-Fock and Kohn-Sham equations used to calculate ground state electronic structures of molecular systems in quantum chemistry and materials science~(see~\cite{handbook} for a mathematical introduction). The numerical analysis of {\em linear} eigenvalue problems has been thoroughly studied in the past decades (see e.g. \cite{linear}). On the other hand, only a few results on {\em nonlinear} eigenvalue problems have been published so far~\cite{Zhou1,Zhou2}. In this article, we focus on a particular class of nonlinear eigenvalue problems arising in the study of variational models of the form \begin{equation} \label{eq:min_pb_u} I = \inf \left\{ E(v), \; v \in X, \; \int_{\Omega} v^2 = 1 \right\} \end{equation} where \begin{equation*} \left| \begin{array}{l} \Omega \mbox{ is a regular bounded domain or a rectangular brick of $\mathbb{R}^d$ and } X = H^1_0(\Omega) \\ \mbox{or} \\ \Omega \mbox{ is the unit cell of a periodic lattice ${\cal R}$ of $\mathbb{R}^d$ and } X = H^1_\#(\Omega) \end{array} \right. \end{equation*} with $d=1$, $2$ or $3$, and where the energy functional $E$ is of the form $$ E(v) = \frac 12 a(v,v) + \frac 12 \int_\Omega F(v^2(x)) \, dx $$ with $$ a(u,v) = \int_\Omega (A \nabla u) \cdot \nabla v + \int_\Omega V uv. $$ Recall that if $\Omega$ is the unit cell of a periodic lattice ${\cal R}$ of $\mathbb{R}^d$, then for all $s \in \mathbb{R}$ and $k \in \mathbb{N}$, \begin{eqnarray*} H^s_\#(\Omega) & = & \left\{ v|_\Omega, \; v \in H^s_{\rm loc}(\mathbb{R}^d) \; | \; v \; {\cal R}\mbox{-periodic} \right\}, \\ C^k_\#(\Omega) & = & \left\{ v|_\Omega, \; v \in C^k(\mathbb{R}^d) \; | \; v \; {\cal R}\mbox{-periodic} \right\} . 
\end{eqnarray*} We assume in addition that \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\! &\bullet& A \in (L^\infty({\Omega}))^{d\times d} \mbox{ and } A(x) \mbox{ is symmetric for almost all } x \in \Omega \label{eq:Hyp1} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! && \exists \alpha > 0 \mbox{ s.t. } \xi^T A(x) \xi \ge \alpha |\xi|^2 \mbox{ for all } \xi \in \mathbb{R}^d \mbox{ and almost all } x \in \Omega \label{eq:Hyp2} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! && \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! &\bullet& V \in L^p(\Omega) \mbox{ for some } p > \max(1,d/2) \label{eq:Hyp3} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! && \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! &\bullet& F \in C^1([0,+\infty),\mathbb{R}) \cap C^2((0,+\infty),\mathbb{R}) \mbox{ and } F'' > 0 \mbox{ on } (0,+\infty) \label{eq:Hyp6} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! && \exists 0 \le q < 2, \; \exists C \in \mathbb{R}_+ \mbox{ s.t. } \forall t \ge 0, \; |F'(t)| \le C (1+t^q) \label{eq:Hyp7} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! && F''(t)t \mbox{ remains bounded in the vicinity of } 0. \label{eq:Hyp7p} \end{eqnarray} To establish some of our results, we will also need to make the additional assumption that there exist $1 < r \le 2$ and $0 \le s \le 5-r$ such that \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\! && \forall R > 0, \; \exists C_R \in \mathbb{R}_+ \mbox{ s.t. } \forall 0 < t_1 \le R, \; \forall t_2 \in \mathbb{R}, \; \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\! & & \quad \left| F'(t_2^2)t_2-F'(t_1^2)t_2-2F''(t_1^2)t_1^2(t_2-t_1)\right| \le C_R \left(1+|t_2|^{s} \right) |t_2-t_1|^r. \label{eq:Hyp8} \end{eqnarray} Note that for all $1 < m < 3$ and all $c > 0$, the function $F(t)=c t^m$ satisfies (\ref{eq:Hyp6})-(\ref{eq:Hyp7p}) and (\ref{eq:Hyp8}), for some $1 < r \le 2$. It satisfies (\ref{eq:Hyp8}) with $r=2$ if $3/2 \le m < 3$.
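As a concrete check of the case $m=2$, the following small script (our own illustration, not part of the paper's argument; the grid bounds $R$, the constant $C_R$, and the helper \texttt{remainder} are our choices) verifies numerically that $F(t)=t^2$ satisfies (\ref{eq:Hyp8}) with $r=2$ and $s=1$: for this $F$ the left-hand side factors exactly as $2(t_2-t_1)^2|t_2+2t_1|$, so $C_R = 2(1+2R)$ works.

```python
# Sanity check (ours, not from the paper) that F(t) = t^2, i.e. m = 2, satisfies
# hypothesis (Hyp8) with r = 2 and s = 1.  Here f = F', so f(t) = 2t and F'' = 2;
# the grid, the bound R and the constant C_R are illustrative choices.

def remainder(t1, t2):
    """Left-hand side of (Hyp8): |F'(t2^2) t2 - F'(t1^2) t2 - 2 F''(t1^2) t1^2 (t2 - t1)|."""
    f = lambda t: 2.0 * t   # f = F'
    fpp = 2.0               # F'' (constant for F(t) = t^2)
    return abs(f(t2 * t2) * t2 - f(t1 * t1) * t2 - 2.0 * fpp * t1 * t1 * (t2 - t1))

R = 2.0                       # t1 ranges over (0, R]
C_R = 2.0 * (1.0 + 2.0 * R)   # admissible constant for r = 2, s = 1

for i in range(1, 201):
    t1 = R * i / 200.0
    for j in range(-200, 201):
        t2 = 5.0 * j / 200.0
        lhs = remainder(t1, t2)
        # For m = 2 the remainder factors exactly as 2 (t2 - t1)^2 |t2 + 2 t1| ...
        assert abs(lhs - 2.0 * (t2 - t1) ** 2 * abs(t2 + 2.0 * t1)) < 1e-9
        # ... which is bounded by C_R (1 + |t2|) (t2 - t1)^2, i.e. (Hyp8) with r = 2, s = 1.
        assert lhs <= C_R * (1.0 + abs(t2)) * (t2 - t1) ** 2 + 1e-9

print("(Hyp8) verified on the grid with r = 2 and s = 1")
```

The same script with $f$ and $F''$ replaced by the derivatives of $ct^m$ can be used to probe the admissible exponents $r$ and $s$ for other values of $m$.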
This allows us to handle the Thomas-Fermi kinetic energy functional ($m=\frac 53$) as well as the repulsive interaction in Bose-Einstein condensates ($m=2$). \begin{remark} Assumption (\ref{eq:Hyp7}) is sharp for $d=3$; it is not needed for $d=1$, and for $d=2$ it can be replaced with the weaker assumption that there exist $q < \infty$ and $C \in \mathbb{R}_+$ such that $|F'(t)| \le C(1+t^q)$ for all $t \in \mathbb{R}_+$. Likewise, the condition $0 \le s \le 5-r$ in assumption (\ref{eq:Hyp8}) is sharp for $d=3$ but can be replaced with $0 \le s < \infty$ if $d=1$ or $d=2$. \end{remark} To simplify the notation, we set $f(t)=F'(t)$. Making the change of variable $\rho=v^2$ and noticing that $a(|v|,|v|) = a(v,v)$ for all $v \in X$, it is easy to check that \begin{equation} \label{eq:min_pb_rho} I = \inf \left\{ {\cal E}(\rho), \; \rho \ge 0, \; \sqrt\rho \in X, \; \int_{\Omega} \rho = 1 \right\}, \end{equation} where $$ {\cal E}(\rho) = \frac 12 a(\sqrt\rho,\sqrt\rho) + \frac 12 \int_\Omega F(\rho). $$ We will see that under assumptions (\ref{eq:Hyp1})-(\ref{eq:Hyp7}), (\ref{eq:min_pb_rho}) has a unique solution $\rho_0$ and (\ref{eq:min_pb_u}) has exactly two solutions: $u = \sqrt{\rho_0}$ and $-u$. Moreover, $E$ is $C^1$ on $X$ and for all $v \in X$, $E'(v) = A_vv$ where $$ A_v = -\mbox{div}(A \nabla \cdot) + V + f(v^2). $$ Note that $A_v$ defines a self-adjoint operator on $L^2(\Omega)$, with form domain $X$. The function $u$ is therefore a solution to the Euler equation \begin{equation} \label{eq:Euler} \forall v \in X, \quad \langle A_uu-\lambda u,v \rangle_{X',X} = 0 \end{equation} for some $\lambda \in \mathbb{R}$ (the Lagrange multiplier of the constraint $\|u\|_{L^2}^2 = 1$) and equation~(\ref{eq:Euler}), complemented with the constraint $\|u\|_{L^2} = 1$, takes the form of the nonlinear eigenvalue problem \begin{equation}\label{eq:10} \left\{ \begin{array}{l} A_u u = \lambda u \\ \| u\|_{L^2} = 1. \end{array} \right.
\end{equation} In addition, $u \in C^0(\overline{\Omega})$, $u > 0$ in $\Omega$ and $\lambda$ is the ground state eigenvalue of the linear operator $A_u$. An important result is that $\lambda$ is a {\em simple} eigenvalue of $A_u$. It is interesting to note that $\lambda$ is also the ground state eigenvalue of the {\em nonlinear} eigenvalue problem \begin{equation} \label{eq:nonlinear_eigenvalue_pb_0} \left\{ \begin{array}{l} \mbox{find } (\mu,v) \in \mathbb{R} \times X \mbox{ such that} \\ A_v v = \mu v \\ \| v \|_{L^2} = 1, \end{array} \right. \end{equation} in the following sense: if $(\mu,v)$ is a solution to (\ref{eq:nonlinear_eigenvalue_pb_0}) then either $\mu > \lambda$ or $\mu=\lambda$ and $v= \pm u$. All these properties, except perhaps the last one, are classical. For the sake of completeness, their proofs are nevertheless given in the Appendix. Let us now turn to the main topic of this article, namely the derivation of {\it a priori} error estimates for variational approximations of the ground state eigenpair $(\lambda,u)$. We denote by $(X_\delta)_{\delta > 0}$ a family of finite-dimensional subspaces of $X$ such that \begin{equation} \label{eq:densite} \min \left\{ \|u-v_\delta\|_{H^1}, \; v_\delta \in X_\delta \right\} \; \mathop{\longrightarrow}_{\delta \to 0^+} \; 0 \end{equation} and consider the variational approximation of (\ref{eq:min_pb_u}) consisting in solving \begin{equation} \label{eq:min_pb_u_delta} I_\delta = \inf \left\{ E(v_\delta), \; v_\delta \in X_\delta, \; \int_{\Omega} v_\delta^2 = 1 \right\}. \end{equation} Problem (\ref{eq:min_pb_u_delta}) has at least one minimizer $u_\delta$, which satisfies \begin{equation}\label{eq:14} \forall v_\delta \in X_\delta, \quad \langle A_{u_\delta}u_\delta-\lambda_\delta u_\delta,v_\delta \rangle_{X',X} = 0 \end{equation} for some $\lambda_\delta \in \mathbb{R}$. Obviously, $-u_\delta$ is also a minimizer associated with the same eigenvalue $\lambda_\delta$.
On the other hand, it is not known whether $u_\delta$ and $-u_\delta$ are the only minimizers of (\ref{eq:min_pb_u_delta}). One of the reasons why the argument used in the infinite-dimensional setting cannot be transposed to the discrete case is that the set $$ \left\{\rho \; | \; \exists u_\delta \in X_\delta \mbox{ s.t. } \|u_\delta \|_{L^2} = 1, \; \rho = u_\delta ^2 \right\} $$ is not convex in general. We will see however (cf. Theorem~\ref{Th:basic}) that for any family $(u_\delta)_{\delta > 0}$ of global minimizers of (\ref{eq:min_pb_u_delta}) such that $(u,u_\delta)_{L^2} \ge 0$ for all $\delta > 0$, the following holds: $$ \|u_\delta-u\|_{H^1} \mathop{\longrightarrow}_{\delta \to 0^+} \; 0. $$ In addition, a simple calculation leads to \begin{equation} \label{eq:estimate_eigenvalue} \lambda_\delta-\lambda = \langle (A_u-\lambda)(u_\delta-u),(u_\delta-u) \rangle_{X',X} + \int_\Omega w_{u,u_\delta} (u_\delta-u) \end{equation} where $$ w_{u,u_\delta} = u_\delta^2 \frac{f(u_\delta^2)-f(u^2)}{u_\delta-u}. $$ The first term of the right-hand side of (\ref{eq:estimate_eigenvalue}) is nonnegative and goes to zero as $\|u_\delta-u\|_{H^1}^2$. We will prove in Theorem~\ref{Th:basic} that the second term goes to zero at least as $\|u_\delta-u\|_{L^{6/(5-2q)}}$. Therefore, $|\lambda_\delta-\lambda|$ converges to zero with $\delta$ at least as $\|u_\delta-u\|_{H^1}^2+\|u_\delta-u\|_{L^{6/(5-2q)}}$. The purpose of this article is to provide more precise {\it a priori} error bounds on $|\lambda_\delta-\lambda|$, as well as on $\|u_\delta-u\|_{H^1}$, $\|u_\delta-u\|_{L^2}$ and $E(u_\delta)-E(u)$. In Section~\ref{sec:basic}, we prove a series of estimates valid in the general framework described above. We then turn to more specific examples, where the analysis can be pushed further.
In Section~\ref{sec:Fourier}, we concentrate on the discretization of problem (\ref{eq:min_pb_u}) with \begin{eqnarray} && \Omega=(0,2\pi)^d, \nonumber \\ && X=H^1_\#(0,2\pi)^d, \nonumber \\ && E(v) = \frac 12 \int_\Omega |\nabla v|^2 + \frac 12 \int_\Omega V v^2 + \frac {1}{2} \int_\Omega F(v^2), \nonumber \end{eqnarray} in Fourier modes. In Section~\ref{sec:FE}, we deal with the ${\mathbb P}_1$ and ${\mathbb P}_2$ finite element discretizations of problem (\ref{eq:min_pb_u}) with \begin{eqnarray} && \Omega \mbox{ rectangular brick of $\mathbb{R}^d$}, \nonumber \\ && X=H^1_0(\Omega), \nonumber \\ && E(v) = \frac 12 \int_\Omega |\nabla v|^2 + \frac 12 \int_\Omega V v^2 + \frac {1}{2} \int_\Omega F(v^2). \nonumber \end{eqnarray} Lastly, we discuss the issue of numerical integration in Section~\ref{sec:integration}. \section{Basic error analysis} \label{sec:basic} The aim of this section is to establish error bounds on $\|u_\delta-u\|_{H^1}$, $\|u_\delta-u\|_{L^2}$, $|\lambda_\delta-\lambda|$ and $E(u_\delta)-E(u)$, in a general framework. In the whole section, we make the assumptions (\ref{eq:Hyp1})-(\ref{eq:Hyp7p}) and (\ref{eq:densite}), and we denote by $u$ the unique positive solution of (\ref{eq:min_pb_u}) and by $u_\delta$ a minimizer of the discretized problem (\ref{eq:min_pb_u_delta}) such that $(u_\delta,u)_{L^2} \ge 0$. We also introduce the bilinear form $E''(u)$ defined on $X \times X$ by $$ \langle E''(u)v,w \rangle_{X',X} = \langle A_u v,w \rangle_{X',X} + 2 \, \int_\Omega f'(u^2)u^2 vw. $$ When $F \in C^2([0,+\infty),\mathbb{R})$, then $E$ is twice differentiable on $X$ and $E''(u)$ is the second derivative of $E$ at $u$. 
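Since this bilinear form is used repeatedly below, let us record the formal computation behind it (assuming $F$ smooth enough, with $f = F'$ as above):

```latex
% First and second variations of E(v) = (1/2) a(v,v) + (1/2) \int_\Omega F(v^2):
\begin{eqnarray*}
\langle E'(v),w \rangle_{X',X} & = & a(v,w) + \int_\Omega f(v^2)\,vw
  \;=\; \langle A_v v,w \rangle_{X',X}, \\
\langle E''(v)w,w \rangle_{X',X} & = & a(w,w) + \int_\Omega f(v^2)\,w^2
  + 2\int_\Omega f'(v^2)\,v^2w^2 \\
& = & \langle A_v w,w \rangle_{X',X} + 2\int_\Omega f'(v^2)\,v^2w^2,
\end{eqnarray*}
% which, evaluated at v = u, is precisely the bilinear form E''(u) defined above.
```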
\begin{lem} \label{lem:technical} There exists $\beta > 0$ and $M \in \mathbb{R}_+$ such that for all $v \in X$, \begin{eqnarray} 0 & \le & \langle (A_u-\lambda) v,v \rangle_{X',X} \le M \|v\|_{H^1}^2 \label{eq:Au-lambda_2} \\ \beta \|v\|_{H^1}^2 & \le & \langle (E''(u)-\lambda) v,v \rangle_{X',X} \le M \|v\|_{H^1}^2.\label{eq:Euu-lambda} \end{eqnarray} There exists $\gamma > 0$ such that for all $\delta > 0$, \begin{equation} \gamma \|u_\delta-u\|_{H^1}^2 \le \langle (A_u-\lambda) (u_\delta-u),(u_\delta-u) \rangle_{X',X} . \label{eq:Au-lambda_1} \end{equation} \end{lem} \begin{proof} We have for all $v \in X$, $$ \langle (A_u-\lambda) v,v \rangle_{X',X} \le \|A\|_{L^\infty} \|\nabla v\|_{L^2}^2 + \|V\|_{L^p} \|v\|_{L^{2p'}}^2 + \|f(u^2)\|_{L^\infty} \|v\|_{L^2}^2 $$ where $p'=(1-p^{-1})^{-1}$ and $$ \langle (E''(u)-\lambda) v,v \rangle_{X',X} \le \langle (A_u-\lambda) v,v \rangle_{X',X} + 2 \|f'(u^2)u^2\|_{L^\infty} \|v\|_{L^2}^2. $$ Hence the upper bounds in (\ref{eq:Au-lambda_2}) and (\ref{eq:Euu-lambda}). We now use the fact that $\lambda$, the lowest eigenvalue of $A_u$, is simple (see Lemma~\ref{lem:theory} in the Appendix). This implies that there exists $\eta > 0$ such that \begin{equation} \label{eq:gap} \forall v \in X, \quad \langle (A_u-\lambda) v,v \rangle_{X',X} \ge \eta (\|v\|_{L^2}^2-|(u,v)_{L^2}|^2) \ge 0. \end{equation} This provides on the one hand the lower bound (\ref{eq:Au-lambda_2}), and leads on the other hand to the inequality $$ \forall v \in X, \quad \langle (E''(u)-\lambda) v,v \rangle_{X',X} \ge 2\int_{\Omega} f'(u^2)u^2 v^2. $$ As $f' =F'' > 0$ in $(0,+\infty)$ and $u > 0$ in $\Omega$, we therefore have $$ \forall v \in X \setminus \left\{0\right\}, \quad \langle (E''(u)-\lambda) v,v \rangle_{X',X} > 0. 
$$ Reasoning by contradiction, we deduce from the above inequality and the first inequality in (\ref{eq:gap}) that there exists $\widetilde \eta > 0$ such that \begin{equation} \label{eq:borne_inf_1} \forall v \in X, \quad \langle (E''(u)-\lambda) v,v \rangle_{X',X} \ge \widetilde \eta \|v\|_{L^2}^2. \end{equation} Besides, there exists a constant $C \in \mathbb{R}_+$ such that \begin{equation} \label{eq:borne_inf_2} \forall v \in X, \quad \langle (A_u-\lambda) v,v \rangle_{X',X} \ge \frac{\alpha}2 \|\nabla v\|_{L^2}^2 - C \|v\|_{L^2}^2. \end{equation} Let us establish this inequality for $d=3$ (the case when $d=1$ is straightforward and the case when $d=2$ can be dealt with in the same way). For all $v \in X$, \begin{eqnarray*} \langle (A_u-\lambda) v,v \rangle_{X',X} & = & \int_\Omega (A\nabla v) \cdot \nabla v + \int_\Omega (V+f(u^2)-\lambda) v^2 \nonumber \\ & \ge & \alpha \|\nabla v\|_{L^2}^2 - \|V\|_{L^p} \|v\|_{L^{2p'}}^2 + (f(0)-\lambda) \|v\|_{L^2}^2 \nonumber \\ & \ge & \alpha \|\nabla v\|_{L^2}^2 - \|V\|_{L^p} \|v\|_{L^2}^{2-3/p} \|v\|_{L^6}^{3/p} + (f(0)-\lambda) \|v\|_{L^2}^2 \nonumber \\ & \ge & \alpha \|\nabla v\|_{L^2}^2 - C_6^{3/p} \|V\|_{L^p} \|v\|_{L^2}^{2-3/p} \|v\|_{H^1}^{3/p} + (f(0)-\lambda) \|v\|_{L^2}^2 \nonumber \\ & \ge & \frac \alpha 2 \|\nabla v\|_{L^2}^2 + \left(f(0) - \lambda - \frac{3-2p}{2p} \left( \frac{3C_6^2\|V\|_{L^p}^{2p/3}}{p\alpha} \right)^{3/(2p-3)} - \frac \alpha 2 \right) \|v\|_{L^2}^2, \end{eqnarray*} where $C_6$ is the Sobolev constant such that $\forall v \in X$, $\|v\|_{L^6} \le C_6 \|v\|_{H^1}$. The coercivity of $E''(u)-\lambda$ (i.e. the lower bound in (\ref{eq:Euu-lambda})) is a straightforward consequence of (\ref{eq:borne_inf_1}) and (\ref{eq:borne_inf_2}). To prove (\ref{eq:Au-lambda_1}), we notice that $$ \|u_\delta\|_{L^2}^2-|(u,u_\delta)_{L^2}|^2 \ge 1-(u,u_\delta)_{L^2}=\frac 12 \|u_\delta-u\|_{L^2}^2.
$$ It therefore readily follows from (\ref{eq:gap}) that $$ \langle (A_u-\lambda) (u_\delta-u),(u_\delta-u) \rangle_{X',X} \ge \frac{\eta}2 \|u_\delta-u\|_{L^2}^2. $$ Combining with (\ref{eq:borne_inf_2}), we finally obtain (\ref{eq:Au-lambda_1}). \end{proof} For $w \in X'$, we denote by $\psi_w$ the unique solution to the adjoint problem \begin{equation} \label{eq:adjoint} \left\{ \begin{array}{l} \mbox{find } \psi_w \in u^\perp \mbox{ such that} \\ \forall v \in u^\perp, \quad \langle (E''(u)-\lambda) \psi_w,v \rangle_{X',X} = \langle w,v \rangle_{X',X}, \end{array} \right. \end{equation} where $$ u^\perp = \left\{ v \in X \; | \; \int_\Omega uv = 0 \right\}. $$ The existence and uniqueness of the solution to (\ref{eq:adjoint}) is a straightforward consequence of (\ref{eq:Euu-lambda}) and the Lax-Milgram lemma. Besides, \begin{equation} \label{eq:bound_psi} \forall w \in L^2(\Omega), \quad \|\psi_w\|_{H^1} \le \beta^{-1}M \|w\|_{X'} \le \beta^{-1}M \|w\|_{L^2}. \end{equation} We can now state the main result of this section. \begin{thm} \label{Th:basic} Under assumptions (\ref{eq:Hyp1})-(\ref{eq:Hyp7}) and (\ref{eq:densite}), it holds $$ \|u_\delta-u\|_{H^1} \mathop{\longrightarrow}_{\delta \to 0^+} \; 0. $$ If in addition, (\ref{eq:Hyp7p}) is satisfied, then there exists $C \in \mathbb{R}_+$ such that for all $\delta > 0$, \begin{equation} \label{eq:error_energy} \frac \gamma 2 \|u_\delta-u\|_{H^1}^2 \le E(u_\delta)-E(u) \le \frac M 2 \|u_\delta-u\|_{H^1}^2 + C \|u_\delta-u\|_{L^{6/(5-2q)}}, \end{equation} and \begin{equation} |\lambda_\delta-\lambda| \le C \left( \|u_\delta-u\|_{H^1}^2 + \|u_\delta-u\|_{L^{6/(5-2q)}} \right). 
\label{eq:error_lambda} \end{equation} Besides, if assumption (\ref{eq:Hyp8}) is satisfied for some $1 < r \le 2$ and $0 \le s \le 5-r$, then there exist $\delta_0 > 0$ and $C \in \mathbb{R}_+$ such that for all $0 < \delta < \delta_0$, \begin{eqnarray} \|u_\delta-u\|_{H^1} & \le & C \min_{v_\delta \in X_\delta} \|v_\delta-u\|_{H^1} \label{eq:error_H1} \\ \|u_\delta-u\|_{L^2}^2 & \le & C \, \bigg( \|u_\delta-u\|_{L^2} \|u_\delta-u\|_{L^{6r/(5-s)}}^r \nonumber \\ & & \qquad + \|u_\delta-u\|_{H^1} \min_{\psi_\delta \in X_\delta} \|\psi_{u_\delta-u}-\psi_\delta\|_{H^1} \bigg). \label{eq:error_L2} \end{eqnarray} Lastly, if $F''$ is bounded in the vicinity of $0$, there exists $C \in \mathbb{R}_+$ such that for all $\delta > 0$, \begin{equation} \label{eq:error_energy_2} \frac \gamma 2 \|u_\delta-u\|_{H^1}^2 \le E(u_\delta)-E(u) \le C \|u_\delta-u\|_{H^1}^2. \end{equation} \end{thm} \begin{remark} If $0 \le r+s \le 3$, then $$ \|u_\delta-u\|_{L^{6r/(5-s)}}^r \le \|u_\delta-u\|_{L^2}^{(5-r-s)/2} \|u_\delta-u\|_{L^6}^{(3r-5+s)/2} \le \|u_\delta-u\|_{L^2} \|u_\delta-u\|_{H^1}^{r-1}, $$ so that (\ref{eq:error_L2}) implies the simpler inequality \begin{equation} \label{eq:error_L2_simpler} \|u_\delta-u\|_{L^2}^2 \le C \|u_\delta-u\|_{H^1} \min_{\psi_\delta \in X_\delta} \|\psi_{u_\delta-u}-\psi_\delta\|_{H^1}. \end{equation} \end{remark} \begin{proof}{\bf $\!\!\!\!\!$ of Theorem~\ref{Th:basic}} We have \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! E(u_\delta ) - E(u) & = & \frac 12 \langle A_u u_\delta ,u_\delta \rangle_{X',X} - \frac 12 \langle A_u u,u \rangle_{X',X} \nonumber \\ & & + \frac 12 \int_\Omega F\left( u_\delta ^2 \right) - F\left( u^2 \right) - f\left( u^2 \right) (u_\delta ^2-u^2) \nonumber \\ & = & \frac 12 \langle (A_u-\lambda) (u_\delta-u) ,(u_\delta-u) \rangle_{X',X} \nonumber \\ & & + \frac 12 \int_\Omega F\left( u_\delta ^2 \right) - F\left( u^2 \right) - f\left( u^2 \right) (u_\delta ^2-u^2).
\label{eq:Eudelta-Eu} \end{eqnarray} Using (\ref{eq:Au-lambda_1}) and the convexity of $F$, we get $$ E(u_\delta ) - E(u) \ge \frac \gamma 2 \|u_\delta -u\|_{H^1}^2. $$ Let $\Pi_\delta u \in X_\delta$ be such that $$ \|u-\Pi_\delta u\|_{H^1} = \min \left\{ \|u-v_\delta\|_{H^1}, v_\delta \in X_\delta \right\}. $$ We deduce from (\ref{eq:densite}) that $(\Pi_\delta u)_{\delta > 0}$ converges to $u$ in $X$ when $\delta$ goes to zero. Denoting by $\widetilde u_\delta = \|\Pi_\delta u\|_{L^2}^{-1} \Pi_\delta u$ (which is well defined, at least for $\delta$ small enough), we also have $$ \lim_{\delta \to 0^+} \|\widetilde u_\delta - u \|_{H^1} = 0. $$ The functional $E$ being strongly continuous on $X$, we obtain $$ \|u_\delta -u\|_{H^1}^2 \le \frac 2\gamma \left(E(u_\delta ) - E(u)\right) \le \frac 2\gamma \left(E(\widetilde u_\delta ) - E(u)\right) \mathop{\longrightarrow}_{\delta \to 0^+} 0. $$ It follows that there exists $\delta_1 > 0$ such that $$ \forall 0 < \delta \le \delta_1, \quad \|u_\delta\|_{H^1} \le 2 \|u\|_{H^1}, \quad \|u_\delta-u\|_{H^1} \le \frac 12. $$ We then easily deduce from (\ref{eq:Eudelta-Eu}) the upper bounds in (\ref{eq:error_energy}) and (\ref{eq:error_energy_2}).
Next, we remark that \begin{eqnarray} \lambda_\delta-\lambda & = &\langle E'(u_\delta ),u_\delta \rangle_{X',X} - \langle E'(u),u \rangle_{X',X} \nonumber \\ &=& a ( u_\delta, u_\delta) - a(u,u) + \int_\Omega f(u_\delta^2)u_\delta^2 - \int_\Omega f(u^2)u^2 \nonumber \\ & =& a(u_\delta-u ,u_\delta-u) + 2 a(u, u_\delta-u) + \int_\Omega f(u_\delta^2)u_\delta^2 - \int_\Omega f(u^2)u^2 \nonumber \\ &=&a (u_\delta-u ,u_\delta-u) + 2\lambda \int_\Omega u (u_\delta-u) - 2\int_\Omega f(u^2)u(u_\delta-u) \nonumber \\ & & \qquad + \int_\Omega f(u_\delta^2)u_\delta^2 - \int_\Omega f(u^2)u^2 \nonumber \\ &=& a (u_\delta-u ,u_\delta-u) - \lambda \|u_\delta-u\|_{L^2}^2 - 2\int_\Omega f(u^2)u(u_\delta-u) \nonumber \\ & & \qquad + \int_\Omega f(u_\delta^2)u_\delta^2 - \int_\Omega f(u^2)u^2 \nonumber \\ &= & \langle (A_u-\lambda)(u_\delta-u ),(u_\delta-u ) \rangle_{X',X} + \int_\Omega w_{u,u_\delta } (u_\delta-u ) \label{Cal_lambda} \end{eqnarray} where $$ w_{u,u_\delta} = u_\delta^2 \frac{f(u_\delta^2)-f(u^2)}{u_\delta-u}. $$ As $u \in L^\infty(\Omega)$, we have $$ |w_{u,u_\delta}| \le \left| \begin{array}{lll} \dps 12 u \sup_{t \in (0,4\|u\|_{L^\infty}^2]} F''(t)t & \quad & \mbox{if } |u_\delta| < 2 u \\ \dps 2 \left( |f(u_\delta^2)|+\max_{t \in [0,\|u\|_{L^\infty}^2]} |f(t)| \right) |u_\delta| & \quad & \mbox{if } |u_\delta| \ge 2 u, \end{array} \right. $$ and we deduce from assumptions (\ref{eq:Hyp7})-(\ref{eq:Hyp7p}) that $$ |w_{u,u_\delta}| \le C (1+|u_\delta|^{2q+1}), $$ for some constant $C$ independent of $\delta$. 
Using (\ref{eq:Au-lambda_2}), we therefore obtain that for all $0 < \delta \le \delta_1$, \begin{eqnarray} |\lambda_\delta-\lambda| & \le & M \| u_\delta-u \|_{H^1}^2 + \| w_{u,u_\delta}\|_{L^{6/(2q+1)}} \|u_\delta-u\|_{L^{6/(5-2q)}} \nonumber \\ & \le & M \| u_\delta-u \|_{H^1}^2 + C (1+\|u_\delta\|_{H^1}^{2q+1}) \|u_\delta-u\|_{L^{6/(5-2q)}} \nonumber \\ & \le & C \left( \| u_\delta-u \|_{H^1}^2 + \|u_\delta-u\|_{L^{6/(5-2q)}} \right), \label{eq:estim_36} \end{eqnarray} where $C$ denotes constants independent of $\delta$. In order to evaluate the $H^1$-norm of the error $u_\delta-u$, we first notice that \begin{equation}\label{eq:4:1:H1} \forall v_\delta \in X_\delta, \quad \| u_\delta-u \|_{H^1} \le \| u_\delta - v_\delta \|_{H^1} +\| v_\delta - u \|_{H^1}, \end{equation} and that \begin{eqnarray} \| u_\delta - v_\delta \|_{H^1}^2 & \le & \beta^{-1} \, \langle (E''(u) - \lambda)(u_\delta - v_\delta ), (u_\delta - v_\delta ) \rangle_{X',X} \nonumber \\ & = & \beta^{-1} \bigg( \langle (E''(u) - \lambda)(u_\delta - u), (u_\delta - v_\delta ) \rangle_{X',X} \nonumber \\ & & \quad \qquad +\langle (E''(u) - \lambda)(u- v_\delta ),(u_\delta - v_\delta ) \rangle_{X',X} \bigg). \label{eq:4:3:H1} \end{eqnarray} For all $w_\delta \in X_\delta$ \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! & & \!\!\!\!\!\! \langle( E''(u) - \lambda)(u_\delta-u),w_\delta \rangle_{X',X} \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! & & = - \int_\Omega \left( f(u^2_\delta) u_\delta -f(u^2)u_\delta - 2 f'(u^2)u^2(u_\delta-u) \right) w_\delta +(\lambda_\delta- \lambda)\int_\Omega u_\delta w_\delta. \label{eq:new36} \end{eqnarray} On the other hand, we have for all $v_\delta \in X_\delta$ such that $\|v_\delta\|_{L^2}=1$, $$ \int_\Omega u_\delta (u_\delta-v_\delta) = 1 - \int_\Omega u_\delta v_\delta = \frac 12 \|u_\delta-v_\delta\|_{L^2}^2. 
$$ Using (\ref{eq:Hyp8}) and (\ref{eq:estim_36}), we therefore obtain that for all $0 < \delta \le \delta_1$ and all $v_\delta \in X_\delta$ such that $\|v_\delta\|_{L^2}=1$, \begin{eqnarray} \left| \langle (E''(u) - \lambda)(u_\delta-u),(u_\delta-v_\delta) \rangle_{X',X}\right| & \le & C \bigg( \|u_\delta-u\|_{L^{6r/(5-s)}}^r \| u_\delta-v_\delta \|_{H^1} \nonumber \\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \left( \|u_\delta-u\|_{H^1}^2 + \|u_\delta-u\|_{L^{6/(5-2q)}} \right) \| u_\delta-v_\delta \|_{L^2}^2 \bigg). \label{eq:4:3:H2} \end{eqnarray} It then follows from (\ref{eq:Euu-lambda}), (\ref{eq:4:3:H1}) and (\ref{eq:4:3:H2}) that for all $0 < \delta \le \delta_1$ and all $v_\delta \in X_\delta$ such that $\|v_\delta\|_{L^2}=1$, \begin{eqnarray*} \| u_\delta - v_\delta \|_{H^1} & \le & C \left( \|u_\delta-u\|_{H^1}^r + \|u_\delta-u\|_{H^1} \| u_\delta - v_\delta \|_{H^1} + \|v_\delta-u \|_{H^1} \right) . \end{eqnarray*} Combining with (\ref{eq:4:1:H1}) we obtain that there exists $0 < \delta_2 \le \delta_1$ and $C \in \mathbb{R}_+$ such that for all $0 < \delta \le \delta_2$ and all $v_\delta \in X_\delta$ such that $\|v_\delta\|_{L^2}=1$, $$ \|u_\delta -u \|_{H^1} \le C \|v_\delta-u \|_{H^1}. $$ Hence, for all $0 < \delta \le \delta_2$ $$ \|u_\delta -u \|_{H^1} \le C J_\delta \qquad \mbox{where} \qquad J_\delta = \min_{v_\delta \in X_\delta \, | \, \|v_\delta\|_{L^2}=1} \|v_\delta-u \|_{H^1}. $$ We now denote by $$ \widetilde J_\delta = \min_{v_\delta \in X_\delta} \|v_\delta-u \|_{H^1}, $$ and by $u_\delta^0$ a minimizer of the above minimization problem. We know from (\ref{eq:densite}) that $u_\delta^0$ converges to $u$ in $H^1$ when $\delta$ goes to zero. 
Besides, \begin{eqnarray*} J_\delta & \le & \|u_\delta^0/\|u_\delta^0\|_{L^2} - u \|_{H^1} \\ & \le & \|u_\delta^0 - u \|_{H^1} + \frac{\|u_\delta^0\|_{H^1}}{\|u_\delta^0\|_{L^2}} \left| 1- \|u_\delta^0\|_{L^2} \right| \\ & \le & \|u_\delta^0 - u \|_{H^1} + \frac{\|u_\delta^0\|_{H^1}}{\|u_\delta^0\|_{L^2}} \|u-u_\delta^0\|_{L^2} \\ & \le & \left( 1 + \frac{\|u_\delta^0\|_{H^1}}{\|u_\delta^0\|_{L^2}} \right) \widetilde J_\delta. \end{eqnarray*} For $0 < \delta \le \delta_2 \le \delta_1$, we have $\|u_\delta^0-u\|_{H^1} \le \|u_\delta-u\|_{H^1} \le 1/2$, and therefore $\|u_\delta^0\|_{H^1} \le \|u\|_{H^1}+1/2$ and $\|u_\delta^0\|_{L^2} \ge 1/2$, yielding $J_\delta \le 2(\|u\|_{H^1}+1) \widetilde J_\delta$. Thus (\ref{eq:error_H1}) is proved. Let $u_\delta ^*$ be the orthogonal projection, for the $L^2$ inner product, of $u_\delta$ on the affine space $\left\{v \in L^2(\Omega) \, | \, \int_\Omega uv=1 \right\}$. One has $$ u_\delta ^* \in X, \qquad u_\delta ^*-u \in u^\perp, \qquad u_\delta^*-u_\delta = \frac 12 \|u_\delta-u \|_{L^2}^2 u, $$ from which we infer that \begin{eqnarray*} \|u_\delta -u\|_{L^2}^2 & = & \int_\Omega (u_\delta-u )(u_\delta ^*-u) + \int_\Omega (u_\delta-u)(u_\delta - u_\delta ^*) \\ & = & \int_\Omega (u_\delta-u)(u_\delta^*-u) - \frac 12 \|u_\delta-u\|_{L^2}^2 \int_\Omega (u_\delta-u) u \\ & = & \int_\Omega (u_\delta-u)(u_\delta^*-u) + \frac 12 \|u_\delta-u\|_{L^2}^2 \left( 1 - \int_\Omega u_\delta u \right) \\ & = & \int_\Omega (u_\delta-u)(u_\delta^*-u) + \frac 14 \|u_\delta-u\|_{L^2}^4 \\ & = & \langle u_\delta-u ,u_\delta^*-u \rangle_{X',X} + \frac 14 \|u_\delta-u \|_{L^2}^4 \\ & = & \langle (E''(u)-\lambda) \psi_{u_\delta-u}, u_\delta^*-u \rangle_{X',X} + \frac 14 \|u_\delta-u \|_{L^2}^4 \\ & = & \langle (E''(u)-\lambda) (u_\delta-u), \psi_{u_\delta-u} \rangle_{X',X} \\ & & + \frac 12 \|u_\delta-u\|_{L^2}^2 \langle (E''(u)-\lambda) u , \psi_{u_\delta-u} \rangle_{X',X} + \frac 14 \|u_\delta-u \|_{L^2}^4 \\ & = & \langle (E''(u)-\lambda) 
(u_\delta-u), \psi_{u_\delta-u} \rangle_{X',X} \\ & & + \|u_\delta-u\|_{L^2}^2 \int_\Omega f'(u^2) u^3 \psi_{u_\delta-u} + \frac 14 \|u_\delta-u \|_{L^2}^4 . \end{eqnarray*} For all $\psi_\delta \in X_\delta$, it therefore holds \begin{eqnarray*} \|u_\delta-u\|_{L^2}^2 & = & \langle (E''(u)-\lambda) (u_\delta-u),\psi_\delta \rangle_{X',X} \\ & & + \langle (E''(u)-\lambda)(u_\delta-u),\psi_{u_\delta-u}-\psi_\delta \rangle_{X',X} \\ & & + \|u_\delta-u\|_{L^2}^2 \int_\Omega f'(u^2) u^3 \psi_{u_\delta-u} + \frac 14 \|u_\delta-u \|_{L^2}^4 . \end{eqnarray*} From (\ref{eq:new36}), we obtain that for all $\psi_\delta \in X_\delta \cap u^\perp$, \begin{eqnarray*} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! & & \!\!\!\!\!\! \langle( E''(u) - \lambda)(u_\delta-u),\psi_\delta \rangle_{X',X} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! & & = - \int_\Omega \left( f(u^2_\delta) u_\delta -f(u^2)u_\delta - 2 f'(u^2)u^2(u_\delta-u) \right) \psi_\delta +(\lambda_\delta- \lambda)\int_\Omega (u_\delta-u) \psi_\delta \end{eqnarray*} and therefore that for all $\psi_\delta \in X_\delta \cap u^\perp$, \begin{eqnarray} \left| \langle (E''(u) - \lambda)(u_\delta-u),\psi_\delta \rangle_{X',X}\right| & \le & C \bigg( \|u_\delta-u\|_{L^{6r/(5-s)}}^r \nonumber \\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \| u_\delta-u \|_{L^{6/5}} \left( \|u_\delta-u\|_{H^1}^2 + \|u_\delta-u\|_{L^{6/(5-2q)}} \right) \bigg) \|\psi_\delta\|_{H^1} . \label{eq:new37} \end{eqnarray} Let $\psi_\delta^0 \in X_\delta \cap u^\perp$ be such that $$ \|\psi_{u_\delta-u}-\psi_\delta^0\|_{H^1} = \min_{\psi_\delta \in X_\delta \cap u^\perp} \|\psi_{u_\delta-u}-\psi_\delta\|_{H^1}. 
$$ Noticing that $\|\psi_\delta^0\|_{H^1} \le \|\psi_{u_\delta-u}\|_{H^1} \le \beta^{-1} M\|u_\delta-u\|_{L^2}$, we obtain from (\ref{eq:Euu-lambda}) and (\ref{eq:new37}) that there exists $C \in \mathbb{R}_+$ such that for all $0 < \delta \le \delta_1$, \begin{eqnarray*} \|u_\delta-u\|_{L^2}^2 &\le & C \bigg( \|u_\delta-u\|_{L^2} \\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \times \left( \|u_\delta-u\|_{L^{6r/(5-s)}}^r + \|u_\delta-u\|_{L^{6/5}} \left( \|u_\delta-u\|_{H^1}^2 + \|u_\delta-u\|_{L^{6/(5-2q)}} \right) \right) \\ & & + \|u_\delta-u\|_{H^1} \|\psi_{u_\delta-u}-\psi_\delta^0\|_{H^1} + \|u_\delta-u\|_{L^2}^3 + \|u_\delta-u\|_{L^2}^4 \bigg). \end{eqnarray*} Therefore, there exist $0 < \delta_0 \le \delta_2$ and $C \in \mathbb{R}_+$ such that for all $0 < \delta \le \delta_0$, $$ \|u_\delta-u\|_{L^2}^2 \le C \, \bigg( \|u_\delta-u\|_{L^2} \|u_\delta-u\|_{L^{6r/(5-s)}}^r + \|u_\delta-u\|_{H^1} \|\psi_{u_\delta-u}-\psi_\delta^0\|_{H^1} \bigg). $$ Lastly, denoting by $\Pi_{X_\delta}^0$ the orthogonal projector on $X_\delta$ for the $L^2$ inner product, a simple calculation leads to \begin{equation} \label{eq:minuperp} \forall v \in u^\perp, \quad \min_{v_\delta \in X_\delta \cap u^\perp} \|v_\delta - v \|_{H^1} \le \left( 1 + \frac{\|\Pi_{X_\delta}^0 u \|_{H^1}} {\|\Pi_{X_\delta}^0 u \|_{L^2}^2} \right) \min_{v_\delta \in X_\delta} \|v_\delta - v \|_{H^1}, \end{equation} which completes the proof of Theorem~\ref{Th:basic}. \end{proof} \begin{remark} \label{rem:negative_Sobolev} In the proof of Theorem~\ref{Th:basic}, we have obtained bounds on $|\lambda_\delta-\lambda|$ from (\ref{Cal_lambda}), using $L^p$ estimates on $w_{u,u_\delta}$ and $(u_\delta-u)$ to control the second term of the right hand side.
Remarking that \begin{eqnarray*} \nabla w_{u,u_\delta} &=& -u \, \frac{f(u^2)u-f(u_\delta^2)u-2f'(u_\delta^2)u_\delta^2(u-u_\delta)}{(u_\delta-u)^2} \, \nabla u_\delta \\ && -u_\delta \, \frac{f(u_\delta^2)u_\delta-f(u^2)u_\delta-2f'(u^2)u^2(u_\delta-u)}{(u_\delta-u)^2} \, \nabla u \\ && + 2uu_\delta \left(f'(u_\delta^2) \, \nabla u_\delta+f'(u^2)\nabla u\right) + 2u_\delta \,\frac{f(u_\delta^2)-f(u^2)}{u_\delta-u} \, \nabla u_\delta \, , \end{eqnarray*} we can see that if $u_\delta$ is uniformly bounded in $L^\infty(\Omega)$ and if $F$ satisfies (\ref{eq:Hyp8}) for $r=2$ and is such that $F''(t)t^{1/2}$ is bounded in the vicinity of $0$, then $w_{u,u_\delta}$ is uniformly bounded in $X$. It then follows from (\ref{Cal_lambda}) that $$ |\lambda_\delta-\lambda| \le C \left(\|u_\delta-u\|_{H^1}^2 + \|u_\delta-u\|_{X'} \right), $$ an estimate which is an improvement of (\ref{eq:error_lambda}). In the next two sections, we will see that this approach (or analogous strategies making use of negative Sobolev norms of higher orders) can be used in certain cases to obtain optimal estimates on $|\lambda_\delta-\lambda|$ of the form $$ |\lambda_\delta-\lambda| \le C \|u_\delta-u\|_{H^1}^2, $$ similar to what is obtained for the linear eigenvalue problem $-\Delta u + V u = \lambda u$. \end{remark} \section{Fourier expansion} \label{sec:Fourier} In this section, we consider the problem \begin{equation} \label{eq:Fourier} \inf \left\{ E(v), \; v \in X, \; \int_\Omega v^2=1 \right\}, \end{equation} where \begin{eqnarray} && \Omega=(0,2\pi)^d, \quad \mbox{with $d=1$, $2$ or $3$,} \nonumber \\ && X=H^1_\#(\Omega), \nonumber \\ && E(v) = \frac 12 \int_\Omega |\nabla v|^2 + \frac 12 \int_\Omega V v^2 + \frac {1}{2} \int_\Omega F(v^2).
\nonumber \end{eqnarray} We assume that $V \in H^\sigma_\#(\Omega)$ for some $\sigma > d/2$ and that the function $F$ satisfies (\ref{eq:Hyp6})-(\ref{eq:Hyp7p}), (\ref{eq:Hyp8}) for some $1 < r \le 2$ and $0 \le s \le 5-r$, and is in $C^{[\sigma]+1,\sigma-[\sigma]+\epsilon}((0,+\infty),\mathbb{R})$ (with the convention that $C^{k,0}=C^k$ if $k \in \mathbb{N}$). The positive solution $u$ to (\ref{eq:Fourier}), which satisfies the elliptic equation $$ -\Delta u + V u + f(u^2) u = \lambda u, $$ is then in $H^{\sigma+2}_\#(\Omega)$ and is bounded away from $0$. To obtain this result, we have used the fact \cite{Sickel} that if $s > d/2$, $g \in C^{[s],s-[s]+\epsilon}(\mathbb{R},\mathbb{R})$ and $v \in H^s_\#(\Omega)$, then $g(v) \in H^s_\#(\Omega)$. A natural discretization of (\ref{eq:Fourier}) consists in using a Fourier basis. Denoting by $e_k$ the function $e_k(x) = (2\pi)^{-d/2} e^{ik \cdot x}$, we have for all $v \in L^2(\Omega)$, $$ v(x) = \sum_{k\in \mathbb{Z}^d} \widehat v_k e_k(x), $$ where $\widehat v_k$ is the $k^{\rm th}$ Fourier coefficient of $v$: $$ \widehat v_k := \int_\Omega v(x) \, \overline{e_k(x)} \, dx = (2\pi)^{-d/2} \int_\Omega v(x) \, e^{-ik \cdot x} \, dx. $$ The approximation of the solution to (\ref{eq:Fourier}) by the spectral Fourier approximation is based on the choice $$ X_\delta = \widetilde X_N= \mbox{Span}\{ e_k, \; |k|_*\le N \}, $$ where $|k|_*$ denotes either the $l^2$-norm or the $l^\infty$-norm of $k$ (i.e. either $|k|=(\sum_{i=1}^d|k_i|^2)^{1/2}$ or $|k|_\infty=\max_{1 \le i \le d}|k_i|$). For convenience, the discretization parameter for this approximation will be denoted by $N$.
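To make the truncation mechanism concrete, here is a minimal pure-Python sketch in dimension one; the test function $v$ and the quadrature size \texttt{n\_quad} are hypothetical choices for this illustration, not data from the text, and the normalization $e_k(x)=(2\pi)^{-1/2}e^{ikx}$ matches the one above.

```python
import cmath
import math

# Illustrative 1-D sketch (d = 1, Omega = (0, 2*pi)); the test function v
# and the quadrature size n_quad are hypothetical choices for this example.
# We use e_k(x) = (2*pi)^{-1/2} e^{ikx}, as in the text.

def fourier_coeff(v, k, n_quad=256):
    # hat v_k = int_0^{2pi} v(x) conj(e_k(x)) dx, evaluated by the rectangle
    # rule, which is exact here because v is a trigonometric polynomial
    h = 2 * math.pi / n_quad
    s = sum(v(j * h) * cmath.exp(-1j * k * j * h) for j in range(n_quad))
    return s * h / math.sqrt(2 * math.pi)

def truncate(v, N, x):
    # the truncated series sum_{|k| <= N} hat v_k e_k(x)
    return sum(fourier_coeff(v, k) * cmath.exp(1j * k * x) / math.sqrt(2 * math.pi)
               for k in range(-N, N + 1))

v = lambda x: math.cos(3 * x) + 0.5 * math.sin(x)  # trig polynomial of degree 3
x0 = 0.7
err3 = abs(truncate(v, 3, x0) - v(x0))  # degree-3 truncation reproduces v
err2 = abs(truncate(v, 2, x0) - v(x0))  # degree-2 truncation drops cos(3x)
```

Since $v$ here is itself a trigonometric polynomial of degree $3$, the truncation at $N=3$ is exact up to round-off, while the truncation at $N=2$ leaves exactly the $\cos(3x)$ mode as the error.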
Endowing $H^r_\#(\Omega)$ with the norm defined by $$ \|v\|_{H^r} = \left( \sum_{k \in \mathbb{Z}^d} \left(1+|k|_*^2\right)^r |\widehat v_k|^2 \right)^{1/2}, $$ we obtain that for all $s \in \mathbb{R}$ and all $v \in H^s_\#(\Omega)$, the best approximation of $v$ in $\widetilde X_N$ for the $H^r$-norm, for any $r \le s$, is $$ \Pi_N v = \sum_{k\in \mathbb{Z}^d, \; |k|_* \le N} \widehat v_k e_k. $$ The more regular $v$ (the regularity being measured in terms of the Sobolev norms $H^s$), the faster the convergence of this truncated series to $v$: for all real numbers $r$ and $s$ with $r \le s$, we have \begin{equation} \label{eq:app-Fourier} \forall v\in H^s_\#(\Omega), \quad \|v - \Pi_N v\|_{H^r} \le \frac{1}{N^{s-r}} \|v\|_{H^s}. \end{equation} Let $u_N$ be a solution to the variational problem $$ \inf \left\{E(v_N), \; v_N \in \widetilde X_N, \; \int_\Omega v_N^2 = 1 \right\} $$ such that $(u_N,u)_{L^2} \ge 0$. Using (\ref{eq:app-Fourier}), we obtain $$ \|u-\Pi_N u\|_{H^1} \le \frac{1}{N^{\sigma+1}} \|u\|_{H^{\sigma+2}}, $$ and it therefore follows from the first assertion of Theorem~\ref{Th:basic} that $$ \lim_{N \to \infty} \|u_N-u\|_{H^1} = 0. $$ We then observe that $u_N$ is a solution to the elliptic equation \begin{equation} \label{eq:6N} -\Delta u_N + \Pi_N \left[ Vu_N+f(u_N^2)u_N \right] = \lambda_N u_N. \end{equation} Thus $u_N$ is uniformly bounded in $H^2_\#(\Omega)$, hence in $L^\infty(\Omega)$, and \begin{eqnarray} \label{eq:6Nbis} \Delta \left(u_N-u\right) & = & \Pi_N \left( V(u_N-u)+f(u_N^2)u_N-f(u^2)u \right) \nonumber \\ & & - (I-\Pi_N) (Vu+f(u^2)u) - \lambda_N(u_N-u) - (\lambda_N-\lambda) u.
\end{eqnarray} As $(u_N)_{N \in \mathbb{N}}$ is bounded in $L^\infty(\Omega)$ and converges to $u$ in $H^1_\#(\Omega)$, the right-hand side of the above equality converges to $0$ in $L^2_\#(\Omega)$, which implies that $(u_N)_{N \in \mathbb{N}}$ converges to $u$ in $H^2_\#(\Omega)$, and therefore in $C^0_\#(\Omega)$. In particular, $u/2 \le u_N \le 2u$ on $\Omega$ for $N$ large enough, so that we can assume in our analysis, without loss of generality, that $F$ satisfies (\ref{eq:Hyp7}) with $q=0$ and (\ref{eq:Hyp8}) with $r=2$ and $s=0$. We also deduce from (\ref{eq:6N}) that $u_N$ converges to $u$ in $H^{\sigma+2}_\#(\Omega)$. Besides, the unique solution $\psi_w$ to (\ref{eq:adjoint}) solves the elliptic equation \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && - \Delta \psi_w + \left( V+f(u^2)+2f'(u^2)u^2-\lambda \right) \psi_w \nonumber \\ & & \qquad\qquad = 2 \left(\int_\Omega f'(u^2)u^3\psi_w\right) u + w - (w,u)_{L^2} u, \label{eq:phi_w_Fourier} \end{eqnarray} from which we infer that $\psi_{u_N-u} \in H^2_\#(\Omega)$ and $\|\psi_{u_N-u}\|_{H^2} \le C \|u_N-u\|_{L^2}$. Hence, $$ \|\psi_{u_N-u}-\Pi_{N}\psi_{u_N-u}\|_{H^1} \le \frac{1}{N} \|\psi_{u_N-u}\|_{H^2} \le \frac{C}{N} \|u_N-u\|_{L^2}. $$ We therefore deduce from Theorem~\ref{Th:basic} that \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\! \|u_N-u\|_{H^s} & \le & \frac{C}{N^{\sigma+2-s}} \qquad \mbox{for } s=0 \mbox{ and } s=1 \label{eq:error_H1_F_0} \\ \!\!\!\!\!\!\!\!\!\!\! |\lambda_N-\lambda| & \le & \frac{C}{N^{\sigma+2}} \label{eq:error_lambda_F} \\ \!\!\!\!\!\!\!\!\!\!\! \frac \gamma 2 \|u_N-u\|_{H^1}^2 \le E(u_N)-E(u) & \le & C \|u_N-u\|_{H^1}^2.
\nonumber \end{eqnarray} From (\ref{eq:error_H1_F_0}) and the inverse inequality $$ \forall v_N \in \widetilde X_N, \quad \|v_N\|_{H^r} \le 2^{(r-s)/2} N^{r-s} \|v_N\|_{H^s}, $$ which holds true for all $s \le r$ and all $N \ge 1$, we then obtain using classical arguments that \begin{equation} \label{eq:error_H1_F} \|u_N-u\|_{H^s} \le \frac{C}{N^{\sigma+2-s}} \qquad \mbox{for all } 0 \le s < \sigma+2. \end{equation} The estimate (\ref{eq:error_lambda_F}) is slightly disappointing since, in the case of a linear eigenvalue problem (i.e. for $-\Delta u +V u = \lambda u$), the convergence of the eigenvalues goes twice as fast as the convergence of the eigenvector in the $H^1$-norm. We are going to prove that this is also the case for the nonlinear eigenvalue problem under study in this section, at least under the assumption that $F \in C^{[\sigma]+2,\sigma-[\sigma]+\epsilon}((0,+\infty),\mathbb{R})$. Let us first come back to (\ref{Cal_lambda}), which we rewrite as \begin{equation} \label{eq:lambda_N} \lambda_N-\lambda = \langle (A_u-\lambda)(u_N-u ),(u_N-u ) \rangle_{X',X} + \int_\Omega w_{u,u_N } (u_N-u ) \end{equation} with $$ w_{u,u_N} = u_N^2 \frac{f(u_N^2)-f(u^2)}{u_N-u} = u_N^2 (u_N+u) \frac{f(u_N^2)-f(u^2)}{u_N^2-u^2} . $$ As $u/2 \le u_N \le 2u$ on $\Omega$ for $N$ large enough, as $u_N$ converges, hence is uniformly bounded, in $H^{\sigma+2}_\#(\Omega)$ and as $f \in C^{[\sigma]+1,\sigma-[\sigma]+\epsilon}([\|u\|_{L^\infty}^2/4,4\|u\|_{L^\infty}^2],\mathbb{R})$, we obtain that $w_{u,u_N}$ is uniformly bounded in $H^{\sigma}_\#(\Omega)$ (at least for $N$ large enough). We therefore infer from (\ref{eq:lambda_N}) that for $N$ large enough \begin{equation} \label{eq:lambda_N_2} |\lambda_N-\lambda| \le C \left( \|u_N -u\|_{H^1}^2 + \|u_N-u\|_{H^{-\sigma}} \right). \end{equation} Let us now estimate the $H^{-r}$-norm of the error for $0 < r \le \sigma$. Let $w \in H^{r}_\#(\Omega)$.
Proceeding as in Section~\ref{sec:basic}, we obtain \begin{eqnarray} \int_\Omega w (u_N-u) & = & \langle (E''(u)-\lambda)(u_N-u), \Pi^1_{\widetilde X_N \cap u^\perp} \psi_{w} \rangle_{X',X} \nonumber \\ & & + \langle (E''(u)-\lambda)(u_N-u),\psi_{w}-\Pi^1_{\widetilde X_N \cap u^\perp}\psi_{w} \rangle_{X',X} \nonumber \\ & & + \|u_N-u\|_{L^2}^2 \int_\Omega f'(u^2) u^3 \psi_{w} - \frac 12 \|u_N-u \|_{L^2}^2 \int_\Omega uw, \label{eq:intom} \end{eqnarray} where $\Pi^1_{\widetilde X_N \cap u^\perp}$ denotes the orthogonal projector on $\widetilde X_N \cap u^\perp$ for the $H^1$ inner product. We then get from (\ref{eq:phi_w_Fourier}) that $\psi_w$ is in $H^{r+2}_\#(\Omega)$ and satisfies \begin{equation} \label{eq:borne_psiw} \|\psi_w\|_{H^{r+2}} \le C \|w\|_{H^r}, \end{equation} for some constant $C$ independent of $w$. Combining (\ref{eq:Euu-lambda}), (\ref{eq:new37}), (\ref{eq:minuperp}), (\ref{eq:error_H1_F}), (\ref{eq:lambda_N}), (\ref{eq:intom}) and (\ref{eq:borne_psiw}), we obtain that there exist constants $C, C' \in \mathbb{R}_+$ such that for all $N \in \mathbb{N}$ and all $w \in H^{r}_\#(\Omega)$, \begin{eqnarray*} \int_\Omega w (u_N-u) & \le & C' \left( \|u_N-u\|_{L^2}^2 + N^{-(r+1)} \|u_N-u\|_{H^1} \right) \|w\|_{H^r} \\ & \le & \frac{C}{N^{\sigma+2+r}} \|w\|_{H^{r}} . \end{eqnarray*} Therefore \begin{equation} \label{eq:Hm1boundFourier} \|u_N-u\|_{H^{-r}} = \sup_{w \in H^{r}_\#(\Omega) \setminus \left\{0\right\}} \frac{\dps \int_\Omega w (u_N-u)}{\|w\|_{H^{r}}} \le \frac{C}{N^{\sigma+2+r}}, \end{equation} for some constant $C \in \mathbb{R}_+$ independent of $N$. Using (\ref{eq:error_H1_F}) and (\ref{eq:lambda_N_2}), we end up with $$ |\lambda_N-\lambda| \le \frac{C}{N^{2(\sigma+1)}}. $$ We can summarize the results obtained in this section in the following theorem.
\begin{thm} \label{Th:Fourier} Assume that $V \in H^\sigma_\#(\Omega)$ for some $\sigma > d/2$ and that the function $F$ satisfies (\ref{eq:Hyp6})-(\ref{eq:Hyp7p}) and is in $C^{[\sigma]+1,\sigma-[\sigma]+\epsilon}((0,+\infty),\mathbb{R})$. Then $(u_N)_{N \in \mathbb{N}}$ converges to $u$ in $H^{\sigma+2}_\#(\Omega)$ and there exists $C \in \mathbb{R}_+$ such that for all $N \in \mathbb{N}$, \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\! \|u_N-u\|_{H^s} & \le & \frac{C}{N^{\sigma+2-s}} \qquad \mbox{for all } -\sigma \le s < \sigma+2 \label{eq:H1boundFourier} \\ \!\!\!\!\!\!\!\!\!\!\! |\lambda_N-\lambda| & \le & \frac{C}{N^{\sigma+2}} \nonumber \\ \!\!\!\!\!\!\!\!\!\!\! \frac \gamma 2 \|u_N-u\|_{H^1}^2 \le E(u_N)-E(u) & \le & C \|u_N-u\|_{H^1}^2. \end{eqnarray} If, in addition, $F \in C^{[\sigma]+2,\sigma-[\sigma]+\epsilon}((0,+\infty),\mathbb{R})$, then \begin{equation} \label{eq:lambdaboundFourier} |\lambda_N-\lambda| \le \frac{C}{N^{2(\sigma+1)}}. \end{equation} \end{thm} In order to evaluate the quality of the error bounds obtained in Theorem~\ref{Th:Fourier}, we have performed numerical tests with $\Omega = (0,2\pi)$, $V(x)=\sin(|x-\pi|/2)$ and $F(t)=t^{2}/2$. The Fourier coefficients of the potential $V$ are given by \begin{equation} \label{eq:coeff_Fourier} \widehat V_k = - \frac{1}{\sqrt{2\pi}} \frac{1}{|k|^2- \frac 14}, \end{equation} from which we deduce that $V \in H^\sigma_\#(0,2\pi)$ for all $\sigma < 3/2$. It can be seen in Figure~1 that $\|u_N-u\|_{H^1}$, $\|u_N-u\|_{L^2}$, $\|u_N-u\|_{H^{-1}}$, and $|\lambda_N-\lambda|$ decay respectively as $N^{-2.67}$, $N^{-3.67}$, $N^{-4.67}$ and $N^{-5}$ (the reference values for $u$ and $\lambda$ are those obtained for $N=65$).
These results are in good agreement with the upper bounds (\ref{eq:H1boundFourier}) (for $s=1$ and $s=0$), (\ref{eq:Hm1boundFourier}) (for $r=1$) and (\ref{eq:lambdaboundFourier}), which respectively decay as $N^{-2.5+\epsilon}$, $N^{-3.5+\epsilon}$, $N^{-4.5+\epsilon}$ and $N^{-5+\epsilon}$, for $\epsilon > 0$ arbitrarily small. \begin{figure} \caption{Numerical errors $\|u_N-u\|_{H^1}$, $\|u_N-u\|_{L^2}$, $\|u_N-u\|_{H^{-1}}$ and $|\lambda_N-\lambda|$ as functions of $N$ (log scales).} \label{fig:Fourier} \end{figure} \section{Finite element discretization} \label{sec:FE} In this section, we consider the problem \begin{equation} \label{eq:F.E} \inf \left\{ E(v), \; v \in X, \; \int_\Omega v^2=1 \right\}, \end{equation} where \begin{eqnarray*} && \Omega \textrm{ is a rectangular brick of } \mathbb{R}^d, \quad \mbox{with $d=1$, $2$ or $3$,} \\ && X=H^1_0(\Omega), \nonumber \\ && E(v) = \frac 12 \int_\Omega |\nabla v|^2 + \frac 12 \int_\Omega V v^2 + \frac {1}{2} \int_\Omega F(v^2). \nonumber \end{eqnarray*} We assume that $V \in L^2(\Omega)$ and that the function $F$ satisfies (\ref{eq:Hyp6})-(\ref{eq:Hyp7p}), as well as (\ref{eq:Hyp8}) for some $1 < r \le 2$ and $0 \le r+s \le 3$. Throughout this section, we denote by $u$ the unique positive solution of (\ref{eq:F.E}) and by $\lambda$ the corresponding Lagrange multiplier. In the non-periodic case considered here, a classical variational approximation of~(\ref{eq:min_pb_u}) is provided by the finite element method. We consider a family of regular triangulations $({\cal T}_h)_h$ of $\Omega$. This means, in the case when $d=3$ for instance, that for each $h > 0$, ${\cal T}_h$ is a collection of tetrahedra such that \begin{itemize} \item $\overline\Omega$ is the union of all the elements of ${\cal T}_h$; \item the intersection of two different elements of ${\cal T}_h$ is either empty, a vertex, a whole edge, or a whole face of both of them; \item the ratio of the diameter $h_K$ of any element $K$ of ${\cal T}_h$ to the diameter of its inscribed sphere is smaller than a constant independent of $h$.
\end{itemize} As usual, $h$ denotes the maximum of the diameters $h_K$, $K\in {\cal T}_h$. The discretization parameter is then $\delta=h > 0$. For each $K$ in ${\cal T}_h$ and each nonnegative integer $k$, we denote by $\mathbb P_k(K)$ the space of the restrictions to $K$ of the polynomials with $d$ variables and total degree at most $k$. The finite element space $X_{h,k}$ constructed from ${\cal T}_h$ and $\mathbb P_k(K)$ is the space of all continuous functions on $\Omega$ vanishing on $\partial\Omega$ such that their restrictions to any element $K$ of ${\cal T}_h$ belong to $\mathbb P_k(K)$. Recall that $X_{h,k} \subset H^1_0(\Omega)$ as soon as $k \ge 1$. We denote by $\pi_{h,k}^0$ and $\pi_{h,k}^1$ the orthogonal projectors on $X_{h,k}$ for the $L^2$ and $H^1$ inner products respectively. The following estimates are classical (see e.g. \cite{Ern}): there exists $C \in \mathbb{R}_+$ such that for all integers $r$ with $1 \le r \le k+1$, \begin{eqnarray} \forall \phi \in H^r(\Omega) \cap H^1_0(\Omega), & \quad & \|\phi-\pi_{h,k}^0\phi\|_{L^2} \le C h^r \|\phi\|_{H^r}, \nonumber \\ \forall \phi \in H^r(\Omega) \cap H^1_0(\Omega), & \quad & \|\phi-\pi_{h,k}^1\phi\|_{H^1} \le C h^{r-1} \|\phi\|_{H^r}. \label{eq:proj_H1} \end{eqnarray} Let $u_{h,k}$ be a solution to the variational problem $$ \inf \left\{E(v_{h,k}), \; v_{h,k} \in X_{h,k}, \; \int_\Omega v_{h,k}^2 = 1 \right\} $$ such that $(u_{h,k},u)_{L^2} \ge 0$. In this setting, we obtain the following {\it a priori} error estimates. \begin{thm} \label{Th:FE} Assume that $V \in L^2(\Omega)$ and that the function $F$ satisfies (\ref{eq:Hyp6}), (\ref{eq:Hyp7}) for $q=1$, (\ref{eq:Hyp7p}), and (\ref{eq:Hyp8}) for some $1 < r \le 2$ and $0 \le r+s \le 3$.
Then there exist $h_0 > 0$ and $C \in \mathbb{R}_+$ such that for all $0 < h \le h_0$, \begin{eqnarray} \|u_{h,1}-u\|_{H^1} & \le & C \, h \label{eq:estim_P1_H1} \\ \|u_{h,1}-u\|_{L^2} & \le & C \, h^2 \label{eq:estim_P1_L2} \\ |\lambda_{h,1}-\lambda| & \le & C \, h^2 \label{eq:estim_P1_lambda} \\ \frac \gamma 2 \|u_{h,1}-u\|_{H^1}^2 \le E(u_{h,1})-E(u) & \le & C \, h^2. \label{eq:estim_P1_energy} \end{eqnarray} If, in addition, $V \in H^1(\Omega)$, $F$ satisfies (\ref{eq:Hyp8}) for $r=2$, $F \in C^3((0,+\infty),\mathbb{R})$, and $F''(t)t^{1/2}$ and $F'''(t)t^{3/2}$ are bounded in the vicinity of $0$, then there exist $h_0 > 0$ and $C \in \mathbb{R}_+$ such that for all $0 < h \le h_0$, \begin{eqnarray} \|u_{h,2}-u\|_{H^1} & \le & C \, h^2 \label{eq:estim_P2_H1} \\ \|u_{h,2}-u\|_{L^2} & \le & C \, h^3 \label{eq:estim_P2_L2} \\ |\lambda_{h,2}-\lambda| & \le & C \, h^4 \label{eq:estim_P2_lambda} \\ \frac \gamma 2 \|u_{h,2}-u\|_{H^1}^2 \le E(u_{h,2})-E(u) & \le & C \, h^4. \label{eq:estim_P2_energy} \end{eqnarray} \end{thm} \begin{proof} As $\Omega$ is a rectangular brick, $V$ satisfies (\ref{eq:Hyp3}) and $F$ satisfies (\ref{eq:Hyp6})-(\ref{eq:Hyp7p}), we have $u \in H^2(\Omega)$. We then use the fact that $\psi_{u_{h,k}-u}$ is the solution to \begin{eqnarray} & & - \Delta \psi_{u_{h,k}-u} + (V+f(u^2)+2f'(u^2)u^2-\lambda)\psi_{u_{h,k}-u} \nonumber \\ && \qquad \qquad = 2 \left(\int_\Omega f'(u^2)u^3\psi_{u_{h,k}-u}\right) u + (u_{h,k}-u) - (u_{h,k}-u,u)_{L^2} u, \qquad\qquad \nonumber \end{eqnarray} to establish that $\psi_{u_{h,k}-u} \in H^2(\Omega) \cap H^1_0(\Omega)$ and that \begin{equation} \label{eq:bound_psi_hk} \| \psi_{u_{h,k}-u} \|_{H^2} \le C \| u_{h,k}-u \|_{L^2} \end{equation} for some constant $C$ independent of $h$ and $k$. The estimates (\ref{eq:estim_P1_H1})-(\ref{eq:estim_P1_energy}) then follow directly from Theorem~\ref{Th:basic}, (\ref{eq:error_L2_simpler}), (\ref{eq:proj_H1}) and (\ref{eq:bound_psi_hk}).
Under the additional assumption that $V \in H^1(\Omega)$, we obtain by standard elliptic regularity arguments that $u \in H^3(\Omega)$. The $H^1$ and $L^2$ estimates (\ref{eq:estim_P2_H1}) and (\ref{eq:estim_P2_L2}) immediately follow from Theorem~\ref{Th:basic}, (\ref{eq:error_L2_simpler}), (\ref{eq:proj_H1}) and (\ref{eq:bound_psi_hk}). We also have $$ |\lambda_{h,2}-\lambda| \le C h^3 $$ for a constant $C$ independent of $h$. In order to prove (\ref{eq:estim_P2_lambda}), we proceed as in Section~\ref{sec:Fourier}. We start from the equality $$ \lambda_{h,2}-\lambda = \langle (A_u-\lambda)(u_{h,2}-u ),(u_{h,2}-u ) \rangle_{X',X} + \int_\Omega \widetilde w^h (u_{h,2}-u ) $$ where $$ \widetilde w^h = u_{h,2}^2 \frac{f(u_{h,2}^2)-f(u^2)}{u_{h,2}-u}. $$ We now claim that $u_{h,2}$ converges to $u$ in $L^\infty(\Omega)$ when $h$ goes to zero. To establish this result, we first remark that $$ \|u_{h,2}-u\|_{L^\infty} \le \|u_{h,2}- {\cal I}_{h,2} u \|_{L^\infty} + \|{\cal I}_{h,2} u-u\|_{L^\infty}, $$ where ${\cal I}_{h,2}$ is the interpolation projector on $X_{h,2}$. As $u \in H^3(\Omega) \hookrightarrow C^1(\overline{\Omega})$, we have $$ \lim_{h \to 0^+} \|{\cal I}_{h,2} u-u\|_{L^\infty} = 0. $$ On the other hand, using the inverse inequality $$ \exists C \in \mathbb{R}_+ \mbox{ s.t. } \forall 0 < h \le h_0, \; \forall v_{h,2} \in X_{h,2}, \quad \|v_{h,2}\|_{L^\infty} \le C \rho(h) \|v_{h,2}\|_{H^1}, $$ with $\rho(h)=1$ if $d=1$, $\rho(h)=1+|\ln h|$ if $d=2$ and $\rho(h)=h^{-1/2}$ if $d=3$ (see \cite{Ern} for instance), we obtain \begin{eqnarray*} \|u_{h,2}- {\cal I}_{h,2} u\|_{L^\infty} & \le & C \rho(h) \|u_{h,2}- {\cal I}_{h,2} u \|_{H^1} \\ & \le & C \rho(h) \left( \|u_{h,2}-u \|_{H^1} + \|u-{\cal I}_{h,2} u\|_{H^1} \right) \\ & \le & C' \, \rho(h) \, h^2 \; \mathop{\longrightarrow}_{h \to 0^+} \; 0. \end{eqnarray*} Hence the announced result. This implies in particular that $\widetilde w^h$ is bounded in $H^1(\Omega)$, uniformly in $h$.
Consequently, there exists $C \in \mathbb{R}_+$ such that for all $0 < h \le h_0$, \begin{equation} \label{eq:estim_lambda_FE} |\lambda_{h,2}-\lambda| \le C \left( \|u_{h,2}-u\|_{H^1}^2 + \|u_{h,2}-u\|_{H^{-1}} \right). \end{equation} To estimate the $H^{-1}$-norm of $u_{h,2}-u$, we write that for all $w \in H^1_0(\Omega)$, \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\! \int_\Omega w (u_{h,2}-u) & = & \langle (E''(u)-\lambda)(u_{h,2}-u), \pi^1_{X_{h,2}\cap u^\perp} \psi_w \rangle_{X',X} \nonumber \\ & & + \langle (E''(u)-\lambda)(u_{h,2}-u), \psi_w - \pi^1_{X_{h,2}\cap u^\perp} \psi_w \rangle_{X',X} \nonumber \\ & & + \|u_{h,2}-u\|_{L^2}^2 \int_\Omega f'(u^2) u^3 \psi_w - \frac 12 \|u_{h,2}-u \|_{L^2}^2 \int_\Omega u w, \nonumber \end{eqnarray} where $\psi_w$ is solution to \begin{eqnarray} && - \Delta \psi_w + (V+f(u^2)+2f'(u^2)u^2-\lambda) \psi_w \nonumber \\ && \qquad \qquad = 2 \left(\int_\Omega f'(u^2)u^3 \psi_w\right) u + w - ( w,u)_{L^2} u,\qquad \qquad \label{eq:eq_sur_psit} \end{eqnarray} and where $\pi^1_{X_{h,2}\cap u^\perp}$ denotes the orthogonal projector on $X_{h,2}\cap u^\perp$ for the $H^1$ inner product. Using the assumptions that $V \in H^1(\Omega)$, $F \in C^3((0,+\infty),\mathbb{R})$, and $F''(t)t^{1/2}$ and $F'''(t)t^{3/2}$ are bounded in the vicinity of $0$, we deduce from (\ref{eq:eq_sur_psit}) that $\psi_w$ is in $H^3(\Omega)$ and that there exists $C \in \mathbb{R}_+$ such that for all $w \in H^1_0(\Omega)$ and all $0 < h \le h_0$, $$ \| \psi_w\|_{H^3} \le C \| w\|_{H^1}. $$ We therefore obtain the inequality \begin{equation} \label{eq:bound_psih} \| \psi_w-\pi^1_{h,2} \psi_w\|_{H^1} \le C h^2 \|w\|_{H^1}, \end{equation} where the constant $C$ is independent of $h$. 
Putting together (\ref{eq:Hyp8}) (for $r=2$), (\ref{eq:Euu-lambda}), (\ref{eq:new37}), (\ref{eq:minuperp}), (\ref{eq:proj_H1}), (\ref{eq:estim_P2_H1}), (\ref{eq:estim_P2_L2}) and (\ref{eq:bound_psih}), we get $$ \|u_{h,2}-u\|_{H^{-1}} = \sup_{w \in H^1_0(\Omega) \setminus \left\{0\right\}} \frac{\int_\Omega w(u_{h,2}-u)}{\|w\|_{H^1}} \le C \, h^4. $$ Combining with (\ref{eq:estim_P2_H1}) and (\ref{eq:estim_lambda_FE}), we end up with (\ref{eq:estim_P2_lambda}). Lastly, we deduce (\ref{eq:estim_P2_energy}) from the equality \begin{eqnarray*} E(u_{h,2}) - E(u) & = &\frac 12 \langle (A_u-\lambda) (u_{h,2}-u) ,(u_{h,2}-u) \rangle_{X',X} \\ & & + \frac 12 \int_\Omega F\left( u^2 + (u_{h,2}^2-u^2) \right) - F\left( u^2 \right) - f\left( u^2 \right) (u_{h,2}^2-u^2), \end{eqnarray*} by Taylor expanding the integrand and exploiting the boundedness of the function $F''(t)t^{1/2}$ in the vicinity of $0$. \end{proof} Numerical results for the case when $\Omega = (0,\pi)^2$, $V(x_1,x_2) = x_1^2+x_2^2$ and $F(t)=t^2/2$ are reported in Figure~2. The agreement with the error estimates obtained in Theorem~\ref{Th:FE} is good for the $\mathbb P_1$ approximation and excellent for the $\mathbb P_2$ approximation. \begin{figure} \caption{Errors $\|u_{h,k}-u\|_{H^1}$, $\|u_{h,k}-u\|_{L^2}$ and $|\lambda_{h,k}-\lambda|$ for the $\mathbb P_1$ ($k=1$) and $\mathbb P_2$ ($k=2$) approximations (log scales).} \label{fig:FE} \end{figure} \section{The effect of numerical integration} \label{sec:integration} Let us now address one further consideration that is related to the practical implementation of the method, and more precisely to the numerical integration of the nonlinear term. For simplicity, we focus on the case when $A = 1$. From a practical viewpoint, the solution $(u_\delta,\lambda_\delta)$ to the nonlinear eigenvalue problem (\ref{eq:14}) can be computed iteratively, using for instance the optimal damping algorithm~\cite{ODA1,ODA2,GP}.
At the $p^{\rm th}$ iteration ($p \ge 1$), one has to compute the ground state $(u_\delta^p,\lambda_\delta^p) \in X_\delta \times \mathbb{R}$ of some {\it linear}, finite-dimensional, eigenvalue problem of the form \begin{equation} \label{eq:6SCF} \int_\Omega \overline{\nabla u^p_\delta} \cdot \nabla v_\delta + \int_\Omega \left(V+f(\widetilde\rho_\delta^{p-1})\right) \, \overline{u_\delta^{p}} \, v_\delta = \lambda_\delta^{p}\int_\Omega \overline{u_\delta^{p}} v_\delta,\quad \forall v_\delta \in X_\delta. \end{equation} In the optimal damping algorithm, the density $\widetilde\rho_\delta^{p-1}$ is a convex linear combination of the densities $\rho_\delta^q = |u^q_\delta|^2$, for $0 \le q \le p-1$. Solving (\ref{eq:6SCF}) amounts to finding the lowest eigenelement of the matrix $H^p$ with entries \begin{equation} \label{eq:6SCFmat} H^p_{kl} := \int_\Omega \overline{\nabla \phi_k} \cdot \nabla \phi_l + \int_\Omega V \, \overline{\phi_k} \, \phi_l + \int_\Omega f(\widetilde\rho_\delta^{p-1}) \, \overline{\phi_k} \, \phi_l, \end{equation} where $(\phi_k)_{1 \le k \le \mbox{dim}(X_\delta)}$ stands for the canonical basis of $X_\delta$. In order to evaluate the last two terms of the right-hand side of (\ref{eq:6SCFmat}), one has to resort to numerical integration. In the finite element approximation of (\ref{eq:F.E}), one generally makes use of a numerical quadrature formula based on Gauss points over each triangle (2D) or tetrahedron (3D).
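The structure of this self-consistent loop can be sketched in a few lines of pure Python. The sketch below is purely illustrative: the $3\times 3$ "Hamiltonian" with toy kinetic energies and a constant off-diagonal coupling is a hypothetical stand-in for the matrix $H^p$, and plain fixed-point iteration replaces the optimal damping strategy.

```python
import math

# Toy self-consistent field loop: at each iteration, the density of the
# previous ground state enters a *linear* eigenvalue problem, whose lowest
# eigenpair is then recomputed.  The 3-mode "Hamiltonian" is a hypothetical
# stand-in for H^p, and plain fixed-point mixing replaces optimal damping.

def lowest_eigpair(H, iters=500):
    # lowest eigenpair of a small symmetric matrix, via power iteration
    # applied to s*I - H with a Gershgorin-type shift s
    n = len(H)
    s = 1.0 + max(sum(abs(x) for x in row) for row in H)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [s * v[i] - sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    lam = sum(v[i] * H[i][j] * v[j] for i in range(n) for j in range(n))
    return lam, v

def scf(coupling=0.2, tol=1e-12, max_iter=100):
    kin = [0.0, 1.0, 4.0]          # toy "kinetic" energies |k|^2
    v, lam = [1.0, 0.0, 0.0], 0.0
    for _ in range(max_iter):
        rho = [x * x for x in v]   # density of the current iterate
        H = [[kin[i] + coupling * rho[i] if i == j else 0.01
              for j in range(3)] for i in range(3)]
        lam_new, v = lowest_eigpair(H)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

lam, v = scf()   # converged eigenvalue and normalized coefficient vector
```

For this weakly coupled toy problem the fixed point is reached in a handful of iterations; in realistic computations the damping of the density history is precisely what guarantees convergence of the loop.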
In the Fourier approximation of the periodic problem (\ref{eq:Fourier}), the terms $$ \int_\Omega V \, \overline{e_k} \, e_l \quad \mbox{and} \quad \int_\Omega f(\widetilde\rho_\delta^{p-1}) \, \overline{e_k} \, e_l, $$ which are in fact, up to a multiplicative constant, the $(k-l)^{\rm th}$ Fourier coefficients of $V$ and $f(\widetilde\rho_\delta^{p-1})$ respectively, are evaluated by Fast Fourier Transform (FFT), using an integration grid which may be different from the natural discretization grid $$ \left\{\left(\frac{2\pi}{2N+1}j_1, \cdots, \frac{2\pi}{2N+1}j_d \right), \; 0 \le j_1,\cdots,j_d \le 2N \right\} $$ associated with $\widetilde X_N$. This raises the question of the influence of the numerical integration on the convergence results obtained in Theorems~\ref{Th:basic}, \ref{Th:Fourier} and~\ref{Th:FE}. \begin{remark} \label{rem:doublegrid} In the case of the periodic problem considered in Section~\ref{sec:Fourier} and when $F(t) = ct^2$ for some $c > 0$, the last term of the right-hand side of (\ref{eq:6SCFmat}) can be computed exactly (up to round-off errors) by means of an FFT on an integration grid twice as fine as the discretization grid. This is due to the fact that the function $\widetilde\rho_\delta^{p-1} \, \overline{e_k} \, e_l$ belongs to the space $\mbox{Span} \{e_n \, | \, |n|_* \le 4N \}$. An analogous property is used in the evaluation of the Coulomb term in the numerical simulation of the Kohn-Sham equations for periodic systems. \end{remark} In the sequel, we focus on the simple case when $d=1$, $\Omega=(0,2\pi)$, $X = H^1_\#(0,2\pi)$, and $$ E(v) = \frac 12 \int_0^{2\pi} |v'|^2 + \frac 12 \int_0^{2\pi} V v^2 + \frac 14 \int_0^{2\pi} |v|^4 $$ with $V \in H^\sigma_\#(0,2\pi)$ for some $\sigma > 1/2$. More difficult cases will be addressed elsewhere~\cite{CCM2}.
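The exact-integration property invoked in Remark~\ref{rem:doublegrid} can be checked numerically: since $|v_N|^4$ is a trigonometric polynomial of degree $4N$, the uniform $N_g$-point rule on $(0,2\pi)$ integrates it exactly as soon as $N_g \ge 4N+1$, while a coarser grid aliases the highest modes. In the pure-Python sketch below, the test function $v$ is a hypothetical choice for the illustration.

```python
import math

# |v_N|^4 is a trigonometric polynomial of degree 4N, and the uniform
# N_g-point rectangle rule on (0, 2*pi) integrates e^{igx} exactly unless
# N_g divides g != 0; hence exactness as soon as N_g >= 4N + 1.
# The test function v is a hypothetical example.

def quad_v4(v, Ng):
    # (2*pi/N_g) * sum of |v|^4 over the uniform integration grid
    return (2 * math.pi / Ng) * sum(abs(v(2 * math.pi * j / Ng)) ** 4
                                    for j in range(Ng))

N = 2
v = lambda x: 1.0 + math.cos(N * x)   # real trigonometric polynomial in X_N

exact = quad_v4(v, 4 * N + 1)         # N_g = 4N+1 = 9: exact (here 35*pi/4)
coarse = quad_v4(v, 4 * N)            # N_g = 4N = 8: the g = +-8 modes alias
aliasing_error = coarse - exact       # equals pi/4 for this particular v
```

For this $v$, expanding $(1+\cos 2x)^4$ shows the exact integral is $35\pi/4$ and the only aliased modes on the $8$-point grid are $g=\pm 8$, whose coefficients sum to $1/8$, giving an aliasing error of exactly $\pi/4$.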
In view of Remark~\ref{rem:doublegrid}, we consider an integration grid $$ \frac{2\pi}{N_g}\mathbb{Z} \cap [0,2\pi) = \left\{0,\frac{2\pi}{N_g},\frac{4\pi}{N_g}, \cdots, \frac{2\pi(N_g-1)}{N_g} \right\}, $$ with $N_g \ge 4N+1$, for which we have $$ \forall v_N \in \widetilde X_N, \quad \int_0^{2\pi} |v_N|^4 = \frac{2\pi}{N_g} \sum_{r \in \frac{2\pi}{N_g}\mathbb{Z} \cap [0,2\pi)} |v_N(r)|^4, $$ and for all $\rho \in \widetilde X_{2N}$, \begin{equation} \label{eq:intu4} \forall |k|,|l|\le N, \quad \int_0^{2\pi} \rho \, \overline{e_k} \, e_l = \frac{1}{N_g} \sum_{r \in \frac{2\pi}{N_g}\mathbb{Z} \cap [0,2\pi)} \rho(r) e^{-i(k-l)r} = \widehat{\rho^{{\rm FFT}}_{k-l}}, \end{equation} where $\widehat{\rho^{{\rm FFT}}_{k-l}}$ is the $(k-l)^{\rm th}$ coefficient of the discrete Fourier transform of $\rho$. Recall that if $\phi = \sum_{g \in \mathbb{Z}} \widehat \phi_g \, e_g \in C^0_\#(0,2\pi)$, the discrete Fourier transform of $\phi$ is the $N_g\mathbb{Z}$-periodic sequence $(\widehat{\phi^{{\rm FFT}}_{g}})_{g \in \mathbb{Z}}$ defined by $$ \forall g \in \mathbb{Z}, \quad \widehat{\phi^{{\rm FFT}}_{g}} = \frac{1}{N_g} \sum_{r \in \frac{2\pi}{N_g}\mathbb{Z} \cap [0,2\pi)} \phi(r) e^{-igr}. $$ We now introduce the subspaces $W_M$ for $M \in \mathbb{N}^\ast$ such that $W_{M} = \widetilde X_{(M-1)/2}$ if $M$ is odd and $W_{M} = \widetilde X_{M/2-1} \oplus \mathbb{C} (e_{M/2}+e_{-M/2})$ if $M$ is even (note that $\mbox{dim}(W_{M})=M$ for all $M \in \mathbb{N}^\ast$). It is then possible to define an interpolation projector ${\cal I}_{N_g}$ from $C^0_\#(0,2\pi)$ onto $W_{N_g}$ by $$ \forall x \in \frac{2\pi}{N_g}\mathbb{Z} \cap [0,2\pi), \quad [{\cal I}_{N_g}(\phi)](x) = \phi(x).
$$ The expansion of ${\cal I}_{N_g}(\phi)$ in the canonical basis of $W_{N_g}$ is given by $$ {\cal I}_{N_g}(\phi) \; = \; \left| \begin{array}{lll} (2\pi)^{1/2} \dps \sum_{|g| \le (N_g-1)/2} \widehat{\phi^{{\rm FFT}}_{g}} \, e_g & \quad & (N_g \mbox{ odd}), \\ (2\pi)^{1/2} \dps \sum_{|g| \le N_g/2-1} \widehat{\phi^{{\rm FFT}}_{g}} \, e_g + (2\pi)^{1/2} \widehat{\phi^{{\rm FFT}}_{N_g/2}} \, \left( \frac{e_{N_g/2}+e_{-N_g/2}}{2} \right) & \quad & (N_g \mbox{ even}). \end{array} \right. $$ Under the condition that $N_g \ge 4N+1$, the following property holds: for all $\phi \in C^0_\#(0,2\pi)$, $$ \forall |k|,|l| \le N, \quad \int_0^{2\pi} {\cal I}_{N_g}(\phi) \, \overline{e_k} \, e_l = \widehat{\phi^{{\rm FFT}}_{k-l}}. $$ It is therefore possible, in the particular case considered here, to efficiently evaluate the entries of the matrix $H^p$ using the formula \begin{eqnarray} H^p_{kl} & := & \int_0^{2\pi} \overline{e_k'} \cdot e_l' + \int_0^{2\pi} V \, \overline{e_k} \, e_l + \int_0^{2\pi} \widetilde\rho_N^{p-1} \overline{e_k} \, e_l \nonumber \\ & \simeq & |k|^2 \delta_{kl} + \widehat{V^{{\rm FFT}}_{k-l}} + \widehat{[\widetilde\rho_{N}^{p-1}]^{\rm FFT}_{k-l}}, \label{eq:approx_FFT} \end{eqnarray} and resorting to Fast Fourier Transform (FFT) algorithms to compute the discrete Fourier transforms. Note that only the second term is computed approximately. The third term is computed exactly since, at each iteration, $\widetilde\rho_N^{p-1}$ belongs to $\widetilde X_{2N}$ (see Eq.~(\ref{eq:intu4})). Of course, this situation is specific to the nonlinearity $F(t)=t^2/2$ considered here.
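The aliasing mechanism underlying the discrete Fourier transform, namely $\widehat{\phi^{\rm FFT}_{g}} = \sum_{k \in \mathbb{Z}} \widehat\phi_{g+kN_g}$, can be checked on a toy example. In the pure-Python sketch below the coefficients are hypothetical, and the $(2\pi)^{-1/2}$ normalization of $e_g$ is dropped for readability (it scales both sides of the identity identically).

```python
import cmath
import math

# Aliasing identity for the discrete Fourier transform on the N_g-point grid:
# hat(phi^FFT)_g = sum_k hat(phi)_{g + k*N_g}.  The coefficients below are a
# hypothetical toy example; the (2*pi)^{-1/2} normalization of e_g is dropped.

Ng = 4
coeffs = {0: 1.0, 1: 0.5, -1: 0.5, 5: 0.25}  # mode 5 aliases to mode 1 (mod 4)
phi = lambda x: sum(c * cmath.exp(1j * g * x) for g, c in coeffs.items())

def dft(g):
    # (1/N_g) * sum over the uniform grid of phi(r) e^{-igr}
    return sum(phi(2 * math.pi * j / Ng) * cmath.exp(-2j * math.pi * g * j / Ng)
               for j in range(Ng)) / Ng

folded = dft(1)       # 0.5 + 0.25: the exact coefficient plus its alias
unpolluted = dft(0)   # 1.0: no frequency in this sample aliases to 0
```

This folding of high frequencies onto low ones is precisely what makes $\widehat{V^{\rm FFT}_{k-l}}$ in (\ref{eq:approx_FFT}) only an approximation of the Fourier coefficient of $V$.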
Using the approximation formula (\ref{eq:approx_FFT}) amounts to replacing the original problem \begin{equation} \label{eq:uN} \inf \left\{E(v_N), \; v_N \in \widetilde X_N, \; \int_0^{2\pi} |v_N|^2 = 1 \right\}, \end{equation} with the approximate problem \begin{equation} \label{eq:uNNg} \inf \left\{E_{N_g}(v_N), \; v_N \in \widetilde X_N , \; \int_0^{2\pi} |v_N|^2 = 1 \right\}, \end{equation} where $$ E_{N_g}(v_N) = \frac 12 \int_0^{2\pi} |v_N'|^2 + \frac 12 \int_0^{2\pi} {\cal I}_{N_g}(V) v_N^2 + \frac 14 \int_0^{2\pi} |v_N|^4. $$ Let us denote by $u_N$ a solution of (\ref{eq:uN}) such that $(u_N,u)_{L^2} \ge 0$ and by $u_{N,N_g}$ a solution to (\ref{eq:uNNg}) such that $(u_{N,N_g},u)_{L^2} \ge 0$. It is easy to check that $u_{N,N_g}$ is bounded in $H^1_\#(0,2\pi)$ uniformly in $N$ and $N_g$. Besides, we know from Theorem~\ref{Th:Fourier} that $(u_N)_{N \in \mathbb{N}}$ converges to $u$ in $H^1_\#(0,2\pi)$, hence in $L^\infty_\#(0,2\pi)$, when $N$ goes to infinity. This implies that the sequence $(A_u-A_{u_N})_{N \in \mathbb{N}}$ converges to $0$ in operator norm. Consequently, for all $N$ large enough and all $N_g$ such that $N_g \ge 4N+1$, \begin{eqnarray*} \frac{\gamma}4 \|u_{N,N_g}-u_N\|_{H^1}^2 & \le & E(u_{N,N_g})-E(u_N) \\ & \le & E_{N_g}(u_{N,N_g})-E_{N_g}(u_N) \\ & & + \frac 12 \int_0^{2\pi} (V-{\cal I}_{N_g}(V)) \left(|u_{N,N_g}|^2-|u_N|^2\right) \\ & \le & \frac 12 \int_0^{2\pi} (V-{\cal I}_{N_g}(V)) \left(|u_{N,N_g}|^2-|u_N|^2\right) \\ & \le & C \|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2} \|u_{N,N_g}-u_N\|_{H^1}, \end{eqnarray*} where we have used the fact that $\left(|u_{N,N_g}|^2-|u_N|^2\right) \in \widetilde X_{2N}$. Therefore, \begin{equation} \label{eq:uNNgVIV} \|u_{N,N_g}-u_N\|_{H^1} \le C \|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2}, \end{equation} for a constant $C$ independent of $N$ and $N_g$.
Likewise, \begin{eqnarray*} \lambda_{N,N_g}-\lambda_N & = & \langle (A_{u_N}-\lambda_N) (u_{N,N_g}-u_N), (u_{N,N_g}-u_N) \rangle_{X',X} \\ & & + \int_0^{2\pi} (V-{\cal I}_{N_g}(V)) |u_{N,N_g}|^2 \\ & & + \int_0^{2\pi} |u_{N,N_g}|^2 (u_{N,N_g}+u_N)(u_{N,N_g}-u_N), \end{eqnarray*} from which we deduce, using (\ref{eq:uNNgVIV}), $$ |\lambda_{N,N_g}-\lambda_N| \le C \|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2}. $$ An error analysis of the interpolation operator ${\cal I}_{N_g}$ is given in~\cite{CHQZ}: for all real numbers $0 \le r \le s$ with $s > 1/2$ (for $d=1$), $$ \|\varphi - {\cal I}_{N_g}(\varphi)\|_{H^r} \le \frac{C}{N_g^{s-r}} \|\varphi\|_{H^s},\quad \forall\varphi\in H^s_\#(0,2\pi). $$ Thus, \begin{equation} \label{eq:estimeIV} \|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2} \le \|V - {\cal I}_{N_g}(V)\|_{L^2} \le \frac{C}{N_g^{\sigma}}, \end{equation} and the above inequality provides the following estimates: \begin{eqnarray} \|u_{N,N_g}-u\|_{H^1} & \le & C \left(N^{-\sigma-1}+N_g^{-\sigma}\right) \label{eq:erroruNNgL2} \\ \|u_{N,N_g}-u\|_{L^2} & \le & C \left(N^{-\sigma-2}+N_g^{-\sigma}\right) \label{eq:NNN} \\ |\lambda_{N,N_g}-\lambda| & \le & C \left(N^{-2\sigma-2}+N_g^{-\sigma}\right), \label{eq:errorlambdaNNg} \end{eqnarray} for a constant $C$ independent of $N$ and $N_g$. The first term of the error bound (\ref{eq:erroruNNgL2}) corresponds to the error $\|u_N-u\|_{H^1}$ while the second term corresponds to the numerical integration error $\|u_{N,N_g}-u_N\|_{H^1}$ (the same remark applies to the error bounds (\ref{eq:NNN}) and (\ref{eq:errorlambdaNNg})). It is classical that the norm $\|\varphi - {\cal I}_{N_g}\varphi\|_{H^{r}}$ for $r < 0$ is in general of the same order of magnitude as $\|\varphi - {\cal I}_{N_g}\varphi\|_{L^2}$.
As the existence of better estimates in negative norms is a cornerstone in the derivation of the improved error estimate (\ref{eq:error_lambda_F}) for the eigenvalues (doubling of the convergence rate), we expect the eigenvalue approximation to be dramatically polluted by the use of the numerical integration formula. This can be checked numerically. Considering again the one-dimensional example used in Section~\ref{sec:Fourier} ($\Omega=(0,2\pi)$, $V(x) = \sin(|x-\pi|/2)$, $F(t)=t^2/2$), we have computed, for $4 \le N \le 30$ and $N_g=2^p$ with $7 \le p \le 15$, the errors $\|u_{N,N_g}-u\|_{H^1}$, $\|u_{N,N_g}-u\|_{L^2}$, $\|u_{N,N_g}-u\|_{H^{-1}}$, and $|\lambda_{N,N_g}-\lambda|$. In Figure~3, these quantities are plotted as functions of $2N+1$ (the dimension of $\widetilde X_N$), for various values of $N_g$. The non-monotonicity of the curve $N \mapsto |\lambda_{N,N_g}-\lambda|$ originates from the fact that $\lambda_{N,N_g}-\lambda$ can be positive or negative depending on the values of $N$ and $N_g$. \begin{figure} \caption{Numerical errors $\|u_{N,N_g}-u\|_{H^1}$, $\|u_{N,N_g}-u\|_{L^2}$, $\|u_{N,N_g}-u\|_{H^{-1}}$ and $|\lambda_{N,N_g}-\lambda|$ as functions of $2N+1$, for various values of $N_g$ (log scales).} \label{fig:FourierNg} \end{figure} The numerical errors $\|u_{N,N_g}-u\|_{H^1}$, $\|u_{N,N_g}-u\|_{L^2}$, $\|u_{N,N_g}-u\|_{H^{-1}}$, and $|\lambda_{N,N_g}-\lambda|$, for $N=30$, are plotted in Figure~4 as functions of $N_g$ (in log scales). When $N_g$ goes to infinity, the sequences $\log_{10} \|u_{N,N_g}-u\|_{H^1}$, $\log_{10} \|u_{N,N_g}-u\|_{L^2}$, $\log_{10} \|u_{N,N_g}-u\|_{H^{-1}}$, and $\log_{10} |\lambda_{N,N_g}-\lambda|$ converge to $\log_{10} \|u_{N}-u\|_{H^1}$, $\log_{10} \|u_{N}-u\|_{L^2}$, $\log_{10} \|u_{N}-u\|_{H^{-1}}$, and $\log_{10} |\lambda_{N}-\lambda|$ respectively. For smaller values of $N_g$, the numerical integration error dominates and these functions all decay linearly with $\log_{10}N_g$, with a slope very close to $-2$.
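The slope close to $-2$ can be traced back to aliasing: on the $N_g$-point grid, the mode $e^{{\rm i}(g+kN_g)x}$ is indistinguishable from $e^{{\rm i}gx}$, so the interpolation coefficient at frequency $g$ collects the whole sum $\sum_{k} \widehat V_{g+kN_g}$. A minimal Python/NumPy check of this folding (grid size and test mode chosen purely for illustration):

```python
import numpy as np

Ng = 16
x = 2 * np.pi * np.arange(Ng) / Ng
# a single high-frequency mode, frequency Ng + 2: on the grid it is
# indistinguishable from frequency 2, since (Ng + 2)*x_j = 2*x_j mod 2*pi
V = np.cos((Ng + 2) * x)
c = np.fft.fft(V) / Ng            # interpolation coefficients of I_{Ng}(V)
# the true coefficients sit at frequencies +/-(Ng+2), each of weight 1/2;
# the computed ones are folded onto +/-2:
folded = np.zeros(Ng)
folded[2] = 0.5
folded[Ng - 2] = 0.5              # index Ng-2 represents frequency -2
```

Only the aliased coefficients with $|g| \le 2N$ survive the projection $\Pi_{2N}$, which is what the sharper estimate of the next paragraph exploits.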
For fixed $N$, the upper bounds (\ref{eq:erroruNNgL2})-(\ref{eq:errorlambdaNNg}) also decay linearly with $\log_{10}N_g$, but with a slope equal to $-1.5$. To obtain sharper upper bounds for the numerical integration error, we need to replace (\ref{eq:estimeIV}) with a sharper estimate of $\|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2}$, which is possible for the particular example under consideration here. Indeed, noting that, under the condition $N_g \ge 4N+1$, $$ \|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2} = \left( \sum_{|g| \le 2N} \left| \sum_{k \in \mathbb{Z}^\ast} \widehat V_{g+kN_g} \right|^2 \right)^{1/2}, $$ we can show, using (\ref{eq:coeff_Fourier}), that $$ \|\Pi_{2N}(V-{\cal I}_{N_g}(V))\|_{L^2} \le \frac{C \, N^{1/2}}{N_g^2}, $$ for a constant $C$ independent of $N$ and $N_g$. We deduce that, for this specific example, \begin{eqnarray*} \|u_{N,N_g}-u\|_{H^1} & \le & C \left(N^{-5/2}+N^{1/2} N_g^{-2}\right) \\ \|u_{N,N_g}-u\|_{L^2} & \le & C \left(N^{-7/2}+N^{1/2} N_g^{-2}\right) \\ |\lambda_{N,N_g}-\lambda| & \le & C \left(N^{-9/2}+N^{1/2} N_g^{-2}\right). \end{eqnarray*} \begin{figure} \caption{Numerical errors $\|u_{N,N_g}-u\|_{H^1}$, $\|u_{N,N_g}-u\|_{L^2}$, $\|u_{N,N_g}-u\|_{H^{-1}}$ and $|\lambda_{N,N_g}-\lambda|$, for $N=30$, as functions of $N_g$ (log scales).} \label{fig:FourierNg2} \end{figure} \section{Appendix: properties of the ground state} The mathematical properties of the minimization problems (\ref{eq:min_pb_u}) and (\ref{eq:min_pb_rho}) which are useful for the numerical analysis reported in this article are gathered in the following lemma. Recall that $d=1$, $2$ or $3$. \begin{lem} \label{lem:theory} Under assumptions (\ref{eq:Hyp1})-(\ref{eq:Hyp7}), (\ref{eq:min_pb_rho}) has a unique minimizer $\rho_0$ and (\ref{eq:min_pb_u}) has exactly two minimizers, $u=\sqrt{\rho_0}$ and $-u$. The function $u$ is a solution to the nonlinear eigenvalue problem (\ref{eq:10}) for some $\lambda \in \mathbb{R}$. Besides, $u \in C^{0,\alpha}(\overline{\Omega})$ for some $0 < \alpha < 1$, $u > 0$ in $\Omega$, and $\lambda$ is the lowest eigenvalue of $A_u$ and is non-degenerate.
\end{lem} \begin{proof} As $A$ is uniformly bounded and coercive on $\Omega$ and $V \in L^q(\Omega)$ for some $q > \max(1,d/2)$, $v \mapsto a(v,v)$ is a quadratic form on $X$, bounded from below on the set $\left\{v \in X \; | \; \|v\|_{L^2}=1\right\}$. Replacing $a(v,v)$ with $a(v,v)+C\|v\|_{L^2}^2$ and $F(t)$ with $F(t)-F(0)-tF'(0)$ does not change the minimizers of (\ref{eq:min_pb_u}) and (\ref{eq:min_pb_rho}). We can therefore assume, {\em without loss of generality}, that \begin{equation} \label{eq:hyp9} \forall v \in X, \; a(v,v) \ge \|v\|_{L^2}^2 \quad \mbox{ and } \quad F(0)=F'(0)=0. \end{equation} It then follows from (\ref{eq:Hyp7}) and (\ref{eq:hyp9}) that $0 \le F(v^2) \le C (v^2+v^{6})$. As $X \hookrightarrow L^{6}(\Omega)$, $E(v)$ is finite for all $v \in X$, $I > -\infty$, and the minimizing sequences of (\ref{eq:min_pb_u}) are bounded in $X$. Let $(v_n)_{n \in \mathbb{N}}$ be a minimizing sequence of (\ref{eq:min_pb_u}). Using the fact that $X$ is compactly embedded in $L^2(\Omega)$, we can extract from $(v_n)_{n \in \mathbb{N}}$ a subsequence $(v_{n_k})_{k \in \mathbb{N}}$ which converges weakly in $X$, strongly in $L^2(\Omega)$, and almost everywhere in $\Omega$ to some $u \in X$. As $\|v_{n_k}\|_{L^2}=1$ and $E(v_{n_k}) \downarrow I$, we obtain $\|u\|_{L^2}=1$ and $E(u) \le I$ ($E$ is convex and strongly continuous, hence weakly l.s.c., on $X$). Hence $u$ is a minimizer of (\ref{eq:min_pb_u}). As $|u| \in X$, $\||u|\|_{L^2} = 1$ and $E(|u|)=E(u)$, we can assume without loss of generality that $u \ge 0$. Assumptions (\ref{eq:Hyp1})-(\ref{eq:Hyp7}) imply that $E$ is $C^1$ on $X$ and that $E'(u) = A_u u$. It follows that $u$ is a solution to (\ref{eq:Euler}) for some $\lambda \in \mathbb{R}$. By elliptic regularity arguments~\cite{GT}, we get $u \in C^{0,\alpha}(\overline{\Omega})$ for some $0 < \alpha < 1$. We also have $u > 0$ in $\Omega$; this is a consequence of the Harnack inequality~\cite{Stampacchia}.
Making the change of variable $\rho=v^2$, it is easily seen that if $v$ is a minimizer of (\ref{eq:min_pb_u}), then $v^2$ is a minimizer of (\ref{eq:min_pb_rho}), and that, conversely, if $\rho$ is a minimizer of (\ref{eq:min_pb_rho}), then $\sqrt{\rho}$ and $-\sqrt{\rho}$ are minimizers of (\ref{eq:min_pb_u}). Besides, the functional $\cal E$ is strictly convex on the convex set $\left\{ \rho \ge 0 \; | \, \sqrt{\rho} \in X, \; \int_\Omega \rho = 1 \right\}$. Therefore $\rho_0=u^2$ is the unique minimizer of (\ref{eq:min_pb_rho}) and $u$ and $-u$ are the only minimizers of (\ref{eq:min_pb_u}). It is easy to see that $A_u$ is bounded below and has a compact resolvent. It therefore possesses a lowest eigenvalue $\lambda_0$, which, according to the min-max principle, satisfies \begin{equation} \label{eq:minmax} \lambda_0 = \inf \left\{ \int_{\Omega} (A \nabla v) \cdot \nabla v + \int_\Omega (V+f(u^2)) v^2, \; v \in X, \; \int_\Omega v^2=1 \right\}. \end{equation} Let $v_0$ be a normalized eigenvector of $A_u$ associated with $\lambda_0$. Clearly, $v_0$ is a minimizer of (\ref{eq:minmax}) and so is $|v_0|$. Therefore, $|v_0|$ is a solution to the Euler equation $A_u|v_0|=\lambda_0|v_0|$. Using again elliptic regularity arguments and the Harnack inequality, we obtain that $|v_0| \in C^{0,\alpha}(\overline{\Omega})$ for some $0 < \alpha < 1$ and that $|v_0| > 0$ on $\Omega$. This implies that either $v_0 = |v_0| > 0$ in $\Omega$ or $v_0=-|v_0| < 0$ in $\Omega$. In particular $(u,v_0)_{L^2} \neq 0$. As $u$ and $v_0$ are eigenvectors of the self-adjoint operator $A_u$, associated with the eigenvalues $\lambda$ and $\lambda_0$ respectively, and as eigenvectors associated with distinct eigenvalues are orthogonal, we obtain $\lambda=\lambda_0$. Moreover, any normalized eigenvector associated with $\lambda_0$ has, by the above argument, a constant sign in $\Omega$, and two such eigenvectors cannot be orthogonal; hence $\lambda$ is a simple eigenvalue of $A_u$. \end{proof} Let us finally prove that $\lambda$ is also the ground state eigenvalue of the {\em nonlinear} eigenvalue problem \begin{equation} \label{eq:nonlinear_eigenvalue_pb} \left\{ \begin{array}{l} \mbox{search } (\mu,v) \in \mathbb{R} \times X \mbox{ such that} \\ A_v v = \mu v \\ \| v \|_{L^2} = 1, \end{array} \right.
\end{equation} in the following sense: if $(\mu,v)$ is a solution to (\ref{eq:nonlinear_eigenvalue_pb}), then either $\mu > \lambda$, or $\mu=\lambda$ and $v= \pm u$. To see this, let us consider a solution $(\mu,v) \in \mathbb{R} \times X$ to (\ref{eq:nonlinear_eigenvalue_pb}) and set $\widetilde w = |v|-u$. As for $u$, we infer from elliptic regularity arguments~\cite{GT} that $v \in C^{0,\alpha}(\overline{\Omega})$. We have $\|v\|_{L^2} = \|u\|_{L^2} = 1$. Therefore, if $\widetilde w \le 0$ in $\Omega$, then $|v|=u$, which yields $v=\pm u$ and $\mu=\lambda$. Otherwise, there exists $x_0 \in \Omega$ such that $\widetilde w(x_0) > 0$, and, up to replacing $v$ with $-v$, we can assume that the function $w=v-u$ is such that $w(x_0) > 0$. The function $w$ is in $X \cap C^{0,\alpha}(\overline{\Omega})$ and satisfies \begin{equation} \label{eq:on_w} (A_u-\lambda)w + \frac{f(v^2)-f(u^2)}{v^2-u^2} v (u+v) w = (\mu-\lambda) v. \end{equation} Let $\omega = \left\{ x \in \Omega \; | \; w(x) > 0 \right\} = \left\{ x \in \Omega \; | \; v(x) > u(x) \right\}$ and $w_+ = \max(w,0)$. As $w_+ \in X$, we deduce from (\ref{eq:on_w}) that $$ \langle (A_u-\lambda)w_+,w_+\rangle_{X',X} + \int_\omega \frac{f(v^2)-f(u^2)}{v^2-u^2} v (u+v) w^2 = (\mu-\lambda) \int_\omega vw. $$ The left-hand side of the above equality is positive and $\int_\omega vw > 0$. Therefore, $\mu > \lambda$. \end{document}
I liked three-quarters of the script by Tarell Alvin McCraney and only wish the film had been longer, thus giving depth and breath to a story that hardly, if ever, takes a moment to exhale and savor the flavor of its many strong scenes or great bits of dialogue. Then the film ends, quite abruptly, and I realized that not only was it missing character, but that the movie is missing important pieces of story. High Flying Bird is like a chic boutique with no stocked items to buy. You can admire the floor design all you want, but you’ll still leave empty-handed. I wonder what Spike Lee might have managed with this material. Had the film been better, or at the very least made the information more engaging and absorbing, I’d have argued that André Holland’s stellar leading performance as the disgruntled and deeply frustrated Ray was awards worthy. But Steven Soderbergh’s latest iPhone shot project – featuring some truly stunning cinematography that serves as an invigorating approach for new directorial voices, so long as you have a few million dollars to fuel the production side – doesn’t have a full-bodied script to explore the full use of its limbs. Instead High Flying Bird actualizes itself as a representation of the mythical Icarus, and its wings melt far before it ever reaches the scalding heat of the sun, floundering more than it flies.
english
The show is directed by Bryan Anthony, produced by Kami Styles and Teresa Bittner and based on the book by Rolad Dahl. Performances are 7 p.m. April 25, 26 and May 2 and 2 p.m. April 27 and May 4. Admission is $10 for adults, $8 for seniors and students and $5 for ages 10 and younger. Seniors pay $5 for Sunday matinees. For tickets and information, call Kami Stiles at 925-216-4613 or go to www.srctgrp.org. Big Band dinner, dance set for May 10. BRENTWOOD — The Brentwood Senior Citizens Club will host its “Big Band Dinner Dance” at the Brentwood Community Center, 35 Oak St. “The paying guests at the event were faced with a nearly personalised birthstone ballet shoes swarovski crystal pendant | flower girl necklace | bridesmaid gift | ballet shoes necklace impossible labyrinth of the defendants’ making to get out of that building,” said District Attorney Nancy O’Malley at a news conference Monday, “Almena and Harris’ actions were reckless and they created the high risk of death.”, Prosecutors announced the arrests a little more than six months after the inferno but declined to speak about whether others, including warehouse landlord Chor Ng, will face criminal charges, Friends and families of victims on Monday called for Ng to be charged, blaming her for owning a building many called a “deathtrap.”.. Daymé Arocena: In her mid-20s, Cuban vocalist Daymé Arocena has become an international sensation and her new album “Cubafonía” lives up to the considerable hype. Details: 1-3 p.m. Aug. 19. Kugelplex with Linda Tillery: The Bay Area klezmer ensemble Kugelplex joins forces with Bay Area vocal legend Linda Tillery, whose celebrates her 69th birthday (and 50th year on the scene) with a soul-steeped dance party. Details: 1-2:30 p.m. Sept. 2. 
Brooklyn Raga Massive and Classical Revolution presents Terry Riley’s “In C”: Co-founded by former Bay Area percussionist Sameer Gupta, Brooklyn Massive Raga transforms Terry Riley’s seminal minimalist work “In C” with classical Indian instrumentation. For this singular recital BMR is joined by the Bay Area string collective Classical Revolution. Details: 1-2:30 p.m. Sept. 16. Crushed by the betrayal, Bauer put down his brush, The Third Reich couldn’t still him, but he was no match for the brute force of capitalism, Ross, always personalised birthstone ballet shoes swarovski crystal pendant | flower girl necklace | bridesmaid gift | ballet shoes necklace a fierce presence on stage, radiates Hilla’s elegance as well as her monstrosity, A preening diva in a blue dress, she treats Bauer’s wife, Louise (a wry turn by Susi Damilano), like a maid, She pretends not to understand how his dealings with her, as Guggenheim’s chief curator, stripped him of the will to paint, She goads and bullies and teases, Anything to get him to paint once more and rescue her legacy, as a champion of abstract art, as well as his.. Sen. Jim Beall, D-San Jose. Total gifts: $715. $151.27 firefighter helmet and mug from California Professional Firefighters. Total travel: $0. Assemblyman Bob Wieckowski, D-Fremont. Total gifts: $4,028. $87.09 for three movie tickets to “This Means War” from FDX Entertainment Group. $40 bottle of tequila from Arteagas Food Center. $203 49ers Stadium groundbreaking event with commemorative hard hat, food, beverage and entertainment. $112: Two tickets to an Oakland A’s game from Noel Cook of Moraga.
english
व्केड्स २.४० / पट १.१२ सॉफ्टवेयर डीवीडी (विकास मॉडल में) भाषाएँ: अंग्रेजी, सेस्की, डस्क, डेच, एस्पालोल, फ्रांसिस, इटैलियन, मग्यार, नीदरलैंड्स, पोलस्की, पोर्टुगेस, रोमाना, श्रीपस्की, स्वेन्स्का, तुर्क, रूसी, लिटुवा संचार इकाई ८८८९००२० के लिए उत्पाद विवरण संचार इकाई के बारे में विस्तृत जानकारी के लिए, कनेक्शन प्रकार, स्थापना, एलईडी का वर्णन, दोष अनुरेखण और सॉफ़्टवेयर अद्यतन, टेक टूल की मदद देखें। जब केवल वाहन से जुड़ा होता है: यूनिट को एलईडी सेंसर फ्लैश करना चाहिए और फिर "पावर व्हीकल" को लाइट करना चाहिए। सुरक्षात्मक आवरण ८८८८००७० संचार इकाई ८८८९००२० के लिए सुरक्षात्मक आवरण यूएसबी केबल के माध्यम से वाहन या पीसी के साथ कनेक्शन खोने की संभावना को कम करना है। सुरक्षात्मक आवरण संचार इकाई की भी रक्षा करता है जब यह त्वरण क्षति (उदाहरण के लिए छोड़ने) के साथ-साथ कनेक्टर्स को नुकसान पहुंचाता है। सही कनेक्टर आयामों के साथ केवल उसब केबल (पार्ट्स नो.८८८८००२४) का उपयोग करें, चित्र देखें। १ ८८८९००२० तैयार करें, ताला तंत्र, रबर पैड और यूएसबी कनेक्टर (प्लग ए और प्लग बी) को हटा दें। २ रबर सुरक्षा में 888900२0 रखें। ३ ब्रैकेट को इकट्ठा करें। वाहन और उसब केबल कनेक्ट करें। प्लग ए में प्लग करें उसब केबल लॉक को इकट्ठा करें। वाहन केबल लॉक को इकट्ठा करें। ४ उपकरण अब दैनिक कार्य में उपयोग करने के लिए तैयार है।
hindi
\begin{document} \title{Circle Packing with Generalized Branching... On nilpotent automorphism groups of function fields} \author{Nurdag\"{u}l Anbar and Bur\c{c}in G\"{u}ne\c{s} \\ \small{Sabanc{\i} University}\\ \small MDBF, Orhanl\i, Tuzla, 34956 \.{I}stanbul, Turkey\\ \small Email: {\tt [email protected]}\\ \small Email: {\tt [email protected] } } \date{} \maketitle \begin{abstract} We study the automorphisms of a function field of genus $g\geq 2$ over an algebraically closed field of characteristic $p>0$. More precisely, we show that the order of a nilpotent subgroup $G$ of its automorphism group is bounded by $16(g-1)$ when $G$ is not a $p$-group. We show that if $|G|=16(g-1)$, then $g-1$ is a power of $2$. Furthermore, we provide an infinite family of function fields attaining the bound. \end{abstract} \noindent \textbf{Keywords:} Function field, Hurwitz Genus Formula, nilpotent group, positive characteristic \\ \textbf{Mathematics Subject Classification (2010):} 14H05, 14H37 \section{Introduction}\label{introduction} Let $K$ be an algebraically closed field and $F$ be a function field of genus $g$ with constant field~$K$. Denote by $\mathrm{Aut}(F/K)$ the automorphism group of $F$ over $K$. It is a well-known fact that if $F$ is of genus $0$ or $1$, then $\mathrm{Aut}(F/K)$ is an infinite group. However, this group is finite if $g \geq 2$, which was proved by Hurwitz \cite{Hurwitz1893} for $K=\mathbb{C}$ and by Schmid \cite{schmid1938} for $K$ of positive characteristic. In his paper, Hurwitz also showed that $|\mathrm{Aut}(F/K)| \leq 84(g-1)$, which is now called the Hurwitz bound. This bound is sharp, i.e., there exist function fields of characteristic zero of arbitrarily high genus whose automorphism group has order $84(g-1)$, see \cite{Macbeath}. Roquette \cite{roquette1970} showed that the Hurwitz bound also holds in positive characteristic $p$, provided that $p$ does not divide $|\mathrm{Aut}(F/K)|$. We remark that the Hurwitz bound does not hold in general.
In positive characteristic, the best known bound is \begin{center} $ |\mathrm{Aut}(F/K)| \leq 16g^4 $ \end{center} with one exception: the Hermitian function field. This result is due to Stichtenoth \cite{Sti1, Sti2}. \noindent However, there are better bounds for the order of certain subgroups of automorphism groups. Let $G \leq \mathrm{Aut}(F/K)$. Nakajima \cite{Nakajima} showed that if $G$ is abelian, then $|G| \leq 4(g+1)$ in any characteristic. Furthermore, Zomorrodian \cite{Zom} proved that $$ |G| \leq 16(g-1), $$ when $K=\mathbb{C}$ and $G$ is a nilpotent subgroup. He also showed that if equality holds, then $g-1$ is a power of $2$. Conversely, if $g-1$ is a power of $2$, then there exists a function field of genus $g$ that admits an automorphism group of order $16(g-1)$; whence a nilpotent group whose order is a power of $2$. We remark that his approach is based on the method of Fuchsian groups. In this paper, we give a similar bound for the order of any nilpotent subgroup of the automorphism group of a function field in positive characteristic, except for one case. More precisely, our main result is as follows. \begin{theorem}\label{thm:main} Let $K$ be an algebraically closed field of characteristic $p>0$ and $F/K$ be a function field of genus $g\geq 2$. Suppose that $G \leq \mathrm{Aut}(F/K)$ is a nilpotent subgroup of order $$|G|>16(g-1). $$ Then the following holds. \begin{itemize} \item[(i)] $G$ is a $p$-group. \item[(ii)] The fixed field $F_0$ of $G$ is rational. \item[(iii)] There exists a unique place $P_0$ of $F_0$, which is ramified in $F/F_0$. Moreover, $P_0$ is totally ramified, and $$ |G| \leq \frac{4p}{(p-1)^2}g^2. $$ \end{itemize} \end{theorem} \begin{remark} In the exceptional case in Theorem~\ref{thm:main}, since there is a unique ramified place, $F$ has $p$-rank zero by \cite[Corollary~2.2]{AST}.
\end{remark} \begin{remark} We conclude from the proof of Theorem~\ref{thm:main} that the bound $|G| \leq 16(g-1)$ also holds when $p=0$. Moreover, when the bound is attained, the order of $G$ is a power of $2$. \end{remark} \section{Preliminary Results}\label{pre} In this section, we recall some basic notions related to function fields and give some preliminary results, which will be our main tools for the proof of Theorem~\ref{thm:main}. For more details about function fields, we refer to \cite{HKT, thebook}. Let $K$ be an algebraically closed field of characteristic $p$ and $F\supseteq E$ be a finite separable extension of function fields over $K$ of genus $g(F)$ and $g(E)$, respectively. Denote by $\mathbb{P}_F$ the set of places of $F$. For a place $Q\in \mathbb{P}_F$ lying above $P\in \mathbb{P}_E$, we write $Q|P$, and denote by $e(Q|P)$ the ramification index and by $d(Q|P)$ the different exponent of $Q|P$. Recall that $Q|P$ is ramified if $e(Q|P)>1$. Moreover, if $p$ does not divide $e(Q|P)$, then $Q|P$ is called tamely ramified; otherwise it is called wildly ramified. By Dedekind's Different Theorem \cite[Theorem 3.5.1]{thebook}, $Q|P$ is ramified if and only if $d(Q|P)>0$. More precisely, $d(Q|P)\geq e(Q|P)-1$, and equality holds if and only if $Q|P$ is tame. Note that every place of $F$ has degree $1$ as $K$ is algebraically closed; hence, the genera of $F$ and $E$ are related by the Hurwitz Genus Formula \cite[Theorem 3.4.13]{thebook} as follows: \begin{equation}\label{eq:hgf1} 2g(F)-2 = [F:E](2g(E)-2) + \sum_{\substack{Q\in \mathbb{P}_F, P\in \mathbb{P}_E\\ Q|P}} d(Q|P), \end{equation} where $[F:E]$ is the extension degree of $F/E$. \noindent From now on, we suppose that $F/E$ is a Galois extension with Galois group $G$. As $F/E$ is Galois, $[F:E]=|G|$. Let $Q_1, \ldots, Q_m \in \mathbb{P}_F$ be all the extensions of $P\in \mathbb{P}_E$.
Since $G$ acts transitively on $Q_1, \ldots, Q_m$, we have $$ e(Q_i|P)=e(Q_j|P)=:e(P) \quad \text{and} \quad d(Q_i|P)=d(Q_j|P)=:d(P) $$ for all $i,j\in \lbrace 1,\ldots,m \rbrace$. Then by the Fundamental Equality \cite[Theorem 3.1.11]{thebook}, Equation~\eqref{eq:hgf1} can be written as \begin{align}\label{eq:hgf} 2g(F)-2=|G|\left( 2g(E)-2+ \sum_{ P\in \mathbb{P}_E }\frac{d(P)}{e(P)}\right). \end{align} Equation~\eqref{eq:hgf} and the following well-known lemma will be our main tools to give an upper bound for the order of a nilpotent subgroup of the automorphism group of a function field. \begin{lemma} If $G$ is a finite nilpotent group, then $G$ has a normal subgroup of order $n$ for each divisor $n$ of $|G|$. \end{lemma} \begin{proof} The proof follows from the fact that every finite nilpotent group is a direct product of its Sylow subgroups, see \cite{Hungerford}. \end{proof} Let $G$ be a subgroup of $\mathrm{Aut}(F/K)$. We denote the fixed field $F^G$ of $G$ by $F_0$ and the genus of $F_0$ by $g_0$. Note that $F/F_0$ is a Galois extension with Galois group $G$. Set $N:= |G|$. \begin{lemma}\label{l1} If $g_0 \geq 1$, then $N \leq 4(g-1)$. \end{lemma} \begin{proof} If $g_0 \geq 2$, then by Equation~\eqref{eq:hgf}, we conclude that $2g-2 \geq 2N$, i.e., $N \leq g-1$. If $g_0=1$, then $$ \displaystyle 2g-2= N\left( \sum_{\substack{P\in \mathbb{P}_{F_0}}}\frac{d(P)}{e(P)}\right), $$ by Equation~\eqref{eq:hgf}. Since $g \geq 2$, there exists a ramified place $P_0 \in \mathbb{P}_{F_0}$. Hence, $$ 2g-2 \geq N \frac{d(P_0)}{e(P_0)} \geq N \frac{(e(P_0)-1)}{e(P_0)} . $$ This implies that $N \leq 4(g-1)$ as $e(P_0) \geq 2$. \end{proof} \noindent From now on, we assume that $G$ is a nilpotent subgroup of $\mathrm{Aut}(F/K)$. By Lemma~\ref{l1}, we may also assume that $g(F_0)=0$.
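Before the case analysis, a quick sanity check of Equation~\eqref{eq:hgf} may be helpful. The following hyperelliptic computation is our added illustration (standard material, not part of the original paper), under the stated assumptions that $\mathrm{char}(K)\neq 2$ and $f$ is squarefree of even degree.

```latex
% Added illustration: the classical hyperelliptic check of the Hurwitz
% Genus Formula in its Galois form \eqref{eq:hgf}.
% Assumptions (ours): char(K) != 2 and f in K[x] squarefree of degree 2g+2.
% Take F_0 = K(x) and F = K(x,y) with y^2 = f(x), so G = Gal(F/F_0) has
% order |G| = 2. Exactly the 2g+2 zeros of f ramify (the place at infinity
% splits, since K is algebraically closed and deg f is even), each tamely
% with e(P) = 2 and d(P) = e(P) - 1 = 1. Equation \eqref{eq:hgf} then reads
\[
  2g(F)-2 \;=\; 2\left(2\cdot 0 - 2 + (2g+2)\cdot\tfrac{1}{2}\right) \;=\; 2g-2,
\]
% so g(F) = g, consistent with the genus of a hyperelliptic curve.
```

This is the pattern used repeatedly below: a lower bound on each ratio $d(P)/e(P)$ converts Equation~\eqref{eq:hgf} into an upper bound on $N$.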
\begin{definition} Suppose that $P_1, \ldots, P_r$ are all places of ${F_0}$, which are ramified in $F$ with ramification indices $e_1, \ldots, e_r$ and different exponents $d_1, \ldots, d_r$, respectively. We can without loss of generality assume that $e_1 \leq \ldots \leq e_r$. In this case, we say that $F/F_0$ (or shortly $F$) is of type $(e_1,e_2, \ldots, e_r)$. \end{definition} \begin{lemma} \label{lemma1} Let $\ell$ be a prime number. Then $\ell|N$ if and only if $\ell|e_i$ for some $i~\in~\{1, \ldots, r \}$. \end{lemma} \begin{proof} If $\ell|e_i$ for some $i \in \{1, \ldots, r \}$, then $\ell|N$ since $e_i |N$. Suppose that $\ell |N$ and $\ell \nmid e_i$ for any $i = 1, \ldots, r$. Since $G$ is nilpotent, there is a normal subgroup $H$ of $G$ such that $[G:H]=\ell$. Set $F_1:=F^H$. Note that $F_1 / F_0$ is an unramified Galois extension of degree $\ell$. Then by Equation~\eqref{eq:hgf} and the assumption $g(F_0)=0$, we obtain that $2g(F_1) - 2 = -2\ell$. That is, $g(F_1) = -\ell+1<0$, which is a contradiction. \end{proof} \begin{lemma} \label{lemma2} If $\ell$ is a prime number, which divides exactly one of $e_1, \ldots, e_r$, then $\ell=p$. \end{lemma} \begin{proof} Let $H$ be a normal subgroup of $G$ of index $[G:H]=\ell$ and $F_1=F^H$. Then there is only one place of $F_0$, which is ramified in $F_1/F_0$, say $P_1$. Suppose that $P_1$ is tamely ramified; equivalently, $\ell\neq\mathrm{char}(K)$, which is $p$. Then by Equation~\eqref{eq:hgf} $$ 2g(F_1)-2 = \ell\bigg(-2+\frac{d_1}{e_1}\bigg)= \ell\bigg(-2+\frac{\ell-1}{\ell}\bigg)=-\ell-1< -2, $$ which is a contradiction. \end{proof} \begin{lemma}\label{lem:two} Suppose that $\ell$ is a prime dividing $N$ and $\ell \neq \mathrm{char}(K)$. Say $N=\ell^aN_1$ for some integers $a, N_1 \geq 1$ such that $\gcd(\ell,N_1)=1$. Then we have the following.
\begin{itemize} \item[(i)] There exist at least two places whose ramification indices are divisible by $\ell$. \item[(ii)] If there are exactly two places whose ramification indices are divisible by $\ell$, then their ramification indices are divisible by $\ell^a$. \end{itemize} \end{lemma} \begin{proof} By Lemma~\ref{lemma1}, we know that $\ell|e_i$ for some $i \in \{1, \ldots, r \}$. Then $(i)$ follows from Lemma~\ref{lemma2} as $\ell\neq \mathrm{char}(K)$. Suppose that there are exactly two places whose ramification indices are divisible by $\ell$. Let $H$ be a normal subgroup of $G$ of index $[G:H]=\ell^a$ and $F_1=F^H$. We consider the Galois extension $F_1/F_0$ of degree $\ell^a$. Note that there are exactly two ramified places of $F_0$ in $F_1$. Since $F_1/F_0$ is a tame extension and $g(F_0)=0$, by Equation~\eqref{eq:hgf}, we conclude that they are totally ramified, which proves $(ii)$. \end{proof} \begin{lemma}\label{1wild} Let $p=\mathrm{char}(K)$ and $|G| = p^aN_1$ for some integers $a, N_1\geq 1$ such that $\gcd(p,N_1)~=~1$. Let $P$ be a wildly ramified place of $F_0$ in $F$ with ramification index $e(P)=p^tn$ for some positive integers $t\leq a$ and $n$ such that $\gcd (p,n)=1$. Then we have the following. \begin{enumerate}[(i)] \item\label{different} $d(P) \geq (e(P)-1) + n(p^t-1)$. \item\label{order} If $P$ is the unique wildly ramified place of $F_0$ in $F$, then $t=a$. \end{enumerate} \end{lemma} \begin{proof} Let $H$ be a normal subgroup of $G$ of index $[G:H]=p^a$ and $F_1=F^H$. Then $F_1/F_0$ is a Galois $p$-extension of degree $p^a$. \begin{enumerate}[(i)] \item Let $P'\in \mathbb{P}_{F_1}$ and $P''\in \mathbb{P}_{F}$ such that $P''| P'|P$. Then by the transitivity of the different exponent \cite[Corollary 3.4.12]{thebook}, $$ d(P)= e(P''|P')\cdot d(P'|P)+d(P''|P') = n\cdot d(P'|P) + (n-1).
$$ Also, $d(P'|P)\geq 2(p^t-1)$ by Hilbert's Different Formula \cite[Theorem 3.8.7]{thebook}; hence, \begin{align*} d(P)\geq 2n(p^t-1)+(n-1)= (np^t -1) + n(p^t-1). \end{align*} Then the fact that $e(P)=np^t$ gives the desired result. \item By Lemma~\ref{lem:two}-$(ii)$, we conclude that $G$ is a $p$-group. Suppose that $P$ is not totally ramified in $F$. Let $P'$ be a place of $F$ lying over $P$. We denote by $G_T(P'|P)$ the inertia group of $P'|P$. Note that since $P$ is not totally ramified, $G_T(P'|P)$ is a proper subgroup of $G$. Then the fact that $G$ is solvable implies that there exists a normal subgroup $H$ such that $G_T(P'|P) \leq H \leq G$ and $[G:H]=p$. That is, $F^H$ is a Galois extension of $F_0$ of degree $p$. Moreover, $F^H /F_0$ is unramified, as the inertia group of a place of $F$ lying over $P$ lies in $H$, which is a contradiction. \end{enumerate} \end{proof} \section{Proof of Theorem~\ref{thm:main}} In this section, we prove Theorem~\ref{thm:main}. We first fix the following notation. We denote by \begin{itemize} \item[] $F/K$ \hspace{3cm}a function field of genus $g\geq 2$ over an algebraically closed field $K$ \hspace*{4cm} of characteristic~$p>0$, \item[] $G \leq \mathrm{Aut}(F/K)$ \hspace{1cm} a nilpotent subgroup of $\mathrm{Aut}(F/K)$, \item[] $N :=|G|>1$, \item[] $F_0 := F^G$ \hspace{2.1cm} the fixed field of $G$. \end{itemize} Note that $F/F_0$ is a Galois extension of degree $[F:F_0]=N$. By Lemma~\ref{l1}, we can assume that $g(F_0) = 0$, that is, $F_0$ is rational. Suppose that $F$ is of type $(e_1,\ldots,e_r)$, where $r$ is the number of places of $F_0$ that are ramified in $F/F_0$. Recall that $e_1 \leq \ldots \leq e_r$. We will prove Theorem~\ref{thm:main} according to the number $r$. \begin{theorem} If $r \geq 5$, then $N \leq 4(g-1)$.
\end{theorem} \begin{proof} By Equation~\eqref{eq:hgf}, we have $$ 2g-2= N\bigg(-2+\sum_{i=1}^{r}\frac{d_i}{e_i}\bigg) \geq N\bigg(-2 + 5\cdot \frac{1}{2}\bigg)=\frac{N}{2}, $$ which gives the desired result. \end{proof} We consider $F$ of type $(e_1,e_2,e_3,e_4)$. That is, there are exactly $4$ ramified places of $F_0$, say $P_1,P_2,P_3,P_4$, with ramification indices $e_1,e_2,e_3,e_4$ and different exponents $d_1,d_2,d_3,d_4$, respectively. Then we have the following result. \begin{theorem}\label{thm:r=4} If $r= 4$, then $N\leq 8 (g-1)$. \end{theorem} \begin{proof} Note that if $e_2 \geq 3$, then $N\leq 4(g-1)$, since by Equation~\eqref{eq:hgf} $$ 2g-2 = N\bigg(-2+\sum_{i=1}^{4}\frac{d_i}{e_i}\bigg) \geq N\bigg(-2 +\frac{1}{2} + 3\cdot \frac{2}{3}\bigg)=\frac{N}{2}. $$ From now on, we suppose that $e_2 =2$, i.e., $e_1= e_2 =2$. Similarly, by Equation~\eqref{eq:hgf}, if $e_3 \geq 4$, then $N \leq 4(g-1)$. Hence, we investigate the cases $e_3\leq 3$ as follows. \begin{itemize} \item[(i)] $e_3= 3$:\\ If $e_4 \geq 6$, then $$ 2g-2 \geq N \bigg(-2 + 2\cdot \frac{1}{2} + \frac{2}{3} + \frac{5}{6}\bigg)= \frac{N}{2}, $$ which implies that $N \leq 4 (g-1)$. \noindent Note that $e_4\neq 5$ by Lemma~\ref{lemma2}, i.e., $F$ is either of type $(2, 2, 3, 4)$ or of type $(2, 2, 3, 3)$. \noindent Assume that $F$ is of type $(2, 2, 3, 4)$. Then $\mathrm{char}(K) = 3$ by Lemma~\ref{lemma2}; hence, $$ 2g-2 \geq N \bigg(-2 +2\cdot \frac{1}{2}+\frac{2\cdot (3-1)}{3}+\frac{3}{4} \bigg)> N, $$ i.e., $N<2(g-1)$. \noindent Assume that $F$ is of type $(2, 2, 3, 3)$. Then $N = 2^a 3^b$ for some positive integers $a$ and $b$ by Lemma~\ref{lemma1}. If $\mathrm{char}(K) = 2$ or $3$, then there are two wildly ramified places. Therefore, by Equation~\eqref{eq:hgf}, we obtain that $N \leq 2(g-1)$. Assume that $\mathrm{char}(K) >3$.
By Lemma~\ref{lem:two}$-(ii)$, we conclude that $a=b=1$, i.e., $N=6$. Then by Equation~\eqref{eq:hgf}, we obtain that $g=2$; hence, $N= 6(g-1)$. \item[(ii)] $e_3 = 2$: \\ Write $e_4 = 2^s m$ for some integers $s \geq 0$ and $m\geq 1$ such that $\gcd (2,m)=1$. That is, $F$ is of type $(2,2,2,2^s m)$. \noindent If $m > 1$, then $m=\ell^t$ for a prime $\ell > 2$ and an integer $t\geq 1$. By Lemma~\ref{lemma2}, we conclude that $\ell= p$, where $p$ is $\mathrm{char}(K)$. Moreover, $N=2^ap^b$ for some integers $a$, $b$ such that $a\geq 1$ and $b\geq t$ by Lemma~\ref{lemma1}. As there is a unique wildly ramified place, $t=b$ by Lemma~\ref{1wild}$-(ii)$, i.e., $e_4=2^sp^b$, and $d_4\geq (2^sp^b-1)+2^s(p^b-1)$. Then by Equation~\eqref{eq:hgf} \begin{align*} 2g-2 &\geq N \bigg( -2 + 3 \cdot \frac{1}{2} +\frac{(2^sp^b-1)+2^s(p^b-1)}{2^sp^b} \bigg) \geq N, \end{align*} i.e., $N\leq 2(g-1)$. \noindent If $m = 1$, then $F$ is of type $(2,2,2,2^s)$ and $N=2^a$. If $\mathrm{char}(K) =2$, then $N \leq g-1$. Suppose that $\mathrm{char}(K) > 2$. Then $P_1, P_2,P_3, P_4$ are all tamely ramified in $F$; hence, by Equation~\eqref{eq:hgf} $$ 2g-2 = N \left(-2 + 3\cdot\frac{1}{2} + \frac{2^s-1}{2^s}\right). $$ Note that $s\geq 2$ since $g \geq 2$, and $N=\displaystyle\frac{2^{s+1}}{2^{s-1}-1}(g-1) \leq 8(g-1)$. \end{itemize} \end{proof} \begin{remark} Note that if the bound $8(g-1)$ is attained by $F$, then $g-1$ is a power of $2$, $F$ is of type $(2,2,2,2^s)$ for some integer $s\geq 2$, and $\mathrm{char}(K) \neq 2$. \end{remark} Now we consider $F$ of type $(e_1,e_2,e_3)$. That is, there are exactly $3$ ramified places of $F_0$, say $P_1,P_2,P_3$, with ramification indices $e_1,e_2,e_3$ and different exponents $d_1,d_2,d_3$, respectively. Then we have the following result. \begin{theorem}\label{thm:r=3} If $r=3$, then $N\leq 16 (g-1)$.
\end{theorem} \begin{proof} If $e_1 \geq 4$, then $N\leq 8(g-1)$ by Equation~\eqref{eq:hgf}. Therefore, we investigate the cases $e_1\leq 3$ as follows. \begin{itemize} \item[(i)] $e_1 = 3$:\\ Note that if $e_2 \geq 5$, then $N < 8(g-1)$ by Equation~\eqref{eq:hgf}. \begin{itemize} \item[(a)] $e_2=4$:\\ Then $F$ is of type $(3,4,e_3)$. By Lemma~\ref{lemma2}, the ramification index $e_3$ can have at most one prime divisor $\ell >3$. Then $e_3 = 2^a3^b\ell^c$ for some integers $a,b,c \geq 0$ and $N= 2^s3^t\ell^c$ for some integers $s\geq 2$ and $t \geq 1$. If $c > 0$, then $\ell= p$ by Lemma~\ref{lemma2} and $a= 2$, $b=1$ by Lemma~\ref{lem:two}, i.e., $e_3 = 12p^c$. Also, by Lemma~\ref{1wild}, $d_3\geq (12p^c-1) + 12(p^c-1)$. Then by Equation~\eqref{eq:hgf}, we have $N< 2(g-1)$. Assume that $c=0$. Note that $a=0$ is not possible in this case; otherwise $F$ would be of type $(3,4,3)$ by Lemma~\ref{lem:two}$-(ii)$. If $b=0$, then $\mathrm{char}(K)=3$ and $F$ is of type $(3,4,4)$. As $P_1$ is wildly ramified, $N < 4(g-1)$ by Equation~\eqref{eq:hgf}. If $a,b\neq 0$, i.e., $e_3\geq 6$, then $N \leq 8(g-1)$ by Equation~\eqref{eq:hgf}. \item[(b)] $e_2=3$:\\ By similar arguments, $F$ is of type $(3,3,e_3)$, where $e_3=3^a\ell^b$ for a prime $\ell\neq 3$ and integers $a,b~\geq~0$. Then by Lemma~\ref{lemma1} and Lemma~\ref{1wild}, $N= 3^s\ell^b$ for an integer $s\geq 1$. If $b > 0$, then $\ell=p$. By Lemma~\ref{1wild}, $d_3\geq (3^ap^b-1) +3^a(p^b-1)$; hence, $N < 4(g-1)$ by Equation~\eqref{eq:hgf}. Now suppose that $b=0$. If $a=1$, then $\mathrm{char}(K)=3$; otherwise $g=1$. That is, all places are wildly ramified; hence, $N\leq g-1$ by Equation~\eqref{eq:hgf}. If $a>1$, then $N \leq 9(g-1)$. \end{itemize} \item[(ii)] $e_1 = 2$:\\ If $\mathrm{char}(K)=2$, i.e., $P_1$ is wildly ramified, then $N \leq 6(g-1)$ by Equation~\eqref{eq:hgf}.
From now on, we suppose that $\mathrm{char}(K) >2$. If $e_2 \geq 6$, then $N\leq 12(g-1)$ by Equation~\eqref{eq:hgf}. We investigate the cases $e_2 \leq 5$ as follows. \begin{itemize} \item[(a)] $e_2=5$:\\ Then $F$ is of type $(2,5,e_3)$, where $e_3 = 2^a5^b\ell^c$ for a prime $\ell \neq 2,5$ and integers $a,b,c \geq 0$. As $\mathrm{char}(K) \neq 2$, by Lemma~\ref{lem:two} and Lemma~\ref{1wild}, $e_3 = 2\cdot5^b\ell^c$ and $N= 2\cdot5^t\ell^c$ for an integer $t \geq 1$. If $c > 0$, then $\ell =p$ and $b=t=1$, i.e., $e_3 = 10p^c$. Then $d_3 \geq (10p^c-1) +10(p^c-1)$ by Lemma~\ref{1wild}; hence, $N < 3(g-1)$ by Equation~\eqref{eq:hgf}. \noindent If $c=0$, then $b\neq 0$, i.e., $e_3\geq 10$; hence, $N\leq 10(g-1)$ by Equation~\eqref{eq:hgf}. \item[(b)] $e_2=4$:\\ Then $F$ is of type $(2,4,e_3)$, where $e_3=2^a\ell^b$ for a prime $\ell > 2$ and integers $a,b \geq 0$. If $b> 0$, then $\ell = p$ and $d_3\geq(2^ap^b-1)+2^a(p^b -1 )$; hence, $N < 3(g-1)$ by Equation~\eqref{eq:hgf}. In this case, we also have $a\geq 1$. Suppose that $b=0$. Since $\mathrm{char}(K)\neq 2$, we have the following by Equation~\eqref{eq:hgf}: $$ 2g-2 = N \bigg(-2 + \frac{1}{2} + \frac{3}{4} + \frac{2^a-1}{2^a}\bigg) = N \bigg(\frac{1}{4} - \frac{1}{2^a}\bigg). $$ Then the fact that $g\geq 2$ implies that $a \geq 3$; hence, $N \leq 16(g-1)$. \item[(c)] $e_2=3$:\\ Then $F$ is of type $(2,3,e_3)$, where $e_3 = 2\cdot3^b\ell^c$ for a prime $\ell > 3$ and integers $b,c \geq 0$. \noindent If $c > 0$, then $\ell=p$ and $e_3=6p^c$ by Lemma~\ref{lemma2} and Lemma~\ref{lem:two}$-(ii)$. Then by Lemma~\ref{1wild}, i.e., $d_3\geq (6p^c-1) +6(p^c-1)$, and by Equation~\eqref{eq:hgf}, we conclude that $N < 3(g-1)$. \noindent Suppose that $c=0$; hence, $b\neq 0$. If $\mathrm{char}(K) \neq 3$, then $e_3=6$. By Equation~\eqref{eq:hgf}, we conclude that $g=1$, which is a contradiction.
Therefore, $\mathrm{char}(K)= 3$, i.e., $P_2, P_3$ are wildly ramified. Then $N \leq 2(g-1)$ by Equation~\eqref{eq:hgf} and Lemma~\ref{1wild}. \item[(d)] $e_2=2$:\\ Then $F$ is of type $(2,2,e_3)$, where $e_3=2^a\ell^b$ for a prime $\ell > 2$ and integers $a,b \geq 0$. Suppose that $b=0$. Since $\mathrm{char}(K) \neq 2$, $F/F_0$ is a tame extension. Then by Equation~\eqref{eq:hgf}, we conclude that $g=0$, which is a contradiction. Therefore, $b> 0$. Then $\ell=\mathrm{char}(K)$, i.e., $\ell=p$. Hence, by Lemma~\ref{1wild} and Equation~\eqref{eq:hgf}, we have the following. \begin{align*} 2g-2 &\geq N \bigg(-2 + 2\cdot \frac{1}{2} + \frac{(2^ap^b-1)+2^a(p^b-1 )}{2^ap^b}\bigg) \\ &= N\frac{2^ap^b - 2^a -1}{2^ap^b} \geq N \bigg(1- \frac{2^a+1}{3 \cdot2^a} \bigg) \geq \frac{N}{3}. \end{align*} Hence, $N \leq 6(g-1)$. \end{itemize} \end{itemize} \end{proof} \begin{remark} Note that if the bound $16(g-1)$ is attained by $F$, then $g-1$ is a power of $2$, $F$ is of type $(2,4,2^s)$ for some integer $s\geq 3$, and $\mathrm{char}(K) \neq 2$. \end{remark} We continue investigating $F$ of type $(e_1,e_2)$. That is, there are exactly $2$ ramified places of $F_0$, say $P_1,P_2$, with ramification indices $e_1,e_2$ and different exponents $d_1,d_2$, respectively. Then we have the following result. \begin{theorem}\label{thm:r=2} If $r=2$, then $N\leq 10(g-1)$.
That is, $d_i\geq 3$ for some $i=1,2$, and hence $N\leq 4(g-1)$ by Equation~\eqref{eq:hgf}. \noindent Now suppose that we are not in the case $p^{a}=p^{b}=2$. Since $P_1$ and $P_2$ are ramified with different exponents $d_1\geq 2(p^{a}-1)$ and $d_2\geq 2(p^{b}-1)$, respectively, by Equation~\eqref{eq:hgf} \begin{align}\label{eq:a,b} 2g-2 \geq N\left( -2+ \frac{2(p^{a}-1)}{p^{a}}+ \frac{2(p^{b}-1)}{p^{b}}\right) = N\left(2 -\frac{2}{p^{a}}-\frac{2}{p^{b}} \right)\geq \frac{N}{2}. \end{align} Therefore, $N\leq 4(g-1)$. \item[(ii)] $N_1>1$: \begin{itemize} \item[(a)] Suppose that $F$ is of type $(N_1, N_1 p^b)$. Then $N=N_1 p^b$ by Lemmas~\ref{lem:two} and \ref{1wild}. Note that if $N< 10$, then $N< 10(g-1)$ as $g\geq 2$. Therefore, we suppose that $N \geq 10$. Moreover, by Lemma~\ref{1wild} and Equation~\eqref{eq:hgf} \begin{align}\label{eq:pb} 2g-2 &\geq N \bigg(-2 + \frac{N_1-1}{N_1} + \frac{(N_1p^b-1) +N_1(p^b-1) }{N_1p^b}\bigg) \\ \nonumber &= N\bigg(1-\frac{1}{N_1}-\frac{1}{N_1p^b}-\frac{1}{p^b}\bigg). \end{align} Note that if $p^b\geq 5$, then $N\leq 10(g-1)$ by Equation~\eqref{eq:pb}. If $p^b = 4$ and $N_1\geq 3$, or $p^b = 3$ and $N_1\geq 4$, then $N\leq 6(g-1)$. In the case that $p^b = 2$ and $N_1\geq 5$, we obtain that $N\leq 10(g-1)$. \item[(b)] Suppose that $F$ is of type $(N_1 p^a, N_1 p^b)$, where $0<a\leq b$. Then $d(P_1)\geq (N_1p^a-1) +N_1(p^a-1)$ and $d(P_2)\geq (N_1p^b-1) +N_1(p^b-1)$ by Lemma~\ref{1wild}. By Equation~\eqref{eq:hgf}, we obtain that \begin{align*} 2g-2 &\geq N \bigg(2 - \frac{1}{N_1p^a} - \frac{1}{N_1p^b} - \frac{1}{p^a} - \frac{1}{p^b}\bigg) \geq N \bigg(2 - \frac{2}{N_1p^a} - \frac{2}{p^a}\bigg), \end{align*} which implies that $N\leq 3(g-1)$.
\end{itemize} \end{itemize} \end{proof} \begin{remark}\label{rem:2} The bound $10(g-1)$ is attained only by $F$ of genus $2$ such that $F$ is of type $(5,10)$ if $\mathrm{char}(K)=2$ or of type $(2,10)$ if $\mathrm{char}(K)=5$. \end{remark} \begin{remark} In \cite[Theorem 3.1]{KM}, the authors proved independently that if $G$ is an $\ell$-subgroup of $\mathrm{Aut}(F/K)$, where $\ell\geq 3$ is a prime and $\ell\neq \mathrm{char}(K)$, then $|G| \leq 9(g - 1)$. They also showed that equality can only be obtained for $\ell = 3$. Our analysis of the types of function fields with nilpotent automorphism groups not only leads to the same result, but also provides a bound for all nilpotent subgroups of $\mathrm{Aut}(F/K)$. \end{remark} It remains to consider the case $r=1$. \begin{theorem}\label{thm:r=1} If $r=1$, then $\displaystyle N \leq \frac{4p}{(p-1)^2}g^2$. \end{theorem} \begin{proof} By Lemma~\ref{1wild}-$(ii)$, $G$ is a $p$-group and the unique ramified place $P$ of $F_0$ is totally ramified in $F$. Therefore, the first ramification group $G_1$ of $P$ is the whole group $G$. By \cite[Satz~1~(c)]{Sti1}, $$ |G_1| \leq \frac{4|G_2|}{(|G_2|-1)^2}g^2 \leq \frac{4p}{(p-1)^2}g^2, $$ where $G_2$ is the second ramification group of $P$. This gives the desired result. \end{proof} \section{Examples} In this section, we present examples of function fields that attain the bounds obtained in Theorems~\ref{thm:r=4}, \ref{thm:r=3}, \ref{thm:r=2}, and \ref{thm:r=1}. In other words, the bounds given in these theorems cannot be improved. Moreover, for Theorem~\ref{thm:r=4} and Theorem~\ref{thm:r=3}, we construct a sequence of function fields $F_n/K$ such that \begin{itemize} \item [(i)] $g(F_n) \rightarrow \infty$, \item [(ii)] there exists a nilpotent subgroup $G_n \leq \mathrm{Aut}(F_n/K)$ whose order attains the respective bound.
\end{itemize} We need the following lemma to construct examples attaining the bound in Theorem~\ref{thm:r=3}. \begin{lemma}\label{unramifiedext} Let $\mathrm{char}(K) \neq 2$ and let $F_1/K$ be a Galois extension of $F_0/K$ with $g(F_1)\geq 2$. Then there exists a sequence of function fields $F_n/K$ with the following properties. \begin{itemize} \item[(i)] $F_n/F_0$ is Galois, \item[(ii)] $F_{n+1}/F_n$ is Galois and abelian of degree $[F_{n+1}:F_n] = 2^{2g(F_n)}$, \item[(iii)] $F_{n+1}/F_n$ is unramified, \item[(iv)] the exponent of $\mathrm{Gal}(F_{n+1}/F_n)$ is $2$. \end{itemize} \end{lemma} \begin{proof} By \cite[Section~4.7]{PS}, for a given function field $F/K$, there exists a unique maximal field $F' \supseteq F$ such that \begin{itemize} \item[(a)] $F'/F$ is Galois and abelian of degree $[F':F] = 2^{2g(F)}$, \item[(b)] $F'/F$ is unramified, and \item[(c)] the exponent of $\mathrm{Gal}(F'/F)$ is $2$. \end{itemize} For $n \geq 1$, let $F_{n+1}$ be the extension of $F_n$ described above. Now we show that $F_n/F_0$ is a Galois extension for each $n\geq 1$. We proceed by induction on $n$. By our assumption, $F_1/F_0$ is Galois. Now suppose that $F_n/F_0$ is Galois. Let $\tilde{F}$ be the Galois closure of $F_{n+1}/F_0$, see Figure~\ref{4}. \begin{figure}[!ht] \begin{center}{ \xymatrix{ &&&&&&&&\tilde{F}&&\\ &&&&&&&&F_{n+1} \ar@{-}[u] &&\\ &&&&&&&&F_n\ar@{-}[u] \ar@{-}@/_1pc/[u]_{\text{Galois } }&&\\ &&&&&&&&F_0 \ar@{-}[u]\ar@{-}@/_1pc/[u]_{\text{Galois} }\ar@{-}@/^2pc/[uuu]^{\text{Galois}}&& }} \end{center} \caption{The Galois closure of $F_{n+1}/F_0$} \label{4} \end{figure} Let $\gamma \in \mathrm{Gal}(\tilde{F}/F_0)$. Since $F_n/F_0$ is Galois, we have $\gamma(F_n) = F_n$. In particular, $\gamma(F_{n+1})$ is a Galois, abelian, unramified extension of $F_n$ whose degree is a power of $2$.
By the uniqueness of such an extension, we conclude that $\gamma(F_{n+1})=F_{n+1}$, which gives the desired result. \end{proof} The following example shows that the bound in Theorem~\ref{thm:r=3} is attained by function fields of infinitely many genera. We follow an approach similar to that of \cite{KM}. \begin{example}\label{ex:r=3} Let $p \neq 2$ and let $\zeta$ be a primitive $8$-th root of unity. Consider the function field $F$ with the defining equation $$ y^2 = x(x^4-1). $$ Note that $F/K(x)$ is a Kummer extension of degree $2$, where $(x=\infty)$, $(x=0)$ and $(x=\zeta^{2k})$, $k=0,1,2,3$, are the ramified places of $K(x)$, see \cite[Proposition 3.7.3]{thebook}. Hence we conclude that $g(F)=2$. Note that the maps \begin{align*} \sigma :\left\{ \begin{array}{lr} x \mapsto \zeta^2 x \\ y \mapsto \zeta y \\ \end{array} \right. \qquad \text{ and } \qquad \tau:\left\{ \begin{array}{lr} x \mapsto -1/x \\ y \mapsto y/x^3 \end{array} \right. \vspace*{-.5cm} \end{align*} define automorphisms of $F$ over $K$. Set $\eta :=\sigma \tau$. Then $\sigma, \eta \in \mathrm{Aut}(F/K)$ with $\mathrm{ord}(\sigma) = 8$, $\mathrm{ord}(\eta) = 2$ and $\eta\sigma \eta^{-1}=\sigma^3$. Let $G=\langle \sigma \rangle \rtimes \langle\eta \rangle$. Then $G$ is a group of order $16$. Set $z:= x^4$ and $t:= \displaystyle \frac{z^2+1}{2z}$. We consider the sequence of function fields $K(t) \subseteq K(z) \subseteq K(x) \subseteq F$ to investigate the ramification structure in $F/K(t)$, see Figure~\ref{5}.
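For completeness, the genus claim $g(F)=2$ can be verified directly (this short computation is ours, not taken from the cited sources): the six ramified places of $K(x)$ listed above are each totally ramified in the degree-$2$ extension $F/K(x)$, and since $p\neq 2$ the ramification is tame, so the Hurwitz genus formula gives
```latex
2g(F)-2 \;=\; 2\,\bigl(2\cdot 0-2\bigr) \;+\; \sum_{P}\,(2-1)
        \;=\; -4+6 \;=\; 2,
```
hence $g(F)=2$.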
\begin{figure}[!ht] \begin{center}{ \xymatrix{ &&&&&&&{F}=K(x,y)&&\\ &&&&&&& K(x) \ar@{-}[u]^{y^2 = x(x^4-1) \;\;}_{\;\;\mathrm{deg}=2}&&\\ &&&&&&&K(z)\ar@{-}[u]^{z=x^4 \;\;}_{\;\; \mathrm{deg}=4}&&\\ &&&&&&&K(t) \ar@{-}[u]^{t= \frac{z^2+1}{2z} \;\;}_{\;\;\mathrm{deg}=2}&& }} \end{center} \caption{$K(t) \subseteq K(z) \subseteq K(x) \subseteq F$} \label{5} \end{figure} \noindent Observe that $K(t) \subseteq F^{\langle \sigma \rangle}$ and $K(t) \subseteq F^{\langle \tau \rangle}$; hence, $K(t) \subseteq F^G$. Then the fact that $[F:K(t)]~=~16$ implies that $F^G=K(t)$, that is, $F/K(t)$ is a Galois extension of degree $16$. Then we have the following observations. \begin{itemize} \item[(i)] $(z=0)$ and $(z=\infty)$ are the only ramified places of $K(z)$ in $K(x)$, and they are totally ramified. $(x=0)$ and $(x=\infty)$ are the unique places lying over $(z=0)$ and $(z=\infty)$, respectively. That is, $(z=0)$ and $(z=\infty)$ are totally ramified in $F$. Furthermore, $(x=\zeta^{2k})$, $k=0,1,2,3$, are the places lying over $(z=1)$. \item[(ii)] The ramified places of $K(t)$ in $K(z)$ are $(t=1)$ and $(t=-1)$, with $(z=1)$ and $(z=-1)$ the places lying over them, respectively. Furthermore, $(z=0)$ and $(z=\infty)$ lie over $(t=\infty)$. \end{itemize} Hence, we conclude that the ramified places of $K(t)$ in $F$ are $(t=-1)$, $(t=1)$ and $(t=\infty)$ with ramification indices $2,4,8$, respectively. That is, $F$ is of type $(2,4,8)$ and $N=16=16(g(F)-1)$. Set $F_0 = K(t)$ and $F_1 := F$. Then by Lemma~\ref{unramifiedext}, there exists a sequence of function fields $F_n/K$ such that $F_{n+1}/K(t)$ is a Galois extension whose degree is a power of $2$. Note that $g(F_{n+1}) > g(F_{n})$ as $g(F_1)=2$. Since $F_{n+1}/F_1$ is an unramified extension, $K(t)$ has exactly $3$ ramified places in $F_{n+1}$, namely $(t=-1)$, $(t=1)$ and $(t=\infty)$, whose ramification indices are $2$, $4$, $8$, respectively.
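As a consistency check (our computation, not part of the cited construction): since $p\neq 2$, the extension $F_{n+1}/K(t)$ is tamely ramified, so the Hurwitz genus formula applied to the three ramified places with indices $2,4,8$ gives, with $N=[F_{n+1}:K(t)]$,
```latex
2g(F_{n+1})-2
  \;=\; N\Bigl(-2+\tfrac{1}{2}+\tfrac{3}{4}+\tfrac{7}{8}\Bigr)
  \;=\; \frac{N}{8},
\qquad\text{hence}\qquad
N \;=\; 16\,\bigl(g(F_{n+1})-1\bigr).
```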
Thus, $F_{n+1}$ is of type $(2,4,8)$ and satisfies $[F_{n+1}:K(t)] = 16 (g(F_{n+1})-1)$. \end{example} By using Example~\ref{ex:r=3}, we obtain the following example, which shows that the bound given in Theorem~\ref{thm:r=4} is attained by function fields of infinitely many genera. \begin{example} \label{ex:r=4} Let $F_n/K(t)$ be the Galois extension given in Example \ref{ex:r=3} for $n\geq 1$. Recall that $p \neq 2$. We consider the Kummer extension $K(w)/K(t)$ given by $w^2=t-1$. Note that $w\in F_n$ as $t-1$ is a square in $F_n$; hence, $F_n/K(w)$ is a Galois extension whose degree is a power of $2$, see Figure~\ref{6}. \begin{figure}[!ht] \begin{center}{ \xymatrix{ &&&&&&&{F_n}&&\\ &&&&&&& &&\\ &&&&&&& & K(w)\ar@{-}[uul]_{\;\;\mathrm{deg}= \frac{[F_n:K(t)]}{2}}&\\ &&&&&&&K(t)\ar@{-}[uuu]^{\;\;}\ar@{-}[ur]_{w^2=t-1}&& }} \end{center} \caption{$K(t) \subseteq K(w) \subseteq F_n$} \label{6} \end{figure} By \cite[Proposition 3.7.3]{thebook}, $(t=1)$ and $(t=\infty)$ are the only ramified places of $K(t)$ in $K(w)/K(t)$. In particular, $(w=0)$ and $(w=\infty)$ are the places of $K(w)$ lying over $(t=1)$ and $(t=\infty)$, respectively. Moreover, $(w=\alpha)$ and $(w=-\alpha)$ are the places of $K(w)$ lying over $(t=-1)$, where $\alpha^2=-2$. Then, by transitivity of ramification, we conclude that $(w=\alpha)$, $(w=- \alpha)$, $(w=0)$ and $(w=\infty)$ are the only ramified places of $K(w)$ in $F_n$ and that the ramification indices are given by \begin{align*} e((w= \alpha))=e((w=- \alpha))=e((w=0))=2 \quad \text{and} \quad e((w=\infty))=4 . \end{align*} Hence, $F_n$ is a function field of type $(2,2,2,4)$ satisfying $[F_n:K(w)]=8(g(F_n)-1)$. \end{example} The following two examples show that both cases in which the bound in Theorem~\ref{thm:r=2} can be attained do occur, see Remark \ref{rem:2}.
\begin{example} \label{ex:r=2} Let $p=2$ and let $F$ be the function field given by the defining equation $y^2-y =x^5$. By considering $F$ as a Kummer extension of degree $5$ over $K(y)$, where $(y=\infty)$, $(y = 0)$ and $(y=1)$ are all the ramified places of $K(y)$, we conclude that $g(F)=2$ by \cite[Proposition 3.7.3]{thebook}. \noindent Set $z:= x^5$. Then $K(x)/K(z)$ and $K(y)/K(z)$ are Galois extensions of degree $5$ and $2$, respectively. Hence, $F/K(z)$ is a Galois extension of degree $10$, see Figure~\ref{3}. Note that the automorphism group of $F/K(z)$ is generated by $\sigma$ defined by $$ \sigma: \left\{ \begin{array}{lr} x \mapsto \zeta x \\ y \mapsto y + 1, \\ \end{array} \right. $$ where $\zeta$ is a primitive $5$-th root of unity. \begin{figure}[!ht] \begin{center}{ \xymatrix{ &&&&&& F =K(x,y)&&&&&\\ &&&&&K(x) \ar@{-}[ur]^{y^2-y=x^5}&&K(y) \ar@{-}[ul]&&&& \\ &&&&&& K(z) \ar@{-}[ur]_{y^2-y=z}\ar@{-}[ul]^{x^5=z}&&&&&}} \end{center} \caption{$F$ as a compositum of $K(x)$ and $K(y)$} \label{3} \end{figure} \noindent Note that $(z=\infty)$ is the only ramified place of $K(z)$ in $K(y)$, with ramification index $2$. Also, $(z=0)$ and $(z=\infty)$ are the only ramified places of $K(z)$ in $K(x)$, each with ramification index $5$. Then, by Abhyankar's Lemma \cite[Theorem 3.9.1]{thebook}, $(z=0)$ and $(z=\infty)$ are the only ramified places of $K(z)$ in $F$, with ramification indices $5$ and $10$, respectively. That is, $F$ is of type $(5,10)$ satisfying $[F:K(z)] = 10 (g(F)-1)$. \noindent Now let $p=5$ and let $F$ be the function field given by the defining equation $y^5-y =x^2$. Similarly, $F$ is a function field of genus $2$. If we set $z:= x^2$, then $F/K(z)$ is a cyclic extension of degree $10$, where $(z=0)$ and $(z=\infty)$ are all the ramified places of $K(z)$ in $F$, with ramification indices $2$ and $10$, respectively.
That is, $F$ is of type $(2,10)$ satisfying $[F:K(z)] = 10(g(F)-1)$. Note that the automorphism group of $F/K(z)$ is generated by $\sigma$ defined by $$ \sigma: \left\{ \begin{array}{lr} x \mapsto \zeta x \\ y \mapsto y + 1, \\ \end{array} \right. $$ where $\zeta$ is a primitive $2$-nd root of unity, i.e., $\zeta=-1$. \end{example} The following example shows that the bound in Theorem \ref{thm:r=1} is attained; for further details see \cite[Satz~5]{Sti2}. \begin{example}\label{ex:r=1} Let $p\geq 5$ be a prime and $n \geq 1$ an integer. Consider the function field $F_n:= K(x,y)$ defined by $$ y^{p}+ y =x^{p^{n}+1}. $$ Then $g(F_n)= \frac{p^{n}(p-1)}{2}$. Note that the pole divisors of $x,y$ are $(x)_\infty = p\cdot P$ and $(y)_\infty = (p^n+1)\cdot P$, respectively, for a place $P$ of $F_n$. Let $G=(\mathrm{Aut}(F_n/K))_{P}$ be the automorphism group fixing the unique pole $P$ of $x$ and $y$. The group $G$ consists of the automorphisms of the form $$ \sigma:\left\{ \begin{array}{lr} x \mapsto x+d, \\ y \mapsto y+Q(x), \end{array} \right. $$ where $p\deg Q(x) \leq p^n$ and $Q(x)^p + Q(x) = (x+d)^{p^n+1} -x^{p^n+1}$. In this case, $|G| = p^{2n+1}$ and $\displaystyle |G| = \frac{4p}{(p-1)^2}\,g(F_n)^2$. \end{example} \begin{thebibliography}{999} \addcontentsline{toc}{chapter}{\numberline{}\large Bibliography} \bibitem{AST} N. Anbar, H. Stichtenoth, and S. Tutdere. \textit{Asymptotically good towers with small $p$-rank and many automorphisms}. Preprint. \bibitem{HKT} J. W. P. Hirschfeld, G. Korchm\'{a}ros, and F. Torres. \textit{Algebraic curves over a finite field}. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2008. \bibitem{Hungerford} T. W. Hungerford. \textit{Algebra}, volume 73 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1980. \bibitem{Hurwitz1893} A. Hurwitz. \textit{\"{U}ber algebraische Gebilde mit eindeutigen Transformationen in sich}. Math.
Ann., 41:403--442, 1893. \bibitem{KM} G. Korchm\'{a}ros and M. Montanucci. \textit{Large odd prime power order automorphism groups of algebraic curves in any characteristic}. J. Algebra, 547:312--344, 2020. \bibitem{Macbeath} A. M. Macbeath. \textit{On a theorem of Hurwitz}. Proc. Glasgow Math. Assoc., 5:90--96, 1961. \bibitem{Nakajima} S. Nakajima. \textit{On abelian automorphism groups of algebraic curves}. J. Lond. Math. Soc., s2--36(1):23--32, 1987. \bibitem{PS} R. Pries and K. Stevenson. \textit{A survey of Galois theory of curves in characteristic $p$}. In WIN---Women in Numbers, volume 60 of Fields Inst. Commun., pages 169--191. Amer. Math. Soc., Providence, RI, 2011. \bibitem{roquette1970} P. Roquette. \textit{Absch\"{a}tzung der Automorphismenanzahl von Funktionenk\"{o}rpern bei Primzahlcharakteristik}. Math. Z., 117(1):157--163, 1970. \bibitem{schmid1938} H. L. Schmid. \textit{\"{U}ber die Automorphismen eines algebraischen Funktionenk\"{o}rpers von Primzahlcharakteristik}. J. Reine Angew. Math., 179:5--15, 1938. \bibitem{Sti1} H. Stichtenoth. \textit{\"{U}ber die Automorphismengruppe eines algebraischen Funktionenk\"{o}rpers von Primzahlcharakteristik. I. Eine Absch\"{a}tzung der Ordnung der Automorphismengruppe}. Arch. Math. (Basel), 24:527--544, 1973. \bibitem{Sti2} H. Stichtenoth. \textit{\"{U}ber die Automorphismengruppe eines algebraischen Funktionenk\"{o}rpers von Primzahlcharakteristik. II. Ein spezieller Typ von Funktionenk\"{o}rpern}. Arch. Math. (Basel), 24:615--631, 1973. \bibitem{thebook} H. Stichtenoth. \textit{Algebraic function fields and codes}, volume 254 of Graduate Texts in Mathematics. Springer-Verlag, Berlin, second edition, 2009. \bibitem{Zom} R. Zomorrodian. \textit{Nilpotent automorphism groups of Riemann surfaces}. Trans. Amer. Math. Soc., 288(1):241--255, 1985. \end{thebibliography} \end{document}
Two services interfere with each other.

I have a service and it works. I wrote another one with a different name. But the commands to uninstall one of them act on the other. That is, you cannot start them at the same time: it says the service is already running. And the installer of one is removed by the other.

A small addition, found experimentally: I took a third service (an empty one that only beeps; the second service was built on top of it), installed the second and third together, and everything worked (with the first one not installed). Then I uninstalled the first one, and that uninstalled the third, after which the first one installed completely. As a result, the first and second now work together.

Maybe they differ only in description, and not in name? Could you give more details: what are the names and descriptions?

The file names are different and the descriptions are different. The third service I used has the same name as the first one, but uninstalling it was enough for the second service to stop interfering with the first.

The name under which it appears in the list of services?

Each has its own. Another addition (mind-boggling): now the first and second services install and run on their own perfectly normally. On the one hand, the problem is solved :), but I would really like to get to the bottom of this mess.

"The name under which it appears in the list of services" is the DisplayName, while the service name is the "Name" of the TService class instance. They have different names. But in the end they now work together, and nothing in them was changed; they just went through the ritual with the third service described above.
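The diagnosis suggested in the thread (the Service Control Manager keys services by their internal service name, not by the DisplayName, and treats those names case-insensitively) can be checked mechanically. The following Python sketch is illustrative and not from the thread itself: it parses `sc query`-style output and reports internal names that collide. The sample text and service names below are made up.

```python
# Sketch: detect duplicate Windows service names from the output of
# `sc query type= service state= all`. The sample output is hypothetical;
# on a real machine you would feed in the actual command output.
from collections import Counter

def service_names(sc_output: str) -> list[str]:
    """Extract internal service names from `sc query`-style output."""
    names = []
    for line in sc_output.splitlines():
        line = line.strip()
        if line.startswith("SERVICE_NAME:"):
            names.append(line.split(":", 1)[1].strip())
    return names

def duplicates(names: list[str]) -> list[str]:
    """Return names occurring more than once, compared case-insensitively
    (service names are assumed case-insensitive, as the SCM treats them)."""
    counts = Counter(n.lower() for n in names)
    return [n for n, c in counts.items() if c > 1]

sample = """\
SERVICE_NAME: MyServiceA
DISPLAY_NAME: First demo service

SERVICE_NAME: myservicea
DISPLAY_NAME: Third (beep-only) demo service

SERVICE_NAME: MyServiceB
DISPLAY_NAME: Second demo service
"""

print(duplicates(service_names(sample)))  # -> ['myservicea']
```

If two installed services report the same SERVICE_NAME, exactly the install/uninstall collisions described in the thread are to be expected, regardless of how the display names and file names differ.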
Anuj built a Rs 250-crore brand in 4 years; he once worked for Nirav Modi

Anuj was once Vice President at Nirav Modi's company; today he runs a juice business.

New Delhi. Anuj Rakyan at one time handled the jewellery business as Vice President at Nirav Modi's company. But 4 years ago he decided to build a business of his own, and from an initial investment of Rs 1.5 crore he has today built a brand worth Rs 250 crore. Anuj brought a new concept to juice: he entered the business targeting customers with a particular focus on health. Anuj launched the Raw Pressery brand, whose brand ambassador at present is Jacqueline Fernandez.
\begin{document} \begin{abstract} Certain rigid irregular $G_2$-connections constructed by the first-named author are related via pullbacks along a finite covering and Fourier transform to rigid local systems on a punctured projective line. This kind of property was first observed by Katz for hypergeometric connections and used by Sabbah and Yu to compute irregular Hodge filtrations for hypergeometric connections. This strategy can also be applied to the aforementioned $G_2$-connections and we compute jumping indices and dimensions for their irregular Hodge filtrations. \end{abstract} \title{Irregular Hodge numbers for Rigid $G_2$-Connections} \section{Introduction} Initiated by Deligne \cite{De07}, the goal of irregular Hodge theory is to provide an analogue of the theory of variations of Hodge structures in the context of irregular singular differential equations. These equations are of interest in several areas of mathematics, ranging from mirror symmetry to the geometric Langlands program. \par In analogy to the case of rigid local systems, for which Simpson proves in \cite[Corollary 8.1]{Sim90} that they underlie a complex variation of Hodge structure (provided their local monodromy has eigenvalues with absolute value one), Sabbah proves in \cite[Theorem 0.7]{Sa18} that an irreducible rigid irregular connection can be equipped with a canonical irregular Hodge filtration (if the eigenvalues of the formal monodromy at every singularity have absolute value one). \par Construction and classification of $G_2$-connections goes back to the work of Dettweiler and the second-named author \cite{DR10}, who classified tamely ramified rigid $G_2$-local systems. The classification is carried out explicitly and relies on the work of Katz on rigid local systems \cite{Ka96}.
He defines an operation called middle convolution and proves that any rigid local system may be constructed from a local system of rank one using twists with rank one local systems and middle convolution. The classification of \cite{DR10} led in particular to the construction of a family of motives for motivated cycles with motivic Galois group $G_2$. \par The work of Katz was generalised by Arinkin \cite{Arinkin10} (and Deligne), who proved that any rigid irregular connection may be constructed from a rank one connection if one allows the additional operation of Fourier transform. Using these results, the classification of tamely ramified rigid $G_2$-connections was generalised by the first-named author in \cite{Ja20} to irregular $G_2$-connections with slope of the form $1/k$ for some positive integer $k$. \par In the case of rigid local systems, the work of Dettweiler and Sabbah \cite{DS13} provides an algorithmic way to compute the Hodge data of a rigid local system, provided one knows how to construct it from rank one using middle convolution. Unfortunately, in the irregular case it is in general still unknown how the irregular Hodge filtration changes under Fourier transform. Therefore one has to make use of other tools to compute irregular Hodge filtrations for rigid connections. \par First explicit results for Hodge filtrations of hypergeometric connections were obtained by Sevenheck and Domínguez in \cite{CDS19}. These results were extended by Sabbah and Yu to arbitrary non-resonant hypergeometric connections in \cite{SY19}, using a result of Katz \cite[Theorem 6.2.1.]{Ka90} which relates irregular hypergeometric connections to regular singular hypergeometric connections via pullbacks and Fourier transform. Additionally, in \cite{FSY18} irregular Hodge numbers for symmetric powers of Kloosterman connections are computed. Apart from these results, explicit computations of irregular Hodge filtrations remain rare.
\par The goal of this article is to prove that a similar stability property holds for the non-hypergeometric rigid irregular $G_2$-connections constructed in \cite{Ja20}, to compute their irregular Hodge filtrations, and thereby to expand the list of computable examples of irregular Hodge filtrations. \par \subsection{Results} Consider the rigid irregular $G_2$-connections with local data at $0$ and $\infty$ given by \begin{center} \begin{tabular}{ c c } $0$ & $\infty$ \\ \hline \\ $(\mathbf{J}(3),\mathbf{J}(3),1)$ & \specialcell[c]{$\textup{El}( 2,1,(\alpha,\alpha^{-1}))$ \\ $\oplus\, \textup{El}(2,2,1) \oplus (-1)$} \\ [15pt] $(-\mathbf{J}(2),-\mathbf{J}(2),E_3)$ & \specialcell[c]{$\textup{El}( 2,1,(\alpha,\alpha^{-1}))$ \\ $\oplus\, \textup{El}(2,2,1)\oplus (-1)$} \\ [15pt] $(-\beta E_2,-\beta^{-1}E_2,E_3)$ & \specialcell[c]{$\textup{El}( 2,1,(\alpha,\alpha^{-1}))$ \\ $\oplus\, \textup{El}(2,2,1)\oplus (-1)$} \\ \hline \\ $(\mathbf{J}(3),\mathbf{J}(2), \mathbf{J}(2))$ & \specialcell[c]{$\textup{El}(2,1,1) \oplus \textup{El}(2,t,1)$ \\ $\oplus\,\textup{El}(2,t+1,1) \oplus (-1)$} \end{tabular} \end{center} with $t \in \mathbb{C}^\times$, $\alpha=\exp(-2\pi i a), \beta=\exp(-2\pi i b)$ and $a,b\in (1/2,1)$, constructed in \cite[Theorem 1.1.]{Ja20}. Here by $x\mathbf{J}(s)$ we denote a Jordan block of size $s$ with eigenvalue $x$, and \[\textup{El}(2,2,1)=\textup{El}(u^2,2/u,1)\] is an elementary module in the sense of \cite{Sa08}, Section 2. \par Denote the above connections by $\mathcal{E}_1, \mathcal{E}_2, \mathcal{E}_3$ and $\mathcal{E}_4$, numbered from top to bottom. By \cite[Theorem 0.7]{Sa18} these connections are equipped with a canonical irregular Hodge filtration $F_\textup{irr}^\beta$. This is a decreasing filtration (indexed by real numbers) of the fiber \[H_i=\mathcal{E}_i|_{z=1} \] which is a finite dimensional $\mathbb{C}$-vector space.
It is only well-defined up to a global shift of the index. We denote by \[\textup{gr}_{F_{\textup{irr}}}^\beta(H_i):=F_{\textup{irr}}^\beta / \sum_{\gamma > \beta} F_{\textup{irr}}^\gamma \] the associated graded. The set of indices $\beta$ for which $\textup{gr}_{F_{\textup{irr}}}^\beta(H_i) \neq 0$ is finite, and these indices are called the jumping indices (or jumps) of the filtration. Our main result is the following. \begin{theorem} \label{thm: irreg hodge} Denote by $F_{\textup{irr}}$ the irregular Hodge filtration and let $d_\alpha^p(-)=\dim \textup{gr}_{F_\textup{irr}}^{\alpha+p}(-)$. We have the following jumping indices and irregular Hodge numbers (up to a global shift) for $\mathcal{E}_1$, $\mathcal{E}_2$ and $\mathcal{E}_4$. \[ \begin{array}{cc} p & d_{1/2}^p\\ \hline 0 & 2 \\ 1 & 3 \\ 2 & 2 \end{array} \] For $\mathcal{E}_3$ we have the following cases. \[ \begin{array}{ccc} p & d^p_{2b} &d^p_{1}\\ \hline 0 & 2 & 0 \\ 1 & 2 & 3 \end{array},\quad 3/4=b< a, \] \[ \begin{array}{ccc} p & d^p_{2b}& d^p_{1} \\ \hline 0 & 2 & 1 \\ 1 & 2 & 1 \\ 2 & 0 & 1 \end{array},\quad 3/4=b > a, \] \[ \begin{array}{cccc} p & d^p_{2b} & d^p_{2(1-b)+1} & d^p_{1} \\ \hline 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 \\ 2 & 0 & 0 & 1 \end{array},\quad b > a, b\neq 3/4, \] \[ \begin{array}{cccc} p & d^p_{2b} & d^p_{2(1-b)+1}&d^p_{1}\\ \hline 0 & 2 & 0 & 0 \\ 1 & 0 & 2 & 3 \\ \end{array},\quad 1/2<b< a, b\neq 3/4. \] \end{theorem} The connections above are not all that appear in \cite[Theorem 1.1.]{Ja20}.
There are six more rigid $G_2$-connections, with the following local data: \begin{center} \begin{tabular}{ c c } $(iE_2,-iE_2,-E_2,1)$ & \specialcell[c]{$\textup{El}(3,\alpha,1)$ \\ $\oplus\,\textup{El}(3,-\alpha,1)\oplus(1)$} \\ [15pt] \hline \\ $\mathbf{J}(7)$ & $\textup{El}(6,\alpha_1, 1)\oplus(-1)$ \\ [10pt] $(\varepsilon\mathbf{J}(3), \varepsilon^{-1}\mathbf{J}(3),1)$ & $\textup{El}(6,\alpha, 1)\oplus(-1)$ \\ [10pt] $(z\mathbf{J}(2), z^{-1}\mathbf{J}(2), z^2, z^{-2},1)$ & $\textup{El}(6,\alpha, 1)\oplus(-1)$ \\ [10pt] $(x\mathbf{J}(2),x^{-1}\mathbf{J}(2), \mathbf{J}(3))$ & $\textup{El}(6,\alpha, 1)\oplus(-1)$ \\ [10pt] $(x,y,xy,(xy)^{-1},y^{-1},x^{-1}, 1)$ & $\textup{El}(6,\alpha, 1)\oplus(-1)$ \\ [10pt] \end{tabular} \end{center} where $\alpha\in \mathbb{C}^\times$. However, it is easy to see that the last five connections are hypergeometric connections (their Euler characteristic on $\mathbb{G}_m$ is $-1$) and that the first in the list is the pull-back of a hypergeometric connection. Therefore Theorem $1$ of \cite{SY19} can be applied to compute their Hodge numbers. \par \subsection{Outlook} In \cite{FSY18} the authors use irregular Hodge theory to prove meromorphic continuation for $L$-functions associated to symmetric power moments of Kloosterman sums. This is done by computing the Hodge numbers of symmetric powers of Kloosterman connections, which turn out to be zero or one. The same property of the Hodge numbers holds for the $G_2$-connection $\mathcal{E}_3$ in the case $b>a$, $b \neq 3/4$, in which the irregular Hodge filtration is of maximal length with every graded quotient being one-dimensional. \par Already in the tame case, Dettweiler and Sabbah \cite{DS13} use explicit computations of Hodge numbers to prove potential automorphy for a family of Galois representations attached to a tamely ramified rigid $G_2$-local system (which implies meromorphic continuation for the associated $L$-function).
Both of these approaches rely on a potential automorphy criterion of Patrikis and Taylor \cite{PT15}. \par In the future we hope to relate the rigid irregular $G_2$-connections above to exponential motives in a similar fashion as done in \cite{FSY18} for Kloosterman connections. Since the irregular $G_2$-connection $\mathcal{E}_3$ arises from a rigid local system via pull-back and Fourier transform, one may hope that these exponential motives can furthermore be related to classical motives, and finally that a similar result for the $L$-functions attached to them can be proved using the maximality of the Hodge filtration. \subsection*{Acknowledgement} Studying the irregular Hodge filtration of rigid connections was suggested to us by Claude Sabbah and we thank him for that. We wish to thank him, Javier Fresán and Jeng-Daw Yu for helpful conversations on irregular Hodge filtrations. In addition we wish to thank Claude Sabbah and Michael Dettweiler for comments that helped improve a preliminary version of this article. \subsection{Strategy} The crucial point for computing the irregular Hodge filtration is a certain stability that was first observed by Katz in \cite[Section 6.2.]{Ka90} for hypergeometric differential equations. Let $\mathcal{H}$ be a confluent hypergeometric connection of type $(n,m)$, $n>m$, on $\mathbb{G}_m$. Let $d=n-m$ and denote by $[d]$ the $d$-fold covering of $\mathbb{P}^1$. For sufficiently generic parameters there exists a regular singular hypergeometric connection $\mathcal{H}'$ such that the Fourier transform of $[d]^*\mathcal{H}$ is isomorphic to $[d]^*\mathcal{H}'$. This property was used in \cite{SY19} by Sabbah and Yu to compute irregular Hodge filtrations for hypergeometric connections. \par We will prove a similar stability for the above mentioned $G_2$-systems and adopt the strategy of Sabbah and Yu to compute the Hodge filtrations. More precisely, we proceed in the following steps.
\par First we construct rigid local systems $\mathcal{L}$ for which the Fourier transform of a pullback along $[k]$, for some positive integer $k$, gives the rigid irregular connections $\mathcal{E}_i$. In the hypergeometric case, the authors of \cite{SY19} used a result of Fedorov \cite{Fe18}, who computed the Hodge data for hypergeometric local systems on $\mathbb{P}^1 \setminus \{0,1,\infty\}$. In our case no such result is available. We therefore have to compute the Hodge data of $[k]^*\mathcal{L}$ explicitly, using results from \cite{DS13} and \cite{DR19} and a result on pullbacks of variations of Hodge structures in \cite[Section 4]{SY19}. The construction of $\mathcal{L}$ and the computation of the Hodge data are carried out in Section \ref{s: hodge data}. \par The next step is to prove that the aforementioned stability actually holds. We do this via an explicit computation with differential operators in Section \ref{s: operators}. The crucial point here is that we know an explicit algorithmic construction of $\mathcal{E}_i$ in terms of the Katz-Arinkin algorithm \cite{Arinkin10}, i.e. via middle convolution and Fourier transform. \par Finally we want to apply the stationary phase formula \cite[Section 5, (7)]{SY19} to compute the irregular Hodge filtrations. Unfortunately, in all cases the local monodromy at infinity has eigenvalue one, so stationary phase is not directly applicable. We avoid this by applying a suitable middle convolution to make all eigenvalues non-trivial. This is done directly and explicitly in terms of differential operators. \par The Fourier transform turns this middle convolution into a twist with a rank one local system. By \cite[Lemma 1]{SY19} such a twist only changes the Hodge filtration by a global shift, and we can conclude. \subsection{Notations and preliminary results} Here we recall some notation from \cite{DS13}.
\subsubsection{Local Hodge data} Let $\Delta$ be a disc and $j:\Delta^*\hookrightarrow \Delta$ the inclusion of the punctured disc. Let $(V, F^\bullet V, \nabla, k)$ be a variation of polarized complex Hodge structure on $\Delta^*$, defined as in \cite{DS13}, Section~2.1. We will often denote this simply by $V$. \par For any real number $a$ there is an extension $V^a$ of $V$ to $\Delta$ called the Deligne canonical lattice. It is characterised by the property that the eigenvalues of the residue of $\nabla$ lie in $[a,a+1)$. Letting $V^{>a}=\bigcup_{a'>a} V^{a'}$, the residue of $V^{>a}$ has eigenvalues in $(a,a+1]$. For $a\in [0,1)$ and $\lambda=\exp(-2\pi i a)$ we define the space of nearby cycles \[\psi_a(V):=V^a/V^{>a}.\] This space comes equipped with the nilpotent endomorphism $N=-(t\partial_t-a)$ which induces a monodromy filtration on it. For any $\ell \in \mathbb{Z}_{\ge 0}$ there is the space of primitive vectors $P_\ell\psi_a(V)$ with respect to the monodromy filtration. Its dimension is the number of Jordan blocks of size $\ell+1$ for the action of $N$ on $\psi_a(V)$. \par Defining $F^pV^a:= j_*F^pV\cap V^a$ we get a Hodge filtration $F^p\psi_a(V):=F^pV^a / F^pV^{>a}$ on $\psi_a(V)$. This further induces a filtration on $P_\ell\psi_a(V)$ and we refer to \cite{DS13}, Section~2.2 for details. \par We define the local Hodge numbers \[ \nu_{a,l}^p(V):=\dim \textup{gr}^p_F {\rm P}_l\psi_{a}(V), \] \[ \nu_{a,{\rm prim}}^p(V):=\sum_{l\geq 0}\nu_{a,l}^p(V), \] \[ h^p(V):=\nu^p(V):=\sum_{a\in [0,1)}\nu_{a}^p(V).\] \subsubsection{Hodge data on $\mathbb{A}^1\setminus S$} Now let $(V, F^\bullet V, \nabla, k)$ be a variation of polarized complex Hodge structure on $\mathbb{A}^1 \setminus S$. To indicate the local Hodge data of $V$ at the point $x\in S\cup \{\infty\}$ we write $\nu_{x,a,l}^p(V)$ and so on.
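As a concrete illustration of the Jordan-block counts entering $P_\ell\psi_a(V)$ above: for a nilpotent $N$, the number of Jordan blocks of size exactly $s$ equals $\operatorname{rk}N^{s-1}-2\operatorname{rk}N^{s}+\operatorname{rk}N^{s+1}$. The following is a minimal numerical sketch (the helper names are ours, purely illustrative and not part of the text):

```python
import numpy as np

def nilpotent_jordan(sizes):
    # block-diagonal nilpotent matrix with Jordan blocks of the given sizes
    n = sum(sizes)
    N = np.zeros((n, n))
    i = 0
    for s in sizes:
        N[i:i+s, i:i+s] = np.eye(s, k=1)
        i += s
    return N

def block_count(N, s):
    # number of Jordan blocks of size exactly s:
    #   rk(N^{s-1}) - 2 rk(N^s) + rk(N^{s+1})
    rk = lambda m: np.linalg.matrix_rank(np.linalg.matrix_power(N, m))
    return rk(s - 1) - 2 * rk(s) + rk(s + 1)

sizes = [3, 2, 2, 1]
N = nilpotent_jordan(sizes)
assert [block_count(N, s) for s in range(1, 5)] == [1, 2, 1, 0]
```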
\par As in \cite{DS13}, Section~2.3 we define global Hodge numbers \[ \delta^p(V)=\deg \textup{gr}^p_F V^0 \] and we also make use of the following further local Hodge numbers defined in \cite{DR19}, Section~2, \begin{eqnarray*} \omega^p_{x}(V) &:=&\nu^p_{x}(V)-\nu^p_{x,0,\textup{prim}}(V) \\ \omega^p_{\neq \infty }(V) &:=&\sum_{x\in S \setminus \infty}\omega^p_{x}(V) \\ \omega^p_{}(V) &:=&\sum_{x\in S } \omega^p_{x}(V). \end{eqnarray*} By \cite[Corollary 8.1]{Sim90} any rigid local system $\mathcal{L}$ on $\mathbb{P}^1\setminus S$ (for $S$ finite) underlies a variation of polarized complex Hodge structure $(V,F^\bullet V, \nabla, k)$ on $\mathbb{P}^1\setminus S$ (provided its local monodromy has eigenvalues of absolute value one). A general result of Deligne \cite[Prop.~1.13]{De87} implies that any such local system underlies at most one such variation (up to a shift of the Hodge filtration). When working with rigid local systems we will often abuse notation and write for example $\nu_{x,a,l}^p(\mathcal{L})$ instead of $\nu_{x,a,l}^p(V)$ to denote the corresponding Hodge invariant of the associated variation of Hodge structure. For readability we may sometimes even drop the $V$ completely if it is clear from the context. \par To compute the global Hodge numbers for Kummer pullbacks we make use of the following lemma. \begin{lemma}\label{lem: global hodge} Let $f:\mathbb{P}^1 \rightarrow \mathbb{P}^1$ be given by $f(t)=t^k$, $k\in \mathbb{Z}_{\ge 1}$, and $V$ a variation of polarized complex Hodge structure on $\mathbb{P}^1\setminus S$ for some finite set $S$. The pullback $f^*V$ is again a variation of polarized complex Hodge structure.
Then \[\delta^p(f^*V)=k\delta^p(V)+\sum_{b \in [0,1)} \lfloor k b \rfloor(\nu_{0, b}^p+\nu_{\infty,b}^p).\] \end{lemma} \begin{proof} Denote by $V^{-\mathbfit{j}/\mathbfit{k}}$ the extension to $\mathbb{P}^1$ of $V$ given by the Deligne canonical lattices $V^{-{j}/{k}}$ at $0$ and at $\infty$ and by $V^0$ at the other singularities (see \cite{DS13}, Section~2.2). We have an exact sequence \[0 \rightarrow V^0 \rightarrow V^{-\mathbfit{j}/\mathbfit{k}} \rightarrow V^{-\mathbfit{j}/\mathbfit{k}} / V^0\rightarrow 0 \] where \[V^{-\mathbfit{j}/\mathbfit{k}} / V^0\cong \bigoplus_{b\in [-j/k,0)} \psi_{b,t}(V)\oplus \psi_{b,t^{-1}}(V)\] and everything is compatible with the Hodge filtration. By \cite[Lemma 2]{SY19} we have \[(f^*V)^0=\bigoplus_{j=0}^{k-1} y^j\otimes V^{-\mathbfit{j}/\mathbfit{k}}\] and hence \begin{align*}\delta^p(f^*V)=\deg\textup{gr}_F^p(f^*V)^0&=\sum_{j=0}^{k-1}\left( \deg\textup{gr}_F^pV^0 +\sum_{b\in [-j/k,0)} \nu_{0,b}^p(V)+\nu_{\infty,b}^p(V)\right) \\ &=k\delta^p(V)+\sum_{j=1}^{k-1}\sum_{b\in [1-j/k,1)} (\nu_{0,b}^p(V)+\nu_{\infty,b}^p(V)). \end{align*} Let $b\in [0,1)$ such that $\nu_{0,b}^p(V)\neq 0$ and assume that $1-\frac{j}{k} \le b < 1-\frac{j-1}{k}$. This means that $k-j\le k b < k-j+1$ and hence $\lfloor k b\rfloor=k-j$. This is precisely the number of intervals $[1-j'/k,1)$, $1\le j'\le k-1$, in which $b$ is contained. The same discussion applies to $\nu_{\infty, b}^p(V)\neq 0$ and hence \[k\delta^p(V)+\sum_{j=1}^{k-1}\sum_{ b\in [1-j/k,1)} (\nu_{0, b}^p(V)+\nu_{\infty, b}^p(V))=k\delta^p(V)+\sum_{ b \in [0,1)} \lfloor k b \rfloor(\nu_{0, b}^p(V)+\nu_{\infty, b}^p(V)),\] proving the claim.
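The floor-counting step in this argument can be checked numerically; the following is a small sketch (not part of the proof, and the function name \texttt{interval\_count} is ours):

```python
from math import floor
from fractions import Fraction

def interval_count(k, b):
    # number of j in {1, ..., k-1} with b in [1 - j/k, 1)
    return sum(1 for j in range(1, k) if 1 - Fraction(j, k) <= b < 1)

# check  #{j : b in [1-j/k, 1)} == floor(k*b)  for rational b in [0, 1)
for k in range(1, 9):
    for num in range(0, 4 * k):
        b = Fraction(num, 4 * k)   # b runs through [0, 1)
        assert interval_count(k, b) == floor(k * b)
```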
\end{proof} \section{Hodge data for rigid local systems} \label{s: hodge data} In this section we construct rigid local systems $\mathcal{P}_{13}, \mathcal{P}_2$ and $\mathcal{P}_4$ whose pullbacks are related to the pullbacks of $\mathcal{E}_1, \mathcal{E}_2, \mathcal{E}_3$ and $\mathcal{E}_4$ via Fourier transform. \begin{prop}\label{E1E3} Let $a,b \in [1/2,1)$ and $\alpha:=\exp(-2\pi ia),\beta:=\exp(-2\pi i b)$, where $\alpha \neq \beta^{\pm 1}$. There exists a regular singular connection $\mathcal{P}_{13}$ on $\mathbb{P}^1\setminus \{ 0,1,4,\infty\}$ with symplectic monodromy group and local monodromy \begin{eqnarray*} \begin{array}{cccc} 0 & 1 & 4 & \infty \\ \hline (\mathbf{J}(2),1,1) & (\alpha,\alpha^{-1},1,1) & (\mathbf{J}(2),1,1) & (\beta,\beta,\beta^{-1},\beta^{-1}) \end{array}, &&\quad \alpha\neq -1,\ \beta\neq -1,\\ \begin{array}{cccc} 0 & 1 & 4 & \infty \\ \hline (\mathbf{J}(2),1,1) & (-\mathbf{J}(2),1,1) & (\mathbf{J}(2),1,1) & (\beta,\beta,\beta^{-1},\beta^{-1}) \end{array},&& \quad \alpha=-1,\ \beta\neq -1,\\ \begin{array}{cccc} 0 & 1 & 4 & \infty \\ \hline (\mathbf{J}(2),1,1) & (\alpha,\alpha^{-1},1,1) & (\mathbf{J}(2),1,1) & (-\mathbf{J}(2),-\mathbf{J}(2)) \end{array}, &&\quad \alpha\neq -1,\ \beta= -1. \end{eqnarray*} Furthermore $\mathcal{P}_{13}$ has the following Hodge data \begin{eqnarray*} \begin{array}{ccccc} p & h^p & \delta^p & \omega^p & \nu^p_{\infty,b,1} \\ \hline 0 & 2 & -2 & 5 & 0 \\ 1 & 2 & -1 & 3 & 2 \end{array},&&\quad b=1/2, \\ \begin{array}{cccccc} p & h^p & \delta^p & \omega^p & \nu^p_{\infty,b,0} & \nu^p_{\infty,1-b,0}\\ \hline 0 & 2 & -2 & 5 & 2 & 0 \\ 1 & 2 & -1 & 3 & 0 & 2 \end{array},&&\quad 1/2<b< a, \\ \begin{array}{cccccc} p & h^p & \delta^p & \omega^p & \nu^p_{\infty,b,0} & \nu^p_{\infty,1-b,0}\\ \hline 0 & 2 & -2 & 5 & 1 & 1 \\ 1 & 2 & -1 & 3 & 1 & 1 \end{array},&&\quad b > a.
\end{eqnarray*} \end{prop} \begin{proof} Using the Katz algorithm \cite[Chapter~6]{Ka96} we construct $\mathcal{P}_{13}$, starting from a regular singular connection of rank 1 on $\mathbb{P}^1\setminus \{ 0,1,4,\infty\}$, via the following sequence of middle convolutions $\textup{MC}_\lambda$ and tensor products with regular singular connections of rank 1 on $\mathbb{P}^1\setminus \{ 0,1,4,\infty\}$. Let ${\mathcal M}$ be the regular singular connection of rank 1 on $\mathbb{P}^1\setminus \{ 0,1,4,\infty\}$ with local monodromies \[ \begin{array}{ccccc} &0 & 1 & 4 & \infty \\ \hline {\mathcal M} & \alpha^{-1} & \alpha \beta & \alpha^{-1} & \alpha \beta^{-1} \end{array}.\] Then \[ \textup{MC}_\beta (\textup{MC}_{\beta^{-1}\alpha} {\mathcal M} \otimes {\mathcal M}_2)={ \mathcal{P}_{13}},\] where ${\mathcal M}_2$ is a regular singular connection of rank 1 on $\mathbb{P}^1\setminus \{ 0,1,4,\infty\}$ with local monodromies \[ \begin{array}{ccccc} &0 & 1 & 4 & \infty \\ \hline {\mathcal M}_2 & 1 & \alpha^{-1} \beta^{-1} & 1 & \alpha \beta \end{array}.\] By the Katz algorithm \cite[Chapter~6]{Ka96} the local monodromy data is \[ \begin{array}{ccccc} &0 & 1 & 4 & \infty \\ \hline {\mathcal M} & \alpha^{-1} & \alpha \beta & \alpha^{-1} & \alpha \beta^{-1} \\ \textup{MC}_{\beta^{-1}\alpha} {\mathcal M} & (1,\beta^{-1}) & (1,\alpha^2) & (1,\beta^{-1}) & (\alpha^{-1}\beta,\alpha^{-1}\beta) \\ \textup{MC}_{\beta^{-1}\alpha} {\mathcal M} \otimes {\mathcal M}_2& (1,\beta^{-1}) & ( \alpha^{-1} \beta^{-1},\alpha \beta^{-1}) & (1,\beta^{-1}) & (\beta^2,\beta^2) \\ \textup{MC}_\beta (\textup{MC}_{\beta^{-1}\alpha} {\mathcal M} \otimes {\mathcal M}_2)& (\mathbf{J}(2),1,1) & (\alpha,\alpha^{-1},1,1) & (\mathbf{J}(2),1,1) & (\beta,\beta,\beta^{-1},\beta^{-1}) \end{array}.\] Since the monodromy tuple of $\mathcal{P}_{13}$ is linearly rigid and the eigenvalues of its local monodromies are invariant under
taking inverses, the monodromy group is selfdual. The transvection at $0$ implies that the monodromy group is contained in $\Sp_4$ and the Hodge length is even. To compute the Hodge data of $\mathcal{P}_{13}$ one can now apply the Hodge theoretic version of the Katz algorithm by Dettweiler and Sabbah in \cite{DS13} and determine the Hodge data in each of the above steps. However, in this special situation one can shortcut the computations. From \cite[Proposition~2.3.3]{DS13} one concludes that the Hodge length of $\mathcal{P}_{13}$ is at most one more than the number of middle convolutions used, hence at most $3$. Since the Hodge length is even it has to be $2$. Thus $h^0( \mathcal{P}_{13})=h^1( \mathcal{P}_{13})=2$ by selfduality. Further, since \[ \sum_{s \in \mathbb{P}^1} \rk(T_s- {\rm id})-2\rk( \mathcal{P}_{13})=0,\] where $T_s$ denotes the local monodromy of $\mathcal{P}_{13}$ at $s$, the parabolic cohomology group $H^1_{\rm par} (\mathcal{P}_{13})$ vanishes, cf. \cite[Section~2]{DR19}, i.e.\ $\mathcal{P}_{13}$ is parabolically rigid. Therefore we get the following system of equations, cf. \cite[Proposition~2.7]{DR19}, \[ \delta^{i-1}-\delta^i-h^{i-1}-h^{i} +\omega^{i-1}=0. \] The selfduality and the definition of $\omega^p$ give $\omega^0=5$ and $\omega^1=3$, hence the claim on the $\delta^p$ follows. It remains to determine the nearby cycle data at $\infty$. If $b=1/2$ then the local monodromy at infinity already determines $\nu^1_{\infty,1/2,1}=2$. Let $b>1/2$ and $c \in \{a,1-a\}$ such that $\nu^0_{\infty,c,0}=1$. Let further ${ \mathcal L}_x$, $x \in (0,1)$, be the rank $1$ connection with local monodromy \[ \begin{array}{ccccc} &0 & 1 & 4 & \infty \\ \hline { \mathcal L}_x & 1 & \exp(-2\pi ix) & 1 & \exp(2\pi ix) \end{array}.\] Then \[ \mathcal{P}_{13} \otimes { \mathcal L}_b,\quad \mathcal{P}_{13} \otimes {\mathcal L}_{1-b} \] are both parabolically rigid.
Thus \[ -2=-h^0( \mathcal{P}_{13})=-h^0( \mathcal{P}_{13}\otimes {\mathcal L}_b)=\delta^0( \mathcal{P}_{13} \otimes {\mathcal L}_b), \] \[ -2=-h^0( \mathcal{P}_{13})=\delta^0( \mathcal{P}_{13} \otimes {\mathcal L}_{1-b}). \] Hence if $\nu^0_{\infty,d,0}( \mathcal{P}_{13})=2$, $d \in \{ b,1-b\}$, then by \cite[Proposition]{DS13} \[ \delta^0( \mathcal{P}_{13} \otimes {\mathcal L}_b)=\delta^0( \mathcal{P}_{13})-h^0( \mathcal{P}_{13})+\lfloor c+b \rfloor +2 \lfloor d +1-b \rfloor \] and \[\delta^0( \mathcal{P}_{13} \otimes {\mathcal L}_{1-b})=\delta^0( \mathcal{P}_{13})-h^0( \mathcal{P}_{13})+\lfloor c+1-b \rfloor +2 \lfloor d +b \rfloor. \] Thus \[ d+1-b\geq 1, \quad c+b<1, \quad c+1-b <1, \quad d+b\geq 1.\] This implies $ b=d $ and $ c<b<1-c=a.$ On the other hand, if $\nu^0_{\infty,b,0}( \mathcal{P}_{13})=\nu^1_{\infty,b,0}( \mathcal{P}_{13})=1$ then by \cite[Proposition]{DS13} \[ \delta^0( \mathcal{P}_{13} \otimes {\mathcal L}_{1-b})=\delta^0( \mathcal{P}_{13})-h^0( \mathcal{P}_{13})+\lfloor c+1-b \rfloor +2 \] \[ \delta^0( \mathcal{P}_{13} \otimes {\mathcal L}_{b})=\delta^0( \mathcal{P}_{13})-h^0( \mathcal{P}_{13})+\lfloor c+b \rfloor +1. \] Thus $ c+1-b< 1$ and $c+b\geq 1$, that is, $ a= \max\{c,1-c\}< b$. This shows the claim. \end{proof} \begin{prop} \label{E1E3conv} Let $\mathcal{P}_{13}^2:=[2]^* \mathcal{P}_{13}$ and $\mathcal{P}':= \textup{MC}_\chi\mathcal{P}_{13}^2$ for $\chi=\exp(-2\pi i \mu )$, $\mu \in [0,1)$, $1-\mu \sim 0$, by which we mean that $\mu$ is generic and sufficiently close to $1$.
Then ${\mathcal{P}'}$ has the following Hodge data \begin{eqnarray*} \begin{array}{cccc} p & h^p & \nu^p_{\infty,1-\mu,2} & \nu^p_{\infty,1-\mu,0}\\ \hline 0 & 2 & 0 & 0 \\ 1 & 3 & 0 & 1 \\ 2 & 2 & 2 & 0 \end{array},&&\quad b=1/2, \\ \begin{array}{ccccc} p & h^p & \nu^p_{\infty,2b-\mu,0} & \nu^p_{\infty,2(1-b)+1-\mu,0}&\nu^p_{\infty,1-\mu,0}\\ \hline 0 & 2 & 2 & 0 & 0 \\ 1 & 5 & 0 & 2 & 3 \end{array},&&\quad 1/2<b< a,\ b\neq 3/4, \\ \begin{array}{cccc} p & h^p & \nu^p_{\infty,2b-\mu,0} &\nu^p_{\infty,1-\mu,0}\\ \hline 0 & 2 & 2 & 0 \\ 1 & 5 & 2 & 3 \end{array},&&\quad 3/4=b< a, \\ \begin{array}{ccccc} p & h^p & \nu^p_{\infty,2b-\mu,0} & \nu^p_{\infty,2(1-b)+1-\mu,0} & \nu^p_{\infty,1-\mu,0} \\ \hline 0 & 3 & 1 & 1 & 1 \\ 1 & 3 & 1 & 1 & 1 \\ 2 & 1 & 0 & 0 & 1 \end{array},&& \quad b > a,\ b\neq 3/4, \\ \begin{array}{cccc} p & h^p & \nu^p_{\infty,2b-\mu,0}& \nu^p_{\infty,1-\mu,0} \\ \hline 0 & 3 & 2 & 1 \\ 1 & 3 & 2 & 1 \\ 2 & 1 & 0 & 1 \end{array},&&\quad 3/4=b > a.
\end{eqnarray*} \end{prop} \begin{proof} One has the following Hodge data for ${ \mathcal{P}}_{13}^2$ \begin{eqnarray*} \begin{array}{cccccc} p & h^p & \delta^p & \omega^p & \omega^p_{\neq \infty} & \nu^p_{\infty,0,1} \\ \hline 0 & 2 & -2 & 7 & 5 & 0 \\ 1 & 2 & 0 & 2 & 2 & 2 \end{array},&& \quad b=1/2, \\ \begin{array}{ccccccc} p & h^p & \delta^p & \omega^p & \omega^p_{\neq \infty} & \nu^p_{\infty,2b-1,0} & \nu^p_{\infty,2(1-b),0} \\ \hline 0 & 2 & -2 & 7 & 5 & 2 & 0 \\ 1 & 2 & -2 & 4 & 2 & 0 & 2 \end{array},&& \quad 1/2<b< a,\ b\neq 3/4, \\ \begin{array}{cccccc} p & h^p & \delta^p & \omega^p & \omega^p_{\neq \infty} & \nu^p_{\infty,2b-1,0} \\ \hline 0 & 2 & -2 & 7 & 5 & 2 \\ 1 & 2 & -2 & 4 & 2 & 2 \end{array},&& \quad 3/4=b< a, \\ \begin{array}{ccccccc} p & h^p & \delta^p & \omega^p& \omega^p_{\neq \infty} & \nu^p_{\infty,2b-1,0} & \nu^p_{\infty,2(1-b),0} \\ \hline 0 & 2 & -3 & 7 & 5 & 1 & 1 \\ 1 & 2 & -1 & 4 & 2 & 1 & 1 \end{array},&& \quad b > a,\ b\neq 3/4, \\ \begin{array}{cccccc} p & h^p & \delta^p & \omega^p& \omega^p_{\neq \infty} & \nu^p_{\infty,2b-1,0} \\ \hline 0 & 2 & -3 & 7 & 5 & 2 \\ 1 & 2 & -1 & 4 & 2 & 2 \end{array},&& \quad 3/4=b > a. \end{eqnarray*} The claim on $h^p, \delta^p, \omega^p, \omega^p_{\neq \infty}, \nu^p$ follows from the above propositions, \cite[Lemma~2]{SY19} and the local monodromy data of ${\mathcal{P}}_{13}^2$.
By \cite[Proposition~5.3 (i)]{DR19} \[ h^p({ \mathcal{P}}')=\delta^{p-1}({ \mathcal{P}}_{13}^2)-\delta^p({ \mathcal{P}}_{13}^2)+\omega^{p-1}_{\neq \infty}({ \mathcal{P}}_{13}^2).\] The claim on $\nu^p_{\infty,1-\mu,0}$ follows from \cite[Corollary~6.2]{DR19} and \cite[Proposition~2.7]{DR19}, \begin{align*} \nu^p_{\infty,1-\mu,0}({ \mathcal{P}}')&=h^p(H^1_{{\rm par}}({ \mathcal{P}}_{13}^2))\\ &= \delta^{p-1}({ \mathcal{P}}_{13}^2)-\delta^p({ \mathcal{P}}_{13}^2)-h^{p-1}({ \mathcal{P}}_{13}^2)-h^{p}({ \mathcal{P}}_{13}^2)+\omega^{p-1}({ \mathcal{P}}_{13}^2), \end{align*} and the claim on $ \nu^p_{\infty,2b-\mu,l}({ \mathcal{P}}'), \nu^p_{\infty,2(1-b)+1-\mu,l}({ \mathcal{P}}')$ follows from \cite[Proposition~5.3 (ii)]{DR19}. \end{proof} \begin{prop}\label{E2} There is a rigid regular singular connection ${\mathcal P}_2$ of rank $2$ with symplectic monodromy group and local monodromy \[ \begin{array}{cccc} 0 & 1& 4& \infty \\ \hline \mathbf{J}(2) & (\alpha,\alpha^{-1}) & \mathbf{J}(2) & (1,1) \end{array}. \] Further, let ${\mathcal P}_2^2=[2]^*{\mathcal P}_2$. Then for the rank $7$ connection $\textup{MC}_{-1}{\mathcal P}_2^2$ we have the Hodge data \[ \begin{array}{cccc} p & h^p & \nu^p_{\infty,1/2,0} & \nu^p_{\infty,1/2,1} \\ \hline 0 & 2 & 1 & 0\\ 1& 3& 1& 1 \\ 2 & 2& 1& 1 \end{array}. \] \end{prop} \begin{proof} The connection $\mathcal{P}_2$ may be constructed using the Katz algorithm \cite[Chapter 6]{Ka96}. Since the monodromy tuple of ${ {\mathcal P}_2}$ is linearly rigid and the eigenvalues of its local monodromies are invariant under taking inverses, the monodromy group is selfdual. The transvection at $0$ implies that the monodromy group is contained in $\Sp_2$ and the Hodge length is two. The local monodromy gives $\omega^0=3$, $\omega^1=1$ and $\nu^0_{\infty,0,0}=\nu^1_{\infty,0,0}=1$.
From the parabolic rigidity of ${\mathcal P}_2$ and \cite[Proposition~2.3.3]{DS13} we infer \[ 0=-\delta^0-h^0, \quad 0=\delta^0-\delta^1-h^0-h^1+\omega^0.\] Hence we have the following Hodge data \[ \begin{array}{ccccc} p & h^p & \delta^p& \omega^p& \nu^p_{\infty,0,0} \\ \hline 0 & 1 & -1 &3 & 1 \\ 1 & 1 & 0 &1 & 1 \end{array}. \] The global Hodge data of ${\mathcal P}_2^2$ can be determined from Lemma~\ref{lem: global hodge}, which yields \[ \begin{array}{cccc} p & h^p & \delta^p& \nu^p_{\infty,0,0} \\ \hline 0 & 1 & -2 & 1 \\ 1& 1& 0& 1 \end{array}. \] The Hodge data of ${\mathcal P}_2^2 \otimes \mathcal{M}$, where $\mathcal{M}$ is the regular singular connection of rank $1$ with local monodromy $-1$ both at $\infty$ and at $t$ for any $t \in \mathbb{P}^1\setminus\{0,\pm 1,\pm 2,\infty\}$, is \[ \begin{array}{cccc} p & h^p & \delta^p &\omega^p \\ \hline 0 & 1 & -3 & 7\\ 1& 1& -1& 4 \end{array} \] by \cite[Proposition~2.3.2]{DS13}. Since the Hodge numbers of the parabolic cohomology of ${\mathcal P}_2^2 \otimes \mathcal{M}$ are the Hodge numbers of $\textup{MC}_{-1} {\mathcal P}_2^2$, we get by \cite[Theorem~4.3]{DR19} \[h^p(\textup{MC}_{-1}{\mathcal P}_2^2)=\delta^{p-1}-\delta^p-h^{p-1}-h^p+\omega^{p-1}.\] Therefore the Hodge numbers $h^p$, $p=0,1,2$, of $\textup{MC}_{-1}{\mathcal P}_2^2$ are $2,3,2$. Since the monodromy of $\textup{MC}_{-1}{\mathcal P}_2^2$ is orthogonal by \cite[Corollary~5.10]{DR00} and the local monodromy at infinity is $(-\mathbf{J}(2),-\mathbf{J}(2),-1,-1,-1)$ by the Katz algorithm \cite[Chapter~6]{Ka96}, we get $\nu^1_{\infty,1/2,1}=\nu^2_{\infty,1/2,1}=1$. Since $h^p=\nu^p_{\infty,1/2}$, one concludes $\nu^p_{\infty,1/2,0}=1$.
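The last recursion can be cross-checked arithmetically; the following is a small sketch (not part of the proof; the dictionaries simply transcribe the table for ${\mathcal P}_2^2 \otimes \mathcal{M}$ above, with all invariants set to $0$ outside the Hodge range):

```python
# invariants of P_2^2 (x) M, read off from the table above
delta = {0: -3, 1: -1}
h     = {0: 1, 1: 1}
omega = {0: 7, 1: 4}
g = lambda d, p: d.get(p, 0)   # invariants vanish outside p in {0, 1}

# h^p(MC_{-1} P_2^2) = delta^{p-1} - delta^p - h^{p-1} - h^p + omega^{p-1}
h_mc = [g(delta, p-1) - g(delta, p) - g(h, p-1) - g(h, p) + g(omega, p-1)
        for p in range(3)]
assert h_mc == [2, 3, 2]   # Hodge numbers of the rank 7 connection
```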
\end{proof} \begin{prop}\label{E4} There is a rigid regular singular connection ${\mathcal P}_4$ of rank $4$ with symplectic monodromy group and local monodromy \[ \begin{array}{ccccc} 0 & 1& t^2 &(t+1)^2 & \infty \\ \hline (\mathbf{J}(2),1,1) & (\mathbf{J}(2),1,1)& (\mathbf{J}(2),1,1) & (\mathbf{J}(2),1,1) & (-\mathbf{J}(2),-1,-1) \end{array}. \] Further, let ${\mathcal P}_4^2=[2]^*{\mathcal P}_4$. Then for the rank $7$ connection $\textup{MC}_{-1}{\mathcal P}_4^2$ we have the Hodge data \[ \begin{array}{cccc} p & h^p & \nu^p_{\infty,1/2,1} & \nu^p_{\infty,1/2,2} \\ \hline 0 & 2 & 0 & 0\\ 1& 3& 1& 0 \\ 2 & 2& 1& 1 \end{array}. \] \end{prop} \begin{proof} Since ${\mathcal P}_4=\textup{MC}_{-1} {\mathcal M}$, where ${\mathcal M}$ is the regular singular connection with local monodromy \[ \begin{array}{c c c c c c} & 0 & 1 & t^2 & (t+1)^2 & \infty \\ \hline & -1 & -1 &-1 & -1 & 1 \end{array}, \] the monodromy tuple of ${ {\mathcal P}_4}$ is linearly rigid and the monodromy group is contained in $\Sp_4$ by \cite[Corollary~5.10]{DR00}. Thus the monodromy group of $\textup{MC}_{-1}{\mathcal P}_4^2$ is orthogonal by \cite[Corollary~5.10]{DR00}. From \cite[Proposition~2.3.3]{DS13} one concludes that the Hodge length of $\textup{MC}_{-1}{\mathcal P}_4^2$ is at most one more than the number of middle convolutions used, hence at most $3$. The local monodromy at infinity is $(-\mathbf{J}(2),-\mathbf{J}(3),-\mathbf{J}(2))$ by \cite[Chapter 6]{Ka96}. This implies that the Hodge length is $3$ and $\nu^2_{\infty,1/2,2}=1$. By selfduality the Hodge numbers $h^p$, $p=0,1,2$, are $a,b,a$ with $2a+b=7$. Thus the Hodge numbers are $2,3,2$ and $\nu^1_{\infty,1/2,1}=\nu^2_{\infty,1/2,1}=1$. \end{proof} \section{Operators and Fourier transform} \label{s: operators} In the following we construct explicit differential operators corresponding to the connections we consider.
Using these operators we can relate pullbacks of the irregular rigid $G_2$-connections to pullbacks of rigid local systems. \par We use the following notation. Let $\delta$ denote the operator on $\mathbb{C}[x]$ that sends a polynomial $f$ to its derivative $\frac{d}{dx}f$. Denote by $D=\mathbb{C}[x]\langle \delta \rangle$ the ring of differential operators and let $\vartheta=x\delta$. \par Consider the Fourier transform \[FT:\mathbb{C}[x]\langle \delta \rangle \rightarrow \mathbb{C}[x]\langle \delta \rangle\] defined by $FT(x)=\delta$ and $FT(\delta)=-x$. For any holonomic $D$-module its Fourier transform is the pullback along the map $FT$. In particular, for any $P\in \mathbb{C}[x]\langle \delta \rangle$ the module $D/DP$ is holonomic and its Fourier transform is the holonomic module $D/D\,FT(P)$. Any differential operator $P\in \mathbb{C}[x]\langle \delta \rangle$ has a finite singular locus $S\subset \mathbb{P}^1$. Assume that $S=S'\cup \{0,\infty\}$. We may consider $P$ as an element of the localization $\mathbb{C}[x, x^{-1}, (x-s)^{-1}, s\in S']\langle \vartheta \rangle$. As such it determines a linear homogeneous differential equation on $\mathbb{P}^1\setminus S$. This in turn determines a connection on the trivial vector bundle on $\mathbb{P}^1\setminus S$. Given a connection $\mathcal{E}$ on $\mathbb{P}^1 \setminus S$ we say that $P$ is the operator for $\mathcal{E}$ if the connection on $\mathbb{P}^1 \setminus S$ determined by $P$ is isomorphic to $\mathcal{E}$.
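The standard operator identities used repeatedly below, e.g.\ $x^i\delta^i=(\vartheta-i+1)\cdots\vartheta$ and $(\vartheta-1)x=x\vartheta$, can be verified by acting on monomials, which span $\mathbb{C}[x]$; a small sympy sketch (ours, purely illustrative):

```python
import sympy as sp

x = sp.symbols('x')
theta = lambda g: sp.expand(x * sp.diff(g, x))   # vartheta = x * d/dx

for n in range(7):
    f = x**n
    for i in range(1, 5):
        lhs = sp.expand(x**i * sp.diff(f, x, i))  # x^i delta^i applied to f
        rhs = f
        for j in range(i):                        # (vartheta-i+1)...vartheta f
            rhs = theta(rhs) - j * rhs
        assert sp.expand(lhs - rhs) == 0
    # (vartheta - 1) x = x vartheta as operators on C[x]
    assert sp.expand(theta(x*f) - x*f - x*theta(f)) == 0
```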
\begin{prop}\label{OpEi} The operator \begin{eqnarray*} L &:= &L_0(\vartheta)+xL_1(\vartheta)+x^2L_2(\vartheta)+x^3L_3(\vartheta)\in \mathbb{C}[x][\vartheta], \end{eqnarray*} where \begin{eqnarray*} L_0(\vartheta)&=& 4 \vartheta ( \vartheta-1) ( \vartheta+1) (2 \vartheta+1-2 b) (-2 b-1+2 \vartheta) (2 b+1+2 \vartheta) (2 b-1+2 \vartheta) \\ L_1(\vartheta)&=& -12 \vartheta ( \vartheta+1) (2 \vartheta+1) (2 b+1+2 \vartheta) (2 \vartheta+1-2 b) \\ L_2(\vartheta)&=&( \vartheta+1) (36 \vartheta^2+12 a^2-16 b^2+72 \vartheta-12 a+43) \\ L_3(\vartheta)&=& -6-4 \vartheta, \end{eqnarray*} is the operator for \begin{enumerate} \item ${\mathcal E}_1$ if $b\in 1/2+\mathbb{Z}$, \item ${\mathcal E}_2$ if $b\in \mathbb{Z}$, \item ${\mathcal E}_3$ if $b\in \mathbb{R} \setminus (\mathbb{Z} \cup (1/2+\mathbb{Z}))$. \end{enumerate} \end{prop} \begin{proof} The construction of $L$ follows from the construction of the ${\mathcal E}_i$, $i=1,2,3$, in \cite[Theorem~1.1]{Ja20} and the translation to operators, cf. \cite[Chapter~4]{BR2012}, where one also uses \[ FT(\vartheta)=(-1-\vartheta),\quad FT(x)=\delta,\quad x^i\delta^i=(\vartheta-i+1) \cdots \vartheta,\quad (\vartheta-1) x=x \vartheta\] and \[ x^{\deg_x L} [1/x]^*L=\sum_{i=0}^{\deg_x L} x^{\deg_x L-i} L_i(-\vartheta). \] \end{proof} \begin{prop}\label{OpE4} The operator \begin{eqnarray*} L&=& L_0(\vartheta)+xL_1(\vartheta)+x^2L_2(\vartheta)+x^3L_3(\vartheta) \in \mathbb{C}[x][\vartheta], \end{eqnarray*} where \begin{eqnarray*} L_0(\vartheta)&=& 128 \vartheta^2 (\vartheta-2)^2 (\vartheta-1)^3\\ L_1(\vartheta)&=& -32 \vartheta^2 (t^2+t+1) (2 \vartheta-1) (\vartheta-1)^2\\ L_2(\vartheta)&=& 8 (t^2+t+1)^2 \vartheta^3\\ L_3(\vartheta)&=& -t^2 (t+1)^2 (2 \vartheta+1), \end{eqnarray*} is the operator for ${\mathcal E}_4$.
\end{prop} \begin{proof} The construction of $L$ follows from the construction of ${\mathcal E}_4$ in \cite[Theorem~1.1]{Ja20} and the translation to operators as in Proposition~\ref{OpEi}. \end{proof} \begin{prop}\label{P:b<>0} Let \begin{eqnarray*} P &:= &P_0(\vartheta)+xP_1(\vartheta)+x^2P_2(\vartheta)+x^3P_3(\vartheta)\in \mathbb{C}[x][\vartheta], \quad b\not \in \mathbb{Z},\\ P_0(\vartheta)&=& -4096 \vartheta^2 (\vartheta-1) (\vartheta+1) \\ P_1(\vartheta)&=& 1024 \vartheta (\vartheta+1) (9 \vartheta^2+3 a^2-4 b^2+9 \vartheta-3 a+4) \\ P_2(\vartheta)&=& -6144 (\vartheta+1)^2 (\vartheta+1+b) (\vartheta+1-b) \\ P_3(\vartheta)&=&1024 (\vartheta+1+b) (\vartheta+1-b) (\vartheta+2+b) (\vartheta+2-b), \end{eqnarray*} be the symplectic operator of degree $4$ with Riemann scheme \[ \begin{array}{cccc} 0 & 1& 4& \infty \\ \hline -1 & 0 & 0& 2-b \\ 0 & 1 & 1& 1-b \\ 0 & a& 1& 1+b \\ 1 & 1-a &2 & 2+b \end{array}. \] Then $P$ is the operator for $\mathcal{P}_{13}$ in Proposition~\ref{E1E3}. \end{prop} \begin{proof} The construction of $P$ follows from the construction of ${\mathcal P}_{13}$ in Proposition~\ref{E1E3} and the translation to operators, cf. \cite[Chapter~4]{BR2012}. Hence the claim follows. \end{proof} \begin{prop}\label{P:b=0} Let \begin{eqnarray*} P &:= &P_0(\vartheta)+xP_1(\vartheta)+x^2P_2(\vartheta)+x^3P_3(\vartheta) \in \mathbb{C}[x][\vartheta],\quad b\in \mathbb{Z}, \end{eqnarray*} \begin{eqnarray*} P_0(\vartheta)&=&-4 \vartheta^2\\ P_1(\vartheta)&=& 9 \vartheta^2+3 a^2+9 \vartheta-3 a+4 \\ P_2(\vartheta)&=& -6 (\vartheta+1)^2\\ P_3(\vartheta)&=& (\vartheta+2) (\vartheta+1), \end{eqnarray*} with Riemann scheme \[ \begin{array}{cccc} 0 & 1& 4& \infty \\ \hline 0 & -a& 0& 1\\ 0 & a-1& 0 &2 \end{array}. \] Then $P$ is the operator for $\mathcal{P}_2$ in Proposition~\ref{E2}.
\end{prop} \begin{proof} The construction of $P$ follows from the construction of ${{\mathcal P}_2}$ in Proposition~\ref{E2} and the translation to operators, cf. \cite[Chapter~4]{BR2012}. Hence the claim follows. \end{proof} \begin{prop}\label{P:E4} Let \begin{eqnarray*} P&=& P_0(\vartheta)+xP_1(\vartheta)+x^2P_2(\vartheta)+x^3P_3(\vartheta) \in \mathbb{C}[x][\vartheta], \end{eqnarray*} \begin{eqnarray*} P_0(\vartheta)&=& -16 \vartheta^2 t^2 (t+1)^2 (\vartheta-1) (\vartheta+1)\\ P_1(\vartheta)&=& 4 \vartheta (t^2+t+1)^2 (\vartheta+1) (2 \vartheta+1)^2\\ P_2(\vartheta)&=&-8 (t^2+t+1) (2 \vartheta+3) (2 \vartheta+1) (\vartheta+1)^2 \\ P_3(\vartheta)&=&(2 \vartheta+5) (2 \vartheta+1) (2 \vartheta+3)^2, \end{eqnarray*} be the symplectic operator of degree $4$ with Riemann scheme \[ \begin{array}{ccccc} 0 & 1& t^2& (t+1)^2 &\infty \\ \hline -1 & 0 & 0&0& 1/2 \\ 0 & 1 & 1& 1& 3/2 \\ 0 & 1& 1& 1& 3/2 \\ 1 & 2 &2 & 2& 5/2 \end{array}. \] Then $P$ is the operator corresponding to $\mathcal{P}_4$ in Proposition~\ref{E4}. \end{prop} \begin{proof} The construction of $P$ follows from the construction of ${\mathcal P}_4$ in Proposition~\ref{E4} and the translation to operators, cf. \cite[Chapter~4]{BR2012}. Hence the claim follows. \end{proof} \begin{remark} In the following we make use of the following transformations of operators. Let \begin{eqnarray*} P &=& \sum_{i=0}^{\deg_x(P)} x^{i}P_i(\vartheta). \end{eqnarray*} Then \begin{eqnarray*} [2]^*P&=&\sum_{i=0}^{\deg_x(P)} x^{2i}P_i(\vartheta/2) \end{eqnarray*} and \begin{eqnarray*} FT(P)&=&\sum_{i=0}^{\deg_x(P)} \delta^{i} P_i(-1-\vartheta), \\ x^{\deg_x P} FT(P) &=& \sum_{i=0}^{\deg_x(P)} x^{\deg_x(P)-i} (\vartheta-i+1) \cdots \vartheta \cdot P_i(-1-\vartheta).
\end{eqnarray*} Here we have used that \[ FT(\vartheta)=(-1-\vartheta),\quad FT(x)=\delta,\quad x^i\delta^i=(\vartheta-i+1) \cdots \vartheta,\quad (\vartheta-1) x=x \vartheta.\] \end{remark} \begin{cor} \label{MC P132} Let $\mathcal{P}':= \textup{MC}_\chi\mathcal{P}_{13}^2$ for $\chi=\exp(-2\pi i \mu )$, $\mu \in [0,1)$, $1-\mu \sim 0$, be as in Proposition~\ref{E1E3conv}. The corresponding operator is \begin{eqnarray*} P' &=& \sum_{i=0}^3 x^{2i} P_{2i}(\vartheta), \end{eqnarray*} where \begin{eqnarray*} P_0(\vartheta) &=& -4 \vartheta (\vartheta-5) (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-\mu) \\ P_2(\vartheta) &=& \vartheta (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-\mu+1) (9 \vartheta^2-18 \vartheta \mu+12 a^2-16 b^2+9 \mu^2+18 \vartheta-12 a-18 \mu+16)\\ P_4(\vartheta) &=& -6 \vartheta (\vartheta-1) (\vartheta-\mu+3) (\vartheta-\mu+2) (\vartheta-\mu+1) (-\mu+2+2 b+\vartheta) (-\mu+2-2 b+\vartheta) \\ P_6(\vartheta) &=& (\vartheta-\mu+1) (-\mu+5+\vartheta) (\vartheta-\mu+3) (-\mu+2+2 b+\vartheta) (- \mu+2-2 b+\vartheta) (-\mu+4+2 b+\vartheta) (-\mu+4-2 b+\vartheta).
\end{eqnarray*} Its Riemann scheme is \[ \begin{array}{cccccc} -2 & -1 & 0 & 1 &2 & \infty \\ \hline \mu & \mu-a & \mu & \mu-a & \mu & 2b+4-\mu \\ 5 & \mu+a-1 & 5 & \mu+a-1 & 5 & 2b+2-\mu \\ 4 & 4 & 4 & 4 & 4 & 5-\mu \\ 3 & 3 & 3 & 3 & 3 & 3-\mu \\ 2 & 2 & 2 & 2 & 2 & 1-\mu \\ 1 & 1 & 1& 1 & 1 & -2b+2-\mu \\ 0 & 0 & 0 & 0 & 0 & -2b+4-\mu \\ \end{array}. \] Moreover \begin{eqnarray*} FT(P') &=& \vartheta (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-5) H, \\ H &=& \sum_{i=0}^3 x^{2i} H_{2i}(\vartheta), \end{eqnarray*} where \begin{eqnarray*} H_0(\vartheta) &=& -(\mu-4+\vartheta) (\mu+\vartheta) (\mu-2+\vartheta) (\mu-3-2 b+\vartheta) (\mu-3+2 b+\vartheta) (\mu-1-2 b+\vartheta) (\mu-1+2 b+\vartheta) \\ H_2 (\vartheta)&=& 6 (\mu-2+\vartheta) (\mu+\vartheta) (\mu-1+\vartheta) (\mu-1-2 b+\vartheta) (\mu-1+2 b+\vartheta) \\ H_4(\vartheta) &=& -(\mu+\vartheta) (9 \vartheta^2+18 \vartheta \mu+12 a^2-16 b^2+9 \mu^2-12 a+7) \\ H_6 (\vartheta)&=& 4 (\vartheta+\mu+1). \end{eqnarray*} Up to a twist by $x^{\mu+1}$, this is the operator $[2]^*L$ from Proposition~\ref{OpEi}. \end{cor} \begin{proof} We start with the operator $P$ in Proposition~\ref{P:b<>0}. The construction of $P'$ follows from the construction of ${\mathcal P}'$ in Proposition~\ref{E1E3conv} and the translation to operators, cf. \cite[Chapter~4]{BR2012}. The claim follows by a straightforward calculation using the above remark. \end{proof} \begin{cor} \label{MC P22} Let $\mathcal{P}':= \textup{MC}_{-1}\mathcal{P}_{2}^2$ be as in Proposition~\ref{E2}.
Then the corresponding operator is \begin{eqnarray*} P' &=& \sum_{i=0}^3 x^{2i} P_{2i}(\vartheta), \end{eqnarray*} where \begin{eqnarray*} P_0(\vartheta) & =& 256 \vartheta ( \vartheta-1) ( \vartheta-2) ( \vartheta-3) ( \vartheta-4) ( \vartheta-5) (2 \vartheta-1)\\ P_2(\vartheta) & =& -16 \vartheta ( \vartheta-1) ( \vartheta-2) ( \vartheta-3) (2 \vartheta+1) (36 \vartheta^2+48 a^2+36 \vartheta-48 a+37) \\ P_4(\vartheta) & =& 24 \vartheta (2 \vartheta+5) ( \vartheta-1) (2 \vartheta+1) (2 \vartheta+3)^3 \\ P_6(\vartheta) & =& -(2 \vartheta+5) (2 \vartheta+1) (2 \vartheta+9) (2 \vartheta+7)^2 (2 \vartheta+3)^2. \end{eqnarray*} Its Riemann scheme is \[ \begin{array}{cccccc} -2 & -1 & 0 & 1 &2 & \infty \\ \hline 1/2 & 1/2+a & 1/2 & 1/2+a & 1/2 & 9/2 \\ 5 & -1/2-a & 5 & -1/2-a & 5 & 7/2 \\ 4 & 4 & 4 & 4 & 4 & 7/2 \\ 3 & 3 & 3 & 3 & 3 & 5/2 \\ 2 & 2 & 2 & 2 & 2 & 3/2 \\ 1 & 1 & 1& 1 & 1 & 3/2 \\ 0 & 0 & 0 & 0 & 0 & 1/2 \\ \end{array}. \] Thus $FT(P')$ is \begin{eqnarray*} FT(P') &=& \vartheta (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-5) H, \\ H &=& \sum_{i=0}^3 x^{2i} H_{2i}(\vartheta), \end{eqnarray*} where \begin{eqnarray*} H_0(\vartheta) &=& (2 \vartheta-3) (2 \vartheta+1) (2 \vartheta-7) (2 \vartheta-5)^2 (2 \vartheta-1)^2 \\ H_2(\vartheta) &=&-24 (2 \vartheta-3) (2 \vartheta+1) (2 \vartheta-1)^3\\ H_4(\vartheta) &=&16 (2 \vartheta+1) (36 \vartheta^2+48 a^2+36 \vartheta-48 a+37)\\ H_6(\vartheta) &=& -256 (2 \vartheta+3). \end{eqnarray*} Up to a twist by $x^{3/2}$, this is the operator $[2]^*L$ from Proposition~\ref{OpEi}. \end{cor} \begin{proof} The proof is similar to the above one. \end{proof} \begin{cor} \label{MC P42} Let $\mathcal{P}':= \textup{MC}_{-1}\mathcal{P}_{4}^2$ be as in Proposition~\ref{E4}.
Then the corresponding operator is \begin{eqnarray*} P' &=& \sum_{i=0}^3 x^{2i} P_{2i}(\vartheta), \end{eqnarray*} where \begin{eqnarray*} P_0(\vartheta) &=& -64 \vartheta t^2 (t+1)^2 (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-5) (2 \vartheta-5) \\ P_2(\vartheta) &=& 16 \vartheta (t^2+t+1)^2 (\vartheta-1) (\vartheta-2) (\vartheta-3) (2 \vartheta-3)^3 \\ P_4(\vartheta) &=& -8 \vartheta (t^2+t+1) (2 \vartheta-1) (\vartheta-1) (2 \vartheta+1)^2 (2 \vartheta-3)^2 \\ P_6(\vartheta) &=& (2 \vartheta+5)^2 (2 \vartheta-3)^2 (2 \vartheta+1)^3. \end{eqnarray*} Its Riemann scheme is \[ \begin{array}{ccccc} \pm (t+1) & \pm t & \pm 1 & 0 & \infty \\ \hline 5 & 5 & 5 & 5 & 5/2 \\ 4 & 4 & 4 & 4 & 5/2 \\ 3 & 3 & 3 & 3 & 1/2 \\ 5/2 & 5/2 & 5/2 & 5/2 & 1/2 \\ 2 & 2 & 2 & 2 & 1/2 \\ 1 & 1 & 1 & 1 & -3/2 \\ 0 & 0 & 0 & 0 & -3/2 \\ \end{array} \] Thus $FT(P')$ is \begin{eqnarray*} FT(P') &=& \vartheta (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-5) H, \\ H &=& \sum_{i=0}^3 x^{2i} H_{2i}(\vartheta), \end{eqnarray*} where \begin{eqnarray*} H_0(\vartheta) &=& -(2 \vartheta+5)^2 (2 \vartheta-3)^2 (2 \vartheta+1)^3 \\ H_2(\vartheta) &=& 8 (t^2+t+1) (2 \vartheta+3) (2 \vartheta+1)^2 (2 \vartheta+5)^2 \\ H_4(\vartheta) &=& -16 (t^2+t+1)^2 (2 \vartheta+5)^3 \\ H_6(\vartheta) &=& 64 t^2 (t+1)^2 (2 \vartheta+7). \end{eqnarray*} This is, up to a twist with $x^{5/2}$, the operator $[2^*]P$ in Proposition~\ref{OpE4}. \end{cor} \begin{proof} The proof is again similar to the above one. \end{proof} Denote by $\mathcal{D}$ the sheaf of differential operators on $\mathbb{A}^1$ and by $\mathcal{D}_{\mathbb{G}_m}$ its restriction to $\mathbb{G}_m$. Let $M_{13}:=\mathcal{D}_{\mathbb{G}_m}/\mathcal{D}_{\mathbb{G}_m}P'$ for $P'$ the operator in Corollary \ref{MC P132}. Similarly define $M_2$ using Corollary \ref{MC P22} and $M_4$ using Corollary \ref{MC P42}.
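For the Fourier transform computations above, it may help to recall the standard rules on the Weyl algebra, sketched here under the convention $FT(x)=\partial$, $FT(\partial)=-x$ (sign conventions vary in the literature): since $FT$ is an algebra automorphism, \[ FT(\vartheta)=FT(x\partial)=-\partial x=-(x\partial+1)=-\vartheta-1, \] so a summand $x^{2i}P_{2i}(\vartheta)$ of an operator written in the above form is sent to $\partial^{2i}P_{2i}(-\vartheta-1)$, which is how the polynomials $H_{2i}$ arise from the $P_{2i}$ term by term.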
Denoting by $S$ the singular locus of $P'$, the restriction of $M_{13}$ to $\mathbb{P}^1\setminus S$ is isomorphic to $\textup{MC}_\chi(\mathcal{P}_{13}^2)$, and similarly in the other cases. \begin{cor} \label{remstabi} Let $j:\mathbb{G}_m\hookrightarrow \mathbb{A}^1$ be the open inclusion and denote by $K_\chi$ (and $K_{-1}$) the Kummer sheaves with monodromy $\chi$ (respectively $-1$) at $0$. We have \begin{align*} FT(j_{!*}M_{13})&\cong j_{!*}([2]^*\mathcal{E}_1\otimes K_\chi), \quad b\in 1/2+\mathbb{Z},\\ FT(j_{!*}M_{13})&\cong j_{!*}([2]^*\mathcal{E}_3\otimes K_\chi), \quad b\in \mathbb{R}\setminus(\mathbb{Z}\cup(1/2+\mathbb{Z})),\\ FT(j_{!*}M_2) &\cong j_{!*}([2]^*\mathcal{E}_2\otimes K_{-1}), \\ FT(j_{!*}M_4) &\cong j_{!*}([2]^*\mathcal{E}_4\otimes K_{-1}). \end{align*} \end{cor} \begin{proof} Again the proof is similar to the proof of Theorem 6.2.1 in \cite{Ka90}. We will only prove the first case, as the others are similar. None of the local exponents of $P'$ are contained in $\mathbb{Z}_{<0}$. We may therefore apply Lemma 2.9.5 in \cite{Ka90} to find that \[\mathcal{D}/\mathcal{D} P' \cong j_!\mathcal{D}_{\mathbb{G}_m}/\mathcal{D}_{\mathbb{G}_m}P'.\] Therefore we have a short exact sequence \[0\rightarrow \delta_0 \otimes \textup{Soln}_0^\vee \rightarrow \mathcal{D}/\mathcal{D} P' \rightarrow j_{!*} \mathcal{D}_{\mathbb{G}_m}/\mathcal{D}_{\mathbb{G}_m}P'\rightarrow 0, \] where $\textup{Soln}_0^\vee$ denotes the dual space of the formal local solutions at $0$ of $\mathcal{D}/\mathcal{D} P'$. Applying the Fourier transform we obtain the short exact sequence \[0\rightarrow FT( \delta_0 \otimes \textup{Soln}_0^\vee) \rightarrow \mathcal{D}/\mathcal{D} FT(P') \rightarrow FT(j_{!*} \mathcal{D}_{\mathbb{G}_m}/\mathcal{D}_{\mathbb{G}_m}P')\rightarrow 0.
\] By Corollary \ref{MC P132} we have $FT(P')=\vartheta (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-5) H$. Note that \[FT( \delta_0 \otimes \textup{Soln}_0^\vee) = \mathcal{D} / \mathcal{D} (\vartheta (\vartheta-1) (\vartheta-2) (\vartheta-3) (\vartheta-4) (\vartheta-5)).\] Therefore we have \[FT(j_{!*} \mathcal{D}_{\mathbb{G}_m}/\mathcal{D}_{\mathbb{G}_m}P')\cong \mathcal{D} / \mathcal{D} H. \] Recall from Corollary \ref{MC P132} that $H=\sum_{i=0}^3 x^{2i} H_{2i}(\vartheta)$. Then $H_0(X)$, considered as a polynomial in $X$, is the indicial polynomial of $H$ at $0$. Since it has no integer roots, we may apply Lemma 2.9.4 in \cite{Ka90} to conclude that \[\mathcal{D}/ \mathcal{D} H \cong j_{!*}j^* \mathcal{D}_{\mathbb{G}_m}/\mathcal{D}_{\mathbb{G}_m} H \cong j_{!*}([2]^*\mathcal{E}_1\otimes K_\chi). \] \end{proof} Combining Corollary \ref{remstabi} with the computations of Hodge data for $\textup{MC}_\chi(\mathcal{P}_{13}^2)$, $\textup{MC}_\chi(\mathcal{P}_{2}^2)$ and $\textup{MC}_\chi(\mathcal{P}_{4}^2)$, we now compute the irregular Hodge filtration of the rigid irregular $G_2$-connections. \begin{proof}[{Proof of Theorem \ref{thm: irreg hodge}}] By \cite[Theorem 0.3]{Sa18}, pullback by a smooth morphism does not change the ranks and jumping indices of the irregular Hodge filtration. Additionally, by \cite[Lemma 2.3]{SY19}, twisting with a local system of rank one only changes the indices of the Hodge filtration by a global shift. We can therefore compute the irregular Hodge filtration of $\mathcal{E}_{1}$ by computing it for the twist of $[2]^*\mathcal{E}_{11}$. Note that we have arranged the local system $\textup{MC}_\chi(\mathcal{P}_{13}^2)$ (where $b\in 1/2+\mathbb{Z}$) such that the local monodromy at $\infty$ does not have an eigenvalue equal to $1$.
We can therefore apply the stationary phase formula \cite[Section 5, (7)]{SY19} to conclude that \[\dim \textup{gr}_{F_{\textup{irr}}}^{\alpha+p} (\mathcal{E}_1)=\dim \textup{gr}_{F_{\textup{irr}}}^{\alpha+p} ([2]^*\mathcal{E}_1\otimes K_\chi)= \nu_{\infty, \alpha}^p(\textup{MC}_\chi(\mathcal{P}_{13}^2))= \sum_{\ell \ge 0} \sum_{k=0}^\ell \nu_{\infty, \alpha, \ell}^{p+k}(\textup{MC}_\chi(\mathcal{P}_{13}^2)). \] The other cases work the same, and a straightforward calculation proves Theorem \ref{thm: irreg hodge}. \end{proof} \end{document}
package tsm1_test import ( "archive/tar" "bytes" "fmt" "io/ioutil" "math/rand" "os" "path/filepath" "reflect" "runtime" "strings" "testing" "time" "github.com/influxdata/influxdb/influxql" "github.com/influxdata/influxdb/models" "github.com/influxdata/influxdb/pkg/deep" "github.com/influxdata/influxdb/tsdb" "github.com/influxdata/influxdb/tsdb/engine/tsm1" ) // Ensure engine can load the metadata index after reopening. func TestEngine_LoadMetadataIndex(t *testing.T) { e := MustOpenEngine() defer e.Close() if err := e.WritePointsString(`cpu,host=A value=1.1 1000000000`); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } // Ensure we can close and load index from the WAL if err := e.Reopen(); err != nil { t.Fatal(err) } // Load metadata index. index := tsdb.NewDatabaseIndex("db") if err := e.LoadMetadataIndex(1, index); err != nil { t.Fatal(err) } // Verify index is correct. if m := index.Measurement("cpu"); m == nil { t.Fatal("measurement not found") } else if s := m.SeriesByID(1); s.Key != "cpu,host=A" || !reflect.DeepEqual(s.Tags, map[string]string{"host": "A"}) { t.Fatalf("unexpected series: %q / %#v", s.Key, s.Tags) } // write the snapshot, ensure we can close and load index from TSM if err := e.WriteSnapshot(); err != nil { t.Fatalf("error writing snapshot: %s", err.Error()) } // Ensure we can close and load index from the WAL if err := e.Reopen(); err != nil { t.Fatal(err) } // Load metadata index. index = tsdb.NewDatabaseIndex("db") if err := e.LoadMetadataIndex(1, index); err != nil { t.Fatal(err) } // Verify index is correct. 
if m := index.Measurement("cpu"); m == nil { t.Fatal("measurement not found") } else if s := m.SeriesByID(1); s.Key != "cpu,host=A" || !reflect.DeepEqual(s.Tags, map[string]string{"host": "A"}) { t.Fatalf("unexpected series: %q / %#v", s.Key, s.Tags) } // Write a new point and ensure we can close and load index from TSM and WAL if err := e.WritePoints([]models.Point{ MustParsePointString("cpu,host=B value=1.2 2000000000"), }); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } // Ensure we can close and load index from the TSM & WAL if err := e.Reopen(); err != nil { t.Fatal(err) } // Load metadata index. index = tsdb.NewDatabaseIndex("db") if err := e.LoadMetadataIndex(1, index); err != nil { t.Fatal(err) } // Verify index is correct. if m := index.Measurement("cpu"); m == nil { t.Fatal("measurement not found") } else if s := m.SeriesByID(1); s.Key != "cpu,host=A" || !reflect.DeepEqual(s.Tags, map[string]string{"host": "A"}) { t.Fatalf("unexpected series: %q / %#v", s.Key, s.Tags) } else if s := m.SeriesByID(2); s.Key != "cpu,host=B" || !reflect.DeepEqual(s.Tags, map[string]string{"host": "B"}) { t.Fatalf("unexpected series: %q / %#v", s.Key, s.Tags) } } // Ensure that deletes only sent to the WAL will clear out the data from the cache on restart func TestEngine_DeleteWALLoadMetadata(t *testing.T) { e := MustOpenEngine() defer e.Close() if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=B value=1.2 2000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } // Remove series. if err := e.DeleteSeries([]string{"cpu,host=A"}); err != nil { t.Fatalf("failed to delete series: %s", err.Error()) } // Ensure we can close and load index from the WAL if err := e.Reopen(); err != nil { t.Fatal(err) } if exp, got := 0, len(e.Cache.Values(tsm1.SeriesFieldKey("cpu,host=A", "value"))); exp != got { t.Fatalf("unexpected number of values: got: %d. 
exp: %d", got, exp) } if exp, got := 1, len(e.Cache.Values(tsm1.SeriesFieldKey("cpu,host=B", "value"))); exp != got { t.Fatalf("unexpected number of values: got: %d. exp: %d", got, exp) } } // Ensure that the engine will backup any TSM files created since the passed in time func TestEngine_Backup(t *testing.T) { // Generate temporary file. f, _ := ioutil.TempFile("", "tsm") f.Close() os.Remove(f.Name()) walPath := filepath.Join(f.Name(), "wal") os.MkdirAll(walPath, 0777) defer os.RemoveAll(f.Name()) // Create a few points. p1 := MustParsePointString("cpu,host=A value=1.1 1000000000") p2 := MustParsePointString("cpu,host=B value=1.2 2000000000") p3 := MustParsePointString("cpu,host=C value=1.3 3000000000") // Write those points to the engine. e := tsm1.NewEngine(f.Name(), walPath, tsdb.NewEngineOptions()).(*tsm1.Engine) // mock the planner so compactions don't run during the test e.CompactionPlan = &mockPlanner{} if err := e.Open(); err != nil { t.Fatalf("failed to open tsm1 engine: %s", err.Error()) } if err := e.WritePoints([]models.Point{p1}); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } if err := e.WriteSnapshot(); err != nil { t.Fatalf("failed to snapshot: %s", err.Error()) } if err := e.WritePoints([]models.Point{p2}); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } b := bytes.NewBuffer(nil) if err := e.Backup(b, "", time.Unix(0, 0)); err != nil { t.Fatalf("failed to backup: %s", err.Error()) } tr := tar.NewReader(b) if len(e.FileStore.Files()) != 2 { t.Fatalf("file count wrong: exp: %d, got: %d", 2, len(e.FileStore.Files())) } for _, f := range e.FileStore.Files() { th, err := tr.Next() if err != nil { t.Fatalf("failed reading header: %s", err.Error()) } if !strings.Contains(f.Path(), th.Name) || th.Name == "" { t.Fatalf("file name doesn't match:\n\tgot: %s\n\texp: %s", th.Name, f.Path()) } } lastBackup := time.Now() // we have to sleep for a second because last modified times only have second level precision. 
// so this test won't work properly unless the file is at least a second past the last one time.Sleep(time.Second) if err := e.WritePoints([]models.Point{p3}); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } b = bytes.NewBuffer(nil) if err := e.Backup(b, "", lastBackup); err != nil { t.Fatalf("failed to backup: %s", err.Error()) } tr = tar.NewReader(b) th, err := tr.Next() if err != nil { t.Fatalf("error getting next tar header: %s", err.Error()) } mostRecentFile := e.FileStore.Files()[e.FileStore.Count()-1].Path() if !strings.Contains(mostRecentFile, th.Name) || th.Name == "" { t.Fatalf("file name doesn't match:\n\tgot: %s\n\texp: %s", th.Name, mostRecentFile) } } // Ensure engine can create an ascending iterator for cached values. func TestEngine_CreateIterator_Cache_Ascending(t *testing.T) { t.Parallel() e := MustOpenEngine() defer e.Close() e.Index().CreateMeasurementIndexIfNotExists("cpu") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=A value=1.2 2000000000`, `cpu,host=A value=1.3 3000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } itr, err := e.CreateIterator(influxql.IteratorOptions{ Expr: influxql.MustParseExpr(`value`), Dimensions: []string{"host"}, Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Ascending: true, }) if err != nil { t.Fatal(err) } fitr := itr.(influxql.FloatIterator) if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(0): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 1000000000, Value: 1.1}) { t.Fatalf("unexpected point(0): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(1): %v", err) } else if 
!reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 2000000000, Value: 1.2}) { t.Fatalf("unexpected point(1): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(2): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 3000000000, Value: 1.3}) { t.Fatalf("unexpected point(2): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("expected eof, got error: %v", err) } else if p != nil { t.Fatalf("expected eof: %v", p) } } // Ensure engine can create a descending iterator for cached values. func TestEngine_CreateIterator_Cache_Descending(t *testing.T) { t.Parallel() e := MustOpenEngine() defer e.Close() e.Index().CreateMeasurementIndexIfNotExists("cpu") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=A value=1.2 2000000000`, `cpu,host=A value=1.3 3000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } itr, err := e.CreateIterator(influxql.IteratorOptions{ Expr: influxql.MustParseExpr(`value`), Dimensions: []string{"host"}, Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Ascending: false, }) if err != nil { t.Fatal(err) } fitr := itr.(influxql.FloatIterator) if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(0): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 3000000000, Value: 1.3}) { t.Fatalf("unexpected point(0): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(1): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 2000000000, Value: 1.2}) { t.Fatalf("unexpected point(1): %v",
p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(2): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 1000000000, Value: 1.1}) { t.Fatalf("unexpected point(2): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("expected eof, got error: %v", err) } else if p != nil { t.Fatalf("expected eof: %v", p) } } // Ensure engine can create an ascending iterator for tsm values. func TestEngine_CreateIterator_TSM_Ascending(t *testing.T) { t.Parallel() e := MustOpenEngine() defer e.Close() e.Index().CreateMeasurementIndexIfNotExists("cpu") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=A value=1.2 2000000000`, `cpu,host=A value=1.3 3000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } e.MustWriteSnapshot() itr, err := e.CreateIterator(influxql.IteratorOptions{ Expr: influxql.MustParseExpr(`value`), Dimensions: []string{"host"}, Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Ascending: true, }) if err != nil { t.Fatal(err) } fitr := itr.(influxql.FloatIterator) if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(0): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 1000000000, Value: 1.1}) { t.Fatalf("unexpected point(0): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(1): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 2000000000, Value: 1.2}) { t.Fatalf("unexpected point(1): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(2): %v", err) } else if !reflect.DeepEqual(p, 
&influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 3000000000, Value: 1.3}) { t.Fatalf("unexpected point(2): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("expected eof, got error: %v", err) } else if p != nil { t.Fatalf("expected eof: %v", p) } } // Ensure engine can create a descending iterator for tsm values. func TestEngine_CreateIterator_TSM_Descending(t *testing.T) { t.Parallel() e := MustOpenEngine() defer e.Close() e.Index().CreateMeasurementIndexIfNotExists("cpu") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=A value=1.2 2000000000`, `cpu,host=A value=1.3 3000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } e.MustWriteSnapshot() itr, err := e.CreateIterator(influxql.IteratorOptions{ Expr: influxql.MustParseExpr(`value`), Dimensions: []string{"host"}, Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Ascending: false, }) if err != nil { t.Fatal(err) } fitr := itr.(influxql.FloatIterator) if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(0): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 3000000000, Value: 1.3}) { t.Fatalf("unexpected point(0): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(1): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 2000000000, Value: 1.2}) { t.Fatalf("unexpected point(1): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(2): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 1000000000, Value: 1.1}) { t.Fatalf("unexpected point(2): %v", p)
} if p, err := fitr.Next(); err != nil { t.Fatalf("expected eof, got error: %v", err) } else if p != nil { t.Fatalf("expected eof: %v", p) } } // Ensure engine can create an iterator with auxiliary fields. func TestEngine_CreateIterator_Aux(t *testing.T) { t.Parallel() e := MustOpenEngine() defer e.Close() e.Index().CreateMeasurementIndexIfNotExists("cpu") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.MeasurementFields("cpu").CreateFieldIfNotExists("F", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=A F=100 1000000000`, `cpu,host=A value=1.2 2000000000`, `cpu,host=A value=1.3 3000000000`, `cpu,host=A F=200 3000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } itr, err := e.CreateIterator(influxql.IteratorOptions{ Expr: influxql.MustParseExpr(`value`), Aux: []influxql.VarRef{{Val: "F"}}, Dimensions: []string{"host"}, Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Ascending: true, }) if err != nil { t.Fatal(err) } fitr := itr.(influxql.FloatIterator) if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(0): %v", err) } else if !deep.Equal(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 1000000000, Value: 1.1, Aux: []interface{}{float64(100)}}) { t.Fatalf("unexpected point(0): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(1): %v", err) } else if !deep.Equal(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 2000000000, Value: 1.2, Aux: []interface{}{(*float64)(nil)}}) { t.Fatalf("unexpected point(1): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(2): %v", err) } else if !deep.Equal(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 3000000000,
Value: 1.3, Aux: []interface{}{float64(200)}}) { t.Fatalf("unexpected point(2): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("expected eof, got error: %v", err) } else if p != nil { t.Fatalf("expected eof: %v", p) } } // Ensure engine can create an iterator with a condition. func TestEngine_CreateIterator_Condition(t *testing.T) { t.Parallel() e := MustOpenEngine() defer e.Close() e.Index().CreateMeasurementIndexIfNotExists("cpu") e.Index().Measurement("cpu").SetFieldName("X") e.Index().Measurement("cpu").SetFieldName("Y") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.MeasurementFields("cpu").CreateFieldIfNotExists("X", influxql.Float, false) e.MeasurementFields("cpu").CreateFieldIfNotExists("Y", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) if err := e.WritePointsString( `cpu,host=A value=1.1 1000000000`, `cpu,host=A X=10 1000000000`, `cpu,host=A Y=100 1000000000`, `cpu,host=A value=1.2 2000000000`, `cpu,host=A value=1.3 3000000000`, `cpu,host=A X=20 3000000000`, `cpu,host=A Y=200 3000000000`, ); err != nil { t.Fatalf("failed to write points: %s", err.Error()) } itr, err := e.CreateIterator(influxql.IteratorOptions{ Expr: influxql.MustParseExpr(`value`), Dimensions: []string{"host"}, Condition: influxql.MustParseExpr(`X = 10 OR Y > 150`), Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Ascending: true, }) if err != nil { t.Fatal(err) } fitr := itr.(influxql.FloatIterator) if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(0): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu", Tags: ParseTags("host=A"), Time: 1000000000, Value: 1.1}) { t.Fatalf("unexpected point(0): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("unexpected error(1): %v", err) } else if !reflect.DeepEqual(p, &influxql.FloatPoint{Name: "cpu",
Tags: ParseTags("host=A"), Time: 3000000000, Value: 1.3}) { t.Fatalf("unexpected point(1): %v", p) } if p, err := fitr.Next(); err != nil { t.Fatalf("expected eof, got error: %v", err) } else if p != nil { t.Fatalf("expected eof: %v", p) } } func BenchmarkEngine_CreateIterator_Count_1K(b *testing.B) { benchmarkEngineCreateIteratorCount(b, 1000) } func BenchmarkEngine_CreateIterator_Count_100K(b *testing.B) { benchmarkEngineCreateIteratorCount(b, 100000) } func BenchmarkEngine_CreateIterator_Count_1M(b *testing.B) { benchmarkEngineCreateIteratorCount(b, 1000000) } func benchmarkEngineCreateIteratorCount(b *testing.B, pointN int) { benchmarkIterator(b, influxql.IteratorOptions{ Expr: influxql.MustParseExpr("count(value)"), Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, Ascending: true, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, }, pointN) } func BenchmarkEngine_CreateIterator_Limit_1K(b *testing.B) { benchmarkEngineCreateIteratorLimit(b, 1000) } func BenchmarkEngine_CreateIterator_Limit_100K(b *testing.B) { benchmarkEngineCreateIteratorLimit(b, 100000) } func BenchmarkEngine_CreateIterator_Limit_1M(b *testing.B) { benchmarkEngineCreateIteratorLimit(b, 1000000) } func benchmarkEngineCreateIteratorLimit(b *testing.B, pointN int) { benchmarkIterator(b, influxql.IteratorOptions{ Expr: influxql.MustParseExpr("value"), Sources: []influxql.Source{&influxql.Measurement{Name: "cpu"}}, Dimensions: []string{"host"}, Ascending: true, StartTime: influxql.MinTime, EndTime: influxql.MaxTime, Limit: 10, }, pointN) } func benchmarkIterator(b *testing.B, opt influxql.IteratorOptions, pointN int) { e := MustInitBenchmarkEngine(pointN) b.ResetTimer() b.ReportAllocs() for i := 0; i < b.N; i++ { itr, err := e.CreateIterator(opt) if err != nil { b.Fatal(err) } influxql.DrainIterator(itr) } } var benchmark struct { Engine *Engine PointN int } var hostNames = []string{"A", "B", "C", "D", "E", "F", "G", "H", "I", "J"} // MustInitBenchmarkEngine creates a new 
engine and fills it with points. // Reuses previous engine if the same parameters were used. func MustInitBenchmarkEngine(pointN int) *Engine { // Reuse engine, if available. if benchmark.Engine != nil { if benchmark.PointN == pointN { return benchmark.Engine } // Otherwise close and remove it. benchmark.Engine.Close() benchmark.Engine = nil } const batchSize = 1000 if pointN%batchSize != 0 { panic(fmt.Sprintf("point count (%d) must be a multiple of batch size (%d)", pointN, batchSize)) } e := MustOpenEngine() // Initialize metadata. e.Index().CreateMeasurementIndexIfNotExists("cpu") e.MeasurementFields("cpu").CreateFieldIfNotExists("value", influxql.Float, false) e.Index().CreateSeriesIndexIfNotExists("cpu", tsdb.NewSeries("cpu,host=A", map[string]string{"host": "A"})) // Generate time ascending points with jittered time & value. rand := rand.New(rand.NewSource(0)) for i := 0; i < pointN; i += batchSize { var buf bytes.Buffer for j := 0; j < batchSize; j++ { fmt.Fprintf(&buf, "cpu,host=%s value=%d %d", hostNames[j%len(hostNames)], 100+rand.Intn(50)-25, (time.Duration(i+j)*time.Second)+(time.Duration(rand.Intn(500)-250)*time.Millisecond), ) if j != pointN-1 { fmt.Fprint(&buf, "\n") } } if err := e.WritePointsString(buf.String()); err != nil { panic(err) } } if err := e.WriteSnapshot(); err != nil { panic(err) } // Force garbage collection. runtime.GC() // Save engine reference for reuse. benchmark.Engine = e benchmark.PointN = pointN return e } // Engine is a test wrapper for tsm1.Engine. type Engine struct { *tsm1.Engine root string } // NewEngine returns a new instance of Engine at a temporary location. func NewEngine() *Engine { root, err := ioutil.TempDir("", "tsm1-") if err != nil { panic(err) } return &Engine{ Engine: tsm1.NewEngine( filepath.Join(root, "data"), filepath.Join(root, "wal"), tsdb.NewEngineOptions()).(*tsm1.Engine), root: root, } } // MustOpenEngine returns a new, open instance of Engine.
func MustOpenEngine() *Engine { e := NewEngine() if err := e.Open(); err != nil { panic(err) } if err := e.LoadMetadataIndex(1, tsdb.NewDatabaseIndex("db")); err != nil { panic(err) } return e } // Close closes the engine and removes all underlying data. func (e *Engine) Close() error { defer os.RemoveAll(e.root) return e.Engine.Close() } // Reopen closes and reopens the engine. func (e *Engine) Reopen() error { if err := e.Engine.Close(); err != nil { return err } e.Engine = tsm1.NewEngine( filepath.Join(e.root, "data"), filepath.Join(e.root, "wal"), tsdb.NewEngineOptions()).(*tsm1.Engine) if err := e.Engine.Open(); err != nil { return err } return nil } // MustWriteSnapshot forces a snapshot of the engine. Panic on error. func (e *Engine) MustWriteSnapshot() { if err := e.WriteSnapshot(); err != nil { panic(err) } } // WritePointsString parses a string buffer and writes the points. func (e *Engine) WritePointsString(buf ...string) error { return e.WritePoints(MustParsePointsString(strings.Join(buf, "\n"))) } // MustParsePointsString parses points from a string. Panic on error. func MustParsePointsString(buf string) []models.Point { a, err := models.ParsePointsString(buf) if err != nil { panic(err) } return a } // MustParsePointString parses the first point from a string. Panic on error. func MustParsePointString(buf string) models.Point { return MustParsePointsString(buf)[0] } type mockPlanner struct{} func (m *mockPlanner) Plan(lastWrite time.Time) []tsm1.CompactionGroup { return nil } func (m *mockPlanner) PlanLevel(level int) []tsm1.CompactionGroup { return nil } // ParseTags returns an instance of Tags for a comma-delimited list of key/values. func ParseTags(s string) influxql.Tags { m := make(map[string]string) for _, kv := range strings.Split(s, ",") { a := strings.Split(kv, "=") m[a[0]] = a[1] } return influxql.NewTags(m) }
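// ParseTags above depends on influxql and assumes every entry is a
// well-formed key=value pair. A stand-alone sketch of the same parsing
// logic, stdlib only; the malformed-entry guard is an addition for
// illustration and is not present in the helper above.

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags mirrors ParseTags: it turns a comma-delimited list of
// key=value pairs into a map, skipping any entry without an '='.
func parseTags(s string) map[string]string {
	m := make(map[string]string)
	for _, kv := range strings.Split(s, ",") {
		a := strings.SplitN(kv, "=", 2)
		if len(a) == 2 {
			m[a[0]] = a[1]
		}
	}
	return m
}

func main() {
	// fmt prints map keys in sorted order.
	fmt.Println(parseTags("host=A,region=west")) // map[host:A region:west]
}
```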
\begin{document} \title{Larc: a State Collapse Theory of Quantum Measurement} \author{Michael Simpson} \date{July 2010} \maketitle \begin{abstract} This paper proposes a new theory of quantum measurement: a state reduction theory in which reduction is to the elements of the number operator basis of a system, triggered by the occurrence of annihilation or creation (or lowering or raising) operators in the time evolution of a system. It is from these operator types that the acronym `LARC' is derived. Reduction does not occur immediately after the trigger event; it occurs at some later time with probability $P_t$ per unit time, where $P_t$ is very small. Localisation of macroscopic objects occurs in the natural way: photons from an illumination field are reflected off a body and later absorbed by another body. Each possible absorption of a photon by a molecule in the second body generates annihilation and raising operators, which in turn trigger a probability per unit time $P_t$ of a state reduction into the number operator basis for the photon field and the number operator basis of the electron orbitals of the molecule. Since all photons in the illumination field have come from the location of the first body, wherever that is, a single reduction leads to a reduction of the position state of the first body relative to the second, with a total probability of $m\tau_L$, where $m$ is the number of photon absorption events. Unusually for a reduction theory, the larc theory is naturally relativistic. \end{abstract} \section{The Measurement Problem} Quantum mechanical time evolution is Hamiltonian time evolution. But quantum measurement, on the face of things, appears to need a second law, unique to it. There are two reasons for this apparent need. The first is that without a second law, measurements do not produce definite results.
The second is that Hamiltonian evolution is deterministic, but measurement outcomes are probabilistic; the probabilistic element of quantum measurement is provided by the second law. The most primitive form of second law is the `quantum jump'; a sudden, probabilistic, change of state that occurs during a measurement. Unfortunately, no-one has been able to convincingly specify the physical property of measurement that invokes this special process. The measurement problem consists in the apparent inability of pure Hamiltonian time evolution to describe measurements correctly, combined with our inability to give any special process a sound physical basis. When using quantum mechanics for everyday calculation, physicists don't notice this problem, because they have learned to recognise physical arrangements that constitute measurement instruments, and at what point measurement can be said to have occurred, by applying the laws of quantum theory with what Bell referred to as `good taste'\cite{bell1}. A solution to the measurement problem needs to either: 1) explain measurement without recourse to special processes, or 2) specify the special processes as physical laws, most importantly providing a law to explain when the process occurs, since `measurement' is not physically well-defined. This is a basis for a broad classification of solutions to the measurement problem: type 1 solutions explain measurement without invoking any fundamental physical processes outside Hamiltonian time evolution; type 2 solutions specify new fundamental physical laws that allow measurement to be explained. Everett's relative state interpretation is an example of a type 1 solution \cite{everett1}. 
Type 2 solutions can themselves be said to fall into two broad classes: those that retain the bulk of the quantum theoretical formalism, only adding the minimum needed to address the measurement problem itself; and those that depart more or less widely from the quantum formalism, producing effectively a new theory. The Ghirardi-Rimini-Weber \cite{grw1} theory falls into the former class, while hidden-variable theories such as the Bohm\cite{bohm1} theory fall into the latter. This paper will propose a solution to the measurement problem of type 2, and of the former class: one that adds only enough to quantum theory to (with luck) explain the behaviour of measurement processes. Maudlin\cite{maudlin1} has given a very succinct summary of the system of arguments that leads some to believe that the solution to the measurement problem must be of type 2. We are going to need at least one new law, which must do two things. It must specify when the state reduction process occurs, including what states of affairs trigger the process, and it must specify how the Hilbert space basis in which the reduction occurs is chosen. Somewhere along the way, the theory must explain why this new process appears to occur only during measurements and not at other times. Since there is no physical property possessed by measurement that distinguishes it from other processes, this is inherently tricky. \section{Outline of the Theory} Up to now, proposed type 2 solutions to the measurement problem have usually kept within nonrelativistic quantum mechanics. They have often, even when adequate at the nonrelativistic level, met with difficulty in developing into a relativistic form\cite{grw1}\cite{bohm1}. The event that will trigger state reduction in this theory, while it appears in nonrelativistic quantum mechanics, is more naturally associated with relativistic quantum mechanics and quantum field theories. 
The ability of the number of particles, or of field quanta, to vary in quantum field theory is one of its fundamental properties. Changes in the number of field quanta have operators associated with them, known as the annihilation ($\hat{a}$) and creation ($\hat{a}^\dagger$) operators. Processes in which a particle of a given type is annihilated and replaced by the creation of particles of other types are routine in quantum field theory. Conservation laws demand that particle annihilation must be followed by particle creation. (`Followed' here means in the temporal sense. Formally, the creation operators fall to the left of the annihilation operators.) The concepts behind annihilation and creation operators also apply to the less esoteric case of a bound system in a certain energy state. The classic case is the harmonic oscillator, for which there exist lowering ($\hat{a}$) and raising ($\hat{a}^\dagger$) operators that move the oscillator to the next lower or next higher energy state. The harmonic oscillator is the classic example because it is one of the very few cases for which the mathematical form of the operators can be solved exactly. However, in principle, corresponding operators exist for the electronic states of hydrogen atoms, or of more complex molecules. Lowering and raising operators mix with annihilation and creation operators, following the rule that a lowering or annihilation operator (or group of them) must be followed temporally by a raising or creation operator (or group of them) so that conservation laws are obeyed. I'll refer to a well-formed cluster of lowering/annihilation and raising/creation operators as a larc. One example of a larc would be pair creation: a photon annihilation operator followed by an electron creation operator and a positron creation operator. Another example would be absorption of a photon by a molecule, described by a photon annihilation operator followed by a raising operator for the electronic states of the molecule. 
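The well-formedness condition on a larc can be phrased mechanically. The following toy sketch is my own illustration, not formalism from the paper: it treats a process as a list of operator labels in temporal order (the labels are invented) and checks whether they form a larc, i.e. one or more lowering/annihilation operators followed by the raising/creation operators that balance them.

```python
# Illustrative operator labels (my own naming, purely for this sketch).
LOWERING = {"a_photon", "a_electron", "lower_orbital"}      # annihilation / lowering
RAISING = {"adag_photon", "adag_electron", "adag_positron",
           "raise_orbital"}                                 # creation / raising

def is_larc(ops):
    """True if every lowering operator precedes every raising operator,
    with at least one of each present."""
    seen_raising = False
    for op in ops:
        if op in RAISING:
            seen_raising = True
        elif op in LOWERING:
            if seen_raising:      # a lowering operator after a raising one
                return False      # -> not well formed
        else:
            raise ValueError("unknown operator: " + op)
    return bool(ops) and seen_raising and ops[0] in LOWERING

# Pair creation: photon annihilated, then an electron and a positron created.
assert is_larc(["a_photon", "adag_electron", "adag_positron"])
# Photon absorption by a molecule: annihilation then an orbital raising.
assert is_larc(["a_photon", "raise_orbital"])
# A bare creation event, with nothing annihilated, is not a larc.
assert not is_larc(["adag_photon"])
```

The two `assert`ed positive cases are exactly the two examples given above; anything that creates quanta without a preceding annihilation fails the check.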
The larc theory postulates that the triggering event for a state reduction is the occurrence of a larc in the description of a process. This does not mean that a state reduction happens at every larc event: a larc triggers a probability of a state reduction, specifically a probability per unit time. If this probability per unit time is small enough, there will be virtually zero chance of a state reduction occurring during a process involving small numbers of larc events, which will be the case, for example, in high-energy particle experiments. A state reduction consists of a lopping off of branches. In most cases a larc event occurs as a branching: the larc event for absorption of a photon by a molecule will be a branch superposed with the alternative event of the photon passing through the molecule without being absorbed. Each of these branches has an amplitude. The presence of a larc in this process creates a probability per unit time that all but one of the branches will be lopped off; the probability for each branch to be the survivor is the square modulus of its amplitude. In the larc theory, the basis in which the reduction occurs is the number basis for the particles or field quanta defined by the lowering/annihilation and raising/creation operators involved in each larc event. In summary, the elements of the larc theory of quantum measurement are: 1. State reduction occurs following a larc. After a larc occurs there is a probability per unit time of a state reduction, which consists of a lopping off of the branches not chosen. `Lopping off' means a zeroing of the amplitudes of those branches. 2. The basis in which the reduction occurs is the number basis defined by the operators in the larc. \section{Localisation of Macroscopic Bodies} So far, the larc theory doesn't appear to include anything that will solve the classic problem of quantum measurement: the localisation of macroscopic bodies. 
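The trigger-and-lop rule can be sketched as a toy Monte Carlo. This is my own illustration; the rate, the time step, and the function name are arbitrary assumptions, not part of the theory. After a larc, a reduction fires with probability $P_t$ per unit time, and when it fires one branch survives with probability equal to the square modulus of its amplitude.

```python
import random

def simulate_reduction(amplitudes, P_t, t_max, dt=0.01, rng=random):
    """Follow a post-larc superposition for time t_max.  In each small
    step dt a reduction fires with probability P_t * dt; if it fires,
    one branch survives with probability |amplitude|^2 and the rest are
    'lopped off'.  Returns the surviving branch index, or None if no
    reduction occurred before t_max."""
    weights = [abs(a) ** 2 for a in amplitudes]
    t = 0.0
    while t < t_max:
        if rng.random() < P_t * dt:
            return rng.choices(range(len(amplitudes)), weights=weights)[0]
        t += dt
    return None

# With a vanishing rate and few larc events, reduction essentially never
# happens -- as in a high-energy particle experiment.
assert simulate_reduction([0.6, 0.8j], P_t=0.0, t_max=1.0) is None
# With an overwhelming rate, a reduction fires and a branch survives.
assert simulate_reduction([1.0, 0.0], P_t=1e6, t_max=1.0) == 0
```

The point of the sketch is the separation of the two postulates: the *timing* of the reduction depends only on $P_t$, while the *choice* of survivor depends only on the squared-modulus amplitudes.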
Macroscopic body localisation is important for two reasons: first, it explains why the moon, or a cricket ball, is in a definite location (and is still there even when nobody looks). Second, it ensures that the pointers of measuring devices must show definite outcomes. Since pretty much any measuring device can be designed to show its output as the position of a pointer, or some equivalent, such as the position of macroscopic patches of colour on a display device, localisation of macroscopic objects is effectively a general solution to the measurement problem, at least to a first approximation. That macroscopic objects are going to be localised by the larc theory may not be immediately obvious. I've postulated a process that has nothing to do with position localisation, and occurs extremely rarely to boot. In fact, the fundamental machinery required is already in place. The completion of the argument to localisation requires a further property of quantum mechanics I will refer to as amplification by correlation (abc). As far as I know, this mechanism is due to Ghirardi, Rimini, and Weber\cite{grw1}. In outline, abc acts whenever a composite state contains a large number of quanta that are in states correlated with one another. The correlation means that if any one of the quanta gets reduced to a single definite state, all the other quanta in the composite state will be reduced to their corresponding definite states. Since every quantum's state will be reduced if any quantum's state becomes reduced, the probability of reduction of the whole composite state becomes roughly the sum of the probabilities of each individual quantum being reduced. In this way, the probability of reduction is amplified, potentially by many orders of magnitude, by the correlated state. Consider, then, a thought experiment. Body B is confined to a known line and begins in a superposition of two positions on that line, $x_1$ and $x_2$. Larc-abc is capable of localising a body whose position is unknown in three dimensions, but it is sufficient to show that it can localise B. 
The space where B exists can be illuminated. Initially, the light is off. At $t=t_i$ the light will turn on. The final element of the thought experiment is a pinhole camera, C. This is positioned in the plane, facing towards the line we know B is confined to. C consists of a box with a small hole in the side facing B (not small compared to the wavelength of the illumination, of course). At the back of the camera is an old-fashioned photographic plate. \begin{picture} (100,200)(-10,-20) \thicklines \put(0,25) { \put(35,15){B} \put(217,45){C} \put(200, 60.35) { \put(0,0){\line(0,1){18}} \put(0,22){\line(0,1){18}} \put(0,0){\line(1,0){40}} \put(0,40){\line(1,0){40}} \put(40,0){\line(0,1){40}} } \put(28,118){$x_1$} \put(240,70){\line(-4,1){200}} \put(244,66.5){$\xi_1$} \put(28,38){$x_2$} \put(240,90){\line(-4,-1){200}} \put(244,87){$\xi_2$} } \put(45,0){Localisation of body B by pinhole camera C} \end{picture} Let's start with the simplest possible case: B in a superposition of two possible locations, $x_1$ and $x_2$, \begin{equation} \psi_{B} = a_{1} \delta (x-x_1) + a_{2} \delta (x-x_2) \label{e1} \end{equation} with $|a_{1}|^{2} + |a_{2}|^{2} = 1$. At time $t_{i}$, the light turns on, \begin{equation} a_{1} \delta (x-x_1) I(x,t_{i}) + a_{2} \delta (x-x_2) I(x,t_{i}) \label{e2} \end{equation} where $I(x,t_{i})$ represents the electromagnetic field radiation expanding from point $x$ at time $t_{i}$. The photographic plate is composed of individual molecules capable of absorbing visible photons and, as a consequence, undergoing a change which will later be visible to the unaided human eye, provided enough molecules absorb photons. At time $t_f$ the plate has been illuminated by $n$ photons, where $n$ is presumed to be large enough to ensure that the result will be visible. 
Again to simplify the mathematics, and without loss of generality, we assume that the camera is positioned equidistant from $x_1$ and $x_2$, so that the number of photons illuminating the plate is the same for either possible position of B. If the body B is at $x_1$, then a patch of the photographic plate will be illuminated; call this patch $\xi_1$. If the body is at $x_2$, a different patch will be illuminated; call it $\xi_2$. The illumination landing on $\xi_i$ at time $t_f$ will be represented by $I(\xi_i,t_f,n)$; $n$ is the number of photons that will fall on the patch $\xi_i$. As $I$ passes through each molecule it generates a factor representing possible absorption of a photon by a molecule at the photographic plate. A very simple representation of this factor is \begin{eqnarray} \lefteqn{ a_{1} b_{1} \delta (x-x_1) \hat{a}_{\gamma} I(\xi_{1},t_f,n) \hat{a}^{\dagger}_{e} \psi(\xi_{1},i) \psi(\xi_{2},i) } \nonumber \\ & \mbox{} + a_{1} b_{2} \delta (x-x_1) I(\xi_{1},t_f,n) \psi(\xi_{1},i) \psi(\xi_{2},i) \nonumber \\ & \mbox{} + a_{2} b_{1} \delta (x-x_2) \hat{a}_{\gamma} I(\xi_{2},t_f,n) \psi(\xi_{1},i) \hat{a}^{\dagger}_{e} \psi(\xi_{2},i) \nonumber \\ & \mbox{} + a_{2} b_{2} \delta (x-x_2) I(\xi_{2},t_f,n) \psi(\xi_{1},i) \psi(\xi_{2},i) \label{e3} \end{eqnarray} where $|b_{1}|^{2} + |b_{2}|^{2} = 1$, $b_{1}$ being the amplitude for a photon to be absorbed by a light-sensitive molecule, while $b_{2}$ is the amplitude for the photon to pass through without being absorbed (in a given time). $\psi(\xi_{1},i)$ is the initial electron orbital state of a light-sensitive molecule in area $\xi_{1}$, the area on the photographic plate that will be illuminated if B is at $x_{1}$. $\hat{a}_{\gamma}$ is a photon annihilation operator such that $\hat{a}_{\gamma} I(\xi_{1},t_f,n) = n^{1/2} I(\xi_{1},t_f,n-1)$. 
$\hat{a}^{\dagger}_{e}$ is a raising operator acting on the electron orbital of the molecule, raising it to the energy state that leads to a visible change, so that $\hat{a}^{\dagger}_{e} \psi(\xi_{1},i) = \psi(\xi_{1},f)$. When states of the form (\ref{e3}) exist the proposed state reduction mechanism becomes active. The mechanism has three elements. (1) Trigger. Once the state contains a larc there is a probability per unit time of a reduction. Call the time constant for this $P_t$. In this reduction branches other than the one selected will be lopped off, their amplitudes set to zero. (2) Basis choice. The basis in which this reduction occurs is the number basis consistent with the annihilation and raising operators that appear in the larc used to describe the absorption of the photon by the molecule. In this case the relevant number operators are $\hat{n}_\gamma = \hat{a}_{\gamma}^\dagger \hat{a}_{\gamma}$, corresponding to the number of photons in the incoming electromagnetic field, $\hat{n}_{ei}$, the number of electrons in the initial electron orbital state, and $\hat{n}_{ef}$, the number of electrons in the final electron orbital state. I will express these combinations of number operator eigenvalues in the form of a 3-tuple $(n, i, f)$, where $n$ is the number of photons, $i$ the number of electrons in the initial state, and $f$ the number of electrons in the final state. $i$ and $f$ can each only be zero or one, because the electron is a fermion, and $i + f = 1$, because the number of electrons is conserved. The initial state is $(n, 1, 0)_1$, where the subscript indicates which of the two regions, $\xi_1$ or $\xi_2$, is in question. There are three possible final combinations: if the photon is absorbed the final state is $(n-1, 0, 1)_1$, which corresponds to a zeroing of amplitude $b_{2}$; if the photon is not absorbed the state is $(n, 1, 0)_1$, which corresponds to a zeroing of amplitude $b_{1}$. 
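The bookkeeping for these 3-tuples can be made mechanical. A small sketch of my own, using only the constraints stated above ($i, f \in \{0,1\}$ because the electron is a fermion, $i + f = 1$ by conservation, photon number $n$ or $n-1$, and the orbital can only be raised if a photon has left the field), recovers exactly the three final combinations:

```python
def allowed_outcomes(n):
    """Enumerate the final number-basis 3-tuples (photons, electrons in
    the initial orbital, electrons in the raised orbital) reachable from
    the initial combination (n, 1, 0), under the constraints in the text:
    i, f in {0, 1} (the electron is a fermion); i + f = 1 (electron
    number conserved); the photon number is n or n - 1; and the orbital
    can only be raised (f = 1) if a photon has left the field."""
    outcomes = []
    for photons in (n, n - 1):
        for i in (0, 1):
            f = 1 - i
            if f == 1 and photons == n:  # absorption must remove a photon
                continue
            outcomes.append((photons, i, f))
    return outcomes

# Exactly the three final combinations discussed in the text:
# absorbed, untouched, and "photon removed without absorption".
assert set(allowed_outcomes(5)) == {(5, 1, 0), (4, 0, 1), (4, 1, 0)}
```

Note that the third combination, $(n-1, 1, 0)$, falls out of the enumeration automatically; its physical interpretation is taken up next.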
In each of these cases it is inherent that $a_{2}$ is also zeroed. There remains a third possible final combination: $(n-1, 1, 0)_1$. This corresponds not to the photon being absorbed by the molecule, but to a removal of one photon from the electromagnetic field $I(\xi_{1},t_f,n)$ without its being absorbed by the molecule at $\xi_{1}$. On the face of things, this outcome violates energy conservation, since the photon disappears but its energy does not get transferred to the electron orbital of the molecule. However, recall that (\ref{e3}) is a superposition of two different positions for B, and consequently, two different illumination states. If a photon disappears from one of the two superposed illumination states, then it still exists in the other. This outcome corresponds with a zeroing of $a_{1}$. This third way of reducing the state is necessary to the success of the whole enterprise. Imagine an alternative version of the pinhole camera, in which there is film in only the half of the camera containing $\xi_1$, the half containing $\xi_2$ being open, so that any photons that would have been absorbed at $\xi_2$ now pass through empty space without being absorbed at all. If in this situation only the first two combinations were allowed, then with probability one, B would be at $x_1$. It would be possible to determine where B is simply by choosing whether to detect the illumination beam at $\xi_1$ or $\xi_2$. Therefore, combination 3 is necessary if the larc-abc mechanism is to produce the correct Born rule probabilities. (3) State Reduction. The final result of the state reduction process is one of three possible states. The first, in which the photon is definitely absorbed, corresponds to $(n-1, 0, 1)_1$ \begin{equation} \delta (x-x_1) \hat{a}_{\gamma} I(\xi_{1},t_f,n) \hat{a}^{\dagger}_{e} \psi(\xi_{1},i) \psi(\xi_{2},i) \label{e3.2} \end{equation} which occurs with probability $|a_{1} b_{1}|^2$. 
Of course, this state is equal to \begin{equation} \delta (x-x_1) n^{1/2} I(\xi_{1},t_f,n-1) \psi(\xi_{1},f) \psi(\xi_{2},i) \label{e3.3} \end{equation} The second, in which the photon is definitely not absorbed, corresponds to $(n, 1, 0)_1$ \begin{equation} \delta (x-x_1) I(\xi_{1},t_f,n) \psi(\xi_{1},i) \psi(\xi_{2},i) \label{e3.4} \end{equation} which occurs with probability $|a_{1} b_{2}|^2$. The third case is a little harder to describe neatly. One photon is removed from the illumination field in the $\xi_{1}$ region, without being absorbed. This situation corresponds to $(n-1, 1, 0)_1$ \begin{eqnarray} \lefteqn{ b_{1} \delta (x-x_2) \hat{a}_{\gamma} I(\xi_{2},t_f,n) \psi(\xi_{1},i) \hat{a}^{\dagger}_{e} \psi(\xi_{2},i) } \nonumber \\ & \mbox{} + b_{2} \delta (x-x_2) I(\xi_{2},t_f,n) \psi(\xi_{1},i) \psi(\xi_{2},i) \label{e3.6} \end{eqnarray} which occurs with probability $|a_{2}|^2$. \section{Amplification By Correlation} As exposure goes on, we get large numbers of factors like (\ref{e3}) representing possible absorption of photons by the molecules in regions $\xi_{1}$ and $\xi_{2}$. Assuming that the photographic plate's coating is opaque, all the photons will eventually be absorbed. 
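Before proceeding, a quick numerical check on the three single-molecule end states just enumerated (the amplitudes below are arbitrary illustrative values of my own choosing): their probabilities $|a_1 b_1|^2$, $|a_1 b_2|^2$, and $|a_2|^2$ are exhaustive, and the first two together reproduce the Born probability $|a_1|^2$ for B being at $x_1$.

```python
import math

# Arbitrary normalised amplitudes, chosen for illustration only:
a1, a2 = 0.6, 0.8        # |a1|^2 + |a2|^2 = 1 (positions x1, x2)
b1, b2 = 0.28, 0.96      # |b1|^2 + |b2|^2 = 1 (absorbed / not absorbed)

p_absorbed = abs(a1 * b1) ** 2      # end state (n-1, 0, 1)_1
p_not_absorbed = abs(a1 * b2) ** 2  # end state (n,   1, 0)_1
p_removed = abs(a2) ** 2            # end state (n-1, 1, 0)_1

# The three outcomes are exhaustive ...
assert math.isclose(p_absorbed + p_not_absorbed + p_removed, 1.0)
# ... and the first two together give the Born probability |a1|^2
# that B is localised at x1.
assert math.isclose(p_absorbed + p_not_absorbed, a1 ** 2)
```

This is the quantitative sense in which the third combination is needed: without $p_{\mathrm{removed}} = |a_2|^2$, the outcome probabilities would not sum to one.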
After the entire illumination front has passed the camera, the state at the photographic plate will look something like \begin{eqnarray} \lefteqn{ \textstyle{ a_{1} \delta (x-x_1) \prod_{j=1}^{m_{1}}\left[b_{2}\psi_{j}(\xi_{1},i) + b_{1}\hat{a}^{\dagger}_{e}\psi_{j}(\xi_{1},i)\hat{a}_{\gamma} \right]I(\xi_{1},t_f,n) }} \nonumber \\ & \textstyle{ \mbox{} \times \prod_{j=1}^{m_{2}}\psi_{j}(\xi_{2},i) }\nonumber \\ & \textstyle{ \mbox{} + a_{2} \delta (x-x_2) \prod_{j=1}^{m_{2}}\left[b_{2}\psi_{j}(\xi_{2},i)+b_{1}\hat{a}^{\dagger}_{e}\psi_{j}(\xi_{2},i)\hat{a}_{\gamma} \right]I(\xi_{2},t_f,n) } \nonumber \\ & \textstyle{ \mbox{} \times \prod_{j=1}^{m_{1}}\psi_{j}(\xi_{1},i) }\label{e4} \end{eqnarray} where $m_1$ and $m_2$ represent the number of absorbing molecules in regions 1 and 2 respectively. As before there are three elements to the state reduction process, and three possible end states. (1) Trigger. The state described by equation (\ref{e4}) contains a large number of larc events. Each one creates a probability per unit time of a state reduction. The probability per unit time is $P_t$ for each larc event. The total number of larc events in (\ref{e4}) is $m_1 + m_2$, which means that, roughly, the probability per unit time for a reduction is $P_t(m_1 + m_2)$. (2) Basis Choice. The basis choice for any individual larc event is the same as that for equation (\ref{e3}). Let us say, without loss of generality, that the larc event associated with molecule $q$ in patch $\xi_1$ is the first to trigger a state reduction. The basis will then be that associated with the photon number operator, the number operator for the initial state of molecule $1;q$, and the number operator for the final state of molecule $1;q$. The possible outcomes, characterised as above, are $(n-1,0,1)_{1;q}$, $(n,1,0)_{1;q}$, and $(n-1,1,0)_{1;q}$. 
(3) State Reduction. Again, one of three possible states results from the process. The first, $(n-1,0,1)_{1;q}$, corresponds to the photon being definitely absorbed at $1;q$. \begin{eqnarray} \delta(x-x_1) \hat{a}^{\dagger}_{e}\psi_{q}(\xi_{1},i)\hat{a}_{\gamma} \prod_{\stackrel{\scriptstyle j=1}{j\ne q}}^{m_{1}}\left[b_{2}\psi_{j}(\xi_{1},i) + b_{1}\hat{a}^{\dagger}_{e}\psi_{j}(\xi_{1},i)\hat{a}_{\gamma} \right]I(\xi_{1},t_f,n) \nonumber \\ \mbox{} \times \prod_{j=1}^{m_{2}}\psi_{j}(\xi_{2},i) \label{e4.2} \end{eqnarray} with probability $|a_1 b_1|^2$. The core of the larc-abc mechanism is now clear. The crucial property of the state described by (\ref{e4}) is this: the photons are all correlated. If any one photon is absorbed in region $\xi_{1}$, then every photon must be absorbed in region $\xi_{1}$ and none in region $\xi_{2}$. This is clear in (\ref{e4.2}), which is the result of a single photon definitely absorbed in region $\xi_{1}$. All terms in which a photon could be absorbed in region $\xi_{2}$ have vanished. Since the region in which a photon is absorbed is correlated with the position of B, any photon absorbed in $\xi_{1}$ means the position state of B is $\delta (x-x_1)$, which corresponds to a zeroing of $a_2$. Again, this is clear in (\ref{e4.2}): all the terms with $\delta (x-x_2)$ are gone. The second possible end state after a state reduction is $(n,1,0)_{1;q}$, corresponding to a photon being definitely not absorbed at $1;q$. \begin{eqnarray} \delta (x-x_1) \psi_{q}(\xi_{1},i) \prod_{\stackrel{\scriptstyle j=1}{j\ne q}}^{m_{1}}\left[b_{2}\psi_{j}(\xi_{1},i) + b_{1}\hat{a}^{\dagger}_{e}\psi_{j}(\xi_{1},i)\hat{a}_{\gamma} \right]I(\xi_{1},t_f,n) \nonumber \\* \mbox{} \times \prod_{j=1}^{m_{2}}\psi_{j}(\xi_{2},i) \label{e4.4} \end{eqnarray} with probability $|a_1 b_2|^2$. 
The third end state is $(n-1,1,0)_{1;q}$, corresponding to the photon being removed from the illumination field at $\xi_1$ without being absorbed at $1;q$. \begin{eqnarray} \delta (x-x_2) \prod_{j=1}^{m_{2}}\left[b_{2}\psi_{j}(\xi_{2},i)+b_{1}\hat{a}^{\dagger}_{e}\psi_{j}(\xi_{2},i)\hat{a}_{\gamma} \right]I(\xi_{2},t_f,n) \nonumber \\* \mbox{} \times \prod_{j=1}^{m_{1}}\psi_{j}(\xi_{1},i) \label{e4.6} \end{eqnarray} with probability $|a_2|^2$. Effectively, B has been localised to $\delta (x-x_2)$ before any photon has been definitely absorbed, or definitely not absorbed, at $\xi_2$. The form of equation (\ref{e4.6}) is determined by the correlation of photons in the illumination field $a_{1} I(\xi_{1},t_f,n) + a_{2} I(\xi_{2},t_f,n)$. The photons are correlated such that they are either all at $\xi_1$, or all at $\xi_2$. So, equations (\ref{e4.2}), (\ref{e4.4}), and (\ref{e4.6}) are the final end results of the measurement process. In each of them, body B has become definitely localised. As I have said, if even a single photon undergoes a state reduction, it carries all the photons with it, as well as the body B with whose position they are all in turn correlated. This means that even if the probability per unit time of a reduction is very small for each larc event, the enormous number of larc events involved implies that the probability of B becoming localised may be virtually one. This is the effect I have dubbed amplification by correlation (abc): a very small probability of a state reduction ($P_t$) for each photon is amplified by correlation of large enough numbers of photons ($m_1 + m_2$), into a very large probability of reduction of the position of body B ($P_t(m_1 + m_2)$). Hence (\ref{e4}) implies that B becomes localised with probability virtually one. 
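The amplification arithmetic can be made explicit with a small numerical sketch (the rates, times, and function names below are illustrative assumptions of my own). With $m = m_1 + m_2$ correlated larc events, each an independent trigger with rate $P_t$, the chance that at least one fires within exposure time $\tau$ is $1 - e^{-m P_t \tau}$, which for small rates reduces to the additive estimate $P_t (m_1 + m_2) \tau$ used above:

```python
import math

def p_reduce_exact(P_t, tau, m):
    """Probability that at least one of m correlated larc events, each an
    independent trigger with rate P_t, fires within time tau."""
    return 1.0 - math.exp(-m * P_t * tau)

def p_reduce_approx(P_t, tau, m):
    """Small-rate limit used in the text: the probabilities simply add."""
    return m * P_t * tau

# A single larc event with a tiny rate essentially never reduces ...
single = p_reduce_exact(1e-12, 1.0, 1)
# ... but ~10^12 correlated absorption events amplify the same rate into
# near-certain localisation of B over a modest exposure time.
many = p_reduce_exact(1e-12, 1e3, 10**12)
assert single < 1e-11 and many > 0.999
```

Because one reduction carries every correlated photon (and hence B) with it, the per-photon rate and the photon count enter only through their product, which is why a tiny $P_t$ is compatible with near-certain macroscopic localisation.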
It is worth being clear about what is and is not correlated in (\ref{e4}). If any photon is definitely absorbed at $\xi_1$, then no photon can be absorbed at $\xi_2$; all must ultimately be absorbed at $\xi_1$. However, while the dropping of one molecule into a definite state forces the position of B into a definite state, it does not force any of the other photons to decide whether they have been definitely absorbed. As a result, it is possible that when a person looks at the plate, there will not be any definite fact of the matter of exactly which molecules have been changed and which have not. Some molecules may still be in a superposition of having absorbed or not absorbed a photon. But this is not as serious as it might sound. The differing states that are superposed are not distinguishable to the human observer, because they all lie within one region. Furthermore, it is likely that photon absorption and amplification processes that occur within the visual system and brain will raise the probability of these superposed states becoming definite. It is clear that the above argument can be generalised to a body whose position is unknown in one, two or three dimensions, using two or three pinhole cameras. \section{Human-independent localisation} The argument so far has relied on the pinhole camera, which is clearly a human artifact. Can we expect that the larc-abc mechanism described above will lead to localisation by other measuring devices, or even without any human involvement? The answer is yes, for more than one reason. First, the involvement of humans, as observers, is irrelevant to the localisation process. 
The only involvement of humans as observers (as opposed to artificers) in the above argument is the specification that the state reached by a molecule when it absorbs a photon, $\hat{a}^{\dagger}_{e} \psi(\xi_{1},i)=\psi(\xi_{1},f)$, is visibly different, to the human eye, when compared with the initial state, $\psi(\xi_{1},i)$. But this assumption plays no role in the argument leading to localisation. Localisation would occur in exactly the same way if the two molecular states were not visibly different to the human eye, and hence does not depend on the camera's use as a measuring instrument, but only on its basic physical characteristics. Second, localisation will occur even if the `pinhole camera' is just a natural cavity in the surface of a second body. After all, real macroscopic bodies are not smooth surfaced; real bodies have surfaces with many ridges and valleys, hills and cavities. At the bottoms of these valleys and cavities will be places where illumination from another body can reach only if it is in the right relative position, so the above analysis based on the `pinhole camera' will still apply. Third, and more generally, when a photon is absorbed by a molecule or crystal lattice it transfers information about the direction of its source in the form of a momentum kick. Imagine a single molecule, M, part of a macroscopic body positioned somewhere off the line $x$, which is struck by the illumination field coming from body B. The illumination is a superposition of illumination fields coming from two different directions. Direction one corresponds to a photon coming from position $x_1$, which will give the molecule a kick $\mathbf{p}_1$. Direction two corresponds to a photon coming from position $x_2$, which will give the molecule a kick $\mathbf{p}_2$. 
Once the illumination field has passed through the molecule the state will be \begin{eqnarray} \lefteqn{ a_1 b_1 \delta (x-x_1) \hat{a}_{\gamma} I(\mathbf{p}_{1},t_f,n) \hat{a}^{\dagger}_{e} \psi(i) \rp{1} \phi(\mathbf{p}_i) } \nonumber \\ & & \mbox{} + a_1 b_2 \delta (x-x_1) I(\mathbf{p}_{1},t_f,n) \psi(i) \phi(\mathbf{p}_i) \nonumber \\ & & \mbox{} + a_2 b_1 \delta (x-x_2) \hat{a}_{\gamma} I(\mathbf{p}_{2},t_f,n) \hat{a}^{\dagger}_{e} \psi(i) \rp{2} \phi(\mathbf{p}_i) \nonumber \\ & & \mbox{} + a_2 b_2 \delta (x-x_2) I(\mathbf{p}_{2},t_f,n) \psi(i) \phi(\mathbf{p}_i) \label{e5} \end{eqnarray} where $\mathbf{p}_1$ ($\mathbf{p}_2$) represents the momentum of a photon travelling from $x_1$ ($x_2$) towards M. It is no longer the position of the molecule M that is correlated with the position of B; the position of B is now correlated with the direction of the momentum striking a single molecule. $\phi(\mathbf{p}_i)$ is the centre of mass momentum state for the molecule M, with an initial momentum of $\mathbf{p}_i$. $\rp{1}$ is a raising operator affecting the centre of mass momentum state of the molecule and adding momentum $\mathbf{p}_1$ to it; hence $\rp{1} \phi(\mathbf{p}_i) = \phi(\mathbf{p}_i + \mathbf{p}_1)$. 
\begin{picture} (100,200)(-10,-20) \thicklines \put(0,20) { \put(35,10){B} \put(196,45){M} \put(200,80){\circle*{5}} \put(28,118){$x_1$} \put(200,80){\line(-4,1){160}} \put(200,80){\vector(4,-1){50}} \put(254,65.5){$\mathbf{p}_1$} \put(28,38){$x_2$} \put(200,80){\line(-4,-1){160}} \put(200,80){\vector(4,1){50}} \put(254,90){$\mathbf{p}_2$} } \put(10,0){Localisation of body B by momentum transferred to molecule M} \end{picture} Decomposing the momentum kick into components $\mathbf{p}_1 = \mathbf{p}_{1x} + \mathbf{p}_{1y}$, we have \begin{eqnarray} \lefteqn{ a_1 b_1 \delta (x-x_1) \hat{a}_{\gamma} I(\mathbf{p}_{1x},\mathbf{p}_{1y},t_f,n) \hat{a}^{\dagger}_{e} \psi(i) \rpc{1}{x} \rpc{1}{y} \phi(\mathbf{p}_{ix} + \mathbf{p}_{iy}) } \nonumber \\ & & \mbox{} + a_1 b_2 \delta (x-x_1) I(\mathbf{p}_{1x},\mathbf{p}_{1y},t_f,n) \psi(i) \phi(\mathbf{p}_i) \nonumber \\ & & \mbox{} + a_2 b_1 \delta (x-x_2) \hat{a}_{\gamma} I(\mathbf{p}_{2x},\mathbf{p}_{2y},t_f,n) \hat{a}^{\dagger}_{e} \psi(i) \rpc{2}{x} \rpc{2}{y} \phi(\mathbf{p}_{ix} + \mathbf{p}_{iy}) \nonumber \\ & & \mbox{} + a_2 b_2 \delta (x-x_2) I(\mathbf{p}_{2x},\mathbf{p}_{2y},t_f,n) \psi(i) \phi(\mathbf{p}_i) \label{e6} \end{eqnarray} The larcs in equation (\ref{e6}) create a probability per unit time $P_t$ of a state reduction which will leave only one term of equation (\ref{e6}) behind. 
If we consider all the molecules exposed to the illumination field $I(n)$, we end up with a state like (\ref{e4}), but one in which what is correlated is the direction of the momentum transferred to each molecule, rather than the position at which the transfer takes place. The abc mechanism therefore applies to the momenta transferred by many photons to a macroscopic body: the directions of these momenta are correlated, and just one reduction to a definite direction carries all the others with it through this correlation. Localisation of body B occurs without any human involvement, and with no assumptions about the structure of the other bodies. \section{Some properties of the theory} A significant property of the theory is that while it localises macroscopic objects, it can only produce localisation relative to other bodies. The larc theory is in principle incapable of producing absolute position localisation. I regard this as a strength of the theory. Most physical theories that are not explicitly relativistic are agnostic about the relativity of position: position variables can be interpreted as either relative or absolute without any effect on observable physics. The larc-abc theory is a rare example of a theory that permits only relative positions. Type 2 theories of quantum measurement are usually formulated initially in nonrelativistic terms, and they often have problems extending to the relativistic case. Both the Bohm\cite{bohm1} and GRW\cite{grw1} theories have difficulties of this kind. The larc theory, by contrast, analyses the situation the same way it would be done in quantum electrodynamics. Larc clusters are already present in quantum field theory, so they do not disrupt the relativistic invariance of the analysis. The branches in the larc number operator basis for bound states effectively define the energy basis, which again is consistent with a relativistic theory. 
If each branch is relativistic, then lopping off any one branch, or all but one, will leave behind a system that remains relativistic. It is a vital property of the larc-abc process that it leads to position localisation in appropriate circumstances. But this localisation can only occur relative to some other body, so the theory cannot give rise to a preferred reference frame. Likewise, the temporal constant $P_t$ for the probability of a state reduction also belongs to a well-defined system: that to which the relevant larc cluster belongs, such as the molecule where the photon is absorbed, so again there is no need for a preferred reference frame. Taken altogether, larc theory should be as relativistic as quantum field theory itself. Similar arguments show that the theory conserves energy. The emission of photons from B conserves energy; the absorption of photons, either in the pinhole camera or in molecule M, also conserves energy. Each branch in the superposition conserves energy, so if the amplitudes of all but one branch are zeroed, whichever branch remains conserves energy. Larc theory has a property which could be considered a further advantage over the GRW theory. GRW theory has two free parameters: the probability per unit time for a spontaneous localisation, and the parameter defining the width of the localised state. In fact, the aspect of the theory that is free to be adjusted is much larger than two parameters, because the position distribution after spontaneous localisation is free; a wide range of arbitrary localised distributions can be chosen. Larc theory does not have this freedom: the distribution of the localised state is determined by the detail of the mechanism that establishes the correlation between relative position and the bound electron states within molecules. In fact, the distribution of the localised state is determined in exactly the same way it would have been in standard `good taste' quantum theory. 
The time distribution for the spontaneous localisation in GRW is also free, but in GRW a constant probability per unit time is natural, given that every particle may localise at any time. The only event that could form an anchor for a distribution not constant in time would be each particle's own localisations, or possibly those of particles spatially correlated with the first. In this exposition I have assumed that larc theory also produces a constant probability per unit time. The justification for this is less natural, because larc theory does have a trigger event relative to which a non-constant probability distribution could be defined; nevertheless, constant probability per unit time remains a natural expectation in larc theory. So larc theory contains only one free parameter, where GRW has two, plus significant freedom in the position distribution. I have not made any attempt to resolve one point. Larc clusters occur also in virtual processes. There is therefore the question of whether virtual larc clusters should be considered to trigger state reductions just as real ones do, and, if they do, whether the temporal constant should be the same in both cases. I have no basis on which to offer an opinion on whether virtual larc clusters should be considered as triggering state reductions. The only comment I have to make is that, if they do, this will certainly have an effect on the calculation of virtual processes, but with results that may be significant only in those of high complexity. \section{Empirical tests} Since the core purpose of a measurement theory is to match the well-known behaviour of measurements, without changing the known behaviour of other types of quantum systems, empirical verification of a measurement theory is inherently difficult.
Nevertheless, as in any other case where one has changed the fundamental laws underlying a theory, one expects that there will be at least one situation where the predictions of the new theory diverge from those of the old. This is what we expect for all type 2 solutions to the measurement problem. Measurement is not a physically well-defined concept, so it is next to impossible that any physical law could exactly match its behaviour. At some point, there will be a situation where the new law operates detectably when there is no measurement occurring. The problem of verification is made worse by the phenomenon usually called environment-induced decoherence. This phenomenon mimics an aspect of the behaviour of measurements by causing the branches corresponding to the possible outcomes of measurements to lose their coherence. A measurement necessarily requires the measured system to be put into correlation with the measuring apparatus. If one considers the measuring apparatus by itself, tracing out the measured system, its possible outcome states become decohered. As the total state of the system and apparatus remains a superposition, this does not solve the measurement problem, but it does mean that interference effects that would be destroyed by a successful measurement theory are destroyed in any case by decoherence, making straightforward tests impossible. Nevertheless, the possibility remains of processes that generate large enough numbers of larcs to produce a detectable loss of interference without its being masked by environment-induced decoherence. If larc-triggered decoherence occurs during virtual processes, this may also produce detectable deviations from the behaviour predicted by standard quantum field theory. \section{Conclusion} In 1990, Shimony\cite{shim1} provided a heuristic justification for a theory in which state reductions result from sudden jumps to energy eigenstates.
Technically, larc theory is not such a theory, since in it jumps are to eigenstates of number operators defined by larc operators, not energy operators. Nevertheless, since the number operator states for electron orbitals correspond to bound state energy eigenstates, larc theory is essentially a theory of the type Shimony called for. Shimony's argument therefore may constitute support for the larc-abc theory. In my case, at least, the core intuition behind the desire for a measurement theory involving jumps to energy eigenstates is that if bound electrons jumped to energy eigenstates after absorbing incoming particles, they would carry anything correlated with them, such as the centres of mass of other bodies, into suitably classical states. For bound states, the energy eigenstates seem to form a natural solution to the basis problem, where they do not for unbound states. A severe barrier to achieving a `jumps to energy eigenstates' theory has been the expectation that it would be impossible to treat the bound states separately from the other bodies correlated with them, and therefore one would be forced to adopt a theory where the entire correlated state would need to jump to an energy eigenstate, even though this would include the centre of mass states of free bodies. From this point of view, the crucial element of larc theory is that the creation and annihilation operators in larc events provide a barrier that separates the bound energy eigenstates from other elements of the total system, so that it is possible for bound electrons to jump to energy eigenstates, and carry other systems that are correlated with them into states that need not themselves be energy eigenstates. Nor does the total state of the whole system need to be an energy eigenstate. The measurement problem is one of the oldest open questions in quantum theory. The list of attempted solutions to this problem is long. 
While many have their good points, most have failed, or at least failed to be convincing. Some remain in play, but none that can claim to be a clear solution to the problem. The theory described here has a number of advantages that I believe have not been seen before in a single theory. It solves the problem as a matter of straightforward physics, without requiring conceptual contortions in attempting to generate the Born rule probabilities, as the Everett theory does. It produces localisation of macroscopic bodies in the most physically natural way, as a direct result of illumination reflected from one body being absorbed by other bodies. And it is that rarity, a reduction theory which is naturally relativistic. I would like to thank Adrian Flitney, Michael Hall, and Angas Hurst, for valuable discussions and criticism. \end{document}
\begin{document} \title{Copula Graphical Models for Heterogeneous Mixed Data} \author{Sjoerd Hermes$^{1,2}$, Joost van Heerwaarden$^{1,2}$ and Pariya Behrouzi\thanks{Corresponding author.}$^{\text{ }1}$} \date{ $^1$ Mathematical and Statistical Methods, Wageningen University\\% $^2$ Plant Production Systems, Wageningen University\\ } \maketitle \begin{abstract} \noindent This article proposes a graphical model that handles mixed-type, multi-group data. The motivation for such a model originates from real-world observational data, which often contain groups of samples obtained under heterogeneous conditions in space and time, potentially resulting in differences in network structure among groups. Therefore, the i.i.d.\ assumption is unrealistic, and fitting a single graphical model on all data results in a network that does not accurately represent the between-group differences. In addition, real-world observational data is typically of mixed discrete-and-continuous type, violating the Gaussian assumption that is typical of graphical models, which leads to the model being unable to adequately recover the underlying graph structure. The proposed model takes these properties of the data into account by treating observed data as transformed latent Gaussian data, by means of the Gaussian copula, thereby allowing for the attractive properties of the Gaussian distribution, such as estimating the optimal number of model parameters using the inverse covariance matrix. The multi-group setting is addressed by jointly fitting a graphical model for each group and applying the fused group penalty to fuse similar graphs together. In an extensive simulation study, the proposed model is evaluated against alternative models and is shown to better recover the true underlying graph structure for the different groups.
Finally, the proposed model is applied on real production-ecological data pertaining to on-farm maize yield in order to showcase the added value of the proposed method in generating new hypotheses for production ecologists. \end{abstract} \section{Introduction} Gaussian graphical models are statistical learning techniques used to make inference on conditional independence relationships within a set of variables arising from a multivariate normal distribution \cite{lauritzen1996graphical}. These techniques have been successfully applied in a variety of fields, such as finance \citep{giudici2016graphical}, biology \citep{krumsiek2011gaussian}, healthcare (Gunathilake et al., \citeyear{gunathilake2020identification}) and others. Despite their wide applicability, the assumption of multivariate normality is often untenable. Therefore, a variety of alternative models have been proposed for, for example, the case of Poisson or exponential data (Yang et al., \citeyear{yang2015graphical}), ordinal data (Guo et al., \citeyear{guo2015graphical}) and, in the case of binary data, the Ising model. More generally, despite the availability of approaches that do not impose specific distributions on the data, they are limited by their inability to allow for non-binary discrete data (Liu et al., \citeyear{liu2012high}; Fan et al., \citeyear{fan2017high}) or involve a substantial number of parameters (Lee \& Hastie, \citeyear{lee2015learning}). Dobra and Lenkoski (\citeyear{dobra2011copula}) developed a type of Gaussian graphical model that allows for mixed-type data by combining the theory of copulas, Gaussian graphical models and the rank likelihood method (Hoff, \citeyear{hoff2007extending}). Whereas this model was formulated in a Bayesian framework, Abegaz and Wit (\citeyear{abegaz2015copula}) proposed a frequentist alternative, reasoning that the choice of priors for the inverse covariance matrix is nontrivial.
Both the Bayesian and frequentist approaches have seen further development and application to real problems in the medical (Mohammadi et al., \citeyear{mohammadi2017bayesian}) and biomedical sciences (Behrouzi \& Wit, \citeyear{behrouzi2019detecting}). Distributional assumptions aside, all aforementioned methods assume that the data is i.i.d.\ (independent and identically distributed). However, real-world observational data often contain groups of samples obtained under heterogeneous conditions in space and time, potentially resulting in differences in network structure among groups. Therefore, the i.i.d.\ assumption is unrealistic, and fitting a single graphical model on all data results in a network that does not accurately represent the between-group differences. Conversely, fitting each graph separately for each group fails to take advantage of underlying similarities that may exist between the groups, thereby possibly resulting in highly variable parameter estimates, especially if the sample size for each group is small (Guo et al., \citeyear{guo2011joint}). For these reasons, during the last decade, several researchers have developed graphical models for so-called heterogeneous data, that is, data consisting of various groups (Guo et al., \citeyear{guo2011joint}; Danaher et al., \citeyear{danaher2014joint}; Xie et al., \citeyear{xie2016joint}). Akin to graphical models for homogeneous data, research on heterogeneous graphical models has mainly pertained to the Gaussian setting, despite mixed-type heterogeneous data occurring in a wide variety of situations, such as multi-nation survey data, meteorological data measured at different locations, or medical data of different diseases. Consequently, the aim of this article is to fill the methodological gap that is graphical models for heterogeneous mixed data.
Even though Jia and Liang (\citeyear{jia2020joint}) aimed to close this methodological gap using their joint mixed learning model, the effectiveness of said model has only been shown in the case where the data follow Gaussian or binomial distributions, which is not always the case in real-world applications. In addition, the model is unable to handle missing data, which tend to be the norm rather than the exception in real-world data (Nakagawa \& Freckleton, \citeyear{nakagawa2008missing}). Although Jia and Liang included an R package with their method, it is currently deprecated and not usable for graph estimation. Motivated by an application of networks on disease status, Park and Won (\citeyear{park2022inference}) recently proposed the fused mixed graphical model (FMGM): a method to infer graph structures of mixed-type (numerical and categorical) data for multiple groups. This approach is based on the mixed graphical model by Lee and Hastie (\citeyear{lee2013structure}), but extended to the multi-group setting. The proposed model assumes that each categorical variable given all other variables follows a multinomial distribution and each numeric variable given all other variables follows a Gaussian distribution, which is not realistic in the case of Poisson or non-Gaussian continuous variables. Moreover, the imposed penalty function consists of 6 different penalty parameters to be estimated for 2 groups, a number that only grows further as the number of groups increases, resulting in the FMGM being prohibitively computationally expensive. Furthermore, no comparative analysis is done with existing methods, only with separate network estimation, giving no indication of comparative performance on different types of data. Finally, the FMGM is not accompanied by an R package that allows for such comparative analyses.
There is a need for a method that can handle more general mixed-type data consisting of any combination of continuous and ordered discrete variables in a heterogeneous setting, which to the best of our knowledge does not exist at present. Borrowing from recent developments in copula graphical models, the proposed method can handle Gaussian, non-Gaussian continuous, Poisson, ordinal and binomial variables, thereby letting researchers model a wider variety of problems. All code used in this article can be found at \url{https://github.com/sjoher/cgmhmd-analysis}, whilst the R package can be found at \url{https://github.com/sjoher/cgmhmd}. \subsection{Application to production ecological data} Interest in relationships between multiple variables based on samples obtained over different locations and time-points is particularly common in production-ecology, a science that aims to understand and predict the productivity of agricultural systems (e.g.\ yield) as a function of their genetic biological components (G), the production environment (E) and human management (M). Production-ecological data typically consist of observations from different crops, seasons, environments, or management conditions and research is likely to benefit from the use of graphical models. Moreover, production ecological data tends to be of mixed-type, consisting of (commonly) Gaussian, non-Gaussian continuous and Poisson environmental data, but also ordinal and binomial management data. A typical challenge for production-ecological research lies in explaining variability in observed yields as a function of a wide set of potential enabling and constraining variables. This is typically done by employing linear models or basic machine learning methods such as random forest that model yield as a function of a set of covariates (Ronner et al., \citeyear{ronner2016}; Bielders \& Gérard, \citeyear{bielders2015millet}; Palmas \& Chamberlin, \citeyear{palmas2020fertilizer}). 
However, advanced statistical models such as graphical models have not yet been introduced to this field. As graphical models are used to represent the conditional dependencies underlying a set of variables, we expect that these models can greatly aid researchers' understanding of G$\times$E$\times$M interactions by way of exposing new, fundamental relationships that affect plant production, which have not been captured by methods that are commonly used in the field of production ecology. Therefore, we use this field as a way to illustrate our proposed method and thereby introduce graphical models in general to production ecologists. \\ \\ This article extends the Gaussian copula graphical model to allow for heterogeneous, mixed data, where we showcase the effectiveness of the novel approach on production-ecological data. To this end, in Section \ref{Methodology}, the proposed methodology behind the Gaussian copula graphical model for heterogeneous data is presented. Section \ref{Simulation study} presents an elaborate simulation study, where the performance of the newly proposed method compared to other types of graphical models is evaluated. An application of the new method on production-ecological data consisting of multiple seasons is given in Section \ref{Real world data example}. Finally, the conclusion can be found in Section \ref{Conclusion}. \section{Methodology} \label{Methodology} A Gaussian graphical model corresponds to a graph $G = (V,E)$ that represents the full conditional dependence structure between variables represented by a set of vertices $V = \{1,2,\ldots,p\}$ through the use of a set of undirected edges $E \subset V \times V$, and depends on a $n \times p$ data matrix $\bm{X} = (X_1, X_2,\ldots,X_p), X_j = (X_{1j}, X_{2j}, \ldots, X_{nj})^T, j = 1,\ldots,p$, where $X \sim N_p(0, \Sigma)$, with $\Sigma = \Theta^{-1}$. 
$\Theta$ is known as the precision matrix containing the scaled partial correlations: $\rho_{ij} = -\frac{\Theta_{ij}}{\sqrt{\Theta_{ii}\Theta_{jj}}}$. Thus, the partial correlation $\rho_{ij}$ quantifies the dependence between $X_i$ and $X_j$ conditional on $X_{V\backslash ij}$, and $(i,j) \not\in E \Leftrightarrow \Theta_{ij} = 0$. \subsection{Copula graphical models for heterogeneous data} Let $X^{(k)} = X_1^{(k)}, X_2^{(k)}, \ldots, X_p^{(k)}$, where $k = 1,2,\ldots,K$ represents the group index, indicating differential genotypic, environmental or management situations, and $X_j^{(k)}$ is a column of length $n_{k}$, where $n_{k}$ is not necessarily equal to $n_{k'}$ for $k \neq k'$ and the data are of mixed-type, i.e.\ non-Gaussian, counts, ordinal or binomial data, as obtained from measurements on different genotypic, environmental, management and production variables. Moreover, the data across the different groups are not i.i.d.\ For group $k$, a general form of the joint cumulative distribution function is given by \begin{equation} \label{eq:jointdensity} F(x_1^{(k)},\ldots,x_p^{(k)}) = \mathbb{P}(X_1^{(k)} \leq x_1^{(k)},\ldots, X_p^{(k)} \leq x_p^{(k)}). \end{equation} As the Gaussian assumption is violated for the $X^{(k)}$, maximum likelihood estimation of $\Theta^{(k)}$ based on a Gaussian model will not suffice. For joint densities consisting of different marginals, as in (\ref{eq:jointdensity}), copulas can be applied to model the joint dependency structure between the variables (Nelsen, \citeyear{nelsen2007introduction}). In the copula graphical model literature, each observed variable $X_j$ is assumed to have arisen by some perturbation of a latent variable $Z_j$, where $Z \sim N_p(0, \Sigma)$, with correlation matrix $\Sigma$.
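The correspondence between zeros of the precision matrix and missing edges is easy to verify numerically. The following sketch (in Python for concreteness; the precision matrix values are purely illustrative, not taken from the paper) computes the scaled partial correlations $\rho_{ij} = -\Theta_{ij}/\sqrt{\Theta_{ii}\Theta_{jj}}$ and shows that a zero entry of $\Theta$ removes an edge even though the corresponding marginal covariance is nonzero:

```python
import numpy as np

# Illustrative 3x3 precision matrix Theta (hypothetical values).
Theta = np.array([[ 2.0, -0.8,  0.0],
                  [-0.8,  2.0, -0.5],
                  [ 0.0, -0.5,  1.5]])

# Scaled partial correlations: rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
d = np.sqrt(np.diag(Theta))
rho = -Theta / np.outer(d, d)
np.fill_diagonal(rho, 1.0)

# Theta[0, 2] == 0 encodes the missing edge (1, 3): X_1 and X_3 are
# conditionally independent given X_2, yet marginally dependent.
Sigma = np.linalg.inv(Theta)
print(rho[0, 2] == 0)    # conditional independence
print(Sigma[0, 2] != 0)  # marginal dependence through X_2
```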
The choice for a Gaussian latent variable is motivated by the familiar closed-form of the density and the fact that the Gaussian copula correlation matrix enforces the same conditional (in)dependence relations as the precision matrix of graphical models (Dobra \& Lenkoski, \citeyear{dobra2011copula}; Behrouzi \& Wit, \citeyear{behrouzi2019detecting}; Abegaz \& Wit, \citeyear{abegaz2015copula}). This article also assumes a Gaussian distribution for the latent variables such that \begin{equation*} Z^{(k)} \sim N_p(0, \Sigma^{(k)}), \end{equation*} where $\Sigma^{(k)} \in \mathbb{R}^{p \times p}$ represents the correlation matrix for group $k$. The latent variables are linked to the observed data as \begin{equation*} X_j^{(k)} = F_j^{(k)^{-1}}(\Phi(Z_j^{(k)})), \end{equation*} where the $F_j^{(k)}()$ are non-decreasing marginal distribution functions, the $F_j^{(k)^{-1}}()$ are quantile functions, $\Phi()$ the standard normal cdf and the $X_j^{(k)}, j = 1,2,\ldots, p$ are observed continuous and ordered discrete variables taking values (in the discrete case) in $\{0,1,\ldots, d_j^{(k)}-1\}, d_j^{(k)} \geq 2$, with $d_j^{(k)}$ being the number of categories of variable $j$ in group $k$. A visualization of the relationship between the latent and observed variables is given in Figure \ref{fig:copulatransform}. 
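As a concrete illustration of the transform $X_j^{(k)} = F_j^{(k)^{-1}}(\Phi(Z_j^{(k)}))$, the following sketch (Python; the correlation value, sample size and choice of marginals are our own arbitrary choices) generates mixed-type columns from a single latent Gaussian draw:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 500, 3

# Latent Gaussian draws Z ~ N_p(0, Sigma); the 0.6 correlation is arbitrary.
Sigma = np.full((p, p), 0.6)
np.fill_diagonal(Sigma, 1.0)
Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
U = stats.norm.cdf(Z)  # Phi(Z_j): uniform marginals

# Apply different quantile functions F_j^{-1} to obtain mixed-type columns.
X_pois = stats.poisson.ppf(U[:, 0], mu=3)       # count variable
X_ord  = stats.binom.ppf(U[:, 1], n=4, p=0.5)   # ordinal variable, 5 levels
X_exp  = stats.expon.ppf(U[:, 2], scale=2.0)    # non-Gaussian continuous
```

Because every quantile function is nondecreasing, the ordering of the latent values is preserved in the observed columns, which is exactly the property exploited in the next paragraph.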
\begin{figure} \caption{Relationship between the latent and observed values for ordinal variable $X_j^{(k)}$.} \label{fig:copulatransform} \end{figure} \noindent The copula function joining the marginal distributions is denoted as \begin{equation*} \mathbb{P}(X_1^{(k)} \leq x_1^{(k)},\ldots, X_p^{(k)} \leq x_p^{(k)}) = C(F_1^{(k)}(x_1^{(k)}),\ldots, F_p^{(k)}(x_p^{(k)})), \end{equation*} where $F_j^{(k)}(x_j^{(k)})$ is standard uniform (Casella \& Berger, \citeyear{casella2001statistical}) and, due to the Gaussian assumption of the $Z_j^{(k)}$, can be written as \begin{equation*} \Phi_{\Sigma^{(k)}}(\Phi^{-1}(F_1^{(k)}(x_1^{(k)})),\ldots, \Phi^{-1}(F_p^{(k)}(x_p^{(k)}))) = \Phi_{\Sigma^{(k)}}(\Phi^{-1}(u_1^{(k)}),\ldots,\Phi^{-1}(u_p^{(k)})), \end{equation*} where $\Phi_{\Sigma^{(k)}}()$ is a cdf of a multivariate normal distribution with correlation matrix $\Sigma^{(k)} \in \mathbb{R}^{p \times p}$. As the $\Phi()$ is always nondecreasing and the $F_j^{(k)^{-1}}(t)$ are nondecreasing due to the ordered nature of the data, we have that $x_{ij}^{(k)} < x_{i'j}^{(k)}$ implies $z_{ij}^{(k)} < z_{i'j}^{(k)}$ and $z_{ij}^{(k)} < z_{i'j}^{(k)}$ implies $x_{ij}^{(k)} \leq x_{i'j}^{(k)}, 1 \leq i \neq i' \leq n_{k}$, see Hoff (\citeyear{hoff2007extending}). Thus, we have that $z_j^{(k)} \in D(x_j^{(k)}) = \{z_j^{(k)} \in \mathbb{R}^{n_{k}}: L_{ij}(x_{ij}^{(k)}) < z_{ij}^{(k)} < U_{ij}(x_{ij}^{(k)})\}$, where $L_{ij}(x_{ij}^{(k)}) = \max\{z_{i'j}^{(k)} : x_{i'j}^{(k)} < x_{ij}^{(k)}\}$ and $U_{ij}(x_{ij}^{(k)}) = \min\{z_{i'j}^{(k)} : x_{ij}^{(k)} < x_{i'j}^{(k)}\}$. From here on out, we refer to the set of intervals containing the latent data $D(x) = \{z \in \mathbb{R}^{\sum^K n_k \times p}:z_j^{(k)} \in D(x_j^{(k)})\}$ as $D$.
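The bounds $L_{ij}$ and $U_{ij}$ can be computed directly from their definitions. A minimal sketch (Python; the helper name and the toy data are our own, for one ordinal column given current latent values):

```python
import numpy as np

def truncation_bounds(x_col, z_col):
    # L_ij = max{z_i'j : x_i'j < x_ij}, U_ij = min{z_i'j : x_ij < x_i'j},
    # with -inf / +inf when the corresponding set is empty.
    n = len(x_col)
    L = np.full(n, -np.inf)
    Upp = np.full(n, np.inf)
    for i in range(n):
        below = z_col[x_col < x_col[i]]
        above = z_col[x_col[i] < x_col]
        if below.size:
            L[i] = below.max()
        if above.size:
            Upp[i] = above.min()
    return L, Upp

x = np.array([0, 1, 1, 2])              # observed ordinal column
z = np.array([-1.1, -0.2, 0.3, 1.4])    # current latent values
L, U = truncation_bounds(x, z)          # each z_i lies in (L_i, U_i)
```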
\noindent In order to facilitate the joint estimation of the different $\Theta^{(k)}$, the probability density function over all $K$ groups is given as \begin{equation} \label{eq:pdfgroups} \begin{gathered} f(x_1^{(1)},\ldots,x_p^{(1)},\ldots\ldots,x_1^{(K)},\ldots,x_p^{(K)})\\ = \prod_{k = 1}^K \left[c\left(F_1^{(k)}(x_1^{(k)}),\ldots, F_p^{(k)}(x_p^{(k)})\right)\prod_{j = 1}^p f^{(k)}_j(x_j^{(k)})\right], \end{gathered} \end{equation} where $c(F_1^{(k)}(x_1^{(k)}), \ldots, F_p^{(k)}(x_p^{(k)}))$ is the copula density function and $f^{(k)}_j$ is the marginal density function for the $j$-th variable and the $k$-th group. This copula density is obtained by taking the derivative of the cdf with respect to the marginals. As the Gaussian copula is used, the copula density function can be rewritten as: \begin{equation*} \begin{gathered} c(F_1^{(k)}(x_1^{(k)}), \ldots, F_p^{(k)}(x_p^{(k)}))\\ = \frac{\partial^p C}{\partial F_1^{(k)}, \ldots, \partial F_p^{(k)}}\\ = \frac{\phi_{\Sigma^{(k)}}(\Phi^{-1}(u_1^{(k)}),\ldots,\Phi^{-1}(u_p^{(k)}))}{\prod_{i=1}^p\phi(\Phi^{-1}(u_i^{(k)}))}\\ = (2\pi)^{-\frac{p}{2}}\det(\Sigma^{(k)})^{-\frac{1}{2}}\frac{\exp(-\frac{1}{2}(\Phi^{-1}(u_1^{(k)}),\ldots,\Phi^{-1}(u_p^{(k)}))^T \Sigma^{(k)^{-1}}(\Phi^{-1}(u_1^{(k)}),\ldots,\Phi^{-1}(u_p^{(k)})))}{\prod_{i=1}^p (2\pi)^{-\frac{1}{2}}\exp(-\frac{1}{2}\Phi^{-1}(u_i^{(k)})\Phi^{-1}(u_i^{(k)}))}\\ = (2\pi)^{-\frac{p}{2}}\det(\Sigma^{(k)})^{-\frac{1}{2}}\frac{\exp(-\frac{1}{2}Z^{(k)^T} \Sigma^{(k)^{-1}}Z^{(k)})}{(2\pi)^{-\frac{p}{2}}\exp(-\frac{1}{2}Z^{(k)^T} Z^{(k)})}\\ = \det(\Theta^{(k)})^{\frac{1}{2}}\exp(-\frac{1}{2}Z^{(k)^T} (\Theta^{(k)}-I)Z^{(k)}), \end{gathered} \end{equation*} where $\phi_{\Sigma^{(k)}}()$ and $\phi()$ denote the multivariate and univariate standard normal density functions, $Z^{(k)}$ is used to shorten the $n_{k} \times p$ latent matrix $(\Phi^{-1}(u_1^{(k)}),\ldots,\Phi^{-1}(u_p^{(k)}))$ and $I$ is a $p \times p$ identity matrix.
The full log-likelihood over $K$ groups is then given by \begin{gather} \ell(\{\Theta^{(k)}\}_{k = 1}^{K}|\bm{X}) = \log\left[\prod_{k = 1}^K \prod_{i = 1}^{n_{k}}\left( c(F_1^{(k)}(x_{i1}^{(k)}), \ldots, F_p^{(k)}(x_{ip}^{(k)}))\prod_{j = 1}^p f^{(k)}_j(x_{ij}^{(k)})\right)\right] \notag\\ = \sum_{k = 1}^K \sum_{i = 1}^{n_{k}}\log(c(F_1^{(k)}(x_{i1}^{(k)}), \ldots, F_p^{(k)}(x_{ip}^{(k)}))) + \sum_{k = 1}^K \sum_{i = 1}^{n_{k}}\sum_{j = 1}^p \log(f^{(k)}_j(x_{ij}^{(k)})) \notag\\ = \sum_{k = 1}^K \sum_{i = 1}^{n_{k}}\log\left(\det(\Theta^{(k)})^{\frac{1}{2}}\exp(-\frac{1}{2}Z_i^{(k)^T} (\Theta^{(k)}-I)Z_i^{(k)})\right)\notag\\ + \sum_{k = 1}^K \sum_{i = 1}^{n_{k}}\sum_{j = 1}^p \log\left( \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}x_{ij}^{(k)^2}\right)\right) \notag\\ = \frac{1}{2}\sum_{k = 1}^K n_{k}\log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}Z_i^{(k)^T} (\Theta^{(k)}-I)Z_i^{(k)}\notag\\ - \frac{1}{2}\sum_{k = 1}^K n_{k}p\log(2\pi) - \frac{1}{2}\sum_{k = 1}^K \sum_{i = 1}^{n_{k}}\sum_{j = 1}^p x_{ij}^{(k)^2} \notag\\ \propto \frac{1}{2}\sum_{k = 1}^K n_{k}\log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}Z_i^{(k)^T} (\Theta^{(k)}-I)Z_i^{(k)} \label{eq:loglik} \end{gather} \noindent where $\bm{X} = (X^{(1)},\ldots, X^{(K)})^T$. We denote $\{\Theta^{(k)}\}_{k = 1}^{K}$ as $\bm{\Theta}$ for the purpose of simplicity. The two rightmost terms in the penultimate line of (\ref{eq:loglik}) were omitted, as they are constant with respect to $\Theta^{(k)}$ because of the standard normal marginals. \subsection{Model estimation} When estimating the marginals, a nonparametric approach is adhered to, as is common in the copula literature. This is due to the computational costs involved in their estimation and because only the dependencies encoded in the $\Theta^{(k)}$ are of interest.
They are estimated as $\hat{F}^{(k)}_j(x) = \frac{1}{n_{k}+1}\sum_{i = 1}^{n_k}\mathbb{I}(X_{i j}^{(k)} \leq x)$. Whilst (\ref{eq:loglik}) allows for the joint estimation of the graphical models pertaining to the different groups, these models are not sparse and cannot enforce relations to be the same. Sparsity is a common assumption in biological networks and production ecology is not an exception. Consider for example the solubilization of fertiliser, which is independent of root activity (de Wit, \citeyear{de1953physical}), the independence between nitrogen and yield for certain crops (Raun et al., \citeyear{raun2011independence}), or, more generally, the independence between weather and various management techniques. Moreover, if certain groups are highly similar, for example different locations with similar climates, enforcing relations between those groups to be the same is both realistic and parsimonious. Therefore, a fused-type penalty is imposed upon the precision matrix, such that the penalised log-likelihood function has the following form \begin{equation} \label{eq:loglikpen} \begin{gathered} \ell(\bm{\Theta}|\bm{X}) = \frac{1}{2}\sum_{k = 1}^K n_{k} \log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}Z_i^{(k)^T} (\Theta^{(k)}-I)Z_i^{(k)}\\ - \lambda_1\sum_{k=1}^K\sum_{j\neq j'}|\theta_{jj'}^{(k)}| - \lambda_2\sum_{k<k'}\sum_{j,j'}|\theta_{jj'}^{(k)} - \theta_{jj'}^{(k')}| \end{gathered} \end{equation} for $1 \leq k \neq k' \leq K$ and $1 \leq j \neq j' \leq p$. Here, $\lambda_1$ controls the sparsity of the $K$ different graphs and $\lambda_2$ controls the edge-similarity between the $K$ different graphs. Higher values for $\lambda_1$ and $\lambda_2$ correspond to respectively sparser and more similar graphs, where similarity is not only limited to similar sparsity patterns in the different $\Theta^{(k)}$, but also extends to attaining the exact same coefficients across different $\Theta^{(k)}$.
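The nonparametric marginal estimate $\hat{F}^{(k)}_j$ is a one-liner; the $n_k + 1$ denominator keeps $\Phi^{-1}(\hat{F}^{(k)}_j(x))$ finite at the sample maximum. A sketch (Python; the helper name and toy data are our own):

```python
import numpy as np
from scipy import stats

def F_hat(x_col):
    # Rescaled empirical cdf: (1/(n+1)) * #{i : X_i <= x}
    n = len(x_col)
    return np.array([(x_col <= v).sum() for v in x_col]) / (n + 1)

x = np.array([3.2, 1.0, 5.5, 1.0])
u = F_hat(x)            # values strictly inside (0, 1)
z = stats.norm.ppf(u)   # latent pseudo-observations stay finite
```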
The fused-type penalty for heterogeneous data graphical models was originally proposed by Danaher et al.\ (\citeyear{danaher2014joint}). Whenever groups pertaining to seasons or environments share similar characteristics, production ecological research has hinted at similar edge values between groups (Hajjarpoor et al., \citeyear{hajjarpoor2021environmental}; Zhang et al., \citeyear{zhang1999water}; Richards \& Townley-Smith, \citeyear{richards1987variation}). Consider the case where groups represent different locations. If two groups have very similar environments, in both weather patterns and soil properties, many conditional independence relations are expected to be similar between the groups, as the underlying production ecological relations are assumed to be invariant across (near) identical situations (Connor et al., \citeyear{connor2011crop}). Conversely, if the amount of shared characteristics is limited between the groups, the edge values between groups are expected to be different, resulting from the low value for $\lambda_2$ obtained from a penalty parameter selection method. Moreover, this fused-type penalty has been shown to outperform other types of penalties (Danaher et al., \citeyear{danaher2014joint}), and, if the data contains only 2 groups, this type of penalty has a very light computational burden, due to the existence of a closed-form solution for (\ref{eq:loglikpen}) once the conditional expectations of the latent variables have been computed. \\ \\ As direct maximization of $\ell(\bm{\Theta}|\bm{X})$ is not feasible, due to the nonexistence of a closed-form maximizer of (\ref{eq:loglikpen}), an iterative method is needed to estimate the value of $\bm{\Theta}_{\lambda_1,\lambda_2}$. A common algorithm used in the presence of latent variables is the EM algorithm (McLachlan \& Krishnan, \citeyear{mclachlan2007algorithm}).
A benefit of this algorithm is that it can handle missing data, which is not uncommon in production ecology as plants can die mid-season due to external stresses such as droughts or pests. The EM algorithm alternates between an E-step and an M-step, where during the E-step the expectation of the (unpenalised) complete-data (both $\bm{X}$ and $\bm{Z}$) log-likelihood conditional on the event $D$ and the estimate $\bm{\hat{\Theta}}^{(m)}$ obtained during the previous M-step is computed \begin{gather} Q(\bm{\Theta}|\bm{\hat{\Theta}}^{(m)}) = \mathbb{E}\left[\sum_{k = 1}^K \sum_{i = 1}^{n_{k}}\log(p(Z_i|\bm{\Theta}))|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D\right]\notag\\ = \mathbb{E}\left[\frac{1}{2}\sum_{k = 1}^K n_{k}\log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}Z_i^{(k)^T} (\Theta^{(k)}-I)Z_i^{(k)}\bigg| x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D\right]\notag\\ = \mathbb{E}\left[\frac{1}{2}\sum_{k = 1}^K n_{k}\log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}Z_i^{(k)^T}\Theta^{(k)}Z_i^{(k)} + \frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}Z_i^{(k)^T}Z_i^{(k)}\bigg| x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D\right]\notag\\ = \frac{1}{2}\sum_{k = 1}^K n_{k}\log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}\mathbb{E}(Z_i^{(k)^T}\Theta^{(k)}Z_i^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \notag\\+ \frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}\mathbb{E}(Z_i^{(k)^T}Z_i^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\notag\\ = \frac{1}{2}\sum_{k = 1}^K n_{k}\log(\det(\Theta^{(k)})) -\frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}\text{tr}(\Theta^{(k)}\mathbb{E}(Z_i^{(k)}Z_i^{(k)^T}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)) \notag\\ + \frac{1}{2}\sum_{k = 1}^K\sum_{i = 1}^{n_{k}}\text{tr}(\mathbb{E}(Z_i^{(k)}Z_i^{(k)^T}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D))\notag\\ = \frac{1}{2}\sum_{k = 1}^K n_{k}\left[\log(\det(\Theta^{(k)})) -\text{tr}(\Theta^{(k)}\bar{R}^{(k)}) + \text{tr}(\bar{R}^{(k)})\right]\notag\\ 
\propto \frac{1}{2}\sum_{k = 1}^K n_{k}\left[\log(\det(\Theta^{(k)})) -\text{tr}(\Theta^{(k)}\bar{R}^{(k)})\right] \label{eq:e-step} \end{gather} where \begin{equation} \begin{gathered} \label{eq:test} \bar{R}^{(k)} = \frac{1}{n_{k}}\sum_{i = 1}^{n_{k}}\mathbb{E}(Z_i^{(k)}Z_i^{(k)^T}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\\ \mathbb{E}(Z_i^{(k)}Z_i^{(k)^T}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) = \mathbb{E}(Z_i^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\mathbb{E}(Z_i^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)^T\\ + \text{ cov}(Z_i^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D), \end{gathered} \end{equation} where $\bar{R}^{(k)}$ is the estimated correlation matrix and the conditional expectations involved are the first and second moments of a doubly truncated multivariate normal density. Expressions for these moments are given by Manjunath and Wilhelm (\citeyear{manjunath2021moments}). Note that $\text{tr}(\bar{R}^{(k)})$ does not depend on $\Theta^{(k)}$ and was therefore omitted in the last step of the derivation of $Q(\bm{\Theta}|\bm{\hat{\Theta}}^{(m)})$. \\ \\ Despite the existence of a functional expression for $\mathbb{E}(Z_i^{(k)}Z_i^{(k)^T}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$, Behrouzi and Wit (\citeyear{behrouzi2019detecting}) used two alternative methods to compute this quantity, as directly computing the moments is computationally expensive, even for a moderate $p$ of 50. Accordingly, the faster alternatives proposed are a Gibbs sampling approach and an approximation-based approach. Whereas the former results in better estimates for the precision matrices, the latter is computationally more efficient.
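For intuition, the univariate analogues of these moments have simple closed forms. The following sketch (illustrative Python using only the standard library and our own helper names, not part of the paper's implementation) computes the first and second moments of a standard normal doubly truncated to $(a,b)$, the scalar building blocks behind $\bar{R}^{(k)}$:

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncnorm_moments(a, b):
    """E[Z | a < Z < b] and E[Z^2 | a < Z < b] for standard normal Z."""
    z = Phi(b) - Phi(a)                         # normalising constant
    m1 = (phi(a) - phi(b)) / z                  # first moment
    m2 = 1.0 + (a * phi(a) - b * phi(b)) / z    # second moment
    return m1, m2
```

For a symmetric interval such as $(-1, 1)$ the first moment is zero and the second moment shrinks well below one, illustrating how truncation concentrates the latent mass.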
The Gibbs sampler is built on the Fortran-based truncated normal sampler in the \texttt{tmvtnorm} R package (Wilhelm \& Manjunath, \citeyear{tmvtnorm}).\\ \\ The Gibbs method consists of a limited number of steps, centred on drawing $N$ samples from the truncated normal distribution. Here, $t_{x_{i}^{(k)}}$ is a vector of length $p$ containing the lower truncation points for observation $x_{i}^{(k)}$, and $t_{x_{i}^{(k)} + 1}$ is a vector of length $p$ containing the upper truncation points for that observation. The method is summarised in Algorithm \ref{alg:gibbs}. \begin{algorithm}[H] \caption{Gibbs method}\label{alg:gibbs} \hspace*{\algorithmicindent} \textbf{Input:} $\bm{X}$ and $\{\Sigma^{(1)},\ldots,\Sigma^{(K)}\}$\\ \hspace*{\algorithmicindent} \textbf{Output:} $\{\bar{R}^{(1)},\ldots,\bar{R}^{(K)}\}$ \begin{algorithmic}[1] \FOR{$k = 1$ to $K$} \FOR{$i = 1$ to $n_k$} \STATE Compute $t_{x_{i}^{(k)}}^{(k)},t_{x_{i}^{(k)} + 1}^{(k)}$ \STATE Generate $Z^{(k)}_{i*} = (Z^{(k)}_{i11}, \ldots, Z^{(k)}_{iN1}, \ldots, Z^{(k)}_{i1p}, \ldots, Z^{(k)}_{iNp}) \sim TN(0, \Sigma^{(k)}, t_{x_{i}^{(k)}}^{(k)}, t_{x_{i}^{(k)} + 1}^{(k)})$ \ENDFOR \STATE Compute $\bar{R}^{(k)} = \frac{1}{n_{k}}\sum_{i = 1}^{n_{k}}\frac{1}{N}Z^{(k)^T}_{i*} Z^{(k)}_{i*}$ \ENDFOR \end{algorithmic} \end{algorithm} \noindent When the Gibbs method is run for the first iteration ($m = 1$) of the EM algorithm, $\Sigma^{(k)} = I_p$; otherwise $\Sigma^{(k)} = \big(\hat{\Theta}^{(m-1),(k)}\big)^{-1}$. Computing the sample mean of simulated data from a truncated normal distribution leads to consistent estimates of $\bar{R}^{(k)}$ (Manjunath \& Wilhelm, \citeyear{manjunath2021moments}). \\ \\ Guo et al.\ (\citeyear{guo2015graphical}) proposed an approximation method to estimate the conditional expectation of the covariance matrix.
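The sampling step of Algorithm \ref{alg:gibbs} can be sketched as a componentwise Gibbs sampler: each latent coordinate is redrawn from its univariate truncated normal full conditional. The snippet below is a minimal Python illustration with hypothetical function names (`gibbs_truncated_mvn`, `r_bar`) and assumed finite truncation bounds, not the \texttt{tmvtnorm}-based implementation used in the paper:

```python
import numpy as np
from statistics import NormalDist

ND = NormalDist()

def gibbs_truncated_mvn(sigma, lower, upper, n_samples=2000, burn=200, seed=0):
    """Componentwise Gibbs sampler for Z ~ N(0, sigma) truncated to [lower, upper]."""
    rng = np.random.default_rng(seed)
    p = sigma.shape[0]
    theta = np.linalg.inv(sigma)       # precision matrix
    z = np.zeros(p)                    # starting point (assumes 0 lies inside the box)
    out = np.empty((n_samples, p))
    for t in range(n_samples + burn):
        for j in range(p):
            # full conditional Z_j | Z_{-j} is N(mu, s2) with
            # s2 = 1/theta_jj and mu = -s2 * sum_{l != j} theta_jl z_l
            s2 = 1.0 / theta[j, j]
            mu = -s2 * (theta[j] @ z - theta[j, j] * z[j])
            s = np.sqrt(s2)
            # inverse-CDF sampling restricted to [lower_j, upper_j]
            lo, hi = ND.cdf((lower[j] - mu) / s), ND.cdf((upper[j] - mu) / s)
            z[j] = mu + s * ND.inv_cdf(rng.uniform(lo, hi))
        if t >= burn:
            out[t - burn] = z
    return out

def r_bar(samples):
    """Monte-Carlo estimate of E[Z Z^T | truncation], as in Algorithm 1."""
    return samples.T @ samples / samples.shape[0]
```

With very wide truncation bounds the estimate recovers the unconstrained second-moment matrix $\Sigma^{(k)}$, which is a useful sanity check for an implementation.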
When $j = j'$, the variance elements of this matrix correspond to the second moments $\mathbb{E}(Z_{ij}^{(k)^2}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$, and the covariance elements ($j \neq j'$) are approximated by $\mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\mathbb{E}(Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$, which is simply a product of the first moments for variables $j$ and $j'$. The method is summarised in Algorithm \ref{alg:approx}, and details can be found in Appendix \ref{Approximate method details}. \begin{algorithm}[H] \caption{Approximate method}\label{alg:approx} \hspace*{\algorithmicindent} \textbf{Input:} $\bm{X}$\\ \hspace*{\algorithmicindent} \textbf{Output:} $\{\tilde{R}^{(1)},\ldots,\tilde{R}^{(K)}\}$\\ \hspace*{\algorithmicindent} \textbf{Initialise:} $\text{ }\mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \approx \mathbb{E}(Z_{ij}^{(k)}|x_{ij}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D),\\ \hspace*{\algorithmicindent}\text{ }$ $\mathbb{E}(Z_{ij}^{(k)^2}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \approx \mathbb{E}(Z_{ij}^{(k)^2}|x_{ij}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$\\ \hspace*{\algorithmicindent}$\text{ and }\mathbb{E}(Z_{ij}^{(k)}Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \approx \mathbb{E}(Z_{ij}^{(k)}|x_{ij}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\mathbb{E}(Z_{ij'}^{(k)}|x_{ij'}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$, \\ \hspace*{\algorithmicindent}$\text{ }$for $i = 1,\ldots, n_k$, $j,j'=1,\ldots,p$ and $k = 1,\ldots,K$ \begin{algorithmic}[1] \FOR{$k = 1$ to $K$} \FOR{$i = 1$ to $n_k$} \STATE Update $\mathbb{E}(Z_{ij}^{(k)^2}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$ for $j = 1,\ldots,p$ using (\ref{eq:z}) \STATE Update $\mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$ for $j = 1,\ldots,p$ using (\ref{eq:z2}) \STATE Set $\mathbb{E}(Z_{ij}^{(k)}Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) = \mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\mathbb{E}(Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$ for $1 \leq j \neq j' \leq p$ \ENDFOR \STATE Compute $\tilde{r}_{jj'}^{(k)} = \frac{1}{n_{k}}\sum_{i = 1}^{n_{k}}\mathbb{E}(Z_{ij}^{(k)}Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$ for $1 \leq j, j' \leq p$ \ENDFOR \end{algorithmic} \end{algorithm} \noindent After obtaining an estimate for all $\bar{R}^{(k)}$, using either the Gibbs method or the approximate method, the M-step commences, which consists of maximising (\ref{eq:e-step}) with respect to the precision matrices, subject to the penalties imposed in (\ref{eq:loglikpen}): \begin{equation*} \hat{\bm{\Theta}}_{\lambda_1,\lambda_2}^{(m + 1)} = \argmax_{\bm{\Theta}}\left\{\frac{1}{2}\sum_{k = 1}^K n_{k}\left[\log(|\Theta^{(k)}|) - \text{tr}(\Theta^{(k)}\bar{R}^{(k)})\right] - \lambda_1\sum_{k=1}^K||\Theta^{(k)}||_1 - \lambda_2\sum_{k<k'}||\Theta^{(k)} - \Theta^{(k')}||_1\right\}, \end{equation*} which is solved using the fused graphical lasso of Danaher et al.\ (\citeyear{danaher2014joint}). \subsection{Model selection} Instead of a single penalty parameter, as is typical for graphical models, the copula graphical model for heterogeneous data requires the selection of two penalty parameters. In a predictive setting, the AIC (Akaike, \citeyear{akaike1973second}) penalty and cross-validation approaches are commonly applied, whereas the BIC (Schwarz, \citeyear{schwarz1978estimating}), EBIC (Chen \& Chen, \citeyear{chen2012extended}) and StARS (Liu et al., \citeyear{liu2010stability}) approaches are designed for graph identification (Vujačić et al., \citeyear{vujavcic2015computationally}). When considering a grid of $\lambda_1 \times \lambda_2$ combinations of penalty parameters, computational cost becomes crucial.
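To make the M-step objective concrete: when $\lambda_1 = \lambda_2 = 0$ the maximiser has the closed form $\hat{\Theta}^{(k)} = \bar{R}^{(k)^{-1}}$. The sketch below (illustrative Python with our own function names, not the fused graphical lasso solver of Danaher et al.) evaluates the penalised objective and this unpenalised maximiser:

```python
import numpy as np

def m_step_unpenalised(r_bars):
    """With lambda1 = lambda2 = 0, maximising (1/2) sum_k n_k [log det(Theta_k)
    - tr(Theta_k R_k)] has the closed form Theta_k = inv(R_k)."""
    return [np.linalg.inv(r) for r in r_bars]

def penalised_objective(thetas, r_bars, n_ks, lam1, lam2):
    """The full M-step objective, including the lasso and fused penalties."""
    val = 0.0
    for n_k, th, r in zip(n_ks, thetas, r_bars):
        val += 0.5 * n_k * (np.linalg.slogdet(th)[1] - np.trace(th @ r))
    val -= lam1 * sum(np.abs(th).sum() for th in thetas)        # elementwise l1
    for a in range(len(thetas)):
        for b in range(a + 1, len(thetas)):                     # fused penalty
            val -= lam2 * np.abs(thetas[a] - thetas[b]).sum()
    return val
```

With $\lambda_1, \lambda_2 > 0$ the maximiser no longer has a closed form, which is why the fused graphical lasso solver is needed.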
Therefore, unless a coarse grid structure for the penalty parameters is used, or researchers are willing to spend a substantial amount of time waiting for the ``optimal'' combination of penalty parameters, information criteria are preferred over computationally intensive methods such as cross-validation and StARS. Two of these, the AIC and EBIC, are given here for heterogeneous data: \begin{gather} \text{AIC}(\lambda_1,\lambda_2) = \sum_{k=1}^K\left[n_{k} \text{tr}(S^{(k)} \hat{\Theta}_{\lambda_1,\lambda_2}^{(k)})-n_{k} \log(|\hat{\Theta}_{\lambda_1,\lambda_2}^{(k)}|) + 2 \nu^{(k)}_{\lambda_1,\lambda_2}\right]\\ \text{EBIC}(\lambda_1,\lambda_2) = \sum_{k=1}^K\left[n_{k} \text{tr}(S^{(k)} \hat{\Theta}_{\lambda_1,\lambda_2}^{(k)})-n_{k} \log(|\hat{\Theta}_{\lambda_1,\lambda_2}^{(k)}|) + \log(n_{k}) \nu^{(k)}_{\lambda_1,\lambda_2} + 4\gamma\log(p) \nu^{(k)}_{\lambda_1,\lambda_2}\right] \label{eq:EBIC} \end{gather} where the degrees of freedom $\nu^{(k)}_{\lambda_1,\lambda_2} = \text{card}\{(i,j): i < j, \hat{\theta}_{ij}^{(k)} \neq 0\}$ and $0 \leq \gamma \leq 1$ is an additional penalty parameter, commonly set to 0.5. The EBIC tends to lead to sparser networks than the AIC (Vujačić et al., \citeyear{vujavcic2015computationally}). \section{Simulation studies} \label{Simulation study} To demonstrate the added value of the proposed copula graphical model for heterogeneous mixed data, a simulation study is undertaken using a variety of settings, in which the relevant parameter values are known a priori. Both the Gibbs-based and approximation-based copula graphical models are evaluated, together with the following models: the fused graphical lasso (FGL) by Danaher et al.\ (\citeyear{danaher2014joint}) and the graphical lasso (GLASSO) method (Friedman et al., \citeyear{friedman2008sparse}), where the networks are fitted separately for each group.
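As a concrete reading of (\ref{eq:EBIC}), the following sketch (illustrative Python; the function name is ours, not from an existing package) evaluates the EBIC for a multi-group fit:

```python
import numpy as np

def ebic(thetas, s_hats, n_ks, gamma=0.5):
    """EBIC for a multi-group fit:
    sum_k [n_k tr(S_k Theta_k) - n_k log|Theta_k| + (log n_k + 4 gamma log p) nu_k],
    with nu_k the number of nonzero upper off-diagonal entries of Theta_k."""
    p = thetas[0].shape[0]
    total = 0.0
    for n_k, th, s in zip(n_ks, thetas, s_hats):
        nu = np.count_nonzero(th[np.triu_indices(p, k=1)])  # degrees of freedom
        total += (n_k * np.trace(s @ th) - n_k * np.linalg.slogdet(th)[1]
                  + (np.log(n_k) + 4 * gamma * np.log(p)) * nu)
    return total
```

The $4\gamma\log(p)\,\nu^{(k)}$ term is what penalises additional edges more heavily as $p$ grows, pushing the selection towards sparser graphs than the AIC.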
Whilst the joint mixed learning method by Jia and Liang (\citeyear{jia2020joint}) aims to analyse data similar to that addressed by the copula graphical model for heterogeneous data, its R package \texttt{equSA} (Jia et al., \citeyear{equSA}) is no longer supported, which, at the time of writing, resulted in constant crashes of the R software when running the joint mixed learning method. As an aside, it should be noted that both the FGL and GLASSO methods assume Gaussian data, whereas the data used to compare the methods are of mixed type. Even though Poisson data can be normalised using, for instance, a base-10 logarithmic transformation, ordinal and binary data cannot. For this reason, the data were not normalised. \\ \\ For each network type, 25 datasets are generated for every combination of $p = \{50,100\}$, $n_{k} = \{10, 50, 100, 500\}$ and $K = 3$ groups. The combinations of these values for $n_k$ and $p$ result in both high- and low-dimensional scenarios. Moreover, the choice of $p$ reflects the typical number of variables in a production-ecological dataset, and the choice of $K$ the number of seasons or environments analysed in such data. The networks used in this simulation study are a cluster network, a scale-free network and a random network according to the Erdős-Rényi model (Erdős \& Rényi, \citeyear{erdos1959random}). The choice of the first network is motivated by the fact that for production-ecological data, when group-membership variables exist, such as environments or seasons, we expect clusters with a large number of within-cluster edges relative to the number of between-cluster edges to arise. The scale-free network represents the opposite of a cluster network, consisting of a relatively high number of edges between clusters and a low number of edges within clusters. As model performance under opposite conditions is also relevant to evaluate, the scale-free network was chosen.
The last network choice, the random network, allows the evaluation of the proposed copula graphical model under an unspecified graph structure, where the edge connection probability $p_e$ results in sparse or dense graphs, depending on whether the probability is close to 0 or 1, respectively. Having a model that performs well under assumed sparsity without additional structural network assumptions is useful, as it is not always known a priori what the underlying graph should look like. The data is simulated in the following way: \begin{enumerate} \item Generate graph $G$ and (initial) shared precision matrix $\Theta^{(s)}$ according to the type of network: cluster, scale-free or random, by setting the values in $\Theta^{(s)}$ that correspond to edges in $G$ to values in $[-1, -0.5] \cup [0.5, 1]$ \item Create different precision matrices $\Theta^{(k)}$ for $k = 1,\ldots,K$ by randomly filling in $\lfloor\rho M\rfloor$ zero elements of $\Theta^{(k)}$, where $\rho$ is a dissimilarity parameter and $M$ is the number of nonzero elements in the lower diagonal of $\Theta^{(s)}$. \item Set diag$(\Theta^{(k)}) = 0$ \item Ensure that the $\Theta^{(k)}$ are positive definite by setting diag$(\Theta^{(k)}) = |\lambda_{\min}(\Theta^{(k)})| + \epsilon$, where $\lambda_{\min}(\Theta^{(k)})$ is the smallest eigenvalue of $\Theta^{(k)}$ and $\epsilon > 0$. \item Compute the covariance matrix $\Sigma^{(k)} = \Theta^{(k)^{-1}}$ \item Turn the covariance matrix into a correlation matrix $\Sigma^{(k)} = \text{diag}(\Sigma^{(k)})^{-\frac{1}{2}}\Sigma^{(k)}\text{diag}(\Sigma^{(k)})^{-\frac{1}{2}}$ \item Set $\Theta^{(k)} = \Sigma^{(k)^{-1}}$ \item From $V = \{1,\ldots,p\}$, sample sets $s_b$, $s_o$ and $s_p$ of $\lfloor\gamma_b p\rfloor$, $\lfloor\gamma_o p\rfloor$ and $\lfloor\gamma_p p\rfloor$ columns without replacement for the binomial, ordinal and Poisson variables, respectively, and let $s_g = V \setminus (s_b \cup s_o \cup s_p)$ contain the columns for the Gaussian variables, where $\gamma_i, i \in I = \{b,o,p,g\}$, represents the proportion of each variable type occurring in the data.
Then, $\bigcup_{i \in I}s_i = V$ and $\bigcap_{i \in I}s_i = \emptyset$. \item Sample latent data $Z^{(k)} \sim N(0, \Sigma^{(k)})$ \item Generate observed data $X_{s_i}^{(k)} = F_{s_i}^{(k)^{-1}}(\Phi_i(Z_{s_i}^{(k)}))$ \end{enumerate} \noindent In the simulations, we set the distribution proportions $\gamma_b$ to 0.1, $\gamma_o$ to 0.5, $\gamma_p$ to 0.2 and $\gamma_g$ to 0.2. Moreover, the success parameter for the binomial marginals is set at 0.5, the rate parameter of the Poisson marginals at 10 and the number of categories for the ordinal variables at 6. The edge connection probability $p_e$ for the random network is set at 0.05, resulting in sparse random graphs. The number of clusters in the cluster network is set at 3. Finally, we considered $\rho = 0.25$ and $1$, resulting in similar and different graphs for each group, respectively. This results in a total of $3\times4\times2\times2 = 48$ unique combinations of settings. Each combination is used to sample 25 different datasets to minimise the effect of randomness on the results. \\ \\ To evaluate the performance of the models, ROC curves are drawn based on the false positive rate (FPR) and true positive rate (TPR), which respectively represent the rate of false and true edges selected by the model. These are defined as: \begin{equation*} \begin{gathered} \text{FPR} = \frac{1}{K}\sum_{k = 1}^K \frac{\sum_{i < j}\mathbb{I}(\theta_{ij}^{(k)} = 0, \hat{\theta}_{ij}^{(k)}\neq 0)}{\sum_{i < j}\mathbb{I}(\theta_{ij}^{(k)} = 0)}\\ \text{TPR} = \frac{1}{K}\sum_{k = 1}^K \frac{\sum_{i < j}\mathbb{I}(\theta_{ij}^{(k)} \neq 0, \hat{\theta}_{ij}^{(k)}\neq 0)}{\sum_{i < j}\mathbb{I}(\theta_{ij}^{(k)} \neq 0)}, \end{gathered} \end{equation*} where $\mathbb{I}(\cdot)$ is an indicator function. The FPR and TPR are computed by varying $\lambda_1$ over $[0,1]$ with a step size of 0.05, whilst fixing $\lambda_2$ at either 0, 0.1 or 1, resulting in 3 ROC curves per plot (one for each value of $\lambda_2$), per dataset and per model.
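Steps 9 and 10 of the procedure above can be sketched as follows, with the marginals set to the simulation's choices (the binomial marginal treated as Bernoulli(0.5), 6 equiprobable ordinal categories, Poisson(10), and Gaussian). This is an illustrative Python sketch with hypothetical function names, not the authors' generator:

```python
import numpy as np
from statistics import NormalDist

_PHI = NormalDist().cdf

def poisson_quantile(u, lam=10.0):
    """F^{-1}(u) for a Poisson(lam) marginal, by accumulating the pmf."""
    k, pmf = 0, np.exp(-lam)
    cdf = pmf
    while cdf < u:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def sample_mixed(sigma, types, n, seed=0):
    """Steps 9-10: draw latent Z ~ N(0, sigma), set X_j = F_j^{-1}(Phi(Z_j)).
    types[j] is one of 'binary', 'ordinal', 'poisson', 'gaussian'."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, len(types))) @ np.linalg.cholesky(sigma).T
    X = np.empty_like(Z)
    for j, t in enumerate(types):
        u = np.array([_PHI(z) for z in Z[:, j]])   # Phi(Z_j): uniform marginal
        if t == 'binary':                          # Bernoulli(0.5)
            X[:, j] = (u > 0.5).astype(float)
        elif t == 'ordinal':                       # 6 equiprobable categories
            X[:, j] = np.minimum((u * 6).astype(int), 5)
        elif t == 'poisson':                       # Poisson(10)
            X[:, j] = [poisson_quantile(ui) for ui in u]
        else:                                      # Gaussian: identity marginal
            X[:, j] = Z[:, j]
    return X
```

Because the monotone transforms preserve the ordering of the latent variables, dependence induced by $\Sigma^{(k)}$ survives in the observed mixed-type columns.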
Correspondingly, AUC scores are computed for these curves. In addition, the Frobenius and entropy losses are computed and averaged over the groups: \begin{equation*} \begin{gathered} \text{FL} = \frac{1}{K}\sum_{k=1}^K \frac{||\Theta^{(k)} - \hat{\Theta}^{(k)}||_F^2}{||\Theta^{(k)}||_F^2} \\ \text{EL} = \frac{1}{K}\sum_{k=1}^K \text{tr}(\Theta^{(k)^{-1}}\hat{\Theta}^{(k)}) - \log(\det(\Theta^{(k)^{-1}}\hat{\Theta}^{(k)})) - p \end{gathered} \end{equation*} \noindent Aside from the performance measures averaged across the values of $\lambda_2$, the best-choice performance measure is given: the value of the performance measure that attained the best score (highest or lowest, depending on the measure) for a particular choice of $\lambda_2$. \\ \\ Results for the random network are given in Table \ref{tab:simrandom} and Figure \ref{fig:roccurves}, whereas the results for the cluster and scale-free networks are given in Appendix \ref{Additional simulation results}, in Table \ref{tab:simcluster} and Figure \ref{fig:roccurves_cl}, and Table \ref{tab:simscalefree} and Figure \ref{fig:roccurves_sf}, respectively. \begin{table}[H] \centering \begin{threeparttable} \caption{Simulation results for random networks, where AUC stands for area under the curve, FL for Frobenius loss, EL for entropy loss and the bc suffix for best choice, i.e.\ the best result of the respective metric (highest for AUC and lowest for EL and FL) for a particular value of $\lambda_2$.
The value corresponding to the winning method is written in bold.} \label{tab:simrandom} \begin{tabular}{lcccccc} \toprule \multicolumn{4}{r}{Gibbs method/Approximate method} & \multicolumn{3}{c}{Fused graphical lasso/GLASSO}\\ \midrule \textbf{$n, p$ $\rho$} & \textbf{AUC} &\textbf{FL} & \textbf{EL}& \textbf{AUC} & \textbf{FL} & \textbf{EL}\\ \midrule $10, 50, 0.25$ & \textbf{0.60}/0.60 & \textbf{0.58}/0.64 & \textbf{12.29}/13.59 & 0.56/0.56 & 3.15/1.23 & 48.30/22.09\\ $50, 50, 0.25$ & \textbf{0.83}/0.83 & \textbf{0.18}/0.18 & \textbf{5.42}/5.43 & 0.74/0.76 & 0.94/0.34 & 34.93/10.05\\ $100, 50, 0.25$ & \textbf{0.89}/0.89 & \textbf{0.17}/0.17 & \textbf{5.05}/5.06 & 0.81/0.86 & 0.91/0.32 & 35.53/9.44\\ $500, 50, 0.25$ & 0.94/0.94 & \textbf{0.17}/0.17 & \textbf{4.92}/4.94 & 0.91/\textbf{0.98} & 0.90/0.32 & 35.55/9.22\\ $10, 100, 0.25$ & \textbf{0.56}/0.56 & \textbf{0.79}/0.95 & \textbf{34.19}/40.47 & 0.54/0.53 & 4.01/1.51 & 112.07/55.05\\ $50, 100, 0.25$ & \textbf{0.77}/0.77 & \textbf{0.22}/0.22 & \textbf{14.49}/14.62 & 0.69/0.67 & 0.94/0.39 & 72.79/24.57\\ $100, 100, 0.25$ & \textbf{0.84}/0.84 & \textbf{0.20}/0.20 & \textbf{13.17}/13.20 & 0.77/0.78 & 0.88/0.35 & 72.70/22.11\\ $500, 100, 0.25$ & 0.93/0.93 & \textbf{0.19}/0.19 & \textbf{12.69}/12.72 & 0.89/\textbf{0.95} & 0.87/0.35 & 73.63/21.27\\ $10, 50, 1$ & \textbf{0.56}/0.56 & \textbf{0.58}/0.63 & \textbf{12.76}/14.06 & 0.54/0.54 & 3.05/1.19 & 49.03/22.43\\ $50, 50, 1$ & \textbf{0.72}/0.72 & \textbf{0.19}/0.19 & \textbf{5.96}/5.97 & 0.66/0.68 & 0.93/0.35 & 35.63/10.52\\ $100, 50, 1$ & \textbf{0.77}/0.77 & \textbf{0.18}/0.18 & \textbf{5.62}/5.63 & 0.72/0.78 & 0.89/0.33 & 35.76/9.94\\ $500, 50, 1$ & 0.85/0.85 & \textbf{0.18}/0.18 & \textbf{5.48}/5.50 & 0.83/\textbf{0.94} & 0.89/0.33 & 35.93/9.74\\ $10, 100, 1$ & \textbf{0.54}/0.54 & \textbf{0.77}/0.92 & \textbf{35.09}/41.27 & 0.52/0.52 & 3.97/1.47 & 114.34/55.70\\ $50, 100, 1$ & \textbf{0.67}/0.67 & \textbf{0.23}/0.23 & \textbf{15.51}/15.64 & 0.62/0.62 & 0.93/0.40 
& 73.43/25.44\\ $100, 100, 1$ & \textbf{0.73}/0.73 & \textbf{0.21}/0.21 & \textbf{14.23}/14.26 & 0.68/0.70 & 0.87/0.36 & 74.06/23.05\\ $500, 100, 1$ & 0.82/0.82 & \textbf{0.20}/0.21 & \textbf{13.78}/13.81 & 0.79/\textbf{0.89} & 0.86/0.36 & 75.03/22.27\\ \midrule \multicolumn{4}{r}{Gibbs method/Approximate method} & \multicolumn{3}{c}{Fused graphical lasso}\\ \midrule \textbf{$n, p$ $\rho$} & \textbf{AUC bc} & \textbf{FL bc} & \textbf{EL bc} & \textbf{AUC bc} & \textbf{FL bc} & \textbf{EL bc}\\ \midrule $10, 50, 0.25$ & 0.63/\textbf{0.64} & \textbf{0.28}/0.29 & \textbf{7.99}/8.22 & 0.59 & 1.45 & 37.35 \\ $50, 50, 0.25$ & \textbf{0.88}/0.87 & \textbf{0.17}/0.17 & \textbf{5.09}/5.10 & 0.79 & 0.91 & 34.42 \\ $100, 50, 0.25$ & \textbf{0.91}/0.91 & \textbf{0.17}/0.17 & \textbf{4.96}/4.98 & 0.84 & 0.90 & 35.42 \\ $500, 50, 0.25$ & \textbf{0.98}/0.98 & \textbf{0.17}/0.17 & \textbf{4.91}/4.93 & 0.91 & 0.90 & 35.50 \\ $10, 100, 0.25$ & \textbf{0.58}/0.58 & \textbf{0.39}/0.43 & \textbf{23.02}/24.82 & 0.56 & 1.88 & 84.83 \\ $50, 100, 0.25$ & \textbf{0.82}/0.82 & \textbf{0.20}/0.20 & \textbf{13.28}/13.30 & 0.74 & 0.87 & 70.54 \\ $100, 100, 0.25$ & \textbf{0.88}/0.88 & \textbf{0.19}/0.19 & \textbf{12.84}/12.86 & 0.81 & 0.87 & 72.25 \\ $500, 100, 0.25$ & \textbf{0.95}/0.95 & \textbf{0.19}/0.19 & \textbf{12.68}/12.72 & 0.89 & 0.87 & 73.47 \\ $10, 50, 1$ & \textbf{0.58}/0.58 & \textbf{0.29}/0.29 & \textbf{8.52}/8.75 & 0.56 & 1.41 & 38.21 \\ $50, 50, 1$ & \textbf{0.74}/0.74 & \textbf{0.18}/0.18 & \textbf{5.67}/5.67 & 0.68 & 0.90 & 35.12 \\ $100, 50, 1$ & \textbf{0.78}/0.78 & \textbf{0.18}/0.18 & \textbf{5.55}/5.57 & 0.73 & 0.89 & 35.62 \\ $500, 50, 1$ & \textbf{0.94}/0.94 & \textbf{0.18}/0.18 & \textbf{5.44}/5.45 & 0.87 & 0.89 & 35.82 \\ $10, 100, 1$ & \textbf{0.55}/0.55 & \textbf{0.39}/0.43 & \textbf{24.18}/25.92 & 0.53 & 1.87 & 86.84 \\ $50, 100, 1$ & \textbf{0.69}/0.69 & \textbf{0.21}/0.21 & \textbf{14.39}/14.40 & 0.65 & 0.87 & 71.20 \\ $100, 100, 1$ & \textbf{0.74}/0.74 & 
\textbf{0.21}/0.21 & \textbf{13.95}/13.97 & 0.69 & 0.86 & 73.61 \\ $500, 100, 1$ & \textbf{0.89}/0.88 & \textbf{0.20}/0.20 & \textbf{13.71}/13.74 & 0.82 & 0.86 & 74.74 \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} \begin{figure}\caption{ROC curves for the random networks.}\label{fig:roccurves} \end{figure} \noindent The simulation results shown in Table \ref{tab:simrandom} and Figure \ref{fig:roccurves} indicate that the proposed copula graphical model for heterogeneous mixed data generally outperforms the alternative models. Only under very low-dimensional settings does the GLASSO approach attain a better AUC than the proposed method, and such settings rarely occur in real-world data. In high-dimensional settings, the performance of the proposed model is substantially better than that of the alternative models. When groups become more dissimilar, through an increase in the dissimilarity parameter $\rho$, the relative advantage of the proposed method over the GLASSO method becomes less substantial, but this is to be expected, as there is less common information to borrow across the groups. Moreover, in low-dimensional settings, enforcing the precision matrices to be equal across groups ($\lambda_2 = 1$) results in better performance than when relatively little information is borrowed across groups. Conversely, the opposite phenomenon is observed for simulations where $n = p$ and $n < p$. When values of $\lambda_2$ are high, more information is borrowed across graphs, which is of added value when the number of samples per group is low. Conversely, when groups contain enough observations, such as in the rightmost figures, setting $\lambda_2 > 0$ unnecessarily restricts the graphs, as each group contains enough information for individual estimation.
Differences between the Gibbs and approximate methods are also observed: even though both methods select approximately the same number of true and false positive edges, marked differences are present when inspecting the entropy loss results, indicating differences in edge values. The Gibbs method near-consistently outperforms the approximate method, which matches the results obtained by Behrouzi and Wit (\citeyear{behrouzi2019detecting}). Therefore, the Gibbs method is recommended over the approximate method when accuracy is the primary concern of the researcher. \section{Application to production-ecological data} \label{Real world data example} The Gibbs version of the copula graphical model for heterogeneous mixed data was applied to a real-world production-ecological dataset. The data contain variables pertaining to soil properties (e.g.\ clay content, silt content and total nitrogen), weather (e.g.\ average mean temperature and number of frost days), management influences (e.g.\ pesticide use and weeding frequency), external stresses (pests, diseases) and maize yield. The data were collected in Ethiopia by the Ethiopian Institute of Agricultural Research (EIAR) in collaboration with the International Maize and Wheat Improvement Center (CIMMYT) and were used as part of a larger study (\citeyear{VascoSilva_forthcoming}) aimed at explaining crop yield variability. \\ \\ The maize data used to illustrate the proposed model consist of measurements taken at two different locations: Pawe in northwestern Ethiopia and Adami Tullu in central Ethiopia, see Figure \ref{fig:map}. These two locations differ in many properties that are not present in the data but can influence the underlying relationships, such as soil water level, air pressure, wind and drainage, as the two locations have different climatic properties and altitudes (Abate et al., \citeyear{abate2015factors}).
For this reason, 4 groups were created: data from farms in Pawe in 2010 (group 1) and 2013 (group 2), and data from farms in Adami Tullu in 2010 (group 3) and 2013 (group 4). Each group consists of measurements on 63 variables, with sample sizes 82, 82, 129 and 132 for Pawe 2010/2013 and Adami Tullu 2010/2013, respectively. The data comprise 26 continuous, 22 count, 3 ordinal and 12 binomial variables. \\ \\ Crop yield tends to be the principal variable of interest in production ecology and is the focus of this analysis. Both reported yield (\texttt{yield}) and simulated, water-limited, yield (\texttt{yield\_theoretical}) were present in the data. The difference between these two quantities (i.e.\ the yield gap) tends to be substantial in Ethiopia, which makes unravelling the factors determining their relationship of interest to production ecologists (van Dijk et al., \citeyear{van2020reducing}; Assefa et al., \citeyear{assefa2020unravelling}; Kassie et al., \citeyear{kassie2014climate}; Silva et al., \citeyear{silva2019labour}; Getnet et al., \citeyear{getnet2016yield}). The literature (Rabbinge et al., \citeyear{rabbinge1990theoretical}; Connor et al., \citeyear{connor2011crop}) frequently identifies the following factors as potentially important for determining yield: total rainfall, planting density, soil fertility, use of intercropping, crop residue, amount of labour used, maize variety, plot size and fertiliser use. However, establishing the relative importance of these factors under different conditions is not trivial, since many of these factors may interact in complex ways depending on time and place. \begin{figure} \caption{Map showing the farm locations in Ethiopia} \label{fig:map} \end{figure} To gain more insight into the relations underlying yield variation, we apply the proposed copula graphical model for heterogeneous mixed data.
The model is fitted using a grid of 11$\times$11 combinations of $\lambda_1$ and $\lambda_2$, from which the combination is selected that minimises the EBIC (\ref{eq:EBIC}) with $\gamma = 0.5$. Some general properties of the full graphs, shown in Figure \ref{fig:realdata}, are discussed first, followed by a discussion of the yield graphs shown in Figure \ref{fig:yield}. \begin{figure} \caption{Results for the 4 networks, with penalty parameters set to $\lambda_1 = 0.2$, $\lambda_2 = 0$ as chosen by the EBIC with $\gamma = 0.5$. Node size is indicative of the node degree, edge width reflects the absolute value of the partial correlation and color is used to make a visual distinction between positive (green) and negative (red) partial correlations.} \label{fig:realdata} \end{figure} \noindent Topologically, the graphs consist of a dense cluster of variables mainly comprising soil and weather properties with some management variables mixed in, a less dense cluster of yield and its neighbours, and some conditionally independent management and external-stress variables. The graphs consist (by increasing group order) of 271, 261, 220 and 253 edges, with average node degrees of 8.60, 8.29, 6.98 and 8.03, respectively. However, for gaining insight into which variables are conditionally dependent on yield, the full graphs are impractical. Therefore, the subgraphs containing the yield variable and its neighbours are given in Figure \ref{fig:yield} below. By the Local Markov Property (cf.\ Lauritzen, \citeyear{lauritzen1996graphical}), yield is conditionally independent of all variables not shown in the graph given its neighbours, making Figure \ref{fig:yield} sufficient to infer all conditional dependencies of the yield variable. \begin{figure} \caption{Results for the 4 yield networks obtained by applying the Local Markov Property to the yield variable.
All variables that share an edge with yield in at least one of the four networks are included in all four networks. Node size is indicative of the node degree, edge width reflects the absolute value of the partial correlation and color is used to make a visual distinction between positive (green) and negative (red) partial correlations.} \label{fig:yield} \end{figure} \noindent Central in these graphs is the actual, reported yield. Whereas this variable has many direct relationships in the graphs of Pawe and Adami Tullu in 2010, this is not the case for the other groups, including Pawe in 2013, possibly indicating a temporal interaction. Most results presented in Figure \ref{fig:yield} are not unexpected, such as the positive effect of using hybrid seed (\texttt{variety\_hybrid}), and consequently the negative effect of planting a local variety (\texttt{variety\_local}) (Assefa et al., \citeyear{assefa2020unravelling}), and the benefits of N fertiliser application and labour input. More surprising perhaps was the negative relationship with the amount of seed found in Adami Tullu in 2010, but this may be a true indication that optimum densities were relatively low for that location and year. The direct relation found in Pawe and Adami Tullu in 2010 between \texttt{livestock} (ownership) and yield is also interesting and may reflect either beneficial effects of their use as draft animals or positive effects on soil fertility, either directly through manure or through greater resource endowment in general, for which livestock ownership is an indicator (Silva et al., \citeyear{silva2019labour}). In this regard, the negative effect of livestock on the labour input per person, which in turn has a positive effect on yield, could also reflect the beneficial effect of using animal power over manual labour.
With respect to labour per se, both the graphs for Pawe in 2010 and Adami Tullu in 2013 contain an edge between yield and total labour use (\texttt{totlab\_persdayha}), indicating conditional dependence. Whilst the presence of these edges, and the corresponding positive partial correlations, is not surprising per se, the fact that this relation is only found in Pawe in 2010 and Adami Tullu in 2013 is unusual. As labour is highly seasonal (Silva et al., \citeyear{silva2019labour}), and labour seasonality can vary with location, this edge would be expected either for Pawe in both 2010 and 2013 or for Adami Tullu in both 2010 and 2013. This is an example of a conditional dependence that can be explored further by production ecologists. \\ \\ The present analysis is an example of how graphical models can aid in the exploration and understanding of fundamental production-ecological relations. Once other researchers take note of this novel application, methodologies can be tailored that will further our understanding of how yield is influenced. \subsection{Goodness of fit} \noindent To evaluate the stability of the selected network, we apply a bootstrapping procedure to the data: 200 (row) permutations of the original data are created, the proposed model is fitted over all values of $\lambda_1$ and $\lambda_2$, and an optimal model is selected using the EBIC ($\gamma = 0.5$). For each graph $G_k, k = 1,\ldots,4$, edges that occur in over $90\%$ of the 200 bootstrapped $G_k$ graphs (a threshold hereinafter referred to as the acceptance ratio) are compared to the pre-bootstrap fitted graphs, indicating how stable the model performance is across random permutations of the data. The choice of an acceptance ratio of $90\%$ reflects the fact that we are primarily interested in whether the results obtained by the proposed model are stable across small perturbations of the data; a high threshold only takes into account edges that can be considered part of the underlying graph.
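The comparison above can be sketched as follows (illustrative Python on hypothetical boolean adjacency arrays, not the authors' code): given the adjacency matrices of the 200 bootstrap fits, the discovery rate at a given acceptance ratio is the fraction of ``fundamental'' edges (those exceeding the ratio) that also appear in the pre-bootstrap fitted graph.

```python
import numpy as np

def discovery_rate(fitted_adj, boot_adjs, acceptance=0.9):
    """Fraction of 'fundamental' edges (those occurring in >= `acceptance`
    of the bootstrapped graphs) that are present in the fitted graph."""
    freq = np.mean(boot_adjs, axis=0)      # per-edge bootstrap frequency
    fundamental = freq >= acceptance
    if fundamental.sum() == 0:
        return 1.0                         # nothing stable to recover
    return np.logical_and(fundamental, fitted_adj.astype(bool)).sum() / fundamental.sum()
```

Lowering the acceptance ratio enlarges the set of edges deemed fundamental, which is why the discovery rate can only decrease as the threshold drops.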
\begin{figure} \caption{Edge stability results for the 4 networks, as determined by the number of times the edges from the fitted graphs occurred in the 200 bootstrapped graphs.} \label{fig:bootstrapedge} \end{figure} \noindent Using an acceptance ratio of $90\%$ for the edges obtained from the bootstrapped data, the fitted model described above with $\lambda_1 = 0.2$ and $\lambda_2 = 0$ recovers all fundamental relations, i.e.\ attains a discovery rate of $100\%$. This perfect discovery rate persists until the acceptance ratio is decreased to $70\%$, where, across all four graphs, the discovery rate decreases to $98.71\%$. Even when we identify those edges that appear in $50\%$ of the bootstrapped graphs as fundamental, the discovery rate remains $89.53\%$. When the acceptance ratio is set even lower, non-fundamental relations are more likely to be included, which is not of interest here. In addition, Figure \ref{fig:bootstrapedge} shows that most edges in the fitted model occur frequently in the bootstrapped graphs, as indicated by the small number of very light red edges. Of particular interest are the edges surrounding the yield variable, which are stable. Therefore, using the EBIC results in stable model selection, where fundamental relations are satisfactorily recovered. \section{Conclusion} \label{Conclusion} Responding to the need of production ecologists for a statistical technique that can shed light on the fundamental relations underlying plant productivity in agricultural systems, this article introduced the copula graphical model for mixed heterogeneous data: a novel statistical method for estimating simultaneous interactions in multi-group mixed-type datasets. The proposed model can be seen as the fusion of two different models: the copula graphical model and the fused graphical lasso.
The former extends the Gaussian graphical model to non-Gaussian variables, whereas the latter extends the graphical model to a multi-group setting and enforces similarity between similar groups. The model performs competitively for a myriad of graph structures underlying very different datasets, thereby extending its use beyond production-ecological data to any mixed-type heterogeneous data. Moreover, the proposed method was applied to a production-ecological dataset consisting of 4 groups, reflecting spatial and temporal heterogeneity, as is typical of production-ecological data. Aside from yield relationships that are typically identified in production-ecological research, the model also found some peculiar relationships, giving motivation for future research. In terms of future statistical research, one recommendation we give is the development of model selection procedures for multi-group graphical models, in order to support applied researchers: current approaches do not give theoretical guarantees for these types of models, and model selection remains an important part of any statistical application. Another possible research direction is to extend the proposed method by allowing for unordered categorical (nominal) data, which is one of the shortcomings of the copula. Finally, we urge statisticians to develop methodologies that make optimal use of the intricacies that production-ecological data offer and are in line with the goals of production ecologists. \begin{appendices} \section{Approximate method details} \label{Approximate method details} As computing the first and second moments of a truncated normal distribution is computationally expensive even for moderate $p$, Guo et al.\ (\citeyear{guo2015graphical}) proposed an approximation method which ensures feasibility of the copula graphical model even for high $p$. The mathematical derivations and theory behind it are outlined in this section. 
\\ \\ In order to approximate the first and second moments of the latent variables arising from a truncated normal distribution, we apply mean field theory: \begin{gather} \mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \approx \mathbb{E}[\mathbb{E}(Z_{ij}^{(k)}|z_{i-j}^{(k)}, x_{ij}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D ]\label{eq:mom1} \\ \mathbb{E}(Z_{ij}^{(k)^2}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \approx \mathbb{E}[\mathbb{E}(Z_{ij}^{(k)^2}|z_{i-j}^{(k)}, x_{ij}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D ], \label{eq:mom2} \end{gather} where $z_{i -j} = (z_{i1}^{(k)},\ldots,z_{i j-1}^{(k)},z_{i j+1}^{(k)},\ldots, z_{i p}^{(k)})$. We have that $z_{ij}^{(k)}|z_{i-j}^{(k)}, x_{ij}^{(k)} \sim TN(\mu_{ij}^{(k)}, \sigma^{(k)^2}_{ij}, t_{x_{ij}^{(k)} j}^{(k)}, t_{x_{ij}^{(k)}+1 j}^{(k)})$, where $\mu_{ij}^{(k)} = \hat{\Sigma}^{(k)}_{j-j}\hat{\Sigma}^{(k)^{-1}}_{-j-j}z_{i-j}^{(k)^T}$ and $\sigma^{(k)^2}_{ij} = 1-\hat{\Sigma}^{(k)}_{j-j}\hat{\Sigma}^{(k)^{-1}}_{-j-j}\hat{\Sigma}^{(k)^T}_{j-j}$. The $(j,j')$th element $\bar{r}^{(k)}_{j j'}$ of the sample correlation matrix $\bar{R}^{(k)}$ is computed as $\frac{1}{n_{k}}\sum_{i = 1}^{n_{k}}\mathbb{E}(Z_{ij}^{(k)}Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$. If $j = j'$ we use that $\mathbb{E}(Z_{ij}^{(k)}Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) = \mathbb{E}(Z_{ij}^{(k)^2}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$ and if $j \neq j'$ we approximate the quantity as $\mathbb{E}(Z_{ij}^{(k)}Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D) \approx \mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)\mathbb{E}(Z_{ij'}^{(k)}|x_{i}^{(k)}, \bm{\hat{\Theta}}^{(m)}, D)$. 
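As a concrete illustration (a minimal sketch, not the paper's implementation), the two ingredients of this mean-field update can be computed directly: the conditional mean and variance of one latent coordinate given the others, and, once the truncation points enter, the standard first and second moments of the resulting truncated normal. Function names are ours; $\phi$ and $\Phi$ denote the standard normal pdf and cdf as in the text.

```python
import math
import numpy as np

def cond_moments(Sigma, z, j):
    """mu_ij and sigma_ij^2: moments of Z_j | Z_{-j} = z_{-j} under a
    zero-mean Gaussian with correlation matrix Sigma."""
    idx = [i for i in range(Sigma.shape[0]) if i != j]
    s = Sigma[j, idx] @ np.linalg.inv(Sigma[np.ix_(idx, idx)])
    return s @ z[idx], Sigma[j, j] - s @ Sigma[idx, j]

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncnorm_moments(mu0, sigma0, a, b):
    """Standard first/second moments of N(mu0, sigma0^2) truncated to [a, b]."""
    ea, eb = (a - mu0) / sigma0, (b - mu0) / sigma0
    Z = Phi(eb) - Phi(ea)
    lam = (phi(ea) - phi(eb)) / Z
    m1 = mu0 + lam * sigma0
    m2 = (mu0 ** 2 + sigma0 ** 2 + 2.0 * lam * mu0 * sigma0
          + (ea * phi(ea) - eb * phi(eb)) / Z * sigma0 ** 2)
    return m1, m2

# Bivariate sanity check: E(Z_1 | Z_2 = 2) = 2*rho and Var(Z_1 | Z_2) = 1 - rho^2
rho = 0.6
mu, var = cond_moments(np.array([[1.0, rho], [rho, 1.0]]), np.array([0.0, 2.0]), 0)

# Verify the truncated moments against midpoint-rule numerical integration
mu0, s0, a, b = 0.5, 1.3, -1.0, 3.0
m1, m2 = truncnorm_moments(mu0, s0, a, b)
ngrid = 20000
h = (b - a) / ngrid
num1 = num2 = den = 0.0
for i in range(ngrid):
    x = a + (i + 0.5) * h
    w = phi((x - mu0) / s0)
    den += w
    num1 += x * w
    num2 += x * x * w
# num1/den and num2/den agree with m1 and m2 to well within 1e-4
```

The integration check is deterministic, so it doubles as a unit test for the closed-form moment expressions.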
\\ \\ For the remainder, we need some theory on truncated normal distributions; for a random variable $X\sim N(\mu_0, \sigma_0^2)$ with lower and upper truncation points $a$ and $b$ respectively, $X|a \leq X \leq b$ follows a truncated normal distribution. Define $\epsilon_a = \frac{(a - \mu_0)}{\sigma_0}$ and $\epsilon_b = \frac{(b - \mu_0)}{\sigma_0}$; then, the first and second order moments are \begin{equation*} \begin{gathered} \mathbb{E}(X|a \leq X \leq b) = \mu_0 + \frac{\phi(\epsilon_a) - \phi(\epsilon_b)}{\Phi(\epsilon_b) - \Phi(\epsilon_a)}\sigma_0\\ \mathbb{E}(X^2|a \leq X \leq b) = \mu_0^2 + \sigma_{0}^2 + 2\frac{\phi(\epsilon_a) - \phi(\epsilon_b)}{\Phi(\epsilon_b) - \Phi(\epsilon_a)}\mu_0\sigma_0 + \frac{\epsilon_a\phi(\epsilon_a) - \epsilon_b\phi(\epsilon_b)}{\Phi(\epsilon_b) - \Phi(\epsilon_a)}\sigma_0^2. \end{gathered} \end{equation*} Therefore, we have that \begin{equation} \label{eq:z} \mathbb{E}(Z_{ij}^{(k)}|x_{i}^{(k)}; \bm{\hat{\Theta}}^{(m)}, D) = \hat{\Sigma}^{(k)}_{j-j}\hat{\Sigma}^{(k)^{-1}}_{-j-j}\mathbb{E}(Z_{i-j}^{(k)^T}|x_{i}^{(k)}; \bm{\hat{\Theta}}^{(m)}, D) + \frac{\phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\right) - \phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\right)}{\Phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\right) - \Phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\right)}\tilde{\sigma}_{ij}^{(k)} \end{equation} \begin{equation} \begin{gathered} \mathbb{E}(Z_{ij}^{(k)^2}|x_{i}^{(k)}; \bm{\hat{\Theta}}^{(m)}, D) = \hat{\Sigma}^{(k)}_{j-j}\hat{\Sigma}^{(k)^{-1}}_{-j-j}\mathbb{E}(Z_{i-j}^{(k)^T}Z_{i-j}^{(k)}|x_{i}^{(k)}; \bm{\hat{\Theta}}^{(m)}, D)\hat{\Sigma}^{(k)^{-1}}_{-j-j}\hat{\Sigma}^{(k)^{T}}_{j-j} + \tilde{\sigma}_{ij}^{(k)^2}\\ + 2\frac{\phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\right) - \phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\right)}{\Phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\right) - 
\Phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\right)}[\hat{\Sigma}^{(k)}_{j-j}\hat{\Sigma}^{(k)^{-1}}_{-j-j}\mathbb{E}(Z_{i-j}^{(k)^T}|x_{i}^{(k)}; \bm{\hat{\Theta}}^{(m)}, D)]\tilde{\sigma}_{ij}^{(k)}\\ + \frac{\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\right) - \tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\right)}{\Phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}+1 j}}\right) - \Phi\left(\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}}\right)}\tilde{\sigma}_{ij}^{(k)^2}, \label{eq:z2} \end{gathered} \end{equation} where $Z_{i -j} = (Z_{i1}^{(k)},\ldots,Z_{i j-1}^{(k)},Z_{i j+1}^{(k)},\ldots, Z_{i p}^{(k)})$ and $\tilde{\delta}^{(k)}_{i_{x_{ij}^{(k)}j}} = \frac{t_{x_{ij}^{(k)}j}^{(k)} - \mathbb{E}(\tilde{\mu}_{ij}^{(k)}|x_{i}^{(k)}; \bm{\hat{\Theta}}^{(m)}, D)}{\tilde{\sigma}_{ij}^{(k)}}$. \section{Additional simulation results} \label{Additional simulation results} \begin{table}[H] \centering \begin{threeparttable} \caption{Simulation results for cluster networks, where AUC stands for area under the curve, FL stands for Frobenius loss, EL stands for entropy loss and the bc suffix stands for best choice, i.e.\ the best result of that respective metric (highest for AUC and lowest for EL and FL) for a particular value of $\lambda_2$. 
The value corresponding to the winning method is written in bold.} \label{tab:simcluster} \begin{tabular}{lcccccc} \toprule \multicolumn{4}{r}{Gibbs method/Approximate method} & \multicolumn{3}{c}{Fused graphical lasso/GLASSO}\\ \midrule \textbf{$n, p$ $\rho$} & \textbf{AUC} &\textbf{FL} & \textbf{EL}& \textbf{AUC} & \textbf{FL} & \textbf{EL}\\ \midrule $10, 50, 0.25$ & \textbf{0.64}/0.64 & \textbf{0.71}/0.80 & \textbf{9.67}/11.08 & 0.60/0.58 & 4.24/1.57 & 45.71/19.64\\ $50, 50, 0.25$ & \textbf{0.86}/0.86 & \textbf{0.11}/0.11 & \textbf{2.54}/2.55 & 0.77/0.81 & 1.12/0.26 & 32.44/7.20\\ $100, 50, 0.25$ & \textbf{0.92}/0.92 & \textbf{0.09}/0.09 & \textbf{2.13}/2.14 & 0.85/0.91 & 1.08/0.23 & 32.60/6.51\\ $500, 50, 0.25$ & 0.97/0.97 & \textbf{0.09}/0.09 & \textbf{1.99}/2.00 & 0.95/\textbf{1.00} & 1.05/0.23 & 32.91/6.25\\ $10, 100, 0.25$ & \textbf{0.59}/0.60 & \textbf{1.01}/1.25 & \textbf{27.91}/34.49 & 0.56/0.55 & 5.65/1.99 & 107.44/49.01\\ $50, 100, 0.25$ & \textbf{0.81}/0.81 & \textbf{0.15}/0.15 & \textbf{7.77}/7.92 & 0.73/0.73 & 1.13/0.33 & 66.19/18.09\\ $100, 100, 0.25$ & \textbf{0.89}/0.89 & \textbf{0.12}/0.12 & \textbf{6.32}/6.36 & 0.81/0.85 & 1.03/0.27 & 66.50/15.34\\ $500, 100, 0.25$ & 0.96/0.96 & \textbf{0.11}/0.11 & \textbf{5.78}/5.83 & 0.92/\textbf{0.99} & 1.01/0.25 & 67.67/14.33\\ $10, 50, 1$ & \textbf{0.60}/0.60 & \textbf{0.69}/0.78 & \textbf{10.06}/11.46 & 0.56/0.56 & 4.00/1.52 & 45.52/19.95\\ $50, 50, 1$ & \textbf{0.77}/0.77 & \textbf{0.12}/0.12 & \textbf{2.95}/2.96 & 0.71/0.75 & 1.11/0.27 & 32.82/7.58\\ $100, 50, 1$ & 0.83/0.83 & \textbf{0.10}/0.10 & \textbf{2.57}/2.58 & 0.78/\textbf{0.86} & 1.05/0.24 & 32.68/6.91\\ $500, 50, 1$ & 0.92/0.92 & \textbf{0.10}/0.10 & \textbf{2.44}/2.45 & 0.90/\textbf{0.99} & 1.03/0.24 & 33.44/6.66\\ $10, 100, 1$ & \textbf{0.56}/0.56 & \textbf{0.97}/1.20 & \textbf{28.94}/35.58 & 0.54/0.54 & 5.31/1.90 & 107.66/49.93\\ $50, 100, 1$ & \textbf{0.71}/0.72 & \textbf{0.16}/0.16 & \textbf{8.88}/9.03 & 0.66/0.68 & 1.10/0.34 & 
67.86/19.07\\ $100, 100, 1$ & 0.78/0.78 & \textbf{0.13}/0.13 & \textbf{7.46}/7.49 & 0.72/\textbf{0.79} & 1.00/0.28 & 67.85/16.38\\ $500, 100, 1$ & 0.89/0.89 & \textbf{0.12}/0.12 & \textbf{6.94}/6.98 & 0.86/\textbf{0.97} & 0.99/0.27 & 68.54/15.40\\ \midrule \multicolumn{4}{r}{Gibbs method/Approximate method} & \multicolumn{3}{c}{Fused graphical lasso}\\ \midrule \textbf{$n, p$ $\rho$} & \textbf{AUC bc} & \textbf{FL bc} & \textbf{EL bc} & \textbf{AUC bc} & \textbf{FL bc} & \textbf{EL bc}\\ \midrule $10, 50, 0.25$ & 0.68/\textbf{0.69} & \textbf{0.26}/0.27 & \textbf{5.23}/5.50 & 0.65 & 1.87 & 34.83 \\ $50, 50, 0.25$ & \textbf{0.90}/0.90 & \textbf{0.09}/0.09 & \textbf{2.17}/2.18 & 0.81 & 1.08 & 31.90 \\ $100, 50, 0.25$ & \textbf{0.93}/0.93 & \textbf{0.09}/0.09 & \textbf{2.03}/2.04 & 0.87 & 1.07 & 32.48 \\ $500, 50, 0.25$ & \textbf{1.00}/1.00 & \textbf{0.09}/0.09 & \textbf{1.98}/2.00 & 0.95 & 1.05 & 32.88 \\ $10, 100, 0.25$ & 0.62/\textbf{0.63} & \textbf{0.42}/0.48 & \textbf{16.67}/18.61 & 0.58 & 2.53 & 79.72 \\ $50, 100, 0.25$ & \textbf{0.85}/0.85 & \textbf{0.12}/0.12 & \textbf{6.44}/6.47 & 0.78 & 1.03 & 63.82 \\ $100, 100, 0.25$ & \textbf{0.91}/0.91 & \textbf{0.11}/0.11 & \textbf{5.94}/5.97 & 0.84 & 1.01 & 66.02 \\ $500, 100, 0.25$ & \textbf{0.99}/0.99 & \textbf{0.11}/0.11 & \textbf{5.78}/5.82 & 0.93 & 1.01 & 67.55 \\ $10, 50, 1$ & \textbf{0.63}/0.63 & \textbf{0.26}/0.28 & \textbf{5.65}/5.93 & 0.58 & 1.75 & 34.74 \\ $50, 50, 1$ & \textbf{0.78}/0.78 & \textbf{0.10}/0.10 & \textbf{2.61}/2.62 & 0.72 & 1.07 & 32.28 \\ $100, 50, 1$ & \textbf{0.86}/0.86 & \textbf{0.10}/0.10 & \textbf{2.49}/2.50 & 0.79 & 1.04 & 32.56 \\ $500, 50, 1$ & \textbf{0.99}/0.99 & \textbf{0.10}/0.10 & \textbf{2.40}/2.41 & 0.93 & 1.03 & 33.35 \\ $10, 100, 1$ & \textbf{0.57}/0.57 & \textbf{0.41}/0.47 & \textbf{17.71}/19.70 & 0.55 & 2.41 & 80.47 \\ $50, 100, 1$ & \textbf{0.74}/0.74 & \textbf{0.13}/0.13 & \textbf{7.61}/7.63 & 0.68 & 1.00 & 65.50 \\ $100, 100, 1$ & \textbf{0.79}/0.79 & \textbf{0.13}/0.13 & 
\textbf{7.12}/7.14 & 0.73 & 0.99 & 67.36 \\ $500, 100, 1$ & \textbf{0.97}/0.97 & \textbf{0.12}/0.12 & \textbf{6.87}/6.91 & 0.90 & 0.98 & 68.30 \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} \begin{figure}\label{fig:roccurves_cl} \end{figure} \begin{table}[H] \centering \begin{threeparttable} \caption{Simulation results for scale-free networks, where AUC stands for area under the curve, FL stands for Frobenius loss, EL stands for entropy loss and the bc suffix stands for best choice, i.e. the best result of that respective metric (highest for AUC and lowest for EL and FL) for a particular value of $\lambda_2$. The value corresponding to the winning method is written in bold.} \label{tab:simscalefree} \begin{tabular}{lcccccc} \toprule \multicolumn{4}{r}{Gibbs method/Approximate method} & \multicolumn{3}{c}{Fused graphical lasso/GLASSO}\\ \midrule \textbf{$n, p$ $\rho$} & \textbf{AUC} &\textbf{FL} & \textbf{EL}& \textbf{AUC} & \textbf{FL} & \textbf{EL}\\ \midrule $10, 50, 0.25$ & 0.56/\textbf{0.57} & \textbf{0.68}/0.76 & \textbf{10.25}/11.63 & 0.53/0.53 & 3.99/1.49 & 46.29/20.20\\ $50, 50, 0.25$ & \textbf{0.77}/0.77 & \textbf{0.13}/0.13 & \textbf{3.21}/3.22 & 0.69/0.69 & 1.08/0.28 & 33.11/7.84\\ $100, 50, 0.25$ & \textbf{0.85}/0.85 & \textbf{0.12}/0.12 & \textbf{2.84}/2.84 & 0.76/0.80 & 1.03/0.26 & 33.23/7.19\\ $500, 50, 0.25$ & 0.94/0.94 & \textbf{0.11}/0.11 & \textbf{2.70}/2.71 & 0.90/\textbf{0.97} & 1.04/0.25 & 33.45/6.95\\ $10, 100, 0.25$ & \textbf{0.55}/0.55 & \textbf{1.11}/1.38 & \textbf{26.45}/33.17 & 0.54/0.53 & 6.09/2.17 & 104.97/47.47\\ $50, 100, 0.25$ & \textbf{0.73}/0.73 & \textbf{0.14}/0.14 & \textbf{6.16}/6.30 & 0.66/0.65 & 1.21/0.32 & 65.08/16.45\\ $100, 100, 0.25$ & \textbf{0.82}/0.82 & \textbf{0.10}/0.10 & \textbf{4.73}/4.74 & 0.74/0.75 & 1.09/0.25 & 65.55/13.71\\ $500, 100, 0.25$ & 0.93/0.93 & \textbf{0.09}/0.09 & \textbf{4.19}/4.21 & 0.88/\textbf{0.95} & 1.06/0.23 & 65.88/12.69\\ $10, 50, 1$ & \textbf{0.55}/0.55 & \textbf{0.65}/0.73 & 
\textbf{10.75}/12.11 & 0.52/0.53 & 3.76/1.42 & 46.71/20.59\\ $50, 50, 1$ & \textbf{0.69}/0.69 & \textbf{0.14}/0.15 & \textbf{3.82}/3.83 & 0.62/0.65 & 1.04/0.30 & 33.87/8.40\\ $100, 50, 1$ & \textbf{0.76}/0.76 & \textbf{0.13}/0.13 & \textbf{3.45}/3.46 & 0.69/0.74 & 1.00/0.28 & 33.82/7.76\\ $500, 50, 1$ & 0.86/0.85 & \textbf{0.13}/0.13 & \textbf{3.31}/3.32 & 0.83/\textbf{0.94} & 0.99/0.27 & 34.33/7.54\\ $10, 100, 1$ & \textbf{0.53}/0.53 & \textbf{1.06}/1.33 & \textbf{27.30}/33.95 & 0.52/0.52 & 6.00/2.08 & 107.31/48.16\\ $50, 100, 1$ & \textbf{0.66}/0.66 & \textbf{0.15}/0.15 & \textbf{7.09}/7.23 & 0.61/0.62 & 1.17/0.33 & 65.62/17.31\\ $100, 100, 1$ & \textbf{0.72}/0.72 & \textbf{0.11}/0.11 & \textbf{5.69}/5.71 & 0.67/0.70 & 1.06/0.26 & 66.48/14.62\\ $500, 100, 1$ & 0.84/0.83 & \textbf{0.10}/0.10 & \textbf{5.15}/5.17 & 0.81/\textbf{0.91} & 1.04/0.24 & 67.07/13.62\\ \midrule \multicolumn{4}{r}{Gibbs method/Approximate method} & \multicolumn{3}{c}{Fused graphical lasso}\\ \midrule \textbf{$n, p$ $\rho$} & \textbf{AUC bc} & \textbf{FL bc} & \textbf{EL bc} & \textbf{AUC bc} & \textbf{FL bc} & \textbf{EL bc}\\ \midrule $10, 50, 0.25$ & \textbf{0.59}/0.59 & \textbf{0.27}/0.28 & \textbf{5.85}/6.11 & 0.55 & 1.74 & 35.41 \\ $50, 50, 0.25$ & \textbf{0.81}/0.81 & \textbf{0.12}/0.12 & \textbf{2.86}/2.86 & 0.74 & 1.04 & 32.59 \\ $100, 50, 0.25$ & 0.87/\textbf{0.88} & \textbf{0.11}/0.11 & \textbf{2.74}/2.75 & 0.80 & 1.02 & 33.11 \\ $500, 50, 0.25$ & \textbf{0.97}/0.97 & \textbf{0.11}/0.11 & \textbf{2.70}/2.70 & 0.91 & 1.04 & 33.42 \\ $10, 100, 0.25$ & \textbf{0.57}/0.57 & \textbf{0.44}/0.52 & \textbf{15.08}/17.11 & 0.55 & 2.76 & 78.08 \\ $50, 100, 0.25$ & 0.77/\textbf{0.78} & \textbf{0.10}/0.10 & \textbf{4.83}/4.84 & 0.71 & 1.10 & 62.70 \\ $100, 100, 0.25$ & \textbf{0.85}/0.85 & \textbf{0.09}/0.09 & \textbf{4.33}/4.34 & 0.78 & 1.07 & 65.05 \\ $500, 100, 0.25$ & \textbf{0.95}/0.94 & \textbf{0.09}/0.09 & \textbf{4.17}/4.20 & 0.89 & 1.06 & 65.79 \\ $10, 50, 1$ & \textbf{0.56}/0.56 & 
\textbf{0.27}/0.28 & \textbf{6.40}/6.65 & 0.54 & 1.66 & 35.95 \\ $50, 50, 1$ & \textbf{0.71}/0.71 & \textbf{0.13}/0.13 & \textbf{3.49}/3.49 & 0.65 & 1.01 & 33.33 \\ $100, 50, 1$ & \textbf{0.77}/0.77 & \textbf{0.13}/0.13 & \textbf{3.37}/3.38 & 0.70 & 0.99 & 33.70 \\ $500, 50, 1$ & \textbf{0.94}/0.94 & \textbf{0.13}/0.13 & \textbf{3.28}/3.29 & 0.85 & 0.99 & 34.25 \\ $10, 100, 1$ & \textbf{0.54}/0.54 & \textbf{0.43}/0.51 & \textbf{16.04}/18.02 & 0.53 & 2.68 & 79.36 \\ $50, 100, 1$ & \textbf{0.68}/0.68 & \textbf{0.11}/0.11 & \textbf{5.79}/5.80 & 0.64 & 1.07 & 63.27 \\ $100, 100, 1$ & \textbf{0.74}/0.74 & \textbf{0.11}/0.11 & \textbf{5.33}/5.34 & 0.68 & 1.04 & 65.98 \\ $500, 100, 1$ & \textbf{0.91}/0.91 & \textbf{0.10}/0.10 & \textbf{5.14}/5.16 & 0.83 & 1.04 & 66.91 \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} \begin{figure}\label{fig:roccurves_sf} \end{figure} \end{appendices} \end{document}
// Copyright (c) 2009-2010 Satoshi Nakamoto // Copyright (c) 2009-2014 The Bitcoin developers // Distributed under the MIT software license, see the accompanying // file COPYING or http://www.opensource.org/licenses/mit-license.php. #ifndef H_DIAMOND_SCRIPT #define H_DIAMOND_SCRIPT #include "key.h" #include "tinyformat.h" #include "utilstrencodings.h" #include <stdexcept> #include <boost/variant.hpp> static const unsigned int MAX_SCRIPT_ELEMENT_SIZE = 520; // bytes /** Script opcodes */ enum opcodetype { // push value OP_0 = 0x00, OP_FALSE = OP_0, OP_PUSHDATA1 = 0x4c, OP_PUSHDATA2 = 0x4d, OP_PUSHDATA4 = 0x4e, OP_1NEGATE = 0x4f, OP_RESERVED = 0x50, OP_1 = 0x51, OP_TRUE=OP_1, OP_2 = 0x52, OP_3 = 0x53, OP_4 = 0x54, OP_5 = 0x55, OP_6 = 0x56, OP_7 = 0x57, OP_8 = 0x58, OP_9 = 0x59, OP_10 = 0x5a, OP_11 = 0x5b, OP_12 = 0x5c, OP_13 = 0x5d, OP_14 = 0x5e, OP_15 = 0x5f, OP_16 = 0x60, // control OP_NOP = 0x61, OP_VER = 0x62, OP_IF = 0x63, OP_NOTIF = 0x64, OP_VERIF = 0x65, OP_VERNOTIF = 0x66, OP_ELSE = 0x67, OP_ENDIF = 0x68, OP_VERIFY = 0x69, OP_RETURN = 0x6a, // stack ops OP_TOALTSTACK = 0x6b, OP_FROMALTSTACK = 0x6c, OP_2DROP = 0x6d, OP_2DUP = 0x6e, OP_3DUP = 0x6f, OP_2OVER = 0x70, OP_2ROT = 0x71, OP_2SWAP = 0x72, OP_IFDUP = 0x73, OP_DEPTH = 0x74, OP_DROP = 0x75, OP_DUP = 0x76, OP_NIP = 0x77, OP_OVER = 0x78, OP_PICK = 0x79, OP_ROLL = 0x7a, OP_ROT = 0x7b, OP_SWAP = 0x7c, OP_TUCK = 0x7d, // splice ops OP_CAT = 0x7e, OP_SUBSTR = 0x7f, OP_LEFT = 0x80, OP_RIGHT = 0x81, OP_SIZE = 0x82, // dia logic OP_INVERT = 0x83, OP_AND = 0x84, OP_OR = 0x85, OP_XOR = 0x86, OP_EQUAL = 0x87, OP_EQUALVERIFY = 0x88, OP_RESERVED1 = 0x89, OP_RESERVED2 = 0x8a, // numeric OP_1ADD = 0x8b, OP_1SUB = 0x8c, OP_2MUL = 0x8d, OP_2DIV = 0x8e, OP_NEGATE = 0x8f, OP_ABS = 0x90, OP_NOT = 0x91, OP_0NOTEQUAL = 0x92, OP_ADD = 0x93, OP_SUB = 0x94, OP_MUL = 0x95, OP_DIV = 0x96, OP_MOD = 0x97, OP_LSHIFT = 0x98, OP_RSHIFT = 0x99, OP_BOOLAND = 0x9a, OP_BOOLOR = 0x9b, OP_NUMEQUAL = 0x9c, OP_NUMEQUALVERIFY = 0x9d, 
OP_NUMNOTEQUAL = 0x9e, OP_LESSTHAN = 0x9f, OP_GREATERTHAN = 0xa0, OP_LESSTHANOREQUAL = 0xa1, OP_GREATERTHANOREQUAL = 0xa2, OP_MIN = 0xa3, OP_MAX = 0xa4, OP_WITHIN = 0xa5, // crypto OP_RIPEMD160 = 0xa6, OP_SHA1 = 0xa7, OP_SHA256 = 0xa8, OP_HASH160 = 0xa9, OP_HASH256 = 0xaa, OP_CODESEPARATOR = 0xab, OP_CHECKSIG = 0xac, OP_CHECKSIGVERIFY = 0xad, OP_CHECKMULTISIG = 0xae, OP_CHECKMULTISIGVERIFY = 0xaf, // expansion OP_NOP1 = 0xb0, OP_NOP2 = 0xb1, OP_NOP3 = 0xb2, OP_NOP4 = 0xb3, OP_NOP5 = 0xb4, OP_NOP6 = 0xb5, OP_NOP7 = 0xb6, OP_NOP8 = 0xb7, OP_NOP9 = 0xb8, OP_NOP10 = 0xb9, // template matching params OP_SMALLDATA = 0xf9, OP_SMALLINTEGER = 0xfa, OP_PUBKEYS = 0xfb, OP_PUBKEYHASH = 0xfd, OP_PUBKEY = 0xfe, OP_INVALIDOPCODE = 0xff, }; const char* GetOpName(opcodetype opcode); class scriptnum_error : public std::runtime_error { public: explicit scriptnum_error(const std::string& str) : std::runtime_error(str) {} }; class CScriptNum { // Numeric opcodes (OP_1ADD, etc) are restricted to operating on 4-byte integers. // The semantics are subtle, though: operands must be in the range [-2^31 +1...2^31 -1], // but results may overflow (and are valid as long as they are not used in a subsequent // numeric operation). CScriptNum enforces those semantics by storing results as // an int64 and allowing out-of-range values to be returned as a vector of bytes but // throwing an exception if arithmetic is done or the result is interpreted as an integer. 
public: explicit CScriptNum(const int64_t& n) { m_value = n; } explicit CScriptNum(const std::vector<unsigned char>& vch) { if (vch.size() > nMaxNumSize) throw scriptnum_error("CScriptNum(const std::vector<unsigned char>&) : overflow"); m_value = set_vch(vch); } inline bool operator==(const int64_t& rhs) const { return m_value == rhs; } inline bool operator!=(const int64_t& rhs) const { return m_value != rhs; } inline bool operator<=(const int64_t& rhs) const { return m_value <= rhs; } inline bool operator< (const int64_t& rhs) const { return m_value < rhs; } inline bool operator>=(const int64_t& rhs) const { return m_value >= rhs; } inline bool operator> (const int64_t& rhs) const { return m_value > rhs; } inline bool operator==(const CScriptNum& rhs) const { return operator==(rhs.m_value); } inline bool operator!=(const CScriptNum& rhs) const { return operator!=(rhs.m_value); } inline bool operator<=(const CScriptNum& rhs) const { return operator<=(rhs.m_value); } inline bool operator< (const CScriptNum& rhs) const { return operator< (rhs.m_value); } inline bool operator>=(const CScriptNum& rhs) const { return operator>=(rhs.m_value); } inline bool operator> (const CScriptNum& rhs) const { return operator> (rhs.m_value); } inline CScriptNum operator+( const int64_t& rhs) const { return CScriptNum(m_value + rhs);} inline CScriptNum operator-( const int64_t& rhs) const { return CScriptNum(m_value - rhs);} inline CScriptNum operator+( const CScriptNum& rhs) const { return operator+(rhs.m_value); } inline CScriptNum operator-( const CScriptNum& rhs) const { return operator-(rhs.m_value); } inline CScriptNum& operator+=( const CScriptNum& rhs) { return operator+=(rhs.m_value); } inline CScriptNum& operator-=( const CScriptNum& rhs) { return operator-=(rhs.m_value); } inline CScriptNum operator-() const { assert(m_value != std::numeric_limits<int64_t>::min()); return CScriptNum(-m_value); } inline CScriptNum& operator=( const int64_t& rhs) { m_value = rhs; return 
*this; } inline CScriptNum& operator+=( const int64_t& rhs) { assert(rhs == 0 || (rhs > 0 && m_value <= std::numeric_limits<int64_t>::max() - rhs) || (rhs < 0 && m_value >= std::numeric_limits<int64_t>::min() - rhs)); m_value += rhs; return *this; } inline CScriptNum& operator-=( const int64_t& rhs) { assert(rhs == 0 || (rhs > 0 && m_value >= std::numeric_limits<int64_t>::min() + rhs) || (rhs < 0 && m_value <= std::numeric_limits<int64_t>::max() + rhs)); m_value -= rhs; return *this; } int getint() const { if (m_value > std::numeric_limits<int>::max()) return std::numeric_limits<int>::max(); else if (m_value < std::numeric_limits<int>::min()) return std::numeric_limits<int>::min(); return m_value; } std::vector<unsigned char> getvch() const { return serialize(m_value); } static std::vector<unsigned char> serialize(const int64_t& value) { if(value == 0) return std::vector<unsigned char>(); std::vector<unsigned char> result; const bool neg = value < 0; uint64_t absvalue = neg ? -value : value; while(absvalue) { result.push_back(absvalue & 0xff); absvalue >>= 8; } // - If the most significant byte is >= 0x80 and the value is positive, push a // new zero-byte to make the significant byte < 0x80 again. // - If the most significant byte is >= 0x80 and the value is negative, push a // new 0x80 byte that will be popped off when converting to an integral. // - If the most significant byte is < 0x80 and the value is negative, add // 0x80 to it, since it will be subtracted and interpreted as a negative when // converting to an integral. if (result.back() & 0x80) result.push_back(neg ? 
0x80 : 0); else if (neg) result.back() |= 0x80; return result; } static const size_t nMaxNumSize = 4; private: static int64_t set_vch(const std::vector<unsigned char>& vch) { if (vch.empty()) return 0; int64_t result = 0; for (size_t i = 0; i != vch.size(); ++i) result |= static_cast<int64_t>(vch[i]) << 8*i; // If the input vector's most significant byte is 0x80, remove it from // the result's msb and return a negative. if (vch.back() & 0x80) return -((int64_t)(result & ~(0x80ULL << (8 * (vch.size() - 1))))); return result; } int64_t m_value; }; inline std::string ValueString(const std::vector<unsigned char>& vch) { if (vch.size() <= 4) return strprintf("%d", CScriptNum(vch).getint()); else return HexStr(vch); } /** Serialized script, used inside transaction inputs and outputs */ class CScript : public std::vector<unsigned char> { protected: CScript& push_int64(int64_t n) { if (n == -1 || (n >= 1 && n <= 16)) { push_back(n + (OP_1 - 1)); } else { *this << CScriptNum::serialize(n); } return *this; } public: CScript() { } CScript(const CScript& b) : std::vector<unsigned char>(b.begin(), b.end()) { } CScript(const_iterator pbegin, const_iterator pend) : std::vector<unsigned char>(pbegin, pend) { } CScript(const unsigned char* pbegin, const unsigned char* pend) : std::vector<unsigned char>(pbegin, pend) { } CScript& operator+=(const CScript& b) { insert(end(), b.begin(), b.end()); return *this; } friend CScript operator+(const CScript& a, const CScript& b) { CScript ret = a; ret += b; return ret; } CScript(int64_t b) { operator<<(b); } explicit CScript(opcodetype b) { operator<<(b); } explicit CScript(const uint256& b) { operator<<(b); } explicit CScript(const CScriptNum& b) { operator<<(b); } explicit CScript(const std::vector<unsigned char>& b) { operator<<(b); } CScript& operator<<(int64_t b) { return push_int64(b); } CScript& operator<<(opcodetype opcode) { if (opcode < 0 || opcode > 0xff) throw std::runtime_error("CScript::operator<<() : invalid opcode"); 
insert(end(), (unsigned char)opcode); return *this; } CScript& operator<<(const uint160& b) { insert(end(), sizeof(b)); insert(end(), (unsigned char*)&b, (unsigned char*)&b + sizeof(b)); return *this; } CScript& operator<<(const uint256& b) { insert(end(), sizeof(b)); insert(end(), (unsigned char*)&b, (unsigned char*)&b + sizeof(b)); return *this; } CScript& operator<<(const CPubKey& key) { assert(key.size() < OP_PUSHDATA1); insert(end(), (unsigned char)key.size()); insert(end(), key.begin(), key.end()); return *this; } CScript& operator<<(const CScriptNum& b) { *this << b.getvch(); return *this; } CScript& operator<<(const std::vector<unsigned char>& b) { if (b.size() < OP_PUSHDATA1) { insert(end(), (unsigned char)b.size()); } else if (b.size() <= 0xff) { insert(end(), OP_PUSHDATA1); insert(end(), (unsigned char)b.size()); } else if (b.size() <= 0xffff) { insert(end(), OP_PUSHDATA2); unsigned short nSize = b.size(); insert(end(), (unsigned char*)&nSize, (unsigned char*)&nSize + sizeof(nSize)); } else { insert(end(), OP_PUSHDATA4); unsigned int nSize = b.size(); insert(end(), (unsigned char*)&nSize, (unsigned char*)&nSize + sizeof(nSize)); } insert(end(), b.begin(), b.end()); return *this; } CScript& operator<<(const CScript& b) { // I'm not sure if this should push the script or concatenate scripts. 
// If there's ever a use for pushing a script onto a script, delete this member fn assert(!"Warning: Pushing a CScript onto a CScript with << is probably not intended, use + to concatenate!"); return *this; } bool GetOp(iterator& pc, opcodetype& opcodeRet, std::vector<unsigned char>& vchRet) { // Wrapper so it can be called with either iterator or const_iterator const_iterator pc2 = pc; bool fRet = GetOp2(pc2, opcodeRet, &vchRet); pc = begin() + (pc2 - begin()); return fRet; } bool GetOp(iterator& pc, opcodetype& opcodeRet) { const_iterator pc2 = pc; bool fRet = GetOp2(pc2, opcodeRet, NULL); pc = begin() + (pc2 - begin()); return fRet; } bool GetOp(const_iterator& pc, opcodetype& opcodeRet, std::vector<unsigned char>& vchRet) const { return GetOp2(pc, opcodeRet, &vchRet); } bool GetOp(const_iterator& pc, opcodetype& opcodeRet) const { return GetOp2(pc, opcodeRet, NULL); } bool GetOp2(const_iterator& pc, opcodetype& opcodeRet, std::vector<unsigned char>* pvchRet) const { opcodeRet = OP_INVALIDOPCODE; if (pvchRet) pvchRet->clear(); if (pc >= end()) return false; // Read instruction if (end() - pc < 1) return false; unsigned int opcode = *pc++; // Immediate operand if (opcode <= OP_PUSHDATA4) { unsigned int nSize = 0; if (opcode < OP_PUSHDATA1) { nSize = opcode; } else if (opcode == OP_PUSHDATA1) { if (end() - pc < 1) return false; nSize = *pc++; } else if (opcode == OP_PUSHDATA2) { if (end() - pc < 2) return false; nSize = 0; memcpy(&nSize, &pc[0], 2); pc += 2; } else if (opcode == OP_PUSHDATA4) { if (end() - pc < 4) return false; memcpy(&nSize, &pc[0], 4); pc += 4; } if (end() - pc < 0 || (unsigned int)(end() - pc) < nSize) return false; if (pvchRet) pvchRet->assign(pc, pc + nSize); pc += nSize; } opcodeRet = (opcodetype)opcode; return true; } // Encode/decode small integers: static int DecodeOP_N(opcodetype opcode) { if (opcode == OP_0) return 0; assert(opcode >= OP_1 && opcode <= OP_16); return (int)opcode - (int)(OP_1 - 1); } static opcodetype EncodeOP_N(int n) { 
assert(n >= 0 && n <= 16); if (n == 0) return OP_0; return (opcodetype)(OP_1+n-1); } int FindAndDelete(const CScript& b) { int nFound = 0; if (b.empty()) return nFound; iterator pc = begin(); opcodetype opcode; do { while (end() - pc >= (long)b.size() && memcmp(&pc[0], &b[0], b.size()) == 0) { pc = erase(pc, pc + b.size()); ++nFound; } } while (GetOp(pc, opcode)); return nFound; } int Find(opcodetype op) const { int nFound = 0; opcodetype opcode; for (const_iterator pc = begin(); pc != end() && GetOp(pc, opcode);) if (opcode == op) ++nFound; return nFound; } // Pre-version-0.6, Diamond always counted CHECKMULTISIGs // as 20 sigops. With pay-to-script-hash, that changed: // CHECKMULTISIGs serialized in scriptSigs are // counted more accurately, assuming they are of the form // ... OP_N CHECKMULTISIG ... unsigned int GetSigOpCount(bool fAccurate) const; // Accurately count sigOps, including sigOps in // pay-to-script-hash transactions: unsigned int GetSigOpCount(const CScript& scriptSig) const; bool IsPayToScriptHash() const; // Called by IsStandardTx and P2SH VerifyScript (which makes it consensus-critical). bool IsPushOnly() const; // Called by IsStandardTx. bool HasCanonicalPushes() const; // Returns whether the script is guaranteed to fail at execution, // regardless of the initial stack. This allows outputs to be pruned // instantly when entering the UTXO set. bool IsUnspendable() const { return (size() > 0 && *begin() == OP_RETURN); } std::string ToString() const { std::string str; opcodetype opcode; std::vector<unsigned char> vch; const_iterator pc = begin(); while (pc < end()) { if (!str.empty()) str += " "; if (!GetOp(pc, opcode, vch)) { str += "[error]"; return str; } if (0 <= opcode && opcode <= OP_PUSHDATA4) str += ValueString(vch); else str += GetOpName(opcode); } return str; } CScriptID GetID() const { return CScriptID(Hash160(*this)); } void clear() { // The default std::vector::clear() does not release memory. 
std::vector<unsigned char>().swap(*this); } }; #endif // H_DIAMOND_SCRIPT
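The `CScriptNum` comments above specify a minimal, sign-magnitude, little-endian byte encoding for script numbers. As a quick cross-check, here is an illustrative Python transliteration of `serialize`/`set_vch` (not part of the original source; function names are ours) that exercises the round trip:

```python
def script_num_serialize(value):
    """Sign-magnitude little-endian minimal encoding (mirrors CScriptNum::serialize)."""
    if value == 0:
        return b""
    neg = value < 0
    absvalue = -value if neg else value
    result = bytearray()
    while absvalue:
        result.append(absvalue & 0xFF)
        absvalue >>= 8
    if result[-1] & 0x80:
        # msb already set: append an extra sign byte (0x80 if negative, 0x00 if positive)
        result.append(0x80 if neg else 0x00)
    elif neg:
        # msb free: fold the sign bit into the top byte
        result[-1] |= 0x80
    return bytes(result)

def script_num_deserialize(data):
    """Inverse transform (mirrors CScriptNum::set_vch)."""
    if not data:
        return 0
    result = 0
    for i, byte in enumerate(data):
        result |= byte << (8 * i)
    if data[-1] & 0x80:
        # strip the sign bit from the most significant byte and negate
        return -(result & ~(0x80 << (8 * (len(data) - 1))))
    return result
```

For example, 127 encodes as a single byte `0x7f`, while 128 needs a trailing `0x00` so the top byte stays below `0x80`, and −1 encodes as `0x81`.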
// // Copyright (c) 2004-2017 Jaroslaw Kowalski <[email protected]>, Kim Christensen, Julian Verdurmen // // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions // are met: // // * Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // * Redistributions in binary form must reproduce the above copyright notice, // this list of conditions and the following disclaimer in the documentation // and/or other materials provided with the distribution. // // * Neither the name of Jaroslaw Kowalski nor the names of its // contributors may be used to endorse or promote products derived from this // software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" // AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE // IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE // ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE // LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR // CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF // SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS // INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN // CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) // ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF // THE POSSIBILITY OF SUCH DAMAGE. // namespace NLog.LayoutRenderers.Wrappers { using System.Text; using NLog.Config; using NLog.Layouts; /// <summary> /// Base class for <see cref="LayoutRenderer"/>s that wrap other <see cref="LayoutRenderer"/>s. /// /// This has the <see cref="Inner"/> property (which is the default) and can be used to wrap. 
/// </summary> /// <example> /// ${uppercase:${level}} //[DefaultParameter] /// ${uppercase:Inner=${level}} /// </example> public abstract class WrapperLayoutRendererBase : LayoutRenderer { /// <summary> /// Gets or sets the wrapped layout. /// /// [DefaultParameter] so Inner: is not required if it's the first /// </summary> /// <docgen category='Transformation Options' order='10' /> [DefaultParameter] public Layout Inner { get; set; } /// <summary> /// Renders the inner message, processes it and appends it to the specified <see cref="StringBuilder" />. /// </summary> /// <param name="builder">The <see cref="StringBuilder"/> to append the rendered data to.</param> /// <param name="logEvent">Logging event.</param> protected override void Append(StringBuilder builder, LogEventInfo logEvent) { string msg = this.RenderInner(logEvent); builder.Append(this.Transform(msg)); } /// <summary> /// Transforms the output of another layout. /// </summary> /// <param name="text">Output to be transform.</param> /// <remarks>If the <see cref="LogEventInfo"/> is needed, overwrite <see cref="Append"/>.</remarks> /// <returns>Transformed text.</returns> protected abstract string Transform(string text); /// <summary> /// Renders the inner layout contents. /// </summary> /// <param name="logEvent">The log event.</param> /// <returns>Contents of inner layout.</returns> protected virtual string RenderInner(LogEventInfo logEvent) { return this.Inner.Render(logEvent); } } }
In a matter of minutes and without a single line of code, Zapier allows you to connect Sendy and Klipfolio, with as many as 9 possible integrations. Are you ready to find your productivity superpowers? It's easy to connect Sendy + Klipfolio, and it requires absolutely zero coding experience; the only limit is your own imagination. Available actions include:

- Create a new draft campaign.
- Replace data in a data source in Klipfolio.
- Add data to a data source in Klipfolio.

Klipfolio makes it easy and affordable to build and share real-time dashboards. We connect to 400+ data sources and have 100+ pre-built visualizations to help you get started in minutes. Combine data from multiple data sources into a single visualization to build the metrics you need to run your business.
""" Inspect the status of a given project. """ import phabulous phab = phabulous.Phabulous() project = phab.project(id=481) # print out some stuff print "%s\b" % project for attr in ('name', 'date_created', 'phid', 'id'): print "%s: %s" % (attr.capitalize(), getattr(project, attr)) print "members:" for user in project.members: print "\t%s" % user.name for task in user.tasks[:5]: print "\t\t%s" % task.title
Stay connected to the world of news with inextlive. Read Hindi news about the weekly horoscope here: the latest weekly horoscope news and updates, along with the latest weekly horoscope videos and pictures. Breaking news, videos, and photos about the weekly horoscope, all in one place. Get the weekly horoscope information that interests you or may be useful to you, with the latest reports, news, videos, and pictures covering its different aspects. From politics to technology, entertainment, and culture, stay with inextlive for the latest happenings. Weekly horoscope 21 to 27 June: Gemini and Virgo will move further along the road to success this week; find out what is in store for the other signs as well. Weekly horoscope 14 to 20 June: the week will be excellent financially and for family matters for Cancer and Leo; see the outlook for all the signs. Weekly horoscope 17 to 23 May: this week will be very good for Aries and Cancer; see the full outlook for all the signs.
At first he spoke ill of me, and then he made his avowal.
See, if we talk about each district: every district, city, and state holds its elections at different times, so it cannot be said how many there are at once; in all, India has 723 districts.

The panchayat is a level of government at which the problems of a village are resolved locally and everyone receives equal justice, much as in a court.

If you want to contest a gram panchayat election, you must file an application and submit your name; whether you win then depends on whom the people of your village vote for.

If you are standing for the zila panchayat, whether you can win depends on the public, that is, the people of your village.

The Prime Minister's powers do not lapse during an election; some restrictions apply, but the Prime Minister continues until the election result is declared and a new government is formed.

Lanka has some 334 panchayat samitis.

If someone wants to become the head of a panchayat (the mukhiya), he must win the trust of all the people of that panchayat.

A nagar panchayat differs from a nagar nigam by population: at the bottom of the ladder is the panchayat, made up of villages, while a nagar panchayat covers larger villages, called qasbas, with larger populations.

The only way to become a judge is through the bar council examination; that is the standard procedure, and no one can become a judge otherwise.

The collector of Rewa district is Shri Om Prakash Srivastava.

The President is regarded as the first citizen of India; a presidential election is held every five years, and the President is chosen by an electoral college.
We looked at a globe and maps of the world and discussed how maps are made. The children then drew objects from an aerial view and a side view. They then tried to draw the classroom.
#!/bin/sh
# CYBERWATCH SAS - 2017
#
# Security fix for DSA-3222-1
#
# Security announcement date: 2015-04-12 00:00:00 UTC
# Script generation date: 2017-01-01 21:07:19 UTC
#
# Operating System: Debian 7 (Wheezy)
# Architecture: x86_64
#
# Vulnerable packages fixed in version:
# - chrony:1.24-3.1+deb7u3
#
# Latest versions recommended by the security team:
# - chrony:1.24-3.1+deb7u4
#
# CVE List:
# - CVE-2015-1821
# - CVE-2015-1822
# - CVE-2015-1853
#
# More details:
# - https://www.cyberwatch.fr/vulnerabilites
#
# Licence: Released under The MIT License (MIT), See LICENSE FILE

sudo apt-get install --only-upgrade chrony=1.24-3.1+deb7u4 -y
My nights pass before wakeful eyes, and my days amid the turning wheels.
NORTH ADAMS, MA - Berkshire Health Systems has announced the further expansion of the North Adams Campus of Berkshire Medical Center with a service that has never before been provided in Northern Berkshire: BMC's highly effective Cardiac Rehabilitation program. The announcement was made during a community open house event held at the Northern Berkshire Specialty Practices of BMC, on the first floor of the North Adams Campus, on Wednesday, January 18th. Each year, hundreds of Berkshire residents participate in this valuable, life-extending program. BMC Cardiac Rehabilitation is located at the Northern Berkshire Specialty Practices, on the first floor of the North Adams Campus. For patients who have suffered a recent heart attack or heart failure, or who have had a surgical intervention such as a bypass, a valve replacement or the installation of stents to improve blood flow, this monitored exercise, education and counseling program is an essential next step in recovery and a return to a healthy, active lifestyle. Patients undergo cardiac rehabilitation three days a week for a period of 12 weeks, and access the program through a referral from their cardiologist. BMC Cardiac Rehabilitation features state-of-the-art equipment and the expertise of clinical professionals specializing in the latest techniques for improving health following a cardiac incident. Cardiac Rehabilitation is housed inside the Northern Berkshire Specialty Practices of BMC, a convenient, single site for multiple physician specialties and associated programs, providing North Berkshire and Southern Vermont patients with greater access to care. Northern Berkshire Specialty Practices of BMC opened on November 1, 2016 on the first floor of the North Adams Campus, with access through the main lobby at 71 Hospital Avenue. 
Northern Berkshire Specialty Practices of BMC consolidates in one location the offices of several BMC physician practices, including: Cardiology Professional Services of BMC; Endocrinology & Metabolism of BMC and BHS Diabetes Education; Hematology Oncology Services of BMC; the Pain Diagnosis & Treatment Center of BMC; and Urology Professional Services of BMC. The North Adams Campus of Berkshire Medical Center provides an extensive array of healthcare services to the Northern Berkshire community, including 24-hour emergency care, primary care and specialty care, home care, outpatient imaging and mammography, endoscopy, outpatient urologic and gynecologic surgeries, laboratory services, renal dialysis, wound care, the Neighborhood for Health and more.
\begin{document} \begin{abstract} It is known that any tame hyperbolic 3-manifold with infinite volume and a single end is the geometric limit of a sequence of finite volume hyperbolic knot complements. Purcell and Souto showed that if the original manifold embeds in the 3-sphere, then such knots can be taken to lie in the 3-sphere. However, their proof was nonconstructive; no examples were produced. In this paper, we give a constructive proof in the geometrically finite case. That is, given a geometrically finite, tame hyperbolic 3-manifold with one end, we build an explicit family of knots whose complements converge to it geometrically. Our knots lie in the (topological) double of the original manifold. The construction generalises the class of fully augmented links to a Kleinian groups setting. \end{abstract} \title{Constructing knots with specified geometric limits} \section{Introduction}\label{Sec:Intro} In this paper, we construct finite volume hyperbolic 3-manifolds that converge geometrically to infinite volume ones. In 2010, Purcell and Souto proved that every tame infinite volume hyperbolic 3-manifold with a single end that embeds in $S^3$ is the geometric limit of complements of knots in $S^3$~\cite{PurcellSouto}. However, that was purely an existence result; the proof shed very little light on what the knots might look like. This paper is much more constructive. Starting with a tame, infinite volume hyperbolic 3-manifold $M$ with a single end, we give an algorithm to construct a sequence of knots that converge geometrically to $M$ --- with a cost. We can no longer ensure that our knot complements lie in $S^3$. The methods are to generalise the highly geometric fully augmented links in $S^3$ to lie on surfaces other than $S^2\subset S^3$. This will likely be of interest in its own right. 
Since their appearance in the appendix by Agol and Thurston in a paper of Lackenby~\cite{Lackenby:Vol}, fully augmented links have contributed a great deal to our understanding of the geometry of many knot and link complements with diagrams that project to $S^2$. For example, they have been used to bound volumes~\cite{FKP:DehnFilling} and cusp shapes~\cite{Purcell:CuspShapes}, give information on essential surfaces~\cite{BlairFuterTomova}, crosscap number~\cite{KalfagianniLee}, and short geodesics~\cite{Millichap}. Such links on $S^2$ are amenable to study via hyperbolic geometry because their complements are hyperbolic and contain a pair of totally geodesic surfaces meeting at right angles: a projection surface, coloured white, and a disconnected \emph{shaded surface} consisting of many 3-punctured spheres; see~\cite{Purcell:AugLinksSurvey}. While essential 3-punctured spheres are geodesic in any hyperbolic 3-manifold, the white projection surface does not remain geodesic when generalising to links on surfaces other than $S^2$. However, using machinery from circle packings and Kleinian groups, we are able to construct links with a geometry similar to the projection surface. We note other very recent generalisations of fully augmented links to lie in thickened surfaces, due to Adams \emph{et al} \cite{adamsetal:2021Generalized}, Kwon~\cite{kwon2020fully}, and Kwon and Tham~\cite{kwon2020hyperbolicity}. We work within a different manifold, as follows. Given a compact 3-manifold $\overline{M}$ with a single boundary component, the \emph{double of $\overline{M}$}, denoted $D(\overline{M})$, is the closed manifold obtained by gluing two copies of $\overline{M}$ by the identity along $\partial \overline{M}$. The first main result of this paper is the following. 
\begin{theorem}\label{Thm:MainDouble} Let $M$ be a geometrically finite hyperbolic 3-manifold of infinite volume that is homeomorphic to the interior of a compact manifold $\overline M$ with a single boundary component. Then there exists a sequence $M_n$ of finite volume hyperbolic manifolds that are knot complements in $D(\overline M)$, such that $M_n$ converges geometrically to $M$. Moreover, the method is constructive: we construct for $p\in M$ and any $R>0$ and $\epsilon>0$ a fully augmented link complement $M_{\epsilon,R}$ in $D(\overline M)$ with a basepoint $p_{\epsilon,R}$ such that the metric ball $B(p_{\epsilon,R},R)\subset M_{\epsilon,R}$ is $(1+\epsilon)$-bilipschitz to the metric ball $B(p,R) \subset M$. Performing sufficiently high Dehn filling along the crossing circles of the fully augmented link yields a knot complement, where the Dehn filling slopes can also be determined effectively, so that the resulting knot complement contains a metric ball that is $(1+\epsilon)^2$-bilipschitz to $B(p,R)$. \end{theorem} We prove \refthm{MainDouble} by first proving the theorem in the convex cocompact case. In \refsec{GFtoCC}, we extend the result to the geometrically finite case. The density theorem states that any hyperbolic 3-manifold $M$ with finitely generated fundamental group is the algebraic limit of a sequence of geometrically finite hyperbolic 3-manifolds; see Ohshika~\cite{ohshika2011realising} and Namazi and Souto~\cite{namazi2012nonrealizability}. Namazi and Souto proved a strong version of this theorem~\cite[Corollary~12.3]{namazi2012nonrealizability}: that in fact, the sequence can be chosen such that $M$ is also the geometric limit. Thus an immediate corollary of \refthm{MainDouble} is the following. \begin{corollary}\label{Cor:Density} Let $M$ be a hyperbolic 3-manifold of infinite volume which is homeomorphic to the interior of a compact manifold $\overline M$ with a single boundary component. 
Then there exists a sequence $M_n$ of finite volume hyperbolic manifolds that are knot complements in $D(\overline M)$, such that $M_n$ converges geometrically to $M$. \end{corollary} \section{Background}\label{Sec:Background} In this section we review definitions and results that we will need for the construction, particularly terminology and results in Kleinian groups and their relation to hyperbolic 3-manifolds. Further details are contained, for example, in the books \cite{marden2007outer} and \cite{kapovich2001hyperbolic}. \subsection{Kleinian Groups} Recall that the ideal boundary of $\mathbb H^3$ is homeomorphic to $S^2$, which can be viewed as the Riemann sphere, and that the group of isometries $\text{Isom}(\mathbb{H}^3)$ corresponds to the group of M\"obius transformations acting on the boundary. We mostly consider orientation preserving M\"obius transformations here, which may be viewed as elements in $\operatorname{PSL}(2,{\mathbb{C}})$. A discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ is called a \emph{Kleinian group}. \begin{definition} A point $x \in S^2$ is a \emph{limit point} of a Kleinian group $\Gamma$ if there exists a point $y \in S^2$ such that $\lim_{n \to \infty} A_n(y) = x$ for an infinite sequence of distinct elements $A_n \in \Gamma$. The \emph{limit set} of $\Gamma$ is $\Lambda(\Gamma) = \{ x \in S^2 \mid x \text{ is a limit point of $\Gamma$} \}.$ The \emph{domain of discontinuity} is the open set $\Omega(\Gamma) = S^2 \setminus \Lambda(\Gamma)$. This set is sometimes called the \emph{ordinary set} or \emph{regular set}. \end{definition} A Kleinian group $\Gamma$ is often studied by its quotient space: \[ \mathcal{M}(\Gamma) = (\mathbb{H}^3 \cup \Omega(\Gamma)) / \Gamma \] If $\Gamma$ is torsion-free, then $\mathcal{M}(\Gamma)$ is an oriented manifold with possibly empty boundary $\partial \mathcal{M}(\Gamma)=\Omega(\Gamma) / \Gamma$. 
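As a concrete illustration of these definitions (a standard example, not drawn from this paper), the limit set, domain of discontinuity, and quotient can all be written down explicitly for a cyclic loxodromic group:

```latex
% Standard example (not from the paper): a cyclic loxodromic group.
Let $\Gamma = \langle \gamma \rangle$ with $\gamma(z) = \lambda z$ for some
$\lambda \in {\mathbb{C}}$ with $|\lambda| > 1$. For any
$y \in S^2 \setminus \{0, \infty\}$ the orbit $\gamma^n(y) = \lambda^n y$
accumulates only at $0$ (as $n \to -\infty$) and at $\infty$ (as $n \to +\infty$), so
\[
\Lambda(\Gamma) = \{0, \infty\}, \qquad
\Omega(\Gamma) = S^2 \setminus \{0, \infty\} \cong {\mathbb{C}}^*.
\]
The quotient $\mathcal{M}(\Gamma) = (\mathbb{H}^3 \cup \Omega(\Gamma))/\Gamma$
is a solid torus, with conformal boundary the Hopf torus
$\Omega(\Gamma)/\Gamma = {\mathbb{C}}^*/\langle \lambda \rangle$.
```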
The interior ${\mathrm{int}}(\mathcal{M}(\Gamma)) = \mathbb{H}^3 / \Gamma$ has a complete hyperbolic structure, since its universal cover is $\mathbb{H}^3$. The fundamental group of ${\mathrm{int}}(\mathcal{M}(\Gamma))$ is isomorphic to $\Gamma$. By Ahlfors' finiteness theorem~\cite{ahlfors1964finitely, ahlfors1965finitelycorrection}, if $\Gamma$ is a finitely generated torsion-free Kleinian group, then $\Omega(\Gamma) / \Gamma$ is the union of a finite number of compact Riemann surfaces with at most a finite number of points removed. The boundary $\partial \mathcal{M}(\Gamma)=\Omega(\Gamma) / \Gamma$ endowed with this conformal structure is called the \emph{conformal boundary} of $\mathcal{M}(\Gamma)$. The Teichm\"uller space $\mathcal{T}(\partial \mathcal{M}(\Gamma))$ is the product of the Teichm\"uller spaces $\mathcal{T}(S_i)$, where the $S_i$ are the components of $\partial \mathcal{M}(\Gamma)$. In fact, the conformal boundary $\partial \mathcal{M}(\Gamma)$ has a \emph{projective structure}, since it is locally modelled on $(\widehat{{\mathbb{C}}},\operatorname{PSL}(2,{\mathbb{C}}))$. A (projective) \emph{circle} on $\partial \mathcal{M}(\Gamma)$ is a homotopically trivial, embedded $S^1\subset \partial \mathcal{M}(\Gamma)$ whose lifts to $\Omega(\Gamma)$ are circles on $S^2$. \begin{definition} Let $\Gamma$ be a Kleinian group and let $D$ be an open disk in $ \Omega(\Gamma)$ whose boundary is a circle $C$ on $S^2$. The circle $C$ determines a hyperbolic plane in $\mathbb H^3$. Denote by $H(D)\subset \mathbb{H}^3$ the closed half-space bounded by this plane that meets $D$. The \emph{convex hull} of $\Lambda(\Gamma)$ is the relatively closed set \begin{equation*} CH(\Gamma) = \mathbb H^3 - \bigcup_{D \subset \Omega(\Gamma)} H(D). \end{equation*} The \emph{convex core of $\mathcal{M}(\Gamma)$} is the quotient \begin{equation*} CC(\Gamma) = CH(\Gamma) / \Gamma \subset {\mathrm{int}}\left (\mathcal{M}(\Gamma) \right). 
\end{equation*} \end{definition} \begin{definition} A finitely generated Kleinian group $\Gamma$ for which the convex core $CC(\Gamma)$ has finite volume is called \emph{geometrically finite}. If the action of $\Gamma$ on $CH(\Gamma)$ is cocompact, then $\Gamma$ is said to be \emph{convex cocompact}. A hyperbolic 3-manifold is called \emph{geometrically finite} (resp.\ \emph{convex cocompact}) if it is isometric to ${\mathbb{H}}^3/\Gamma$ for a geometrically finite (resp.\ convex cocompact) $\Gamma$. \end{definition} If $\Gamma$ is convex cocompact and torsion-free, then it follows that $\partial \mathcal{M}(\Gamma)$ is a (possibly disconnected) compact Riemann surface without punctures. There are several equivalent definitions of a geometrically finite manifold in 3 dimensions; see Bowditch~\cite{Bowditch} for a discussion. For example, we will also use the following, which follows from the proof in Section~4 of~\cite{Bowditch} that GF5 is equivalent to GF3. \begin{theorem}[Bowditch~\cite{Bowditch}]\label{Thm:Bowditch} The torsion-free Kleinian group $\Gamma$ is geometrically finite if and only if there is a finite-sided fundamental domain ${\mathcal{F}}(\Gamma)\subset{\mathbb{H}}^3$ for the action of $\Gamma$ on ${\mathbb{H}}^3$, with the sides of ${\mathcal{F}}(\Gamma)$ consisting of geodesic hyperplanes. \end{theorem} If $CC(\Gamma)$ is compact, it must also have finite volume, and so convex cocompact manifolds are geometrically finite. However, geometrically finite manifolds may also contain cusps. Marden showed that a torsion-free Kleinian group $\Gamma$ is geometrically finite if and only if ${\mathcal{M}}(\Gamma)$ is compact outside of horoball neighbourhoods of finitely many rank one and rank two cusps~\cite{marden1974geometry}. The rank one cusps correspond to pairs of punctures on $\partial{\mathcal{M}}(\Gamma)$. 
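The simplest nontrivial case makes the convex hull and convex core concrete (a standard example, not taken from this paper):

```latex
% Standard example (not from the paper): the convex core of a cyclic
% loxodromic group is a single closed geodesic.
Let $\Gamma = \langle z \mapsto \lambda z \rangle$ with $|\lambda| > 1$, so
that $\Lambda(\Gamma) = \{0, \infty\}$. In the upper half-space model, the
half-spaces $H(D)$ over disks $D \subset \Omega(\Gamma)$ sweep out everything
except the vertical geodesic $\alpha$ from $0$ to $\infty$; hence
$CH(\Gamma) = \alpha$. The generator translates along $\alpha$ by hyperbolic
distance $\log|\lambda|$, so
\[
CC(\Gamma) = \alpha/\Gamma
\]
is a closed geodesic of length $\log|\lambda|$. In particular $CC(\Gamma)$ is
compact (with zero, hence finite, volume), so $\Gamma$ is convex cocompact and
geometrically finite.
```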
\subsection{The Quasiconformal Deformation Space} Consider a finitely generated, discrete subgroup $\overline{\Gamma}$ of $\text{Isom}(\mathbb{H}^3)$ such that the normal subgroup (of index at most two) $\Gamma :=\overline{\Gamma}\cap \operatorname{PSL}_2(\mathbb{C})$ is torsion-free. A representation $\rho\colon \overline{\Gamma}\rightarrow \text{Isom}(\mathbb{H}^3)$ is a \emph{quasiconformal deformation} of $\overline{\Gamma}$ if there is an (orientation-preserving) $K$--quasiconformal homeomorphism $h\colon S^2\rightarrow S^2$ for some $K\geq 1$, such that we have \[ \rho(\gamma)=h_*(\gamma):=h\circ \gamma\circ h^{-1}:S^2\rightarrow S^2 \quad \text{ for all }\gamma\in \overline{\Gamma}. \] (We shorten $K$--quasiconformal homeomorphism to $K$-qc homeomorphism below.) \begin{definition} The \emph{quasiconformal deformation space $QC(\overline{\Gamma})$} of $\overline{\Gamma}$ is defined as \[ QC(\overline{\Gamma}):=\{\rho \mid \rho \text{ is a quasiconformal deformation of } \overline{\Gamma} \}/\operatorname{PSL}_2(\mathbb{C}). \] It can be endowed with a Teichm\"uller metric given by \[ d_T([\rho],[\rho']):=\inf\{ \log K \mid \exists \phi \text{ $K$-qc homeomorphism with }\rho=\phi\circ \rho'\circ\phi^{-1}\}. \] We will always endow $QC(\overline{\Gamma})$ with the topology induced by this metric. \end{definition} Now let $\Gamma$ have index two in $\overline{\Gamma}$. Then the extension $\Gamma\subset \overline{\Gamma}$ amounts to an orientation-reversing isometric involution $\sigma$ on $\mathcal{M}(\Gamma)$, as follows. The space $\mathcal{M}(\overline{\Gamma})$ is a possibly non-orientable orbifold with boundary $\partial{\mathcal{M}}(\overline{\Gamma})$. The orbifold ${\mathcal{M}}(\overline{\Gamma})$ can be recovered as $\mathcal{M}(\Gamma)/\sigma$. In particular, $\partial{\mathcal{M}}(\overline{\Gamma})$ is given by the quotient $\partial \mathcal{M}(\Gamma)/\sigma$. 
Conversely, the Riemann surface double of the Klein surface $\partial \mathcal{M}(\overline{\Gamma})$ yields $\partial \mathcal{M}(\Gamma)$ identified by $\sigma$. Note that by passing to the Riemann surface double, we obtain a continuous map $j\colon\mathcal{T}(\partial \mathcal{M}(\overline{\Gamma}))\rightarrow \mathcal{T}( \partial \mathcal{M}(\Gamma))$ and that restricting gives a natural inclusion map $QC(\overline{\Gamma})\rightarrow QC(\Gamma)$ for any $\Gamma\subset \overline{\Gamma}$. The first part of the following theorem follows from work of Bers~\cite{BersKleinian}, Kra~\cite{KraKleinian} and Maskit~\cite{MaskitSelfmaps} when restricting to torsion-free Kleinian groups. \begin{theorem}\label{Thm:AhlforsBers} Let $\Gamma$ be a torsion-free finitely generated Kleinian group. Then there is a continuous map $\beta\colon {\mathcal{T}}(\partial{\mathcal{M}}(\Gamma))\to QC(\Gamma)$ given by associating to a marked conformal structure on $\partial\mathcal{M}(\Gamma)$ the corresponding quasiconformal deformation of $\Gamma$. Analogously, if $\overline{\Gamma}\subset \text{Isom}(\mathbb{H}^3)$ is such that $\Gamma=\overline{\Gamma}\cap \operatorname{PSL}_2(\mathbb{C})$, then the composition $\beta\circ j$ is a continuous map $\mathcal{T}(\partial \mathcal{M}(\overline{\Gamma}))\rightarrow QC(\overline{\Gamma})\subset QC(\Gamma)$. \end{theorem} \begin{proof} We recall a proof of the first part given by Kapovich \cite[p.187]{kapovich2001hyperbolic} and then show that the second part follows by the same argument; compare also \cite[section~8.15]{kapovich2001hyperbolic}. Consider elements in $\mathcal{T}(\partial \mathcal{M}(\Gamma))$ as equivalence classes $[f\colon X\to Y]$ of quasiconformal maps defined on the conformal boundary $X:=\partial \mathcal{M}(\Gamma)$ of the hyperbolic 3-manifold associated to $\Gamma$. 
Such a quasiconformal map $f$ induces a Beltrami differential $\mu$ on $X$, which lifts to a Beltrami differential $\mu'$ on $\Omega(\Gamma)$ that is invariant under the action of $\Gamma$. Extending $\mu'$ by $0$ yields a $\Gamma$-invariant Beltrami differential $\overline{\mu}$ defined globally on $S^2$. Solving the Beltrami equation for $\overline{\mu}$ yields a quasiconformal homeomorphism $h\colon S^2\to S^2$ with $K(h)=K(f)$; it conjugates each $\gamma\in \Gamma$ to a M\"obius transformation since $\gamma^*\overline{\mu}=\overline{\mu}$. Thus $h$ gives a representation $h_*\colon \Gamma\to \text{Isom}({\mathbb{H}}^3)$ via $\gamma\mapsto h\circ \gamma\circ h^{-1}$. We set $\beta([f\colon X\to Y])=h_*$. This map $\beta$ is well-defined, since equivalent marked Riemann surfaces yield the same conjugacy class of representations of $\Gamma$ by Sullivan's rigidity theorem. Moreover, it follows that $\beta$ is distance non-decreasing, since $K(h)=K(f)$ for any fixed marking surface $X$; in particular $\beta$ is continuous. If now $\overline{\Gamma}\subset \text{Isom}(\mathbb{H}^3)$ is such that $\Gamma=\overline{\Gamma}\cap \operatorname{PSL}_2(\mathbb{C})$, then elements in $\mathcal{T}(\partial \mathcal{M}(\overline{\Gamma}))$ can be viewed as equivalence classes of equivariant quasiconformal maps $f\colon (X,\sigma)\rightarrow (Y,\sigma_Y)$ (defined on $(X,\sigma)$ associated to $\overline{\Gamma}$) up to equivariant isotopies. Such a map $f$ induces a $\sigma$-invariant Beltrami differential $\mu$ on $X$. As before, $\mu$ lifts and extends to $\overline{\mu}$ on $S^2$, which is now $\overline{\Gamma}$-invariant. If $\overline{h}$ solves the Beltrami equation for $\overline{\mu}$, then it conjugates $\overline{\Gamma}$ to a representation $\overline{h}_*$ of $\overline{\Gamma}$; in other words, it yields a quasiconformal deformation $[\overline{h}_*]\in QC(\overline{\Gamma})$ of $\overline{\Gamma}$ which restricts to $[h_*]\in QC(\Gamma)$. 
Since the map $j$ obtained by forgetting the involutions is continuous, the claimed result follows. \end{proof} \subsection{Geometric convergence of 3-Manifolds} In this section we will discuss what it means for hyperbolic 3-manifolds to converge geometrically. Background can be found in \cite{benedetti2012lectures, marden2007outer, canary2006notesonnotes, kapovich2001hyperbolic}. Let $B_R(O)$ denote the hyperbolic ball of radius $R$ centered at an origin $O\in \mathbb{H}^3$. Fix such an origin together with a frame in its tangent space (still simply denoted by $O$). Then hyperbolic manifolds with framed basepoints are in bijective correspondence with torsion-free Kleinian groups: A hyperbolic manifold with framed basepoint $(M,p)$ corresponds to the unique torsion-free Kleinian group $\Gamma$ such that there is an isometry $M\rightarrow \mathbb{H}^3/\Gamma$ taking the framed basepoint $p$ to the image of $O$ in $\mathbb{H}^3/\Gamma$. Under this correspondence a change of framed basepoint corresponds to conjugation of the Kleinian group. We denote the hyperbolic manifold with framed basepoint corresponding to $\Gamma$ by $({\mathbb{H}}^3/\Gamma,O)$. \begin{definition} For $i=1,2$ let $(N_i,p_i)=(\mathbb{H}^3/\Gamma_i,O)$ be two hyperbolic manifolds with framed basepoints. We say that $(N_2,p_2)$ is \emph{$(\varepsilon,R)$-close} to $(N_1,p_1)$, if there is a $(1+\varepsilon)$-bilipschitz embedding $\tilde{f}\colon \mathbb H^3 \supset B_{R}(O)\to \mathbb{H}^3$ such that \begin{itemize} \item $\tilde{f}$ is $\varepsilon$-close in $C^0$ to the inclusion, that is $d_{C^0}(\tilde{f}, id_{\mathbb H^3}\vert_{B_R(O)})\leq \varepsilon$ and \item $\tilde{f}$ descends to an embedding $f\colon N_1\supset B_R(O)/\Gamma_1\rightarrow N_2$. 
\end{itemize} \end{definition} \begin{definition}\label{Def:geometric_conv} A sequence of hyperbolic manifolds with framed basepoints $(M_k,p_k)$ is said to \emph{converge geometrically} to $(M,p)$, if for all $\varepsilon,R>0$, there is $k_0\in {\mathbb{N}}$ such that for $k\geq k_0$ we have $(M_k,p_k)$ is $(\varepsilon,R)$-close to $(M,p)$. Further, we say that a sequence of hyperbolic manifolds $M_k$ \emph{converges geometrically} to a hyperbolic manifold $M$ if for some (or equivalently\footnote{Note that if $(M_n,p_n)=({\mathbb{H}}^3/\Gamma_n,O)$ converges to $(M,p)=({\mathbb{H}}^3/\Gamma, O)$ and $p'$ is another framed basepoint on $M$ corresponding to the image of $O'$ in ${\mathbb{H}}^3$, then $({\mathbb{H}}^3/\Gamma_n,O')$ converges to $(M,p')$.}, any) framed basepoint $p$ on $M$ there are framed basepoints $p_k$ on $M_k$ such that $(M_k,p_k)$ converges geometrically to $(M,p)$. Also, a sequence of embeddings $f_k:M\rightarrow M_k$ \emph{establishes geometric convergence of $M_k$ to $M$}, if for any framed basepoint $p$ of $M$ and any $(\varepsilon,R)$ the (lifts of the) maps $f_k$ show that $(M_k,f_k(p))$ is $(\varepsilon,R)$-close to $(M,p)$ for $k$ sufficiently large. \end{definition} \begin{remark} A sequence of hyperbolic manifolds with framed basepoints $(M_k,p_k)$ converges geometrically to $(M,p)$, if and only if the corresponding torsion-free Kleinian groups $\Gamma_k$ converge to $\Gamma$ in the Chabauty topology. Indeed, the proof of Theorem~E.1.14 in \cite{benedetti2012lectures} adapts to show that geometric convergence of hyperbolic manifolds with framed basepoints in the sense of Definition~\ref{Def:geometric_conv} implies the convergence of the associated Kleinian groups, even though we do not assume $\tilde{f}(0)=0$ or convergence in $C^\infty$. 
On the other hand, geometric convergence of hyperbolic manifolds with framed basepoints in the sense of \cite[Section~E.1]{benedetti2012lectures} (or, by Theorem~E.1.14 in \cite{benedetti2012lectures}, Chabauty convergence of torsion-free Kleinian groups) implies geometric convergence in the sense of Definition~\ref{Def:geometric_conv}. \end{remark} \subsection{Controlled equivariant extensions} We say a quasiconformal homeomorphism $\phi\colon S^2\to S^2$ conjugates a Kleinian group $\Gamma_1$ into a Kleinian group $\Gamma_2$ if the prescription $\gamma\mapsto \phi\circ \gamma\circ \phi^{-1}$ defines a group isomorphism $\phi\colon \Gamma_1\to \Gamma_2$. The following result is from McMullen~\cite[Corollary~B.23]{McMullenbookrenormal}. \begin{theorem}[Visual extension of qc conjugation]\label{Thm:visual} Suppose $\phi\colon \partial \mathbb H^3 \to \partial \mathbb H^3 $ is a $K$-quasiconformal homeomorphism conjugating $\Gamma_1$ into $\Gamma_2$. Then the map $\phi$ has an extension to an equivariant $K^{3/2}$-bilipschitz diffeomorphism $\Phi$ of $\mathbb H^3$. In particular the manifolds $\mathcal M (\Gamma_1)$ and $\mathcal M (\Gamma_2)$ are diffeomorphic. \end{theorem} Strictly speaking, according to the conclusion of \cite[Corollary~B.23]{McMullenbookrenormal}, the map $\Phi$ is an equivariant ``$K^{3/2}$-quasi-isometry''. By~\cite[A.2~p.186]{McMullenbookrenormal}, this means that the extension $\Phi$ is an equivariant Lipschitz map whose differential is bounded by $K^{3/2}$. But $\Phi$ arises from the visual extension of the Beltrami Isotopy Theorem~B.22, which is obtained by integrating a smooth vector field (Theorem~B.10); thus $\Phi$ is smooth. Since Corollary~B.23 also applies to the inverse map $\phi^{-1}$ and associates to it the map $\Phi^{-1}$ (by visually extending the reverse Beltrami isotopy), we can conclude that $\Phi$ is actually a $K^{3/2}$-bilipschitz diffeomorphism. \begin{corollary}\label{Cor:close} Let $\varepsilon>0$ and $R>0$. 
There is $\delta>0$ such that if $\phi\colon \partial \mathbb H^3 \to \partial \mathbb H^3 $ is a $(1+\delta)$-quasiconformal homeomorphism fixing $0,1,\infty$ and conjugating a torsion-free Kleinian group $\Gamma$ to $\Gamma_\phi$, then its visual extension $\Phi$ establishes that $(\mathbb H^3/\Gamma_\phi,p_\phi)$ is $(\varepsilon,R)$-close to $({\mathbb{H}}^3/\Gamma,p)$. Here both framed basepoints $p, p_\phi$ are induced by the framed basepoint $O$ in ${\mathbb{H}}^3$. \end{corollary} \begin{proof} As seen in the proof of~\cite[Theorem~B.21,~B.22]{McMullenbookrenormal}, the visual extension $\Phi\colon {\mathbb{H}}^3\cup \partial {\mathbb{H}}^3\rightarrow {\mathbb{H}}^3\cup \partial {\mathbb{H}}^3$ of a $K$-quasiconformal homeomorphism extends by reflection across $ \partial {\mathbb{H}}^3$ further to a $K^{9/2}$-quasiconformal map $\overline{\Phi}\colon S^3\rightarrow S^3$ fixing $0,1,\infty$ on the equatorial sphere $\partial {\mathbb{H}}^3\subset S^3$. Now in any dimension $n\geq 2$ and for any $L\geq 1$, the collection of $L$-quasiconformal homeomorphisms $S^n\rightarrow S^n$ fixing three specified points forms a normal family~\cite[Theorem~6.6.33]{Gehring2017}. If $L=1$, this consists only of the identity~\cite[Theorem~6.8.4]{Gehring2017}. It follows that for $K=1+\delta$ close to $1$, the visual extension of a $K$-quasiconformal homeomorphism $\phi$ is a homeomorphism $\Phi$ of ${\mathbb{H}}^3\cup \partial {\mathbb{H}}^3$ that is $C^0$-close to the identity. In particular, given $R,\varepsilon>0$ there is $\delta>0$, such that the visual extension $\Phi$ of any $(1+\delta)$-quasiconformal homeomorphism $\phi$ fixing $0,1,\infty$ is $\varepsilon$-close to the identity on $B_R(O)\subset {\mathbb{H}}^3$. Furthermore, the quasiconformal homeomorphism $\phi$ is $(\Gamma,\Gamma_\phi)$-equivariant by construction of $\Gamma_\phi$ and thus so is $\Phi$ by Theorem~\ref{Thm:visual}. Combining these statements yields the desired result. 
\end{proof} \subsection{Circle Packings} In this section we will define circle packings and present a few important results relating to them. For more information see Stephenson~\cite{stephenson2005introduction}. We will eventually use circle packings to glue 3-manifolds and obtain our desired knot and link complements. \begin{definition}\label{CirclePackingConformal} Let $\Gamma$ be a torsion-free convex cocompact Kleinian group and recall that its conformal boundary $\partial \mathcal{M}(\Gamma)$ has a natural projective structure. Let $V$ be a triangulation of $\partial \mathcal{M}(\Gamma)$. A \emph{circle packing} on $\partial \mathcal{M}(\Gamma)$ with nerve $V$ is a collection $P = \{c_v \vert\, v \text{ vertex of } V\}$ of (projective) circles on $\partial \mathcal{M}(\Gamma)$ bounding discs with disjoint interiors such that \begin{enumerate} \item each circle $c_v$ is centered at $v$, \item two circles $c_u, c_v$ are tangent if and only if $\langle u, v \rangle$ is an edge in $V$, and \item three circles $c_u, c_v, c_w$ bound a positively oriented curvilinear triangle in $\partial \mathcal{M}(\Gamma)$ if and only if $\langle u, v, w \rangle$ form a positively oriented face of $V$. \end{enumerate} More generally, if $V$ is just a connected graph embedded in $\partial \mathcal{M}(\Gamma)$, we say that a collection of (projective) circles satisfying the first two conditions forms a \emph{partial circle packing} with nerve $V$. Equivalently, we can consider locally finite, $\Gamma$-equivariant (partial) circle packings of $\Omega(\Gamma)$ obtained as lifts of (partial) circle packings on $\partial \mathcal{M}(\Gamma)$. \end{definition} See \reffig{CirclePacking} for an example of a circle packing. \begin{figure} \caption{Left: Example of a circle packing with its nerve; the edges going out all meet at the vertex at $\infty$.
Right: Three circles in a circle packing along with their dual circle, drawn with a dashed line.} \label{Fig:CirclePacking} \end{figure} \begin{definition} Let $P$ be a circle packing with nerve $V$ and let $c_1,c_2,c_3 \in P$ be circles corresponding to a triangle in $V$. The curvilinear triangle bounded by these circles is called an \emph{interstice}. There is a unique circle $c^{(1,2,3)}$ orthogonal to the circles $c_1,c_2,c_3$, intersecting them at their points of tangency. We denote by $P^*$ the collection of all such circles, one for each triangle in $V$, and call it the \emph{dual (partial) circle packing} of $P$, see \reffig{CirclePacking}. Note that the nerves of $P$ and $P^*$ are duals as graphs on the surface. \end{definition} Work of Brooks~\cite{brooks1986circle} shows that convex cocompact hyperbolic 3-manifolds admitting a circle packing on the conformal boundary are abundant, in the following sense. \begin{theorem}[Circle packings approximate]\label{Thm:BrooksKleinian} Let $M={\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold. Then, for every $\epsilon>0$, there is an $e^\epsilon$-quasiconformal homeomorphism $\phi$ fixing $0,1,\infty$, conjugating $\Gamma$ to $\Gamma_\epsilon$ such that the conformal boundary of $M_\epsilon={\mathbb{H}}^3/\Gamma_\epsilon$ admits a circle packing. Moreover, the process is constructive: the proof yields an explicit circle packing. Additionally, for fixed $r>0$, we may ensure none of the circles in the circle packing and none of the triangular interstices have diameter larger than $r$. Here we identify $\partial {\mathbb{H}}^3$ with the unit sphere in the tangent space $T_O{\mathbb{H}}^3$ at the framed basepoint $O$ in ${\mathbb{H}}^3$. \end{theorem} This is essentially contained in Brooks' proof of~\cite[Theorem~2]{brooks1986circle}, but the statement of the theorem there is different. In particular, Brooks does not consider diameters, nor the constructive nature of the argument.
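Before turning to the proof, we record a concrete illustration of the dual circle (a standard Euclidean computation, included only for intuition and not needed in the sequel). If three mutually tangent circles in the plane have radii $r_1, r_2, r_3$, their centres span a triangle with side lengths $r_i + r_j$, and the dual circle is the incircle of this triangle: it touches each side at the point of tangency of the corresponding pair of circles and meets each circle orthogonally there. By Heron's formula, with $s = r_1 + r_2 + r_3$, its radius is
\[
\rho \;=\; \frac{\operatorname{Area}}{s} \;=\; \sqrt{\frac{r_1 r_2 r_3}{r_1 + r_2 + r_3}}.
\]
For example, three circles of equal radius $r$ have dual circle of radius $r/\sqrt{3}$.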
We work through the proof below, highlighting the diameters and the constructive nature of the proof. \begin{proof}[Proof of \refthm{BrooksKleinian}] We begin by choosing effective constants controlling the diameters of the circles and interstices, using a compactness argument. We may uniformise each closed surface component of $\Omega(\Gamma)/\Gamma$ by a component of $\Omega(\Gamma)$. Because $\Gamma$ is convex cocompact, hence geometrically finite, its action has a finite-sided fundamental domain $F$ by \refthm{Bowditch}, giving a finite-sided fundamental region for the action of $\Gamma$ on $\Omega(\Gamma)$. The fundamental region will have boundary consisting of vertices and edges, and will be compact. We need to choose the circles to have bounded radii when seen from $O$ in ${\mathbb{H}}^3$. To do so, it is convenient to look at hyperbolic space in the Poincar\'e ball model $\mathbb{B}^3$ with $O$ at the origin. Then circles of radius $r$ in the unit sphere of $T_O {\mathbb{H}}^3$ correspond to circles of radius $r$ in the boundary of the Poincar\'e ball $\partial \mathbb{B}^3$. For given $r>0$, pick a small $r_0\leq r/2$ such that any disk $D$ of radius $r_0$ meeting $F$ intersects, apart from $F$, at most the immediate neighbouring fundamental domains to $F$ in $\Omega(\Gamma)$. Since $F$ is compact, it can be covered by finitely many open discs $D_i$ of radius $r_0$. All translates of these $D_i$ are round disks; therefore the diameter of each translate $\gamma(D_i)$ is bounded in terms of its area. This implies that there are only finitely many translates $\gamma(D_i)$ whose diameter is larger than $r_0$. Indeed, otherwise there would be an infinite disjoint collection of such translates of diameter larger than $r_0$, but this is impossible since the area of $S^2$ is finite. It follows that there are only finitely many translates $F_1,\ldots , F_k$ of $F$ that meet a translate $\gamma(D_i)$ whose diameter is larger than $r_0$.
Therefore we can pick $r_1\leq r_0$ such that for any disk $D$ of radius at most $r_1$ meeting $F$ the following holds: $D$ is contained in one of the $D_i$, and any translate of $D$ meeting $F_1, \ldots , F_k$ has diameter at most $r_0$. Note also that translates of $D$ \emph{not} meeting $F_1,\ldots,F_k$ automatically have diameter at most $r_0$ by construction. Now pack $F$ with circles of radius at most $r_1$ by the following constructive process, similar to that of~\cite[Lemma~2.3]{HoffmanPurcell}. First choose disjoint circles centred at vertices of $F$, taking their images under $\Gamma$ to ensure equivariance. Then take circles centred along edges, again ensuring translates under $\Gamma$ agree. Finally, take circles of radius at most $r_1$ with centres in the interior of the region. Extend this partial circle packing of $F$ to $\Omega(\Gamma)$ using the action of $\Gamma$, ensuring an equivariant packing. This yields a $\Gamma$-equivariant partial circle packing of $\Omega(\Gamma)$ consisting of circles of diameter at most $r_0$ and with regions complementary to the circles consisting of polygonal interstices, with circular arcs as boundaries. At this point, additional circles of radius at most $r_1$ may be added to $F$; we add sufficiently many to obtain interstitial regions that are either triangles or quads of diameter at most $r_1$; see Brooks~\cite{Brooks:Schottky} or a more detailed exposition in Stewart \cite[Lemma~3.7]{Stewart:Thesis}. Finally, extend again $\Gamma$-equivariantly to obtain an equivariant partial packing of $\Omega(\Gamma)$ with circles of diameter at most $r_0$, all of whose interstitial regions are triangles or quads of diameter at most $r_0\leq r/2$. Consider the group $\overline{\Gamma}$ generated by $\Gamma$ and all reflections across the circles in the packing. 
By \refthm{AhlforsBers}, the Teichm\"uller space of the complementary regions, which here are triangles and quads, maps continuously to the quasiconformal deformation space of $\overline{\Gamma}$ with its Teichm\"uller metric. The triangular interstitial regions are conformally rigid. The quads have a Teichm\"uller space homeomorphic to ${\mathbb{R}}$. Brooks shows in \cite{Brooks:Schottky} that there is an explicit homeomorphism $q$ from the Teichm\"uller space of a quad to $\mathbb{R}$ with the property that there is a full packing of a quad $Q$ by finitely many circles if and only if $q(Q)$ is rational. Thus, arbitrarily close to any quad $Q$ in the Teichm\"uller space of quads, there is another quad $Q'$ with $q(Q')$ rational. Applying this simultaneously to all the quads complementary to the packing, we obtain arbitrarily close configurations where $q(Q')$ is rational for all quads $Q'$. We may then uniquely pack circles into each such quad. By \refthm{AhlforsBers}, for any $\epsilon>0$, we can thus quasiconformally deform the associated representation $\rho\colon \overline{\Gamma}\to \text{Isom}(\mathbb{H}^3)$ by an $e^\epsilon$-quasiconformal homeomorphism $h$, normalized to fix the points $0,1,\infty$, to obtain a new convex cocompact representation with image $\overline{\Gamma}_\epsilon$, whose complementary quads are all rational. See \reffig{CircleDeformation}. \begin{figure} \caption{Once enough circles are added so a quad is sufficiently small, the space is deformed so that opposite circles become tangent. Doing this for every quad gives a circle packing.} \label{Fig:CircleDeformation} \end{figure} We need to ensure that the quasiconformal homeomorphism does not enlarge the diameters of circles and interstitial regions too much. Indeed, for any $K\geq 1$, the $K$-quasiconformal homeomorphisms of $S^2$ fixing $0,1,\infty$ form a normal family. Because we fix $0,1,\infty$, this normal family consists of only the identity map when $K=1$.
Thus any sequence of $K_i$-quasiconformal homeomorphisms of $S^2$ fixing $0,1, \infty$ with $K_i\rightarrow 1$ converges to the identity map on $S^2$; compare the proof of Corollary~\ref{Cor:close}. Thus, while the $e^\epsilon$-quasiconformal deformation may enlarge some of the radii of the circles, provided $\epsilon$ is small enough, the resulting circles and interstitial regions will have diameter at most $r$. \end{proof} \begin{definition} Let $V$ be a graph. A \emph{dimer} on $V$ is a colouring of edges such that each face is adjacent to exactly one coloured edge. \end{definition} \begin{lemma}\label{Lem:AddCirclesDimer} Let $\Gamma$ be a torsion-free convex cocompact Kleinian group and let $P$ be a $\Gamma$-equivariant circle packing of $\Omega(\Gamma)$ with nerve $V$. Then there exists a circle packing $\overline{P}$ with nerve $\overline{V}$ such that $V \subset \overline{V}$ and $\overline{V}$ admits a dimer. Further, the maximal diameter of circles and interstitial regions of $\overline{P}$ in $\Omega(\Gamma)$ does not exceed that of $P$. \end{lemma} \begin{proof} We define the circle packing $\overline{P}$ by adding to each triangular interstice in $P$ the unique circle which is tangent to all three surrounding circles. The effect on the nerve is to add a vertex to the interior of each triangle of $V$, and connect it by three edges to the existing vertices of $V$, subdividing each triangle into three triangles to form $\overline{V}$. Then each triangle in $\overline{V}$ has exactly one edge coming from $V$. Colour this edge. This gives a dimer on $\overline{V}$. Observe that because the action of $\Gamma$ takes triangular interstices to triangular interstices, the result is still equivariant with respect to $\Gamma$. Observe also that the diameters of circles and interstitial regions do not increase under this procedure. \end{proof} In general there are multiple ways to add circles to a circle packing so that the result admits a dimer.
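The colouring in \reflem{AddCirclesDimer} can also be checked by a count (a routine consistency check, not taken from the references). If the triangulation $V$ has $e$ edges and $f$ faces, then $3f = 2e$, since each face has three edges and each edge lies in two faces. The subdivided nerve $\overline{V}$ has $3f$ faces, and the coloured edges are the $e$ original edges of $V$, each adjacent to exactly two faces of $\overline{V}$, one on either side. The incidence count
\[
\#\{(\text{face of } \overline{V},\ \text{adjacent coloured edge})\} \;=\; 2e \;=\; 3f
\]
is therefore exactly the number of faces of $\overline{V}$, as required for each face to meet exactly one coloured edge.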
The strength of the above is that it works for any starting circle packing and is simple to execute. \section{Construction}\label{Sec:Construction} In this section, we construct the links of the main theorem. \subsection{Scooped Manifolds} \begin{definition} Let $M = \mathbb H^3 / \Gamma$ be a convex cocompact hyperbolic 3-manifold. Further assume that $\partial{\mathcal{M}}(\Gamma)=\Omega(\Gamma)/\Gamma$ admits a circle packing $P$ with dual packing $P^*$; then on $\Omega(\Gamma)$ there is a corresponding equivariant circle packing $\widetilde{P}$ with dual packing $\widetilde{P}^*$. For the circles $c_i$ in $\widetilde{P}$ on $\Omega(\Gamma)$, there are pairwise disjoint associated open half spaces $H(c_i) \subset {\mathbb{H}}^3$ which meet the conformal boundary $\partial{\mathbb{H}}^3$ at the interior of $c_i$. We then define the \emph{scooped manifold} $M_P$ to be the manifold formed by removing the half spaces associated with circles in $\widetilde{P}$ and its dual $\widetilde{P}^*$, and taking the quotient under $\Gamma$: \begin{equation*} M_P = \Big( {\mathbb{H}}^3 - \bigcup_{c \in \widetilde{P}\cup\widetilde{P}^*} H(c) \Big) \Big/ \Gamma. \end{equation*} \end{definition} The boundary of $M_P$ consists of hyperbolic ideal polygons whose faces come from $\partial H(c), c \in \widetilde{P}$ and $\partial H(c^*), c^* \in \widetilde{P}^*$, and edges come from the intersection of $\partial H(c)$ and $\partial H(c^*)$. Note $M_P$ is a manifold with corners whose interior is homeomorphic to $M$. \begin{lemma}\label{Lem:ScoopedContainingBall} Let $M = {\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold and $O\in {\mathbb{H}}^3$. Then for any $\epsilon>0$, there exists an $e^\epsilon$-quasiconformal homeomorphism $\phi$ fixing $0,1,\infty$ conjugating $\Gamma$ to $\Gamma_\epsilon$ satisfying the following: \begin{itemize} \item The associated convex cocompact manifold $M_\epsilon = {\mathbb{H}}^3/\Gamma_\epsilon$ admits a circle packing $P$ on its conformal boundary.
\item The metric ball $B(O,R)/\Gamma_\epsilon \subset M_\epsilon$ is completely contained in the corresponding scooped manifold $(M_\epsilon)_P$. \item Further, we can extend $P$ to a circle packing $\overline{P}$ that admits a dimer as in \reflem{AddCirclesDimer}, so that $B(O,R)/\Gamma_\epsilon$ is still completely contained in the scooped manifold $(M_\epsilon)_{\overline{P}}$. \end{itemize} \begin{proof} The construction of \refthm{BrooksKleinian} yields an $e^\epsilon$-quasiconformal homeomorphism fixing $0,1,\infty$, and giving $M_\epsilon$ with circle packing $P$ on its conformal boundary, where circles and triangular interstices have diameter at most $r$. For $r>0$ sufficiently small, we may ensure that the half-spaces $H(c)$ defined by the circles of $P$ and its dual $P^*$ have distance at least $2R$ from $O$ in ${\mathbb{H}}^3$. Thus we have $B(O,R)/\Gamma_\epsilon \subset (M_\epsilon)_P$. Finally, using \reflem{AddCirclesDimer}, we can extend $P$ to a circle packing $\overline{P}$ which admits a dimer. \end{proof} \begin{proposition}\label{Prop:BoundaryMP} Let $M={\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold. Further suppose that $\partial {\mathcal{M}}(\Gamma)$ admits a circle packing $P$ with nerve $K$ that has a fixed dimer. Then the scooped manifold $M_P$ has the following properties: \begin{enumerate} \item The faces on the boundary of $M_P$ can be checkerboard coloured, white and black. \item The white faces consist of totally geodesic ideal polygons. \item The black faces consist of totally geodesic ideal triangles. The dimer induces a pairing of the black faces, such that paired black faces share an ideal vertex. \item The ideal vertices are all four valent. \item The dihedral angle between faces on the boundary is $\pi /2$.
\end{enumerate} \end{proposition} \begin{proof} By the definition of scooped manifolds the boundary of $M_P$ consists of ideal geodesic polygons coming from the boundaries of the half spaces associated with circles in $P$ and $P^*$. The geodesic polygons coming from half spaces associated with circles in $P$ we colour white, while those coming from $P^*$ we colour black. Observe that the points of tangency of circles in $P$ and $P^*$ are the same, so these points of tangency form the ideal vertices of both the black and white faces. If $c \in P$ and $c^* \in P^*$ are circles such that $c \cap c^* \neq \varnothing$ then $c$ and $c^*$ intersect in exactly two points $u$ and $v$; these points of intersection correspond to ideal points on the boundary of $M_P$. There is an edge between $u$ and $v$ on $\partial M_P$ formed by $\partial H(c) \cap \partial H(c^*)$. This edge lies between the face corresponding to $H(c)$, which we have coloured white, and the face corresponding to $H(c^*)$, which we have coloured black. Since every edge on $\partial M_P$ occurs in this manner, every edge lies between a black and a white face, so the colouring we have assigned is a checkerboard colouring. The fact that ideal vertices are 4-valent follows from the fact that at each ideal vertex there are four circles which meet at this point: two from $P$ and two from $P^*$. Finally, since circles in $P$ and $P^*$ meet orthogonally, the dihedral angle at each edge must be $\pi/2$. To see that the black faces are triangles, observe that for every circle $c^* \in P^*$ we have by definition that $c^*$ passes through exactly three points of tangency of circles in $P$. These points are the ideal vertices of the black face corresponding to the half space $H(c^*)$. \begin{figure} \caption{Left shows four circles in $P$, with two dashed circles in $P^*$. Part of the nerve of $P$ is shown on the left with the coloured edge from the dimer drawn with two lines.
On the right we have the same two circles in $P^*$ along with the colouring of the associated part of the nerve of $P^*$.} \label{Fig:PairedBlackTri} \end{figure} Now we show how the black faces are paired. Let $K$ be the nerve of $P$, which has a dimer. Then in the dual graph $K^*$ of $K$, we can transfer the colouring of edges in $K$ to a colouring of edges in $K^*$, since edges are sent to edges. Note that $K^*$ is 3-valent since $K$ only consists of triangles. Since each face in $K$ is adjacent to exactly one coloured edge in the dimer, each vertex in $K^*$ is adjacent to exactly one coloured edge. This gives a pairing on the vertices in $K^*$ along this edge, which gives a pairing of the circles in $P^*$. Thus each black face in $\partial M_P$ is paired to another black face. See \reffig{PairedBlackTri}. \end{proof} \begin{lemma}\label{Lem:RectangleVertices} Let $M={\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold and suppose that $\partial {\mathcal{M}}(\Gamma)$ admits a circle packing $P$. For each ideal vertex $v_i \in \{v_1, \dots, v_n\}$ of the scooped manifold $M_P$, there is a horoball neighbourhood $H_i$ such that the $H_i$ are pairwise disjoint, and $\partial H_i \cap M_P$ is a Euclidean rectangle. \end{lemma} \begin{proof} Let $\{v_1, \dots , v_n\}$ be the collection of ideal vertices on $\partial M_P$. Note that there are two circles in $P$ and two circles in $P^*$ which meet tangentially at each $v_i$. Let $\widetilde{M_P}$ denote a lift of $M_P$ into $\mathbb H^3$ under a covering map, and $\widetilde{v_i}$ a single point in the corresponding lift of $v_i$ to $\partial \mathbb H^3$. Two circles of $P$ and two of $P^*$ lift to be tangent to $\widetilde{v_i}$. Let $\varphi$ denote a M\"obius transformation taking $\widetilde{v_i}$ to $\infty$.
It takes the circles projecting to $P$ to a pair of parallel lines, and those projecting to $P^*$ to another pair of parallel lines meeting the first two orthogonally, hence forming a Euclidean rectangle. Then any horoball $H_h$ of height $h$ centred at $\infty$ in ${\mathbb{H}}^3$ meets $\varphi(\widetilde{M_P})$ in $R_i\times (h,\infty)$, where $R_i$ is a Euclidean rectangle. This projects to a rectangular horoball neighbourhood of $v_i$. Finally, because there are only finitely many ideal vertices of $M_P$, we may choose the horoball about each vertex so that all horoballs are pairwise disjoint, as desired. \end{proof} \begin{lemma}\label{Lem:FiniteVolume} Let $M={\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold and suppose that $\partial {\mathcal{M}}(\Gamma)$ admits a circle packing $P$. Then the scooped manifold $M_P$ has finite volume. \end{lemma} \begin{proof} Let $\{H_1, \dots, H_n\}$ be pairwise disjoint horoballs, one for each ideal vertex of $M_P$, as in \reflem{RectangleVertices}. Then removing these horoball neighbourhoods from $M_P$ yields a compact manifold whose boundary consists of finitely many Euclidean rectangles $\partial H_i\cap M_P$ and finitely many pieces of hyperplanes $\partial H(c)\cap M_P$, where $c\in P$ or $P^*$ is from the circle packing or its dual. This compact manifold has finite volume. The horoball neighbourhoods also have finite volume: they are of the form $R_i\times [1,\infty)$ for $R_i$ a Euclidean rectangle, as in \reflem{RectangleVertices}, and in the upper half-space metric such a region has volume $\operatorname{Area}(R_i)\int_1^\infty t^{-3}\,dt = \operatorname{Area}(R_i)/2 < \infty$. Thus $M_P$ has finite volume. \end{proof} \subsection{Building Link Complements} In this section we describe how to build a hyperbolic link complement using a scooped manifold. The idea behind this construction is inspired by \emph{fully augmented links}, and their relation to circle packings on the sphere. The construction here generalises this by starting with circle packings on a surface of higher genus.
First, we define a generalisation of a fully augmented link. \begin{definition}\label{Def:FALNoTwists} Let $M$ be a 3-manifold and let $\Sigma$ be an embedded surface of genus $g \ge 2$ in $M$. Then a link $L$ in a tubular neighborhood of $\Sigma$ consisting of components $K_1, \dots, K_k$ and $C_1, \dots, C_n$ is called a \emph{fully augmented link on} $\Sigma$ if it has the following properties. \begin{enumerate} \item $\coprod_{1\leq i\leq k} K_i$ is embedded in $\Sigma$. \item $C_j$ bounds a disk $D_j$ in $M$ such that $D_j$ intersects $\Sigma$ transversely in a single arc, and $D_j$ meets the union $\coprod_{i} K_i$ in exactly two points, for $1 \leq j \leq n$. \item A projection of $L$ to $\Sigma$ yields a 4-valent \emph{diagram graph} on $\Sigma$. We require this diagram to be connected. \end{enumerate} The components $K_i$ are said to lie in the \emph{projection surface}, while the components $C_j$ are called \emph{crossing circles.} \end{definition} We may also add a half twist at crossing circles, corresponding to cutting along $D_j$ and regluing so that the two points of intersection of $\coprod_i K_i$ with $D_j$ are swapped. This is shown in \reffig{HalfTwist}. \begin{figure} \caption{How to cut and reglue at a crossing circle to add a half-twist.} \label{Fig:HalfTwist} \end{figure} \begin{definition} The link resulting from adding a single half-twist at some or no crossing circles is also called a \emph{fully augmented link on a surface}, even though condition (1) in \refdef{FALNoTwists} is typically not satisfied anymore after such a half-twist. If the distinction is important, we will say that the link of \refdef{FALNoTwists} is a fully augmented link on a surface \emph{without half-twists}. \end{definition} Fully augmented links on surfaces can be quite complicated. A 3-dimensional example on a genus-2 surface is shown in \reffig{FAL3D}.
\begin{figure} \caption{An example of a fully augmented link on a genus-2 surface, with crossing circles shown in red. This image was generated in Blender~\cite{blender}.} \label{Fig:FAL3D} \end{figure} \begin{definition} Let $M$ be a manifold with boundary. The \emph{double} of $M$ is the manifold \[ M \times \{0,1\} / \sim \quad \text{ where } (x,0) \sim (x,1) \text{ for all } x \in \partial M. \] We denote the double of $M$ by $\mathcal{D}(M)$. \end{definition} \begin{proposition} \label{Prop:DMNeverS3} Let $M$ be an orientable compact manifold with connected boundary. Then the double of $M$ is not $S^3$, unless $\partial M$ is homeomorphic to $S^2$. \end{proposition} \begin{proof} Let $M_1$ and $M_2$ denote the two copies of $M$ in the double of $M$, where $\text{int}(M_1) \cap \text{int} (M_2) = \varnothing$ and $\partial M_1 = \partial M_2$. Now for a point $x \in M_2$ let $\tilde {x}$ denote the corresponding point in $M_1$, and if $x \in M_1$ then $\tilde{x}$ denotes the corresponding point in $M_2$. Then the map $r\colon \mathcal{D}(M) \to M_1$ defined by \[ r(x) = \begin{cases} x \text{ if } x \in M_1, \\ \tilde {x} \text{ if } x \in M_2 \end{cases} \] restricts to the identity on $M_1$. Moreover, $r$ is continuous since it is continuous on $M_1$ and $M_2$ and these restrictions agree on $M_1\cap M_2 = \partial M_1$. Thus $r$ is a retraction of ${\mathcal{D}}(M)$ onto $M_1$. It follows that the inclusion $M \hookrightarrow {\mathcal{D}}(M)$ induces an injection $i_*\colon \pi_1(M_1) \to \pi_1(\mathcal D (M))$. On the other hand, $\pi_1(M_1)$ is nontrivial, since its abelianisation $H_1(M)$ has rank equal to half the rank of $H_1(\partial M_1)$, which is $2g\geq 2$ unless $\partial M_1=S^2$; see~\cite[Lemmas~3.5, 3.6]{Hatcher:3MfldNotes}. Thus ${\mathcal{D}}(M)$ is not $ S^3$ unless $\partial M=S^2$. \end{proof} We are now ready to start our construction.
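Before doing so, we illustrate \refprop{DMNeverS3} with a standard example (not needed in the sequel). If $M = H_g$ is a handlebody of genus $g \geq 1$, then its double is
\[
\mathcal{D}(H_g) \;\cong\; \underbrace{(S^1 \times S^2) \,\#\, \cdots \,\#\, (S^1 \times S^2)}_{g},
\]
whose fundamental group is free of rank $g$, hence nontrivial; in particular $\mathcal{D}(H_g) \neq S^3$, as the proposition predicts.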
\begin{construction}\label{Constr:FAL} Let $M={\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold whose conformal boundary $\partial{\mathcal{M}}(\Gamma)$ admits a circle packing $P$ with dimer. Form the scooped manifold $M_P$. By \refprop{BoundaryMP}, the boundary of $M_P$ is checkerboard coloured black and white, with all black faces consisting of paired totally geodesic ideal triangles. Take a second copy $M_P'$ of $M_P$ with the opposite orientation and identify each white face of $M_P$ with its copy in $M_P'$ via the identity map. Black faces in $M_P$ are paired by the dimer, with the coloured edge of the dimer running over a pair of ideal vertices in the two triangles. Glue these paired ideal triangles by a hyperbolic isometry, folding over the ideal vertex meeting the dimer. Do the same for the paired black triangles in $M_P'$. \end{construction} \begin{theorem}\label{Thm:LinksInDouble} Let $M = {\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold. Suppose the conformal boundary $\partial{\mathcal{M}}(\Gamma)$ admits a circle packing with a dimer. Then Construction~\ref{Constr:FAL} above yields a finite volume hyperbolic 3-manifold $N$ that is the complement of a fully augmented link $L$ on $\partial{\mathcal{M}}(\Gamma)$ in ${\mathcal{D}}({\mathcal{M}}(\Gamma))$, without half-twists. That is, $N = {\mathcal{D}}({\mathcal{M}}(\Gamma))-L$. \end{theorem} \begin{proof} Let $N$ denote the manifold obtained by the construction. There are three things we need to show: that the construction gives a submanifold of ${\mathcal{D}}({\mathcal{M}}(\Gamma))$, that the result is homeomorphic to a fully augmented link complement in ${\mathcal{D}}({\mathcal{M}}(\Gamma))$, and that it is a complete hyperbolic manifold of finite volume. For ease of notation, we will denote ${\mathcal{M}}(\Gamma)$ simply by ${\mathcal{M}}$.
We start by showing that $N$ is a submanifold of ${\mathcal{D}}({\mathcal{M}})$. The definition of a scooped manifold gives a natural embedding of $M_P$ and $M_P^\prime$ in ${\mathcal{D}}({\mathcal{M}})$ such that $M_P \cap M_P' = \varnothing$. Under this embedding the ideal vertices of $M_P$ and $M_P'$ are identified and lie on $\Sigma = \partial {\mathcal{M}} = \partial {\mathcal{M}}'$ in ${\mathcal{D}}({\mathcal{M}})$. By \reflem{RectangleVertices}, there is a collection of horoball neighbourhoods $H_i$ with boundaries meeting the ideal vertices in Euclidean rectangles $R_i$. By shrinking the $H_i$ if needed, we may assume that for each rectangle, the length of any side meeting a black triangle is $1/h$, for some fixed large $h$. Let $\overline{M_P}$ denote the result of removing the horoballs $H_i$ from $M_P$. Thus $\overline{M_P}$ is a compact manifold with corners. Similarly form $\overline{M_P}'$ by removing identically sized horoball neighbourhoods from $M_P'$. Since the (black) truncated side lengths of $M_P$ are identical, we can glue truncated black triangles in $\overline{M_P}$ to their pair in $\overline{M_P}$ by hyperbolic isometry, and similarly for $\overline{M_P}'$. We may similarly glue truncated white faces in $\overline{M_P}$ to those in $\overline{M_P}'$ by isometry, because we will be truncating an identical amount in $M_P$ and its reflection. Let $F \subset \partial \overline{M_P}$ be a truncated white face. Then there exists a projection $p\colon F \to \Sigma$. Similarly, the corresponding truncated white face $F' \subset \partial \overline{M_P}'$ has an analogous projection $p' \colon F' \to \Sigma$ such that $p(F) = p'(F')$. Both of these projections can be extended to isotopies of $\overline{M_P}$ and $\overline{M_P}'$ in ${\mathcal{D}}({\mathcal{M}})$. Since all such maps, for all white faces, correspond to isotopies, the manifold resulting from gluing the white faces is a submanifold of ${\mathcal{D}}({\mathcal{M}})$. 
Next we look at gluing pairs of truncated black triangles. Let $T_1$ and $T_2$ be two truncated black triangles in $\partial \overline{M_P}$ that are paired by the dimer on $P$ across a vertex $v$, and let $R_v$ be the rectangle which truncates $v$. Similarly let $T_1'$ and $T_2'$ be the corresponding truncated triangles in $\partial \overline{M_P}'$ with $R_v'$ the rectangle meeting them. After identifying the white faces, the non-truncated edges of $T_1$ and $T_1'$ will be identified, and similarly for $T_2$ and $T_2'$. Then after gluing white faces, $T_1 \cup T_1'$ and $T_2 \cup T_2'$ will correspond to a pair of spheres with three open disks removed. They are joined together via $R_v$ and $R_v'$: after we identify the white faces, the white edges of $R_v$ and $R_v'$ have been identified, forming a cylinder $A$. The black edges on the ends of this cylinder form one of the boundary components of both spheres $T_1 \cup T_1'$, $T_2 \cup T_2'$. See \reffig{GlueBlackFaces}. \begin{figure} \caption{The result of gluing the white faces in $\overline{M_P}$.} \label{Fig:GlueBlackFaces} \end{figure} We can then perform an isotopy expanding $A$ so that $T_1 \cup T_1'$ and $T_2 \cup T_2'$ and $A$ lie on a sphere $S$ with $A$ forming a closed neighbourhood of a north-south great circle for $S$. We continue the isotopy, identifying $T_1\cup T_1'$ to $T_2\cup T_2'$ across a ball bounded by this sphere, as shown in \reffig{GlueBlackFaces}. This corresponds to identifying $T_1$ with $T_2$, and $T_1'$ with $T_2'$. Observe that the result after identification is a disk $D$ with two open disks removed. The annulus $A$ has two boundary components identified to form a torus. This torus meets the black geodesic surface of $D$ on its outside boundary, corresponding to a longitude. The other two boundary components of $D$ correspond to two cylinders obtained by gluing vertices which do not pair black faces in the dimer. See \reffig{GlueBlackFaces}.
Thus the ideal vertices that pair black triangles correspond to crossing circles. Each of these steps is by isotopy in ${\mathcal{D}}({\mathcal{M}})$. We do this for each pair of truncated black triangles on $\partial \overline{M_P}$. Hence the gluing of $\overline{M_P}$ and $\overline{M_P}'$ gives a submanifold of ${\mathcal{D}}({\mathcal{M}})$. Finally, note that the gluing of $M_P$ now embeds as a submanifold of ${\mathcal{D}}({\mathcal{M}})$ because it is homeomorphic to the gluing of the truncated $\overline{M_P}$ without its boundary. We still need to show that $N$ is homeomorphic to a link complement in ${\mathcal{D}}({\mathcal{M}})$. We have seen that ideal vertices meeting paired black faces will correspond to crossing circles in ${\mathcal{D}}({\mathcal{M}})$. Now let $v_0 \in V$ be a vertex which does not pair two black faces. Let $R_{v_0}$ be the rectangle on $\partial \overline{M_P}$ associated with $v_0$. Then $R_{v_0}$ meets two truncated black triangles $T_{-1}, T_1 \subset \partial\overline{M_P}$. The triangle $T_1$ is paired to another truncated black triangle $T_2$ as specified by the dimer on $P$, across a vertex $v_1$. Similarly $T_{-1}$ is paired to another truncated black triangle $T_{-2}$, across a vertex $v_{-1}$. See \reffig{LinksInSurface}. After gluing $T_1$ and $T_2$, one of the black edges of $R_{v_0}$ will be glued to a black edge of another rectangle $R_{v_1}$ that intersects $T_2$, while the other black edge of $R_{v_0}$ will be glued to a black edge of a rectangle $R_{v_{-1}}$ that intersects $T_{-2}$. \begin{figure} \caption{First image shows the result after gluing truncated white faces (the truncated black triangles $T_i$ are shown as white).
The light grey cylinders correspond to the cylinders associated with the paired vertices $v_{-1}$ and $v_{1}$.} \label{Fig:LinksInSurface} \end{figure} After gluing white faces, the pairs $R_{v_k}$ and $R_{v_k}'$, for $k \in \{-1,0,1\}$, are glued along their white edges and form cylinders, which we denote $A_k$ for $k \in \{-1,0,1\}$. After gluing the black faces, $A_{-1}$ will be glued to one end of $A_0$ while $A_1$ will be glued to the other end. Let $A$ be the result of gluing these three cylinders together. The cylinder $A$ then passes through the two crossing circles associated with $v_{-1}$ and $v_{1}$. This is shown in the second image in \reffig{LinksInSurface}. Every cylinder associated with a vertex $v \in V$ that does not pair black faces has its ends glued to other cylinders. It follows that the collection of all such cylinders forms a collection of tori. If $T$ is such a torus then $T$ has a Euclidean structure given by gluing a chain of rectangles $R_{v_0},R_{v_1}, \dots , R_{v_k}$ together; these are glued along their black sides. This chain is then glued to the corresponding chain $R_{v_0}',R_{v_1}', \dots , R_{v_k}'$ via their white sides. Note that the white sides of $R_{v_i}$ and $R_{v_i}'$, for $i \in \{0,1, \dots , k\}$, lie on a geodesic surface formed from gluing the white faces. In this sense each of these tori lies on the white surface formed from the gluing of white faces, which is homeomorphic to $\partial {\mathcal{M}}$. Thus the glued manifold $N$ is homeomorphic to the complement of a fully augmented link on a surface without half twists. The ideal boundary components that correspond to vertices in $V$ pairing black faces are crossing circles, while the other vertices make up portions of the link components in the surface. Finally we show that the resulting gluing has a complete hyperbolic structure.
The fact that it has a hyperbolic structure follows from the fact that the gluing of faces is by isometry, and the faces meet at dihedral angle $\pi/2$, with four such angles identified under the gluing. Thus the sum of dihedral angles around any edge is $2\pi$; see for example~\cite[Theorem~4.7]{purcell2020hyperbolic}. To show that the structure is complete, we need to show that each of the ideal torus boundary components has an induced Euclidean structure; see for example~\cite[Theorem~4.10]{purcell2020hyperbolic}. We have seen that each torus boundary component is tiled by rectangles $R_v$ coming from ideal vertices of the scooped manifold. The cusp structure is induced by the gluing of the Euclidean rectangles. Since they are rectangles with angles $\pi/2$ and matching side lengths, they do indeed give the cusp a Euclidean structure. Finally, $N$ has finite volume since $M_P$ and $M_P'$ have finite volume, by \reflem{FiniteVolume}. Alternatively, since we have a complete hyperbolic 3-manifold whose ideal boundary consists of tori, it must have finite volume; see for example~\cite[Theorem~5.24]{purcell2020hyperbolic}. \end{proof} One nice property of the links formed from this identification is that we can use the dimer on the nerve to draw the link directly from the circle packing. \begin{corollary} The link formed from the gluing of $M_{P}$ and $M_{P}'$ can be drawn directly from the nerve of $P^*$ on $\Sigma$. \end{corollary} \begin{proof} The nerve of $P^*$ is 3-valent with a coloured edge given by the dimer on $P$. Each coloured edge in $P^*$ corresponds to an ideal vertex shared by two paired black faces on $\partial M_P$. Such a vertex corresponds to a crossing circle. The two edges that are not coloured correspond to arcs in $\Sigma$. So for each coloured edge in $P^*$, draw a crossing circle, with arcs between crossing circles given by the non-coloured edges of $P^*$. Figure~\ref{Fig:LinkFromGraph} shows the local picture.
\end{proof} \begin{figure} \caption{Left shows part of the nerve of $P$ in which the edge coloured by the dimer is shown dashed. Middle shows the corresponding part of the nerve of $P^*$. Right, replace the dashed line with a crossing circle to draw the link.} \label{Fig:LinkFromGraph} \end{figure} \subsection{Adding Half-Twists} \begin{lemma}\label{Lem:AddTwist} Let $C$ be a crossing circle of a fully augmented link $L$ embedded in a closed 3-manifold $M$ such that $M - L$ is hyperbolic. Then for the link $L'$ obtained by adding a half twist at $C$, the complement $M-L'$ is also hyperbolic. \end{lemma} \begin{proof} This follows from Adams~\cite{adams1985thrice}. The crossing circle $C$ bounds a 3-punctured sphere, which is isotopic to a totally geodesic surface. Cut along this surface and reglue via the homeomorphism of the 3-punctured sphere that keeps the puncture associated with $C$ fixed and swaps the other two punctures. Since there is only one complete hyperbolic structure on a 3-punctured sphere, this regluing is by an isometry, hence gives a hyperbolic manifold with the desired properties. \end{proof} If we look back at the original gluing in \refthm{LinksInDouble}, adding a half twist at a crossing circle corresponds to changing the gluing of the black faces in $\partial M_P$ and $\partial M_P'$. Instead of gluing a black triangle to its pair on the same half, it will be glued to the pair in the opposite half. \begin{lemma} \label{Lem:HalfTwistGluing} Let $N$ be a manifold formed in the manner of Construction~\ref{Constr:FAL}; such manifolds are complements of fully augmented links without half-twists by \refthm{LinksInDouble}. Adding a half twist at a crossing circle corresponds to gluing a black triangle $T_1$ of $M_P$ with the triangle $T_2'$ on $M_P'$ paired to the reflection $T_1'$ of $T_1$ by the dimer. \end{lemma} \begin{proof} A half-twist is added by rotating the half $T_1\cup T_1'$ of \reffig{GlueBlackFaces}, middle, by $180^\circ$ before gluing.
See \reffig{HalfTwistGluing}. This glues $T_1$ with $T_2'$, and $T_1'$ with $T_2$, via an orientation reversing isometry. \end{proof} \begin{figure} \caption{Shows how gluing black triangles in $\partial M_P$ to the paired triangle in $\partial M_P'$ corresponds to adding a half twist.} \label{Fig:HalfTwistGluing} \end{figure} \begin{lemma}\label{Lem:SingleComponent} Let $M={\mathbb{H}}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold. Let $N$ be the complement of a fully augmented link in ${\mathcal{D}}({\mathcal{M}})$ constructed in Construction~\ref{Constr:FAL}. Then we may form a new hyperbolic 3-manifold $N'$ such that $N'$ is the complement of a fully augmented link $L'$ on $\partial{\mathcal{M}} \subset {\mathcal{D}}({\mathcal{M}})$, where $L'$ has only one component that is not a crossing circle on each component of $\partial \mathcal{M}$, and $L'$ is formed from $L$ by adding half twists at some of the crossing circles of $L$. \end{lemma} \begin{proof} Let $K_1, \dots , K_n$ be the link components of $L$ that are not crossing circles. If $n \geq 2$, then since the diagram graph of $L$ is connected, there must be some crossing circle $C$ such that there are two distinct components $K_j$ and $K_k$ passing through $C$. Let $L_C$ denote the link formed by adding a half twist at $C$ to $L$. Adding the half twist at $C$ concatenates $K_j$ and $K_k$, reducing the number of components by one. Repeat until there is only one component that is not a crossing circle on each component of $\partial \mathcal{M}$. \end{proof} \subsection{Showing Geometric Convergence} Now we show how we can use the construction of the previous section to construct sequences of link complements which converge geometrically to $M$. \begin{lemma}\label{Lem:LinksConvergeDouble} Let $M=\mathbb{H}^3/\Gamma$ be a convex cocompact hyperbolic 3-manifold homeomorphic to the interior of a compact 3-manifold $\overline M$ and let $\epsilon>0$ and $R>0$.
Then there exists a finite volume hyperbolic 3-manifold with framed basepoint $(M_{\epsilon,R},p_{\epsilon,R})$ that is a link complement in ${\mathcal{D}}(\overline M)$ such that $(M_{\epsilon,R},p_{\epsilon,R})$ is $(\epsilon, R)$-close to $(M,p)$, where $p$ is the framed basepoint on $M={\mathbb{H}}^3/\Gamma$ induced by $O$ in ${\mathbb{H}}^3$. \end{lemma} \begin{proof} By \reflem{ScoopedContainingBall}, we can find an $e^\delta$-quasiconformal homeomorphism $\phi$ fixing $0,1,\infty$ conjugating $\Gamma$ to $\Gamma_\delta$ such that the associated convex-cocompact manifold $N_{\delta}={\mathbb{H}}^3/\Gamma_{\delta}$ admits a circle packing $P_{\delta}$ on its conformal boundary, and the metric ball $B(0,R)/\Gamma_\delta$ is completely contained in the corresponding scooped manifold $(N_{\delta})_{P_\delta}$. Further, we may take $N_{\delta}$, $P_{\delta}$ as above so that the nerve of $P_{\delta}$ admits a dimer. By Corollary \ref{Cor:close}, $N_{\delta}$ is $(\epsilon,R)$-close to $M$ for $\delta$ sufficiently small, if both $M={\mathbb{H}}^3/\Gamma$ and $N_{\delta}={\mathbb{H}}^3/\Gamma_{\delta}$ are endowed with the framed basepoints $p,p_\delta$ induced from $O$ in ${\mathbb{H}}^3$. Let $M_{\epsilon,R}$ be a link complement in ${\mathcal{D}}(\overline{M})$ formed from gluing two copies of $(N_{\delta})_{P_\delta}$ in the manner specified in \refthm{LinksInDouble} for $\delta=\delta(\epsilon,R)$ small as above. Since $(N_{\delta})_{P_\delta}$ isometrically embeds in $M_{\epsilon,R}$, we have (denoting the image of $p_\delta$ by $p_{\epsilon,R}$) that $(M_{\epsilon, R},p_{\epsilon,R})$ is $(\epsilon,R)$-close to $(M,p)$. \end{proof} As an immediate consequence we have: \begin{corollary}\label{Cor:LinksConvergeDouble} The links of \reflem{LinksConvergeDouble} converge geometrically to $M$.\qed \end{corollary} We now turn the link complements of \refcor{LinksConvergeDouble} into knot complements.
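For the reader's convenience, we record the form of geometric convergence being invoked here (our paraphrase; the precise definition in terms of framed basepoints appears earlier in the paper): a sequence of framed hyperbolic 3-manifolds $(N_n,q_n)$ converges geometrically to $(M,p)$ if and only if
\[
\text{for all } \epsilon>0 \text{ and } R>0 \text{ there exists } n_0 \text{ such that } (N_n,q_n) \text{ is } (\epsilon,R)\text{-close to } (M,p) \text{ for all } n\geq n_0.
\]
In particular, a sequence whose $n$-th term is $(1/n,n)$-close to $(M,p)$ converges geometrically, since $(1/n,n)$-closeness implies $(\epsilon,R)$-closeness once $1/n\leq\epsilon$ and $n\geq R$.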
\begin{theorem}\label{Thm:ConvergenceDouble} Let $M$ be a convex cocompact hyperbolic 3-manifold that is the interior of a compact 3-manifold $\overline{M}$. Then there exists a sequence of finite volume hyperbolic 3-manifolds $M_n$ that are link complements in ${\mathcal{D}} (\overline M)$, with one link component per boundary component of $\overline{M}$, such that $M_n$ converges geometrically to $M$. In particular, if $\overline{M}$ has a single boundary component, then $M$ is the geometric limit of a sequence of knot complements. \end{theorem} \begin{proof} By taking $(\epsilon,R)=(1/n,n)$ in \reflem{LinksConvergeDouble}, we find a sequence of fully augmented links on a surface in ${\mathcal{D}}(\overline{M})$ whose complements contain $(n+1)/n$-bilipschitz images of the balls $B(p,n)\subset M$. By \reflem{SingleComponent}, by adding half twists at some of the crossing circles we obtain a fully augmented link on the surface $\partial\overline{M}\subset {\mathcal{D}}(\overline{M})$ that has a single component that is not a crossing circle on each component of $\partial\overline{M}$. Lemma~\ref{Lem:HalfTwistGluing} shows that adding a half twist corresponds to changing the gluing of black faces, which does not affect the embedding $B(p,n)$ of \reflem{LinksConvergeDouble}. Thus we obtain a sequence $L_n$ of complements of fully augmented links in ${\mathcal{D}} (\overline{M})$ converging geometrically to $(M,p)$, for suitable framed basepoints, such that for each component of $\partial\overline{M}$ embedded in ${\mathcal{D}}(\overline{M})$, only one link component is not a crossing circle. Let $s$ be a positive integer. Observe that $1/s$ Dehn filling on a crossing circle $C$ of $L_n$ inserts $2s$ crossings into the twist region encircled by $C$ and removes the link component $C$. We do this for all crossing circles.
Let $i_n$ be the number of crossing circles in $L_n$, and let $ s_k^1, \dots , s_k^{i_n} $ denote sequences of positive integers approaching infinity as $k\rightarrow \infty$. Thurston's hyperbolic Dehn surgery theorem tells us that for fixed $n$ the sequence of manifolds $ M_n(1/s_k^1, \dots , 1/s_k^{i_n}) $ converges geometrically to $M_n$~\cite{thurston1979geometry}. Taking a diagonal sequence, we obtain a sequence of knot complements in ${\mathcal{D}} (\overline{M})$ converging geometrically to $M$. \end{proof} \subsection{Effective Dehn filling} We promised in the introduction a constructive method to build knot complements converging to $M$. Theorem~\ref{Thm:ConvergenceDouble} uses Thurston's hyperbolic Dehn surgery theorem to imply that such knots must exist; however, that theorem is not constructive. In this section, we explain how the proof can be modified to use cone deformation techniques to explicitly construct knots with the desired properties. To do so, we need to know more about the cusp shapes and normalised lengths of Dehn filling slopes on the link complements $M_{\epsilon,R}$ of \reflem{LinksConvergeDouble}. \begin{lemma}\label{Lem:CuspTiling} In the hyperbolic structure on the fully augmented link complement $M_{\epsilon,R}$ of \reflem{LinksConvergeDouble}, each cusp corresponding to a crossing circle is tiled by two identical Euclidean rectangles. Each rectangle has a pair of opposite sides coming from the intersection of a horospherical cusp torus with black sides, and a pair coming from an intersection with white sides. The slope $1/n$ on this cusp is isotopic to a curve as follows: \begin{itemize} \item If the crossing circle does not meet a half-twist, the slope is given by one step along a white side, plus or minus $2n$ steps along black sides. \item If the crossing circle meets a half-twist, then the meridian is sheared. Thus the slope is given by one step along a white side, plus or minus $(2n+1)$ steps along black sides.
\end{itemize} In either case, if $c$ is the number of crossings added to this twist region of the diagram after Dehn filling, then the slope is given by one step along a white side plus or minus $c$ steps along black sides. \end{lemma} \begin{proof} The proof is completely analogous to a similar result for crossing circle cusps in the classical setting of fully augmented links in the 3-sphere; see \cite[Proposition~3.2]{Purcell:AugLinksSurvey} or \cite[Lemma~2.6, Theorem~2.7]{FuterPurcell:Exceptional}. We walk through it in this setting. By \reflem{RectangleVertices}, each crossing circle cusp is tiled by rectangles, each with two opposite black sides, coming from intersections of black triangles with a horospherical torus about the cusp, and two opposite white sides, coming from intersections of white faces with a horospherical torus. Tracing through the gluing of Construction~\ref{Constr:FAL}, with reference to \reffig{GlueBlackFaces}, the crossing circle cusps are built by first gluing one rectangle from the original scooped manifold $M_P$ to an identical copy from $M_P'$, via a reflection in a white side. When there is no half-twist, the black sides of each of these rectangles are then glued together. A longitude runs over the two black sides, meeting two white sides along the way. A meridian runs over exactly one white side, meeting exactly one black side transversely along the way. When a half-twist is added, the longitude still runs over two black sides, but a meridian is obtained by taking a step along a white side plus or minus a step along a black side, depending on the direction of twist. We may assume that the direction of twist matches the sign of $n$, otherwise apply a homeomorphism giving a half-twist in the opposite direction, and reduce $|n|$ by two. This introduces shearing to the meridian. The slope $1/n$ runs over one meridian and $n$ longitudes. In the case of no half-twists, this is one step along a white side, plus $2n$ steps along black sides.
This adds $|2n| = c$ crossings to the twist region. When there is a half-twist, the slope $1/n$ still runs over one meridian plus $n$ longitudes, but now this is given by one step along a white side plus or minus one step along a black side (with sign matching the sign of $n$), plus $2n$ additional steps along black sides. Again there are $c=|2n+1|$ steps along black sides. \end{proof} The \emph{normalised length} of a slope $s$ on a cusp torus $T$ is the length of a geodesic representative of the slope in the Euclidean metric on $T$, divided by the square root of the area of the torus: \[ L(s) = \operatorname{len}(s) / \sqrt{\operatorname{area}(T)}. \] Observe that the normalised length is independent of scale, thus it is an invariant of the cusp rather than the choice of horospherical neighbourhood of the cusp. The following result, for fully augmented links in ${\mathcal{D}}(M)$, is analogous to a calculation for fully augmented links in $S^3$ found in \cite{Purcell:Volumes2007}. \begin{lemma}\label{Lem:NormLength} Let $c$ be the number of crossings added by Dehn filling at a crossing circle. Then the corresponding slope of the Dehn filling has normalised length at least $\sqrt{c}$. \end{lemma} \begin{proof} From \reflem{CuspTiling}, we know that the two rectangles in the cusp tiling of the crossing circle are identical, hence each white side has length $w$ and each black side length $b$. The area of the cusp, with or without half-twists, is given by $2bw$. Thus by \reflem{CuspTiling}, the normalised length of the slope $1/n$ is given by \[ L = \frac{\sqrt{w^2 + c^2 b^2}}{\sqrt{2bw}} = \sqrt{\frac{w}{2b}+\frac{c^2b}{2w}}. \] By the AM--GM inequality this is minimised when $w/(2b)$ equals $c^2b/(2w)$, that is, when $w = cb$, and the minimum value is $\sqrt{c}$. \end{proof} \begin{lemma}\label{Lem:UnivCrossing} Given $M$, $\epsilon>0, R>0$, and $M_{\epsilon,R}$ as in \reflem{LinksConvergeDouble}, let $\delta>0$ be such that $B(p,R)$ lies in the $\delta/(1+\epsilon)$-thick part of $M$.
Let $n$ denote the number of crossing circles of the fully augmented link in $M_{\epsilon,R}$. If after Dehn filling the crossing circles, the number of crossings added to each twist region is at least \[ n\cdot \max \left\{ \frac{107.6}{\delta^2}+14.41, \frac{45.20}{\delta^{5/2}\log(1+\epsilon)}+ 14.41 \right\}, \] then the inclusion map taking $B(p_{\epsilon,R},R)$ in $M_{\epsilon,R}$ into the complement of the resulting knot in ${\mathcal{D}}(M)$ is $(1+\epsilon)$-bilipschitz. It follows that the knot complement contains a set that is $(1+\epsilon)^2$-bilipschitz to $B(p,R)$ in the original $M$. \end{lemma} \begin{proof} By \reflem{LinksConvergeDouble}, if $B(p,R)$ lies in the $\delta/(1+\epsilon)$-thick part of $M$, then $B(p_{\epsilon,R},R)$ lies in the $\delta$-thick part of $M_{\epsilon,R}$ and is $(1+\epsilon)$-bilipschitz to $B(p,R)$. Let $L^2$ be given by \[ \frac{1}{L^2} = \sum_{i=1}^n \frac{1}{L_i^2}, \] where $L_i$ is the normalised length of the Dehn filling slope on the $i$-th crossing circle cusp. In \cite[Corollary~8.16]{FPS:EffectiveBilipschitz}, it is shown that if $L^2$ is at least the maximum given above, then the inclusion map on any submanifold of the $\delta$-thick part is $(1+\epsilon)$-bilipschitz. Let $C$ be the minimal number of crossings added to any twist region. By \reflem{NormLength}, $1/L_i^2 \leq 1/C$, so $1/L^2 \leq n/C$, or $L^2 \geq C/n$. Thus if $C/n$ is at least the maximum in the formula above, we may apply the corollary from \cite{FPS:EffectiveBilipschitz} to $B(p_{\epsilon,R},R)$. \end{proof} \section{Reducing geometrically finite to convex cocompact}\label{Sec:GFtoCC} The previous sections constructed link complements that converge to convex cocompact hyperbolic structures. In the case of a single topological end, the approximating manifolds are all knot complements. The construction can be extended almost immediately to geometrically finite manifolds of infinite volume.
However, now in the case that the manifold has a single topological end, if that end contains a rank-1 cusp, the immediate extension produces link complements rather than knot complements. Indeed, in the presence of rank one and rank two cusps our construction above leads to several cusp boundary components and thus to a complementary link with multiple components. Instead we will show that a geometrically finite manifold $M$ can be approximated geometrically by convex cocompact manifolds. Combining this with the previous results, it follows that $M$ can also be approximated geometrically by knot complements if it is of infinite volume with a single topological end. For rank two cusps, a version of Thurston's hyperbolic Dehn surgery theorem for geometrically finite hyperbolic manifolds shows that a geometrically finite manifold is the geometric limit of geometrically finite manifolds without rank two cusps; see, for example, work of Brock and Bromberg~\cite{BrockBromberg}. However in our setting, i.e.\ a 3-manifold with one end, rank one cusps are more problematic. Here we show that for any geometrically finite hyperbolic manifold $M$, there is a sequence of geometrically finite hyperbolic manifolds $M_j$ without rank one cusps converging to $M$. Moreover the sequence can be chosen such that the maps establishing this convergence are global diffeomorphisms. In particular $M_j$ is diffeomorphic to $M$ for each $j$. Results such as this go back to work of J{\o}rgensen, and are presumably implicit in the construction of Earle--Marden geometric coordinates (cf.\ \cite{Mardencoordinates} and the appendix of \cite{HubbardKoch}); compare also Marden~\cite[exercises 4-24 and 5-3]{marden2007outer}. We include the result and a proof for completeness. \begin{theorem}\label{Thm:CCtoGF} Let $M$ be a geometrically finite hyperbolic manifold.
Then there exists a sequence of geometrically finite hyperbolic manifolds $M_j$ without any rank one cusps and diffeomorphisms $M\rightarrow M_j$ establishing that the $M_j$ converge geometrically to $M$. The $M_j$ are explicitly constructed starting from $M$ and there are effective bounds for the convergence. \end{theorem} To prove \refthm{CCtoGF}, we first need to set up some notation. Fix a framed basepoint $p$ on $M$. Then $(M,p)$ corresponds to a Kleinian group $\Gamma$ such that $(M,p)=(\mathbb{H}^3/\Gamma,O)$. We will first construct Kleinian groups $\Gamma_{r(j)n(j)}$ corresponding to suitable hyperbolic 3-manifolds with framed basepoints $(M_j,p_j)$ that converge to $\Gamma$ in the Chabauty topology (and thus $(M_j,p_j)$ converges geometrically to $(M,p)$). When viewed as perturbations of $\Gamma$, the Kleinian groups $\Gamma_{r(j)n(j)}$ also converge algebraically to $\Gamma$ and the desired convergence properties will follow. Consider a fixed rank one cusp of $M$, generated by $\eta_1$. Up to conjugation, we may assume $\eta_1$ corresponds to $z\mapsto z+1$. For $r_1>0$, let $\gamma_{r_1}$ correspond to $z\mapsto z+r_1 \sqrt{-1}$. Add $\gamma_1:=\gamma_{r_1}$ to $\Gamma$ as a generator to obtain $\Gamma_{r_1}$, with presentation $\langle G,\gamma_1 \mid R, [\gamma_1,\eta_1]=1\rangle$, where $\langle G \mid R \rangle$ is a presentation of $\Gamma$. \begin{lemma}\label{Lem:KleinMaskit} For $r_1$ sufficiently large, $\Gamma_{r_1}$ is a discrete group and an HNN extension of $\Gamma$. \end{lemma} \begin{proof} This will be a consequence of the second Klein-Maskit combination theorem; we use the version as stated in Abikoff and Maskit~\cite{AbikoffMaskit}; for a proof see Maskit \cite[VII~E.5]{MaskitBook}. Let $H$ be a subgroup of $\Gamma$. Recall that a subset $B\subset {\mathbb{C}}\cup \{\infty\}$ is precisely invariant under $H$ in $\Gamma$ if (1) for all $h\in H$, $h(B)=B$, and (2) for all $\gamma \in \Gamma\setminus H$, $\gamma(B)\cap B = \varnothing$.
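Concretely, in the upper half-space model the two commuting parabolics introduced above may be represented in $\mathrm{PSL}(2,{\mathbb{C}})$ by
\[
\eta_1 = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad \gamma_{r_1} = \begin{pmatrix} 1 & r_1\sqrt{-1} \\ 0 & 1 \end{pmatrix};
\]
these upper triangular matrices visibly commute, so the relation $[\gamma_1,\eta_1]=1$ imposed in the presentation of $\Gamma_{r_1}$ is consistent. (This normalisation is recorded only for orientation and is not needed in the argument below.)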
In our setting, consider the round discs $D_\pm:=D_\pm(r_1)=\{z\in {\mathbb{C}} \mid \pm\mbox{Im}(z)>r_1/2\}$ in ${\mathbb{C}}\cup \{\infty\}$. We claim that for $r_1$ sufficiently large, the $\Gamma$-orbits of $D_+$ and $D_-$ are disjoint and that $D_\pm$ are both precisely invariant under the subgroup $H=\langle \eta_1 \rangle$ of $\Gamma$. This follows, for example, from work of Bowditch~\cite{Bowditch}, specifically his result that geometrically finite is equivalent to his definition~GF1, which we now recall. By Bowditch's definition GF1, the fundamental domain of a geometrically finite hyperbolic manifold is realised as the union of a compact set and a finite number of disjoint \emph{standard} cusp regions (cf.\ \cite[Proposition 4.4]{Bowditch} for a proof that geometrically finite hyperbolic manifolds admit standard cusp regions). A standard cusp for $\eta_1$ is modelled as follows. Consider the universal cover ${\mathbb{H}}^3$ of $M$, in the upper half-space model, with boundary ${\mathbb{C}} \cup \{\infty\}$. The parabolic $\eta_1$, taking $z$ to $z+1$, acts as translation on horospheres about infinity, taking vertical planes in ${\mathbb{H}}^3$ with boundary of the form $\{x\in {\mathbb{C}} \mid \mbox{Re}(x)=R\}$, for fixed $R\in {\mathbb{R}}$, to vertical planes in ${\mathbb{H}}^3$ with boundary $\{x\in {\mathbb{C}} \mid \mbox{Re}(x)=R+1\}$. There is an $\eta_1$-invariant subspace $P\subset {\mathbb{C}}$ with $P/\langle \eta_1 \rangle$ compact; in the 3-dimensional rank-1 case at hand, $P=P(r)$ can be chosen to be an infinite strip bounded by two lines $L(\pm r/2)=\{x\in {\mathbb{C}} \mid \mbox{Im}(x) = \pm r/2\}$. See \cite[Figure~3a]{Bowditch}. Bowditch's definition of a standard cusp implies that for some height $h>0$, the region \[C=C(P(r),h)=\{x\in{\mathbb{H}}^3 \mid d_{\mbox{euc}}(x,P(r))\geq h\}\] must satisfy $\gamma(C)\cap C = \varnothing$ for all $\gamma \in \Gamma\setminus H$.
For $r_1$ large, $D_\pm \subset C$, and therefore $\gamma(D_+\cup D_-)\cap (D_+\cup D_-)=\varnothing$ for $\gamma\in \Gamma\setminus H$. Combining this with the fact that $H$ preserves both $D_\pm$ separately, it follows that the $\Gamma$-orbits of $D_\pm$ are disjoint and that both $D_\pm$ are precisely invariant under $H$ in $\Gamma$. Now consider $f=\gamma_{r_1}$ defined as above. Note that since $\gamma_{r_1}$ and $\eta_1$ commute, $fHf^{-1} =H < \Gamma$. The observations above on Bowditch's definition~GF1 imply the following three conditions required for the second Klein-Maskit combination theorem: \begin{enumerate} \item $D_+$ is precisely invariant for $H$ in $\Gamma$; \item ${\mathbb{C}}-\gamma_{r_1}(\overline{D}_+)=D_-$ is precisely invariant for $fHf^{-1} = H$ in $\Gamma$; \item $\gamma(D_+) \cap D_- = \varnothing$ for all $\gamma\in\Gamma$. \end{enumerate} Then by the second Klein-Maskit combination theorem, $\Gamma_{r_1}$ is a discrete group and an HNN extension of $\Gamma$. \end{proof} \begin{proof}[Proof of \refthm{CCtoGF}] Apply \reflem{KleinMaskit} iteratively to all rank one cusps of $\Gamma$; we obtain a Kleinian group $\Gamma_r$, $r=(r_1,\ldots , r_k)$, for $r_{i+1}\gg r_i$, $i=1,\ldots,k-1$. It has $k$ rank two cusps corresponding to the $k$ rank one cusps of $M$, and additionally any rank two cusps inherited from $\Gamma$, but no rank one cusps. It has a presentation of the form \[ \Gamma_r=\langle G, \gamma_1,\ldots , \gamma_k\,\vert\, R, [\gamma_i,\eta_i]=1, \forall i=1,\ldots, k\rangle. \] As $r_1=\min_i r_i$ tends to infinity, these groups converge geometrically to $\Gamma$. Now perform $(1,n)$-Dehn surgery on the $k$ new rank two cusps of $\Gamma_r$, where the meridian of the $i$-th cusp (filled for $n=0$) corresponds to the new generator $\gamma_{r_i}$.
For $n$ sufficiently large, this yields Kleinian groups $\Gamma_{rn}$ with presentations \[ \Gamma_{rn}=\langle G, \gamma_1,\ldots , \gamma_k\,\vert\, R, [\gamma_i,\eta_i]=1,\gamma_i\eta_i^n=1,\, \forall i=1,\ldots, k\rangle. \] The groups $\Gamma_{rn}$ are canonically isomorphic to $\Gamma$: there is a natural isomorphism $m_{rn}:\Gamma\rightarrow \Gamma_{rn}$ whose inverse sends $\gamma_i$ to $\eta_i^{-n}$ for all $i=1,\ldots, k$. Thus the $\Gamma_{rn}$ are images of faithful, geometrically finite representations of $\Gamma$. Moreover, since the construction of $\Gamma_{rn}$ is via Dehn surgery, for $n$ large, $m_{rn}(\theta)$ for $\theta\in\Gamma$ is parabolic if and only if $\theta$ is part of a rank two cusp of $\Gamma$. In particular, the elements $m_{rn}(\eta_i)$ are hyperbolic and $\Gamma_{rn}$ has no rank one cusps. These representations converge algebraically to $\Gamma$ as $n\rightarrow \infty$, since Dehn surgery is a perturbation of the identity in terms of representations of the group $\Gamma_r$, thus in particular in terms of the subgroup $\Gamma \subset \Gamma_r$. A suitable formulation of Dehn surgery, due to Comar, can be found in \cite[Theorem~10.1]{AndersonCanaryMcCTopologyDeformation}. Moreover the Kleinian groups $\Gamma_{rn}$ converge geometrically (i.e.\ in the Chabauty topology) to $\Gamma_r$ as $n\rightarrow \infty$~\cite{thurston1979geometry}. Thus for each $i$, we may choose a sequence $(r_i(j))_{j\in \mathbb{N}}$ tending to infinity, and consider $r(j)=(r_1(j),\ldots,r_k(j))$ as above. Choosing $n(j)$ sufficiently large, we find that the diagonal sequence of Kleinian groups $\Gamma_{r(j)n(j)}$, uniformizing the geometrically finite hyperbolic manifolds $M_j$ without rank one cusps, converges both geometrically and algebraically to $\Gamma$, uniformizing $M$. This implies that the limit $M$ is diffeomorphic to $M_j$ for $j$ sufficiently large, as follows (compare \cite[Lemma~3.6]{AndersonCanaryMcCTopologyDeformation}).
Indeed, since $M$ is the geometric limit of the $M_j$, the compact core of $M$ embeds back into $M_j$ for $j$ large. This induces a map on fundamental groups $\Gamma\rightarrow \Gamma_{r(j)n(j)}$, which necessarily coincides with the isomorphism $\Gamma\rightarrow \Gamma_{r(j)n(j)}$ establishing that $\Gamma$ is the algebraic limit of $\Gamma_{r(j)n(j)}$. Thus the compact core of $M$ embeds as a compact core into $M_j$ for $j$ large. By the uniqueness of compact cores, and since a diffeomorphism of compact cores can be extended to a diffeomorphism of the ambient hyperbolic manifolds, the claimed result follows. Finally we remark on the constructive nature of the proof. Observe that the process above consists of first choosing a sufficiently large $r_i$ at each rank one cusp to build manifolds with rank two cusps, and then performing high Dehn filling. The choices of the $r_i$ will depend heavily on $M$, but given a fundamental domain for $M$, they can be determined effectively. By our choice of the $\gamma_{r_i}$, the new rank two cusps of the manifold ${\mathbb{H}}^3/\Gamma_r$ are rectangular. Thus the slopes $1/n$ have normalised length at least $\sqrt{n}$. Again applying cone deformation techniques, we may choose effective $n$ sufficiently large to obtain the constants required in the definition of geometric convergence, as in the proof of \reflem{UnivCrossing}. \end{proof} \begin{corollary}\label{Cor:MainGF} Let $M$ be a geometrically finite hyperbolic 3-manifold of infinite volume that is homeomorphic to the interior of a compact manifold $\overline{M}$ with a single boundary component. Then one can construct an explicit sequence of finite volume hyperbolic manifolds $M_n$ that are knot complements in $\mathcal{D}(\overline{M})$ such that $M_n$ converges geometrically to $M$. \qed \end{corollary} \end{document}
math
लोकसभा चुनाव २०१९: राजनाथ सिंह का दावा,कश्मीर को भारत से अलग करना नामुमकिन - ओनेइंडिया हिन्दी लोकसभा चुनाव २०१९: राजनाथ सिंह का दावा,कश्मीर को भारत से अलग करना नामुमकिन लोकसभा चुनाव २०१९ से पहले गृहमंत्री राजनाथ सिंह ने एक इंटर्व्यू में आर्टिकल ३७० और ३५ए पर बेबाकी से बोलते हुए साफ किया है कि कश्मीर भारत का अभिन्न अंग है । राजनाथ सिंह ने कहा कि, कश्मीर को भारत से अलग समझना बेवकूफी है और उसको बांटने का सोचना भी गलत ।
hindi
This story by Chris Shott (www.washingtoncitypaper.com/blogs/youngandhungry/2011/07/27/critical-distance-the-new-rules-for-restaurant-reviews-there-are-no-rules) appeared on www.washingtoncitypaper.com and is a thoughtful piece by about reviewing restaurants. Akin to our story which ran a few months back (insidefandb.com/2011/04/to-judge-or-not-to-judge-2/), the question still remains, when is it fair to review a restaurant? And what are restaurateurs, chefs, and publicists doing to change the landscape and the entire equation in this age of “new” media? Bottom line folks, there’s no whining in baseball! Let’s work together.
english
require 'methadone'
require_relative 'kb8_resource'
require_relative 'kb8_container_spec'
require_relative 'kb8_pod'

class Kb8Controller < Kb8Resource

  include Methadone::Main
  include Methadone::CLILogging

  DEPLOYMENT_LABEL = 'kb8_deploy_id'
  ORIGINAL_NAME = 'kb8_deploy_name'

  attr_accessor :selectors,
                :container_specs,
                :context,
                :pod_status_data,
                :pods,
                :intended_replicas,
                :actual_replicas,
                :new_deploy_id,
                :volumes

  class Selectors

    attr_accessor :selectors_hash

    def initialize(selectors_data)
      @selectors_hash = selectors_data
    end

    def to_s
      # Create a sorted key value string
      selector_string = ''
      @selectors_hash.keys.sort.each do |key|
        unless selector_string == ''
          selector_string = selector_string + ','
        end
        selector_string = "#{selector_string}#{key}=#{@selectors_hash[key]}"
      end
      selector_string
    end

    def ==(other_obj)
      (self.to_s == other_obj.to_s)
    end

    def [](key)
      @selectors_hash[key]
    end

    def []=(key, value)
      @selectors_hash[key] = value
    end
  end

  def initialize(yaml_data, file, context)
    # Initialize the base kb8 resource
    super(yaml_data, file)
    @container_specs = []
    @no_rolling_updates = context.settings.no_rolling_update
    @context = context

    # Initialise the selectors used to find relevant pods
    unless yaml_data['spec']
      raise "Invalid YAML - Missing spec in file:'#{file}'."
    end
    if yaml_data['spec'].has_key?('selector')
      @selectors = Selectors.new(yaml_data['spec']['selector'].dup)
    elsif yaml_data['spec']['template']['metadata'].has_key?('labels')
      @selectors = Selectors.new(yaml_data['spec']['template']['metadata']['labels'].dup)
    end
    unless @yaml_data['metadata']['labels']
      @yaml_data['metadata']['labels'] = {}
    end
    @intended_replicas = yaml_data['spec']['replicas']
    @pods = []

    # Now get the containers and set versions and private registry where applicable
    containers = []
    begin
      containers = yaml_data['spec']['template']['spec']['containers']
    rescue Exception => e
      raise $!, "Invalid YAML - Missing containers in controller file:'#{file}'.", $!.backtrace
    end
    containers.each do |item|
      container = Kb8ContainerSpec.new(item)
      container.update(context)
      @container_specs << container
    end
    if yaml_data['spec']['template']['spec'].has_key?('volumes')
      @volumes = Kb8Volumes.new(yaml_data['spec']['template']['spec']['volumes'])
    else
      @volumes = []
    end
    update_md5
  end

  def can_roll_update?
    if @no_rolling_updates
      return false
    end
    if exist?
      @live_data['metadata']['labels'].has_key?(DEPLOYMENT_LABEL)
    else
      false
    end
  end

  def deploy_id
    unless @new_deploy_id
      deploy_id = '0'
      if data && data['metadata'] && data['metadata']['labels'] &&
         data['metadata']['labels'].has_key?(DEPLOYMENT_LABEL)
        deploy_id = data['metadata']['labels'][DEPLOYMENT_LABEL]
      end
      # Grab the first digits...
      id = deploy_id.match(/[\d]+/).to_a.first
      unless id
        # We have the field but no digits so set back to 0
        id = 0
      end
      @new_deploy_id = "v#{id.to_i + 1}"
    end
    @new_deploy_id
  end

  def update_deployment_data
    # Add new deployment id and name etc...
    @name = "#{@original_name}-#{deploy_id}"
    @yaml_data['metadata']['name'] = @name
    @yaml_data['metadata']['labels'][ORIGINAL_NAME] = @original_name
    @yaml_data['metadata']['labels'][DEPLOYMENT_LABEL] = deploy_id
    @selectors[DEPLOYMENT_LABEL] = deploy_id
    @yaml_data['spec']['selector'] = @selectors.selectors_hash.dup
    @yaml_data['spec']['template']['metadata']['labels'][DEPLOYMENT_LABEL] = deploy_id
  end

  def refresh_status(refresh=false)
    if @pod_status_data.nil?
      refresh = true
    end
    if refresh
      @pod_status_data = Kb8Run.get_pod_status(@selectors.to_s)
    end
  end

  def update
    unless exist?
      raise "Can't update #{@kind}/#{@name} as it doesn't exist yet!"
    end
    unless can_roll_update?
      delete
      create
    else
      update_deployment_data
      yaml_string = YAML.dump(@yaml_data)
      begin
        Kb8Run.rolling_update(yaml_string, @live_data['metadata']['name'])
        @name = yaml_data['metadata']['name']
      ensure
        check_status
      end
    end
  end

  def delete
    super
    debug 'Waiting for delete to succeed...'
    # Wait for all pods to go...
    @pods.each do |pod|
      while pod.exist?(true) do
        sleep(1)
      end
    end
  end

  def exist?(refresh=false)
    exists = false
    if super(refresh)
      exists = true
    else
      @resources_of_kind['items'].each do |item|
        if item
          unless item['metadata'].nil?
            unless item['metadata']['labels'].nil?
              if item['metadata']['labels'][ORIGINAL_NAME] == @original_name
                if namespace_match?(item)
                  @live_data = item
                  @name = @live_data['metadata']['name']
                  exists = true
                end
              end
            end
          end
        end
      end
    end
    exists
  end

  def create
    # Ensure the controller resource is created using the parent class...
    update_deployment_data
    super
    check_status
  end

  def check_status
    # TODO: rewrite as health method / object...
    # Now wait until the pods are all running or one...
    phase_status = Kb8Pod::PHASE_UNKNOWN
    print "Waiting for controller #{@name}"
    $stdout.flush
    loop do
      # TODO: add a timeout option (or options for differing states...)
      sleep 1
      # Tidy stdout when debugging...
      debug "\n"
      phase_status = aggregate_phase(true)
      debug "Aggregate pod status:#{phase_status}"
      Deploy.print_progress
      break if phase_status != Kb8Pod::PHASE_PENDING &&
               phase_status != Kb8Pod::PHASE_UNKNOWN
    end
    # add new line after content above...
    puts ''
    if phase_status == Kb8Pod::PHASE_FAILED
      # TODO: some troubleshooting - at least show the logs!
      puts "Controller #{@name} entered failed state!"
      @pods.each do |pod|
        puts "Pod:#{pod.name}, status:#{pod.error_message}" if pod.error_message
      end
      mark_dirty
      exit 1
    end
    # Now check health of all pods...
    failed_pods = []
    condition = Kb8Pod::CONDITION_NOT_READY
    @pods.each do |pod|
      print "Waiting for '#{pod.name}'"
      $stdout.flush
      loop do
        sleep 1
        condition = pod.condition(true)
        Deploy.print_progress
        break if condition != Kb8Pod::CONDITION_NOT_READY
      end
      print "\n"
      if condition == Kb8Pod::CONDITION_READY
        debug "All good for #{pod.name}"
      else
        failed_pods << pod
      end
    end
    unless failed_pods.count < 1
      mark_dirty
      failed_pods.each do |pod|
        pod.report_on_pod_failure
      end
      exit 1
    end
  end

  def mark_dirty
    @pods.each do |pod|
      pod.mark_dirty
    end
  end

  def is_dirty?
    if exist?
      get_pod_data(true)
      @pods.each do |pod|
        if pod.is_dirty?
          return true
        end
      end
    end
    false
  end

  def get_pod_data(refresh=true)
    refresh_status(refresh)
    if refresh || (!@pods)
      debug 'Reloading pod data...'
      # First get all the pods...
      @actual_replicas = @pod_status_data['items'].count
      debug "Actual pods running:#{@actual_replicas}"
      debug "Intended pods running:#{@intended_replicas}"
      @pods = []
      @pod_status_data['items'].each do |pod|
        @pods << Kb8Pod.new(pod, self)
      end
    end
    @pods
  end

  # Will get the controller status or the last requested controller status
  # unless refresh is specified
  def aggregate_phase(refresh=false)
    # Return aggregate phase of all pods set to run:
    # 'Pending'
    # 'Running'
    # 'Succeeded'
    # 'Failed'
    # If ALL PODS Running, return Running
    # if ANY Failed set to Failed
    # If ANY Pending set to Pending (unless any set to Failed)
    get_pod_data(refresh)
    aggregate_phase = Kb8Pod::PHASE_UNKNOWN
    running_count = 0
    @pods.each do |pod|
      debug "Phase:#{pod.phase}"
      case pod.phase
      when Kb8Pod::PHASE_RUNNING
        # TODO check restart count here...?
        running_count = running_count + 1
      when Kb8Pod::PHASE_FAILED
        aggregate_phase = Kb8Pod::PHASE_FAILED
      when Kb8Pod::PHASE_PENDING
        # check pod conditions...
        condition = pod.condition(false)
        if condition == Kb8Pod::CONDITION_ERR_WAIT
          aggregate_phase = Kb8Pod::PHASE_FAILED
        end
        aggregate_phase = Kb8Pod::PHASE_PENDING unless aggregate_phase == Kb8Pod::PHASE_FAILED
      else
        # Nothing to do here...
      end
    end
    # If the phase has at least been discovered and all pods running, then...
    if aggregate_phase == Kb8Pod::PHASE_UNKNOWN && running_count == @pods.count
      aggregate_phase = Kb8Pod::PHASE_RUNNING
    end
    aggregate_phase
  end
end
code
\begin{document} \title{Behavioral QLTL} \begin{abstract} In this paper we introduce Behavioral QLTL, a ``behavioral'' variant of linear-time temporal logic on infinite traces with second-order quantifiers. Behavioral QLTL is characterized by the fact that the functions that assign the truth value of the quantified propositions along the trace can only depend on the past. In other words, such functions must be ``processes''. This gives the logic a strategic flavor that we usually associate with planning. Indeed we show that temporally extended planning in nondeterministic domains, as well as LTL synthesis, can be expressed in Behavioral QLTL through formulas with a simple quantification alternation, while, as this alternation increases, we obtain forms of planning/synthesis in which conditional and conformant planning aspects get mixed. We study this logic from the computational point of view and compare it to the original QLTL (with non-behavioral semantics) and with simpler forms of behavioral semantics. \end{abstract} \section{Introduction} \label{sec:introduction} Since the early days of AI, researchers have tried to reduce planning to logical reasoning, i.e., satisfiability, validity, logical implication \cite{Green69}. However, as we consider more and more sophisticated forms of planning, this becomes increasingly challenging, because the logical reasoning we need to perform is intrinsically second-order. One prominent case arises when we want to express the model of the world (aka the environment) and the goal of the agent directly in Linear-time Temporal Logic, the logic most used in formal methods to specify dynamic systems.
Examples are the pioneering work on using temporal logic as a sort of programming language through the MetateM framework~\cite{BFGGO95}, the work on temporally extended goals and declarative control constraints~\cite{BK98,BK00}, the work on planning via model-checking~\cite{CGGT97,CR00,DTV99,BCR01}, and the work on adopting \textsc{ltl}\xspace logical reasoning (plus some meta-theoretic manipulation) for certain forms of planning~\cite{CM98,CDV02}. More recently the connection between planning in nondeterministic domains and (reactive) synthesis \cite{PR89} has been investigated, and in fact it has been shown that planning in nondeterministic domains can be seen in general terms as a form of synthesis in the presence of a model of the environment~\cite{CBM19,AminofGMR19}, also related to synthesis under assumptions~\cite{CH07,ChatterjeeHJ08}. However, the connection between planning and synthesis also clarifies formally that we cannot directly use the standard forms of reasoning in \textsc{ltl}\xspace, such as satisfiability, validity, or logical implication, to do planning. Indeed the logical reasoning task we have to adopt is a nonstandard one, called ``\emph{realizability}'' \cite{Chu63, PR89}, which is inherently a second-order form of reasoning on \textsc{ltl}\xspace specifications. So one question arises naturally: can we use the second-order version of \textsc{ltl}\xspace, called {\textsc{qltl}}\xspace (or {\textsc{qptl}}\xspace) \cite{Sis85}, and thus avoid nonstandard forms of reasoning? In~\cite{CDV02} a positive answer was given, limited to conformant planning, in which we cannot observe the response of the environment to the agent's actions. Indeed it was shown that conformant planning can be captured through standard logical reasoning in {\textsc{qltl}}\xspace. But the results there do not extend to conditional planning (with or without full observability) in nondeterministic environment models. The reason for this is very profound.
Any plan must be a ``\emph{process}'', i.e., observe what has happened so far (the history), observe the current state, and take a decision on the next action to perform \cite{ALW89}. {\textsc{qltl}}\xspace instead interprets quantified propositions (i.e., in the case of planning, the actions to be chosen) through functions that have access to the whole trace, including future instants, and hence they cannot be considered processes. This is a clear mismatch that makes standard {\textsc{qltl}}\xspace unsuitable to capture planning through standard reasoning tasks. This mismatch is not only a characteristic of {\textsc{qltl}}\xspace but, interestingly, also of logics that have been introduced specifically for strategic reasoning. This has led to investigating the ``\emph{behavioral}'' semantics of these logics. In their seminal work~\cite{MMPV14}, Mogavero et al.\ introduce and analyze the behavioral aspects of quantification in Strategy Logic ({\textsc{sl}}\xspace): a logic for reasoning about the strategic behavior of agents in a context where the properties of executions are expressed in \textsc{ltl}\xspace. They show that restricting to behavioral quantification of strategies is a way of making the semantics both more realistic and computationally easier. In addition, they proved that behavioral and non-behavioral semantics coincide for certain fragments, including the well-known {\textsc{atl$^{\star}$}}\xspace~\cite{AHK02}, but diverge for more interesting classes of formulas, e.g., the ones that can express game-theoretic properties such as Nash equilibria and the like. This has started a new line of research that aims at identifying new notions of behavioral and non-behavioral quantification, as well as at characterizing the syntactic fragments that are invariant to these semantic variations~\cite{GBM18,GBM20}. In this paper we introduce a behavioral semantics for {\textsc{qltl}}\xspace.
The resulting logic, called \emph{Behavioral-{\textsc{qltl}}\xspace} (\textsc{qltl$_{\mthfun{B}}$}\xspace), is characterized by the fact that the functions that assign the truth value of the quantified propositions along the trace can only depend on the past. In other words, such functions must be ``\emph{processes}''. This makes \textsc{qltl$_{\mthfun{B}}$}\xspace perfectly suitable to capture extended forms of planning through standard reasoning tasks (satisfiability in particular). Indeed, temporally extended planning in nondeterministic domains, as well as \textsc{ltl}\xspace synthesis, can be expressed in \textsc{qltl$_{\mthfun{B}}$}\xspace through formulas with a simple quantification alternation, while, as this alternation increases, we obtain forms of planning/synthesis in which conditional and conformant planning aspects get mixed. For example, a \textsc{qltl$_{\mthfun{B}}$}\xspace formula of the form $\exists Y \forall X \psi$ represents conformant planning over the \textsc{ltl}\xspace specification (of both environment model and goal) $\psi$, as it is intended in~\cite{Rin04} (note that this could be done also with standard {\textsc{qltl}}\xspace, since $\exists Y$ is put upfront and hence cannot depend on the nondeterministic evolution of the fluents in the planning domain). Instead, the \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\forall X \exists Y \psi$ represents contingent planning, i.e., \emph{Planning in Fully Observable Nondeterministic Domains} (FOND), as well as \textsc{ltl}\xspace synthesis (which, instead, cannot be captured in standard {\textsc{qltl}}\xspace). By taking \textsc{qltl$_{\mthfun{B}}$}\xspace formulas with increased alternation, one can describe more complex forms of planning and synthesis.
The \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\forall X_1 \exists Y \forall X_2 \varphi$ represents the problem of \emph{Planning in Partially Observable Nondeterministic Domains} (POND), where $X_1$ and $X_2$ are the visible and hidden parts of the domain, respectively. By going even further in alternation, we get a generalized form of POND where a number of actuators with hierarchically reduced visibility are coordinated to execute a plan that fulfills a temporally extended goal in an environment model. Interestingly, this instantiates problems of distributed synthesis with hierarchical information studied in formal methods~\cite{PR90,KV01,FS05}. We study \textsc{qltl$_{\mthfun{B}}$}\xspace by introducing a formal semantics that is \emph{Skolem-based}, meaning that we make use of different notions of Skolem functions and Skolemization to define the truth value of formulas. The advantage of this approach is in the correspondence between Skolem functions and strategies/plans in synthesis and planning problems. As a matter of fact, they can all be represented as suitable labeled trees, describing all the possible executions of a given process that receives inputs from the environment. We show that satisfiability in \textsc{qltl$_{\mthfun{B}}$}\xspace is $(n+1)$-EXPTIME-complete, with $n$ being the number of quantification blocks of the form $\forall X_i\exists Y_i$ in the formula. This improves on the complexity of the satisfiability problem for classic {\textsc{qltl}}\xspace, which depends on the overall quantifier alternation in the formula, and in particular is $2(n-1)$-EXPSPACE-complete. Moreover, it also shows that the corresponding synthesis and planning problems can be optimally solved in \textsc{qltl$_{\mthfun{B}}$}\xspace, as the matching lower bound is provided by a reduction from these problems.
We also consider a weak variant of \textsc{qltl$_{\mthfun{B}}$}\xspace, called \emph{Weak Behavioral-{\textsc{qltl}}\xspace} (\textsc{qltl$_{\mthfun{WB}}$}\xspace), where the history is always fully visible and the visibility restrictions concern the current instant only. We show that the complexity of satisfiability in \textsc{qltl$_{\mthfun{WB}}$}\xspace is $2$-EXPTIME-complete, regardless of the number and alternation of quantifiers. The reason is that processes are modeled in such a way that they have full visibility of the past computation. This allows them to find the right plan by means of local reasoning, and so without employing computationally expensive automata projections. As in the case of \textsc{qltl$_{\mthfun{B}}$}\xspace, such a procedure is optimal for solving the corresponding synthesis problems, as the matching lower bound is again provided by a reduction from them. \endinput \section{Quantified Linear-Time Temporal Logic} \label{sec:QLTL} We introduce \emph{Quantified Linear-Time Temporal Logic} as an extension of \emph{Linear-Time Temporal Logic}. \paragraph{Linear-Time Temporal Logic} Linear-Time Temporal Logic (\textsc{ltl}\xspace) over infinite traces was originally proposed in Computer Science as a specification language for concurrent programs~\cite{Pnu77}. Formulas of \textsc{ltl}\xspace are built from a set $\mthsym{Var}$ of \emph{propositional variables} (or simply variables), together with Boolean and temporal operators. Its syntax can be described as follows: \begin{center} $\varphi ::= x \mid \neg \varphi \mid \varphi \vee \varphi \mid \varphi \wedge \varphi \mid \mthsym{X} \varphi \mid \varphi \mthsym{U} \varphi$ \end{center} where $x \in \mthsym{Var}$ is a propositional variable. Intuitively, the formula $\mthsym{X} \varphi$ says that $\varphi$ holds at the \emph{next} instant. Moreover, the formula $\varphi_1 \mthsym{U} \varphi_2$ says that at some future instant $\varphi_2$ holds and, \emph{until} that point, $\varphi_1$ holds.
We also use the standard Boolean abbreviations $\mthsym{true}\xspace:= x \vee \neg x$ (\emph{true}), $\mthsym{false}\xspace:= \neg \mthsym{true}\xspace$ (\emph{false}), and $\varphi_1 \to \varphi_2 := \neg \varphi_{1} \vee \varphi_{2}$ (\emph{implies}). In addition, we use the binary operator $\varphi_{1} \mthsym{R} \varphi_{2} \doteq \neg (\neg \varphi_{1} \mthsym{U} \neg \varphi_{2})$ (\emph{release}) and the unary operators $\mthsym{F} \varphi := \mthsym{true}\xspace \mthsym{U} \varphi$ (\emph{eventually}) and $\mthsym{G} \varphi := \neg \mthsym{F} \neg \varphi$ (\emph{globally}). The classic semantics of \textsc{ltl}\xspace is given in terms of infinite traces, i.e., truth values over the natural numbers. More precisely, an \emph{interpretation} $\pi: \SetN \to 2^{\mthsym{Var}}$ is a function that maps each natural number $i$ to a truth assignment $\pi(i) \in 2^{\mthsym{Var}}$ over the set of variables $\mthsym{Var}$. Throughout the paper, we refer to finite \emph{segments} of a computation $\pi$. More precisely, for two indexes $i, j \in \SetN$, by $\pi(i,j) \doteq \pi(i), \ldots, \pi(j) \in (2^{\mthsym{Var}})^{*}$ we denote the finite segment of $\pi$ from its $i$-th to its $j$-th position. A segment $\pi(0,j)$ starting from $0$ is also called a \emph{prefix} and is sometimes denoted $\pi_{\leq j}$.
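To experiment concretely with interpretations, one can represent them finitely as ``lasso'' traces: a finite prefix followed by a loop repeated forever. The Python sketch below (the encoding and names are ours, not from the paper) evaluates the satisfaction relation for the core \textsc{ltl}\xspace operators defined next on such ultimately periodic traces; derived operators like $\mthsym{F}$ and $\mthsym{G}$ are obtained via the abbreviations above.

```python
# Formulas as nested tuples: ('var', x), ('not', f), ('or', f, g),
# ('and', f, g), ('X', f), ('U', f, g).  An interpretation is a lasso:
# a finite `prefix` then a `loop`, both lists of sets of variable names.

def evaluate(formula, prefix, loop, i=0):
    """Decide pi, i |= formula on the ultimately periodic trace prefix.loop^omega."""
    n, p = len(prefix), len(loop)

    def state(j):
        return prefix[j] if j < n else loop[(j - n) % p]

    def norm(j):
        # fold a position into the first unrolling of the loop
        return j if j < n else n + (j - n) % p

    def ev(f, j):
        op = f[0]
        if op == 'var':
            return f[1] in state(j)
        if op == 'not':
            return not ev(f[1], j)
        if op == 'or':
            return ev(f[1], j) or ev(f[2], j)
        if op == 'and':
            return ev(f[1], j) and ev(f[2], j)
        if op == 'X':
            return ev(f[1], norm(j + 1))
        if op == 'U':
            # a witness for U, if any, occurs within one extra loop unrolling
            for k in range(j, n + 2 * p):
                if ev(f[2], norm(k)):
                    return True
                if not ev(f[1], norm(k)):
                    return False
            return False
        raise ValueError('unknown operator: %s' % op)

    return ev(formula, norm(i))
```

For instance, encoding $\mthsym{F} x$ as $\mthsym{true}\xspace \mthsym{U} x$, the formula holds on the trace $\emptyset \cdot \{x\}^{\omega}$ but $\mthsym{G} x$ does not.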
We say that an \textsc{ltl}\xspace formula $\varphi$ is true on an interpretation $\pi$ at instant $i$, written $\pi, i \models_{\mthsym{C}} \varphi$, if: \begin{itemize}[label={-},leftmargin=*] \setlength\itemsep{0em} \item $\pi, i \models_{\mthsym{C}} x$, for $x \in \mthsym{Var}$, iff $x \in \pi(i)$; \item $\pi, i \models_{\mthsym{C}} \neg \varphi$ iff $\pi, i \not \models_{\mthsym{C}} \varphi$; \item $\pi, i \models_{\mthsym{C}} \varphi_1 \vee \varphi_2$ iff either $\pi, i \models_{\mthsym{C}} \varphi_1$ or $\pi, i \models_{\mthsym{C}} \varphi_2$; \item $\pi, i \models_{\mthsym{C}} \varphi_1 \wedge \varphi_2$ iff both $\pi, i \models_{\mthsym{C}} \varphi_1$ and $\pi, i \models_{\mthsym{C}} \varphi_2$; \item $\pi, i \models_{\mthsym{C}} \mthsym{X} \varphi$ iff $\pi, i + 1 \models_{\mthsym{C}} \varphi$; \item $\pi, i \models_{\mthsym{C}} \varphi_1 \mthsym{U} \varphi_2$ iff for some $j \geq i$, we have that $\pi, j \models_{\mthsym{C}} \varphi_2$ and, for all $k \in \{i, \ldots, j - 1\}$, we have that $\pi, k \models_{\mthsym{C}} \varphi_1$. \end{itemize} A formula $\varphi$ is \emph{true} over $\pi$, written $\pi \models_{\mthsym{C}} \varphi$, iff $\pi, 0 \models_{\mthsym{C}} \varphi$. A formula $\varphi$ is \emph{satisfiable} if it is true on some interpretation and \emph{valid} if it is true on every interpretation. \paragraph{Quantified Linear-Time Temporal Logic} Quantified Linear-Time Temporal Logic ({\textsc{qltl}}\xspace) is an extension of \textsc{ltl}\xspace with two \emph{second-order} quantifiers~\cite{SVW87}. Its formulas are built using the classic \textsc{ltl}\xspace Boolean and temporal operators, on top of which existential and universal quantification over variables is applied.
Formally, the syntax is given as follows: \begin{center} $\varphi ::= \exists x \varphi \mid \forall x \varphi \mid x \mid \neg \varphi \mid \varphi \vee \varphi \mid \varphi \wedge \varphi \mid \mthsym{X} \varphi \mid \varphi \mthsym{U} \varphi \mid \varphi \mthsym{R} \varphi$, \end{center} where $x \in \mthsym{Var}$ is a propositional variable. Note that this is a proper extension of \textsc{ltl}\xspace, as {\textsc{qltl}}\xspace has the same expressive power as \textsc{mso}\xspace~\cite{SVW87}, whereas \textsc{ltl}\xspace is equivalent to \textsc{fol}\xspace~\cite{GPSS80}. In order to define the semantics of {\textsc{qltl}}\xspace, we introduce some notation. For an interpretation $\pi$ and a set of variables $X \subseteq \mthsym{Var}$, by $\pi_{\upharpoonright X}$ we denote the \emph{projection} interpretation over $X$ defined as $\pi_{\upharpoonright X}(i) \doteq \pi(i) \cap X$ at any time point $i \in \SetN$. Moreover, by $\pi_{\upharpoonright -X} \doteq \pi_{\upharpoonright \mthsym{Var} \setminus X}$ we denote the projection interpretation over the complement of $X$. For a single variable $x$, we simplify the notation to $\pi_{\upharpoonright x} \doteq \pi_{\upharpoonright \{ x \}}$ and $\pi_{\upharpoonright -x} \doteq \pi_{\upharpoonright \mthsym{Var} \setminus \{ x \}}$. Finally, we say that $\pi$ and $\pi'$ \emph{agree} over $X$ if $\pi_{\upharpoonright X} = \pi'_{\upharpoonright X}$. Observe that we can reverse the projection operation by combining interpretations over disjoint sets of variables. More formally, for two disjoint sets $X, X' \subseteq \mthsym{Var}$ and two interpretations $\pi_{X}$ and $\pi_{X'}$ over $X$ and $X'$, respectively, $\pi_{X} \Cup \pi_{X'}$ is defined as the (unique) interpretation over $X \cup X'$ whose projections on $X$ and $X'$ correspond to $\pi_{X}$ and $\pi_{X'}$, respectively.
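As a small illustration of projection and the recombination operator $\Cup$ (with finite segments standing in for infinite interpretations, and function names of our own choosing):

```python
# Finite segments of an interpretation: lists of frozensets of variable names.

def project(segment, X):
    """The projection onto X: keep only the variables in X at each instant."""
    return [s & X for s in segment]

def project_out(segment, X):
    """The projection onto the complement of X."""
    return [s - X for s in segment]

def merge(seg_x, seg_y):
    """Recombine interpretations over disjoint variable sets (the Cup operator)."""
    return [a | b for a, b in zip(seg_x, seg_y)]
```

As expected, merging the projections of a segment onto $X$ and onto its complement yields the segment back.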
The \emph{classic} semantics of the quantifiers in a {\textsc{qltl}}\xspace formula $\varphi$ over an interpretation $\pi$, at instant $i$, denoted $\pi, i \models_{\mthsym{C}} \varphi$, is defined as follows: \begin{itemize}[label={-},leftmargin=*] \setlength\itemsep{0em} \item $\pi, i \models_{\mthsym{C}} \exists x \varphi$ iff there exists an interpretation $\pi'$ such that $\pi_{\upharpoonright -x} = \pi'_{\upharpoonright -x}$ and $\pi', i \models_{\mthsym{C}} \varphi$; \item $\pi, i \models_{\mthsym{C}} \forall x \varphi$ iff for every interpretation $\pi'$ such that $\pi_{\upharpoonright -x} = \pi'_{\upharpoonright -x}$, it holds that $\pi', i \models_{\mthsym{C}} \varphi$. \end{itemize} A variable $x$ is \emph{free} in $\varphi$ if it occurs at least once out of the scope of either $\exists x$ or $\forall x$ in $\varphi$. By $\mthfun{free}(\varphi)$ we denote the set of free variables in $\varphi$. As for \textsc{ltl}\xspace, we say that $\varphi$ is true on $\pi$, and write $\pi \models_{\mthsym{C}} \varphi$, iff $\pi, 0 \models_{\mthsym{C}} \varphi$. Analogously, a formula $\varphi$ is \emph{satisfiable} if it is true on some interpretation $\pi$, whereas it is \emph{valid} if it is true on every possible interpretation $\pi$. Note that, as quantifications in the formula replace the interpretation over the variables in their scope, we can assume that $\pi$ is an interpretation over the set $\mthfun{free}(\varphi)$ of free variables in $\varphi$. A {\textsc{qltl}}\xspace formula is in \emph{prenex normal form} if it is of the form $\mthsym{\wp} \psi$, where $\mthsym{\wp} = \mthsym{Qn}_{1} x_{1} \ldots \mthsym{Qn}_{n} x_{n}$ is a \emph{prefix quantification} with $\mthsym{Qn}_{i} \in \{\exists, \forall \}$, each $x_i$ is a variable occurring in the \emph{quantifier-free} subformula $\psi$, and $\psi$ itself can be regarded as an \textsc{ltl}\xspace formula.
Every {\textsc{qltl}}\xspace formula can be rewritten in prenex normal form, meaning that the rewritten formula is true on the same set of interpretations. Consider for instance the formula $\mthsym{G} (\exists y (y \wedge \mthsym{X} \neg y))$. This is equivalent to $\forall x \exists y (\mthsym{singleton}(x) \to (\mthsym{G}(x \to (y \wedge \mthsym{X} \neg y))))$, with $\mthsym{singleton}(x) \doteq \mthsym{F} x \wedge \mthsym{G}(x \to \mthsym{X} \mthsym{G} \neg x)$ expressing the fact that $x$ is true exactly once on the trace~\footnote{The reader might observe that pushing the quantification over $y$ outside the temporal operator does not work. Indeed, the formula $\exists y \mthsym{G}(y \wedge \mthsym{X} \neg y)$ is unsatisfiable.}. A full proof of the reduction to prenex normal form can be derived from~\cite[Section 2.3]{Tho97}. For convenience and without loss of generality, from now on we will assume that {\textsc{qltl}}\xspace formulas are always in prenex normal form. Recall that for a formula $\varphi = \mthsym{\wp} \psi$ it is easy to obtain the prenex normal form of its negation $\neg \varphi$ as $\overline{\mthsym{\wp}} \neg \psi$, where $\overline{\mthsym{\wp}}$ is obtained from $\mthsym{\wp}$ by swapping every quantification from existential to universal and vice versa. From now on, by $\neg \varphi$ we denote its prenex normal form transformation. An \emph{alternation} in a quantification prefix $\mthsym{\wp}$ is either a sequence $\exists x \forall y$ or a sequence $\forall x \exists y$ occurring in $\mthsym{\wp}$. A formula of the form $\mthsym{\wp} \psi$ is of \emph{alternation-depth} $k$ if $\mthsym{\wp}$ contains exactly $k$ alternations. By $k$-{\textsc{qltl}}\xspace we denote the fragment of {\textsc{qltl}}\xspace formulas with alternation-depth $k$. Moreover, $\Sigma_{k}^{{\textsc{qltl}}\xspace}$ and $\Pi_{k}^{{\textsc{qltl}}\xspace}$ denote the fragments of $k$-{\textsc{qltl}}\xspace formulas starting with an existential and a universal quantification, respectively.
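Counting the alternation-depth of a prenex prefix is a one-liner; a minimal sketch, with an encoding of our own choosing:

```python
def alternation_depth(quantifiers):
    """Number of alternations in a prenex prefix, given as a sequence of
    'E' (existential) and 'A' (universal) markers, one per quantifier."""
    return sum(1 for a, b in zip(quantifiers, quantifiers[1:]) if a != b)
```

For example, the prefix $\exists x_1 \forall x_2 \forall x_3 \exists x_4$, encoded as `['E', 'A', 'A', 'E']`, has alternation-depth $2$.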
It is convenient to make use of the syntactic shortcuts $\exists X \varphi \doteq \exists x_{1} \ldots \exists x_{k} \varphi$ and $\forall X \varphi \doteq \forall x_{1} \ldots \forall x_{k} \varphi$ with $X = \{x_1, \ldots, x_k\}$. Formulas can then be written in the form $\mthsym{Qn}_{1} X_1 \ldots \mthsym{Qn}_{n} X_n \psi$ such that every two consecutive occurrences of quantifiers are in alternation, that is, $\mthsym{Qn}_{i} = \exists$ iff $\mthsym{Qn}_{i + 1} = \forall$, for every $i < n$. The satisfiability problem consists in determining, for a given {\textsc{qltl}}\xspace formula $\varphi$, whether it is satisfiable or not. Note that every formula $\varphi$ is satisfiable if, and only if, $\exists \mthfun{free}(\varphi) \varphi$ is satisfiable. This means that we can study the satisfiability problem in {\textsc{qltl}}\xspace for \emph{closed} formulas, i.e., formulas where every variable is quantified. This problem is decidable, though computationally highly intractable in general~\cite{SVW87}. For a given natural number $k$, by $k$-EXPSPACE we denote the class of problems solvable by a Turing machine with space bounded by $2^{2^{\ldots^{2^{n}}}}$, where the height of the tower is $k$ and $n$ is the size of the input. By convention, $0$-EXPSPACE denotes PSPACE. \begin{theorem}[\cite{Sis85}] \label{thm:qltlsatisfiability} The satisfiability problem for $k$-{\textsc{qltl}}\xspace formulas is $k$-EXPSPACE-complete.
\end{theorem} \endinput \subsection{Distributed Synthesis} An \emph{architecture} is a tuple $A = \tuple{P, p_{env}, E, O, H}$ where \begin{enumerate*}[label=(\roman*)] \item $P$ is a (finite) set of processes, with $p_{env} \in P$ a distinguished environment process; \item $(P, E)$ is a directed graph; \item $O = \set{O_{e}}{e \in E}$, where $O_{e}$ is a nonempty set of output variables; \item $H = \set{H_{p}}{p \in P}$ is a set of hidden variables for each process. \end{enumerate*} By $P^{-} = P \setminus \{p_{env}\}$ we denote the set of system processes. Moreover, by $I_{p} = \bigcup_{p' \in P} O_{(p',p)}$ we denote the input variables of $p$ and by $O_{p} = H_{p} \cup \bigcup_{p' \in P}O_{(p,p')}$ we denote the output variables of $p$. We assume that the sets $O_{p}, O_{p'}$ are pairwise disjoint, that is, every variable is exclusively controlled by a single process. A system process $p \in P^{-}$ is implemented by a strategy, that is, a function $\sigma_{p}: (2^{I_{p}})^{*} \to 2^{O_{p}}$ taking a non-empty sequence of inputs and returning an output. An implementation of the architecture $A$ is a set of strategies $S = \set{\sigma_{p}}{p \in P^{-}}$. The composition $\otimes_{p \in P^{-}} \sigma_{p} = \sigma_{P^{-}}: (2^{I_{P^{-}}})^{*} \to 2^{O_{P^{-}}}$ is a strategy that maps combined inputs to the combined outputs of the single strategies. For a given stream of outputs from the environment $\pi_{env} \in (2^{O_{env}})^{\omega}$ and an implementation $\sigma_{P^{-}}$, the interpretation $\pi = \pi(\sigma_{P^{-}}, \pi_{env})$ is obtained by assigning, for each $k \in \SetN$, $\pi(k + 1) = \pi_{env}(k + 1) \cup \sigma_{P^{-}}(\pi_{\leq k})$. For a given architecture $A$ and an \textsc{ltl}\xspace formula $\psi$, we say that the implementation $\sigma_{P^{-}}$ \emph{realizes} $(A, \psi)$ if $\pi(\sigma_{P^{-}}, \pi_{env}) \models_{\mthsym{C}} \psi$ for every possible $\pi_{env}$.
Consider for a process $p$ the set $E_{p} = \set{e \in E}{O_{e} \not\subseteq I_{p}}$ of edges that carry some variables invisible to $p$. We say that $p$ is less informed than, or equally informed as, $p'$, and write $p \preceq p'$, if there is no path from $p_{env}$ to $p$ in $(P,E_{p'})$. Observe that the binary relation $\preceq$ is a preorder on $P$. An architecture $A$ is (strictly) \emph{ordered} if the relation $\preceq$ is a (total) order. Moreover, $A$ is \emph{acyclic} if the underlying graph $(P,E)$ is acyclic. For a given architecture $A$ and an \textsc{ltl}\xspace formula $\psi$, the pair $(A, \psi)$ is \emph{realizable} if there exists an implementation that realizes it. The realizability problem is in general undecidable. However, it becomes decidable for the class of \emph{ordered acyclic} architectures. \begin{theorem}[\cite{PR90,FS05}] \label{thm:decidable-dist-synt} The distributed synthesis problem for an ordered acyclic architecture $A$ with $n = \card{P}$ and an \textsc{ltl}\xspace formula $\psi$ is $n$-EXPTIME-complete. \end{theorem} \subsection{$\omega$-automata and correspondence with QLTL} \label{subsec:automata} We now recall the main definitions and results about $\omega$-automata and their relation with \textsc{ltl}\xspace and {\textsc{qltl}}\xspace. In particular, we focus on two kinds of acceptance conditions that are used in this paper, namely B\"{u}chi\xspace and Parity. As standard, a \emph{nondeterministic finite automaton} on words (\textsc{nfw}\xspace) is a tuple $\AName = \tuple{\Sigma, Q, q_0, \delta, \alpha}$, where: \begin{inparaenum}[(i)] \item $\Sigma$ is the input alphabet; \item $Q$ is the finite set of states; \item $q_0 \in Q$ is the initial state; \item $\delta: Q \times \Sigma \to 2^{Q}$ is the (nondeterministic) transition function; \item $\alpha$ is the acceptance condition.
\end{inparaenum} A \emph{deterministic finite automaton} (\textsc{dfw}\xspace) is an \textsc{nfw}\xspace where $\delta$ always returns a singleton or, equivalently, is a function of the form $\delta: Q \times \Sigma \rightarrow Q$. For a given word $\pi \in \Sigma^{\omega}$, a run of the automaton over $\pi$ is a sequence $\rho \in Q^{\omega}$ such that $\rho_{0} = q_{0}$ and, for each $i \in \SetN$, $\rho_{i + 1} \in \delta(\rho_{i}, \pi_{i})$. By $\mthfun{Inf}(\rho) = \set{q \in Q}{\card{\rho^{-1}(q)} = \infty}$ we denote the set of states that occur in $\rho$ infinitely many times. The acceptance condition can be of different types. In this paper, we consider B\"{u}chi\xspace and Parity conditions. A B\"{u}chi\xspace acceptance condition $\alpha$ is a subset of the states, called the \emph{final} states and hereafter denoted by $F \subseteq Q$. A run $\rho$ is B\"{u}chi\xspace-accepting if $\mthfun{Inf}(\rho) \cap \alpha \neq \emptyset$, that is, the final states occur infinitely many times in $\rho$. A Parity acceptance condition $\alpha$ is an index-ordered cover $\tuple{F_1, F_2, \ldots, F_n}$ of the states $Q$. Sometimes the sets $F_i$ of a parity acceptance condition are also called \emph{colors} or \emph{priorities}. A run $\rho$ is parity-accepting if $\arg\min_{i \in \numcc{1}{n}}\{\mthfun{Inf}(\rho) \cap F_i \neq \emptyset\}$ is even, that is, the smallest index of states occurring infinitely often is even. Observe that B\"{u}chi\xspace can be seen as a special case of Parity with $F_1 = \emptyset$, $F_2 = F$, and $F_3 = Q \setminus F$. We denote nondeterministic and deterministic B\"{u}chi\xspace automata by \textsc{nbw}\xspace and \textsc{dbw}\xspace, respectively, while \textsc{npw}\xspace and \textsc{dpw}\xspace denote the nondeterministic and deterministic parity automata. A word $\pi$ is \emph{accepted} by $\AName$ if there exists an accepting run $\rho$ over it. The language of words accepted by $\AName$ is denoted $\LName(\AName)$.
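Since accepting runs can be taken ultimately periodic, acceptance is easy to check on a lasso-shaped run, where $\mthfun{Inf}(\rho)$ is exactly the set of states in the loop. The sketch below (our own encoding, not from the paper) implements the parity condition and derives the B\"{u}chi\xspace condition as the special case noted above:

```python
def parity_accepts(loop_states, color):
    """Parity acceptance on a lasso run: Inf(rho) is the set of loop states,
    so accept iff the smallest color met infinitely often is even.
    `color` maps each state to the index i of the set F_i containing it."""
    return min(color[q] for q in set(loop_states)) % 2 == 0

def buchi_accepts(loop_states, final):
    """Buechi acceptance as parity with F_1 = {}, F_2 = F, F_3 = Q \\ F."""
    color = {q: 2 if q in final else 3 for q in set(loop_states)}
    return parity_accepts(loop_states, color)
```

A run looping through a final state is accepted; one whose loop avoids all final states is not.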
Nondeterministic B\"{u}chi\xspace and Parity automata are closed under intersection and complement (and thus union)~\cite{Tho90}. More precisely, for two B\"{u}chi\xspace (resp. Parity) automata $\AName_{1}$ and $\AName_{2}$, there exists an automaton $\AName_{1} \otimes \AName_{2}$ accepting the language $\LName(\AName_{1}) \cap \LName(\AName_{2})$ whose size is polynomial in the sizes of both $\AName_{1}$ and $\AName_{2}$~\cite{VW94}. Moreover, there exists a B\"{u}chi\xspace (resp. Parity) automaton $\overline{\AName_{1}}$ that recognizes the language $\Sigma^{\omega} \setminus \LName(\AName_{1})$ whose size is exponential in the size of $\AName_{1}$. The \emph{nonemptiness} problem for an automaton $\AName$ is to decide whether $\LName(\AName) \neq \emptyset$, i.e., whether the automaton accepts some word. \begin{theorem}[Automata emptiness] The emptiness problem for B\"{u}chi\xspace automata is NLOGSPACE-Complete. \end{theorem} \section{Skolem Functions for QLTL Semantics} \label{sec:skolemization} We now give an alternative way to capture the semantics of {\textsc{qltl}}\xspace, in terms of (second-order) Skolem functions. This will later allow us to capture the behavioral semantics by suitably restricting such Skolem functions, forcing them to depend only on the past history and the current situation. Let $\mthsym{\wp}$ be a quantification prefix. By $\exists(\mthsym{\wp})$ and $\forall(\mthsym{\wp})$ we denote the sets of variables that are quantified existentially and universally, respectively. Moreover, by $X <_{\mthsym{\wp}} Y$ we denote the fact that $X$ occurs \emph{before} $Y$ in $\mthsym{\wp}$. For a given existentially quantified variable $Y \in \exists(\mthsym{\wp})$, by $\mthfun{Dep}_{\mthsym{\wp}}(Y) = \set{X \in \forall(\mthsym{\wp})}{X <_{\mthsym{\wp}} Y}$ we denote the set of variables on which $Y$ depends in $\mthsym{\wp}$.
Moreover, for a given set $F \subseteq \mthsym{Var}$ of variables, by $\mthfun{Dep}_{\mthsym{\wp}}^{F}(Y) = F \cup \mthfun{Dep}_{\mthsym{\wp}}(Y)$ we denote the \emph{augmented dependency}, taking into account an additional set of variables for dependency. Whenever clear from the context, we omit the subscript and simply write $\mthfun{Dep}(Y)$ and $\mthfun{Dep}^{F}(Y)$. The relation defined above captures the concept of \emph{functional dependence} generated by quantifiers and free variables in a {\textsc{qltl}}\xspace formula. Intuitively, whenever a dependence occurs between two variables $X$ and $Y$, this means that the existential choices in $Y$ are determined by a function whose domain is given by all possible choices available in $X$, be it universally quantified or free in the corresponding formula. This dependence is known in first-order logic under the name of \emph{Skolem function} and can be described in {\textsc{qltl}}\xspace as follows. \begin{definition}[Skolem function] \label{def:skolem} For a given quantification prefix $\mthsym{\wp}$ defined over a set $\mthsym{Var}(\mthsym{\wp}) \subseteq \mthsym{Var}$ of variables, and a set $F$ of variables, a function $$ \theta: (2^{F \cup \forall(\mthsym{\wp})})^{\omega} \to (2^{\exists(\mthsym{\wp})})^{\omega} $$ \noindent is called a Skolem function over $(\mthsym{\wp}, F)$ if, for all $\pi_1, \pi_2 \in (2^{F \cup \forall(\mthsym{\wp})})^{\omega}$ and $Y \in \exists(\mthsym{\wp})$, it holds that $${\pi_{1}}_{\upharpoonright \mthfun{Dep}^{F}(Y)} = {\pi_{2}}_{\upharpoonright \mthfun{Dep}^{F}(Y)} \Rightarrow \theta(\pi_1)_{\upharpoonright Y} = \theta(\pi_2)_{\upharpoonright Y}\text{.}$$ \end{definition} Informally, a Skolem function takes interpretations of the variables in $F \cup \forall(\mthsym{\wp})$ and returns interpretations of the existentially quantified ones in a functional way.
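To make the dependency sets and the functional constraint of Definition~\ref{def:skolem} concrete, here is a small finite-trace sketch. The prefix encoding (a list of \texttt{('E'/'A', variable)} pairs), the trace encoding (finite lists of sets), and all function names are our own, and whole-trace agreement stands in for the restriction to $\mthfun{Dep}^{F}(Y)$.

```python
# Illustrative sketch (our own encoding) of Dep, Dep^F, and the functional
# constraint of a Skolem function, checked over a finite family of traces.
from itertools import product

def dep(prefix, y):
    """Dep(y): universal variables occurring before y in the prefix."""
    out = set()
    for quant, var in prefix:
        if var == y:
            return out
        if quant == 'A':
            out.add(var)
    raise ValueError(f"{y} not in prefix")

def dep_f(prefix, y, free):
    """Augmented dependency Dep^F(y) = F together with Dep(y)."""
    return set(free) | dep(prefix, y)

def restrict(trace, vars_):
    # Pointwise restriction of a trace to a set of variables.
    return [frozenset(t & vars_) for t in trace]

def is_skolem(theta, inputs, dep_vars, y):
    """Inputs agreeing on dep_vars must yield outputs agreeing on y."""
    for p1, p2 in product(inputs, repeat=2):
        if restrict(p1, dep_vars) == restrict(p2, dep_vars):
            if restrict(theta(p1), {y}) != restrict(theta(p2), {y}):
                return False
    return True
```

For instance, a candidate function that lets $y$ depend on a variable outside its dependency set violates the constraint.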
Sometimes, to simplify the notation, we identify $\theta(\pi)$ with $\pi \Cup \theta(\pi)$, that is, $\theta$ \emph{extends} the interpretation $\pi$ to the existentially quantified variables of $\mthsym{\wp}$. Skolem functions can be used to define another semantics for {\textsc{qltl}}\xspace formulas in prenex normal form. \begin{definition}[Skolem semantics] \label{dfn:skolem-semantics} A {\textsc{qltl}}\xspace formula in prenex normal form $\varphi = \mthsym{\wp} \psi$ is \emph{Skolem true} over an interpretation $\pi$ at an instant $i$, written $\pi, i \models_{\mthsym{S}} \varphi$, if there exists a Skolem function $\theta$ over $(\mthsym{\wp}, \mthfun{free}(\varphi))$ such that $\theta(\pi \Cup \pi'), i \models_{\mthsym{C}} \psi$ for every $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. \end{definition} Intuitively, the Skolem semantics characterizes the truth of a {\textsc{qltl}}\xspace formula by the existence of a Skolem function that returns the interpretations of the existentially quantified variables as a function of the variables on which they depend. In principle, there might be formulas $\varphi$ and interpretations $\pi$ such that $\pi \models_{\mthsym{S}} \varphi$ and $\pi \models_{\mthsym{S}} \neg \varphi$, as the Skolem semantics requires the existence of two Skolem functions that are defined over different domains, and so are not necessarily inconsistent with each other. However, as the following theorem shows, the Skolem semantics is equivalent to the classic one. Therefore, for every formula $\varphi$ and interpretation $\pi$, it holds that $\pi \models_{\mthsym{S}} \varphi$ iff $\pi \not\models_{\mthsym{S}} \neg \varphi$.
\begin{theorem} \label{thm:skolemization} For every {\textsc{qltl}}\xspace formula in prenex normal form $\varphi = \mthsym{\wp} \psi$ and every interpretation $\pi \in (2^{F})^{\omega}$ over the free variables $F = \mthfun{free}(\varphi)$ of $\varphi$, it holds that $\pi \models_{\mthsym{C}} \varphi$ if, and only if, $\pi \models_{\mthsym{S}} \varphi$. \end{theorem} \begin{proof} Recall that $\pi \models_{\mthsym{S}} \varphi$ iff there exists a Skolem function $\theta$ over $(\mthsym{\wp}, F)$ such that, for each interpretation $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$, it holds that $\theta(\pi \Cup \pi') \models_{\mthsym{C}} \psi$. The proof proceeds by induction on the length of $\mthsym{\wp}$. For the base case $\card{\mthsym{\wp}} = 0$, we have $\mthsym{\wp} = \epsilon$ and so $\varphi = \psi$. Moreover, the only possible Skolem function is the identity function over the free variables of $\varphi$, which means that $\pi = \theta(\pi)$ and implies $\pi \models_{\mthsym{C}} \varphi$ iff $\pi \models_{\mthsym{C}} \psi$ iff $\theta(\pi) \models_{\mthsym{C}} \psi$ iff $\pi \models_{\mthsym{S}} \varphi$, and so the statement holds in both directions. For the inductive case, we prove the two directions separately. For the left-to-right direction, assume that $\pi \models_{\mthsym{C}} \mthsym{\wp} \psi$. We distinguish two cases. \begin{itemize} \item $\mthsym{\wp} = \exists X \mthsym{\wp}'$. Thus, there exists an interpretation $\pi_{X} \in (2^{X})^{\omega}$ such that $\pi \Cup \pi_{X} \models_{\mthsym{C}} \mthsym{\wp}' \psi$. By the induction hypothesis, it holds that $\pi \Cup \pi_{X} \models_{\mthsym{S}} \mthsym{\wp}' \psi$ and so there exists a Skolem function $\theta$ over $(\mthsym{\wp}', F \cup \{X\})$ such that $\theta(\pi \Cup \pi_{X} \Cup \pi') \models_{\mthsym{C}} \psi$ for each $\pi' \in (2^{\forall(\mthsym{\wp}')})^{\omega}$.
Now, observe that $\forall(\mthsym{\wp}) = \forall(\mthsym{\wp}')$, and so the function $\theta$ is also a Skolem function over $(\mthsym{\wp}, F)$. Hence $\theta(\pi \Cup \pi_{X} \Cup \pi') \models_{\mthsym{C}} \psi$ for every $\pi'$, which implies that $\pi \models_{\mthsym{S}} \varphi$ and proves the statement. \item $\mthsym{\wp} = \forall X \mthsym{\wp}'$. Then, for every $\pi_{X}$, it holds that $\pi \Cup \pi_{X} \models_{\mthsym{C}} \mthsym{\wp}' \psi$. By the induction hypothesis, we have that $\pi \Cup \pi_{X} \models_{\mthsym{S}} \mthsym{\wp}' \psi$ and so there exists a Skolem function $\theta_{\pi_{X}}$ over $(\mthsym{\wp}', F \cup \{X\})$ such that $\theta_{\pi_{X}}(\pi \Cup \pi_{X} \Cup \pi') \models_{\mthsym{C}} \psi$ for every $\pi' \in (2^{\forall(\mthsym{\wp}')})^{\omega}$. Now, consider the function $\theta: (2^{F \cup \forall(\mthsym{\wp})})^{\omega} \to (2^{\mthsym{Var}(\mthsym{\wp})})^{\omega}$ such that $\theta(\pi \Cup \pi') = \theta_{\pi'_{\upharpoonright X}}(\pi \Cup \pi')$ for each $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. Clearly, $\theta$ is a Skolem function over $(\mthsym{\wp}, F)$. Moreover, by its definition, it holds that $\theta(\pi \Cup \pi') \models_{\mthsym{C}} \psi$ for every $\pi'$, which means that $\pi \models_{\mthsym{S}} \varphi$ and proves the statement. \end{itemize} For the right-to-left direction, we assume that $\pi \models_{\mthsym{S}} \varphi$, so that there exists a Skolem function $\theta$ over $(\mthsym{\wp}, F)$ such that $\theta(\pi \Cup \pi') \models_{\mthsym{C}} \psi$ for each $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. We distinguish two cases. \begin{itemize} \item $\mthsym{\wp} = \exists X \mthsym{\wp}'$. Observe that, since $\mthfun{Dep}^{F}(X) = F$, it holds that $\theta(\pi \Cup \pi')(X) = \theta(\pi \Cup \pi'')(X)$ for every $\pi', \pi''$; we call this common interpretation $\pi_{X}$.
Now, define the Skolem function $\theta'$ over $(\mthsym{\wp}', F \cup \{X\})$ as $\theta'(\pi \Cup \pi') = \theta(\pi \Cup \pi')_{\upharpoonright - X}$, that is, the restriction of $\theta$ with the interpretation over $X$ being projected out. It holds that $\theta'(\pi \Cup \pi_{X} \Cup \pi') = \theta(\pi \Cup \pi')$ and so that $\theta'(\pi \Cup \pi_{X} \Cup \pi') \models_{\mthsym{C}} \psi$. This shows that $\pi \Cup \pi_{X} \models_{\mthsym{S}} \mthsym{\wp}' \psi$ and so, by the induction hypothesis, $\pi \Cup \pi_{X} \models_{\mthsym{C}} \mthsym{\wp}' \psi$, which in turn implies that $\pi \models_{\mthsym{C}} \exists X \mthsym{\wp}' \psi$ and so that $\pi \models_{\mthsym{C}} \mthsym{\wp} \psi$, which proves the statement. \item $\mthsym{\wp} = \forall X \mthsym{\wp}'$. Note that $\forall(\mthsym{\wp}) = \forall(\mthsym{\wp}') \cup \{ X \}$, and so $\theta$ is also a Skolem function over $(\mthsym{\wp}', F \cup \{X\})$. Since $\theta(\pi \Cup \pi_{X} \Cup \pi') \models_{\mthsym{C}} \psi$ for every $\pi_{X}$ and $\pi'$, we have that $\pi \Cup \pi_{X} \models_{\mthsym{S}} \mthsym{\wp}' \psi$ and so, by the induction hypothesis, $\pi \Cup \pi_{X} \models_{\mthsym{C}} \mthsym{\wp}' \psi$ for every $\pi_{X}$, which means that $\pi \models_{\mthsym{C}} \forall X \mthsym{\wp}' \psi$, and so that $\pi \models_{\mthsym{C}} \mthsym{\wp} \psi$, proving the statement. \end{itemize} \end{proof} \section{Behavioral QLTL} \label{sec:behavioral-QLTL} The classic semantics of {\textsc{qltl}}\xspace requires considering at once the evaluation of the variables on the whole trace. This gives rise to counter-intuitive phenomena. Consider the formula $\forall x \exists y (\mthsym{G} x \leftrightarrow y)$. Such a formula is satisfiable. Indeed, on the one hand, for the interpretation that always assigns true to $x$, the interpretation that makes $y$ true at the beginning satisfies the temporal part. On the other hand, for every other interpretation making $x$ false at some point, the interpretation that makes $y$ false at the beginning satisfies the temporal part.
However, in order to correctly interpret $y$ at the first instant, one needs to know in advance the entire interpretation of $x$. Such a requirement is practically impossible to fulfill and does not reflect the notion of \emph{reactive systems}, where the output of the system variables at the $k$-th instant of the computation depends only on the past assignments of the environment variables. This principle is often referred to as the \emph{behavioral} principle in the context of strategic reasoning, see e.g., \cite{MMPV14,GBM20}. Here, we propose two alternative semantics for {\textsc{qltl}}\xspace, which are of interest when {\textsc{qltl}}\xspace is used in the context of strategic reasoning and planning. Indeed, there we require strategies to be processes in the sense of \cite{ALW89}, i.e., the next move depends only on the past history and the current situation. The two semantics are inspired by two different contexts of planning and distributed synthesis. The first regards \emph{partial controllability} with \emph{partial observability}, in which a process in a distributed architecture controls part of the system variables and assigns their values according to the past and present values of the environment variables that are made visible to it. The second regards \emph{partial controllability} with \emph{full observability}, in which the process can base its choices on the past evaluation of all variables and the present evaluation of the ones it depends on. To formally define the two semantics we exploit two different forms of Skolem functions, each producing different effects on the notion of formula satisfaction. These definitions take into account the reactive feature of dependency discussed above. In addition, we prove their connection with the classic notion of strategy as intended in synthesis and distributed synthesis~\cite{PR89,KV01,FS05}.
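The clairvoyance issue above can be made concrete on a finite horizon. The sketch below is our own illustration: $\mthsym{G} x$ is approximated by ``$x$ holds at every instant of a finite trace'', and we check that although the two interpretations of $x$ agree at instant $0$, no single choice of $y$ at instant $0$ satisfies $\mthsym{G} x \leftrightarrow y$ on both.

```python
# Illustrative finite-horizon sketch (our own encoding) of why
# forall x exists y (G x <-> y) admits no behavioral choice of y(0).

HORIZON = 4

def globally_x(x_trace):
    # Finite-horizon stand-in for G x: x holds at every instant we see.
    return all(x_trace)

def satisfied(x_trace, y0):
    # The matrix (G x) <-> y, evaluated at instant 0 with y(0) = y0.
    return globally_x(x_trace) == y0

pi1 = [True] * HORIZON                   # x always true
pi2 = [True] + [False] * (HORIZON - 1)   # x true only at instant 0

# pi1 and pi2 agree at instant 0, so a behavioral choice of y(0) must be
# the same for both; yet each candidate value fails on one of them.
no_behavioral_choice = all(
    not (satisfied(pi1, y0) and satisfied(pi2, y0)) for y0 in (True, False)
)
```

This is exactly the argument used later in the proof of Lemma~\ref{lmm:behavioral-implies-classic}.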
In the next subsections, we introduce these two semantics and discuss their relationship with the classic semantics of {\textsc{qltl}}\xspace. Subsequently, we show their connection with the synthesis problem of the corresponding contexts. \subsection{Behavioral semantics} \label{subsec:behavioral-semantics} We now introduce \emph{behavioral {\textsc{qltl}}\xspace}, denoted \textsc{qltl$_{\mthfun{B}}$}\xspace, a logic with the same syntax as prenex normal form {\textsc{qltl}}\xspace but whose semantics is defined in terms of behavioral Skolem functions: a modified version of the Skolem functions introduced in the previous section. \begin{definition}[Behavioral Skolem function] \label{def:behavioral-skolem} For a given quantification prefix $\mthsym{\wp}$ defined over a set $\mthsym{Var}(\mthsym{\wp}) \subseteq \mthsym{Var}$ of propositional variables and a set $F$ of variables not occurring in $\mthsym{\wp}$, a Skolem function $\theta$ over $(\mthsym{\wp}, F)$ is \emph{behavioral} if, for all $\pi_1, \pi_2 \in (2^{F \cup \forall(\mthsym{\wp})})^{\omega}$, $k \in \SetN$, and $X \in \exists(\mthsym{\wp})$, it holds that \begin{center} $\pi_{1}(0,k)_{\upharpoonright \mthfun{Dep}^{F}(X)} = \pi_{2}(0,k)_{\upharpoonright \mthfun{Dep}^{F}(X)}$ implies $\theta(\pi_{1})(0,k)_{\upharpoonright X} = \theta(\pi_{2})(0,k)_{\upharpoonright X}$. \end{center} \end{definition} Behavioral Skolem functions capture the fact that the interpretation of the existentially quantified variables depends only on the past and present values of the free and universally quantified variables. This offers a way to formalize the semantics of \textsc{qltl$_{\mthfun{B}}$}\xspace as follows.
\begin{definition} \label{def:BQLTL-semantics} A \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi = \mthsym{\wp} \psi$ is true over an interpretation $\pi$ at an instant $i$, written $\pi, i \models_{\mthsym{B}} \mthsym{\wp} \psi$, if there exists a behavioral Skolem function $\theta$ over $(\mthsym{\wp}, \mthfun{free}(\varphi))$ such that $\theta(\pi \Cup \pi'), i \models_{\mthsym{C}} \psi$ for every $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. \end{definition} A \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi$ is true on an interpretation $\pi$, written $\pi \models_{\mthsym{B}} \varphi$, if $\pi, 0 \models_{\mthsym{B}} \varphi$. A formula $\varphi$ is \emph{satisfiable} if it is true on some interpretation and \emph{valid} if it is true on every interpretation. Clearly, since \textsc{qltl$_{\mthfun{B}}$}\xspace shares its syntax with {\textsc{qltl}}\xspace, all the definitions that involve syntactic elements, such as free variables and alternation, apply to this variant in the same way. As for {\textsc{qltl}}\xspace, the satisfiability of a \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi$ is equivalent to that of $\exists \mthfun{free}(\varphi) \varphi$, and its validity to that of $\forall \mthfun{free}(\varphi) \varphi$. However, the proof of this is not as straightforward as in the classic semantics case. \begin{theorem} \label{thm:behavioral-satisfiability} For every \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi = \mthsym{\wp} \psi$, it holds that $\varphi$ is satisfiable if, and only if, $\exists \mthfun{free}(\varphi) \varphi$ is satisfiable. Moreover, $\varphi$ is valid if, and only if, $\forall \mthfun{free}(\varphi) \varphi$ is valid. \end{theorem} \begin{proof} We show the proof only for satisfiability, as the one for validity is similar. The proof proceeds by double implication.
From left to right, assume that $\varphi$ is satisfiable; therefore there exists an interpretation $\pi$ over $F = \mthfun{free}(\varphi)$ such that $\pi \models_{\mthsym{B}} \varphi$, which in turn implies that there exists a behavioral Skolem function $\theta$ over $(\mthsym{\wp}, F)$ such that $\theta(\pi \Cup \pi') \models_{\mthsym{C}} \psi$ for every interpretation $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. Consider the function $\theta': (2^{\forall(\mthsym{\wp})})^{\omega} \to (2^{\exists(\mthsym{\wp}) \cup F})^{\omega}$ defined as $\theta'(\pi') = \theta(\pi \Cup \pi') \Cup \pi$, for every $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. Clearly, it is a behavioral Skolem function over $(\exists F \mthsym{\wp}, \emptyset)$ such that $\theta'(\pi') \models_{\mthsym{C}} \psi$ for every $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$, which implies that $\exists F \varphi$ is satisfiable. From right to left, the reasoning is similar and left to the reader. \end{proof} Note that every behavioral Skolem function is also a Skolem function. This means that if a formula $\varphi$, interpreted in \textsc{qltl$_{\mthfun{B}}$}\xspace, is true on $\pi$, then the same formula is true on $\pi$ also when interpreted in {\textsc{qltl}}\xspace. The converse, however, does not hold. Consider again the formula $\varphi = \forall x \exists y (\mthsym{G} x \leftrightarrow y)$. We have already shown that this is satisfiable when interpreted in {\textsc{qltl}}\xspace. However, it is not satisfiable as a \textsc{qltl$_{\mthfun{B}}$}\xspace formula. \begin{lemma} \label{lmm:behavioral-implies-classic} For every \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi$ and interpretation $\pi$ over the set $\mthfun{free}(\varphi)$ of free variables, if $\pi \models_{\mthsym{B}}\varphi$ then $\pi \models_{\mthsym{C}} \varphi$.
On the other hand, there exist a formula $\varphi$ and an interpretation $\pi$ such that $\pi \models_{\mthsym{C}} \varphi$ but not $\pi \models_{\mthsym{B}} \varphi$. \end{lemma} \begin{proof} The first part of the lemma follows from the fact that every behavioral Skolem function is also a Skolem function and so, if $\pi \models_{\mthsym{B}} \varphi$, clearly also $\pi \models_{\mthsym{S}} \varphi$ and thus, from Theorem~\ref{thm:skolemization}, $\pi \models_{\mthsym{C}} \varphi$. For the second part, consider the formula $\varphi = \forall x \exists y (\mthsym{G} x \leftrightarrow y)$. We have already shown that such a formula is satisfiable. However, it is not behaviorally satisfiable. Indeed, assume by contradiction that it is behaviorally satisfiable and let $\theta$ be the behavioral Skolem function such that $\theta \models_{\mthsym{C}} (\mthsym{G} x \leftrightarrow y)$. Now consider two interpretations of $x$: $\pi_{1}$, which always assigns true, and $\pi_{2}$, which assigns true at the first instant and false afterwards. It holds that $\pi_{1}(0) = \pi_{2}(0)$ and therefore, since $x \in \mthfun{Dep}(y)$ and $\theta$ is behavioral, it must be the case that $\theta(\pi_1)(0)_{\upharpoonright y} = \theta(\pi_2)(0)_{\upharpoonright y}$. Now, if this common value is $\mthsym{false}\xspace$, then it holds that $\theta(\pi_{1}) \not\models_{\mthsym{C}} (\mthsym{G} x \leftrightarrow y)$. On the other hand, if it is $\mthsym{true}\xspace$, then it holds that $\theta(\pi_{2}) \not\models_{\mthsym{C}} (\mthsym{G} x \leftrightarrow y)$. In both cases $\theta \not\models_{\mthsym{C}} (\mthsym{G} x \leftrightarrow y)$, a contradiction. \end{proof} Lemma~\ref{lmm:behavioral-implies-classic} also has implications on the meaning of negation in \textsc{qltl$_{\mthfun{B}}$}\xspace.
Indeed, both the formula $\varphi = \forall x \exists y (\mthsym{G} x \leftrightarrow y)$ and its negation are not satisfiable, that is, $\not\models_{\mthsym{B}} \varphi$ and $\not\models_{\mthsym{B}} \neg \varphi$.\footnote{Since $\varphi$ has no free variables, we can omit the interpretation $\pi$, as the only possible one is the empty interpretation.} This is a common phenomenon, as it also happens when considering the behavioral semantics of logics for strategic reasoning~\cite{MMPV14,GBM20}. It is important, however, to notice that there are three syntactic fragments for which {\textsc{qltl}}\xspace and \textsc{qltl$_{\mthfun{B}}$}\xspace are equivalent, namely the fragments $\Pi_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, $\Sigma_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, and $\Sigma_{1}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$. The reason is that the sets of Skolem and behavioral Skolem functions for these formulas coincide, and so the existence of one implies the existence of the other. \begin{theorem} \label{thm:no-behavioral-dep-equivalence} For every \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi = \mthsym{\wp} \psi$ in the fragments $\Pi_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, $\Sigma_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, and $\Sigma_{1}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$ and every interpretation $\pi$, it holds that $\pi \models_{\mthsym{B}} \varphi$ if, and only if, $\pi \models_{\mthsym{S}} \varphi$. \end{theorem} \begin{proof} The proof proceeds by double implication. The left-to-right direction follows from Lemma~\ref{lmm:behavioral-implies-classic}. From right to left, consider first the case $\varphi \in \Pi_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$. Observe that $\exists(\mthsym{\wp}) = \emptyset$ and so the only possible Skolem function $\theta$ returns the empty interpretation on every possible interpretation $\pi \Cup \pi' \in (2^{\mthfun{free}(\varphi) \cup \forall(\mthsym{\wp})})^{\omega}$.
Such a Skolem function is trivially behavioral and so we have that $\pi \models_{\mthsym{S}} \varphi$ implies $\pi \models_{\mthsym{B}} \varphi$. \\ For the case $\varphi \in \Sigma_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace} \cup \Sigma_{1}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, assume that $\pi \models_{\mthsym{S}} \varphi$ and let $\theta$ be a Skolem function such that $\theta(\pi \Cup \pi') \models_{\mthsym{C}} \psi$ for every $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$. Observe that, for every $Y \in \exists(\mthsym{\wp})$, it holds that $\mthfun{Dep}_{\mthsym{\wp}}(Y) = \emptyset$ and so the values of $Y$ depend only on the free variables of $\varphi$. Now, consider the Skolem function $\theta'$ over $(\mthsym{\wp}, \mthfun{free}(\varphi))$ defined as $\theta'(\pi') \doteq \theta(\pi'_{\upharpoonright \forall(\mthsym{\wp})} \Cup \pi)$. As $\theta$ is a Skolem function and $\mthfun{Dep}_{\mthsym{\wp}}(Y) = \emptyset$, it holds that $\theta'(\pi')(Y) = \theta'(\pi'')(Y)$ for every $\pi', \pi'' \in (2^{\forall(\mthsym{\wp})})^{\omega}$, and so $\theta'$ is trivially behavioral. Moreover, from its definition, it holds that $\theta'(\pi \Cup \pi') \models_{\mthsym{C}} \psi$ for every $\pi' \in (2^{\forall(\mthsym{\wp})})^{\omega}$, which implies $\pi \models_{\mthsym{B}} \varphi$. \end{proof} Theorem~\ref{thm:no-behavioral-dep-equivalence} shows that for these three fragments of \textsc{qltl$_{\mthfun{B}}$}\xspace the satisfiability problem can be solved by employing {\textsc{qltl}}\xspace satisfiability. This also comes with the same complexity, as we just interpret the \textsc{qltl$_{\mthfun{B}}$}\xspace formula directly as a {\textsc{qltl}}\xspace one. \begin{corollary} \label{cor:no-behavioral-fragments-complexity} The satisfiability problem for the fragments $\Pi_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$ and $\Sigma_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$ is PSPACE-complete.
Moreover, the satisfiability problem for the fragment $\Sigma_{1}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$ is EXPSPACE-complete. \end{corollary} \subsection{Behavioral QLTL Satisfiability} We now turn to solving the satisfiability problem for \textsc{qltl$_{\mthfun{B}}$}\xspace formulas that are not in the fragments $\Pi_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, $\Sigma_{0}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$, and $\Sigma_{1}^{\textsc{qltl$_{\mthfun{B}}$}\xspace}$. Analogously to the case of {\textsc{qltl}}\xspace, note that Theorem~\ref{thm:behavioral-satisfiability} allows us to restrict our attention to closed formulas. We use an automata-theoretic approach inspired by the one employed in the synthesis of distributed systems~\cite{KV01,FS05,Sch08a}. This requires some definitions and results, presented below. For a given set $\Upsilon$ of directions, the \emph{$\Upsilon$-tree} is the set $\Upsilon^{*}$ of finite words. The elements of $\Upsilon^{*}$ are called nodes, and the empty word $\varepsilon$ is called the \emph{root}. For every $x \in \Upsilon^{*}$, the nodes $x \cdot c \in \Upsilon^{*}$ are called its children. We say that $c = \mthfun{dir}(x \cdot c)$ is the direction of the node $x \cdot c$, and we fix some $c_0 \in \Upsilon$ such that $\mthfun{dir}(\varepsilon) = c_0$ is the direction of the root. Given two finite sets $\Upsilon$ and $\Sigma$, a $\Sigma$-labeled $\Upsilon$-tree is a pair $\tuple{\Upsilon^{*}, l}$ where $l: \Upsilon^{*} \to \Sigma$ labels every node of $\Upsilon^{*}$ with a letter in $\Sigma$. For a set $\Theta \times \Upsilon$ of directions and a node $x \in (\Theta \times \Upsilon)^{*}$, $\mthfun{hide}_{\Upsilon}(x)$ denotes the node in $\Theta^{*}$ obtained from $x$ by replacing $(\vartheta, \upsilon)$ with $\vartheta$ in each letter of $x$.
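The basic tree notions just introduced can be sketched as follows; the encoding (a node as a tuple of directions) and all function names are our own.

```python
# Illustrative sketch (our own encoding) of Upsilon-trees: nodes are finite
# words over the directions, represented as tuples.

def children(node, directions):
    """The children x . c of a node x, one per direction c."""
    return [node + (c,) for c in directions]

def direction(node, root_dir):
    """dir(x . c) = c; the root gets a fixed direction root_dir."""
    return node[-1] if node else root_dir

def hide(node):
    """hide_Upsilon: replace each letter (theta, upsilon) by theta."""
    return tuple(theta for theta, _ in node)
```

Here `hide` operates on nodes of a $(\Theta \times \Upsilon)$-tree and returns the corresponding node of the $\Theta$-tree, as in the text.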
The function $\mthfun{xray}_{\Xi}$ maps a $\Sigma$-labeled $(\Xi \times \Upsilon)$-tree $\tuple{(\Xi \times \Upsilon)^{*}, l}$ into a $\Xi \times \Sigma$-labeled $(\Xi \times \Upsilon)$-tree $\tuple{(\Xi \times \Upsilon)^{*}, l'}$ where $l'(x) = (pr_1(\mthfun{dir}(x)), l(x))$ adds the $\Xi$-direction of $x$ to its labeling. An \emph{alternating automaton} $\AName = (\Sigma, Q, q_0, \delta, \alpha)$ runs over $\Sigma$-labeled $\Upsilon$-trees (for a predefined set of directions $\Upsilon$). The set of states $Q$ is finite, with $q_0$ being a designated initial state, while $\delta: Q \times \Sigma \to \mathcal{B}^{+}(Q \times \Upsilon)$ denotes a transition function, returning a positive Boolean formula over pairs of states and directions, and $\alpha$ is an acceptance condition. We say that $\AName$ is \emph{nondeterministic}, and denote it with the symbol $\NName$, if every transition returns a positive Boolean formula with only disjunctions. Moreover, we say that it is \emph{deterministic}, and denote it with the symbol $\DName$, if every transition returns a single state. A run tree of $\AName$ on a $\Sigma$-labeled $\Upsilon$-tree $\tuple{\Upsilon^{*}, l}$ is a $Q \times \Upsilon^{*}$-labeled tree where the root is labeled with $(q_0, \varepsilon)$ and where, for a run-tree node $z$ with label $(q,x)$ and set of children $\mthfun{child}(z)$, the labels of these children have the following properties: \begin{itemize} \item for all $y \in \mthfun{child}(z)$, the label of $y$ is of the form $(q_y, x \cdot c_y)$ such that $(q_y, c_y)$ is an atom of the formula $\delta(q, l(x))$ and \item the set of atoms defined by the children of $z$ satisfies $\delta(q, l(x))$.
\end{itemize} We say that $\alpha$ is a \emph{parity} condition if it is a function $\alpha: Q \to C (\subset \SetN)$ mapping every state to a natural number, sometimes referred to as its color. Alternatively, it is a \emph{Streett} condition if it is a set of pairs $\{(G_i, R_i)\}_{i \in I}$, where each $G_i, R_i$ is a subset of $Q$. An infinite path $\rho$ over $Q$ fulfills a parity condition $\alpha$ if the highest color assigned by $\alpha$ to the states appearing infinitely often in $\rho$ is even. The path $\rho$ fulfills a Streett condition if, for every $i \in I$, either an element of $G_i$ or no element of $R_i$ occurs infinitely often on $\rho$. A run tree is \emph{accepting} if all its paths fulfill the acceptance condition $\alpha$. A tree is accepted by $\AName$ if there is an accepting run tree over it. By $\LName(\AName)$ we denote the set of trees accepted by $\AName$. An automaton $\AName$ is \emph{empty} if $\LName(\AName) = \emptyset$. For a $\Sigma$-labeled $\Upsilon$-tree $\tuple{\Upsilon^{*}, l_{\Sigma}}$ and a $\Xi$-labeled $\Upsilon \times \Theta$-tree $\tuple{(\Upsilon \times \Theta)^{*}, l_{\Xi}}$, their \emph{composition}, denoted $\tuple{\Upsilon^{*}, l_{\Sigma}} \oplus \tuple{(\Upsilon \times \Theta)^{*}, l_{\Xi}}$, is the $\Xi \times \Sigma$-labeled $\Upsilon \times \Theta$-tree $\tuple{(\Upsilon \times \Theta)^{*}, l}$ such that, for every $x \in (\Upsilon \times \Theta)^{*}$, it holds that $l(x) = l_{\Xi}(x) \cup l_{\Sigma}(\mthfun{hide}_{\Theta}(x))$. Observe that the $\Upsilon$-component appears in both trees. Their composition, indeed, can be seen as an extension of the labeling $l_{\Xi}$ with the labeling $l_{\Sigma}$, in such a way that the choices for the latter are oblivious to the $\Theta$-component of the direction.
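The composition $\oplus$ can be sketched as follows; labels are modeled as sets, nodes as tuples of $(\upsilon, \vartheta)$ pairs, and all names are our own.

```python
# Illustrative sketch (our own encoding) of the composition of labeled
# trees: l(x) = l_Xi(x) together with l_Sigma(hide_Theta(x)), where
# hide_Theta drops the Theta-component of each direction.

def hide_theta(node):
    """Project a node of the (Upsilon x Theta)-tree onto the Upsilon-tree."""
    return tuple(upsilon for upsilon, _ in node)

def compose(l_sigma, l_xi):
    """Labeling of the composed (Xi x Sigma)-labeled (Upsilon x Theta)-tree."""
    def l(node):
        # The Sigma-part only sees the Upsilon-component of the node,
        # so it is oblivious to the Theta-component, as in the text.
        return l_xi(node) | l_sigma(hide_theta(node))
    return l
```

Two nodes that differ only in their $\Theta$-components thus receive the same $\Sigma$-part of the label.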
A more general definition of tree composition is given in~\cite{FS05}, where the $\Sigma$-labeling is included as a direction and made consistent with it by means of an $\mthfun{xray}$ operation. For a set $\TName$ of $\Xi \times \Sigma$-labeled $\Upsilon \times \Theta$-trees, $\mthfun{shape}_{\Xi,\Upsilon}(\TName)$ is the set of $\Sigma$-labeled $\Upsilon$-trees $\tuple{\Upsilon^{*}, l_{\Sigma}}$ for which there exists a $\Xi$-labeled $\Upsilon \times \Theta$-tree $\tuple{(\Upsilon \times \Theta)^{*}, l_{\Xi}}$ such that $\tuple{\Upsilon^{*}, l_{\Sigma}} \oplus \tuple{(\Upsilon \times \Theta)^{*}, l_{\Xi}} \in \TName$. Intuitively, the shape operation performs a nondeterministic guess on the $\Xi$-component of the trees by taking into account only the $\Upsilon$-component of the directions. This allows us to refine the set of trees to those for which a decomposition consistent with this limited dependence is possible. Interestingly, since this nondeterministic guess is similar to an existential projection, we can also modify a (nondeterministic) parity tree automaton $\NName$ so that it recognizes the shape of its language. Indeed, given a nondeterministic parity tree automaton $\NName = (\Xi \times \Sigma, Q, q_0, \delta, \alpha)$ recognizing $\Xi \times \Sigma$-labeled $\Upsilon \times \Theta$-trees, the automaton $\mthfun{change}_{\Xi,\Upsilon}(\NName) = (\Sigma, Q, q_0, \delta', \alpha)$ recognizes $\Sigma$-labeled $\Upsilon$-trees, where \begin{center} $\delta'(q, \sigma) = \bigvee_{\xi \in \Xi, f \in \delta(q, (\xi, \sigma))} \bigwedge_{\upsilon \in \Upsilon, \vartheta \in \Theta}(f(\upsilon, \vartheta), (\xi, \sigma))$.
\end{center} Intuitively, the automaton $\mthfun{change}_{\Xi,\Upsilon}(\NName)$ nondeterministically guesses a $\Xi$-labeled $\Upsilon \times \Theta$-tree in such a way that its composition with the $\Sigma$-labeled $\Upsilon$-tree being read is accepted by $\NName$. The following holds. \begin{theorem}[{\cite[Theorem 4.11]{FS05}}] \label{thm:shape-automaton} For every nondeterministic parity tree automaton $\NName$ over $\Xi \times \Sigma$-labeled $\Upsilon \times \Theta$-trees, it holds that $\LName(\mthfun{change}_{\Xi,\Upsilon}(\NName)) = \mthfun{shape}_{\Xi,\Upsilon}(\LName(\NName))$. \end{theorem} We can apply the change operation only to nondeterministic automata. This means that, in order to recognize the shape language of an alternating parity automaton $\AName$, we first need to turn it into a nondeterministic one. This can be done in two steps: we first turn $\AName$ into a nondeterministic Streett automaton $\NName_{S}$ recognizing the same language, $\LName(\NName_{S}) = \LName(\AName)$, and then turn the latter into a nondeterministic parity automaton $\NName$ such that $\LName(\NName) = \LName(\NName_{S}) = \LName(\AName)$. If $\AName$ has $n = \card{Q}$ states and $c = \card{C}$ colors, then the nondeterministic Streett automaton $\NName_{S}$ has $n^{O(c \cdot n)}$ states and $O(c \cdot n)$ pairs~\cite{MS84}. In addition, if the nondeterministic Streett automaton $\NName_{S}$ has $m$ states and $p$ pairs, we can build a nondeterministic parity automaton $\NName$ with $p^{O(p)} \cdot m$ states and $O(p)$ colors~\cite{FS05}. By applying these two constructions, we transform an alternating parity automaton $\AName$ into a nondeterministic one $\NName$ accepting the same tree language. Note that $\NName$ is of size singly exponential in the size of $\AName$.
Indeed, we obtain it with $n' = O(c \cdot n)^{O(c \cdot n)} = n^{O(c \cdot n)}$ states~\footnote{The last equality holds because the number $c$ of colors is bounded by the number $n$ of states.} and $c' = O(c \cdot n)$ colors. By $\mthfun{ndet}(\AName) = \NName$ we denote the transformation of an alternating parity automaton into a nondeterministic parity one. From now on, we consider closed \textsc{qltl$_{\mthfun{B}}$}\xspace formulas of the form $\mthsym{\wp} \psi = \exists Y_1 \forall X_1 \allowbreak \ldots \exists Y_n \forall X_n \psi$, with $Y_1$ and $X_n$ being possibly empty. Accordingly, we refer to $\theta$ as a behavioral Skolem function over $\mthsym{\wp}$, as the set $F$ is always empty. Moreover, we define $\hat{X_{i}} = \bigcup_{j \leq i} X_j$ and $\hat{Y_{i}} = \bigcup_{j \leq i} Y_j$, with $X = \hat{X_{n}}$ and $Y = \hat{Y_{n}}$, respectively. Finally, we define $\check{X_{i}} = \bigcup_{j > i} X_j$ and $\check{Y_{i}} = \bigcup_{j > i} Y_j$, respectively. A behavioral Skolem function $\theta$ over $\mthsym{\wp}$ can be regarded as the labeling function of a $2^{Y}$-labeled $2^{X}$-tree. In addition, such a labeling fulfills a \emph{compositional} property, as expressed in the following lemma. \begin{lemma} \label{lmm:bijection-behavioral-trees} Let $\mthsym{\wp} = \exists Y_1 \forall X_1 \ldots \exists Y_n \forall X_n$ be a quantification prefix. A $2^{Y}$-labeled $2^{X}$-tree $\theta$ is a behavioral Skolem function over $\mthsym{\wp}$ iff there exists a tuple $\theta_1, \ldots, \theta_n$, where $\theta_i$ is a $2^{Y_i}$-labeled $2^{\hat{X_{i}}}$-tree, such that $\theta = \theta_1 \oplus \ldots \oplus \theta_n$. \end{lemma} \begin{proof} The proof proceeds by double implication.
From left to right, consider a behavioral Skolem function $\theta$ and, for every $1 \leq i \leq n$, consider the $2^{Y_i}$-labeled $2^{\hat{X_{i}}}$-tree $\theta_i$ defined as $\theta_{i}(x) = \theta(x \times x')_{\upharpoonright Y_{i}}$, where $x \in (2^{\hat{X_{i}}})^{*}$ and $x' \in (2^{X \setminus \hat{X_{i}}})^{*}$. Note that $\mthfun{Dep}_{\mthsym{\wp}}(Y_{i}) = \hat{X_{i}}$, so the value of $\theta_{i}$ on $x$ does not depend on the values in $x'$ and the function is well-defined. By applying the definition of tree composition, it easily follows that $\theta = \theta_1 \oplus \ldots \oplus \theta_n$. For the right-to-left direction, let $\theta_1, \ldots, \theta_{n}$ be labeled trees and consider the composition $\theta = \theta_1 \oplus \ldots \oplus \theta_n$. From the definition of tree composition, it follows that for every $i$, $\theta(x)_{\upharpoonright Y_{i}} = \theta_{i}(x_{\upharpoonright \hat{X_{i}}})$, which fulfills the requirement for $\theta$ to be a behavioral Skolem function over $\mthsym{\wp}$. \end{proof} We now show how to solve the satisfiability problem for \textsc{qltl$_{\mthfun{B}}$}\xspace with an automata-theoretic approach. To do this, we first introduce some notation. For each pair of variable sets $(Y_i, X_i)$, consider the quantification prefix $\check{\mthsym{\wp}_{i}} \doteq \forall \check{X_{i}} \exists \check{Y_{i}}$ and then the quantification prefix $\mthsym{\wp}_{i} \doteq \exists Y_1 \forall X_1 \ldots \exists Y_i \forall X_i \check{\mthsym{\wp}_{i}}$. Intuitively, every quantification prefix $\mthsym{\wp}_{i + 1}$ is obtained from $\mthsym{\wp}_{i}$ by pulling the existential quantification of $Y_{i + 1}$ up before the universal quantification of $X_{i + 1}$. Clearly, we obtain that $\mthsym{\wp}_{0} = \forall X \exists Y$ and $\mthsym{\wp}_{n} = \mthsym{\wp}$. The automata construction builds on top of this quantifier transformation.
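The pulling transformation above is pure bookkeeping over the quantifier blocks. The following minimal sketch (our own encoding, not part of the formal development; \texttt{'E'}/\texttt{'A'} mark existential/universal blocks) computes the whole sequence $\mthsym{\wp}_0, \ldots, \mthsym{\wp}_n$ from the blocks $(Y_1, X_1), \ldots, (Y_n, X_n)$:

```python
def prefixes(blocks):
    """Given blocks [(Y1, X1), ..., (Yn, Xn)], return the quantification
    prefixes p_0, ..., p_n: p_i keeps the first i blocks in their pulled
    form E Y_j A X_j and closes with A check(X_i) E check(Y_i)."""
    n = len(blocks)
    out = []
    for i in range(n + 1):
        head = [q for (Y, X) in blocks[:i] for q in (('E', Y), ('A', X))]
        check_X = [X for (_, X) in blocks[i:]]   # remaining universal blocks
        check_Y = [Y for (Y, _) in blocks[i:]]   # remaining existential blocks
        tail = ([('A', sum(check_X, []))] if check_X else []) \
             + ([('E', sum(check_Y, []))] if check_Y else [])
        out.append(head + tail)
    return out
```

For instance, on the two blocks $(\{y_1\}, \{x_1\})$ and $(\{y_2\}, \{x_2\})$, the sketch yields $\forall\{x_1,x_2\}\exists\{y_1,y_2\}$ as $\mthsym{\wp}_0$ and the original prefix as $\mthsym{\wp}_2$, with $\mthsym{\wp}_1$ in between.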
First, recall that the satisfiability of $\mthsym{\wp}_{0} \psi$ amounts to solving the synthesis problem for $\psi$, with $X$ and $Y$ being the sets of variables controlled by the environment and the system, respectively. Let $\AName_{0}$ be an alternating parity automaton that solves the synthesis problem, thus accepting the $2^{Y}$-labeled $2^{X}$-trees representing the models of $\psi$. Now, for every $i < n$, define $\AName_{i + 1} \doteq \mthfun{change}_{2^{Y_{i+1}}, 2^{\hat{X_{i+1}}}}(\mthfun{ndet}(\AName_{i}))$. We have the following. \begin{theorem} \label{thm:automata-sequence} For every $i \leq n$, the formula $\mthsym{\wp}_{i} \psi$ is satisfiable iff $\LName(\AName_{i}) \neq \emptyset$, where \begin{itemize} \item $\AName_{0}$ is the alternating parity automaton that solves the synthesis problem for $\psi$ with system variables $Y$ and environment variables $X$, and \item $\AName_{i + 1} \doteq \mthfun{change}_{2^{Y_{i+1}}, 2^{\hat{X_{i+1}}}}(\mthfun{ndet}(\AName_{i}))$, for every $i < n$. \end{itemize} \end{theorem} \begin{proof} We prove the theorem by induction through a stronger statement. We show that the automaton $\AName_{i}$ accepts the $2^{\check{Y_{i}}}$-labeled $2^{X}$-trees $\theta_{i}$ for which there exists a sequence $\theta_{1}, \ldots, \theta_{i-1}$ such that $\theta_1 \oplus \ldots \oplus \theta_{i}$ is a behavioral Skolem function over $\mthsym{\wp}_{i}$ that satisfies $\mthsym{\wp}_{i} \psi$. For the base case, the statement boils down to the fact that the automaton $\AName_{0}$ accepts the $2^{Y}$-labeled $2^{X}$-trees that solve the synthesis problem for $\psi$. For the induction case, assume that the statement is true for some $i$.
Thus, the automaton $\AName_{i}$, and hence its nondeterministic version $\mthfun{ndet}(\AName_{i})$, accepts the $2^{\check{Y_{i}}}$-labeled $2^{X}$-trees $\theta_{i}$ for which there exists a sequence $\theta_{1}, \ldots, \theta_{i-1}$ such that $\theta_1 \oplus \ldots \oplus \theta_{i}$ is a behavioral Skolem function that satisfies $\mthsym{\wp}_{i} \psi$. Now, consider the automaton $\AName_{i + 1} = \mthfun{change}_{2^{Y_{i+1}}, 2^{\hat{X_{i+1}}}}(\mthfun{ndet}(\AName_{i}))$. From Theorem~\ref{thm:shape-automaton}, it accepts the $2^{\check{Y_{i + 1}}}$-labeled $2^{X}$-trees $\theta_{i + 1}$ that are in $\mthfun{shape}_{2^{Y_{i + 1}}, 2^{\hat{X_{i+1}}}}(\LName(\AName_{i}))$, and so for which there exists a $2^{Y_{i+1}}$-labeled $2^{\hat{X_{i+1}}}$-tree $\theta_{i}'$ such that $\theta_{i}' \oplus \theta_{i + 1} \in \LName(\AName_{i})$. Observe that now the variables in $Y_{i+1}$ are handled over a $2^{\hat{X_{i+1}}}$-tree and so they do not depend on the variables in $\check{X_{i+1}}$ anymore. This implies that the composition $\theta_1 \oplus \ldots \oplus \theta_{i - 1} \oplus \theta'_{i} \oplus \theta_{i + 1}$ is a behavioral Skolem function over $\mthsym{\wp}_{i + 1}$ that satisfies $\mthsym{\wp}_{i + 1} \psi$, and the statement is proved. \end{proof} Theorem~\ref{thm:automata-sequence} shows that the automata construction is correct. The complexity of solving the satisfiability of \textsc{qltl$_{\mthfun{B}}$}\xspace is stated below. \begin{theorem} \label{thm:behavioral-satisfiability-complexity} The satisfiability problem of a \textsc{qltl$_{\mthfun{B}}$}\xspace formula of the form $\varphi = \exists Y_1 \forall X_1 \ldots \allowbreak \exists Y_n \forall X_n \psi$ is $(n+1)$-EXPTIME-complete.
\end{theorem} \begin{proof} From Theorem~\ref{thm:automata-sequence}, we reduce the problem to the emptiness of the automaton $\AName_{n}$, whose size is $n$-times exponential in the size of $\psi$, as we apply the nondeterminization $n$ times, starting from the automaton $\AName_{0}$ that solves the synthesis problem for $\psi$. As the emptiness check of the alternating parity automaton $\AName_{n}$ involves another exponential blow-up, we obtain that the overall procedure is in $(n+1)$-EXPTIME. A matching lower bound is obtained from distributed synthesis for hierarchically ordered architectures with \textsc{ltl}\xspace objectives, presented in~\cite{PR90}, which is $(n+1)$-EXPTIME-complete with $n$ being the number of processes. Indeed, every process $p_i$ in such an architecture synthesizes a strategy represented by a $2^{O_i}$-labeled $2^{I_i}$-tree, with $O_i$ being its output variables and $I_i$ its input variables. An architecture $A$ is hierarchically ordered if $I_i \subseteq I_{i + 1}$, for every process $p_i$. Thus, for an ordered architecture $A$ and an \textsc{ltl}\xspace formula $\psi$, consider the variables $Y_{i} = O_{i}$ and $X_{i} = I_i \setminus I_{i - 1}$ and the \textsc{qltl$_{\mthfun{B}}$}\xspace formula $\varphi = \exists Y_1 \forall X_1 \ldots \exists Y_n \forall X_n \psi$. A behavioral Skolem function $\theta$ that makes $\varphi$ true corresponds to an implementation that realizes $(A, \psi)$. Moreover, the satisfiability of $\varphi$ is in $(n + 1)$-EXPTIME, matching the lower-bound complexity of the realizability instance. \end{proof} \section{Weak-Behavioral QLTL} \label{sec:weak-behavioral-semantics} We now introduce \emph{weak-behavioral} {\textsc{qltl}}\xspace, denoted \textsc{qltl$_{\mthfun{WB}}$}\xspace, which can be used to model systems with \emph{full observability} over the execution history.
In such systems every action is \emph{public}, meaning that it is visible to the entire system once it has occurred. In order to model this, we introduce an alternative definition of Skolem function, which we call here \emph{weak-behavioral}. We study the satisfiability problem of \textsc{qltl$_{\mthfun{WB}}$}\xspace and show that its complexity is 2-EXPTIME-complete via a reduction to multi-player parity games~\cite{MMS16} with a doubly-exponential number of states and a (singly) exponential number of colors. Analogously to the case of \textsc{qltl$_{\mthfun{B}}$}\xspace, the logic \textsc{qltl$_{\mthfun{WB}}$}\xspace is defined through a Skolem-based approach. \begin{definition} \label{def:weak-behavioral-skolem} For a given quantification prefix $\mthsym{\wp}$ defined over a set $\mthsym{Var}(\mthsym{\wp}) \subseteq \mthsym{Var}$ of propositional variables and a set $F$ of variables not occurring in $\mthsym{\wp}$, a function $\theta: (2^{F \cup \forall(\mthsym{\wp})})^{\omega} \to (2^{\exists(\mthsym{\wp})})^{\omega}$ is a \emph{weak-behavioral Skolem function} over $(\mthsym{\wp}, \mthfun{free}(\varphi))$ if, for all $\pi_1, \pi_2 \in (2^{F \cup \forall(\mthsym{\wp})})^{\omega}$, $k \in \SetN$, and $Y \in \exists(\mthsym{\wp})$, it holds that \begin{center} $\pi_{1}(0,k) = \pi_{2}(0,k)$ and $\pi_{1}(k + 1)_{\upharpoonright \mthfun{Dep}^{F}(Y)} = \pi_{2}(k + 1)_{\upharpoonright \mthfun{Dep}^{F}(Y)}$ implies $\theta(\pi_{1})(k + 1)_{\upharpoonright Y} = \theta(\pi_{2})(k + 1)_{\upharpoonright Y}$. \end{center} \end{definition} In weak-behavioral Skolem functions, the evaluation of the existential variables $Y$ at every instant depends not only on the current evaluation of $\mthfun{Dep}^{F}(Y)$ but also on the evaluation history of every variable. The semantics of \textsc{qltl$_{\mthfun{WB}}$}\xspace is given below.
\begin{definition} \label{def:weak-behavioral-semantics} A \textsc{qltl$_{\mthfun{WB}}$}\xspace formula $\varphi = \mthsym{\wp} \psi$ is true over an interpretation $\pi$ at an instant $i$, written $\pi, i \models_{\mthsym{WB}} \varphi$, if there exists a weak-behavioral Skolem function $\theta$ over $(\mthsym{\wp}, \mthfun{free}(\varphi))$ such that $\theta(\pi \Cup \pi'), i \models_{\mthsym{C}} \psi$, for every $\pi' \in (2^{F \cup \forall(\mthsym{\wp})})^{\omega}$. \end{definition} Differently from the behavioral case, \textsc{qltl$_{\mthfun{WB}}$}\xspace is not a special case of {\textsc{qltl}}\xspace. As a matter of fact, the two logics are incomparable. This is due to the fact that the existentially quantified variables depend, for standard Skolem functions, on the future of their dependencies, whereas, for weak-behavioral functions, they depend on the whole past of the computation, including the non-dependencies. Consider again the formula $\varphi = \forall x \exists y (\mthsym{G} x \leftrightarrow y)$. This is not satisfiable as a \textsc{qltl$_{\mthfun{WB}}$}\xspace formula, as this semantics still does not allow existential variables to depend on the future interpretation of the universally quantified ones. On the other hand, the formula $\varphi = \exists y \forall x (\mthsym{F} x \leftrightarrow \mthsym{F} y)$ is satisfiable as a \textsc{qltl$_{\mthfun{WB}}$}\xspace formula. Indeed, the existentially quantified variable $y$ can determine its value at an instant $i$ by looking at the entire history of assignments, including those for $x$, although only in the past and not at the present instant $i$ itself. However, the semantics of both {\textsc{qltl}}\xspace and \textsc{qltl$_{\mthfun{B}}$}\xspace do not allow such a dependence, which makes $\varphi$ unsatisfiable in both {\textsc{qltl}}\xspace and \textsc{qltl$_{\mthfun{B}}$}\xspace.
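The witness for $\exists y \forall x\, (\mthsym{F} x \leftrightarrow \mthsym{F} y)$ can be sanity-checked executably. The sketch below is our own encoding, not part of the formal development: it approximates $\mthsym{F}$ over a finite prefix extended by one extra instant (which suffices for this particular witness), and sets $y$ true at an instant iff $x$ held at some strictly earlier instant.

```python
from itertools import product

def theta_y(x_prefix):
    """Weak-behavioral witness for  Ey Ax (F x <-> F y):
    y at instant i may inspect x(0), ..., x(i-1) but not x(i),
    so y(i) is set iff x already held strictly before i.  One extra
    instant i = len(x_prefix) is produced so that an x occurring at
    the very last position of the window is still answered."""
    return [any(x_prefix[:i]) for i in range(len(x_prefix) + 1)]

def biconditional_F_holds(x_prefix):
    """F x <-> F y, approximated over the finite window."""
    return any(x_prefix) == any(theta_y(x_prefix))

# Exhaustive check over all Boolean x-prefixes up to length 6.
assert all(biconditional_F_holds(p)
           for k in range(7) for p in product([False, True], repeat=k))
```

The exhaustive check passes because $y$ fires exactly one step after the first occurrence of $x$ and never without an earlier $x$, mirroring the past-only dependence allowed by Definition~\ref{def:weak-behavioral-skolem}.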
\begin{lemma} \label{lmm:semantics-comparison} There exists a satisfiable \textsc{qltl$_{\mthfun{B}}$}\xspace formula that is not satisfiable as \textsc{qltl$_{\mthfun{WB}}$}\xspace. Moreover, there exists a satisfiable \textsc{qltl$_{\mthfun{WB}}$}\xspace formula that is not satisfiable as \textsc{qltl$_{\mthfun{B}}$}\xspace. \end{lemma} \section{Weak-Behavioral QLTL Satisfiability} We now address the satisfiability problem for \textsc{qltl$_{\mthfun{WB}}$}\xspace by showing a reduction to multi-player parity games~\cite{MMS16}. Intuitively, a \textsc{qltl$_{\mthfun{WB}}$}\xspace formula of the form $\varphi = \mthsym{\wp} \psi$, with $\psi$ being an \textsc{ltl}\xspace formula, establishes a multi-player parity game with $\psi$ determining the parity acceptance condition and $\mthsym{\wp}$ setting up the players' controllability and team membership. In order to present this result, we need some additional definitions. An $\omega$-word over an alphabet $\Sigma$ is a special case of a $\Sigma$-labeled $\Upsilon$-tree where the set of directions is a singleton. Since the set $\Upsilon$ is irrelevant, an $\omega$-word is also represented as an infinite sequence over $\Sigma$. Tree automata accepting $\omega$-words are also called \emph{word automata}. Word automata are a very convenient way to (finitely) represent all the models of an \textsc{ltl}\xspace formula $\psi$. As a matter of fact, for every \textsc{ltl}\xspace formula $\psi$, there exists a deterministic parity word automaton $\DName_{\psi}$ whose language is the set of interpretations on which $\psi$ is true. The size of such an automaton is double-exponential in the length of $\psi$. The following lemma gives precise bounds.
\begin{lemma}[\cite{Pit07}] \label{lmm:ltl-to-parity} For every \textsc{ltl}\xspace formula $\psi$ over a set $\mthsym{Var}$ of variables, there exists a \emph{deterministic parity automaton} $\DName_{\psi} = \tuple{2^{\mthsym{Var}}, Q, q_0, \delta, \alpha}$ of size double-exponential w.r.t. $\psi$ and with a (singly) exponential number of priorities such that $\LName(\DName_{\psi}) = \set{\pi \in (2^{\mthsym{Var}})^{\omega}}{\pi \models_{\mthsym{C}} \psi}$. \end{lemma} A multi-player parity game is a tuple $\mthsym{Game} = \tuple{\mthsym{Pl}, (\mthsym{Ac}_{i})_{i \in \mthsym{Pl}}, \mthsym{St}, s_{0}, \lambda, \mthfun{tr}}$ where \begin{inparaenum}[(i)] \item $\mthsym{Pl} = \{0, \ldots, n\}$ is a set of players; \item $\mthsym{Ac}_{i}$ is the set of actions that player $i$ can play; \item $\mthsym{St}$ is a set of states, with $s_0$ being a designated initial state; \item $\lambda: \mthsym{St} \to C$ is a coloring function, assigning a natural number in $C$ to each state of the game; \item $\mthfun{tr}: \mthsym{St} \times (\mthsym{Ac}_{0} \times \ldots \times \mthsym{Ac}_{n}) \to \mthsym{St}$ is a transition function that prescribes how the game evolves in accordance with the actions taken by the players. \end{inparaenum} Players identified with an even index form the Even team, whereas the others form the Odd team. The objective of the Even team is to generate an infinite play over the set of states whose coloring fulfills the parity condition established by $\lambda$. A strategy for player $i$ of the Even team is a function $\mthfun{s}_{i}: \mthsym{St}^{*} \times (\mthsym{Ac}_{0} \times \ldots \times \mthsym{Ac}_{i - 1}) \to \mthsym{Ac}_{i}$ that determines the action to perform at a given instant according to the past history and the current actions of the players that perform their choices before player $i$.
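The ordered-choice mechanism of these games can be made concrete with a minimal executable sketch (our own toy encoding, not taken from~\cite{MMS16}): players pick actions in index order, each strategy seeing the state history and the earlier players' current choices, and here we assume a max-parity convention, i.e., the Even team wins when the maximum color repeating along the play is even.

```python
def play(tr, s0, strategies, steps):
    """Unfold a finite play: at every round, players 0..n choose in index
    order, each seeing the state history and the earlier players' choices."""
    hist = [s0]
    for _ in range(steps):
        acts = []
        for strat in strategies:                 # ordered by player index
            acts.append(strat(tuple(hist), tuple(acts)))
        hist.append(tr[(hist[-1], tuple(acts))])
    return hist

def even_team_wins(cycle_colors):
    """Max-parity condition on the colors repeating along the cycle."""
    return max(cycle_colors) % 2 == 0

# Toy game: states {0, 1}; player 0 (Even team) picks the successor state,
# player 1 (Odd team) has a single dummy action 0.
tr = {(s, (a0, 0)): a0 for s in (0, 1) for a0 in (0, 1)}
color = {0: 1, 1: 2}
s_even = lambda hist, earlier: 1                 # always move to state 1
s_odd = lambda hist, earlier: 0                  # dummy choice
hist = play(tr, 0, [s_even, s_odd], steps=5)
# after the first round the play stays in state 1, whose color 2 is even
```

The same skeleton accommodates any number of players; only the strategy signature, taking the tuple of earlier choices, encodes the ordering imposed by the quantification prefix.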
A tuple of strategies $\tuple{\mthfun{s}_{0}, \mthfun{s}_{2}, \ldots}$ for the Even team is \emph{winning} if every play it generates, no matter how the Odd team responds, fulfills the parity condition. Now, consider a \textsc{qltl$_{\mthfun{WB}}$}\xspace formula of the form $\varphi = \exists X_0 \forall X_1 \ldots \exists X_{n - 1} \forall X_{n} \psi$, with $X_0$ and $X_{n}$ being possibly empty, and let $\DName_{\psi} = \tuple{2^{\mthsym{Var}}, Q, q_0, \delta, \alpha}$ be the \textsc{dpw}\xspace that recognizes the interpretations satisfying $\psi$, with $\alpha = \tuple{F_{0}, F_{1}, \ldots, F_{k}}$. Then, consider the multi-player parity game $\mthsym{Game}_{\varphi} = \tuple{\mthsym{Pl}, (\mthsym{Ac}_{i})_{i \in \mthsym{Pl}}, \mthsym{St}, s_{0}, \lambda, \mthfun{tr}}$ where \begin{inparaenum}[(i)] \item $\mthsym{Pl} = \{0, \ldots, n\}$; \item $\mthsym{Ac}_{i} = 2^{X_{i}}$ for each $i \in \mthsym{Pl}$; \item $\mthsym{St} = Q$ with $s_{0} = q_{0}$; \item $\lambda: \mthsym{St} \to \SetN$ such that $\lambda(s) = \min \set{j}{s \in F_{j}}$; \item $\mthfun{tr} = \delta$. \end{inparaenum} The next theorem establishes the correctness of this construction. \begin{theorem} \label{thm:wbqltl-satisfiability} A \textsc{qltl$_{\mthfun{WB}}$}\xspace formula $\varphi$ is satisfiable iff there exists a winning strategy for the Even team in the multi-player parity game $\mthsym{Game}_{\varphi}$. \end{theorem} \begin{proof} Observe that every player $i$ is associated with the set of actions $\mthsym{Ac}_{i}$, corresponding to the evaluations of the variables in $X_{i}$. In addition, every set of existentially quantified variables is associated with a player whose index is even, and so playing for the Even team in $\mthsym{Game}_{\varphi}$. Also, the ordering of the players reflects the order in the quantification prefix $\mthsym{\wp}$.
In addition to this, note that the strategy tuples for the Even team correspond to the weak-behavioral Skolem functions over $\mthsym{\wp}$, and so they generate the same set of outcomes over $(2^{X})^{\omega}$. Since the automaton $\DName_{\psi}$ accepts all and only those $\omega$-words on which $\psi$ is true, it follows straightforwardly that a weak-behavioral Skolem function $\theta$ over $\mthsym{\wp}$ satisfies $\theta(\pi) \models_{\mthsym{C}} \psi$ for every $\pi$ iff $\theta$ is a winning strategy for the Even team in $\mthsym{Game}_{\varphi}$. Hence, the \textsc{qltl$_{\mthfun{WB}}$}\xspace formula $\varphi$ is satisfiable iff $\mthsym{Game}_{\varphi}$ admits a winning strategy for the Even team. \end{proof} Regarding the computational complexity of \textsc{qltl$_{\mthfun{WB}}$}\xspace, consider that solving a multi-player parity game $\mthsym{Game}$ amounts to deciding whether the Even team has a winning strategy in it. A precise complexity result is provided below. \begin{lemma}{\cite{MMS16}} \label{lmm:multi-player-parity} The complexity of solving a multi-player parity game $\mthsym{Game}$ is polynomial in the number of states and exponential in the number of colors and players. \end{lemma} Therefore, we can conclude that the complexity of \textsc{qltl$_{\mthfun{WB}}$}\xspace satisfiability is as stated below. \begin{theorem} \label{thm:wbqltl-satisfiability-complexity} The satisfiability problem of \textsc{qltl$_{\mthfun{WB}}$}\xspace is 2EXPTIME-complete. \end{theorem} \begin{proof} The procedure described in Theorem~\ref{thm:wbqltl-satisfiability} is in 2EXPTIME. Indeed, the automata construction of Lemma~\ref{lmm:ltl-to-parity} produces a game whose set of states $\mthsym{St}$ is doubly-exponential in the size of $\psi$ and whose number of colors $C$ is singly exponential in the size of $\psi$. Moreover, the number $n$ of players in $\mthsym{Game}_{\varphi}$ is bounded by the length of $\varphi$ itself, as it corresponds to the number of quantifiers in the formula.
Now, from Lemma~\ref{lmm:multi-player-parity}, we obtain that solving $\mthsym{Game}_{\varphi}$ is polynomial in $\mthsym{St}$ and exponential in both $C$ and $n$. This amounts to a procedure that is doubly exponential in the size of $\varphi$. Regarding the lower bound, observe that the formula $\forall X \exists Y \psi$ represents the synthesis problem for the \textsc{ltl}\xspace formula $\psi$ with $X$ and $Y$ being the uncontrollable and controllable variables, respectively, which is already 2EXPTIME-complete~\cite{PR89}. \end{proof} \section{Related Work in Formal Methods} The interaction of second-order quantified variables is of interest in the logic and formal methods community. For instance, Independence-Friendly logic considers dependence atoms as a syntactic extension~\cite{MSS11,GV13}. Another approach generalizes quantification by means of partially ordered quantifiers~\cite{BG86,KM92}, in which existential variables may depend on disjoint sets of universally quantified ones. The notion of behavioral semantics has recently drawn the attention of many researchers in the area of logics for strategic reasoning. Strategy Logic~\cite{MMPV14} ({\textsc{sl}}\xspace) has been introduced as a formalism for expressing complex strategic and game-theoretic properties. Strategies in {\textsc{sl}}\xspace are first-class citizens. Unfortunately, and similarly to {\textsc{qltl}}\xspace, quantification over them sets up a kind of dependence that cannot be realized through actual processes, as it involves future and counterfactual computations that are not accessible to reactive programs. To overcome this, and also to mitigate the computational complexity of the main decision problems, the authors introduced a \emph{behavioral} semantics as a way to restrict the dependence among strategies to a realistic one. They also showed that for a small although significant fragment of {\textsc{sl}}\xspace, which includes {\textsc{atl$^{\star}$}}\xspace, the behavioral semantics has the same expressive power as the standard one. This means that ``\emph{behavioral strategies}'' are able to solve the same set of problems that can be expressed in such a fragment.
Further investigations around this notion have been carried out in the community. In~\cite{GBM18,GBM20}, the authors characterize different notions of behavioral semantics, ruling out future and counterfactual dependencies one by one, and provide a classification of syntactic fragments for which the behavioral and non-behavioral semantics are equivalent. \section{Conclusion} \label{sec:conclusion} We introduced a behavioral semantics for {\textsc{qltl}}\xspace, obtaining a new logic, \emph{Behavioral {\textsc{qltl}}\xspace} (\textsc{qltl$_{\mthfun{B}}$}\xspace). This logic is characterized by the fact that the (second-order) existential quantification of variables is restricted to depend, at every instant, only on the past interpretations of the variables that are universally quantified upfront in the formula, and not on their entire trace, as is the case for classic {\textsc{qltl}}\xspace. This makes such a dependence a function readily implementable by processes, thus making \textsc{qltl$_{\mthfun{B}}$}\xspace suitable for capturing advanced forms of planning and synthesis through standard reasoning, as envisioned since the early days of AI~\cite{Green69}. We studied satisfiability for \textsc{qltl$_{\mthfun{B}}$}\xspace, providing tight complexity bounds. For the simplest syntactic fragments, which do not include quantification blocks of the form $\forall X_{i} \exists Y_{i}$, the complexity is the same as for {\textsc{qltl}}\xspace, since the two semantics are equivalent. For the rest of \textsc{qltl$_{\mthfun{B}}$}\xspace, where the characteristics of the behavioral semantics become apparent, we presented an automata-based technique that is $(n+1)$-EXPTIME, with $n$ being the number of quantification blocks $\forall X_{i} \exists Y_{i}$. The matching lower bound comes from a reduction from the corresponding (distributed) synthesis problems.
We also considered a weaker version of Behavioral {\textsc{qltl}}\xspace, denoted \textsc{qltl$_{\mthfun{WB}}$}\xspace, where the evaluation history is completely visible to every existentially quantified variable, except for the current instant, at which only the variables quantified upfront are visible. We gave a technique for satisfiability that is $2$-EXPTIME, regardless of the number of quantifications in the formula. This is due to the fact that full visibility of the variables allows for solving the problem with simple local reasoning that avoids computationally expensive automata constructions. Also in this case, the matching lower bound comes from a reduction of the corresponding synthesis problem, again proving that our technique is optimal. \begin{paragraph}{Acknowledgments} This work is partially supported by the ERC Advanced Grant WhiteMech (No.\ 834228) and the EU ICT-48 2020 project TAILOR (No.\ 952215). \end{paragraph} \end{document}
math
पं. दीनदयाल उपाध्याय जयन्ती को सुकमा भाजपा ने मनाया मतदान केन्द्रों पर - क्लीपर२८ डिजिटल मीडिया होम/राज्य/छत्तीसगढ़/पं. दीनदयाल उपाध्याय जयन्ती को सुकमा भाजपा ने मनाया मतदान केन्द्रों पर पं. दीनदयाल उपाध्याय जयन्ती को सुकमा भाजपा ने मनाया मतदान केन्द्रों पर सुकमा: सोमवार २५ सितम्बर को भारतीय जनता पार्टी ने पं. दीनदयाल उपाध्याय की जयन्ती को मतदान केन्द्रों मे मनाया । साथ ही देश के प्रधानमंत्री श्री नरेन्द्र मोदी जी की भाजपा की राष्टीय कार्यसमिति बैठक का सम्बोधन को लाईव टी.वी. के माध्यम से दिखाया गया । पं. दीनदयाल की जयंती को कोयला बट्टी( सोडीपार) मे कार्यकर्ताओ को सम्बोधित करते हुये भारतीय जनता युवा मोर्चा जिला अध्यक्ष दिलीप पेद्दी ने कहा पं. दीनदयाल जी के विचारों के साथ साथ हम युवा कार्यकर्ता राज्य व केन्द्र सरकार की योजनाओं को समाज के अंतिम व्यक्ति तक पहुँचा कर लाभ दिलाना है जिस तेज़ी के साथ राज्य व केन्द्र की सरकार गॉव, ग़रीब, किसान, के हित मे कार्य कर रही है उसे हमे जनता को बताना है साथ ही कांग्रेस के पिछले ६० वर्षों मे खुकर्मो को देश को लूट- कसोट, व आये दिन जो भ्रष्टाचार किया करते थे उन सब पापों को जनता को बताने की आपिल किया । भाजपा जिला अध्यक्ष मनोज देव ने कहा समाज के हर युवा- जनता, तक पंडित दीनदयाल जी के एकात्म मनाववाद, अंत्योदय, व अन्य विचारों को लेके हमे कार्य करना होगा, पिछले ६० वर्षों की सरकार के कार्यों की आप लोग तुलना कर लो और आज की राज्य की १४ साल व केन्द्र की ३ साल की तुलना कर लो फर्क आपके सामने होगा । भारतीय जनता पार्टी का लक्ष्य सत्ता नही है , सत्ता सिर्फ़ भाजपा के लिये एक मात्र साधन है । देश की सेवा अंतिम व्यक्ति का उदय ग़रीब, किसान, पिछड़ा व्यक्ति का विकास व भारत को फिर से विश्व गुरू बनाना ही भाजपा का लक्ष्य है । आप सबको पंडित जी के विचारों को अनुचरण करते हुये भारत को फिर से विश्व गुरू बनाने के लिये भारतीय जनता पार्टी को जन जन तक पहुँचाने की अवश्यकता है पंडित दिनदयाल जी जिन विचारों को लेकर देश व देश की जनता के लिये कार्य किये जो त्याग, तपस्या, बलिदान दिया था उन विचारों को राज्य व केन्द्र की भारतीय जनता पार्टी की सरकार पूर्ण कर रही है । एवं पंडित दीनदयाल उपाध्याय जी की जयंती मतदान केन्द्र तक मनाने के लिये पूरे सुकमा जिले के पॉंचो मण्डलो मे सभी 
शक्ति केन्द्रों, मतदान केन्द्रो के संयोजक,पालक व सभी पदाधिकारीयों को ज़िम्मेदारी सौंपा गया था। इस कार्यक्रम मे मुख्यरूप से भाजपा सुकमा मण्डल अध्यक्ष पो. नारायण, पिलूराम यादव, रमाकान्त नायक, शोभन गंदामी, नरेन्द्र सोडी, आलम हाशमी, राजलक्ष्मी, प्रताप दास, व अन्य कार्यकर्ता उपस्थित थे.! नारायण राणे : बाला साहब को प्रताड़ित किया था उद्धव ने जानिए भगवान विष्णु को क्यों कहते हैं नारायण और हरि
hindi
इन ४२० कंपनियों के शेयर बेचने पर नहीं लगेगा ल्टग टैक्स आइडआलस्टॉक | घरेलू बाजार में आई चौतरफा गिरावट ने बीते आठ महीनों में शेयरों में आई मजबूती को खत्म कर दिया है. बाजार ने अपनी सारी बढ़त गंवा दी है. मगर इस वजह से निवेशकों के लिए एक अच्छी खबर है. निवेशकों को अभी बेचने पर ४२० शेयरों पर एलटीसीजी टैक्स नहीं चुकाना पड़ेगा. अभी ये शेयर ३१ जनवरी के अपने भाव से नीचे कारोबार कर रहे हैं. लॉन्ग टर्म कैपिटल गेन टैक्स (एलटीसीजी) के कैलकुलेशन के लिए इस साल ३१ जनवरी की तारीख निर्धारित की गई है. इस तारीख को शेयरों के भाव के आधार पर ही एलटीसीजी आंकलन होगा. गुरुवार को बीएसई ५०० इंडेक्स के ३६६ शेयरों ने ३१ जनवरी की कीमतों से १० फीसदी नीचे कारोबार खत्म किया. २३ शेयरों में गिरावट ५ से १० फीसदी की रही, जबकि ३१ शेयरों ने ५ फीसदी तक की गिरावट दर्ज की थी.
hindi
Salasar Tyre's unique initiative: a finance facility on tyres launched amid the COVID-19 pandemic. Jodhpur: In view of the nationwide lockdown and the economic slowdown caused by the coronavirus, Salasar Tyre launched a unique finance facility on tyres, along with a discount on wheel alignment... Read more Friends pooled money from their own pockets to distribute food. In Pratap Nagar ward no. 18, friends distributed food at their own initiative. Seva Bharti News, Jodhpur: These days many voluntary organisations in Suryanagari are engaged in service, reaching the poor in various areas... Read more The Mahatma Gandhi Ayushman Bharat insurance scheme gave Sunita a new life. In Bikaner, 28,656 patients received more than Rs 18 crore worth of cashless treatment in 5 months; free operations are also being performed in 12 private hospitals. Jaipur (Seva Bharti News): In Nokha tehsil of Bikaner district... Read more People are checking their fitness at the Nirogi Corner. (Seva Bharti News) Jaipur: In today's busy, fast-paced life the common man cannot pay proper attention to his health, and shies away from nutritious food and physical exercise; as a result... Read more Elderly passengers are having to pay the full fare for e-tickets; the senior citizen concession had been suspended. Jodhpur (Seva Bharti News): In view of the corona infection, the railways suspended train operations until 14 April and withdrew certain concessions, including those for senior citizens... Read more The services of retiring doctors extended by 1 to 6 months. Jodhpur (Seva Bharti News): After declaring COVID-19 an epidemic, the state government extended the tenure of retiring doctors by 1 to 6 months. According to district administration officials... Read more Nursing staff conducting door-to-door screening during the lockdown welcomed. Seva Bharti correspondent, Jodhpur: Amid the ongoing lockdown in Suryanagari, silence prevails in four police station areas under curfew, while police vehicles and nursing staff doing door-to-door screening move about... Read more
Became the country's first organic food champion by taking part in Indore Chef. Seva Bharti News, Jodhpur: India is known for its unity in diversity, but there is one subject in which this unity is manifold, and that is India's... Read more Seva Bharti News, Jodhpur: We can see that the coming period will divide human history into two ages: the world before the spread of COVID-19 and the world left after it. By nations... Read more
hindi
January presents a unique time in people’s lives to reflect on the past year’s sweet successes and learn from its bitter failures. Regardless of last year’s outcome, people desire to continue pushing themselves, achieving both personal and organizational success, and earning the respect of their peers along the way. This presents a wonderful opportunity for businesses to devise new goals for the beginning of this year. Many people set personal and professional goals in the New Year, such as losing ten pounds or striving to reach a particular sales volume. However, many of these goals fall short of making it past the month of January. Commitment to these goals is absolutely vital in making them a reality, and putting them into clear writing can be extremely helpful. As with any goal, the objective should be difficult yet attainable. Goals such as integrating a new and emerging technology into your business, gaining a certain number of clientele, or acquiring a desired consumer base will aid in the growth of your business for the upcoming year. Setting the bar high will challenge your employees to dig down and discover their very best while maximizing their potential. The goal should be both specific and clear, allowing those working towards it to thoroughly understand what it is your company seeks to achieve. In order to know whether you have achieved a goal, it must be quantifiable. That way employees can monitor their progress and receive feedback during their difficult yet attainable climb to success. Small movements combine to create major impacts that can lead to a rippling effect around the workplace. Consistency is the final ingredient; without consistency, your goal is likely to die off like the number of people in the gym on February 1st.
Setting up a tracking and reminder system can aid in your company’s consistency, enabling employees to visualize their progress while keeping them updated on the work that still needs to be completed to reach the overall goal. Finally, celebrate! Plan on the end of December being a time of celebration with your coworkers for all the long hours and hard work they put in helping achieve your company’s goal. This will reinforce the mindset of commitment and consistency carrying through to January, with a whole new mountain to climb.
english
/*
 * Copyright (C) 2013 salesforce.com, inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *         http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.auraframework.impl.root;

import java.util.Set;

import org.auraframework.Aura;
import org.auraframework.def.DefDescriptor;
import org.auraframework.def.DefDescriptor.DefType;
import org.auraframework.def.DescriptorFilter;
import org.auraframework.def.RootDefinition;
import org.auraframework.impl.parser.ParserFactory;
import org.auraframework.impl.source.SourceFactory;
import org.auraframework.impl.system.DefFactoryImpl;
import org.auraframework.system.CacheableDefFactory;
import org.auraframework.system.Parser;
import org.auraframework.system.Source;
import org.auraframework.throwable.quickfix.QuickFixException;

/**
 * Creates new ComponentDefs from source or cache.
 *
 * Should not be used directly. Used by ComponentDefRegistry population of
 * registry. This is actually incorrectly typed, as it is not meant to return a
 * single type. We probably should allow a non-typed Factory, or somehow clean
 * this up.
 */
public final class RootDefFactory extends DefFactoryImpl<RootDefinition> implements CacheableDefFactory<RootDefinition> {

    private final SourceFactory sourceFactory;

    public RootDefFactory(SourceFactory sourceFactory) {
        this.sourceFactory = sourceFactory;
    }

    @Override
    public RootDefinition getDef(DefDescriptor<RootDefinition> descriptor) throws QuickFixException {
        RootDefinition def;
        Source<RootDefinition> source = sourceFactory.getSource(descriptor);

        /*
         * We don't require the xml file to actually exist for namespaces. The
         * existence of the dir is enough. If the dir doesn't exist, source will
         * be null.
         */
        if (source == null || (!source.exists() && descriptor.getDefType() != DefType.NAMESPACE)) {
            return null;
        }

        descriptor = source.getDescriptor();
        Parser<RootDefinition> parser = ParserFactory.getParser(source.getFormat(), descriptor);
        def = parser.parse(descriptor, source);
        return def;
    }

    @Override
    public long getLastMod(DefDescriptor<RootDefinition> descriptor) {
        return sourceFactory.getSource(descriptor).getLastModified();
    }

    @Override
    public boolean hasFind() {
        return true;
    }

    @Override
    public Set<DefDescriptor<?>> find(DescriptorFilter matcher) {
        return sourceFactory.find(matcher);
    }

    @Override
    public Set<String> getNamespaces() {
        return sourceFactory.getNamespaces();
    }

    @Override
    public Source<RootDefinition> getSource(DefDescriptor<RootDefinition> descriptor) {
        return sourceFactory.getSource(descriptor);
    }

    @Override
    public boolean exists(DefDescriptor<RootDefinition> descriptor) {
        Aura.getLoggingService().incrementNum("RootDefFactory.exists");
        Source<?> s = getSource(descriptor);
        return s != null && s.exists();
    }
}
code
The multilingual management consultant will be responsible for maintaining the highest level of security compliance for MYPINPAD’s products, which encompass both merchant and consumer solutions. He will also play a key role in setting the firm’s security vision, evolving strategy and policies for its ongoing product development. MYPINPAD’s solutions deliver secure PIN authentication on unsecured electronic devices, such as mobile phones, tablets and PCs. The solution optimises security around the payment process, reducing risk and fraud. The technology provides consumers and merchants with a familiar and trusted PIN authentication process. Global Head of Mobile POS Solutions at MYPINPAD, David Poole, said: “This is a defining time for the business following the release of PCI SSC’s new standard for software-based PIN entry on mobile devices for card payment acceptance. The standard will revolutionise payments, increase global card acceptance and mitigate fraud. Importantly, it will reduce barriers to entry for all merchants.” This month MYPINPAD launched its MPES (MYPINPAD’s PIN Entry Solution) product, aligning with the release of PCI SSC’s new standard for software-based PIN entry on mobile devices for card payment acceptance. MPES has been developed to enhance and expand the current mPOS market offering to meet the growth in card acceptance points. It is a software-based PIN pad solution that securely enables PIN authentication on merchants’ mobile devices, such as smartphones and tablets, as an alternative to traditional mPOS hardware-based solutions. MYPINPAD’s MPES technology has recently been integrated into the Bluelite Secure Card Reader (SCR) from Datecs (a leading manufacturer and developer of innovative POS hardware and Secure Card Readers) to provide merchants with a low-cost, PCI-compliant PoM payment acceptance solution.
english
Predictions from the applied model of imagery use (Martin, Moritz, & Hall, 1999) were tested by examining the perceived effectiveness of five imagery types in serving specific functions. Potential moderation effects of this relationship by imagery ability and perspective were also investigated. Participants were 155 athletes from 32 sports, and materials included a chart for rating imagery effectiveness constructed specifically for the study as well as a modified version of the Sport Imagery Questionnaire (SIQ; Hall, Mack, Paivio, & Hausenblas, 1998). Results supported the predictions for cognitive but not motivational imagery types, and MG‐M imagery was perceived to be the most effective imagery type for motivational functions. Significant differences existed between imagery types regarding frequency and ease of imaging. The relationship between frequency and effectiveness was not moderated by imagery ability or perspective, and athletes who imaged more frequently found imagery more effective and easier to do.
english
SSC GD Constable 2018 revised result and cutoff declared. The SSC conducted the examination for the Constable GD recruitment from 11th Feb 2019 to 11th March 2019. The officials held the exam over many days because the total number of available vacancies is large, and hence the number of applicants is large. Candidates who appeared in the SSC Constable GD written examination were waiting for the result, so for those candidates we have provided the SSC GD Constable Result 2019 in this article. Appeared candidates can also check the SSC GD Constable Result 2019 from this page. The officials of the Staff Selection Commission of India have released the result. The Staff Selection Commission of India conducted the recruitment examination to fill 54,953 vacant posts of Constable (GD) in CAPFs, NIA and SSF, and of Rifleman (GD) in the Assam Rifles. The officials want to select talented and eligible candidates as Constable GD in the SSC from among those who successfully completed the exam. The officials declared the SSC Constable GD revised result on 12 September 2019. For more updates, candidates can follow this page. The necessary information about the number of registered candidates was shown to candidates, and more details are given below. While declaring the SSC result, the SSC Constable GD Cutoff 2019 was also released, because the cutoff mark is the minimum qualifying score in the examination. The cutoff marks are not the same for all candidates: the SSC GD Constable cutoff differs for different categories, and it has been finalised on the basis of the number of vacant posts and the number of appeared candidates. Appeared candidates who score more than the cutoff mark qualify in the written examination and are eligible to take part in the next qualifying round. Most applicants do not know how to check the result and cutoff marks, so for those applicants we have given the steps below. First, candidates should visit the official site. Click on the result link on the home page. A new page will open on your screen. Fill in all the required information in the given columns. Now click the submit tab. Your result will appear. Download
it and take a printout for further processing.
hindi
A collaboration between direct marketers and public relations specialists will reap benefits for both parties, says Colin Lloyd. Direct marketing and PR campaigns should go hand-in-hand, but it is vital to get it right, not least to avoid mailings being sent to people who have died. Good direct marketing is good PR.
english
\begin{document} \title[Properties and applications of the PDF ...]{Properties and applications\\ of the prime detecting function:\\ infinitude of twin primes,\\ asymptotic law of distribution\\ of prime pairs\\ differing by an even number} \author[R. M.~Abrarov]{R. M.~Abrarov$^1$} \email{$^[email protected]} \author[S. M.~Abrarov]{S. M.~Abrarov$^2$} \email{$^[email protected]} \date{September 29, 2011} \begin{abstract} The prime detecting function (PDF) approach can be an effective instrument in the investigation of numbers. The PDF is constructed by a recurrence sequence: each successive prime adds a sieving factor in the form of the PDF. With built-in prime sieving features and properties such as simplicity, integro-differentiability and configurable capability for a wide variety of problems, the application of the PDF leads to new and interesting results. As an example, in this exposition we present proofs of the infinitude of twin primes and of the first Hardy-Littlewood conjecture for prime pairs (the twin prime number theorem). In this example one can see that the application of the PDF is especially effective in the investigation of asymptotic problems, in combination with the proposed method of test and probe functions. \\ \\ \noindent{\bf Keywords:} prime detecting function, twin primes (prime twins), twin prime counting function, distribution of twin primes, Hardy-Littlewood conjecture, twin prime number theorem, Dirac delta comb \end{abstract} \maketitle \section{\textbf{Properties of the prime detecting function}} In our previous paper \cite{RAbrarov} we introduced the prime detecting function (PDF), with outcome 1 for primes and 0 for composite numbers.
In its general form the PDF looks like \begin{equation} \label{Eq_1} \begin{aligned} p_{}^\delta \left( n \right): = & \left( {1 - \delta \left( {\frac{n}{{{p_1}}} - {p_1}} \right)} \right)\left( {1 - \delta \left( {\frac{n}{{{p_2}}} - {p_2}} \right)} \right)...\\ &\left( {1 - \delta \left( {\frac{n}{{{p_k}}} - {p_k}} \right)} \right)... \ , \qquad n \ge 2. \end{aligned} \end{equation} where ${p_k} \ {\rm{ }}\left( {k = 1,{\rm{ 2, 3, }}...} \right)$ are all the prime numbers in ascending order (throughout the article $k,{\rm{ }}l,{\rm{ }}m,{\rm{ }}n,{\rm{ }}q,{\rm{ }}r,{\rm{ }}s$ are integer numbers) and the delta-function is \begin{equation} \label{Eq_2} \delta\left(x\right):= \begin{cases} 1, \text{if $x \in \mathbb{N}_0=\{0, 1, 2,\dots\}$} \\ 0, \text{if $x \notin \mathbb{N}_0$} \end{cases} . \end{equation} It follows directly from \eqref{Eq_1} and \eqref{Eq_2} that if $n$ coincides with any ${p_k}{\rm{ }}\left( {k = 1,{\rm{ 2, 3, }}...} \right)$ then $p_{}^\delta \left( n \right) = 1$, and otherwise $p_{}^\delta \left( n \right) = 0$. Let us consider the properties of this function in more detail. We do not know in advance the values of the primes ${p_k}$ for any $k$, but we can determine them successively by the procedure described below. Initially, we assume that not a single value of a prime is known and the PDF has the form $p_0^\delta \left( n \right) = 1$. This function must be recharged in a timely manner with the known prime numbers ${p_k}$, and it will then tell for each further number whether it is prime or composite. It is a recurrent routine: once a prime has been found, it induces a change in the form of the PDF which, in turn, consecutively determines further primes. Let us examine the positive integers starting from the first integer following 1. It is 2. Consequently, we have $p_0^\delta \left( 2 \right) = 1$. This indicates that we have found the first prime number ${p_1} = 2$. Next, we have to charge it into the PDF.
After that it takes the form $p_1^\delta \left( n \right) = \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)$. The next integer is 3 and $p_1^\delta \left( 3 \right) = \left( {1 - \delta \left( {\frac{3}{2} - 2} \right)} \right) = 1$. Our function shows the correct result that the second prime number is ${p_2} = 3$. Again, it is necessary to charge the prime ${p_2}$ into the function $p_1^\delta \left( n \right)$, getting the new form\\ $p_2^\delta \left( n \right) = \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right)$. The next natural number is 4 and $p_2^\delta \left( 4 \right) = \left( {1 - \delta \left( {\frac{4}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{4}{3} - 3} \right)} \right)$. The first factor in the product gives zero and, correspondingly, $p_2^\delta \left( 4 \right) = 0$. This means 4 is a composite number, inducing no change in the form of the PDF. Further, we have $n = 5$. None of the factors in the product is zero. Therefore $p_2^\delta \left( 5 \right) = 1$ and ${p_3} = 5$. We have to charge this prime into our function $p_2^\delta \left( n \right)$, and now we get $p_3^\delta \left( n \right) = \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right)\left( {1 - \delta \left( {\frac{n}{5} - 5} \right)} \right)$. And so on: going consecutively over all the integers, we can find out which numbers are prime and which are composite. Since there are infinitely many primes (Euclid's theorem \cite{Goldston, Hardy, Ribenboim, RAbrarov}), the PDF ultimately becomes \begin{equation} \label{Eq_3} \begin{aligned} {p^\delta }\left( n \right) = p_\infty ^\delta \left( n \right) = & \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right)\\ & \left( {1 - \delta \left( {\frac{n}{5} - 5} \right)} \right)...
\\ = & \prod\limits_{p \ {\rm{primes}}} {\left( {1 - \delta \left( {\frac{n}{p} - p} \right)} \right)}, \qquad n \ge 2. \\ \end{aligned} \end{equation} The prime detecting function $p_k^\delta \left( n \right)$ can be defined by recurrence relations for $n = {\rm{2, 3, 4, }}...$ as \begin{equation} \label{Eq_4} \begin{cases} p_0^\delta \left( n \right) = 1,\\ \text{if $p_{k - 1}^\delta \left( n \right) = 1 \Rightarrow {p_k} = n$ and replace $p_{k - 1}^\delta \left( n \right)$ with}\\ \text{\qquad \qquad \qquad \quad \ \ \,$p_k^\delta \left( n \right) = p_{k - 1}^\delta \left( n \right)\left( {1 - \delta \left( {\frac{n}{{{p_k}}} - {p_k}} \right)} \right)$.} \end{cases} \end{equation} Each ${p_k}$ is always a prime, since by the recurrence procedure each number is checked for divisibility by each smaller prime, and $p_{k - 1}^\delta \left( n \right) = 1$ means that ${p_k} = n$ does not have any smaller prime as a divisor. This function looks very simple and obvious. Nevertheless, it is significant enough to yield important information about the numbers. Let us emphasize some important characteristics of the PDF. It is not difficult to observe that the PDF utilizes an \emph{inclusion - exclusion principle} \cite{Wiki-Inclusion}. Each bracket $\left( {1 - \delta \left( {\frac{n}{p} - p} \right)} \right)$ is a \emph{building block} of the PDF acting as a \emph{numerical (digital) filter or sieve}, which filters out all the numbers divisible by $p$ starting with the \emph{square} of this number, i.e. \emph{the argument of each delta-function has a quadratic zero-point boundary}. It is an effective \emph{functional alternative} to the well-known sieve of Eratosthenes \cite{Halberstam, Hardy, Ribenboim}.
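The recurrence \eqref{Eq_4} translates almost line by line into executable form. The following is a minimal Python sketch of ours (the names \texttt{delta} and \texttt{sieve\_pdf} are illustrative, not from the text); exact rational arithmetic is used so that the test of \eqref{Eq_2} on $n/p - p$ is evaluated exactly:

```python
from fractions import Fraction

def delta(x):
    """Eq. (2): 1 if x is a non-negative integer, 0 otherwise."""
    return 1 if x >= 0 and x == int(x) else 0

def sieve_pdf(N):
    """Recurrent PDF of Eq. (4): each detected prime p_k adds a
    sieving bracket (1 - delta(n/p_k - p_k)) to the product."""
    primes = []      # p_1, p_2, ... found so far
    outcomes = {}    # n -> p_k^delta(n)
    for n in range(2, N + 1):
        val = 1
        for p in primes:
            val *= 1 - delta(Fraction(n, p) - p)
        outcomes[n] = val
        if val == 1:  # p_{k-1}^delta(n) = 1  =>  p_k = n, charge it
            primes.append(n)
    return primes, outcomes

primes, _ = sieve_pdf(30)
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that the quadratic zero-point boundary is automatic here: $\delta(n/p - p) = 1$ exactly when $p \mid n$ and $n \ge p^2$.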
Depending on the assigned problem we can construct the PDF \emph{in various blocks (brackets)} - not necessarily with the recurrence procedure \eqref{Eq_4} - and, instead of the sequence of positive integers $n$, we can examine some \emph{special sets of numbers or some function}: $\left( {1 - \delta \left( {\frac{{f(n)}}{p} - p} \right)} \right)$. This means the PDF possesses substantial \emph{flexibility for applications to a wide variety of problems}. As we will see further, due to the \emph{simple properties of the delta-functions}, the PDF is \emph{especially effective in the investigation of asymptotic problems}. A quite natural generalization of the PDF to a function of a real argument leads to \emph{integro-differentiability of the function}, and in this case, as we will see below, it becomes in explicit form the \emph{derivative of the prime counting function} \cite{Goldston, Hardy, Ribenboim}. If we charge the PDF only with the first $k$ primes ${p_k}$ \begin{equation} \label{Eq_5} \begin{aligned} p_k^\delta \left( n \right) = & \left( {1 - \delta \left( {\frac{n}{{{p_1}}} - {p_1}} \right)} \right)\left( {1 - \delta \left( {\frac{n}{{{p_2}}} - {p_2}} \right)} \right)...\\ & \left( {1 - \delta \left( {\frac{n}{{{p_k}}} - {p_k}} \right)} \right) , \quad n \ge 2 \end{aligned} \end{equation} it can tell us whether a test number $n$ is prime or not up to $p_{k + 1}^2 - 1$. Beyond that, the PDF's outcome 0 indicates that the test number is composite (certainly one of the ${p_k}$ is a divisor), while outcome 1 shows that the number is relatively prime to the first $k$ primes (none of the ${p_k}$ is a divisor), but is not necessarily prime. Note that for the prime counting function (the quantity of primes $p \le n$) we have \begin{equation} \label{Eq_6} \pi \left( n \right) = \sum\limits_{m = 2}^n {{p^\delta }\left( m \right)} .
\end{equation} Consequently, for each $p_k^\delta \left( n \right)$ we can define the corresponding summatory function \begin{equation} \label{Eq_7} {\pi _k}\left( n \right) = \sum\limits_{m = 2}^n {p_k^\delta \left( m \right)} = \sum\limits_{m = 2}^n {\prod\limits_{l = 1}^k {\left( {1 - \delta \left( {\frac{m}{{{p_l}}} - {p_l}} \right)} \right)} } , \end{equation} which counts the quantity of numbers up to $n$ relatively prime to the first $k$ primes ${p_k}$. We also have \begin{equation} \label{Eq_8} \pi \left( n \right) = \sum\limits_{m = 2}^n {{p^\delta }\left( m \right)} = \sum\limits_{m = 2}^n {p_\infty ^\delta \left( m \right)} = {\pi _\infty }\left( n \right). \end{equation} Further, we investigate the asymptotic densities of these functions. Obviously, for $$\pi_1 \left(n\right)=\sum _{m=2}^{n}p_{{1}}^{\delta} \left(m\right)=\sum _{\text{odd numbers $\leq n$}} 1$$ we have the asymptotic density of odd numbers \begin{equation} \label{Eq_9} \mathop{\lim }\limits_{n\to \infty } \frac{\pi _1 \left(n\right)}{n}=\frac{1}{2}=\left(1-\frac{1}{p_{1} } \right) .
\end{equation} Using the following two remarkable properties of the delta-functions (further $\left\lfloor {\ } \right\rfloor {\rm{ and }}\left\lceil {\ } \right\rceil $ are floor and ceiling functions, respectively, and $\left\{ {\ } \right\}$ is fractional part of the number \cite{Wiki-Floor}) \begin{equation} \label{Eq_10} \begin{aligned} & \delta \left(\frac{n }{p_{m_1} } -p_{m_1} \right)\dots \delta \left(\frac{n}{p_{m_{i}}} - p_{m_{i}} \right) \delta \left(\frac{n}{p_{k}} - p_{k} \right) = \\ & \delta \left( {\frac{n}{p_{m_1} }} \right) \dots \delta \left( {\frac{n}{p_{m_{i}} }} \right) \delta \left(\frac{n}{p_{k}} - p_{k} \right) = \\ & \delta \left( {\frac{n}{{{p_{m_1}} \dots {p_{m_{i}}} \cdot p_{k} }} - \left\lceil {\frac{{{p_k}}}{{{p_{m_1}} \dots {p_{m_{i}}}}}} \right\rceil } \right), \quad m_1< \ldots <m_i<k, \end{aligned} \end{equation} and \begin{equation} \label{Eq_11} \begin{aligned} & \mathop {\lim}\limits_{n \to \infty } \frac{{\sum\limits_{m = 2}^n {\delta \left( {\frac{m}{p_{k} \cdot q} - r_k} \right)} }}{n} = \frac{1}{p_{k} \cdot q} = \\ & \frac{1}{p_{k}} \mathop {\lim}\limits_{n \to \infty } \frac{{\sum\limits_{m = 2}^n {\delta \left( {\frac{m}{q} - r_q} \right)} }}{n} \text{ (for any finite $r_q$) ,} \end{aligned} \end{equation} it is not difficult to get recurrence relation (assuming that $\mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k-1} \left(n\right)}{n}$ exists) \begin{equation} \label{Eq_12} \begin{aligned} & \mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k} \left(n\right)}{n} =\\ & \mathop{\lim }\limits_{n\to \infty } \frac { \sum\limits_{m = 2}^n {\left( {1 - \delta \left( {\frac{m}{{{p_k}}} - {p_k}} \right)} \right)\prod\limits_{l = 1}^{k-1} {\left( {1 - \delta \left( {\frac{m}{{{p_l}}} - {p_l}} \right)} \right)} } }{n} =\\ & \mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k-1} \left(n\right)}{n}-\mathop{\lim }\limits_{n\to \infty } \frac { \sum\limits_{m = 2}^n {{ \delta \left( {\frac{m}{{{p_k}}} - {p_k}} \right)} 
\prod\limits_{l = 1}^{k-1} {\left( {1 - \delta \left( {\frac{m}{{{p_l}}} - {p_l}} \right)} \right)} } }{n} = \\ & \left( {1 - \frac{1}{{{p_k}}}} \right)\mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k-1} \left(n\right)}{n} . \end{aligned} \end{equation} From \eqref{Eq_9}, repeatedly applying the recurrence relation \eqref{Eq_12}, we obtain \small \begin{equation} \label{Eq_13} \mathop {\lim}\limits_{n \to \infty } \frac{{{\pi _k}\left( n \right)}}{n} = \left( {1 - \frac{1}{{{p_1}}}} \right)\left( {1 - \frac{1}{{{p_2}}}} \right) \cdot ... \cdot \left( {1 - \frac{1}{{{p_k}}}} \right) = \prod\limits_{l = 1}^k {\left( {1 - \frac{1}{{{p_l}}}} \right)} {\rm{ }}{\rm{.}} \end{equation} \normalsize which is valid for any $k \in \{1, 2, 3,\ldots\}$ . In the limit of infinitely large $k$ in \eqref{Eq_13} from \eqref{Eq_5} and \eqref{Eq_8}, we conclude that asymptotic density of primes is \begin{equation} \label{Eq_14} \mathop{\lim }\limits_{n\to \infty } \frac{\pi _{\infty} \left(n\right)}{n} = \mathop{\lim }\limits_{n\to \infty } \frac{\pi \left(n\right)}{n} = \prod\limits_{\text{$p$ prime}} {\left( {1 - \frac{1}{p}} \right)} . \end{equation} We can generalize argument $n$ of PDF for any real number $x$ simply accepting \begin{equation} \label{Eq_15} {\hat p^\delta }\left( x \right): = {p^\delta }\left( {\left\lceil x \right\rceil } \right). 
\end{equation} For this function there is a corresponding analog of the prime counting function \eqref{Eq_6} \begin{equation} \label{Eq_16} \hat \pi \left( x \right): = \pi \left( {\left\lfloor x \right\rfloor } \right) + \left( {\pi \left( {\left\lfloor {x + 1} \right\rfloor } \right) - \pi \left( {\left\lfloor x \right\rfloor } \right)} \right)\left\{ x \right\}, \end{equation} which is represented graphically as the values of the original prime counting function for integer arguments $\pi \left( {\left\lfloor x \right\rfloor } \right)$ \eqref{Eq_6} connected by the linear function $\left( {\pi \left( {\left\lfloor {x + 1} \right\rfloor } \right) - \pi \left( {\left\lfloor x \right\rfloor } \right)} \right)\left\{ x \right\}$. Then for the left derivative of \eqref{Eq_16} we get \begin{equation} \label{Eq_17} \frac{{{d_{left}}\hat \pi \left( x \right)}}{{dx}} = {\hat p^\delta }\left( x \right) \end{equation} and \begin{equation} \label{Eq_18} \hat \pi \left( x \right) = \int_2^x {{{\hat p}^\delta }\left( y \right)dy}. \end{equation} If we want to see the prime counting function in the traditional way, as a unit step function with jumps at the primes, then for the corresponding PDF we have to perform a few steps. First, we generalize \eqref{Eq_1} for any real $x$ \small \begin{equation} \label{Eq_19} \begin{aligned} p_{}^\delta \left( x \right): = & \left( {\delta \left( {x - 2} \right) - \delta \left( {\frac{x}{{{p_1}}} - {p_1}} \right)} \right)\left( {\delta \left( {x - 2} \right) - \delta \left( {\frac{x}{{{p_2}}} - {p_2}} \right)} \right)... \\ & {\rm{ }}\left( {\delta \left( {x - 2} \right) - \delta \left( {\frac{x}{{{p_k}}} - {p_k}} \right)} \right) \ldots \quad . \end{aligned} \end{equation} \normalsize The function above is zero everywhere on the real $x$ axis except at the primes, where the function is 1.
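The product formula \eqref{Eq_13} can be checked numerically through the summatory function \eqref{Eq_7}. The Python sketch below is our illustration (the names \texttt{delta} and \texttt{pi\_k} are ours), again using exact rationals for the delta-function of \eqref{Eq_2}:

```python
from fractions import Fraction

def delta(x):
    # Eq. (2): 1 on non-negative integers, 0 otherwise
    return 1 if x >= 0 and x == int(x) else 0

def pi_k(n, charged):
    """Summatory function of Eq. (7) for the PDF charged with `charged`."""
    total = 0
    for m in range(2, n + 1):
        val = 1
        for p in charged:
            val *= 1 - delta(Fraction(m, p) - p)
        total += val
    return total

n = 10_000
print(pi_k(n, [2, 3, 5]) / n)             # 0.2668
print((1 - 1/2) * (1 - 1/3) * (1 - 1/5))  # 4/15 = 0.2666...
```

The small excess over $4/15$ comes from the quadratic zero-point boundary: for instance $m = 2, 3, 5$ themselves pass all brackets, since each bracket only fires from $p^2$ onward.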
Define the PDF for a real argument \begin{equation} \label{Eq_20} p_{}^{{\delta ^D}}\left( x \right): = p_{}^\delta \left( x \right){\delta ^D}\left( {x - \delta \left( x \right)x} \right) , \end{equation} where ${\delta ^D}\left( x \right)$ is the Dirac delta-function (distribution) \cite{Estrada}, \cite{SAbrarov}\footnote{In our work \cite{SAbrarov} we showed, with simple examples involving number series, their reciprocals and some identities, how the Dirac delta can be introduced from a new point of view. Compared to the Stieltjes integration methods widely utilized in number theory, the Dirac delta function approach turns out to be more efficient, natural and simple for understanding, especially with stepwise/discrete functions, due to its integro-differentiability and other remarkable properties.}, \cite{Wiki-Dirac} with the sifting property $$\int_{ - y - \varepsilon }^{ + y + \varepsilon } {f\left( x \right){\delta ^D}\left( {x - y} \right)dx} = f\left( y \right){\rm{, }} \quad \forall \varepsilon > 0.$$ It should be noted that ${\delta ^D}\left( {x - \delta \left( x \right)x} \right) = \sum\limits_{m = 0}^\infty {{\delta ^D}\left( {x - m} \right)}$ is nothing but a Dirac comb \cite{Wiki-Dirac} for $x \ge 0$, while the PDF $p_{}^{{\delta ^D}}\left( x \right)$ is a linear combination of delta distributions with spikes at the primes, representing a ``remainder'' left after sampling of the comb. This means $p_{}^{{\delta ^D}}\left( x \right)$ has all the properties of a delta distribution (integro-differentiability, well defined Fourier, Laplace and other transforms, etc.). The prime counting function will be \begin{equation} \label{Eq_21} \pi \left( x \right) = \int_{ - \infty }^x {{p^{{\delta ^D}}}\left( y \right)dy} = \int_2^x {{p^{{\delta ^D}}}\left( y \right)dy}, \end{equation} and correspondingly \begin{equation} \label{Eq_22} \frac{{d\pi \left( x \right)}}{{dx}} = {p^{{\delta ^D}}}\left( x \right).
\end{equation} We see that the PDF for a real argument is, in explicit form, just the derivative of the prime counting function. \section{\textbf{Infinitude of twin primes,\\ the asymptotic law of distribution of prime pairs\\ differing by an even number}} Twin primes (or prime twins) are pairs of prime numbers that differ by exactly two. Some examples of twin primes are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), \dots . There is only one pair of primes that differs by less than two: the pair (2, 3), which we can call the unique twin pair and set apart from the set of twin primes. Obviously, all twin primes are pairs of successive odd numbers, and apart from (3, 5), (5, 7) there are no other two twin-prime pairs consisting of three successive odd numbers, since in any such sequence beyond 3 one number is certainly not a prime (it must be divisible by 3). There is much heuristic and computational evidence \cite{Caldwell, Goldston, Hardy23, Ribenboim} that the twin primes are infinitely many (the twin primes conjecture), but the infinitude of twin primes has not been proved until now. There have been several outstanding results towards the twin primes conjecture. In 1919 Brun showed that the sum of reciprocals of the twin primes (unlike the sum of reciprocals of all primes) converges and comes to some constant $B = 1.9021604\dots,$ called Brun's constant \begin{equation} \label{Eq_23} \left( {\frac{1}{3} + \frac{1}{5}} \right) + \left( {\frac{1}{5} + \frac{1}{7}} \right) + \left( {\frac{1}{{11}} + \frac{1}{{13}}} \right) + \left( {\frac{1}{{17}} + \frac{1}{{19}}} \right) + ... = B. \end{equation} Convergence signifies that the twin primes are quite sparse among the sequence of all primes.
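The partial sums of \eqref{Eq_23} can be sampled directly. The sketch below is our illustration; it uses plain trial division rather than the PDF machinery, and its partial sums stay well below $B$:

```python
def is_prime(n):
    # plain trial division; the PDF machinery is not needed for this check
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def brun_partial(limit):
    """Partial sum of Eq. (23) over twin pairs (p, p + 2) with p <= limit."""
    total = 0.0
    for p in range(3, limit + 1, 2):
        if is_prime(p) and is_prime(p + 2):
            total += 1.0 / p + 1.0 / (p + 2)
    return total

print(brun_partial(100_000))  # well below Brun's constant B = 1.9021604...
```

The slow growth of these partial sums is a direct numerical expression of the sparseness of the twin primes.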
In 1920 Brun also gave an upper bound for the quantity of twin primes up to $x$, which at the present time has been improved to \begin{equation} \label{Eq_24} \le 3.42 \cdot 2\prod\limits_{p > 2} {\left( {1 - \frac{1}{{{{\left( {p - 1} \right)}^2}}}} \right)} \frac{x}{{{{\left( {\log x} \right)}^2}}} . \end{equation} In the famous paper \cite{Hardy23} (1923), Hardy and Littlewood conjectured on heuristic grounds that the estimate for the quantity of twin primes up to $x$ should be the following: \begin{equation} \label{Eq_25} 2{C^{twin}}\int_2^x {\frac{{dx}}{{{{\left( {\log x} \right)}^2}}}} \end{equation} with the twin prime constant (the designations $C_2$ and $\Pi _2$ are also used for this constant) \begin{equation} \label{Eq_26} \begin{aligned} {C^{twin}} & = \prod\limits_{p > 2} {\frac{{p\left( {p - 2} \right)}}{{{{\left( {p - 1} \right)}^2}}} } = \prod\limits_{p > 2} {\left( {1 - \frac{1}{{{{\left( {p - 1} \right)}^2}}}} \right) }\\ & = 0.66016181584686957392781211\dots . \end{aligned} \end{equation} Chen proved that for infinitely many primes $p$, the number $p + 2$ is at most a product of two primes \cite{Chen, Halberstam}. Quite recently, more spectacular progress on this problem has been obtained \cite{Goldston-Pintz, Soundararajan}: the team of mathematicians Goldston, Pintz and Yildirim proved that there are infinitely many consecutive primes which are closer than any arbitrarily small multiple of the average spacing between primes \begin{equation} \label{Eq_27} \Delta : = \mathop {\lim \inf }\limits_{n \to \infty } \frac{{{p_{n + 1}} - {p_n}}}{{\log n}} = 0. \end{equation} Assuming a very regular distribution of primes in arithmetic progressions, the same team showed that there are infinitely many pairs of primes differing by 16 or less. The current state of the subject can be found in Refs. \cite{Goldston, Goldston-Pintz, Korevaar, Soundararajan, Yildirim}. The PDF approach can help to make further advances on the problem.
By analogy with the PDF, for any $n \ge 3$ we can construct a twin primes detecting function for the pair $\left( {n,n + 2} \right)$, with outcome 1 for twin primes and 0 otherwise. To do this we take the product of two PDFs \begin{equation} \label{Eq_28} {p^{\delta :twin}}\left( n \right) = {p^\delta }\left( n \right){p^\delta }\left( {n + 2} \right),{\rm{ }} \quad n \ge 3. \end{equation} It is clear that such a function takes the value 1 at $n$ if and only if ${p^\delta }\left( n \right)$ and ${p^\delta }\left( {n + 2} \right)$ are both 1, i.e. the pair $\left( {n,n + 2} \right)$ consists of prime numbers (twin primes). Recalling the product formula \eqref{Eq_3} for each of ${p^\delta }\left( n \right)$ and ${p^\delta }\left( {n + 2} \right)$, we can write ($n \ge 3$) \begin{equation} \label{Eq_29} \begin{aligned} {p^{\delta :twin}}\left( n \right) = & \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{2} - 2} \right)} \right)\\ & \left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{3} - 3} \right)} \right) \\ & \left( {1 - \delta \left( {\frac{n}{5} - 5} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{5} - 5} \right)} \right) \dots . \end{aligned} \end{equation} Consider the two products $$\left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{2} - 2} \right)} \right)$$ and $$\left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{3} - 3} \right)} \right).$$ We have \small \begin{equation} \label{Eq_30} \begin{aligned} & \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right) \left( {1 - \delta \left( {\frac{{n + 2}}{2} - 2} \right)} \right) = 1 - \delta \left( {\frac{n}{2} - 2} \right) - \\ & \delta \left( {\frac{{n + 2}}{2} - 2} \right) + \delta \left( {\frac{n}{2} - 2} \right)\delta \left( {\frac{{n + 2}}{2} - 2} \right).
\end{aligned} \end{equation} \normalsize The sum of the last two terms $$ - \delta \left( {\frac{{n + 2}}{2} - 2} \right) + \delta \left( {\frac{n}{2} - 2} \right)\delta \left( {\frac{{n + 2}}{2} - 2} \right)$$ is always zero for $n > 2$, because either $n$ is odd and both summands vanish, or $n$ is even and the summands take opposite values: $ - 1 + 1 = 0$. Thus, the product simplifies to \begin{equation} \label{Eq_31} \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{2} - 2} \right)} \right) = 1 - \delta \left( {\frac{n}{2} - 2} \right). \end{equation} In the second product \small \begin{equation} \label{Eq_32} \begin{aligned} & \left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right) \left( {1 - \delta \left( {\frac{{n + 2}}{3} - 3} \right)} \right) = 1 - \delta \left( {\frac{n}{3} - 3} \right) - \\ & \delta \left( {\frac{{n + 2}}{3} - 3} \right) + \delta \left( {\frac{n}{3} - 3} \right)\delta \left( {\frac{{n + 2}}{3} - 3} \right) \end{aligned} \end{equation} \normalsize the term $\delta \left( {\frac{n}{3} - 3} \right)\delta \left( {\frac{{n + 2}}{3} - 3} \right)$ is always zero, because 3 can never divide both $n$ and $n + 2$. So we have \footnotesize \begin{equation} \label{Eq_33} \left( {1 - \delta \left( {\frac{n}{3} - 3} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2}}{3} - 3} \right)} \right) = 1 - \delta \left( {\frac{n}{3} - 3} \right) - \delta \left( {\frac{{n + 2}}{3} - 3} \right). \end{equation} \normalsize The above simplification for the prime 3 can be applied to any larger prime as well.
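These two simplifications are easy to check by brute force. In the sketch below we read $\delta \left( {\frac{n}{p} - p} \right)$ as equal to 1 exactly when $p$ divides $n$ and $n \ge p^2$; this encoding is our own reading of the construction behind \eqref{Eq_3}, which is not shown in this section. Under that assumption the identities \eqref{Eq_31} and \eqref{Eq_33} hold for every tested $n > 2$.

```python
def delta_term(n, p):
    """delta(n/p - p): 1 when p divides n and n/p - p >= 0 (i.e. n >= p*p),
    else 0. This encoding of the delta-function is our assumption."""
    return 1 if n % p == 0 and n >= p * p else 0

for n in range(3, 5000):
    # Eq. (31): for p = 2 the pair of factors collapses to a single factor
    assert (1 - delta_term(n, 2)) * (1 - delta_term(n + 2, 2)) == 1 - delta_term(n, 2)
    # Eq. (33): for p = 3 the cross term vanishes (3 cannot divide both n and n+2)
    assert ((1 - delta_term(n, 3)) * (1 - delta_term(n + 2, 3))
            == 1 - delta_term(n, 3) - delta_term(n + 2, 3))
```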
Hence, the twin primes detecting function becomes \footnotesize \begin{equation} \label{Eq_34} \begin{aligned} {p^{\delta :twin}}\left( n \right) = & \left( {1 - \delta \left( {\frac{n}{2} - 2} \right)} \right) \cdot \left( {1 - \delta \left( {\frac{n}{3} - 3} \right) - \delta \left( {\frac{{n + 2}}{3} - 3} \right)} \right) \cdot \\ & \left( {1 - \delta \left( {\frac{n}{5} - 5} \right) - \delta \left( {\frac{{n + 2}}{5} - 5} \right)} \right) \cdot ...{\rm{ , }} \quad n \ge 3. \end{aligned} \end{equation} \normalsize In general, we represent the twin primes detecting function as \footnotesize \begin{equation} \label{Eq_35} \begin{aligned} {p^{\delta :twin}}\left( n \right) = & \left( {1 - \delta \left( {\frac{n}{{{p_1}}} - {p_1}} \right)} \right)\left( {1 - \delta \left( {\frac{n}{{{p_2}}} - {p_2}} \right) - \delta \left( {\frac{{n + 2}}{{{p_2}}} - {p_2}} \right)} \right) \dots \\ & \left( {1 - \delta \left( {\frac{n}{{{p_k}}} - {p_k}} \right) - \delta \left( {\frac{{n + 2}}{{{p_k}}} - {p_k}} \right)} \right) \dots , \quad n \ge 3, \end{aligned} \end{equation} \normalsize where ${p_1} = 2$, ${p_2} = 3$, ${p_3} = 5$ and so on. Now, following the method discussed in the previous section, we try to find the asymptotic densities of twin pairs relatively prime to the first $k$ prime numbers. The corresponding twin pairs detecting function is \footnotesize \begin{equation} \label{Eq_36} \begin{aligned} p_k^{\delta :twin}\left( n \right) = & \left( {1 - \delta \left( {\frac{n}{{{p_1}}} - {p_1}} \right)} \right)\\ & \prod\limits_{m = 2}^k {\left( {1 - \delta \left( {\frac{n}{{{p_m}}} - {p_m}} \right) - \delta \left( {\frac{{n + 2}}{{{p_m}}} - {p_m}} \right)} \right)}, \quad n \ge 3, \end{aligned} \end{equation} \normalsize where ${p_m}$ $\left( {m = 1,{\rm{ 2, 3, }}...{\rm{, }}k} \right)$ are the first $k$ prime numbers in ascending order.
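As a concrete illustration, once the product \eqref{Eq_35} includes every prime $p \le \sqrt{n+2}$, it pins down the twin pairs exactly, so the detecting function can be evaluated as below. As before, reading $\delta(n/p - p)$ as 1 iff $p \mid n$ and $n \ge p^2$ is our own assumption about the construction behind \eqref{Eq_3}.

```python
def delta_term(n, p):
    """delta(n/p - p): 1 when p | n and n >= p*p, else 0 (assumed encoding)."""
    return 1 if n % p == 0 and n >= p * p else 0

def primes_up_to(n):
    """Sieve of Eratosthenes: list of all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, flag in enumerate(sieve) if flag]

def twin_detector(n):
    """p^{delta:twin}(n) of Eq. (35): 1 iff (n, n+2) is a twin prime pair, n >= 3."""
    for p in primes_up_to(int((n + 2) ** 0.5) + 1):
        if p == 2:
            # first factor of Eq. (35) is just (1 - delta(n/2 - 2)), per Eq. (31)
            if delta_term(n, 2):
                return 0
        elif delta_term(n, p) or delta_term(n + 2, p):
            return 0
    return 1

print([n for n in range(3, 100) if twin_detector(n)])
# → [3, 5, 11, 17, 29, 41, 59, 71]
```

The printed values are the smaller members of the twin pairs below 100, matching the examples listed at the start of this section.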
The corresponding summatory function is \begin{equation} \label{Eq_37} \pi _k^{twin}\left( n \right) = \sum\limits_{m = 3}^n {p_k^{\delta {\rm{:}}twin}\left( m \right)} \quad . \end{equation} For $\pi_1^{twin} \left(n\right)$, which counts twin pairs relatively prime to 2, i.e. all odd pairs from 3 up to $n$ $$\pi_1^{twin} \left(n\right)=\sum _{m=3}^{n}p_{{1}}^{\delta:twin} \left(m\right)=\sum _{3\leq\text{odd numbers $\leq n$}} 1,$$ we have the same asymptotic density as in \eqref{Eq_9} \begin{equation} \label{Eq_38} \mathop{\lim }\limits_{n\to \infty } \frac{\pi _1^{twin} \left(n\right)}{n}=\frac{1}{2}=\left(1-\frac{1}{p_{1} } \right) . \end{equation} Unlike the PDF \eqref{Eq_5}, expanding the brackets in \eqref{Eq_36} also leads to terms such as \footnotesize \begin{equation} \label{Eq_39} \begin{aligned} & \delta \left(\frac{n }{p_{k_1} } -p_{k_1} \right)\dots \delta \left(\frac{n}{p_{k_{i}}} - p_{k_{i}} \right) \delta \left(\frac{n+2}{p_{l_1} } -p_{l_1} \right)\dots \delta \left(\frac{n+2}{p_{l_{j}}} - p_{l_{j}} \right) = \\ & \delta \left( \frac{n}{{q_1} }- \left\lceil \frac{p_{k_{i}}^2}{q_1} \right\rceil \right)\delta \left( \frac{n+2}{{q_2} }- \left\lceil \frac{p_{l_{j}}^2}{q_2} \right\rceil \right), \\ & k_1< \ldots <k_{i-1}<k_i, \quad l_1< \ldots <l_{j-1}<l_j, \\ & q_1=p_{k_1}\ldots p_{k_i}, \quad q_2=p_{l_1}\ldots p_{l_j}, \quad \left(q_1,\ q_2 \right)=1. \end{aligned} \end{equation} \normalsize The above product of delta-functions can be reduced to a single delta-function of the form $\delta \left( {\frac{{m + s}}{q} - r} \right)$ with $q = q_1 \cdot q_2 $, since $q_1$ and $q_2$ are relatively prime and we can always find finite integers $m_1$ and $m_2$ such that $q_1 \cdot m_1= q_2 \cdot m_2 +2$.
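Before turning to the term-by-term analysis, it is reassuring to watch the densities emerge numerically. The sketch below (Python; cutoffs and names are our own choices, and the $\delta$ encoding is the same assumption as before) tabulates $\pi_k^{twin}(n)/n$ for the first few $k$ at a large but finite $n$; the values settle near $\left(1-\frac{1}{p_1}\right)\prod_{m=2}^{k}\left(1-\frac{2}{p_m}\right)$, the densities derived in this section.

```python
def delta_term(n, p):
    """delta(n/p - p): 1 when p | n and n >= p*p, else 0 (assumed encoding)."""
    return 1 if n % p == 0 and n >= p * p else 0

FIRST_PRIMES = [2, 3, 5, 7, 11]

def truncated_twin_detector(n, k):
    """p_k^{delta:twin}(n) of Eq. (36): sieve (n, n+2) by the first k primes only."""
    if delta_term(n, 2):             # factor (1 - delta(n/2 - 2))
        return 0
    for p in FIRST_PRIMES[1:k]:      # factors (1 - delta(n/p - p) - delta((n+2)/p - p))
        if delta_term(n, p) or delta_term(n + 2, p):
            return 0
    return 1

N = 300000
for k in range(1, 6):
    density = sum(truncated_twin_detector(n, k) for n in range(3, N)) / N
    product = 0.5
    for p in FIRST_PRIMES[1:k]:
        product *= 1.0 - 2.0 / p
    print(k, round(density, 5), round(product, 5))
```

For $k = 1$ the empirical density is close to $1/2$, for $k = 2$ close to $1/6$, and so on, in line with the recurrence developed below in this section.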
So we rewrite \eqref{Eq_39} as \small \begin{equation} \label{Eq_40} \begin{aligned} & \delta \left( \frac{n+q_1 m_1 }{{q_1}}- \left\lceil \frac{p_{k_{i}}^2}{q_1} + m_1 \right\rceil \right) \delta \left( \frac{n+2 + q_2 m_2}{{q_2} }- \left\lceil \frac{p_{l_{j}}^2}{q_2} + m_2 \right\rceil \right) = \\ & \delta \left( \frac{n+s}{q} - r \right), \quad s=q_1 m_1=q_2 m_2+2, \quad r \text{ is some finite integer}. \end{aligned} \end{equation} \normalsize Applying \eqref{Eq_39}, \eqref{Eq_40} and the analog of \eqref{Eq_11} \begin{equation} \label{Eq_41} \begin{aligned} & \mathop {\lim}\limits_{n \to \infty } \frac{{\sum\limits_{m = 2}^n {\delta \left( {\frac{m+s_k}{p_{k} \cdot q} - r_k} \right)} }}{n} = \frac{1}{p_{k} \cdot q} = \\ & \frac{1}{p_{k}} \mathop {\lim}\limits_{n \to \infty } \frac{{\sum\limits_{m = 2}^n {\delta \left( {\frac{m+s_q}{q} - r_q} \right)} }}{n} \text{ (for any finite $s_q$ and $r_q$),} \end{aligned} \end{equation} it is not difficult to obtain the recurrence relation for $k\geq2$ (assuming that $\mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k-1}^{twin} \left(n\right)}{n}$ exists) \begin{equation} \label{Eq_42} \begin{aligned} & \mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k}^{twin} \left(n\right)}{n} =\\ & \mathop{\lim }\limits_{n\to \infty } \frac { \sum\limits_{m = 2}^n {\left( {1 - \delta \left( {\frac{m}{{{p_k}}} - {p_k}} \right) - \delta \left( {\frac{m+2}{{{p_k}}} - {p_k}} \right)} \right) p_{k-1}^{\delta :twin}\left( m \right) } }{n} =\\ & \mathop{\lim }\limits_{n\to \infty } \frac{\pi _{k-1}^{twin} \left(n\right)}{n}-\mathop{\lim }\limits_{n\to \infty } \frac { \sum\limits_{m = 2}^n {{ \delta \left( {\frac{m}{{{p_k}}} - {p_k}} \right)} p_{k-1}^{\delta :twin}\left( m \right) } }{n} - \\ & \mathop{\lim }\limits_{n\to \infty } \frac { \sum\limits_{m = 2}^n {{ \delta \left( {\frac{m+2}{{{p_k}}} - {p_k}} \right)} p_{k-1}^{\delta :twin}\left( m \right) } }{n} = \left( {1 - \frac{2}{{{p_k}}}} \right)\mathop{\lim }\limits_{n\to \infty } \frac{\pi
_{k-1}^{twin} \left(n\right)}{n} . \end{aligned} \end{equation} From \eqref{Eq_38}, applying the recurrence relation \eqref{Eq_42} as many times as needed, we obtain the asymptotic density \begin{equation} \label{Eq_43} \begin{aligned} \mathop {\lim}\limits_{n \to \infty } \frac{{\pi _k^{twin}\left( n \right)}}{n} & = \left( {1 - \frac{1}{{{p_1}}}} \right)\left( {1 - \frac{2}{{{p_2}}}} \right) \cdot ... \cdot \left( {1 - \frac{2}{{{p_k}}}} \right) \\ & = \left( {1 - \frac{1}{{{p_1}}}} \right)\prod\limits_{m = 2}^k {\left( {1 - \frac{2}{{{p_m}}}} \right)}, \end{aligned} \end{equation} which is valid for any $k \in \{1,\ 2,\ 3,\ \ldots\}$. Further, we can infer that the twin prime counting function (the number of twin primes $\left( {p,p + 2} \right)$ such that $p \le n$) \begin{equation} \label{Eq_44} {\pi ^{twin}}\left( n \right) = \sum\limits_{m = 3}^n {{p^{\delta {\rm{:}}twin}}\left( m \right)} = \sum\limits_{m = 3}^n {p_\infty ^{\delta {\rm{:}}twin}\left( m \right)} = \pi _\infty ^{twin}\left( n \right) \end{equation} has the asymptotic density \small \begin{equation} \label{Eq_45} \begin{aligned} \mathop {\lim}\limits_{n \to \infty } \frac{{{\pi_\infty^{twin}}\left( n \right)}}{n} = \mathop {\lim}\limits_{n \to \infty } \frac{{{\pi ^{twin}}\left( n \right)}}{n} & = \left( {1 - \frac{1}{2}} \right)\left( {1 - \frac{2}{3}} \right)\left( {1 - \frac{2}{5}} \right)\dots \\ & = \left( {1 - \frac{1}{2}} \right)\prod\limits_{p > 2} {\left( {1 - \frac{2}{p}} \right)}.
\end{aligned} \end{equation} \normalsize Now we form the ratio of the asymptotic density \eqref{Eq_43} and the square of the asymptotic density \eqref{Eq_13} \begin{equation} \label{Eq_46} \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi _k^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{{\pi _k}\left( n \right)}}} \right)}^2}} \right) \end{equation} to get for each $k = 1,\ 2,\ 3,\dots$\\ \small \leftline{$\qquad\quad \begin{array}{l} \begin{aligned} \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi _1^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{{\pi _1}\left( n \right)}}} \right)}^2}} \right) & = \frac{1}{2}{\left( 2 \right)^2} = 2,\\ \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi _2^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{{\pi _2}\left( n \right)}}} \right)}^2}} \right) & = 2\left( {1 - \frac{2}{3}} \right){\left( {1 - \frac{1}{3}} \right)^{ - 2}} \\ & = 2\left( {1 - \frac{1}{{{{\left( {3 - 1} \right)}^2}}}} \right), \\ \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi _3^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{{\pi _3}\left( n \right)}}} \right)}^2}} \right) & = 2\left( {1 - \frac{1}{{{{\left( {3 - 1} \right)}^2}}}} \right)\left( {1 - \frac{2}{5}} \right){\left( {1 - \frac{1}{5}} \right)^{ - 2}}\\ & = 2\left( {1 - \frac{1}{{{{\left( {3 - 1} \right)}^2}}}} \right)\left( {1 - \frac{1}{{{{\left( {5 - 1} \right)}^2}}}} \right), \end{aligned} \end{array}$}\\ \leftline{\qquad\qquad\dots} \begin{equation} \label{Eq_47} \begin{aligned} \mathop {\lim}\limits_{n \to \infty } & \left( {\frac{{\pi _k^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{{\pi _k}\left( n \right)}}} \right)}^2}} \right) =\\ 2 & \left( {1 - \frac{1}{{{{\left( {3 - 1} \right)}^2}}}} \right) \left( {1 - \frac{1}{{{{\left( {5 - 1} \right)}^2}}}} \right)...\left( {1 - \frac{1}{{{{\left( {{p_k} - 1} \right)}^2}}}} \right).
\end{aligned} \end{equation} \normalsize Ultimately, when $k$ tends to infinity, the right-hand side of expression \eqref{Eq_47} has the limit (see \eqref{Eq_26}) \begin{equation} \label{Eq_48} 2\prod\limits_{k = 2}^\infty {\left( {1 - \frac{1}{{{{\left( {{p_k} - 1} \right)}^2}}}} \right)} = 2{C^{twin}}. \end{equation} As $k$ grows we see the process of ``refining'' of primes and twin primes among the integers relatively prime to the first $k$ primes. Also, the least composite number relatively prime to the first $k$ primes drifts to infinity as $k \to \infty$. For extremely large $k$ the asymptotic of $\frac{{\pi _k^{twin}\left( n \right)}}{n}{\left( {\frac{n}{{{\pi _k}\left( n \right)}}} \right)^2}$ remains almost the same (ever closer to the constant $2{C^{twin}}$), i.e. for extremely large $k$ it is weakly sensitive to switching from one $k$ to another, signifying that the greatest contribution to the asymptotic comes from ``pure'' primes and ``pure'' twin primes only. These two processes, ``refining'' and approaching the constant $2{C^{twin}}$, run in parallel. Ultimately, in the ratio of the asymptotic densities $\frac{{\pi _k^{twin}\left( n \right)}}{n}$ and ${\left( {\frac{{{\pi _k}\left( n \right)}}{n}} \right)^2}$ we arrive at the ratio of the purely prime asymptotic densities $\frac{{\pi _{}^{twin}\left( n \right)}}{n}$ and ${\left( {\frac{{\pi \left( n \right)}}{n}} \right)^2}$ (see \eqref{Eq_45} and \eqref{Eq_14}) \begin{equation} \label{Eq_49} \begin{aligned} & \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi _\infty ^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{{\pi _\infty }\left( n \right)}}} \right)}^2}} \right) = \\ & \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi _{}^{twin}\left( n \right)}}{n}{{\left( {\frac{n}{{\pi \left( n \right)}}} \right)}^2}} \right) = 2{C^{twin}}.
\end{aligned} \end{equation} Using the Prime Number Theorem \cite{Hardy, RAbrarov}, which states that \begin{equation} \label{Eq_50} \mathop {\lim}\limits_{n \to \infty } \left( {\frac{{\pi \left( n \right)}}{n}\log n} \right) = 1 \text{\ or \ } \frac{{\pi \left( n \right)}}{n} \thicksim \frac{1}{{\log n}} \end{equation} we can rewrite \eqref{Eq_49} as \begin{equation} \label{Eq_51} \begin{aligned} \pi _{}^{twin}\left( n \right) & \thicksim 2{C^{twin}}n{\left( {\frac{{\pi \left( n \right)}}{n}} \right)^2} \\ & \thicksim 2{C^{twin}}\frac{n}{{{{\left( {\log n} \right)}^2}}} \thicksim 2{C^{twin}}\int_2^n {\frac{{dx}}{{{{\left( {\log x} \right)}^2}}}} . \end{aligned} \end{equation} Expression \eqref{Eq_51} explicitly asserts the infinitude of twin primes and gives the asymptotic law of distribution of twin primes. Thus, the twin primes conjecture and the Hardy--Littlewood conjecture for twin primes \eqref{Eq_25} are proved. In our method, we determine the asymptotic of $\frac{{\pi _{}^{twin}\left( n \right)}}{n}$ by comparing it with the asymptotic of the function ${\left( {\frac{{\pi \left( n \right)}}{n}} \right)^2}$. In other words, here $\frac{{\pi _{}^{twin}\left( n \right)}}{n}$ is the test-function, with uncertain asymptotic behaviour, and ${\left( {\frac{{\pi \left( n \right)}}{n}} \right)^2}$ is the probe-function, with known asymptotic. The same idea was used in our previous paper \cite{RAbrarov}, where we presented our version of the proof of the Prime Number Theorem (in that work the test-function was $\frac{{\pi \left( n \right)}}{n}$ and the probe-function was the Harmonic Number $H\left( n \right)$). The result obtained for twin primes can be readily generalized to any even distance $2k$ between prime pairs $\left( {p,p + 2k} \right)$ and to the corresponding $2k$-pair prime detecting function \begin{equation} \label{Eq_52} {p^{\delta :2k}}\left( n \right) = {p^\delta }\left( n \right){p^\delta }\left( {n + 2k} \right), \quad n \ge 3 .
\end{equation} For this purpose we revisit all the inferences \eqref{Eq_28}-\eqref{Eq_45} and simply observe that in \[ \begin{aligned} & \left( {1 - \delta \left( {\frac{n}{p} - p} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2k}}{p} - p} \right)} \right) = 1 - \delta \left( {\frac{n}{p} - p} \right) \\ & - \delta \left( {\frac{{n + 2k}}{p} - p} \right) + \delta \left( {\frac{n}{p} - p} \right)\delta \left( {\frac{{n + 2k}}{p} - p} \right) \end{aligned} \] the four terms, for any prime $p > 2$ dividing $k$, can be reduced to only two terms for $n \ge {p^2}$ \begin{equation} \label{Eq_53} \left( {1 - \delta \left( {\frac{n}{p} - p} \right)} \right)\left( {1 - \delta \left( {\frac{{n + 2k}}{p} - p} \right)} \right) = 1 - \delta \left( {\frac{n}{p} - p} \right) \end{equation} for the same reasons for which we carried out the reduction \eqref{Eq_30}, \eqref{Eq_31} for the prime 2. This means that each prime $p$ dividing $2k$ contributes to the asymptotic densities the factor $\left( {1 - \frac{1}{p}} \right)$ instead of the factor $\left( {1 - \frac{2}{p}} \right)$. Consequently, the asymptotic densities for prime pairs differing by $2k$ have to be multiplied by the factor \begin{equation} \label{Eq_54} \prod\limits_{\scriptstyle p > 2 \atop \scriptstyle p\left| k \right. } {\frac{{1 - \frac{1}{p}}}{{1 - \frac{2}{p}}}} = \prod\limits_{\scriptstyle p > 2 \atop \scriptstyle p\left| k \right. } {\frac{{p - 1}}{{p - 2}}} . \end{equation} Finally, for the prime pairs counting function (the number of prime pairs $\left( {p,p + 2k} \right)$ with $p \le n$) we have \begin{equation} \label{Eq_55} \begin{aligned} \pi _{}^{2k}\left( n \right) & \thicksim 2{C^{twin}}\prod\limits_{\scriptstyle p > 2 \atop \scriptstyle p\left| k \right. } {\frac{{p - 1}}{{p - 2}}} \frac{n}{{{{\left( {\log n} \right)}^2}}} \\ & \thicksim 2{C^{twin}}\prod\limits_{\scriptstyle p > 2 \atop \scriptstyle p\left| k \right. } {\frac{{p - 1}}{{p - 2}}} \int_2^n {\frac{{dx}}{{{{\left( {\log x} \right)}^2}}}} .
\end{aligned} \end{equation} This is what Hardy and Littlewood conjectured about prime pairs in 1923 (Conjecture B) [\cite{Hardy23}, p.~42]. Hence, the Hardy--Littlewood Conjecture B is now proved. Obviously, formula \eqref{Eq_55} also implies the weaker statement that for every natural number $k$ there exist infinitely many pairs of primes $p,\ q$ such that $q - p = 2k$ (without requiring them to be consecutive). Remarkably, the prime detecting function approach and the method of asymptotic densities for test and probe functions can also be applied to a wide variety of problems concerning sequences of primes generated by polynomials (the Goldbach conjecture, primes in arithmetic progressions, clusters of twin primes, the $k$-tuple conjecture, Cunningham chains, \dots) and non-polynomial primes (such as Mersenne and Fermat primes), as well as to other problems \cite{Caldwell, Ribenboim}. \end{document}
math
थाईलैंड - विकियात्रा थाईलैंड (थाई: ) को आधिकारिक रूप से थाई का साम्राज्य () कहा जाता है। अच्छे भोजन, अच्छा मौसम और प्राकृतिक सौंदर्य के कारण यह दक्षिण पूर्व एशिया का सबसे पसंदीदा पर्यटक स्थल है। इसे मुस्कान की भूमि भी कहा जाता है। थाईलैंड का नक्शा १३.७५१००.४६६६६७१ बैंगकोक (थाई: ) ये थाईलैंड की राजधानी है, जिसकी जनसंख्या एक करोड़ से भी अधिक है। क्योंकि ये एक बहुत बड़ा शहर है, इस कारण इसकी बहुत ऊँची इमारते हैं और यातायात भी भीड़ वाला ही रहता है। ये एशिया के कुछ ऐसे विशाल महानगरों में से एक है, जिसमें कई सारे शानदार मंदिर, नहर और काफी व्यस्त बाजार मौजूद है। १४.३४७७७८१००.५६०५५६२ आयुत्थया (थाई: ) १८.७९५२७८९८.९९८६११३ चियांग मइ (थाई: ) १९.९०९४४४९९.८२७५४ चियांग राय (थाई: ) १४.०१९४४४९९.५३११११५ कंचनबूरी (थाई: ) १४.९७५१०२.१६ नखोन रत्चासीमा (थाई: ) १२.९२७५१००.८७५२७८७ पट्टाया (थाई: ) १७९९.८१६६६७८ सुखोठाई (थाई: ) ९.13९722९९.330556९ सूरत ठानी (थाई: ) दक्षिण पूर्व एशिया में सबसे अधिक पर्यटक यहीं आते हैं। यहाँ हरा भरा जंगल है, जो उतना ही हरा है, जितना होता है। साफ नीला पानी और भोजन, जिसे बिना खाये आप रह नहीं पाएंगे। कई एशियाई और पश्चिमी देशों के लोगों के पास यदि पासपोर्ट हो और केवल घूमने के लिए थाईलैंड आयें हों, तो उन्हें वीजा की कोई जरूरत नहीं होती है। यहाँ बैंकॉक और फुकेट में थाईलैंड के मुख्य हवाई अड्डे मौजूद हैं, जो घरेलू और अंतरराष्ट्रीय, दोनों तरह के उड़ान भरते हैं। लगभग सभी विमान जो एशिया में उतरते हैं, वे लोग बैंकॉक में भी आते ही हैं। इस कारण यहाँ की सेवा काफी अच्छी है और चुनौती मिलने के कारण इस मार्ग की सभी टिकटें सस्ती होती है। बैंकॉक में दो बड़े हवाई अड्डे हैं। इसमें सुवर्णभूमि हवाई अड्डा मुख्य हवाई अड्डा है, जो बड़े विमानों के लिए है। वहीं डॉन मुएंग अंतरराष्ट्रीय हवाई अड्डा केवल छोटे घरेलू और अंतरराष्ट्रीय विमानों के लिए है। " से लिया गया इस पृष्ठ का आखिरी बदलाव विकिपीडिया सदस्य स ने १०:२६, १५ जनवरी २०१८ पर किया। विकियात्रा सदस्य स्पैरॉबिन के कार्य के अनुसार।
hindi
module.exports={A:{A:{"2":"J C G VB","4":"E B A"},B:{"4":"D Y g H L"},C:{"1":"0 1 3 4 5 n o p q r s x y u t","2":"TB z F I J C G E B A D Y g H L M N O P Q R S T U V W X v Z a b c d e RB QB","194":"f K h i j k l m"},D:{"1":"0 1 3 4 5 9 f K h i j k l m n o p q r s x y u t FB BB UB CB DB","4":"F I J C G E B A D Y g H L M N O P Q R S T U V W X v Z a b c d e"},E:{"1":"B A KB LB","4":"8 F I J C G E EB GB HB IB JB"},F:{"1":"S T U V W X v Z a b c d e f K h i j k l m n o p q r s","2":"6 7 E A D MB NB OB PB SB w","4":"H L M N O P Q R"},G:{"1":"A cB dB","4":"2 8 G AB WB XB YB ZB aB bB"},H:{"2":"eB"},I:{"1":"t","4":"2 z F fB gB hB iB jB kB"},J:{"2":"C","4":"B"},K:{"2":"6 7 B A D w","4":"K"},L:{"1":"9"},M:{"1":"u"},N:{"4":"B A"},O:{"4":"lB"},P:{"1":"I","4":"F"},Q:{"1":"mB"},R:{"2":"nB"}},B:4,C:"Font unicode-range subsetting"};
code
The first and most obvious way to increase happiness in our lives is to have a positive outlook. For the most part, being happy is less about circumstances and more about attitude. What we think about most we become. If positive thinking does not come naturally, and it doesn’t for most of us, this will take some effort. The good news is that optimism becomes easier the longer we apply it in our lives. Rather than complaining or ruminating over things that go wrong, we should put our energy into doing whatever we can to make things better; adopt the “this too shall pass” and the “everything happens for a reason” attitude. It may sound idealized, but trying to find the silver lining in everything that happens really does work. Another way to increase happiness is through self-belief. Get to know yourself; and when you do, always stay true to yourself. It is wise to take what others say into consideration, but don’t be an approval seeker. What other people think of you does not matter. There is no right or wrong way to be as long as no one else gets hurt. Focus less on impressing others and more on trying to be authentically you. It has been said that “The difference between the images you have had for your life and the reality of your life is the amount of unhappiness in your life.” Accept and celebrate the reality you are living. If you don’t like that reality and it is possible to change it, change it. Just don’t hold yourself to an unattainable image. If you do that you will never be happy. Be the one in charge of your life. Don’t allow others to dictate the standards for you to live by. When we are in charge of our lives we gain great satisfaction and happiness from the things we do. Take responsibility for your life. This is different than taking charge of it. Those who take responsibility for their lives do not play the blame game. They don’t make the problems in their lives the fault of others. They don’t make excuses or blame others for their failures. 
They just accept what is and are sure to do things different or better the next time. Taking responsibility for ourselves and our lives gives us a feeling of empowerment. When we are empowered we are happy. Another way to achieve happiness is to figure out what we are looking for, what we truly want for ourselves. It is about setting goals and pursuing them. Research shows that the achievement of goals is not what matters; it is the pursuit of them and the focus toward them that increases one’s sense of well-being. Identify your personal strengths and use them to their fullest. Each of us has a unique set of personal resources. We each possess talents and skills. We should use these gifts as tools for obtaining personal achievements. We often see people with disabilities doing this. Someone may be wheelchair bound and still be a champion athlete. Someone else may be blind, yet be a phenomenal musician. Focusing on success by utilizing our strengths and talents is another great way to achieve happiness. Finding opportunities to give of ourselves is a very important way to bring authentic happiness to our lives. When we engage or volunteer in causes or organizations that we are passionate about or believe in; religious organizations, community or civic minded causes, charitable causes, or social clubs, we gain great fulfillment. Endeavors that allow us to unselfishly give of ourselves to others bring tremendous meaning, and therefore happiness, into our lives. The only moment that we have any control over is the present one. Regretting the past and worrying about tomorrow only distracts us from the happiness that exists right now. The past already happened; it is only a memory that we can’t change. What we can do is extract the lessons from the things that have happened; we can learn from hindsight. And, just as living in the past keeps us from living a happy life, so does worrying about the future. Events we fear will happen may never happen. 
If or when they do they probably won’t happen the way we imagined they would. Happiness doesn’t exist in the past or the future; it exists in the now. Living in the present moment is the only way to be happy. We all have fears—fears of what might or might not happen, fears of failure, fears of being judged by others. These fears hold us back from fulfilling our dreams, starting a new business, changing careers, embarking on a new relationship or ending one. Our fears keep us stuck in places we don’t want to be and with people we should move on from. We can’t let our fears become obstacles. We can’t cling to the safe and the familiar just because we are afraid to venture out. It is easy to put things off, to wait for the perfect moment, but when we do that time is wasted; days, months, and years pass us by. We don’t have to take huge leaps; only tiny steps in the right direction. As we let go of our fears we can embrace the happiness we deserve. Pleasurable moments are just that—moments. They are temporary—they will come and go. And they will never be as exciting or intriguing the second, third, or fourth time around. We need to allow ourselves to enjoy the pleasures of life without feeling the need to cling to, capture, or cage the things that bring us pleasure. No one can be happy when they are waiting for the next thing to make them happy. We will never be fulfilled with what is if we are always waiting for what will be. As the popular quote says, “The happiest of people don’t necessarily have the best of everything; they just make the most of everything that comes along their way.” Be someone who practices gratitude. Be someone who expresses appreciation for the simplest of things. Make time each day to reflect on what you have to be thankful for. Look at life from the perspective of what you have rather than what you don’t have. Contentment comes when we count our blessings, not when we focus on what we don’t have. 
Compelling research shows that reflecting back to the enjoyable aspects of our day can significantly boost our feeling of well-being. Our natural tendency may be to focus on all the things that went wrong or frustrated us, but when we do that we leave little room for reflection of the positive things that happened. It’s fine to reflect on ways to correct what went wrong or think about how we can do things better next time, but if we want to be happy we should give equal time to the reflection of the positive outcomes of our day. It has been psychologically shown that time affluence, “the feeling that one has sufficient time to pursue activities that are personally meaningful, to reflect, and to engage in leisure,” is a factor in achieving happiness. We are never happy when we are rushing or under the gun. So it is important that we allow enough time to do whatever we need or want to do; that we under-schedule instead of over-schedule, under commit rather than over commit. We are much happier when we don’t have the weight of the world on our shoulders. To accomplish that, we need to give up trying to control everyone and everything in our lives. We have to let go of the beliefs that we are the only ones who know what is right and that we are the only ones who know how to do things. Engage competent people in your life and then hand off some of your responsibilities. When the challenges in our lives are attainable success is a realistic, predictable outcome. And along with success comes contentment. What this means is that when seeking challenges for ourselves we shouldn’t set the bar unreasonably high. We cannot be happy if we are constantly stressed and overwhelmed. We should always set ourselves up for success, not failure. Joy can be extracted from the most basic things in life; simple pleasures and breathtaking moments. 
As the expression goes, “the best things in life are free.” Happiness comes from quality not quantity, simplicity not complexity, and moderation not excess. When our lives and our surroundings are cluttered with too much stuff it stresses us out. The less we have the freer and happier we will feel. The way we end an experience greatly influences our perception of that experience. If we want to create positive, happy perceptions of all our experiences we should do our best to end everything on a positive note rather than a sour one. We should create closure whenever possible rather than leaving loose ends untied. It’s difficult to be happy when we have nagging thoughts about what we have left undone. When we clear away that unnecessary debris we free our minds, and happiness is the byproduct. When I tell you that conflict brings negativity and unhappiness into our lives, I’m not telling you anything you don’t already know. But being aware of the part we play will help to reduce the amount of conflict we willingly subject ourselves to. When others try to goad us into arguments we need to take a deep breath and think before we speak. Conflict takes two people—we don’t have to be one of them. People often quarrel over trivial, unimportant matters. Learning to listen well, communicate well, and let things roll off our backs will keep us from being sucked into that nonsense. And when conflict does arise, we should always practice forgiveness. And last but not least, probably the easiest ways to keep happiness in our lives are to lighten up, not take ourselves so seriously, and to laugh often. Life is painful enough. We don’t have to be so serious. We don’t have to make things harder for ourselves. We can be deliberate when choosing how we view and react to everyday occurrences. Realize that every moment is exclusive, every moment should be cherished. Once it is gone it is gone. 
Asking ourselves if something problematic will matter in a year from now will help us put things into perspective. So laugh at yourself and laugh at life. There is no better stress reducer or formula for happiness. This entry was posted in Happiness, Positive Thinking and tagged Happiness, Happy, Life, randi fine, Randi G. Fine. Bookmark the permalink.
english
Kirti murder case: mystery solved after two months; after the murder, Khushi kept running a Facebook campaign

Kirti Vyas, finance manager of 'B Blunt', the salon chain of Farhan Akhtar's ex-wife Adhuna Akhtar, had been missing for 50 days. Mumbai: the Mumbai Crime Branch has now solved the mystery. According to the police, Kirti was murdered. In this case the police have arrested two employees who worked with Kirti, Siddhesh Tamhankar and Khushi Sahjwani. They had last dropped Kirti at Grant Road station on 16 March, and there had been no trace of her since. The full story:
- Kirti Vyas was the finance manager at B Blunt. Siddhesh Tamhankar worked in accounts under Kirti, and Kirti was not happy with his work.
- She told Siddhesh several times to do his work properly, but nothing changed. Fed up, Kirti handed Siddhesh a termination letter.
- 16 March was Siddhesh's last day at the office. That day he asked Kirti to meet him. Khushi, who worked with Siddhesh in accounts, was with him at the time.
- From there, Siddhesh and Khushi got Kirti into the car, saying they would drop her at the office. The three then set off in Khushi's EcoSport car.
- On the way, Siddhesh strangled Kirti. The body was put in the boot of the car, and Khushi went home, changed her clothes and came back to the office.
- The body stayed in the car all day. In the evening, Siddhesh and Khushi left the office with the car and dumped the body somewhere.
- After the murder, they had the car washed. Meanwhile, Khushi ran a social-media campaign about Kirti's disappearance and kept visiting the police station so that no one would suspect her.
- The police, meanwhile, were searching for Kirti. CCTV footage then showed Kirti getting into Khushi's car; this raised suspicion about Khushi and questioning began.
- Under questioning she lied that they had only driven Kirti to drop her at the office. The car was then sent for forensic examination. Blood was found on the car's carpet, and DNA testing matched it to Kirti's DNA. Both were arrested on Saturday.
hindi
/*
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
 *
 * Copyright (c) 2010-2015 Oracle and/or its affiliates. All rights reserved.
 *
 * The contents of this file are subject to the terms of either the GNU
 * General Public License Version 2 only ("GPL") or the Common Development
 * and Distribution License("CDDL") (collectively, the "License"). You
 * may not use this file except in compliance with the License. You can
 * obtain a copy of the License at
 * https://glassfish.dev.java.net/public/CDDL+GPL_1_1.html
 * or packager/legal/LICENSE.txt. See the License for the specific
 * language governing permissions and limitations under the License.
 *
 * When distributing the software, include this License Header Notice in each
 * file and include the License file at packager/legal/LICENSE.txt.
 *
 * GPL Classpath Exception:
 * Oracle designates this particular file as subject to the "Classpath"
 * exception as provided by Oracle in the GPL Version 2 section of the License
 * file that accompanied this code.
 *
 * Modifications:
 * If applicable, add the following below the License Header, with the fields
 * enclosed by brackets [] replaced by your own identifying information:
 * "Portions Copyright [year] [name of copyright owner]"
 *
 * Contributor(s):
 * If you wish your version of this file to be governed by only the CDDL or
 * only the GPL Version 2, indicate your decision by adding "[Contributor]
 * elects to include this software in this distribution under the [CDDL or GPL
 * Version 2] license." If you don't indicate a single choice of license, a
 * recipient has the option to distribute your version of this file under
 * either the CDDL, the GPL Version 2 or to extend the choice of license to
 * its licensees as provided above. However, if you add GPL Version 2 code
 * and therefore, elected the GPL Version 2 license, then the option applies
 * only if the new code is made subject to such option by the copyright
 * holder.
 */
package org.glassfish.grizzly.samples.simpleauth;

import org.glassfish.grizzly.Connection;
import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;

import java.io.IOException;
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.ConcurrentMap;

import org.glassfish.grizzly.utils.DataStructures;

/**
 * Client authentication filter, which intercepts client<->server communication
 * and checks whether the client connection has been authenticated. If not, the
 * filter suspends the current message write and initiates authentication. Once
 * authentication is done, the filter resumes all the suspended writes. If the
 * connection is authenticated, the filter adds an "auth-id: <connection-id>"
 * header to the outgoing message.
 *
 * @author Alexey Stashok
 */
public class ClientAuthFilter extends BaseFilter {
    // Authentication packet (authentication request). The packet is the same for all connections.
    private static final MultiLinePacket authPacket =
            MultiLinePacket.create("authentication-request");

    // Map of authenticated connections
    private final ConcurrentMap<Connection, ConnectionAuthInfo> authenticatedConnections =
            DataStructures.<Connection, ConnectionAuthInfo>getConcurrentMap();

    /**
     * The method is called once we have received a {@link MultiLinePacket}.
     * The filter checks if the incoming message is the server authentication
     * response. If yes, we assume client authentication is completed, store the
     * client id (assigned by the server), and resume all the pending writes. If
     * the client was authenticated before, we check that the
     * "auth-id: <connection-id>" header is equal to the client id stored on the
     * client. If it is, we pass control to the next filter; if not, we throw an
     * Exception.
     *
     * @param ctx Request processing context
     *
     * @return {@link NextAction}
     * @throws IOException
     */
    @Override
    public NextAction handleRead(FilterChainContext ctx) throws IOException {
        // Get the connection
        final Connection connection = ctx.getConnection();
        // Get the packet being processed
        final MultiLinePacket packet = (MultiLinePacket) ctx.getMessage();

        final String command = packet.getLines().get(0);

        // Check if the packet is an authentication response
        if (command.startsWith("authentication-response")) {
            // if yes - retrieve the id assigned by the server
            final String id = getId(packet.getLines().get(1));

            synchronized (connection) {
                // store the id in the map
                ConnectionAuthInfo info = authenticatedConnections.get(connection);
                info.id = id;

                // resume pending writes
                for (FilterChainContext pendedContext : info.pendingMessages) {
                    pendedContext.resume();
                }
                info.pendingMessages = null;
            }

            // if it's an authentication response - don't pass processing to the next filter in the chain.
            return ctx.getStopAction();
        } else { // if it's some custom message
            // Get the id line
            final String idLine = packet.getLines().get(1);

            // Check the client id
            if (checkAuth(connection, idLine)) {
                // if the id corresponds to what the client has -
                // remove the authentication header
                packet.getLines().remove(1);

                // Pass to the next filter
                return ctx.getInvokeAction();
            } else {
                // if authentication failed - throw an Exception.
                throw new IllegalStateException("Client is not authenticated!");
            }
        }
    }

    /**
     * The method is called each time the client sends a message to the server.
     * First of all the filter checks whether this connection has been
     * authenticated. If yes, it adds an "auth-id: <connection-id>" header to
     * the message and passes it to the next filter in the chain. If it turns
     * out the client wasn't authenticated yet, the filter initiates
     * authentication (only once, for the very first message), suspends the
     * current write, and adds the suspended context to a queue so it can be
     * resumed once authentication completes.
     *
     * @param ctx Response processing context
     *
     * @return {@link NextAction}
     * @throws IOException
     */
    @Override
    public NextAction handleWrite(final FilterChainContext ctx) throws IOException {
        // Get the connection
        final Connection connection = ctx.getConnection();
        // Get the packet being sent
        final MultiLinePacket packet = (MultiLinePacket) ctx.getMessage();

        // Get the connection authentication information
        ConnectionAuthInfo authInfo = authenticatedConnections.get(connection);

        if (authInfo == null) {
            // connection is not authenticated
            authInfo = new ConnectionAuthInfo();
            final ConnectionAuthInfo existingInfo =
                    authenticatedConnections.putIfAbsent(connection, authInfo);

            if (existingInfo == null) {
                // it's the first message for this client - start the
                // authentication process by sending the authentication packet
                ctx.write(authPacket);
            } else {
                // authentication has already been started.
                authInfo = existingInfo;
            }
        }

        if (authInfo.pendingMessages != null) {
            // it might be a sign that authentication has been completed on
            // another thread - synchronize and check one more time
            synchronized (connection) {
                if (authInfo.pendingMessages != null) {
                    if (authInfo.id == null) {
                        // Authentication has been started by another thread, but
                        // is still in progress - add the suspended write context
                        // to the queue
                        ctx.suspend();
                        authInfo.pendingMessages.add(ctx);

                        return ctx.getSuspendAction();
                    }
                }
            }
        }

        System.out.println("packet: " + packet);

        // Authentication has been completed - add the "auth-id" header and pass
        // the message to the next filter in the chain.
        packet.getLines().add(1, "auth-id: " + authInfo.id);

        return ctx.getInvokeAction();
    }

    /**
     * The method is called when a connection gets closed.
     * We remove the connection entry from the authenticated connections map.
     *
     * @param ctx Request processing context
     *
     * @return {@link NextAction}
     * @throws IOException
     */
    @Override
    public NextAction handleClose(FilterChainContext ctx) throws IOException {
        authenticatedConnections.remove(ctx.getConnection());
        return ctx.getInvokeAction();
    }

    /**
     * Checks whether the authentication header sent in the message corresponds
     * to the value stored in the client authentication map.
     *
     * @param connection {@link Connection}
     * @param idLine authentication header string.
     *
     * @return <tt>true</tt>, if authentication passed, or <tt>false</tt> otherwise.
     */
    private boolean checkAuth(Connection connection, String idLine) {
        // Get the connection id from the client map
        final ConnectionAuthInfo registeredId = authenticatedConnections.get(connection);
        if (registeredId == null || registeredId.id == null) {
            return false;
        }

        if (idLine.startsWith("auth-id:")) {
            // extract the client id from the authentication header
            String id = getId(idLine);

            // check whether the extracted id equals what the client has in its map
            return registeredId.id.equals(id);
        } else {
            return false;
        }
    }

    /**
     * Retrieves the connection id from a packet header.
     *
     * @param idLine header, which looks like "auth-id: <connection-id>".
     * @return connection id
     */
    private String getId(String idLine) {
        return idLine.split(":")[1].trim();
    }

    /**
     * Single connection authentication info.
     */
    public static class ConnectionAuthInfo {
        // Connection id
        public volatile String id;
        // Queue of the pending writes
        public volatile Queue<FilterChainContext> pendingMessages;

        public ConnectionAuthInfo() {
            pendingMessages = new LinkedList<FilterChainContext>();
        }
    }
}
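The javadoc above describes the "auth-id: <connection-id>" header convention that `getId` and `checkAuth` rely on. A minimal standalone sketch of just that header handling follows; `AuthHeaderDemo` and `matches` are illustrative names for this sketch, not part of the Grizzly sample:

```java
// Minimal sketch of the "auth-id: <connection-id>" header handling used by the
// filter above. AuthHeaderDemo and matches() are hypothetical names.
public class AuthHeaderDemo {

    // Mirrors ClientAuthFilter.getId: take the text after the first ':' and trim it.
    static String getId(String idLine) {
        return idLine.split(":")[1].trim();
    }

    // Mirrors the check in ClientAuthFilter.checkAuth: the line must carry the
    // "auth-id:" prefix and the extracted id must equal the stored one.
    static boolean matches(String idLine, String expectedId) {
        return idLine.startsWith("auth-id:") && expectedId.equals(getId(idLine));
    }

    public static void main(String[] args) {
        String header = "auth-id: 42";
        System.out.println(getId(header));              // 42
        System.out.println(matches(header, "42"));      // true
        System.out.println(matches("other: 42", "42")); // false
    }
}
```

Note that `getId` would throw `ArrayIndexOutOfBoundsException` on a line with no `:`; the real filter avoids that only because `checkAuth` tests the `auth-id:` prefix first.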
code
This man is offering free shaving and haircuts on the birth of a daughter. If you want to do something for society, you don't have to be wealthy; you can contribute within your means. Ashok Kumar, a barber in the Beed district of Maharashtra, is setting just such an example: a unique initiative to improve the poor sex ratio. Ashok is himself the father of a baby girl. According to a Times of India report, Ashok, a barber by profession, is the father of a one-year-old girl. He runs a gents' salon in Kumbherpal village in the Beed district of Marathwada. The area is notorious for female foeticide and illegal sex-determination tests. Outside his salon, Ashok has hung a board describing this 'scheme', which has been running since 1 January 2017. "A daughter is lucky for me": Ashok, who has four sisters, said, "Our culture is such that we worship the goddess but do not want a girl child. This mindset has to change, and it will change when we act as a society. My daughter's birth proved lucky for me, and within a few months I opened my own shop." He added, "People don't want daughters, and one main reason is the fear of increased expenses. By giving free shaving and haircuts for six months, I want to bear some of that burden, so that people can have daughters without worry." Word of Ashok's unique 'scheme' is spreading through the district, and people are coming from far away. He has already given this free service to 15 customers.
hindi
import axios from '~/lib/utils/axios_utils';

class ServiceDeskService {
  constructor(endpoint) {
    this.endpoint = endpoint;
  }

  fetchIncomingEmail() {
    return axios.get(this.endpoint);
  }

  toggleServiceDesk(enable) {
    return axios.put(this.endpoint, { service_desk_enabled: enable });
  }

  updateTemplate({ selectedTemplate, outgoingName, projectKey = '' }, isEnabled) {
    const body = {
      issue_template_key: selectedTemplate,
      outgoing_name: outgoingName,
      project_key: projectKey,
      service_desk_enabled: isEnabled,
    };
    return axios.put(this.endpoint, body);
  }
}

export default ServiceDeskService;
code